Posted by Sandeep Chanda on April 17, 2014

There is some good news for Visual Studio enthusiasts looking to develop web applications using Node.js. The Visual Studio team (with help from community contributors) recently released support for Node.js in Visual Studio 2013. While this is still in beta, and you may face issues while developing apps, it is definitely worthwhile to explore the features now and provide feedback to the team.

You can download the Node.js Tools for Visual Studio (NTVS) from CodePlex here. Follow the steps in the installation wizard to set up NTVS.

Once you have successfully installed the tools, you will see a bunch of templates showing up in the New Project Dialog under the Node.js section below JavaScript.

The predefined templates help you create a new Node.js Web Application, a new Azure Website built using Node.js, and a Worker Role that uses Node.js for creating long-running processes.

Note that Node.js is already supported by Azure Mobile Services and you can directly run Node.js scripts from Azure Mobile Services by configuring them in the Azure portal.

You can also create a project from existing Node.js code (which is likely to be the case if you were already developing on Node.js).

Select the "From Existing Node.js code" template. The dialog will launch a wizard to let you select the folder where your Node.js project is placed. It will list the Node.js start-up file if it finds one, as in the screenshot shown below:

(It is a good idea to select "Exclude node modules", since the modules folder is unlikely to contain your start-up app.)

You are all set, but when you build your project, it is most likely to fail, since the node modules will not be present or correctly referenced. Right-click on the project, use the "Open Command Prompt Here" command to launch the command prompt, and run "npm install" to install the node modules.

The NTVS tools also provide a nice option to manage and install global node modules. In your solution structure, expand npm and click Manage Node Modules from the Global Modules context menu. The module manager dialog will be launched where you can search and install global node modules.

You are now all set to start developing your Node.js project from within Visual Studio 2013. Since this is built on the underlying V8 JavaScript engine, you can leverage the power of debugging JavaScript applications seamlessly. You also benefit from the usual Visual Studio IntelliSense features for JavaScript. In future posts, we will explore the debugging and deployment aspects in more detail, and look at creating a worker role with Node.


Posted by Sandeep Chanda on April 7, 2014

Overview

The world of HTML5 hybrid app development frameworks just got hotter with the beta release of Ionic Actinium. The Ionic Framework, dubbed "Bootstrap for Native", comes from Drifty, the makers of popular tools such as Jetstrap and Codiqa.

HTML5 is the platform of choice for desktop web applications and mobile websites and is gaining immense popularity for building hybrid and/or native self-contained mobile apps. Using HTML5 helps reduce the steep learning curve involved in developing native apps and in turn reduces time to market.

The Ionic Framework uses HTML5, CSS, and JavaScript, packaged with the Cordova tools to create platform-specific iOS and Android apps. A lot of its core features are also built using AngularJS, and using AngularJS is highly recommended for building apps with Ionic.

In this post, we will explore how you can set up Ionic on a Windows machine and then start building Android apps using the framework.

Prerequisites

Following are the prerequisites:

  1. You must have the JDK and the Android SDK installed. Typically you want JDK 7, though it worked with JDK 6 on my machine. On the Android side, you need the latest versions, currently 22.6.2 of the SDK tools and 19.0.1 of the platform-tools. You must also have a device configuration (AVD) for the latest API level, as illustrated in the figure below:

  2. Download Apache Ant from here, and note the path of the folder extracted from the zip. It should be something like C:\apache-ant-1.9.2-bin.
  3. Install Node.js from here.
  4. Configure the PATH variable in your system environment variables to include the paths for the JDK, the Android SDK tools and platform-tools, and the Ant bin folder that you extracted in step 2. You should create an individual variable for each path and then use the %<Variable_name>% notation to reference them in the PATH variable.
    1. ANDROID_HOME : C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\tools
    2. ANDROID_PLATFORM: C:\Users\sandeep.chanda\AppData\Local\Android\android-sdk\platform-tools
    3. JAVA_HOME: C:\Program Files (x86)\Java\jdk1.6.0_39
    4. ANTS_HOME: C:\Software\apache-ant-1.9.2-bin\apache-ant-1.9.2\bin
    5. PATH: %ANDROID_HOME%; %JAVA_HOME%... etc.
    The following figure illustrates:

  5. Download Console 2, and extract the package into a folder.

You are now all set to configure Ionic and start building the apps.

Configure Ionic

Open an instance of Console 2 and execute the following commands:

  1. First, we will install Cordova if it's not already installed. Run the following command:
    • npm install -g cordova
  2. There is a command-line utility for Ionic that builds and packages Ionic apps using Gulp. Run the following command to install the utility:
    • npm install -g gulp ionic

That's all. You are now all set to run Ionic projects.

Create an Ionic Project

First you need to create an Ionic project to get a template for building Ionic apps. Run the following command to create one:

  • ionic start [Project Name]

This will download the necessary packages and will create the project folder structure as shown in the figure below:

This comes with a bootstrap template for building Ionic apps. You can directly build and run this in the Android emulator, and you will get a basic template with navigation elements in place. To create the Android APK and deploy it in the emulator, first change into the project directory in Console 2:

  • cd [Project Name]

Next, configure the Android platform using the following command:

  • ionic platform android

You can now build and run the app in the emulator using the following commands:

  • ionic build android
  • ionic emulate android

This will build and launch the app in the emulator as shown below:

Start Building Ionic Apps

In the Ionic project folder structure, you will notice a www folder. This is where all your HTML5 pages will go. There are additional elements that we will explore in a future session, but for now navigate into the www folder and open the Index.html file using an editor. You will find the different elements that form a basic Ionic landing page. The header contains references to Ionic CSS, AngularJS, Cordova, and your application-specific JS files with controller, app, and service logic.

The body element consists of the navigation structure and a place for rendering the views that are defined in the templates folder:

Now you can start creating your own views in the templates folder and build your app!


Posted by Sandeep Chanda on March 25, 2014

I always look forward to attending retrospective meetings in an agile setup. It is the time to reflect upon how the team fared and make amendments for the future. There is always a ton of learning, and every project springs some unique surprises during a retrospective session.

Team Foundation Server (TFS) Analytics can aid a retrospective discussion and, more interestingly, be used as a form of gamification to bring in a bit of competitiveness amongst peers. The most interesting of the analytics you can bring into the discussion is the Code Churn report. It helps gauge the lines of code written by each member of the team and illustrates the churn generated. It can reflect how much refactoring has gone in, by comparing the deleted and added LOC. It may not be directly useful for project budgeting and forecasting, but it definitely gives the team a sense of achievement and provides non-tangible benefits in the form of motivation. It is very easy to run the analytics reports provided by TFS. You do, however, need to make sure that you have the appropriate permissions to run them.

Open Microsoft Office Excel 2013. You will see options to connect to different sources under the Data tab. Select the option to create a connection to SQL Server Analysis Services, as illustrated in the following figure.

This will open the data connection wizard. Type the TFS server name in the first step of the wizard and click Next. Step 2 will bring up the list of available cubes and perspectives, as shown in the figure below:

Notice that Team System is the cube holding all possible dimensions and facts that you can build analytics on; specific perspectives such as Code Churn and Code Coverage, however, are pre-created.

Select the Code Churn perspective and click Finish. You will be prompted to choose the format in which you want to import the data. Select the PowerPivot option as shown:

From the PivotTable fields, choose the Code Churn Attributes as Column Values.

Scroll down the field list and select "Checked In By" under the Version Control Changeset category. This will get added as a Row Value, and you will see a report generated as shown in the following figure.

This imported data shows the Code Churn Count, Lines Added, Lines Deleted, and the total impact in the form of Total Lines of Code. You can further dissect the information by adding Year/Month attributes to determine the months and years with the highest and lowest code churn. In addition, by comparing against estimated hours of effort, you can use this information to drive sizing discussions.

TFS pre-generates additional perspectives such as Builds, Code Coverage, Tests, and Work Items. Each of these perspectives is useful for an effective retrospective, discussing build quality, work progress, and tested code paths for sunny- and rainy-day scenarios.


Posted by Sandeep Chanda on March 17, 2014

In one of the previous blogs, I discussed building background services that can scale by tenant. There could, however, be scenarios where you want the system to scale by load (e.g., the number of messages to be processed from a queue). Often in such scenarios you want control over the way the load is generated to avoid redundancy, but you want the processing to happen as soon as a message arrives. You would also want maximum utilization of the CPU to minimize the costs of scaling. An effective worker role design is key to the efficiency of such background services. The following figure illustrates one such design using the OnStart and Run methods of the RoleEntryPoint class.

You can use the OnStart method to create instances of scheduled services, using utilities such as the Quartz.NET cron scheduler, that run at a pre-defined interval and populate designated queues with messages to be processed. Typically, you would want only one of the configured instances to be able to write into the queue, to avoid duplication of messages for processing. The following code shows a typical cron schedule. The service configured will have the implementation of the leased method (which we discussed in the previous blog post) that will schedule the messages in the queue.

public override bool OnStart()
    {
      NetworkSettings.SetServicePointManagerDefaultSetting();
 
      UnityContainer = UnityHelper.ConfigureUnity();
      QueueProvider = UnityContainer.Resolve<IQueueProvider>();
      LogService = UnityContainer.Resolve<ILogService>();
      ScheduleServices();
      return base.OnStart();
    }

The code inside the ScheduleServices method could look like this:

DateTimeOffset runTime = DateBuilder.EvenMinuteDate(DateTime.Now);
        JobScheduler scheduler = new JobScheduler();
        scheduler.Schedule<SiteConfigurationService>(runTime,
          "SiteConfigurationSchedule",
          "SiteConfigurationGroup",
          RoleEnvironment.GetConfigurationSettingValue("SiteConfigurationCronSchedule"));
        scheduler.Schedule<AggregationService>(runTime,
          "AggregationSchedule",
          "AggregationGroup",
          RoleEnvironment.GetConfigurationSettingValue("AggregationCronSchedule"));

These are examples of different types of cron services that Quartz.NET runs based on the defined schedule.
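The JobScheduler helper itself isn't shown in this post. As a rough sketch (an assumption, not the actual implementation), a thin wrapper over Quartz.NET might look like the following; note that the scheduled services would then need to implement Quartz.NET's IJob interface:

using System;
using Quartz;
using Quartz.Impl;

public class JobScheduler
{
    private readonly IScheduler scheduler;

    public JobScheduler()
    {
        // Create and start the default Quartz.NET scheduler.
        scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();
    }

    public void Schedule<T>(DateTimeOffset runTime, string name, string group, string cronExpression)
        where T : IJob
    {
        // Define the job and a cron-based trigger that starts at runTime.
        IJobDetail job = JobBuilder.Create<T>().WithIdentity(name, group).Build();
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity(name + "Trigger", group)
            .StartAt(runTime)
            .WithCronSchedule(cronExpression)
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}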

The following code illustrates the implementation inside the Run method of a worker role that uses the Task Parallel Library to process multiple queues at the same time.

public override void Run()
    {
      while (true)
      {
        try
        {
          Parallel.Invoke(
            
            () =>
            {
              ProcessMessages<ISiteConfigurationManager, MaintenanceScheduleItem>(Constants.QueueNames.SiteConfigurationQueue, (m, e) => m.CreateSiteConfiguration(e));
            },
            () =>
            {
              var hasMessages = ProcessMessages<IAggregationManager, QueueMessage>(Constants.QueueNames.PacketDataQueue, null, (m, e) => m.ComputeSiteMetrics(e));
              if (!hasMessages)
                Thread.Sleep(200);
            });
        }
        catch (Exception ex)
        {
          LogService.Error(ex);
        }
      }
    }

This can scale to as many instances as you want, depending on the load on the queue and the expected throughput. The parallel processing will ensure that the CPU is optimally utilized in the worker role, and the Run method keeps the instance running continuously to process items from the queue. You can also use the auto-scale configuration to automatically scale the instances based on load.
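The ProcessMessages helper referenced in Run isn't shown either. A hedged sketch of one possible shape, assuming the QueueProvider and UnityContainer members resolved in OnStart, and JSON message payloads deserialized with Newtonsoft.Json:

private bool ProcessMessages<TManager, TMessage>(
    string queueName, Action<TManager, TMessage> process)
{
    // Read the next message with a 30-second visibility timeout.
    var queueMessage = QueueProvider.ReadMessage(queueName, TimeSpan.FromSeconds(30));
    if (queueMessage == null)
        return false; // nothing to process right now

    // Resolve the business manager and hand it the deserialized payload.
    var manager = UnityContainer.Resolve<TManager>();
    var payload = JsonConvert.DeserializeObject<TMessage>(queueMessage.Message);
    process(manager, payload);

    // Remove the message only after successful processing.
    QueueProvider.DeleteMessage(queueName, queueMessage.MessageId, queueMessage.PopReciept);
    return true;
}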

Known Issues

There is one known issue you must be aware of in this design, regarding data writes to Azure Table storage. Since multiple instances will be writing to the table, if you are running updates, there is a chance that the data was modified between the time you picked the record and the time you updated it back after processing. Azure, by default, rejects such operations. You can, however, force an update by setting the table entity's ETag property to "*". The following code illustrates a generic table entity save, with forced updates.

public void Save<T>(T entity, bool isUpdate = false) where T : ITableEntity, new()
    {
      TableName = typeof(T).Name;
      if (isUpdate)
        entity.ETag = "*";
      operations.Value.Add(isUpdate ? TableOperation.Replace(entity) : TableOperation.Insert(entity));
    }

A word of caution, though: this may not be the design you want to pursue if the system you are building is completely intolerant of any degree of data corruption, since a forced update may result in exactly that behaviour.


Posted by Sandeep Chanda on March 7, 2014

Windows Azure Mobile Services now supports .NET as a backend, based on the Windows Azure updates released on February 20, 2014. Although this is still in preview, it essentially means that you can now deploy your ASP.NET Web API services as Mobile Services in Azure and leverage features such as Push Notifications and Mobile Authentication. Let us explore how you can design your ASP.NET Web API controllers to be deployed as a Mobile Service.

The first thing you need to do is install the Windows Azure Mobile Services Backend NuGet packages. Note that you will not find the packages by browsing NuGet, since they are in pre-release mode. Launch the Package Manager Console, and type the following commands to install the required packages in your solution.

Install-Package WindowsAzure.MobileServices.Backend -Pre
Install-Package WindowsAzure.MobileServices.Backend.Tables -Pre
Install-Package WindowsAzure.MobileServices.Backend.Entity -Pre

In addition, you will need to use NuGet to upgrade the Web API framework to the pre-release version of ASP.NET Web API using the command:

Install-Package Microsoft.AspNet.WebApi -Pre

The next step is to configure a starting point for kicking off the Mobile Service configuration process. To do this, modify the Register method in the WebApiConfig class as follows:

public static void Register()
        {
            // Web API configuration and services
            // Use this class to set configuration options for your mobile service
            ConfigOptions options = new ConfigOptions();
            // Use this class to set WebAPI configuration options
            HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));
           
        }

Call the Register method in your application start:

public class WebApiApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            WebApiConfig.Register();
        }
    } 

The mobile service configuration uses a number of settings in the Web.config file that are overridden by the values specified in the portal:

<add key="MS_MobileServiceName" value="[your service name]" />
    <add key="MS_MasterKey" value="Overridden by portal settings" />
    <add key="MS_ApplicationKey" value="Overridden by portal settings" />
    <add key="MS_MicrosoftClientID" value="Overridden by portal settings" />
    <add key="MS_MicrosoftClientSecret" value="Overridden by portal settings" />
    <add key="MS_FacebookAppID" value="Overridden by portal settings" />
    <add key="MS_FacebookAppSecret" value="Overridden by portal settings" />
    <add key="MS_GoogleClientID" value="Overridden by portal settings" />
    <add key="MS_GoogleClientSecret" value="Overridden by portal settings" />
    <add key="MS_TwitterConsumerKey" value="Overridden by portal settings" />
    <add key="MS_TwitterConsumerSecret" value="Overridden by portal settings" />

That is it. You can start using your Web API controllers as a mobile service. In addition, you can also use the TableController class that comes with Azure Mobile Services for remote data scenarios. You can derive your Web API controller classes from the generic TableController implementation:

public class StockController : TableController<Stock>

The model used with the TableController generic class derives from EntityData, exposed by the Microsoft.WindowsAzure.Mobile.Service namespace, which determines how the properties of a given table data model are serialized when communicating with clients, with EF as the backing store.

using Microsoft.WindowsAzure.Mobile.Service;
    
    public partial class Stock : EntityData
    {
        // EntityData already defines a string Id (along with the system
        // properties), so the model adds only its own columns.
        public string StockName { get; set; }
        public decimal UnitPrice { get; set; }
    }

In your controller class, override the controller initialization to wire up the EF context and the domain manager:

 protected override void Initialize(HttpControllerContext controllerContext)
        {
            base.Initialize(controllerContext);
            var context = new StockMarketEntities(Services.Settings.Name.Replace('-', '_'));
            DomainManager = new EntityDomainManager<Stock>(context, Request, Services);
        }

You can now use the TableController methods inside your CRUD operations:

 // GET api/Stock
        public IQueryable<Stock> GetStocks()
        {
            return Query();
        }
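The remaining CRUD operations delegate to the TableController base methods in the same way. A sketch based on the standard Azure Mobile Services scaffolding (the method names follow the template convention):

// GET api/Stock/{id}
public SingleResult<Stock> GetStock(string id)
{
    return Lookup(id);
}

// PATCH api/Stock/{id}
public Task<Stock> PatchStock(string id, Delta<Stock> patch)
{
    return UpdateAsync(id, patch);
}

// POST api/Stock
public async Task<IHttpActionResult> PostStock(Stock item)
{
    Stock current = await InsertAsync(item);
    return CreatedAtRoute("Tables", new { id = current.Id }, current);
}

// DELETE api/Stock/{id}
public Task DeleteStock(string id)
{
    return DeleteAsync(id);
}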

To deploy the service, first download the publish profile from your Windows Azure Mobile Services dashboard page. In Visual Studio, click Publish in the project context menu. When prompted, select the publish profile file you just downloaded. In addition, under the File Publish options, select the option to remove additional files at the destination.

Once publishing succeeds, you will be redirected to your Mobile Services landing page, and you can start using the service.

Another important element to note: if you are using EF with your Web API controllers, you should select the Async option while creating the controller methods. If you are generating the controller from the EF model, you can specify this in the wizard used to create the controller class (this is only available with EF 6).

You are all set to use ASP.NET Web API as a Windows Azure Mobile Service!


Posted by Sandeep Chanda on February 25, 2014

Scott Guthrie announced in his blog last week a host of new additions to Windows Azure. One of the significant additions is the improvement in Active Directory services, in particular support for quite a few popular SaaS-based applications that can now leverage Active Directory. In this post we will explore adding GitHub as a SaaS application for Single Sign-On (SSO) with AD users. In a future post we will explore the Active Directory Premium features (still awaiting confirmation from Microsoft for the preview).

Active Directory SSO with SaaS Applications

Active Directory now supports SSO with more than 600 SaaS-based applications, including GitHub. So far, AD integration with GitHub was limited to configuring LDAP for GitHub Enterprise, or to using third-party solutions like OneLogin. This, however, becomes very easy with Windows Azure Active Directory supporting GitHub as one of the SaaS applications. To configure it, log in to your Windows Azure account and navigate to the Active Directory menu. Click the directory where you want to create the GitHub SaaS application. Under the Applications tab inside the directory, click the Add Application link. You will be shown a dialog with all supported SaaS applications. Select GitHub from the Developer Services list.

Once you have added the application, you will be presented with the options to configure SSO with GitHub.

There are two things you need to do. First, configure the SSO service: you can either choose the option to use an existing SSO, or specify that you will provide the credentials for GitHub accounts.

Next, you need to assign users access to GitHub. When you click the Assign users button, you will be redirected to the Application users page, where you can add or edit users and assign them to the application.

On this page, you will see the AD users. Click Assign to give the selected user permission to access GitHub. The Assign button will prompt you to enter the relevant GitHub credentials if you chose to enter credentials in the first step.

That's it! Your AD users are now set up to access GitHub. Navigate to the http://myapps.microsoft.com site and log in with your Azure credentials. You will see the list of applications that have been integrated with Windows Azure Active Directory and that you have access to.

Clicking GitHub under the applications tab will take you to GitHub, automatically logged in with the mapped credentials.

Note: You must have the Access Panel Extension browser add-on installed to be able to navigate to the SaaS applications. You will be prompted if you don't already have it installed.


Posted by Sandeep Chanda on February 18, 2014

Overview

I was fascinated by the idea of developing cross-platform mobile apps from within the comforts of Visual Studio. This led me to explore Mono and the Xamarin suite for almost a year now. What I was missing was the ability to abstract common functionality and presentation behaviour from platform-specific views. While Portable Class Libraries and Plugins helped realize better reusability across platforms, I still needed a way to make the presentation layer isolated and unit testable.

MvvMCross and Ninja Coder

MvvMCross made me realize a better way to develop cross-platform apps using Xamarin, based on the Model-View-ViewModel pattern. Ninja Coder is a nice little utility that shrink-wraps the MvvMCross framework and takes care of referencing the appropriate assemblies in your solution to create projects based on Xamarin and MvvMCross. After installing Ninja Coder, you will see the MvvMCross templates listed under the new project menu; however, a better option is to create your projects using the Tools → Ninja Coder for MvvMCross → Add Projects menu. It lets you select the type of platform projects you want to target in your solution, along with Core, which acts as the reusable library for common features, plugins, and ViewModels.

Note that while you can use the project menu to create projects based on MvvMCross, the recommended option is the one illustrated in the figure above, since it takes care of referencing the Core assemblies in the other platform-specific projects.

The Core project has an App class that acts as the entry point for launching the landing page view. The MvxApplication class provides the RegisterAppStart method, where you can register the landing page ViewModel to launch.

public class App : MvxApplication
    {
        /// <summary>
        /// Initializes this instance.
        /// </summary>
        public override void Initialize()
        {
            this.CreatableTypes()
                .EndingWith("Service")
                .AsInterfaces()
                .RegisterAsLazySingleton();
 
            //// Start the app with the First View Model.
            this.RegisterAppStart<FirstViewModel>();
        }
    } 

You can use the Tools menu to add ViewModels and Views to the solution. The ViewModel will be created in the Core project with one View per target platform created in each relevant project.
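For illustration, a ViewModel added to the Core project might look something like the following minimal sketch (the class and property names are hypothetical; MvvMCross v3 ships MvxViewModel in the Cirrious.MvvmCross.ViewModels namespace):

using Cirrious.MvvmCross.ViewModels;

public class SecondViewModel : MvxViewModel
{
    private string title = "Second View";

    // A simple bindable property that raises change notifications for the views.
    public string Title
    {
        get { return this.title; }
        set { this.title = value; this.RaisePropertyChanged(() => this.Title); }
    }
}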

You can also specify the navigation options for the View.

Note that the View Model Name textbox is case sensitive, and you should always end the name with the ViewModel suffix. Also, don't create a View Model named just "ViewModel": the command properties are auto-generated and will cause name conflicts in the project.

Another issue to note: when you have an Android or iOS project added, the Views are not automatically created. You need to create the relevant files (.axml for Android) under the Resources folder yourself.

The content loader view class within the project can then launch the view you are referring to:

public class SecondView : BaseView
    {
        /// <summary>
        /// Called when [create].
        /// </summary>
        /// <param name="bundle">The bundle.</param>
        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);
            SetContentView(Resource.Layout.SecondView);
        }
    } 

Plugins

The Ninja Coder utility allows you to configure a host of predefined plugins that you can use with your application. You can use the Tools menu to add the plugins you need.

You can also specify the ViewModel that implements the plugin. In our example, I have chosen the accelerometer implementation for the SecondViewModel; the code below fetches the last reading.

/// <summary>
        /// Gets the last reading.
        /// </summary>
        /// <returns>The Last Reading.</returns>
        public MvxAccelerometerReading GetLastReading()
        {
           return this.accelerometer.LastReading;
        } 

Summary

Ninja Coder can really shorten your learning cycle on the best practices for building a unit-testable UI layer with presentation logic isolated from the View implementation for different mobile platforms using Xamarin and MvvMCross.


Posted by Sandeep Chanda on February 11, 2014

Overview

Instrumenting your application code should be one of the key non-functional requirements to consider while building applications in Azure, especially if they are multi-tenant SaaS-based applications. Depending on the size and nature of your application, the following areas of instrumentation should be considered:

  1. Diagnostic Logging
    1. Exception Log
    2. Verbose Log
  2. Custom Performance Metrics
  3. Application Performance Metrics
  4. Service Metering

Diagnostics

Diagnostic logging, in very simple terms, could be logging all exceptions that occur in your application. You can create a very simple LogService implementation using log4net. The service can expose methods like Debug, Error, Fatal, Warn, and Info to address various instrumentation needs. The log4net ILog interface exposes capabilities for each of these, as shown in the code below:

readonly ILog _logger;
 
        static LogService()
        {
            XmlConfigurator.Configure();
        }
 
        public LogService(Type logClass)
        {
            _logger = LogManager.GetLogger(logClass);
        }
 
        /// <summary>
        /// Log a message object with the Fatal level.
        /// </summary>
        /// <param name="errorMessage">The error message.</param>
        public void Fatal(string errorMessage)
        {
            if (_logger.IsFatalEnabled)
                _logger.Fatal(errorMessage);
        } 

The code block above shows an example of a Fatal log.

In addition to logging the exceptions caught in the catch block, it is a good idea to provide support for verbose logging and enable it to capture detailed code flow when needed. To capture a verbose log, you must instrument every method entry, including the parameters passed and the values returned. Important business rules within methods should also be instrumented as deemed necessary.

You can use the Info method to create specific logging entries such as "Method Entry" and "Method Exit", with each entry augmented by the parameters passed and the values returned, respectively.
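As an illustration, a verbose-logged method might look like the following sketch (the method, repository, and parameter names are hypothetical; Info is the LogService wrapper shown above):

public MaintenanceSchedule GetSchedule(int tenantId)
{
    // Method entry, with the parameters passed
    logService.Info(string.Format("Method Entry: GetSchedule(tenantId: {0})", tenantId));

    // Hypothetical repository call representing the method's actual work
    var schedule = repository.GetScheduleForTenant(tenantId);

    // Method exit, with an indication of the value returned
    logService.Info(string.Format("Method Exit: GetSchedule, found: {0}", schedule != null));
    return schedule;
}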

Note that Windows events are captured and logged in the WADLogsTable and WADWindowsEventLogsTable storage tables. They provide additional diagnostic information for issues occurring in the configured roles.

Performance Metrics

Performance metrics are usually twofold. First, you can use the Telemetry SDK to create custom KPIs to be monitored using App Insights. If you do not have App Insights, you can create a simple performance monitor for your modules, using a Stopwatch and capturing the ElapsedMilliseconds for the operations you intend to include in a performance session. The key here, however, is to capture performance data asynchronously. The following class shows a simple PerfLog entity.

public class PerfLog : TableEntity
    {
        private string operationName;
 
        public string OperationName 
        { 
            get 
            {
                return operationName;
            } 
            set 
            {
                operationName = value;
                this.RowKey = string.Format("{0}_{1}", value, this.RowKey);
            } 
        }
 	
        public int TenantId { get; set; }
        public string ParamValuesPassed { get; set; }
        public int OperationExecutionDuration { get; set; }
        public string ResultInfo { get; set; }
        public int RecordsReturned { get; set; }
    } 
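One possible way to populate this entity is to wrap the measured operation in a Stopwatch and persist the entry off the hot path. A sketch (the manager call, tenantId, and tableProvider are hypothetical):

var watch = Stopwatch.StartNew();
var results = manager.ComputeSiteMetrics(message); // the operation being measured
watch.Stop();

// RowKey is assigned before OperationName, because the OperationName setter
// prefixes the operation name onto the RowKey.
var perfLog = new PerfLog
{
    PartitionKey = "SiteMetrics",
    RowKey = Guid.NewGuid().ToString(),
    OperationName = "ComputeSiteMetrics",
    TenantId = tenantId,
    OperationExecutionDuration = (int)watch.ElapsedMilliseconds
};

// Persist asynchronously so instrumentation doesn't block the operation itself.
Task.Run(() => tableProvider.Save(perfLog));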

Second, you can use the performance diagnostics portion of Windows Azure Diagnostics (WAD) to create and use performance counters in your application. The WAD framework is built on the Event Tracing for Windows (ETW) framework, and the performance counter logs are stored in the WADPerformanceCountersTable storage table. It is possible to create a custom diagnostics plan for each role using Visual Studio. The following figure illustrates the performance configuration.
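If you prefer configuring counters in code rather than through the designer, a sketch using the classic DiagnosticMonitor API (Azure SDK 2.x style; the counter choice is illustrative) could go in the role's OnStart:

public override bool OnStart()
{
    // Sample CPU utilization every 30 seconds and transfer the samples
    // to the WADPerformanceCountersTable store every minute.
    var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
    config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
    {
        CounterSpecifier = @"\Processor(_Total)\% Processor Time",
        SampleRate = TimeSpan.FromSeconds(30)
    });
    config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    return base.OnStart();
}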

Service Metering

If you are building a multi-tenant SaaS-based application, service metering is important to gain an understanding of usage and bill customers accordingly. The recently released Cloud Design Patterns guidance from the Patterns & Practices team elaborates on this in its Service Metering Guidance. Please note that there is no default mechanism in Azure to perform metering by tenant; the architecture needs to take care of tenant-based metering. An early example of this was released in the Cloud Ninja Metering Block, and you can explore the source to understand a tenant metering implementation and apply it in your application. The source is available at http://cnmb.codeplex.com/SourceControl/latest.

The Azure Management Portal provides usage details for each resource and is a great way to find out the resource allocation on roles. The following figure illustrates:

Summary

Instrumentation is a multi-pronged approach for applications hosted in Azure, and you need to take size, scalability, multi-tenancy, billing, and other non-functional factors into consideration while planning for instrumentation and logging.


Posted by Sandeep Chanda on January 28, 2014

Overview

The Visual Studio team recently released an application monitoring tool called Application Insights (preview) for Visual Studio Online. If you have an Azure account with a subscription for Visual Studio Online, you can request a limited preview of the Application Insights platform, which allows you to monitor the performance, availability, usage, and diagnostic aspects of not just web applications, but also services, Windows Phone and Windows Store apps, and Azure PaaS-based applications.

Configuring Application Insights (On Premise)

Application Insights primarily consists of the Monitoring Agent, which you need to install on on-premise Windows servers. During installation, you need to specify the Account ID and the Instrumentation Key. The Agent installer provides three options. Use the Application Insights for Visual Studio Online option to surface the monitoring data of your on-premise applications, to be collected and analysed in Visual Studio Online.


Figure 1: Monitoring Agent Setup

Note that IIS Management Tools and Scripts is a prerequisite for installing the monitoring agent.

Configuring Application Insights (Azure PaaS)

For Windows Azure PaaS-based applications, you need to modify the deployment configuration files in your Windows Azure project. A step-by-step guide is provided if you select Add an Application from the Application Insights page in Visual Studio Online. Answer the questions presented by the wizard, as shown in Figure 2, and you will reach the step-by-step instructions.


Figure 2: Wizard Driven Interface to Add a New Application for Monitoring

You will need to download the instrumentation tool and extract the files into the AppInsightsAgent folder inside your web role. You then need to add a Startup Task to launch the instrumentation bootstrapper (the unifiedbootstrap.bat file). Deploy the application, and the metrics will start showing up on the site.

Measuring Availability with Synthetic Monitors

Application Insights allows you to provide the application URL as a ping request that can then monitor performance, availability, and functionality using a tool called the Synthetic Monitor. You can create multiple synthetic monitors, each with either a single URL test or a multi-step test using a Visual Studio Web Test file, as shown in Figure 3.


Figure 3: Multi-step Synthetic Monitor

Analysing Usage with Telemetry SDK

The Telemetry SDK in Application Insights allows you to surface usage patterns of the deployed application. You can download the Telemetry SDK using NuGet and then call the ServerAnalytics.Start method to allow Application Insights to start capturing usage data.

using Microsoft.ApplicationInsights.Telemetry.Services; 
public override bool OnStart() 
{
    ServerAnalytics.Start("<Instrumentation Key>"); 
    return base.OnStart();
}

The above code sample demonstrates enabling analytics on a Worker Role.

You can also use the SDK to log KPIs for important events occurring inside your application. The ServerAnalytics.CurrentRequest.LogEvent method will help you log events and surface them under Feature → Events in the Application Insights site.
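For example, a feature event might be logged as follows (the event name is illustrative):

// Surfaces under Feature → Events in the Application Insights site
ServerAnalytics.CurrentRequest.LogEvent("Order/Submitted");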

Diagnostics

The Application Insights portal also lets you diagnose issues and exceptions as they are recorded by the monitoring agent. You can navigate to the Diagnostics page and see the events as they occur, as illustrated in Figure 4.


Figure 4: Diagnostics using Application Insights

Summary

Application Insights is a very promising and powerful utility for monitoring the performance, functionality, and usage of a wide range of applications and apps, and the journey that started with IntelliTrace finally seems to have unified the application monitoring and diagnostics experience under one roof.


Posted by Sandeep Chanda on January 13, 2014

Overview

I have been doing a lot of work on Azure lately. Most recently, in a high-volume message-processing application that we are building for one of our customers, I quickly ran into a scenario where we needed to understand how to scale long-running services to support multiple tenants.

If you are building a SaaS-based application, multi-tenancy is a key quality attribute you should never lose focus of while the architecture continues to evolve.

You typically address multi-tenancy in the following tiers:

  1. In the database, by partitioning the data for tenants across master tables using a tenant identifier as a reference.
  2. In the services tier, by embedding the tenant identifier in the context (or as a claim for a claims-enabled service).
  3. In the user interface layer, by creating tenant-based landing pages for tenant-specific users. Sometimes in mobile apps or composite desktop applications, two-factor authentication is also used.

Problem

Each of these layers can scale fairly easily; multi-tenancy, however, is often difficult to address in the case of background services. How do you allocate an instance to process data for a particular tenant while allowing multiple tenants to be processed simultaneously?

Solution

The issue can be handled by using a combination of Azure Queues and the Blob Leasing mechanism with the Worker Role.

The Background Service Contract

Create an abstract base class defining the contract for operations that can run under lease (meaning no instance can enter the method other than the one that acquires the lease first), and the contract for operations that can run without a lease.

/// <summary>
    /// Base class for background services, defining operations that run under a
    /// lease and operations that can run without one.
    /// </summary>
    public abstract class BackgroundServiceBase
    {
        /// <summary>
        /// Gets or sets the unity container.
        /// </summary>
        /// <value>
        /// The unity container.
        /// </value>
        public IUnityContainer UnityContainer { get; set; }
 
        /// <summary>
        /// Gets or sets the BLOB provider.
        /// </summary>
        /// <value>
        /// The BLOB provider.
        /// </value>
        public IBlobProvider BlobProvider { get; set; }
 

        /// <summary>
        /// Gets or sets the proposed lease id.
        /// </summary>
        /// <value>
        /// The proposed lease id.
        /// </value>
        public string ProposedLeaseId { get; set; }


 	/// <summary>
        /// Initializes a new instance of the <see cref="BackgroundServiceBase" /> 	class.
        /// </summary>
        public BackgroundServiceBase()
        {
            UnityContainer = UnityHelper.ConfigureUnity();
            BlobProvider = UnityContainer.Resolve<IBlobProvider>();
        }

                
        /// <summary>
        /// Executes the specified context.
        /// </summary>
        public virtual void Execute()
        {
            // Avoid race condition
            Thread.Sleep((new Random()).Next(100, 200));
            if (BlobProvider.AcquireLease(TimeSpan.FromSeconds(45), ProposedLeaseId))
            {
                LeasedOperations();
            }
            else
            {
                Thread.Sleep(15 * 1000);
            }
 
            NonLeasedOperations();
        }
 

        /// <summary>
        /// Non leased operations.
        /// </summary>
        public abstract void NonLeasedOperations();
        
 
        /// <summary>
        /// Leased operations.
        /// </summary>
        public abstract void LeasedOperations();
    }

Using Unity, the blob provider is injected; it is responsible for acquiring the lease using the Azure blob container.
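The AcquireLease implementation isn't shown in the post. A hedged sketch using the storage client's blob leasing API might look like this (the container wiring and blob name are assumptions):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class BlobProvider : IBlobProvider
{
    private readonly CloudBlobContainer container;

    public BlobProvider(CloudBlobContainer container)
    {
        this.container = container;
    }

    public bool AcquireLease(TimeSpan leaseTime, string proposedLeaseId)
    {
        try
        {
            var blob = container.GetBlockBlobReference("service.lease");
            if (!blob.Exists())
                blob.UploadText(string.Empty); // create the lease blob on first use

            blob.AcquireLease(leaseTime, proposedLeaseId);
            return true;
        }
        catch (StorageException)
        {
            // Another instance currently holds the lease (409 Conflict).
            return false;
        }
    }
}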

Implementing the Background Service

Now create a long-running service that inherits from the background service base class and implements the leased and non-leased operations.

Write code inside the leased operations method to populate a queue with the tenant information. An example is shown in the code block below.

/// <summary>
        /// Leased operations.
        /// </summary>
        public override void LeasedOperations()
        {
            try
            {
                var tenants = UnitOfWork.Repository.Get<Tenant>();
                foreach (var tenant in tenants)
                {
                    var licenseContext = LicenseManager.TenantLicenseContext(tenant.TenantId);
                    if (licenseContext != null && licenseContext.Features.Contains((int)LicenseFeatures.[Name of the Feature]))
                        QueueProvider.PutMessage(Constants.QueueNames.[Queue Name Here], tenant.TenantId.ToString(), TimeSpan.FromSeconds(45));
                }
            }
            catch (Exception ex)
            {
                LogProvider.Error(ex);
            }
        }

The function gets the tenant information, checks whether the tenant has the appropriate licenses for the service, and then puts the tenant ID in the queue. Note that you don't have to fetch tenant IDs from the database every time; you can fetch them from, say, a cache, since this information doesn't change that often. Also note that the message is put in the queue for only 45 seconds, precisely the duration of the lease.

Once you have put the tenant IDs in the queue, read them in NonLeasedOperations and then create the business manager instances to process data based on the tenant ID.

/// <summary>
        /// Non leased operations.
        /// </summary>
        public override void NonLeasedOperations()
        {
            try
            {
                int processedCount = 0;
                while (processedCount == 0)
                {
                    var queueMessage = QueueProvider.ReadMessage(Constants.QueueNames.[Queue Name Here], TimeSpan.FromSeconds(30));
                    if (queueMessage == null)
                        return;
                    QueueProvider.DeleteMessage(Constants.QueueNames.[Queue Name Here], queueMessage.MessageId, queueMessage.PopReciept);
                    var businessManager = UnityContainer.Resolve<IBusinessManager>(new ParameterOverrides{
                    {"tenantId", int.Parse(queueMessage.Message)},
                    {"container", UnityContainer}
                    });
                    businessManager.ProcessResults(ref processedCount);
                }
            }
            catch (Exception ex)
            {
                LogProvider.Error(ex);
            }
        }

There are a couple of important things to note in the NonLeasedOperations method. One is how the manager instance is created, allowing multiple tenants to be processed at the same time; the other is the processedCount variable in the while loop. You will need to update this variable in the manager code to indicate that there are items to process for the tenant; if there are none, the loop moves on to the next tenant immediately.

Now you can create this service instance in the worker role and run it. You can also run it on a schedule, using schedulers like Quartz.NET.
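A minimal sketch of hosting the service in a worker role, assuming a hypothetical TenantProcessingService subclass of BackgroundServiceBase:

public override void Run()
{
    var service = new TenantProcessingService(); // hypothetical subclass of BackgroundServiceBase
    while (true)
    {
        // Execute acquires the lease when possible, queues tenant IDs,
        // and processes the queued tenants in NonLeasedOperations.
        service.Execute();
    }
}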

