Posted by Sandeep Chanda on September 15, 2014

NuGet has been a fairly popular mechanism for publishing and distributing packaged components to be consumed by Visual Studio projects and solutions. Releases from the Microsoft product teams are increasingly being distributed as NuGet packages, and it is officially the package manager for the Microsoft development platform, including .NET.

NuGet.org is the central package repository used by authors and consumers for global open distribution. One limitation of the central repository is that, in large-scale enterprise teams, it often results in package version mismatches across teams, solutions, and projects. If not managed early, this spirals into a significant application versioning problem for release managers during deployment.

One approach to solving this problem is to use a local NuGet server that you can provision for your enterprise. It mimics the central repository, but it remains under the control of your release managers, who can now decide which package versions to release to your consumers. The idea is that your Visual Studio users point to your local NuGet server instead of the central repository, and the release management team controls which versions of packages the teams use, for consistency. The following figure illustrates the process:

Creating a NuGet server is very easy. You can use the nuget command-line tool to publish packages to it; you will need an API key and the host URL.
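For example, assuming a hypothetical internal host at http://nuget.internal/ and a package named MyLibrary (both placeholders), publishing might look like this:

nuget push MyLibrary.1.0.0.nupkg -ApiKey [your-api-key] -Source http://nuget.internal/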

Developers using Visual Studio can go to Tools → Options → NuGet Package Manager → Package Sources and add the internal package server as a source.

While local NuGet servers are used today as a mechanism for distributing internal packages, they can also be extended into a gated process for distributing global packages, bringing consistency to the versions used across teams.


Posted by Sandeep Chanda on September 3, 2014

Microsoft's recent entry into the world of NoSQL databases has been greeted with quite a fanfare, and with mixed reviews from the makers of competing products. What is interesting is that Microsoft chose to introduce DocumentDB as a new Azure-only feature rather than enhancing its already existing table storage capabilities.

DocumentDB is a JSON-document-only database as a service. A significant feature included in DocumentDB, and missing in its traditional rivals, is support for rich queries (including LINQ) and transactions. What is also interesting is that the new SQL syntax for querying JSON documents automatically recognizes native JavaScript constructs. It also supports programmability features such as user-defined functions, stored procedures, and triggers. Given that it is backed by Azure with high availability and scalability, the offering seems to hold an extremely promising future.

To start, first create a new instance of DocumentDB in your Microsoft Azure Preview portal.

Click New in the preview portal and select DocumentDB. Specify a name and additional details like the capacity configuration and resource group. Go ahead and click the Create button to create an instance of DocumentDB. After creating the instance you can get the URI and keys by clicking on the Keys tile.

Done! You are now ready to start using DocumentDB to store and query JSON documents. In your instance of Visual Studio, run the following NuGet command in the Package Manager Console to install the prerequisites for programming with DocumentDB.

PM> Install-Package Microsoft.Azure.Documents.Client -Pre
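
Once the package is installed, a minimal sketch of connecting to the service, storing a document, and querying it with the .NET SDK might look like the following. The account URI, key, and the database, collection, and document values here are placeholders, not part of an official sample:

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

class DocumentDbSketch
{
    static async Task RunAsync()
    {
        // The endpoint URI and key are copied from the Keys tile of your DocumentDB instance.
        var client = new DocumentClient(
            new Uri("https://[your-account].documents.azure.com:443/"), "[your-key]");

        // Create a database and a collection to hold the JSON documents.
        var database = await client.CreateDatabaseAsync(new Database { Id = "OrdersDb" });
        var orders = await client.CreateDocumentCollectionAsync(
            database.Resource.SelfLink, new DocumentCollection { Id = "Orders" });

        // Store a plain .NET object; it is persisted as a JSON document.
        await client.CreateDocumentAsync(
            orders.Resource.SelfLink, new { id = "1", customer = "Contoso", total = 125.50 });

        // Query the collection using the SQL-like syntax for JSON documents.
        var contosoOrders = client.CreateDocumentQuery<dynamic>(
            orders.Resource.SelfLink,
            "SELECT * FROM Orders o WHERE o.customer = 'Contoso'").ToList();
    }
}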

If you want to program against it using JavaScript, you can also install the JavaScript SDK from https://github.com/Azure/azure-documentdb-js and then leverage the REST interface to access DocumentDB using permission-based authorization. In a future post, we will look at some of the language constructs in programming with DocumentDB.


Posted by Sandeep Chanda on August 25, 2014

Enterprise monitoring needs have, over the years, been addressed to a large extent by Microsoft System Center Operations Manager (SCOM). The problem, however, is that SCOM produces a lot of noise, and the data can very quickly become irrelevant for producing any actionable information. IT teams easily fall into the trap of configuring SCOM for every possible scheme of alerts, but do not put effective mechanisms in place to improve the alert-to-noise ratio by creating a usable knowledge base out of the alerts SCOM generates. Splunk, and its cloud avatar Hunk, can be very useful in the following aspects:

  1. Providing actionable analytics using the alert log in the form of self-service dashboards
  2. Isolation of vertical and horizontal monitoring needs
  3. Generating context around alerts or a group of alerts
  4. Collaboration between IT administrators and business analysts
  5. Creating a consistent alerting scale for participating systems
  6. Providing a governance model for iteratively fine-tuning the system

In your enterprise, Splunk could be positioned in a layer above SCOM, where it gets the alert log as input for processing and analysis. This pair can be used to address the following enterprise monitoring needs of an organization:

  1. Global Service Monitoring - Provides information on the overall health of the infrastructure, including actionable information on disk and CPU usage. It could also be extended to cover network performance and the impact specific software applications have on the health of the system. Splunk augments SCOM by creating dashboards from the collected data that can help drive decisions. For example, looking at CPU usage trends on a timeline, IT owners can decide whether to increase or decrease the core fabric.
  2. Application Performance Monitoring - Splunk can be extremely useful in making business decisions out of the instrumentation you add to your code and the trace logs it generates. You can, for example, identify the purchase patterns of your customers. The application logs and alerts generated by custom applications and commercial off-the-shelf (COTS) software can be routed to Splunk via SCOM using the management packs. Splunk can then help you create management dashboards that in turn will help the executive team decide the future course of business.

Using Splunk in conjunction with SCOM gives you a very robust enterprise monitoring infrastructure. That said, the true benefit of this stack can be realized only with an appropriate architecture for alert design, process guidance on thresholds, and identification of key performance indicators to improve the signal-to-noise ratio.


Posted by Sandeep Chanda on August 14, 2014

In Visual Studio 2013, the team unified the performance and diagnostics experience (memory profiling, etc.) under one umbrella and named it the Performance and Diagnostics Hub. Available under the Debug menu, this option removes a lot of clutter from profiling client- and server-side code during a debug operation. There was a lot of visual noise in the IDE in the 2012 version, and the hub is a significant addition toward improving developer productivity.

In the Performance and Diagnostics Hub, you select the target and specify the performance tools with which you want to run diagnostics. There are various tools you can use to start capturing performance metrics such as CPU usage and memory allocation. You can, for example, collect CPU utilization metrics on a Windows Forms or WPF application.

The latest release, Update 3, brings with it some key enhancements to the CPU and memory usage tools. In the CPU usage tool, you can now right-click a function name that was captured as part of the diagnostics session and click View Source. This allows you to easily navigate to the code that is consuming CPU in your application. The memory usage tool now allows you to capture memory usage for Win32 and WPF applications.

The hub also allows you to identify hot paths in the application code that might be consuming more CPU cycles and may need refactoring.

You can also look for the functions that are doing the most work, as illustrated in the figure below.

Overall, the Performance and Diagnostics Hub has become a useful part of the developer's arsenal, improving productivity and helping address the non-functional aspects of an application's scope.


Posted by Sandeep Chanda on August 5, 2014

Microsoft Azure Service Bus Event Hubs provide a topic-based publish/subscribe messaging platform that allows for high-throughput, low-latency message processing. A preview version was recently released by the Microsoft Azure team.

Event Hubs is a component of the Service Bus and works alongside Service Bus topics and queues. Event Hubs provide a perfect platform for collecting event streams from multiple devices and sending them to an analytics engine for processing. This makes them ideal for Internet of Things (IoT) scenarios, where you can capture events from various connected devices and make meaningful decisions based on the ingested event stream.

You can also run analytics on the ingress to perform tenant billing and performance monitoring, among many other possibilities. Event Hubs not only provide a reliable message processing platform, but also support durability for a predefined retention period, allowing consumers to reconnect in case of a failure.

Getting Started

An Event Hub is part of a Service Bus namespace and typically consists of a publisher policy, consumer groups, and partition keys. A publisher is a logical concept for publishing a message into an Event Hub, and a consumer is a logical concept for receiving messages. Partitions allow Event Hubs to scale, and subscribers connect to a partition. Events within a partition are also ordered for delivery.

Currently, the supported protocols for pub-sub are HTTP and AMQP. Note that for receiving data, only AMQP is currently supported.

The Azure Service Bus NuGet package provides the EventProcessorHost and EventHubClient APIs for processing messages and sending messages to the hub, respectively. To start a host that can listen for incoming messages, you can create a new instance of the EventProcessorHost as shown below:

host = new EventProcessorHost(
                hostName,
                eventHubName,
                consumerGroupName,
                eventHubConnectionString,
                storageConnectionString, eventHubName.ToLowerInvariant());

Note that it is a good practice to share the hub name in lowercase to avoid any case conflicts in the names that subscribers may present. You need to provide the connection string for the Event Hub on the Service Bus namespace, the storage connection string for the queue, the name of the consumer group, and a host name. You can then implement the IEventProcessorFactory interface to provide a factory for processing the incoming messages, and register that factory with the host using the RegisterEventProcessorFactoryAsync method so that it listens for ingress. Similarly, from the client, you can create an instance of the Event Hub client using the EventHubClient.CreateFromConnectionString method, and then start sending messages using the SendAsync method that the client exposes.
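
As a rough sketch of how the two sides fit together (the processor and factory class names below are hypothetical; host, eventHubConnectionString, and eventHubName are the values used in the snippet above), the receiving and sending code might look like this:

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

// A minimal processor that simply writes each event it receives to the console.
class LoggingEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) { return Task.FromResult(0); }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
            Console.WriteLine(Encoding.UTF8.GetString(eventData.GetBytes()));

        await context.CheckpointAsync(); // record progress in the storage account
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason) { return Task.FromResult(0); }
}

class LoggingEventProcessorFactory : IEventProcessorFactory
{
    public IEventProcessor CreateEventProcessor(PartitionContext context)
    {
        return new LoggingEventProcessor();
    }
}

// Register the factory so the host starts listening for ingress.
host.RegisterEventProcessorFactoryAsync(new LoggingEventProcessorFactory()).Wait();

// On the sending side, create a client and publish an event.
var client = EventHubClient.CreateFromConnectionString(eventHubConnectionString, eventHubName);
client.SendAsync(new EventData(Encoding.UTF8.GetBytes("{\"deviceId\":42,\"temperature\":71.3}"))).Wait();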


Posted by Sandeep Chanda on July 31, 2014

In the previous post you learned how to set up an Express Node.js application in Microsoft Azure and make it a unit of continuous deployment using Git. An Express Node.js application sitting in a silo, without a data store to back it, is not very useful. In this post you will explore setting up a MongoDB database using the Microsoft Azure marketplace that can then act as a repository for your Express Node.js web application to store large-scale unstructured data. Hosted in Azure, it is limited only by the ability of the platform to scale, which is virtually infinite.

Getting Started

The first thing you would need to do is to subscribe to the MongoLab service from the Microsoft Azure store. MongoLab is a fully hosted MongoDB cloud database that is available with all the major cloud providers, including Azure.

To add MongoLab service to your subscription, click New in your management portal, and select the Store (preview) option.

Note that, depending on your subscription, the store may or may not be available to you. Reach out to Azure support if you need more details.

Find MongoLab from under the App Services category in the store and select to add it to your subscription.

The 500 MB option is free to use. Enter your subscription details in the form that is presented and then click Purchase to complete the operation of adding it to your subscription. You can now use the Mongoose Node.js driver to connect to the MongoLab database and start storing your model data.

Installation

To install the Mongoose driver, run the following command in your console:

 npm install mongoose --save

You are now all set to connect to the MongoDB database hosted in Azure. Get the connection string for the hosted instance and then use it in your Express Node.js application controller code:

var mongoose = require('mongoose');
mongoose.connect('[connection string]');

You can use the model function to associate a model with a Mongoose schema and then perform your operations on the model data, as shown in the sketch below.
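
For example, a minimal sketch (the Order model and its fields are hypothetical, and mongoose is the instance required above) might look like this:

// Define a schema and register a model for it.
var orderSchema = new mongoose.Schema({ customer: String, total: Number });
var Order = mongoose.model('Order', orderSchema);

// Save a new document to the MongoLab-hosted database.
var order = new Order({ customer: 'Contoso', total: 125.5 });
order.save(function (err) {
    if (err) return console.error(err);

    // Query the collection for matching documents.
    Order.find({ customer: 'Contoso' }, function (err, orders) {
        if (err) return console.error(err);
        console.log(orders);
    });
});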


Posted by Sandeep Chanda on July 28, 2014

Express is a powerful, yet lightweight and flexible web application framework for Node.js. In this post, we will explore how you can create and deploy an Express application in Microsoft Azure.

Prerequisites

First and foremost you need Node.js. Once you have installed Node.js, use the command prompt to install Express.

npm install express

You can also use the -g switch to install Express globally rather than to a specific directory.

In addition, you will need to create a web site in Microsoft Azure that will host the application. If you have the Azure SDK for Node.js, you already have the command-line tools; if not, use the following command to install the Azure command-line tool:

npm install azure-cli

Create an Express App

Once you have installed Express, use the command prompt to create the Express scaffolding using the express command. This will install the scaffolding templates for views, controllers and other relevant resources with Jade and Stylus support:

express --css stylus [Your App Name] 

Next, run the install command to install the dependencies:

npm install

This command will install the additional dependencies that are required by Express. The express command creates a package.json file that will be used by the Azure command tool to deploy the application and the dependencies in Azure.

The express command creates a folder structure for views, controllers and models, to which you can add your own. To modify the default view, you can edit the index.jade file under the views folder and add your own markup. The app.js file under the application folder will contain an instance of Express:

var express = require('express');
var app = express();

You can now use the HTTP verb methods to start defining routes, as in the sketch below.
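
As a minimal sketch (the route and response text are placeholders, and app is the Express instance created above):

// Define a route for GET requests to the site root.
app.get('/', function (req, res) {
    res.send('Hello from Express on Azure!');
});

// Azure Websites supplies the port through process.env.PORT.
app.listen(process.env.PORT || 3000);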

Deploy an Express App

In order to deploy the Express app in Azure, first install the Azure command-line tools for Node.js if you don't have the SDK installed. The next thing you need to do is get the publish settings from Azure and import them into your Node.js application using the following commands:

azure account download
azure account import <publish settings file path>

Next, you need to create a web site in Azure, and also create a local git repository inside your application folder.

azure site create [your site name] --git

You can now commit your files to your local git repository and then push them to Azure for deployment using the following command:

git push azure master

You are now all set. The Express Node.js application is deployed in Azure.


Posted by Sandeep Chanda on July 8, 2014

The Microsoft Open Technologies team announced support for Cordova tools in Visual Studio 2013 Update 2 during Tech-Ed US. If you have been building cross-platform apps using Apache Cordova, you know that it is a great tool for leveraging your existing skills and assets in HTML5 and JavaScript, and then packaging those assets to build native apps for Android, iOS, and Windows Store/Phone.

You can download the Cordova tools for Visual Studio here. An important prerequisite to note is that the tools run only on Windows 8.1. The multi-device hybrid apps that you create using the tools support the following platforms:

  1. Android version 4+
  2. iOS 6 and iOS 7
  3. Windows 8+ Store
  4. Windows 8+ Phone

Once you install the tool, the additional dependencies will automatically be downloaded to support debugging and running the apps from within Visual Studio for various devices. Some of the significant dependencies are:

  1. Node.js
  2. Android SDK
  3. Apache Ant
  4. Java SDK
  5. SQLite
  6. Apple iTunes

For Android apps, you can use the Ripple emulator to run the app on devices of various form factors. For iOS apps, you need a Mac with support for Xcode 5.1. The tools, however, provide a remote agent that can be used to configure a remote Mac so that you can build and debug remotely. To set up the remote agent, you need to first install the ios-sim node module and then start the build server with the allowEmulate option set to true. In Visual Studio, you then configure the port and address of the remote build server in the Multi-Device Hybrid Apps general settings under Options.

You are now all set to create your first Cordova project. You will find the project template under JavaScript for creating a new blank Cordova project. The default solution structure is organized into folders to store the CSS files, scripts, and images. In addition, there is a folder called Res to store platform-specific assets and certificates, and a folder called Merges for adding platform-specific code.


Posted by Sandeep Chanda on June 30, 2014

In line with the Cloud First strategy, the Microsoft Azure team is aggressively working towards creating a connected systems and devices platform on top of the Azure Service Bus, called the Microsoft Azure Intelligent Systems Service. This will enable enterprises to aggregate data from a variety of devices, sensors, and other line-of-business applications. The data can then serve as a source for big data analytics using Azure HDInsight.

With all the scalability and performance attributes of Microsoft Azure in play, the platform could address the needs of industries such as transportation, manufacturing, supply chain, retail and healthcare, and enable the creation of smart cities and buildings. The idea of the Internet of Everything is to support more efficient use of resources by providing insight into the data aggregated from them. As Todd Holmquist-Sutherland and Paolo Salvatori noted in their Build 2014 conference session, the significance of the Internet of Things (IoT) lies in building the business of data-driven insight. The significance is not limited to gaining business insights, but extends to taking appropriate action through the platform. For example, in a smart grid, the system could use insights into the power consumption trends of households to take action and control the ingress.

The Azure IoT stack provides a Cloud Gateway that acts as a secure boundary for communicating with the backend services. The gateway interprets messages from proprietary protocols and sends them to the ingress messaging layer. The Cloud Gateway supports two patterns for communication: a Device Direct pattern, where IP-capable devices communicate directly with the Azure Service Bus for storage and notification, or a brokered Custom Gateway pattern, where a set of cloud services brokers the communication. The gateway allows partitioning of resources assigned to specific populations of devices, providing the ability to scale on demand.

The Cloud Gateway interacts with the Azure Service Bus using a topic-based publisher/subscriber pattern. Messages from the telemetry pump are split into alerts and data, which are sent to the alert processor and the storage pre-processor, respectively. This video from Build provides a quick preview of what's in store for the future of IoT with Microsoft Azure.


Posted by Sandeep Chanda on June 24, 2014

With mobile services in the fray, an ecosystem of publishers is constantly giving rise to data and business processes that are licensed to consumers. This has resulted in significant investment in outsourcing API management to providers that can scale the infrastructure to support the growing demands of consumers. Microsoft Azure is not behind in this race, and its major May 12, 2014 release announced support for API Management.

In today's post (and future posts as well), we will explore some of the API Management capabilities that Microsoft Azure brings to the table.

Before you read further, note that this, like many other features announced in the post, is still an early preview release not meant to be used for applications in production.

Azure API Management

Microsoft Azure API Management addresses some key issues concerning the sharing of cloud assets by creating a façade on top of them and isolating the developer and publisher personas to control access. It also allows developer usage to be metered and controlled, and provides rich analytics on it.

In Azure, APIs are published as Products, containing one or more APIs wrapping the core backend operations.

Groups

The visibility of Products is controlled using Groups. There are three built-in groups, namely Administrators, Developers and Guests. While administrators are primarily responsible for authoring, developers are the consumers of the authored APIs.

Policies

Azure API Management also provides a feature called Policies that is used to control the behaviour of APIs. A policy contains a series of instructions that are executed on the request to an API operation.

Developer Portal

APIs can be explored through a developer portal, where developers can read the documentation for API operations and ultimately consume them in their applications. In a future blog post, we will look at creating and publishing a cloud service as an API and then consuming it in an application through the developer portal.

