Posted by Sandeep Chanda on November 17, 2014

With the recent release of the Visual Studio 2015 Preview, the Release Management feature, previously an on-premises offering, is now available in Visual Studio Online. Release management allows you to create a release pipeline and orchestrate the release of your application to different environments. Using the Visual Studio Online edition for release management lets you scale your release operations on demand and realize the benefits of a cloud-based service.

You will still use the Visual Studio Release Management client to configure releases in your Visual Studio Online account.

Specify your Visual Studio Online URL to connect and start configuring the release template. There are four stages to configuring release management in Visual Studio Online.

  1. First, configure the environment. This includes, among other steps, configuring the different stage types that represent the steps to production.
  2. Next, configure the environment and server paths.
  3. Once the first two steps are done, create a release template. You can add any tools you use, and you can add your own actions to augment the built-in release management actions.
  4. Start managing the release.

You could define your stages as testing (QA), pre-production (UAT), and finally production. Configure these under the stage types as shown below. The goal is to configure them in the lineup leading to production, the ultimate release you will manage.

You can also optionally specify the technology types to indicate what each environment supports.

Next, configure your environment for release. If it is a Microsoft Azure environment, you can retrieve the details directly from your subscription, as illustrated below.

If you have PowerShell scripts from an existing application to deploy to an environment, you can use them directly without an agent. Alternatively, you can deploy using an agent.

Next, define any custom actions you will use during the release management process. Predefined release management actions for common activities ship with the client and are supported in Visual Studio Online, as the following figure shows:

You are now all set to create the release template components and then use them to build an automated or approval-based release process.

The release template provides a workflow-style interface that lets you configure the different stages in the release pipeline. You can also use tagging to reuse stages across environments.

Visual Studio 2015 brings a host of new additions, including significant ones around developer productivity. Watch for a future post on them!


Posted by Sandeep Chanda on November 3, 2014

Web Components are redefining the way you build for the web! Touted as the future of web development, they are definitely showing a lot of promise. Web Components allow you to build widgets that can be used reliably and that are resilient to future changes, as opposed to the current approach of building them with plain HTML and JavaScript.

The real issue with the current HTML-and-JavaScript approach is that widgets built this way are not truly encapsulated from one another in the DOM, leading to cross-references and ultimately errors in the rendered layout. You cannot easily isolate content from the widget presentation, which makes it difficult to build widgets that can be reused reliably.

Web Components expose powerful features like Shadow DOM and Templates, which are built for DOM encapsulation and reuse in the form of widget templates, allowing you to separate content from the infrastructure. Note that Web Components are designed around HTML and JavaScript, so there is no new skill to learn before you can start leveraging them.

Shadow DOM provides a feature called the shadow root to support the DOM encapsulation process. Browsers supporting Web Components (e.g., Chrome 35+) recognize a JavaScript method called createShadowRoot on HTML elements, which allows the element to update its content by overriding the predefined content from the static markup. It is used in conjunction with newly supported tags like template and content to create reusable widgets. Here is an example in code:

<template id="detailsTagTemplate">
  <style>
  …
  </style>
  <div class="details">
    <content></content>
  </div>
</template>

The JavaScript code will look like the following. This is a minimal sketch: it assumes the page also declares a host element such as <div id="detailsTag">, which the original snippet does not show.

// Assumed host element in the page markup: <div id="detailsTag"></div>
var host = document.querySelector('#detailsTag');
// Create the shadow root and stamp the template's content into it
var root = host.createShadowRoot();
root.appendChild(document.importNode(
    document.querySelector('#detailsTagTemplate').content, true));
// The <content> tag projects the host's text into the details DIV
host.textContent = 'your message goes here';

This works like magic, letting you dynamically project different content inside the details DIV tag. The template element itself is never rendered, and the content tag replaces itself with your message. This combination opens up a plethora of opportunities, letting you create reusable widgets and use them in your applications without having to worry about cross-references.


Posted by Sandeep Chanda on October 22, 2014

Docker has revolutionized the micro-services ecosystem since its first launch a little more than a year ago. The recent announcement of a partnership between Microsoft and Docker is a significant move, with some even calling it the best thing that has happened to Microsoft since .NET. This partnership will allow developers to create Windows Server Docker containers!

What is interesting is that this move will draw a mix of effort and investment directly from the Windows Server product team as well as from the open source community that has been championing Docker, giving the platform a serious footprint in the world of distributed application development, build, and distribution.

Dockerized apps in Linux containers on Windows Azure have been in play for a while now. With this new initiative, Windows Server-based containers will see the light of day. This is very exciting for developers, as it will allow them to create and distribute applications on a mixed platform of both Linux and Windows. To align with the Docker platform, Microsoft will focus on the Windows Server Container infrastructure, which will allow developers in the .NET world to share, publish, and ship containers to virtually any location running the next generation of Windows Server, including Microsoft Azure. The following initiatives have been worked out:

  1. Docker Engine supporting Windows Server images in the Docker Hub.
  2. Portability with Docker Remote API for multi-container applications.
  3. Integration of Docker Hub with Microsoft Azure Management Portal for easy provisioning and configuration.
  4. MS Open Tech contributing code to the Docker client to support provisioning of multi-container Docker applications using the Remote API.

This partnership should silence the reservations critics had regarding the success of the Docker platform and will be a great win for developers in the .NET world!


Posted by Sandeep Chanda on October 15, 2014

In a previous blog post, I introduced DocumentDB, Microsoft's debut into the world of NoSQL databases. You learned what makes it different as a JSON-document-only database, and you learned how to create an instance of DocumentDB in Azure.

In the previous post, you used NuGet to install the required packages to program against DocumentDB in a .NET application. Today let's explore some of the programming constructs to operate on an instance of DocumentDB.

The first step is to create a repository that lets you connect to your instance of DocumentDB. Create a repository class and reference the Microsoft.Azure.Documents.Client namespace in it. The Database object can be used to create an instance, as the following code illustrates:

Database db = DbClient.CreateDatabaseAsync(new Database { Id = DbId }).Result;

Here, DbClient is a property of type DocumentClient exposed by the Microsoft.Azure.Documents.Client API in your repository class. It provides the CreateDatabaseAsync method to connect to DocumentDB. You need the following key values from your instance of DocumentDB in Azure:

  1. End point URL from Azure Management Portal
  2. Authentication Key
  3. Database Id
  4. Collection name

You can create an instance of DocumentClient using the following construct:

// Requires: using System; using System.Configuration; using Microsoft.Azure.Documents.Client;
private static DocumentClient DbClient
{
    get
    {
        // The endpoint URI and auth key come from the appSettings section of your config file
        Uri endpointUri = new Uri(ConfigurationManager.AppSettings["endpoint"]);
        return new DocumentClient(endpointUri, ConfigurationManager.AppSettings["authKey"]);
    }
}

Next, create a document collection using the CreateDocumentCollectionAsync method:

DocumentCollection collection = DbClient.CreateDocumentCollectionAsync(db.SelfLink, new DocumentCollection { Id = CollectionId }).Result;

You are now all set to perform DocumentDB operations using the repository. Note that you need to reference Microsoft.Azure.Documents.Linq to use LINQ constructs for querying. Here is an example:

var results = DbClient.CreateDocumentQuery<T>(collection.DocumentsLink); 

Note that whatever entity replaces type T must have its properties decorated with the JsonProperty attribute to allow JSON serialization.
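
For illustration, here is a minimal sketch of such an entity and a LINQ query against it. The TodoItem type, its properties, and the query predicate are hypothetical, not from the original post (the JsonProperty attribute comes from Newtonsoft.Json, and the Where/ToList calls require System.Linq):

// Hypothetical entity; JsonProperty maps each property to a JSON field in the document
public class TodoItem
{
    [JsonProperty(PropertyName = "id")]
    public string Id { get; set; }

    [JsonProperty(PropertyName = "description")]
    public string Description { get; set; }

    [JsonProperty(PropertyName = "isDone")]
    public bool IsDone { get; set; }
}

// Query the collection with LINQ
var openItems = DbClient.CreateDocumentQuery<TodoItem>(collection.DocumentsLink)
    .Where(item => !item.IsDone)
    .ToList();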

To create an entry you can use the CreateDocumentAsync method as shown here:

DbClient.CreateDocumentAsync(collection.SelfLink, entity); // entity is an instance of T

In a similar fashion, you can use the SDK's replace method to update the data in your instance of DocumentDB.
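
As a minimal sketch, assuming a previously fetched Document instance named doc and an updated entity (both hypothetical names), the SDK's ReplaceDocumentAsync method performs the update:

// doc is a previously fetched Microsoft.Azure.Documents.Document; updatedEntity holds the new state
DbClient.ReplaceDocumentAsync(doc.SelfLink, updatedEntity);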

Beyond .NET, DocumentDB also provides libraries for JavaScript and Node.js. The interesting aspect is that it allows T-SQL-style operations, such as the creation of stored procedures, triggers, and user-defined functions, using JavaScript. You can write procedural logic in JavaScript with atomic transactions. Performance is typically very good, with JSON mapped all the way from the client side to DocumentDB as the unit of storage.
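
As a minimal sketch of that capability, the .NET client can register a stored procedure whose body is written in JavaScript. The procedure name and body below are hypothetical:

// Hypothetical example: register a JavaScript stored procedure from .NET
StoredProcedure sproc = new StoredProcedure
{
    Id = "helloWorld",
    Body = @"function() {
        var response = getContext().getResponse();
        response.setBody('Hello, DocumentDB!');
    }"
};
sproc = DbClient.CreateStoredProcedureAsync(collection.SelfLink, sproc).Result;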


Posted by Sandeep Chanda on October 10, 2014

The ongoing Xamarin Evolve conference is generating a lot of enthusiasm amongst cross-platform developers across the globe.

Xamarin has so far showcased an Android player, a hardware-accelerated simulator that claims to be much faster than the emulator that ships with the Android SDK. It is based on OpenGL and utilizes hardware-accelerated virtualization with VT-x and AMD-V. The player relies on VirtualBox 4.3 or higher and runs equally well on Windows (7 or later) and OS X (10.7 or higher). After installing the player, you can select the emulator image to run, choosing the device to simulate from the Device Manager. The emulator then runs exactly like the Android SDK emulator, and you can perform various actions (typical of a hardware operation) by clicking the buttons provided on the right-hand side. You can also simulate operations like multi-touch, battery operations, and location controls. To install your apps for testing, drag and drop the APK file into the player.

Another cool release is the profiler, which can be leveraged to analyze your C# code and profile it for potential performance bottlenecks and leaks. The profiler performs two important tasks: it samples to track memory allocation, and it examines the call tree to determine the order in which functions are called. It also provides a snapshot of memory usage on a timeline, allowing administrators to gain valuable insight into memory usage patterns.

My favorite feature so far, however, is the preview of Sketches. Sketches provides an environment to quickly evaluate code and analyze the outcome. It offers immediate results without the need to compile or deploy, and you can use it from Xamarin Studio. More on Sketches in the next post, after I install it and give it a try myself.


Posted by Sandeep Chanda on September 29, 2014

Azure is increasingly becoming a scalable CMS platform, with support for a host of popular CMS providers via the marketplace. The list already includes some of the big names in the CMS industry, like Umbraco, Kentico, Joomla, and DNN.

The most recent addition to this list is WordPress. It is very simple to create a WordPress website: go to the Azure Preview Portal and click New to open the Gallery. Select Web from the navigation pane and you will see Scalable WordPress listed as one of the options (along with other options such as Umbraco and Joomla).

Scalable WordPress uses Azure Storage by default to store site content. This automatically allows you to use Azure CDN for the media content that you want to use in your WordPress website.

Once you select Scalable WordPress, you will be redirected to the website configuration pane, where you can specify the name of the website, the database and the storage configuration settings. You are all set!

Log in to your WordPress site dashboard to configure plug-ins like Jetpack. Jetpack, formerly available only with WordPress.com, is now also available with Scalable WordPress. Your WordPress CMS site hosted in Azure can now support millions of visits and scale on demand; the Azure WordPress CMS website supports auto-scale out of the box. You can also enable the backup and restore features available with Azure websites for your CMS site, and it supports publishing content from stage to production.


Posted by Sandeep Chanda on September 15, 2014

NuGet has been a fairly popular mechanism for publishing and distributing packaged components to be consumed by Visual Studio projects and solutions. Releases from the Microsoft product teams are increasingly distributed as NuGet packages, and NuGet is officially the package manager for the Microsoft development platform, including .NET.

NuGet.org is the central package repository used by authors and consumers for global open distribution. One limitation of the NuGet central repository is that, in large enterprise teams, it often results in package version mismatches across teams, solutions, and projects. If not managed early, this spirals into a significant application versioning problem for release managers during deployment.

One approach to solving this problem is to provision a local NuGet server for your enterprise. It mimics the central repository, but it remains under the control of your release managers, who decide which package versions to release to your consumers. The idea is that your Visual Studio users point to the local NuGet server instead of the central repository, and the release management team controls which package versions the teams use, for consistency. The following figure illustrates the process:

It is very easy to create a NuGet server, and you can use the nuget command-line tool to publish packages to it. You will need an API key and the host URL.
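
For example, publishing a package to the internal server might look like the following; the package name, API key placeholder, and server URL are hypothetical:

nuget push MyCompany.Utilities.1.0.0.nupkg <your-api-key> -Source http://nuget.mycompany.local/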

Developers using Visual Studio can go to Tools → Options → NuGet Package Manager → Package Sources and add the internal package server as a source.

While local NuGet servers are used today as a mechanism for distributing internal packages, they can also be extended to become a gated process for distributing global packages to bring consistency in the versions used across teams.


Posted by Sandeep Chanda on September 3, 2014

Microsoft's recent entry into the world of NoSQL databases has been greeted with quite a fanfare, along with mixed reviews from competing products. What is interesting is that Microsoft chose to build DocumentDB as a new Azure-only offering rather than enhance its existing table storage capabilities.

DocumentDB is a JSON-document-only database as a service. A significant feature of DocumentDB that is missing in its traditional rivals is support for rich queries (including LINQ) and for transactions. Also interesting is that the new SQL syntax for querying JSON documents automatically recognizes native JavaScript constructs. DocumentDB additionally supports programmability features such as user-defined functions, stored procedures, and triggers. Given that it is backed by Azure with high availability and scalability, the offering seems to hold an extremely promising future.
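
For a flavor of that SQL syntax, a query over hypothetical JSON documents might look like this (the collection alias and property name are assumptions):

SELECT * FROM items WHERE items.isDone = false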

To start, create a new instance of DocumentDB in the Microsoft Azure Preview portal.

Click New in the preview portal and select DocumentDB. Specify a name and additional details like the capacity configuration and resource group, then click the Create button to create an instance of DocumentDB. After creating the instance, you can get the URI and keys by clicking the Keys tile.

Done! You are now ready to start using DocumentDB to store and query JSON documents. In your instance of Visual Studio, run the following NuGet command in the Package Manager Console to install the prerequisites for programming against DocumentDB.

PM> Install-Package Microsoft.Azure.Documents.Client -Pre

If you want to program against it using JavaScript, you can also install the JavaScript SDK from https://github.com/Azure/azure-documentdb-js and then leverage the REST interface to access DocumentDB using permission-based authorization. In a future post, we will look at some of the language constructs for programming with DocumentDB.


Posted by Sandeep Chanda on August 25, 2014

Enterprise monitoring needs have, over the years, been addressed to a large extent by Microsoft System Center Operations Manager (SCOM). The problem, however, is that SCOM produces a lot of noise, and the data can very quickly become irrelevant for producing actionable information. IT teams easily fall into the trap of configuring SCOM for every possible scheme of alerts, but fail to put effective mechanisms in place to improve the alert-to-noise ratio by building a usable knowledge base out of the alerts SCOM generates. Splunk, and its cloud avatar Hunk, can be very useful in the following respects:

  1. Providing actionable analytics using the alert log in the form of self-service dashboards
  2. Isolation of vertical and horizontal monitoring needs
  3. Generating context around alerts or a group of alerts
  4. Collaboration between IT administrators and business analysts
  5. Creating a consistent alerting scale for participating systems
  6. Providing a governance model for iteratively fine-tuning the system

In your enterprise, Splunk could be positioned in a layer above SCOM, where it gets the alert log as input for processing and analysis. This pair can be used to address the following enterprise monitoring needs of an organization:

  1. Global Service Monitoring - Provides information on the overall health of the infrastructure, including actionable information on disk and CPU usage. It can also be extended to cover network performance and the impact specific software applications have on the health of the system. Splunk augments SCOM by turning the collected data into dashboards that support decision making. For example, looking at CPU usage trends on a timeline, IT owners can decide whether to increase or decrease the core fabric.
  2. Application Performance Monitoring - Splunk can be extremely useful in deriving business decisions from the instrumentation in your code and the trace logs it generates. You can, for example, identify the purchase patterns of your customers. The application logs and alerts generated by custom applications and commercial off-the-shelf (COTS) software can be routed to Splunk via SCOM using the management packs. Splunk can then help you create management dashboards that, in turn, help the executive team decide the future course of business.

Using Splunk in conjunction with SCOM gives you a very robust enterprise monitoring infrastructure. That said, the true benefit of this stack can be realized only with an appropriate architecture for alert design, process guidance on thresholds, and identification of key performance indicators to improve the signal-to-noise ratio.


Posted by Sandeep Chanda on August 14, 2014

In Visual Studio 2013, the team unified the performance and diagnostics experience (memory profiling, etc.) under one umbrella named the Performance and Diagnostics Hub. Available under the Debug menu, this option reduces a lot of the clutter involved in profiling client- and server-side code during a debug operation. There was a lot of visual noise in the IDE in the 2012 version, and the hub is a significant addition for improving developer productivity.

In the Performance and Diagnostics Hub, you can select the target and specify the performance tools with which you want to run diagnostics. Various tools let you start capturing performance metrics like CPU usage and memory allocation. You can collect CPU utilization metrics on a Windows Forms or WPF application.

The latest release, Update 3, brings some key enhancements to the CPU and memory usage tools. In the CPU usage tool, you can now right-click a function name captured as part of the diagnostics and click View Source, which lets you navigate straight to the code that is consuming CPU in your application. The memory usage tool now allows you to capture memory usage for Win32 and WPF applications.

The hub also helps you identify hot paths in the application code that might be consuming extra CPU cycles and may need refactoring.

You can also look for the functions that are doing the most work, as illustrated in the figure below.

Overall, the Performance and Diagnostics Hub has become a useful part of the developer productivity arsenal, catering to the non-functional aspects of the application scope.

