
Posted by Sandeep Chanda on February 19, 2015

Of all the JavaScript frameworks available today, AngularJS is probably gaining the most traction in the developer community. Given its popularity, it was about time for the Visual Studio team to natively support AngularJS, including dynamic IntelliSense while writing code in Visual Studio.

Previously, you could download the AngularJS NuGet package; however, it would only set up your solution to support Angular, with some IntelliSense coming directly from the Angular objects. What it could not do was understand Angular's way of doing dependency injection.
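To see why dependency injection trips up static tooling, recall that Angular 1.x resolves dependencies by inspecting the parameter names of your functions at runtime. The sketch below is plain JavaScript illustrating the gist of that technique; it is not Angular's actual source, and OrderController is a hypothetical example:

```javascript
// A rough sketch (not Angular's real implementation) of how Angular 1.x
// infers dependencies: it stringifies the function and parses the
// parameter names out of its signature.
var FN_ARGS = /^function\s*[^(]*\(\s*([^)]*)\)/m;

function annotate(fn) {
  // Extract the comma-separated parameter list from the function source
  var argDecl = fn.toString().match(FN_ARGS);
  return argDecl[1].split(',')
    .map(function (arg) { return arg.trim(); })
    .filter(function (arg) { return arg.length > 0; });
}

// Example: a controller-style function whose dependencies are
// resolved purely by parameter name
function OrderController($scope, $routeParams) { /* ... */ }

console.log(annotate(OrderController)); // ['$scope', '$routeParams']
```

Because the injected services exist only as parameter names until runtime, an IntelliSense engine has to simulate this resolution step, which is exactly what the new extension does.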

With the recent release of an extension by John Bledsoe, a member of the Visual Studio community, the IntelliSense experience has been greatly enhanced to simulate execution while you write code. You will see native Angular APIs, such as the route provider methods, in Visual Studio 2013 IntelliSense, as mentioned in this post from the team.

The first thing you need to do to enable these IntelliSense features in Visual Studio 2013 is to configure the Angular extension. Download the extension from here and copy it into your Visual Studio installation folder under Microsoft Visual Studio 12\JavaScript\References. The extension works equally well with the VS 2015 CTP, as well as with other project types, such as cross-platform mobile application development using Apache Cordova.

Once you have set up the extension, the next step is to create an SPA application in your instance of Visual Studio 2013 and reference the Angular package.

You can now use the NuGet package explorer to search for Angular.Core and add a reference to the project.

You are now all set to create Angular apps and leverage the great IntelliSense features you have always enjoyed. Note that this is only supported in Angular 1.x. AngularJS 2.0 is a paradigm shift and will require a completely new extension in the future.

Posted by Sandeep Chanda on February 9, 2015

React.js is the breakthrough UI framework from Facebook, used to build composable UI components for large-scale applications.

It was about time the Facebook and Instagram teams shared with the larger community their experience in building large-scale web applications, and more precisely, the lessons they learned in production. This is the year of Facebook releasing its own dog food to the developer community in the form of a declarative, highly composable JavaScript UI framework called React.js. The most fascinating aspect of React.js is that, at its core, it is designed to react to underlying data changes and is aware enough to update only the changed parts. This idea is powerful for large-scale applications with frequent data changes.

To get started, download the React.js framework starter kit from here. Once you have downloaded the framework, you can use its resources to create your React.js UI in any of your favorite editors, such as Sublime Text or Notepad++. The framework uses the concept of a virtual DOM diff for high performance and optionally supports JSX, an XML notation for creating JS objects using HTML syntax. The first step in using the React.js framework in your application is to reference the react.js file in your HTML5 UI.

 <script src="build/react.js"></script>
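To get a feel for why the virtual DOM diff mentioned above is fast, here is a heavily simplified, hypothetical sketch in plain JavaScript. It is not React's actual algorithm; it only illustrates the idea of comparing two lightweight element descriptions and emitting the minimal set of patches instead of re-rendering everything:

```javascript
// A toy "diff" over element properties: compare the old and new
// descriptions and produce only the patches that actually changed.
function diffProps(oldProps, newProps) {
  var patches = [];
  Object.keys(newProps).forEach(function (key) {
    if (oldProps[key] !== newProps[key]) {
      patches.push({ op: 'set', key: key, value: newProps[key] });
    }
  });
  Object.keys(oldProps).forEach(function (key) {
    if (!(key in newProps)) {
      patches.push({ op: 'remove', key: key });
    }
  });
  return patches;
}

// Only className changed, so only one patch is produced
var patches = diffProps(
  { id: 'app', className: 'hidden' },
  { id: 'app', className: 'visible' }
);
console.log(patches); // [{ op: 'set', key: 'className', value: 'visible' }]
```

React applies the same principle to whole element trees, touching the real DOM only where the virtual trees differ.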

JSX is a clean way of separating a template from the display logic. While it is not required, it keeps the UI component code clean. At its core, React.js uses the render function to perform DOM mutation. You need to reference the JSX transform library if you want to use the declarative JSX syntax.

 <script src="build/JSXTransformer.js"></script>

Now you can use the JSX syntax to write React code. You can write it inside your HTML template, as shown here, or in a separate JS file.

    <div id="getTransformed"></div>
    <script type="text/jsx">
        // React code goes here
    </script>

The React code to transform the getTransformed div will look like:

 var Transformed = React.createClass({
    render: function() {
        return <div>I am {this.props.name}</div>;
    }
 });

 React.render(<Transformed name="Transformed!" />, document.getElementById('getTransformed'));

You can practice writing React code in JSFiddle here and see the UI get transformed in the console.

Posted by Sandeep Chanda on January 26, 2015

With asynchronous programming and TPL gaining popularity amongst .NET developers, the general availability of a stable release of the Immutable Collections feature has found a strong audience, especially in the world of cross-platform app development, where a lot of data transfer code is wrapped in portable class libraries and shared across different multi-threaded clients in the form of Windows 8, Store, or WPF applications. Interestingly, the immutable collections are not part of the core framework. Instead, they must be installed separately using the Microsoft.Bcl.Immutable package. Note that there have been recent additions to the package that are still in preview (such as the addition of ImmutableArray<T>), so to explore the newly added members, you must select the pre-release option instead of the stable version in the package manager or console.

Install-Package Microsoft.Bcl.Immutable -Pre

Earlier thread-safe patterns (from the early TPL days) promoted the use of concurrent collections, which are a definite alternative to immutable collections, but an expensive one. Concurrent collections internally use expensive locking mechanisms for thread safety that immutable collections are able to avoid. It is interesting to note that the internal nodes of an immutable collection are not themselves immutable during the construction of the collection, which reduces garbage.

Another important thing to note is that immutable collections only offer reference equality to avoid expensive computations of value equality on collections.

Once you have installed the NuGet package, you can start using immutable collections in your application or library. You will also notice the ToImmutableList() extension on collections. A good design practice when using immutable collections is to make the immutable collection properties read-only and set them in the constructor.

public class Cart
{
    public Cart(IEnumerable<Item> items)
    {
        Items = items.ToImmutableList();
    }

    public ImmutableList<Item> Items { get; private set; }
}

Now, in your comparison methods, you can compare instances and, in a thread-safe fashion, avoid creating a new instance of Item when there is a match.

return Object.ReferenceEquals(Items, value) ? this : new Item(value); 

It is also recommended practice to use the Create method when creating a new instance of an immutable collection, or to use the builder, whose nodes are not frozen until the ToImmutable call is made.

Posted by Sandeep Chanda on January 13, 2015

Last week the OData team at Microsoft released a preview of the RESTier framework, built to allow developers to quickly bootstrap an OData service implementation. As a matter of fact, as is claimed in this MSDN blog post, it will take developers fewer than 100 lines of code in one controller to build a standardized OData V4 service with rich domain logic.

RESTier is based on Web API OData. It is inspired by the simplicity of building WCF Data Services, combined with the flexibility of using Web API. It likely fills a much-needed, long-ignored void in the data services ecosystem, allowing developers to focus on their all-important business logic.

Note that the framework is in preview, so you must install it using the NuGet console; it will not appear in the package explorer. To install RESTier in your SPA, Web API, or MVC project, open the NuGet console and run the following command:

PM> Install-Package Microsoft.Restier -Pre

You can now start creating the domain files based on the associated data providers. Note that in its current form, RESTier only supports EF 6 as the data provider. So, if your project has an Order repository with the database context OrderContext, you can create an OrderDomain class with the following code:

using Microsoft.Restier.EntityFramework;

public class OrderDomain : DbDomain<OrderContext>
{
    public OrderContext Context
    {
        get { return DbContext; }
    }
}

For the controller, you can add an OrderController.cs class under the Controllers folder that inherits from the ODataDomainController class, part of the RESTier Web API namespace.

public class OrderController : ODataDomainController<OrderDomain>
{
    private OrderContext DbContext
    {
        get { return Domain.Context; }
    }
}

To complete the bootstrapping process, you also need to register the OData endpoint in your WebApiConfig.cs file (for a Web API project).

           "OrderApi", "api/Order",
            new ODataDomainBatchHandler(server)); 

All set. Run the application and browse to http://localhost/api/Order and you will see the entity set. You can now run the familiar OData commands for CRUD operations.

Posted by Sandeep Chanda on January 5, 2015

While Team Foundation Server now also supports local workspaces, where you can check out files and track changes without a read-only lock on them, there may be challenges that prevent you from using the local workspace mode in TFS. You may already be in the server-connected mode and not want to make your entire workspace local just for a few artifacts in your solution. This scenario becomes very realistic when you have files tracked by TFS that are not recognized by the Visual Studio ecosystem.

This can also be particularly useful for tracking the history of changes in documents checked into TFS. In any case, git gives you a lot of flexibility in cloning a TFS branch and creating a local workspace that you can manipulate, without having to worry about tracking the history, and then checking the changes back into TFS when you are done. This is done using the git-tfs tool, which acts as a two-way bridge between TFS and git. You can download the tool from the site and then use the git shell to execute the commands.

  1. To create a clone, use the command git tfs clone [TFS Branch URL] [Local Workspace Folder]
  2. To get the latest changes, use the command git tfs pull
  3. To check in, use git tfs checkin

Before you can successfully check in to TFS and merge the changes from your local repository, you will have to issue additional git commands to ensure the changes are recognized as part of your local git repository.

To verify the status of pending changes, use git status. To include the changes as part of your commit, use git add and then git commit.

If you are not a big fan of the git command shell and want a GUI for managing the local repository, you can also use the Visual Studio team client for git, available in Team Explorer if you are using Visual Studio 2013 Premium. You can use it to clone a repository from TFS and then start syncing your changes from the local repository to TFS. Once you have made changes to the local repository, they will show up in the explorer, and you can use the commit command to commit them. Once you have committed, use the sync command to publish your changes back into TFS.

This provides an interesting alternative to using local workspaces in TFS, and you can create as many local workspaces as you need!

Posted by Sandeep Chanda on December 22, 2014

Last week the Power BI team at Microsoft announced the availability of new features in Power BI for general preview. Power BI is Microsoft's attempt at creating a self-service business intelligence solution, aimed at giving decision makers the ability to create their own dashboards from raw data and run powerful analytical queries.

Apart from new dashboard and visualization features, the new release also previews hybrid connectivity to on-premises SQL Server Analysis Services (SSAS) tabular model instances. In line with its targeting of customers in the Apple world, the team is also releasing the Power BI iPad app.

A whole new set of visualization elements has been added to enhance the overall experience of analyzing data from different sources. You can now configure dashboards to display different types of charts, such as funnel, gauge, tree map, and combo charts. The dashboard also allows you to fetch data from both cloud and on-premises data sources and create a more unified analytical view, augmented by natural language queries to narrow down the result set you want analyzed.

The most interesting feature, in my opinion, is the live query feature. Power BI now allows you to connect to an on-premises source of analytics, such as SSAS cubes, and when analyzing the data, it can run the query directly using your credentials. This saves you from migrating your on-premises data to the cloud for analysis.

Power BI reports and dashboards can be designed and shared using the Power BI designer. It allows you to model data not only from your custom data sources, but also offers a host of adaptors to connect with popular SaaS applications like Salesforce, Zendesk, and GitHub, amongst others.

Posted by Sandeep Chanda on December 12, 2014

ECMAScript 6, the latest version of the ECMAScript standard, adds a host of new features to JavaScript. Many of the new features make JavaScript syntactically similar to C# and CoffeeScript. For example, function shorthand is introduced in the form of arrow functions (=>).

 iterateItemsInCollection() {
    this.coll.forEach(f => {
        // operate on item f
    });
 }

Another noticeable feature is the class construct. Classes are based on the prototype-based object-oriented pattern and support inheritance, methods (static and instance), and constructors.
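As a quick illustration of the class construct, here is a minimal sketch; the Shape and Square classes are invented for this example:

```javascript
// ES6 classes: constructor, instance methods, static methods,
// and inheritance via extends / super.
class Shape {
  constructor(name) {
    this.name = name;
  }
  describe() {                       // instance method
    return this.name + ' with area ' + this.area();
  }
  static isShape(obj) {              // static method
    return obj instanceof Shape;
  }
}

class Square extends Shape {
  constructor(side) {
    super('square');                 // call the base-class constructor
    this.side = side;
  }
  area() {
    return this.side * this.side;
  }
}

var sq = new Square(4);
console.log(sq.describe());          // 'square with area 16'
console.log(Shape.isShape(sq));      // true
```

Under the hood this is still the familiar prototype chain; the class syntax is sugar over it, not a new object model.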

The most interesting new feature, however, is the introduction of Object.observe(). Note that Object.observe() is going to be available only with ES7.

Object.observe() is touted to be the future of data binding. A new addition to the JavaScript API, Object.observe() lets you asynchronously observe changes (adds, updates, deletes) to a JavaScript object. That means you can implement two-way data binding using Object.observe() without using any frameworks (KO, Angular…). Does this mean Object.observe() is set to obliterate frameworks such as Knockout, Angular, and Ember? Not really. If it's just data binding you need, you can obviously use Object.observe(), but most frameworks today provide more than just data binding (e.g., routing, templates, etc.) and may be indispensable for bigger, complex projects.

Object.observe() brings a great performance boost over the traditional dirty-checking used to implement data binding. Testimony to this fact is that the Angular team has decided to rewrite the data binding implementation in Angular 2.0 using Object.observe(). As an API with great promise, Object.observe() is surely going to change a lot about the way we build client apps. You can read more here.
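Since Object.observe() is not broadly available yet, the toy below only emulates the shape of its change records with a plain setter wrapper. It is not the real API (which delivers batched records asynchronously); it just shows what observing adds and updates on an object looks like:

```javascript
// A toy emulation of Object.observe-style change records.
// Each mutation produces a record with name, object, type, and oldValue,
// and records are delivered to the callback in batches (here, batches of one).
function makeObservable(target, callback) {
  return {
    set: function (name, value) {
      var record = {
        name: name,
        object: target,
        type: name in target ? 'update' : 'add',
        oldValue: target[name]
      };
      target[name] = value;
      callback([record]);   // Object.observe batched records like this
    }
  };
}

var model = { price: 10 };
var changes = [];
var observed = makeObservable(model, function (records) {
  changes = changes.concat(records);
});

observed.set('price', 12);   // produces an 'update' record
observed.set('qty', 3);      // produces an 'add' record

console.log(changes.map(function (c) { return c.type; })); // ['update', 'add']
```

A data-binding layer built on such records only has to re-render the bound views named in each record, which is exactly where the performance win over dirty-checking comes from.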

Posted by Sandeep Chanda on December 4, 2014

The web has come a long way from the days when its standards were largely controlled by two major browser makers, Netscape and Microsoft, who dictated web standards based on what their browsers supported with every new release. New players (Firefox, Chrome, Opera) entered the web battleground and helped move the web towards standardization. The tipping point was reached when the web stack got a major upgrade in the form of HTML5, CSS3, and additions to the JavaScript API. The future looks bright for web development, with amazing new features being added to the web platform. Here are some of the prominent features that web developers will be able to use in the not-so-distant future:

Web Components — Web Components is a set of cutting-edge standards that allows us to build widgets for the web with full encapsulation. This encapsulation solves the fundamental problem with widgets built out of HTML and JavaScript: such a widget isn't isolated from the rest of the page and its global styles. The main parts comprising Web Components are:

  1. HTML Templates — Reusable templates/HTML fragments. Read more.
  2. Shadow DOM — This is what enables encapsulation for a section of an HTML page or widget, by introducing a new element (known as the 'shadow root') into the DOM tree. Read more.
  3. Custom Elements — Custom elements are probably the best new thing available out of the box for web developers, as they allow us to define new HTML elements without the need for an external library (if you've used AngularJS, think custom directives). With custom elements, you can have something like the following in your HTML:

     <contoso-timeline></contoso-timeline>

    Assuming contoso is the namespace under which your timeline control goes, that one line encapsulates the complete functionality of the timeline control. Neat! Read about custom elements in more detail.

Google has released an amazing library called Polymer that is worth a look. It's essentially a polyfill for web components.

Not all browsers support Web Components today. The latest builds of Chrome and Opera do support them. Which features are supported by which browsers can be checked at caniuse.com.

To get a sense of what is coming in future releases of each browser, you can check the respective bleeding-edge versions: Chrome Canary, Firefox Nightly, Opera Next and IE.

Posted by Sandeep Chanda on November 28, 2014

Earlier this month Google released its Material Design specifications. Material Design talks about a synergy between the principles of good design and modern techniques in science, creating a specification that provides a seamless experience to users across devices. The Material Design specification is a living document and continues to be updated on a regular basis. The inspiration behind the design elements is real-life cues like surfaces, and first-class input methods like touch, voice, and type. The specification provides unified guidance on animation aspects like responsive interaction, style elements like color and typography, and layout principles. It also provides guidelines on UI components like text fields, buttons, grids, etc.

Material UI is a Less-based CSS framework created from Google's Material Design specifications. It is available as a Node package. Material UI provides the markup for various UI components, as well as Less variables for the color palette and classes for typography. You can use the NPM console to install the node module and then use the React library to start building your UI. In addition, you can use Browserify to perform the JSX transformation.

Here is the command to install Material UI:

npm install material-ui

Now that you have installed the Material UI in your application, you can start coding your UI using the components.

The first step is to require the Material UI module:

var react = require('react'),
  materialUI = require('material-ui'),
  LeftMenu = materialUI.Menu; 

Next, you create a React class to render the Material UI component:

var navigationMenu = react.createClass({
  render: function() {
    return (
      <LeftMenu menuItems={this.props.items} />
    );
  }
});

Finally you can render the UI component using the React.render function.

React.render(<LeftMenu menuItems={ labelMenuItems } />, mountNode); 

You are all set to create some great UI elements using the Material Design specifications and Material UI!

Posted by Sandeep Chanda on November 17, 2014

With the recent release of the Visual Studio 2015 Preview, the on-premises Release Management feature is now also available in Visual Studio Online. Release Management allows you to create a release pipeline and orchestrate the release of your application to different environments. Using the Visual Studio Online edition of Release Management allows you to scale your release operations on demand and realize the benefits of a cloud-based service.

The Visual Studio Release Management client is still what you will use to configure releases in your Visual Studio Online account.

Specify your Visual Studio Online URL to connect and start configuring the release template. There are four stages to configuring release management in Visual Studio Online.

  1. First, configure the environment. Among other steps, this includes configuring the different stage types to represent the steps to production.
  2. Next, configure the environment and server paths.
  3. Once you are done with the first two steps, you can create a release template. If you are using any deployment tools, you can add them. You can also add your own actions to augment the built-in Release Management actions.
  4. Start managing the release.

You could, for example, define your stages as testing (QA), pre-production (UAT), and finally production. Configure these under the stage types as shown below. The goal is to configure them in the line-up to production, which is the ultimate release you will manage.

In addition, you can optionally specify the technology types to determine what is supported by each environment.

Next, configure your environment for release. If it is a Microsoft Azure environment, you can retrieve the details directly from your subscription, as illustrated below.

If you have PowerShell scripts from an existing application for deploying to an environment, you can use them directly, without an agent. Alternatively, you can use an agent to deploy.

Next, you can define custom actions that you will use during the release management process. Predefined Release Management actions for some common activities are already available with the client and are supported in Visual Studio Online, as the following figure shows:

You are now all set to create the release template components and then use them to build an automated or approval based release process.

The release template provides a workflow-style interface that allows you to configure different stages in the release pipeline. You can also use tagging to allow reusing stages across environments.

Visual Studio 2015 brings a host of new additions, including significant ones around developer productivity. Watch for a future post on them!
