Posted by Sandeep Chanda on December 12, 2014

ECMAScript 6, the latest version of the ECMAScript standard, adds a host of new features to JavaScript. Many of these features make JavaScript syntactically closer to C# and CoffeeScript. For example, a function shorthand is introduced in the form of arrow functions (=>):

 iterateItemsInCollection() {
   this.coll.forEach(f => {
     // operate on item f
   });
 }

Another noticeable feature is the class construct. Classes are syntactic sugar over the prototype-based object-oriented pattern and support inheritance, static and instance methods, and constructors.
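A minimal sketch of the class syntax (the Shape and Circle names here are illustrative, not from the spec):

```javascript
// ES6 class syntax: constructor, instance method, static method, inheritance.
class Shape {
  constructor(name) {
    this.name = name;
  }
  describe() {
    return `A shape called ${this.name}`;
  }
  static isShape(obj) {
    return obj instanceof Shape;
  }
}

class Circle extends Shape {
  constructor(radius) {
    super('circle');   // call the parent constructor
    this.radius = radius;
  }
  area() {
    return Math.PI * this.radius * this.radius;
  }
}

const c = new Circle(2);
console.log(c.describe());      // inherited instance method
console.log(Shape.isShape(c));  // static method on the base class
```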

The most interesting new feature, however, is the introduction of Object.observe(). Note that Object.observe() is going to be available only with ES7.

Object.observe() is touted to be the future of data binding. A new addition to the JavaScript API, Object.observe() lets you asynchronously observe changes (adds, updates, deletes) to a JavaScript object. That means you can implement two-way data binding with Object.observe() alone, without any frameworks (Knockout, Angular, etc.). Does this mean Object.observe() is set to obliterate frameworks such as Knockout, Angular, and Ember? Not really. If it's just data binding you need, you can certainly use Object.observe(), but most frameworks today provide far more than data binding (e.g., routing, templating) and remain indispensable for bigger, complex projects.
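A hedged sketch of observing a plain object; since Object.observe() shipped only in some engines of the time, the sketch guards for engines that lack it:

```javascript
// Observe adds/updates/deletes on a plain object. Object.observe delivers
// change records asynchronously, so observers run after the mutation.
const model = { count: 0 };

if (typeof Object.observe === 'function') {
  Object.observe(model, changes => {
    // each change record carries a type ('add', 'update', 'delete') and name
    changes.forEach(ch => console.log(ch.type, ch.name, model[ch.name]));
  });
} else {
  console.log('Object.observe is not available in this engine');
}

model.count = 1;   // would be reported as an "update" change
```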

Object.observe() brings a significant performance boost over the traditional dirty-checking used to implement data binding. Testimony to this is the Angular team's decision to rewrite the data binding implementation in Angular 2.0 using Object.observe(). As an API with great promise, Object.observe() is surely going to change a lot about the way we build client apps.


Posted by Sandeep Chanda on December 4, 2014

The web has come a long way from the days when its standards were controlled by two major browser makers, Netscape and Microsoft, who dictated web standards based on what their browsers supported with every new release. New players (Firefox, Chrome, Opera) entered the web battleground and helped move the web toward standardization. The tipping point was reached when the web stack got a major upgrade in the form of HTML5, CSS3 and additions to the JavaScript API. The future looks bright for web development, with amazing new features being added to the web platform. Here are some of the prominent features that will be available to web developers in the not-so-distant future:

Web Components — Web Components are a set of cutting-edge standards that allow us to build widgets for the web with full encapsulation. This encapsulation solves the fundamental problem with widgets built out of HTML and JavaScript: the widget isn't isolated from the rest of the page and its global styles. The main parts comprising Web Components are:

  1. HTML Templates — Reusable templates/HTML fragments. Read more.
  2. Shadow DOM — This is what enables encapsulation for a section of an HTML page/widget, by introducing a new element (known as the 'shadow root') in the DOM tree. Read more.
  3. Custom Elements — Custom elements are probably the best new thing available out of the box for web developers, as they allow us to define new HTML elements without the need for an external library (if you've used AngularJS, think custom directives). With custom elements you can have something like the following in your HTML:
    <contoso-timeline></contoso-timeline>

    Assuming contoso is the namespace under which your timeline control goes, that one line encapsulates the complete functionality of the timeline control. Neat! Read about custom elements in more detail.
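A hedged sketch of how such an element could be registered. It uses the later-standardized customElements API (early implementations used document.registerElement), and the element name and guard for non-browser environments are illustrative:

```javascript
// Define the hypothetical <contoso-timeline> element. HTMLElement only
// exists in browsers, so fall back to a plain class elsewhere to keep
// the sketch loadable anywhere.
const BaseElement = typeof HTMLElement !== 'undefined' ? HTMLElement : class {};

class ContosoTimeline extends BaseElement {
  connectedCallback() {
    // Render the widget's internals when it is attached to the page.
    this.textContent = 'Timeline loading...';
  }
}

// Register the tag only when the Custom Elements registry is available.
if (typeof customElements !== 'undefined') {
  customElements.define('contoso-timeline', ContosoTimeline);
}
```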

Google has released an amazing library called Polymer that is worth a look. It's essentially a polyfill for web components.

Not all browsers support Web Components today. The latest builds of Chrome and Opera do support them. Which features are supported in which browsers can be checked at caniuse.com.

To get a sense of what is coming in the future releases of each browser, you can check the respective bleeding edge versions —  Chrome Canary, Firefox Nightly, Opera Next and IE.


Posted by Sandeep Chanda on November 28, 2014

Earlier this month Google released its Material Design specifications. Material Design combines principles of good design with modern techniques from science to create a specification that provides a seamless experience to users across devices. The Material Design specification is a living document and continues to be updated on a regular basis. The inspiration behind the design elements is real-life cues, like surfaces, and first-class input methods like touch, voice, and type. The specification provides unified guidance on animation aspects like responsive interaction, style elements like color and typography, and layout principles. It also provides guidelines on UI components like text fields, buttons, grids, etc.

Material UI is a Less-based CSS framework built on Google's Material Design specifications. It is available as a Node package. Material UI provides the mark-up for various UI components, along with Less variables for the color palette and classes for typography. You can use the NPM console to install the Node module and then use the React library to start building your UI. In addition, you can use Browserify to perform the JSX transformation.

Here is the command to install Material UI:

npm install material-ui

Now that you have installed the Material UI in your application, you can start coding your UI using the components.

The first step is to import the Material UI dependencies:

var react = require('react'),
  materialUI = require('material-ui'),
  LeftMenu = materialUI.Menu; 

Next you create a react class to render the Material UI component:

var NavigationMenu = react.createClass({
  render: function() {
    return (
        <LeftMenu menuItems={this.props.items} />
    );
  }
});

Finally, you can render the UI component using the react.render function:

react.render(<NavigationMenu items={ labelMenuItems } />, mountNode); 

You are all set to create some great UI elements using the Material Design specifications and Material UI!


Posted by Sandeep Chanda on November 17, 2014

With the recent release of Visual Studio 2015 Preview, the Release Management feature, previously available only on-premise, is now available in Visual Studio Online. Release Management allows you to create a release pipeline and orchestrate the release of your application to different environments. Using the Visual Studio Online edition for release management allows you to scale your release operations on demand and realize the benefits of a cloud-based service.

The Visual Studio Release Management client is what you will still use to configure releases in your Visual Studio Online account.

Specify your Visual Studio Online URL to connect and start configuring the release template. There are four stages to configuring release management in Visual Studio Online.

  1. First you need to configure the environment. This would include among other steps, configuring the different stage types to represent the steps to production.
  2. Next you need to configure the environment and server paths.
  3. Once you are done with the first two steps, you can then create a release template. If you are using any tools you can add them. You can also add your actions to augment the built-in release management actions.
  4. Start managing the release.

You could potentially define your stages as testing (or QA), pre-production (or UAT), and finally production. Configure these under the stage types as shown below. The goal is to configure them in the line-up to production, which is the ultimate release you will manage.

In addition, you can also optionally specify the technology types to determine what is supported by each environment.

Next, you should configure your environment for release. If this is a Microsoft Azure environment, you can retrieve the details directly from your subscription, as illustrated below.

If you have PowerShell scripts from an existing application to deploy to an environment, you can use them directly without using an agent. Alternatively you can also use an agent to deploy.

Next, you can define custom actions to use during the release management process. Predefined release management actions for some common activities are already available with the client and are supported in Visual Studio Online, as the following figure shows:

You are now all set to create the release template components and then use them to build an automated or approval based release process.

The release template provides a workflow style interface to allow you configure different stages in the release pipeline. You can also use tagging to allow reusing stages across environments.

Visual Studio 2015 is bringing a host of new additions including significant ones around developer productivity. Watch out for a future post on them!


Posted by Sandeep Chanda on November 3, 2014

Web Components are redefining the way you build for the web! They are touted to be the future of web development and are definitely showing a lot of promise. Web Components allow you to build widgets that can be used reliably and will be resilient to changes in the future—as opposed to the current approach of building them using HTML and JavaScript.

The real issue with the current HTML-and-JavaScript approach is that widgets built this way are not truly encapsulated from one another in the DOM, leading to cross references and, ultimately, errors in the rendered layout. You cannot easily isolate content from the widget presentation, making it difficult to build widgets that can be reused reliably.

Web Components expose powerful features, like Shadow DOM and Templates, that are built for DOM encapsulation and for reuse in the form of widget templates, allowing you to separate content from infrastructure. Note that Web Components are designed around HTML and JavaScript, so there is no new skill you need to learn to start leveraging them right away.

Shadow DOM relies on a feature called the shadow root to support the DOM encapsulation process. Browsers supporting Web Components (e.g., Chrome 35+) recognize a JavaScript method called createShadowRoot on HTML elements, which allows an element to update its content by overriding the predefined content from the static mark-up. It is used in conjunction with newly supported tags like template and content to create reusable widgets. Here is an example in code:

<template id="detailsTagTemplate">
  <style>
  …
  </style>
  <div class="details">
    <content></content>
  </div>
</template>

The JavaScript code will look like:

document.querySelector('#detailsTag').textContent = [your message goes here]; 

This lets you dynamically project different content inside the details DIV tag. The template element itself is never rendered, and the content tag is replaced with the text of your message. This combination opens up a plethora of opportunities, letting you create reusable widgets and use them in your applications without having to worry about cross references.
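A hedged sketch of wiring the template above to a host element. The function name and host parameter are illustrative; createShadowRoot was the method described here, with attachShadow as its later standardized successor, and the guard keeps the sketch inert outside a browser:

```javascript
// Stamp #detailsTagTemplate into a host element's shadow root and project
// the host's text content through the <content> insertion point.
function renderDetails(host, message) {
  if (typeof document === 'undefined' || !host) return null; // non-browser guard
  const template = document.querySelector('#detailsTagTemplate');
  const root = host.attachShadow
    ? host.attachShadow({ mode: 'open' })   // standardized API
    : host.createShadowRoot();              // API available at the time
  root.appendChild(template.content.cloneNode(true));
  host.textContent = message;               // projected into <content>
  return root;
}
```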


Posted by Sandeep Chanda on October 22, 2014

Docker has revolutionized the micro-services ecosystem since its first launch a little more than a year ago. The recent announcement of Microsoft's partnership with Docker is a significant move, with some even calling it the best thing that has happened to Microsoft since .NET. This partnership will allow developers to create Windows Server Docker containers!

What is interesting is that this move will draw efforts and investment directly from the Windows Server product team, as well as from the open source community that has been championing the cause of Docker, giving it a serious footprint in the world of distributed application development, build, and distribution.

Dockerized apps for Linux containers on Windows Azure have already been in play for a while now. With this new initiative, Windows Server based containers will see the light of day. This is very exciting for developers as it will allow them to create and distribute applications on a mixed platform of both Linux and Windows. To align with the Docker platform, Microsoft will focus on the Windows Server Container infrastructure that will allow developers in the .NET world to share, publish and ship containers to virtually any location running the next gen Windows Server, including Microsoft Azure. The following initiatives have been worked out:

  1. Docker Engine supporting Windows Server images in the Docker Hub.
  2. Portability with Docker Remote API for multi-container applications.
  3. Integration of Docker Hub with Microsoft Azure Management Portal for easy provisioning and configuration.
  4. MS Open Tech will contribute the code to Docker Client supporting the provisioning of multi-container Docker applications using the Remote API.

This partnership should silence the reservations critics had regarding the success of the Docker platform and will be a great win for developers in the .NET world!


Posted by Sandeep Chanda on October 15, 2014

In one of the previous blog posts, I introduced DocumentDB, Microsoft's debut into the world of NoSQL databases. You learned how it differs as a JSON-document-only database, and you learned to create an instance of DocumentDB in Azure.

In the previous post, you used NuGet to install the required packages to program against DocumentDB in a .NET application. Today let's explore some of the programming constructs to operate on an instance of DocumentDB.

The first step is to create a repository that allows you to connect to your instance of DocumentDB. Create a repository class and reference the Microsoft.Azure.Documents.Client namespace in it. The Database object can be used to create an instance, as the following code illustrates:

Database db = DbClient.CreateDatabaseAsync(new Database { Id = DbId } ).Result; 

Here DbClient is a property of type DocumentClient, exposed by the Microsoft.Azure.Documents.Client API in your repository class. It provides the method CreateDatabaseAsync to connect to DocumentDB. You need the following key values from your instance of DocumentDB in Azure:

  1. End point URL from Azure Management Portal
  2. Authentication Key
  3. Database Id
  4. Collection name

You can create an instance of DocumentClient using the following construct:

private static DocumentClient DbClient
{
    get
    {
        Uri endpointUri = new Uri(ConfigurationManager.AppSettings["endpoint"]);
        return new DocumentClient(endpointUri, ConfigurationManager.AppSettings["authKey"]);
    }
}

Next you need to create a Document Collection using the method CreateDocumentCollectionAsync.

DocumentCollection collection = DbClient.CreateDocumentCollectionAsync(db.SelfLink, new DocumentCollection { Id = CollectionId }).Result; 

You are now all set to perform DocumentDB operations using the repository. Note that you need to reference Microsoft.Azure.Documents.Linq to use Linq constructs for querying. Here is an example:

var results = DbClient.CreateDocumentQuery<T>(collection.DocumentsLink); 

Note that whatever entity replaces type T, its properties must be decorated with the JsonProperty attribute to enable JSON serialization.

To create a document, pass an instance of your entity type T to the CreateDocumentAsync method as shown here:

DbClient.CreateDocumentAsync(collection.SelfLink, entity); 

In a similar fashion, you can also use the equivalent update method to update the data in your instance of DocumentDB.

Beyond .NET, DocumentDB also provides client libraries for JavaScript and Node.js. The interesting aspect is that it allows T-SQL-style operations, such as the creation of stored procedures, triggers, and user-defined functions, using JavaScript. You can write procedural logic in JavaScript, with atomic transactions. Performance is typically very good, with JSON mapped all the way from the client side to DocumentDB as the unit of storage.


Posted by Sandeep Chanda on October 10, 2014

The ongoing Xamarin Evolve conference is generating a lot of enthusiasm amongst cross-platform developers across the globe.

Xamarin has so far showcased an Android player, a simulator with hardware acceleration that claims to be much faster than the emulator in the Android SDK. It is based on OpenGL and utilizes hardware-accelerated virtualization with VT-x and AMD-V. The player relies on VirtualBox 4.3 or higher and runs equally well on Windows (7 or later) and OS X (10.7 or higher). After installing the player, you can select the emulator image to run, then select the device to simulate from the Device Manager. The emulator then runs exactly like the Android SDK emulator, and you can perform various actions (typical of a hardware operation) by clicking the buttons provided on the right-hand side. You can also simulate operations like multi-touch, battery events, and location controls. To install your apps for testing, you can drag and drop the APK file into the player.

Another cool release is the profiler that can be leveraged to perform code analysis of the C# code and profile it for potential performance bottlenecks and leaks. The profiler performs two important tasks. It does sampling for tracking memory allocation and looks at the call tree to determine the order of calling functions. It also provides a snapshot of memory usage on a timeline allowing the administrators to gain valuable insights into memory usage patterns.

My favorite feature so far, however, is the preview of Sketches. Sketches provides an environment to quickly evaluate code and analyze the outcome. It offers immediate results without the need to compile or deploy, and you can use it from Xamarin Studio. More on Sketches in the next post, after I install and give it a try myself.


Posted by Sandeep Chanda on September 29, 2014

Azure is increasingly becoming the scalable CMS platform with support for a host of popular CMS providers via the marketplace. The list already includes some of the big names in the CMS industry, like Umbraco, Kentico, Joomla, and DNN.

The most recent addition to this list is WordPress. It is very simple to create a WordPress website. Go to the Azure Preview Portal and click New to go to the Gallery. Select Web from the navigation pane and you will see Scalable WordPress listed as one of the options (along with other options such as Umbraco and Joomla).

Scalable WordPress uses Azure Storage by default to store site content. This automatically allows you to use Azure CDN for the media content that you want to use in your WordPress website.

Once you select Scalable WordPress, you will be redirected to the website configuration pane, where you can specify the name of the website, the database and the storage configuration settings. You are all set!

Log in to your WordPress site dashboard to configure plug-ins like Jetpack. Jetpack, formerly available only with WordPress.com, is now also available with Scalable WordPress. Your WordPress CMS site hosted in Azure can now support millions of visits and scale on demand. The Azure WordPress CMS website supports auto-scale out of the box. You can also enable the backup and restore features available with Azure websites for your CMS site, and it supports publishing content from stage to production.


Posted by Sandeep Chanda on September 15, 2014

NuGet has been a fairly popular mechanism to publish and distribute packaged components to be consumed by Visual Studio projects and solutions. Releases from the Microsoft product teams are increasingly being distributed as NuGet packages, and it is officially the package manager for the Microsoft development platform, including .NET.

NuGet.org is the central package repository used by authors and consumers for global open distribution. One limitation of the central repository is that, in large-scale enterprise teams, it often results in package version mismatches across teams, solutions, and projects. If not managed early, this spirals into a significant application versioning problem for release managers during deployment.

One approach to solving this problem is to use a local NuGet server provisioned for your enterprise. It mimics the central repository, but remains under the control of your release managers, who decide which package versions to release to consumers. The idea is that your Visual Studio users point to the local NuGet server instead of the central repository, and the release management team controls which versions of packages the teams use, for consistency. The following figure illustrates the process:

It is very easy to create a NuGet server. You can use the nuget command line tool to publish packages. You will need an API Key and the host URL.

Developers using Visual Studio can go to Tools  →  Options  →  NuGet Package Manager → Package Sources and add the internal package server as a source.

While local NuGet servers are used today as a mechanism for distributing internal packages, they can also be extended to become a gated process for distributing global packages to bring consistency in the versions used across teams.

