Posted by Sandeep Chanda on May 11, 2015

At the recent Build 2015 event, Microsoft announced the launch of Vorlon.js, an open source, extensible tool for remotely debugging JavaScript. Vorlon.js is powered by Node.js and lets you connect to as many as 50 devices simultaneously and run your JavaScript code on all of them with a single click, then see the results in the Vorlon.js dashboard. It comes from the same team that brought WebGL to the JavaScript world with Babylon.js.

The idea behind Vorlon.js is to let developers collaborate on JavaScript code and debug together. Code written by one person is visible to all, and the experience is browser agnostic. There are no dependencies: just plain JavaScript, HTML, and CSS running on the target devices.

The tool itself is a lightweight web server that can be installed locally or run from a server. The team accesses its dashboard, which acts as the central command center and communicates with all the remote devices. The tool is extensible through a plug-in framework that developers can use to create their own plug-ins, and it already ships with a few out-of-the-box plug-ins to view console logs, inspect the DOM, and display the JavaScript variable tree using the Object Explorer.

It is very easy to start using Vorlon.js. First, install the npm package from the Node.js console:

$ npm i -g vorlon

To start running the tool, type the vorlon command:

$ vorlon

The server now runs on localhost port 1337 by default. To start monitoring your application, add a reference to the following script:

<script src="http://localhost:1337/vorlon.js/SESSIONID"></script>

Here SESSIONID is any string that uniquely identifies the application; you can also omit it, in which case a default session ID is used. You can then see the output, DOM, and console log in the dashboard by navigating to the following URL:

http://localhost:1337/dashboard/SESSIONID

You are now all set to use Vorlon.js.
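Putting the pieces together, a minimal instrumented page might look like the following sketch (the session ID demo is illustrative; the port is the 1337 default):

```html
<!DOCTYPE html>
<html>
<head>
    <!-- Vorlon.js client script served by the local Vorlon server -->
    <script src="http://localhost:1337/vorlon.js/demo"></script>
</head>
<body>
    <h1>Hello Vorlon</h1>
    <script>
        // This message appears in the dashboard's console plug-in
        console.log("Page loaded on", navigator.userAgent);
    </script>
</body>
</html>
```

Open the dashboard for the same session ID and the device shows up in the client list.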


Posted by Sandeep Chanda on April 28, 2015

While the last few decades were dominated by client-server technologies, this one will be the decade of connected systems and devices operating on cloud platforms. Service orientation has paved the way for hosted APIs and Software as a Service (SaaS). Communications between publishers and subscribers of such services are getting orchestrated through the cloud -- with a hosted and managed foundation, giving rise to a new world of software defined infrastructure with programmable compute, networking, security and storage. What this means is that development teams can worry less about hosting the infrastructure and focus instead on optimizing the core aspects of the system under development. The following figure illustrates the reorganized technology stack that you can expect to shape up in the near future:

At the infrastructure tier, virtual machines are now a thing of the past. The majority of the application stack will be managed by container technologies such as Docker, which are lightweight and can be built and deployed using scripts, making the GUI redundant. Microsoft Windows is already hitching a ride on container technology with the announcement of Nano Server. Container technologies will make it very easy for DevOps teams to automate release processes, and platforms such as Chef will be leveraged to turn your infrastructure into code and automate deployments and testing. Microsoft is also working closely with Chef to extend its capabilities on Nano Server.
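As an illustration of scripted container builds, a minimal Dockerfile for a Node.js service might look like this (the base image tag and file names are illustrative, not from the article):

```dockerfile
# Start from an official Node.js base image
FROM node:0.12

# Install dependencies, then copy in the application code
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .

# Expose the service port and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

The whole build and release then reduces to commands like docker build and docker run, which is exactly what makes it automatable.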

Sitting a layer above will be APIs delivering Software as a Service. Platforms like Azure App Service and Apiary are already making it easier for developers to host their APIs and make them accessible via a marketplace. In addition, a variety of UI technologies are evolving that target devices of multiple form factors and allow the data from the APIs to be consumed.


Posted by Sandeep Chanda on April 20, 2015

Until now, the Microsoft Azure team had been treating the web and mobile app platforms independently, supporting a different hosting model for each one. While Azure Mobile Services catered to the needs of the mobile-first approach, Azure Websites supported hosting of .NET, Node.js, Java, PHP, and Python based web applications with built-in scale. Underneath, these services were not that different from each other, and it was time the Azure team unified them under one platform. They did just that last month in the form of Azure App Service, which brings together the Web App, Mobile App, Logic App (workflow) and API services, all of which can easily connect with SaaS-based and on-premises applications, with unified low-cost pricing.

The Logic App is a new feature that you can use to create a workflow and automate the use of data across services without having to write any code. You can log in to the Azure preview portal and create a new Logic App by navigating to the Web + Mobile group under the New menu.

From under the Web + Mobile menu, click on the Logic App option to create a new Logic App. Specify a name for the app and select an App Service Plan.

You can also configure other options, such as the pricing package, resource group, subscription and location. You can then click on the Triggers and Actions tab to configure the trigger logic. The following figure illustrates using the Facebook connector to post a recurring message from the weather service to the timeline.


Posted by Sandeep Chanda on April 7, 2015

Last week, Microsoft released an alpha version of TypeScript 1.5, its application-scale superset of JavaScript. The language now features a plug-in for Sublime Text, one of the most popular editors. The 1.5 version incorporates new concepts from the ES6 and ES7 specifications in the form of Modules (introduced in ES6) and Decorators. TypeScript 1.5 syntax lets you define external modules and import them, with the ES6-style default import/export feature incorporated as well.

To export a function from a module, use the export keyword before a function in your .ts file that is likely to be referenced elsewhere.

export function greet(person) {
    return "Hello " + person;
}

For a default export, you can add the default keyword:

export default function greet(person) {
    return "Hello " + person;
}

In your calling script, you can now reference the file containing the greet function and then import it.

import greet from "<the referenced file name>";

You can import multiple functions by wrapping them in curly braces.

import { greet, …} from "<the referenced file name>";

Alternatively, you can import everything under a module namespace using the following syntax:

import * as greetings from "<the referenced file name>";

TypeScript 1.5 will also support Decorators, a proposed ES7 feature. Decorators are a superset of metadata annotations; they allow you to annotate and modify classes and properties at design time. A decorator is placed immediately before the property or class it annotates. For example, the following syntax shows how to create a read-only property:

class Person {
  @readonly
  name() { return `${this.first} ${this.last}`; }
}

Among other examples of decorators, you can use @memoize to memoize an accessor.
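A decorator is just a function that receives the target, the member name, and the property descriptor, and it can rewrite that descriptor. As a minimal sketch, in plain JavaScript, of what a readonly decorator could do once the syntax is compiled away (the Person constructor here is illustrative):

```javascript
// A readonly decorator: flips the descriptor's writable flag off
function readonly(target, key, descriptor) {
    descriptor.writable = false;
    return descriptor;
}

// An ES5-style class standing in for the TypeScript Person class
function Person(first, last) {
    this.first = first;
    this.last = last;
}
Person.prototype.name = function () {
    return this.first + " " + this.last;
};

// Roughly what applying "@readonly" to name() amounts to:
var descriptor = Object.getOwnPropertyDescriptor(Person.prototype, "name");
Object.defineProperty(Person.prototype, "name",
    readonly(Person.prototype, "name", descriptor));

var p = new Person("Ada", "Lovelace");
console.log(p.name()); // "Ada Lovelace"

// The method can no longer be overwritten
console.log(Object.getOwnPropertyDescriptor(Person.prototype, "name").writable); // false
```

The decorator syntax simply gives this pattern a declarative form at the point where the member is defined.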

To build TypeScript from source with Node.js, create a git clone of the repository, then install the Jake build tool and the dependencies and run the build:

git clone https://github.com/Microsoft/TypeScript.git
cd TypeScript
npm install -g jake
npm install
jake local


Posted by Sandeep Chanda on March 26, 2015

For many years now, Dominick Baier and his team at Thinktecture have been relentlessly pursuing a lightweight alternative to costly server technologies for implementing really simple claims-based identity solutions. Their IdentityServer framework has graduated into an enterprise-class identity suite, with many large corporations leveraging it for single sign-on. With the release of IdentityServer3, it is now an OWIN/Katana based framework with hostable components to support SSO in modern web applications, supporting modern identity specifications like OpenID Connect and OAuth 2.0. It is very easy to configure IdentityServer3 in your ASP.NET MVC or Web API application.

First you need to install the relevant NuGet packages: Microsoft.Owin.Host.SystemWeb and Thinktecture.IdentityServer3. Next you need to set up an OWIN startup host file that replaces the ASP.NET host. You can create a Startup.cs file in your ASP.NET MVC project and call the UseIdentityServer extension method on IAppBuilder to set up IdentityServer in your OWIN host.

public void Configuration(IAppBuilder app)
{
    var options = new IdentityServerOptions
    {
        SigningCertificate = <implementation to fetch the certificate>,
        Factory = Factory.Create()
    };

    app.UseIdentityServer(options);
}

You must also decorate the class with the OwinStartup attribute.

 [assembly: OwinStartup(typeof(<your Startup class>))]

In addition, in your Web.config file you must set the runAllManagedModulesForAllRequests attribute to true to allow IdentityServer resources to be loaded correctly.
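That setting lives on the modules element under system.webServer:

```xml
<configuration>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true" />
  </system.webServer>
</configuration>
```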

It is also possible to specify the clients that will leverage the identity server for authentication, and the provider supplying the identity information from a user database or LDAP repository. This completes the IdentityServer configuration, and you can browse the /identity/.well-known/openid-configuration URL to discover the endpoints.
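IdentityServer3 ships with in-memory options for registering clients and users, which is the quickest way to try this out. A sketch of what that registration can look like (the client ID, redirect URI, and user details here are illustrative, not from the article):

```csharp
var factory = new IdentityServerServiceFactory()
    .UseInMemoryClients(new[]
    {
        new Client
        {
            ClientName = "MVC Sample",   // illustrative
            ClientId = "mvcsample",
            Flow = Flows.Implicit,
            RedirectUris = new List<string> { "https://localhost:44300/" }
        }
    })
    .UseInMemoryUsers(new List<InMemoryUser>
    {
        new InMemoryUser { Username = "alice", Password = "secret", Subject = "1" }
    });

var options = new IdentityServerOptions
{
    SigningCertificate = <implementation to fetch the certificate>,
    Factory = factory
};
```

In production you would back the factory with your own user store or LDAP provider instead of the in-memory registrations.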

To add OAuth 2.0 support, IAppBuilder provides the UseJsonWebToken method that you can configure in your Startup.cs file:

app.UseJsonWebToken(
    issuer: ConfigurationManager.AppSettings["Issuer"],
    audience: ConfigurationManager.AppSettings["Audience"],
    signingKey: signingKey);

You are all set. You can now use the Authorize attribute on your controller actions to authorize resource access and initiate authentication with IdentityServer3, which will present the login page and, based on the configured identity provider, let the user log in to access the resource. The Authorize attribute is available out of the box in MVC, but you can use the more robust annotated resource authorization feature in IdentityServer3 instead. To use that, install the Thinktecture.IdentityModel.Owin.ResourceAuthorization.Mvc package and then start using the ResourceAuthorize attribute on your controller actions:

 [ResourceAuthorize("Read", "OrderDetails")]

You can now isolate access control (who can read the order details in the example above) in an AuthorizationManager that invokes the relevant manager depending on the resource being accessed. The AuthorizationManager should be registered as part of the OWIN startup configuration using the IAppBuilder UseResourceAuthorization method.


Posted by Sandeep Chanda on March 20, 2015

Recently, Scott Guthrie announced the release of ASP.NET 5 on his blog. This release features the most radical changes to the popular web development framework in its 15-year history, and the preview is now available for download. According to Scott, this open source release focuses on making the framework leaner, more modular, and optimized for the cloud. It is also highly portable, running with equal ease on Windows, Linux and Mac based systems. Among the important updates to the core framework, the .NET Core runtime is now deployed as part of the app, allowing you to run multiple versions side by side and keeping the application independent of the framework installed on the host OS. In addition, to keep the framework lightweight, the components are available as NuGet packages and you can install only what your application requires.

Another interesting developer productivity update in Visual Studio 2015 for ASP.NET 5 is the dynamic compilation feature, which reflects changes, such as values assigned to variables, at runtime and shows them updated in the UI output. This will go a long way toward helping developers debug without wasting time recompiling the entire project.

ASP.NET 5 also comes with MVC 6, which will be a more unified model than previous editions, removing the redundancy across Web API, MVC and Web Forms. This will also aid seamless transitions from one platform to another. Syntactical improvements in Razor are a nice addition as well: by extending the semantics of tags in markup, you can now use a much more robust declarative syntax, such as asp-validation-summary instead of @Html.ValidationSummary, in your views.
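For example, a validation summary that a view previously rendered through the HTML helper can be expressed as a tag helper attribute on a plain div (the exact attribute values evolved during the preview, so treat this as a sketch):

```html
<!-- Before: HTML helper -->
@Html.ValidationSummary()

<!-- After: MVC 6 tag helper on ordinary markup -->
<div asp-validation-summary="ValidationSummary.All"></div>
```

The markup stays valid HTML, which makes the views friendlier to designers and tooling.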

However, the most important addition, in my opinion, is the native support for dependency injection. ASP.NET 5 provides the Activate attribute, which can be leveraged to inject services via properties, not just in controllers, but in filters and views as well.


Posted by Sandeep Chanda on March 12, 2015

The developers at Airbnb have been working on a library named Rendr.js that allows you to pull application logic to the server using Backbone.js and Node.js, in a way that lets it be shared with the client. This is an interesting premise in the world of JavaScript web app development: you can potentially control the HTML you want rendered in the client without requiring the client to first download the JavaScript. Underlying Rendr is an interesting mix of technologies. The first noticeable choice is the use of CoffeeScript. By their own admission this is controversial, but given its popularity they went ahead with it. In the controller, instead of a typical Backbone-style router, they use real controller objects that allow related actions to be grouped into manageable units.

The routes are specified in separate route files that also support optional route parameters.

# app/routes.coffee
module.exports = (match) ->
  match 'orders/:id', 'orders#show'
  match 'search',     'orders#search', orderCount: 100

Notice the optional parameter orderCount. The controller is executed on both the client and the server. Given the previous route, if a user hits the resource /orders/1100, the router will look for the show action in the orders controller.

show: (params, callback) ->
    spec =
      model: {model: 'Order', params: params}
    @app.fetch spec, (err, results) ->
      callback(err, results) 

The fetch call provides for a layer of indirection supporting caching in client and server as the controller actions are executed.

Another technology of note here is CommonJS, which Node uses to require modules. The same syntax is used on the client with the help of Stitch, a library written to stitch CommonJS modules together in the browser.

The views and models extend Backbone.View and Backbone.Model respectively, and each view is associated with a Handlebars template. On the server, a getHtml() call pushes all the HTML manipulation to the Handlebars template. The client uses View#render() to invoke the getHtml() call and update the innerHTML of the template elements.

Rendr is an interesting mix of technologies and a completely new premise in developing mobile web apps using well known JavaScript libraries and frameworks. While it is not completely open sourced yet, the promise it shows is definitely worth a mention in developing real world apps.


Posted by Sandeep Chanda on February 19, 2015

Of all the JavaScript frameworks available today, AngularJS is probably gaining the most traction in the developer community. Given its popularity, it was about time the Visual Studio team natively provided support for AngularJS, including dynamic IntelliSense while writing code in Visual Studio.

Previously, you could download the AngularJS NuGet package; however, it would only set up your solution to support Angular, with some IntelliSense coming directly from the Angular objects. What it could not do was understand Angular's way of doing dependency injection.
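What makes Angular hard for generic JavaScript IntelliSense is that services are injected by name, either inferred from the function parameters or listed in an array annotation. A typical registration looks like the following sketch (the module, controller, and URL names are illustrative):

```javascript
// The strings in the array tell Angular which services to inject
// into the controller function; a tool has to evaluate this pattern
// to know that $scope and $http are Angular services.
var app = angular.module('storeApp', []);

app.controller('OrdersController', ['$scope', '$http',
    function ($scope, $http) {
        $http.get('/api/orders').then(function (response) {
            $scope.orders = response.data;
        });
    }
]);
```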

With the recent release of an extension by John Bledsoe, a member of the Visual Studio community, the IntelliSense experience has been greatly enhanced to simulate execution while writing code. You will see native Angular APIs, like the route provider methods, become visible in Visual Studio 2013 IntelliSense, as mentioned in this post from the team.

The first thing you need to do to enable these IntelliSense features in Visual Studio 2013 is to configure the Angular extension. You can download the extension from here and then copy it into your Visual Studio installation folder under Microsoft Visual Studio 12\JavaScript\References. The extension works equally well with the VS 2015 CTP and with other project types, such as cross-platform mobile application development using Apache Cordova.

Once you have set up the extension, the next step is to create an SPA application in your instance of Visual Studio 2013 and reference the Angular package.

You can now use the NuGet package explorer to search for Angular.Core and add reference to the project.

You are now all set to create Angular apps and leverage the great IntelliSense features you have always enjoyed. Note that this is only supported in Angular 1.x; AngularJS 2.0 is a paradigm shift and will require a completely new extension in the future.


Posted by Sandeep Chanda on February 9, 2015

React.js is the breakthrough UI framework from Facebook that is used to build composable UI components for large scale applications.

It was about time the Facebook and Instagram teams shared their experience in building large scale web applications with the larger community, and more precisely the lessons they learned in production. This is the year of Facebook releasing its own dog food to the developer community in the form of React.js, a declarative, highly composable JavaScript UI framework. The most fascinating aspect of React.js is that at its core it is designed to react to underlying data changes, and it is aware enough to update only the changed parts. This idea is powerful for large scale applications with frequent data changes.

To get started, you can download the React.js starter kit from here. Once you have downloaded the framework, you can use its resources to create your React.js UI with any of your favorite editors, such as Sublime Text or Notepad++. The framework uses a virtual DOM diff for high performance and optionally supports JSX, an XML notation for creating JS objects using HTML syntax. The first step in using the React.js framework in your application is to reference the react.js file in your HTML5 UI.

 <script src="build/react.js"></script>

JSX is a clean way of separating a template from the display logic. While it is not required, it keeps UI components looking clean. At its core, React.js uses the render function to perform the DOM mutation. You need to reference the JSX transformer library if you want to use the declarative JSX syntax.

 <script src="build/JSXTransformer.js"></script>

Now you can use the JSX syntax to write code in React DOM. You can write the React code inside your HTML template as shown here or in a separate JS file.

 <body>
    <div id="getTransformed"></div>
    <script type="text/jsx">
    </script>
</body>

The React code to transform the getTransformed div will look like:

var Transformed = React.createClass({
    render: function() {
        return <div>I am {this.props.name}</div>;
    }
});
React.render(<Transformed name="Transformed!" />, document.getElementById('getTransformed'));
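For reference, the JSX transformer compiles that markup down to plain function calls; the snippet above is roughly equivalent to this desugared JavaScript:

```javascript
var Transformed = React.createClass({
    render: function () {
        // <div>I am {this.props.name}</div> becomes a createElement call
        return React.createElement("div", null, "I am ", this.props.name);
    }
});
React.render(
    React.createElement(Transformed, { name: "Transformed!" }),
    document.getElementById("getTransformed")
);
```

This is why JSX is optional: everything it expresses can be written directly against the React API.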

You can practice writing React code in JSFiddle here and see the UI get transformed in the console.


Posted by Sandeep Chanda on January 26, 2015

With asynchronous programming and the TPL gaining popularity among .NET developers, the general availability of a stable release of the Immutable Collections feature has found a strong audience, especially in cross platform app development, where a lot of the data transfer code is wrapped in portable class libraries and shared across different multi-threaded clients in the form of Windows 8 Store or WPF applications. What is interesting is that the immutable collections are not part of the core framework; they must be installed separately using the Microsoft.Bcl.Immutable package. Note that there have been recent additions to the package that are still in preview (such as the addition of ImmutableArray&lt;T&gt;), so to explore the newly added members you must select the pre-release option instead of the stable version in the package manager or console.

Install-Package Microsoft.Bcl.Immutable -Pre

Earlier thread-safe patterns (from the early TPL days) promoted the use of concurrent collections, which are a definite alternative to immutable collections, albeit an expensive one. Concurrent collections internally use expensive locking mechanisms for thread safety that immutable collections are able to avoid. It is interesting to note that internal nodes in immutable collections are not immutable while the collection is being constructed, in order to reduce garbage.

Another important thing to note is that immutable collections only offer reference equality to avoid expensive computations of value equality on collections.

Once you have installed the NuGet package, you can start using immutable collections in your application or library. You will also notice the ToImmutableList() extension on collections. A good design practice when using immutable collections is to make immutable collection properties read-only and set them in the constructor.

public Cart(IEnumerable<Item> items)
{
    Items = items.ToImmutableList();
}

public ImmutableList<Item> Items { get; private set; }

Now, in your comparison methods, you can compare instances in a thread-safe fashion and avoid creating a new instance when the incoming item list already matches:

return Object.ReferenceEquals(Items, value) ? this : new Cart(value);

It is also recommended practice to use the Create method when creating a new instance of an immutable collection, or to use the builder, whose nodes are not frozen until the ToImmutable call is made.
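A short sketch of both approaches (the values are illustrative):

```csharp
using System.Collections.Immutable;

// Create: build an immutable list directly from known values
var fruits = ImmutableList.Create("apple", "banana");

// Builder: nodes stay unfrozen until ToImmutable is called,
// so batched additions avoid intermediate allocations
var builder = ImmutableList.CreateBuilder<string>();
builder.Add("cherry");
builder.Add("date");
ImmutableList<string> list = builder.ToImmutable();

// Any "mutation" returns a new list; fruits itself is unchanged
var more = fruits.Add("elderberry");
```

The builder is the right choice when you are filling a collection in a loop; Create (or ToImmutableList) fits one-shot construction.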

