Posted by Sandeep Chanda on March 26, 2015

For many years now, Dominick Baier and his team at Thinktecture have been relentlessly pursuing the cause of providing a lightweight alternative to costly server technologies for implementing really simple claims-based identity solutions. Their IdentityServer framework has graduated into an enterprise-class identity suite, with many large corporations leveraging it for single sign-on. With the release of IdentityServer3, it is now an OWIN/Katana-based framework with hostable components to support SSO in modern web applications, supporting modern identity specifications like OpenID Connect and OAuth 2.0. It is very easy to configure IdentityServer3 in your ASP.NET MVC or Web API application.

First, you need to install the relevant NuGet packages: Microsoft.Owin.Host.SystemWeb and Thinktecture.IdentityServer3. Next, you need to set up an OWIN startup host file that replaces the ASP.NET host. You can create a Startup.cs file in your ASP.NET MVC project and call the UseIdentityServer extension method on IAppBuilder to set up IdentityServer in your OWIN host.

public void Configuration(IAppBuilder app)
{
    var options = new IdentityServerOptions
    {
        // Certificate used to sign the security tokens
        SigningCertificate = <implementation to fetch the certificate>,
        // Factory that wires up services, clients, and users
        Factory = Factory.Create()
    };

    app.UseIdentityServer(options);
}

You must also decorate the class with the OwinStartup attribute.

 [assembly: OwinStartup(typeof(<your namespace>.Startup))]

In addition, in your Web.config file you must set the runAllManagedModulesForAllRequests attribute to true to allow the IdentityServer resources to be loaded correctly.
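
For example:

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true" />
</system.webServer>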

It is also possible to specify the clients that will leverage the identity server for authentication, and the provider supplying the identity information from a user database or LDAP repository. This configures IdentityServer, and you can browse the /identity/.well-known/openid-configuration URL to discover the endpoints.
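
For example, a minimal Factory.Create implementation using the in-memory configuration helpers might look like the following sketch (the client, user, and redirect values are illustrative, and exact namespaces vary by package version):

using System.Collections.Generic;
// plus the IdentityServer core/configuration/model namespaces for your package version

public static class Factory
{
    public static IdentityServerServiceFactory Create()
    {
        // Register the client applications allowed to authenticate
        var clients = new List<Client>
        {
            new Client
            {
                ClientId = "mvc.client",          // illustrative client id
                ClientName = "Sample MVC Client",
                Flow = Flows.Implicit,
                RedirectUris = new List<string> { "https://localhost:44300/" }
            }
        };

        // A test user supplied from an in-memory store
        var users = new List<InMemoryUser>
        {
            new InMemoryUser { Username = "alice", Password = "secret", Subject = "1" }
        };

        var factory = new IdentityServerServiceFactory();
        factory.UseInMemoryClients(clients);
        factory.UseInMemoryUsers(users);
        factory.UseInMemoryScopes(StandardScopes.All);
        return factory;
    }
}

In a real deployment the in-memory users would be replaced by a provider backed by your user database or LDAP repository.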

To add OAuth 2.0 support, the IAppBuilder instance provides the UseJsonWebToken method that you can configure in your Startup.cs file:

app.UseJsonWebToken(
    issuer: ConfigurationManager.AppSettings["Issuer"],
    audience: ConfigurationManager.AppSettings["Audience"],
    signingKey: signingKey);

You are all set. You can now use the Authorize attribute on your controller actions to authorize resource access and initiate authentication with IdentityServer3. IdentityServer3 will present the login page and, based on the configured identity provider, allow you to log in to access the resource. The Authorize attribute is available out of the box in MVC. You can also use the more robust annotated resource authorization feature in IdentityServer3. To do that, install the Thinktecture.IdentityModel.Owin.ResourceAuthorization.Mvc package, and then you can start using the ResourceAuthorize attribute on your controller actions:

 [ResourceAuthorize("Read", "OrderDetails")]

You can now isolate access control, in terms of who can read the order details (in our example above), in an AuthorizationManager class that invokes the relevant check depending on the resource being accessed.

The AuthorizationManager should be registered as part of the OWIN startup configuration using the IAppBuilder UseResourceAuthorization method.
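
A minimal sketch of such a manager, assuming the base class and helpers that ship with the ResourceAuthorization package (the role, action, and resource names are illustrative):

using System.Linq;
using System.Threading.Tasks;
using Thinktecture.IdentityModel.Owin.ResourceAuthorization;

public class OrderAuthorizationManager : ResourceAuthorizationManager
{
    public override Task<bool> CheckAccessAsync(ResourceAuthorizationContext context)
    {
        // Allow "Read" on "OrderDetails" only for users in the Sales role
        if (context.Resource.First().Value == "OrderDetails" &&
            context.Action.First().Value == "Read")
        {
            return Eval(context.Principal.HasClaim("role", "Sales"));
        }

        return Nok(); // deny anything else by default
    }
}

Register it at startup with app.UseResourceAuthorization(new OrderAuthorizationManager());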


Posted by Sandeep Chanda on March 20, 2015

Recently, Scott Guthrie announced the release of ASP.NET 5 on his blog. This release features the most radical changes to the popular web development framework in its 15-year history. The preview is now available for download. According to Scott, this open source framework release focuses on making it leaner, more modular, and optimized for the cloud. It will also be highly portable, running with equal ease on Windows, Linux, and Mac systems. Among the important updates to the core framework: the .NET Core assemblies are now deployed as part of the app, allowing you to run multiple versions side by side and keeping the application independent of the framework installed on the host OS. In addition, to keep the framework lightweight, the components are available as NuGet packages so you can install only what your application requires.

Another interesting update for developer productivity in Visual Studio 2015 for ASP.NET 5 is the dynamic compilation feature, which allows changes, such as values assigned to variables, to be reflected at runtime and updated in the UI output. This will go a long way toward helping developers debug without wasting time recompiling the entire project.

ASP.NET 5 also comes with MVC 6, which will be a more unified model than previous editions, removing the redundancy across Web API, MVC, and Web Forms. This will also aid in seamless transitions from one platform to another. Syntactical improvements in Razor are a nice addition as well. You can now use a more robust declarative syntax, such as asp-validation-summary instead of @Html.ValidationSummary in your views, by virtue of the extended semantics of tags in markup.

However, the most important addition, in my opinion, is the native support for dependency injection. It provides the Activate attribute, which can be leveraged to inject services via properties, not just in controllers but in filters and views as well.
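
A minimal sketch of property injection in a controller, based on the preview-era API (IOrderService and GetRecentOrders are illustrative):

public class OrdersController : Controller
{
    // [Activate] asks the framework to inject the service via this property
    [Activate]
    public IOrderService OrderService { get; set; }

    public IActionResult Index()
    {
        // Uses the injected service; GetRecentOrders is a hypothetical method
        return View(OrderService.GetRecentOrders());
    }
}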


Posted by Sandeep Chanda on March 12, 2015

The developers at Airbnb have been working on a library named Rendr.js that allows you to pull application logic to the server using Backbone.js and Node.js, in a way that allows it to be shared with the client. This is an interesting premise in the world of JavaScript web app development. You can potentially control the HTML you want rendered in the client without requiring the client to first download the JavaScript. Underlying Rendr is an interesting mix of technologies. The first noticeable technology choice is the use of CoffeeScript. By their own admission this is controversial, but given its popularity they went ahead with it. In the controller, instead of a typical Backbone-style router, they use real controller objects to allow related actions to be grouped into manageable units.

The routes are specified in separate route files that also support optional route parameters.

# app/routes.coffee
module.exports = (match) ->
  match 'orders/:id', 'orders#show'
  match 'search',     'orders#search', orderCount: 100

Notice the optional parameter orderCount. The controller is executed in both the client and the server. In the previous route, if a user hits the resource /orders/1100, the router will look for the show action in the controller.

show: (params, callback) ->
    spec =
      model: {model: 'Order', params: params}
    @app.fetch spec, (err, results) ->
      callback(err, results) 

The fetch call provides a layer of indirection, supporting caching on the client and server as the controller actions are executed.

Another technology of note is CommonJS, the module system Node uses to require modules. The same syntax is used on the client with the help of Stitch, a library written to stitch CommonJS modules together in the browser.

The views and models extend Backbone.View and Backbone.Model, respectively. Each view is associated with a Handlebars template. A getHtml() call on the server pushes all the HTML manipulation to the Handlebars template. The client uses View#render() to invoke the getHtml() call and update the innerHTML of the template elements.

Rendr is an interesting mix of technologies and a completely new premise in developing mobile web apps using well-known JavaScript libraries and frameworks. While it is not completely open sourced yet, the promise it shows is definitely worth a mention for developing real-world apps.


Posted by Sandeep Chanda on February 19, 2015

Of all the JavaScript frameworks available today, AngularJS is probably gaining the most traction in the developer community. Given its popularity, it was about time for the Visual Studio team to natively support AngularJS, including dynamic IntelliSense while writing code in Visual Studio.

Previously, you could download the AngularJS NuGet package; however, it would only set up your solution to support Angular, with some IntelliSense coming directly from the Angular objects. What it could not do was understand Angular's way of doing dependency injection.

With the recent release of an extension by John Bledsoe, a member of the Visual Studio community, the IntelliSense experience has been greatly enhanced to simulate execution while writing code. You will see native Angular APIs, like the route provider methods, visible in Visual Studio 2013 IntelliSense, as mentioned in this post from the team.

The first thing you need to do to enable these IntelliSense features in Visual Studio 2013 is configure the Angular extension. You can download the extension from here and then copy it into your Visual Studio installation folder under Microsoft Visual Studio 12\JavaScript\References. The extension works equally well with the VS 2015 CTP, as well as with other project types, such as cross-platform mobile application development using Apache Cordova.

Once you have set up the extension, the next step would be to create an SPA application in your instance of Visual Studio 2013 and reference the Angular package.

You can now use the NuGet package explorer to search for AngularJS.Core and add a reference to the project.
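
With the extension in place, even dependency-injected services are recognized. As a quick sketch (the module, controller, and URL are illustrative), IntelliSense now understands $scope and $http inside the injected function:

var app = angular.module('storeApp', []);

app.controller('OrderController', ['$scope', '$http', function ($scope, $http) {
  $scope.orders = [];
  // IntelliSense can now offer members of the injected $http service
  $http.get('/api/orders').then(function (response) {
    $scope.orders = response.data;
  });
}]);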

You are now all set to create Angular apps and leverage the great IntelliSense features that you have always enjoyed. Note that this is only supported in Angular 1.x. AngularJS 2.0 is a paradigm shift and will require a completely new extension in the future.


Posted by Sandeep Chanda on February 9, 2015

React.js is the breakthrough UI Framework from Facebook that is used to build composable UI components for large scale applications.

It was about time that the Facebook and Instagram teams shared with the larger community their experience in building large scale web applications, and more precisely, the lessons they learned in production. This is the year of Facebook releasing its own dog food to the developer community in the form of a declarative, highly composable JavaScript UI framework called React.js. The most fascinating aspect of React.js is that, at its core, it is designed to react to underlying data changes and is aware enough to update only the changed parts. This idea is powerful for large scale applications with frequent data changes.

To get started, you can download the React.js framework starter kit from here. Once you have downloaded the framework, you can start using the resources to create your React.js UI in any of your favorite editors, such as Sublime Text or Notepad++. The framework uses the concept of a virtual DOM diff for high performance and optionally supports a new XML notation for creating JS objects using HTML syntax (JSX). The first step in using the React.js framework in your application is to reference the react.js file in your HTML5 UI.

 <script src="build/react.js"></script>

JSX is a clean way of separating a template from the display logic. While it is not required, it keeps the UI component code clean. At its core, React.js uses the render function to perform the DOM mutation. You need to reference the JSX transform libraries if you want to use the declarative JSX syntax.

 <script src="build/JSXTransformer.js"></script>

Now you can use the JSX syntax to write code in React DOM. You can write the React code inside your HTML template as shown here or in a separate JS file.

 <body>
    <div id="getTransformed"></div>
    <script type="text/jsx">
    </script>
</body>

The React code to transform the getTransformed div will look like:

 var Transformed = React.createClass({
    render: function() {
        return <div>I am {this.props.name}</div>;
    }
});
React.render(<Transformed name="Transformed!" />, document.getElementById('getTransformed'));

You can practice writing React code in JSFiddle here and see the UI get transformed in the console.


Posted by Sandeep Chanda on January 26, 2015

With asynchronous programming and TPL gaining popularity amongst .NET developers, the general availability of a stable release of the Immutable Collections feature has found a strong audience, especially in the world of cross-platform app development, where a lot of the data transfer code is wrapped in portable class libraries and shared across different multi-threaded clients in the form of Windows 8 Store or WPF applications. What is interesting is that the immutable collections are not part of the core framework. Instead, they must be installed separately using the Microsoft.Bcl.Immutable package. Note that there have been recent additions to the package that are still in preview (such as the addition of ImmutableArray<T>), so to explore the newly added members you must select the pre-release option instead of the stable version in the package manager or console.

Install-Package Microsoft.Bcl.Immutable -Pre

Earlier thread-safe patterns (from the early TPL days) promoted the use of concurrent collections, which are a definite alternative to immutable collections, albeit an expensive one. Concurrent collections internally use expensive locking mechanisms for thread safety that immutable collections are able to avoid. It is interesting to note that internal nodes in immutable collections are not immutable while the collection is being constructed, to reduce garbage.

Another important thing to note is that immutable collections only offer reference equality to avoid expensive computations of value equality on collections.

Once you have installed the NuGet package, you can start using immutable collections in your application or library. You will also notice the ToImmutableList() extension on collections. A good design practice is to make immutable collection properties read-only and set them in the constructor.

public class Cart
{
    public Cart(IEnumerable<Item> items)
    {
        Items = items.ToImmutableList();
    }

    public ImmutableList<Item> Items { get; private set; }
}

Now, in your comparison methods, you can compare the instances and avoid creating a new Cart in case of a match, in a thread-safe fashion.

return Object.ReferenceEquals(Items, value) ? this : new Cart(value);

It is also recommended practice to use the Create method when creating a new instance of an immutable collection, or to use the builder, in which nodes are not frozen until the ToImmutable call is made.
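
A quick sketch of both approaches (the values are illustrative):

using System.Collections.Immutable;

// Create an immutable list directly from known values...
var tags = ImmutableList.Create("electronics", "sale");

// ...or build one incrementally; internal nodes are not frozen
// until ToImmutable() is called.
var builder = ImmutableList.CreateBuilder<string>();
builder.Add("clearance");
builder.Add("featured");
ImmutableList<string> tagList = builder.ToImmutable();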


Posted by Sandeep Chanda on January 13, 2015

Last week the OData team at Microsoft released a preview of the RESTier framework, built to allow developers to quickly bootstrap an OData service implementation. In fact, as claimed in this MSDN blog post, it takes developers fewer than 100 lines of code in one controller to build a standardized OData V4 service with rich domain logic.

RESTier is based on Web API OData. It is inspired by the simplicity of building WCF Data Services and the flexibility of using Web API. It likely fills a much-needed, and long ignored, void in the data services ecosystem, allowing developers to focus on their all-important business logic.

Note that the framework is in preview, so you must install it using the NuGet console; it will not appear in the package explorer. To install RESTier in your SPA, Web API, or MVC project, open the NuGet console and run the following command:

PM> Install-Package Microsoft.Restier -Pre

You can now start creating the Domain files based on the associated data providers. Note that in its current form, RESTier only supports EF 6 as the data provider. So if your project has an Order repository, with the database context as OrderContext, then you can create an OrderDomain class with the following code:

using Microsoft.Restier.EntityFramework;

public class OrderDomain : DbDomain<OrderContext>
{
    public OrderContext Context 
    { 
        get { return DbContext; } 
    }
}

For the controller, you can add an OrderController.cs class under the Controllers folder that inherits from the ODataDomainController class, part of the RESTier Web API namespace.

public class OrderController : ODataDomainController<OrderDomain>
{
    private OrderContext DbContext
    {
        get { return Domain.Context; }
    }
}

To complete the bootstrapping process, you also need to register the OData endpoint in your WebApiConfig.cs file (for a Web API project).

config.MapODataDomainRoute<OrderController>(
    "OrderApi", "api/Order",
    new ODataDomainBatchHandler(server));

All set. Run the application and browse http://localhost/api/Order and you will see the entity set. You can now run the familiar OData commands for CRUD operations.
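
For instance, assuming the context exposes an Orders entity set (the set name is illustrative), requests like the following work against the new endpoint:

GET http://localhost/api/Order/Orders          (list the entity set)
GET http://localhost/api/Order/Orders(1)       (fetch a single order by key)
GET http://localhost/api/Order/Orders?$top=5   (standard OData query options)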


Posted by Sandeep Chanda on January 5, 2015

While Team Foundation Server now also supports local workspaces, where you can check out files and track changes without a read-only lock on them, there may be challenges that prevent you from using the local workspace mode in TFS. You may already be in server-connected mode and not want to make your entire workspace local just for a few artifacts in your solution. This scenario becomes very realistic when you have files tracked by TFS that are not recognized by the Visual Studio ecosystem.

This could also be particularly useful for tracking the history of changes in documents checked into TFS. In any case, git gives you a lot of flexibility in cloning a TFS branch and creating a local workspace that you can manipulate, without having to worry about tracking the history, and then checking the changes back into TFS when you are done. This is done using the git-tfs tool, which acts as a two-way bridge between TFS and git. You can download the tool from the site and then use the git shell to execute the commands.

  1. To create a clone, use the command git tfs clone [TFS Branch URL] [Local Workspace Folder]
  2. To get the latest, use the command git tfs pull
  3. To check in, use git tfs checkin

Before you can successfully check in to TFS and merge the changes from your local repository, you must issue additional git commands to ensure the changes are recognized as part of your local git repository.

To verify the status of pending changes, use git status. To include the changes as part of your commit, use git add and then git commit.
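
Put together, a typical round trip from the git shell looks like this (the commit message is illustrative):

git status                     # review pending changes
git add .                      # stage the changed files
git commit -m "Update docs"    # commit to the local repository
git tfs checkin                # check the commit into TFS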

If you are not a big fan of the git command shell and want a GUI for managing the local repository, you can also use the Visual Studio client for git, available in Team Explorer if you are using Visual Studio 2013. You can use it to clone a repository from TFS and then start syncing your changes from the local repository to TFS. Once you have made changes to the local repository, they will show up in the explorer, and you can use the commit command to commit your changes. Once you have committed, use the sync command to publish your changes back to TFS.

This provides an interesting alternative to using local workspaces in TFS, and you can create as many local workspaces as you need!


Posted by Sandeep Chanda on December 22, 2014

Last week the Power BI team at Microsoft announced the availability of new features in Power BI for general preview. Power BI is Microsoft's attempt at creating a self-service business intelligence solution, aimed at providing decision makers with the ability to create their own dashboards from raw data and run powerful analytical queries.

Apart from new dashboard and visualization features, the new release will also preview hybrid connectivity to on-premises SQL Server Analysis Services (SSAS) tabular model instances. In line with targeting customers in the Apple world, it is also releasing a Power BI iPad app.

A whole new set of visualization elements has been added to enhance the overall experience of analyzing data from different sources. You can now configure dashboards to display different types of charts, such as funnel, gauge, tree map, and combo charts. The dashboard also allows you to fetch data from both cloud and on-premises data sources and create a more unified analytical view, augmented by natural language queries to narrow down the result set you want analyzed.

The most interesting feature, in my opinion, is the live query feature. Power BI now allows you to connect to an on-premises source of analytics, like SSAS cubes, and when analyzing the data it can run the query directly using your credentials. This saves you from migrating your on-premises data to the cloud for analysis.

Power BI reports and dashboards can be designed and shared using the Power BI Designer. It allows you to model data not only from your custom data sources, but also has a host of adapters to connect with popular SaaS applications like Salesforce, Zendesk, and GitHub, amongst others.


Posted by Sandeep Chanda on December 12, 2014

ECMAScript 6, the latest version of the ECMAScript standard, adds a host of new features to JavaScript. Many of the new features make JavaScript syntactically more similar to C# and CoffeeScript. For example, a function shorthand is introduced in the form of arrows (=>):

iterateItemsInCollection() {
  this.coll.forEach(f => {
    // operate on item f
  });
}

Another noticeable feature is the class construct. Classes are based on the prototype-based object-oriented pattern and support inheritance, methods (static and instance), and constructors.
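
A minimal sketch of the class construct (the names are illustrative):

class Order {
  constructor(id) {
    this.id = id;
  }

  describe() {                  // instance method
    return 'Order ' + this.id;
  }

  static from(id) {             // static method
    return new Order(id);
  }
}

class RushOrder extends Order {}  // inheritance via extends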

The most interesting new feature, however, is the introduction of Object.observe(). Note that Object.observe() is going to be available only with ES7.

Object.observe() is touted to be the future of data binding. A new addition to the JavaScript API, Object.observe() lets you asynchronously observe changes (adds, updates, deletes) to a JavaScript object. That means you can implement two-way data binding using Object.observe() without using any frameworks (Knockout, Angular, and so on). Does this mean Object.observe() is set to obliterate frameworks such as Knockout, Angular, and Ember? Not really. If it's just data binding, you can obviously use Object.observe(), but most frameworks today provide more than just data binding (e.g., routing, templates, etc.) and may be indispensable for bigger, complex projects.
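
A quick sketch based on the proposed API (the model object is illustrative):

var model = { name: 'Widget', price: 10 };

Object.observe(model, function (changes) {
  changes.forEach(function (change) {
    // Each change record carries a type ('add', 'update', 'delete'),
    // the property name, and the observed object.
    console.log(change.type, change.name, change.object[change.name]);
  });
});

model.price = 12;  // asynchronously reports: update price 12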

Object.observe() brings a great performance boost over the traditional dirty checking used to implement data binding. Testimony to this fact is that the Angular team has decided to rewrite the data binding implementation in Angular 2.0 using Object.observe(). As an API with great promise, Object.observe() is surely going to change a lot in the way we build client apps. You can read more here.

