Posted by Sandeep Chanda on February 19, 2015

Of all the JavaScript frameworks available today, AngularJS is probably gaining the most traction in the developer community. Given its popularity, it was about time for the Visual Studio team to natively support AngularJS, including dynamic IntelliSense while writing code in Visual Studio.

Previously, you could download the AngularJS NuGet package; however, it would only set up your solution to support Angular, with some IntelliSense coming directly from the Angular objects. What it could not do was understand Angular's way of doing dependency injection.

In the recent release of an extension by John Bledsoe, a member of the Visual Studio community, the IntelliSense experience has been greatly enhanced to simulate execution while you write code. You will see native Angular APIs, such as the route provider methods, appear in Visual Studio 2013 IntelliSense, as mentioned in this post from the team.

The first thing you need to do to enable these IntelliSense features in Visual Studio 2013 is to configure the Angular extension. You can download the extension from here and then copy it into your Visual Studio installation folder under Microsoft Visual Studio 12\JavaScript\References. The extension works equally well with the VS 2015 CTP, and with other project types, such as cross-platform mobile application development using Apache Cordova.

Once you have set up the extension, the next step is to create an SPA application in your instance of Visual Studio 2013 and reference the Angular package.

You can now use the NuGet package explorer to search for Angular.Core and add a reference to the project.
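
With the reference in place, the extension can simulate Angular's dependency injection as you type. Here is a minimal sketch of the kind of code that now gets full IntelliSense (the module, route, and controller names are illustrative):

var app = angular.module('storeApp', ['ngRoute']);

// IntelliSense now resolves injected services such as $routeProvider by name
app.config(function ($routeProvider) {
    $routeProvider
        .when('/orders', { templateUrl: 'orders.html', controller: 'OrdersController' })
        .otherwise({ redirectTo: '/' });
});

app.controller('OrdersController', function ($scope, $http) {
    // $scope and $http are injected by name as well
    $scope.orders = [];
});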

You are now all set to create Angular apps and leverage the IntelliSense features you have always enjoyed. Note that this is only supported in Angular 1.x. AngularJS 2.0 is a paradigm shift and will require a completely new extension in the future.


Posted by Sandeep Chanda on February 9, 2015

React.js is the breakthrough UI framework from Facebook that is used to build composable UI components for large-scale applications.

It was about time that the Facebook and Instagram teams shared their experience in building large-scale web applications with the larger community; more precisely, the lessons they learned in production. This is the year Facebook released its own dog food to the developer community in the form of a declarative, highly composable JavaScript UI framework called React.js. The most fascinating aspect of React.js is that, at its core, it is designed to react to underlying data changes and is aware enough to update only the changed parts. This idea is powerful for large-scale applications with frequent data changes.

To get started, you can download the React.js framework starter kit from here. Once you have downloaded the framework, you can start using the resources to create your React.js UI in any of your favorite editors, such as Sublime Text or Notepad++. The framework uses the concept of a virtual DOM diff for high performance and optionally supports JSX, an XML notation for creating JS objects using HTML syntax. The first step in using the React.js framework in your application is to reference the react.js file in your HTML5 UI.

 <script src="build/react.js"></script>

JSX is a clean way of separating a template from the display logic. While it is not required, it keeps the UI component code clean. At its core, React.js uses the render function to perform the DOM mutation. You need to reference the JSX transformer library if you want to use the declarative JSX syntax.

 <script src="build/JSXTransformer.js"></script>

Now you can use the JSX syntax to write your React code. You can write it inside your HTML template, as shown here, or in a separate JS file.

 <body>
    <div id="getTransformed"></div>
    <script type="text/jsx">
    </script>
</body>

The React code to transform the getTransformed div will look like:

 var Transformed = React.createClass({
    render: function() {
        return <div>I am {this.props.name}</div>;
    }
});
React.render(<Transformed name="Transformed!" />, document.getElementById('getTransformed'));
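
To see the framework react to underlying data changes, as described earlier, give a component some state. Calling setState triggers a re-render, and React updates only the changed parts. A minimal sketch (the Counter component is illustrative):

var Counter = React.createClass({
    getInitialState: function() {
        return { count: 0 };
    },
    increment: function() {
        // setState schedules a re-render; React diffs the virtual DOM
        // and touches only the parts of the real DOM that changed
        this.setState({ count: this.state.count + 1 });
    },
    render: function() {
        return <button onClick={this.increment}>Clicked {this.state.count} times</button>;
    }
});
React.render(<Counter />, document.getElementById('getTransformed'));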

You can practice writing React code in JSFiddle here and see the UI get transformed in the console.


Posted by Sandeep Chanda on January 26, 2015

With asynchronous programming and TPL gaining popularity amongst .NET developers, the generally available, stable release of the Immutable Collections feature has found a strong audience, especially in cross-platform app development, where a lot of the data transfer code is wrapped in portable class libraries and shared across different multi-threaded clients in the form of Windows 8, Store, or WPF applications. Interestingly, the immutable collections are not part of the core framework; instead, they must be installed separately using the Microsoft.Bcl.Immutable package. Note that there have been recent additions to the package that are still in preview (such as the addition of ImmutableArray<T>), so to explore the newly added members, you must select the pre-release option instead of the stable version in the package manager or console.

Install-Package Microsoft.Bcl.Immutable -Pre

Earlier thread-safe patterns (from the early TPL days) promoted the use of concurrent collections, which are a definite alternative to immutable collections, but an expensive one. Concurrent collections internally use expensive locking mechanisms for thread safety that immutable collections are able to avoid. It is interesting to note that the internal nodes of an immutable collection are not themselves immutable while the collection is being constructed, which reduces garbage.

Another important thing to note is that immutable collections only offer reference equality, to avoid the expensive computation of value equality on collections.

Once you have installed the NuGet package, you can start using immutable collections in your application or library. You will also notice the ToImmutableList() extension on collections. A good design practice is to make immutable collection properties read-only and set them in the constructor.

public Cart(IEnumerable<Item> items)
{
    Items = items.ToImmutableList();
}

public ImmutableList<Item> Items { get; private set; }

Now, in your comparison methods, you can compare instances in a thread-safe fashion and avoid creating a new instance of Item when there is a match.

return Object.ReferenceEquals(Items, value) ? this : new Item(value); 

Recommended practice is also to use the Create method when creating a new instance of an immutable collection, or to use the builder, whose nodes are not frozen until the ToImmutable call is made.
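
A minimal sketch of both approaches (the element values are illustrative):

using System.Collections.Immutable;

// Create an immutable list in one shot
var list = ImmutableList.Create(1, 2, 3);

// Or use a builder: nodes are not frozen until ToImmutable is called
var builder = ImmutableList.CreateBuilder<int>();
builder.Add(4);
builder.Add(5);
ImmutableList<int> frozen = builder.ToImmutable();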


Posted by Sandeep Chanda on January 13, 2015

Last week the OData team at Microsoft released a preview of the RESTier framework, built to let developers quickly bootstrap an OData service implementation. In fact, as claimed in this MSDN blog post, it takes developers fewer than 100 lines of code in one controller to build a standardized OData V4 service with rich domain logic.

RESTier is based on Web API OData. It is inspired by the simplicity of building WCF Data Services combined with the flexibility of using Web API. It fills a much-needed, long-ignored void in the data services ecosystem, allowing developers to focus on their all-important business logic.

Note that the framework is in preview, so you must install it using the NuGet console; it will not appear in the package explorer. To install RESTier in your SPA, Web API, or MVC project, open the NuGet console and run the following command:

PM> Install-Package Microsoft.Restier -Pre

You can now start creating the domain files based on the associated data providers. Note that in its current form, RESTier supports only EF 6 as the data provider. So, if your project has an Order repository with OrderContext as the database context, you can create an OrderDomain class with the following code:

using Microsoft.Restier.EntityFramework;

public class OrderDomain : DbDomain<OrderContext>
{
    public OrderContext Context 
    { 
        get { return DbContext; } 
    }
}

For the controller, you can add an OrderController.cs class under the Controllers folder that inherits from the ODataDomainController class, part of the RESTier Web API namespace.

public class OrderController : ODataDomainController<OrderDomain>
{
    private OrderContext DbContext
    {
        get { return Domain.Context; }
    }
}

To complete the bootstrapping process, you also need to register the OData endpoint in your WebApiConfig.cs file (for a Web API project).

config.MapODataDomainRoute<OrderController>(
    "OrderApi", "api/Order",
    new ODataDomainBatchHandler(server));

All set. Run the application and browse http://localhost/api/Order and you will see the entity set. You can now run all the familiar OData commands for CRUD operations.
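
For instance, assuming the entity set is exposed as Orders, the standard OData query options work against the new endpoint (the URLs below are illustrative):

GET http://localhost/api/Order/Orders?$top=10
GET http://localhost/api/Order/Orders(1)
GET http://localhost/api/Order/Orders?$filter=Total gt 100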


Posted by Sandeep Chanda on January 5, 2015

While Team Foundation Server now also supports local workspaces, where you can check out files and track changes without holding a read-only lock on them, there can be challenges that prevent you from using the local workspace mode in TFS. You may already be in server-connected mode and unwilling to make your entire workspace local just for a few artifacts in your solution. This scenario becomes very realistic when you have files tracked by TFS that are not recognized by the Visual Studio ecosystem.

This can also be particularly useful for tracking the history of changes in documents checked into TFS. In any case, Git gives you a lot of flexibility in cloning a TFS branch and creating a local repository that you can manipulate, without having to worry about tracking the history, and then checking the changes back into TFS when you are done. This is done using the git-tfs tool, which acts as a two-way bridge between TFS and Git. You can download the tool from the site and then use the Git shell to execute the commands.

  1. To create a clone, use the command git tfs clone [TFS Branch URL] [Local Workspace Folder]
  2. To get the latest changes, use the command git tfs pull
  3. To check in, use the command git tfs checkin

Before you can successfully check in to TFS and merge the changes from your local repository, you will have to issue additional Git commands to ensure the changes are recognized as part of your local Git repository.

To verify the status of pending changes, you can use git status. To include the changes as part of your commit, use git add and then git commit.
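
A typical round trip looks like this (the commit message is illustrative):

git status
git add .
git commit -m "Update tracked documents"
git tfs checkin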

If you are not a big fan of the Git command shell and want a GUI for managing the local repository, you can also use the Visual Studio client for Git, available in Team Explorer if you are using Visual Studio 2013 Premium. You can use it to clone a repository from TFS and then start syncing your changes from the local repository to TFS. Once you have made changes to the local repository, they will show up in Team Explorer, and you can use the commit command to commit them. Once you have committed, use the sync command to publish your changes back to TFS.

This provides an interesting alternative to using local workspaces in TFS, and you can create as many local workspaces as you need!


Posted by Sandeep Chanda on December 22, 2014

Last week the Power BI team at Microsoft announced the availability of new features in Power BI for general preview. Power BI is Microsoft's attempt at creating a self-service business intelligence solution, aimed at giving decision makers the ability to create their own dashboards from raw data and run powerful analytical queries.

Apart from new dashboard and visualization features, the new release also previews hybrid connectivity to tabular models in on-premises SQL Server Analysis Services (SSAS) instances. In line with targeting customers in the Apple world, the team is also releasing a Power BI iPad app.

A whole new set of visualization elements has been added to enhance the overall experience of analyzing data from different sources. You can now configure dashboards to display different types of charts, such as funnel, gauge, tree map, and combo charts. The dashboard also allows you to fetch data from both cloud and on-premises data sources and create a more unified analytical view, augmented by natural language queries to narrow down the result set you want analyzed.

The most interesting feature, in my opinion, is live query. Power BI now allows you to connect to an on-premises source of analytics, such as SSAS cubes, and run queries directly against it using your credentials. This saves you from migrating your on-premises data to the cloud for analysis.

Power BI reports and dashboards can be designed and shared using the Power BI Designer. It not only allows you to model data from your custom data sources, but also has a host of adaptors to connect with popular SaaS applications such as Salesforce, Zendesk, and GitHub, among others.


Posted by Sandeep Chanda on December 12, 2014

ECMAScript 6, the latest version of the ECMAScript standard, adds a host of new features to JavaScript. Many of them are geared toward making JavaScript syntactically similar to C# and CoffeeScript. For example, function shorthand is introduced in the form of arrows (=>).

 iterateItemsInCollection() {
    this.coll.forEach(f => {
        // operate on item f
    });
 }

Another noticeable feature is the class construct. Classes are based on the prototype-based object-oriented pattern and support inheritance, methods (static and instance), and constructors.
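
A minimal sketch of the class syntax (the Shape and Circle names are illustrative):

class Shape {
    constructor(id) {
        this.id = id;
    }
    // instance method
    describe() {
        return 'Shape ' + this.id;
    }
    // static method
    static create(id) {
        return new Shape(id);
    }
}

// inheritance via extends, with super calling the base constructor
class Circle extends Shape {
    constructor(id, radius) {
        super(id);
        this.radius = radius;
    }
}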

The most interesting new feature, however, is the introduction of Object.observe(). Note that Object.observe() is going to be available only with ES7.

Object.observe() is touted to be the future of data binding. A new addition to the JavaScript API, Object.observe() lets you asynchronously observe changes (adds, updates, deletes) to a JavaScript object. That means you can implement two-way data binding with Object.observe() without using any framework (KO, Angular…). Does this mean Object.observe() is set to obliterate frameworks such as Knockout, Angular, and Ember? Not really. If all you need is data binding, you can certainly use Object.observe(), but most frameworks today provide much more than data binding (e.g., routing, templates, etc.) and may be indispensable for bigger, more complex projects.
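
A minimal sketch of the API as proposed (the model object is illustrative):

var model = { name: 'DevX' };

// The callback fires asynchronously with a batch of change records
Object.observe(model, function (changes) {
    changes.forEach(function (change) {
        console.log(change.type, change.name, change.oldValue);
    });
});

model.name = 'DevX Daily'; // reported as an 'update' change
model.views = 42;          // reported as an 'add' change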

Object.observe() brings a great performance boost over the traditional dirty checking used to implement data binding. Testimony to this is that the Angular team has decided to rewrite the data binding implementation in Angular 2.0 using Object.observe(). As an API with great promise, Object.observe() is surely going to change a lot about the way we build client apps. You can read more here.


Posted by Sandeep Chanda on December 4, 2014

The web has come a long way from the days when its standards were largely controlled by two major browser makers, Netscape and Microsoft, who dictated web standards based on what their browsers supported with every new release. New players (Firefox, Chrome, Opera) entered the web battleground and helped move the web towards standardization. The tipping point was reached when the web stack got a major upgrade in the form of HTML5, CSS3, and additions to the JavaScript API. The future looks bright for web development, with amazing new features being added to the web platform. Here are some of the prominent features that web developers will be able to use in the not-so-distant future:

Web Components — Web Components is a set of cutting-edge standards that allows us to build widgets for the web with full encapsulation. The encapsulation inherent in Web Components solves the fundamental problem with widgets built out of HTML and JavaScript: the widget isn't isolated from the rest of the page and its global styles. The main parts comprising Web Components are:

  1. HTML Templates — Reusable templates/HTML fragments. Read more.
  2. Shadow DOM — This is what enables encapsulation for a section of an HTML page or widget, by introducing a new element (known as the 'shadow root') into the DOM tree. Read more.
  3. Custom Elements — Custom elements are probably the best new thing available out of the box for web developers, as they allow us to define new HTML elements without the need for an external library (if you've used AngularJS, think custom directives). With custom elements you can have something like the following in your HTML (a registration sketch follows this list):
    <contoso-timeline></contoso-timeline>

    Assuming contoso is the namespace under which your timeline control goes, that one line encapsulates the complete functionality of the timeline control. Neat! Read about custom elements in more detail.
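
Registering such an element with the v0 Custom Elements API found in current browser builds might look like this (a minimal sketch; the element's behavior here is illustrative):

var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function () {
    // runs whenever a <contoso-timeline> instance is created
    this.textContent = 'Timeline goes here';
};
document.registerElement('contoso-timeline', { prototype: proto });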

Google has released an amazing library called Polymer that is worth a look. It's essentially a polyfill for web components.

Not all browsers support Web Components today; the latest builds of Chrome and Opera do. You can check which features are supported by which browsers at caniuse.com.

To get a sense of what is coming in the future releases of each browser, you can check the respective bleeding-edge versions: Chrome Canary, Firefox Nightly, Opera Next, and IE.


Posted by Sandeep Chanda on November 28, 2014

Earlier this month Google released its Material Design specification. Material Design aims for a synergy between the principles of good design and modern techniques in science, creating a specification that provides a seamless experience to users across devices. The Material Design specification is a living document and continues to be updated on a regular basis. The inspiration behind the design elements is real-life cues, such as surfaces, and first-class input methods such as touch, voice, and type. The specification provides unified guidance on animation aspects such as responsive interaction, style elements such as color and typography, and layout principles. It also provides guidelines on UI components such as text fields, buttons, and grids.

Material UI is a Less-based CSS framework built on Google's Material Design specification. It is available as a Node package. Material UI provides the markup for various UI components, as well as Less variables for the color palette and classes for typography. You can use the NPM console to install the Node module and then use the React library to start building your UI. In addition, you can use Browserify to perform the JSX transformation.

Here is the command to install Material UI:

npm install material-ui

Now that you have installed the Material UI in your application, you can start coding your UI using the components.

The first step is to bring in the React and Material UI modules with require:

var react = require('react'),
  materialUI = require('material-ui'),
  LeftMenu = materialUI.Menu; 

Next, you create a React class to render the Material UI component:

var NavigationMenu = react.createClass({
  render: function() {
    return (
        <LeftMenu menuItems={this.props.items} />
    );
  }
});

Finally, you can render the UI component using the React.render function.

React.render(<NavigationMenu items={ labelMenuItems } />, mountNode);

You are all set to create some great UI elements using the Material Design specifications and Material UI!


Posted by Jason Bloomberg on November 25, 2014

Perhaps you have noticed that my Twitter handle is @theebizwizard. You may have even noticed that my Skype handle is as well. It occurred to me that I’ve never told the story about that handle. I’d say it’s about time.

Ebusiness, as we gray-haired geezers are happy to relate, is a hype term from the crazy dot.com period of the late 1990s. The e prefix stands for electronic, as in e-mail. Well, if we can electronic-ify our mail into email, and we can electronic-ify our commerce into ecommerce, let's take the next step and do the same thing with our entire business!

The core idea made perfect sense. Even as long ago as the 20th century, many businesses became increasingly dependent on their IT, and in some cases, we could even say that their business was IT. Take banking, for example. Money was no longer cash in a drawer, it was bits on a wire. Everything a bank does touches its technology.

But then two things happened to this essentially good idea. First, the pundits and vendors took the term and hype-ified it, essentially turning ebusiness into dot.com insanity, the enterprise version. Ebusiness rode the wave up to the top and predictably crashed along with the rest of the dot.com nonsense.

The second thing that happened to ebusiness, however, is more important. We finally realized that even ebusinesses aren’t made up entirely of technology. In reality, businesses are made up of people, just as they have been since the dawn of commerce back in the stone ages. Technology only gives us tools – increasingly powerful tools, but tools nevertheless.

What about Wizard?

There’s more to the theebizwizard story, however – the word wizard. This story begins in 1996, when my first wife and I were thinking about buying a mansion in Pittsburgh and turning it into a bed and breakfast. We never ended up buying the mansion, but I did buy the domain name rhodes.com, as a fellow named Rhodes originally owned the mansion.

It may be hard to believe now, but in those days it was bad form to own domain names you weren’t using, so I turned rhodes.com into my personal web site. For my email address I concocted wizard@rhodes.com – to indicate I could have selected any email address, not to indicate any kind of magical ability on my part.

For the next eleven years my personal email was wizard@rhodes.com and my personal web site was at www.rhodes.com, where I hosted my JavaScript and Java games. People would periodically ask to buy the domain or flame me for owning it, but I resisted, setting what I thought was an outrageously high price.

But to my surprise, in 2006 someone agreed to my price – $50,000. For a domain name! It seemed the dot.com craziness hadn't entirely gone away after all. (The buyer put up a tourist site promoting Rhodes, Greece, until they went out of business. To this day www.rhodes.com is still on the market.)

So, to make a long story, well, even longer, when it came time to create a Twitter handle, I put together ebusiness and wizard – not to indicate I was a wizard at ebusiness (although thank you for thinking that!) but rather as a reminder of the two sides of hype. Yes, ebusiness as a term came and went, but the fact I was able to sell a domain name for more than I made in two years as a high school teacher shows the true power of hype if you know how to use it (or if you just get lucky).

Now It’s Digital

In the 2000s, I successfully rode the SOA hype term up and then down. Today I’m doing the same thing with digital transformation. I’m the first to admit this new term – especially the word digital – has its flaws, but it represents a set of very powerful and important ideas nevertheless.

In fact, we’ve come full circle with this story, as digital business more or less means the same thing as ebusiness. Admittedly, today digital technologies include mobile and social media, where ebusiness focused primarily on the web, so the technology story today is noisier and more confusing. But the fundamental principles still remain: in particular, the fact that digital is about people.

Today, the fundamental driving force behind digital transformation is the shifting preferences and behaviors of users – either consumers or employees, who, after all, are also consumers. Consumers demand multiple technology touchpoints, and it is for that reason (and only that reason) that digital is about technology at all.

Nevertheless, people are missing this fundamental point, just as they did with ebusiness back in the day. It seems that every day I spot yet another digital consultant or digital analyst hanging out their shingle, promising to help companies with their digital strategies. And what do those strategies consist of? Mobile strategies. Twitter strategies. Even web strategies. The list goes on and on. In other words, technology strategies.

What about customer strategies? Not sexy enough. Businesses have needed customers since the dawn of time, after all. What’s new or exciting about that? Today, people are confused about mobile and social media and don’t even get me started about the Internet of Things. Those are the hot topics! And hot topics are where the money is!

Well, yes and no. Yes, there’s money in hype – that’s the lesson wizard@rhodes.com taught me, after all. But never forget that people are – and will always be – at the center of business. Even ebusiness, or now, digital business.

Avoid Shiny Things

The reason it’s so easy to miss this fundamental principle is due to what I like to call the shiny things problem. People like shiny things, after all. Why? Because they’re shiny. The shinier the better. And if something is really shiny, you’ll forget all about what problem you’re trying to solve.

Techies frequently fall for shiny things. It seems that every new programming language or open source product is the next shiny thing. Ooh, Haskell! We gotta program in Haskell! Docker! We gotta use Docker! Etc. Etc. Believe it or not, SOA was a shiny thing in its day. I spent half a day in my SOA class trying to convince a roomful of architects that if something ain’t broke, don’t fix it. Don’t do SOA because it’s shiny!

Now digital transformation is the next shiny thing. People want it because, well, because it’s shiny! My advice: don’t fall for the shininess trap. Sometimes you do really need shiny things, and then by all means, go for it. But start with the problem you’re trying to solve. In the case of digital transformation, that starting point always centers on the customer, not the technology.

