Posted by Sandeep Chanda on June 29, 2016

With most SharePoint development now focused on the Client Side Object Model (CSOM), community guidance documenting the best practices for using the model from JavaScript was long overdue. The Office 365 Developer Patterns and Practices team has recently announced the release of a JavaScript Core Library that packages some of the common practices and accelerates SharePoint development using client-side technologies.

The library provides fluent APIs for performing CSOM operations. In addition, it supports the ES6 promise specification for chaining asynchronous operations. The library works equally well inside a SharePoint Script Editor Web Part and with a module loader like requirejs.

To configure the library, first add the Node.js package to your project using npm:

npm install sp-pnp-js --save-dev

Once you have installed the package, you can import the root object and start interacting with the API. You can also leverage the API from within a Visual Studio TypeScript project: first add the requirejs NuGet package, then use the module loader to load the pnp library.

Here is the requirejs code illustrating the module dependencies:

require(["jquery", "pnp", "fetch", "es6-promise.min"], function ($, app) {

    $(function () {
        app.render($("#content"), {
            "jquery": $
        });
    });

});

You will notice that, apart from the module dependencies for jquery and the app launcher, there are additional dependencies on the fetch and es6-promise modules. The fetch polyfill supports making cross-origin request/response calls against an API, and the es6-promise library lets you chain requests using the promise style of programming in JavaScript.

Here is a sample app code leveraging the pnp module:

import pnp from "pnp";

class App {

    render(element: HTMLElement, preloadedModules: any[]) {

        let $ = preloadedModules["jquery"];

        // get() returns a promise, so append the Web title once it resolves
        pnp.sp.web.select("Title").get().then(w => {
            $(element).append(`Web title: ${w.Title}`);
        });
    }
}

You can also leverage the promise style as shown in the example below:

pnp.sp.crossDomainWeb().select("Title").get().then(function (result) {
    // perform further operations on result
});


Posted by Gigi Sayfan on June 28, 2016

Xamarin creates mobile app development tools built on top of the Mono Project. Xamarin has arguably always provided the most polished cross-platform development environment, but it was pretty pricey. Recently Microsoft acquired Xamarin and, in its new spirit of openness, made Xamarin free. That means it costs developers nothing, and you can also look at the code and even contribute if you're so inclined.

There are still some services you need to pay for, such as Xamarin Test Cloud and training through Xamarin University. But those are extras most developers and organizations can do without, and the organizations that do require them can usually afford to pay for them.

Why is it such a big deal? Xamarin provides a mature, well-thought-out and well-engineered solution for cross-platform app development.

With Xamarin, you develop in C# and have the power of the .NET Framework behind you. Xamarin does the heavy lifting of translating your C# code to the native mobile OS. You can target iOS, Android and, of course, Windows Phone. Xamarin provides an interesting mix of approaches: you get cross-platform capability with Xamarin.Forms, which gives you a native look and feel, and you can also get full access to each target platform's capabilities using Xamarin.iOS and Xamarin.Android. The main benefit is that you can start prototyping, and even begin actual development, quickly for all supported platforms using Xamarin.Forms, knowing that if you do need to write low-level platform-specific code, that route is always open to you and will integrate cleanly with the cross-platform code.
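To give a flavor of the shared-code approach, here is a minimal Xamarin.Forms sketch; the page and its contents are purely illustrative, and the same C# renders as native controls on each platform:

using Xamarin.Forms;

// A single shared page: Xamarin.Forms maps these controls to their
// native counterparts on iOS, Android and Windows Phone at runtime.
public class GreetingPage : ContentPage
{
    public GreetingPage()
    {
        Content = new StackLayout
        {
            VerticalOptions = LayoutOptions.Center,
            Children =
            {
                new Label { Text = "Hello from shared C#", HorizontalOptions = LayoutOptions.Center },
                new Button { Text = "Tap me" }
            }
        };
    }
}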


Posted by Sandeep Chanda on June 22, 2016

While there are several scenarios that may require you to run .NET code from within Node.js, such as programming against a Windows-specific interface or running a T-SQL query, there are also scenarios where you might have to execute Node.js code from a .NET application. The most obvious one is where you have to return results from the .NET code to the calling Node script using a callback function, but there are other possibilities, such as hybrid teams working on processes that run both Node and .NET applications. With Node.js getting a fairly large share of server-side development in recent years, such hybrid development could become commonplace.

Edge.js solves the problem of marshalling between .NET and Node.js (the V8 engine and the .NET CLR), allowing the two server-side platforms to run in-process with one another on Windows, Linux and Mac. Edge can compile CLR code (primarily C#, but any CLR-supported language) and provides an asynchronous mechanism for interoperable scripts. Edge.js lets you marshal not only data but also function proxies; a JavaScript function is exposed to .NET as a Func&lt;object, Task&lt;object&gt;&gt; delegate.

To install Edge.js in your .NET application, you can use the NuGet package.

Once you have successfully installed the package, you will see the Edge folder appear in your solution.

You can then reference the EdgeJs namespace in your class files. The following sketch, modeled on the canonical Edge.Func example, illustrates the idea:
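using System;
using System.Threading.Tasks;
using EdgeJs;

class Program
{
    public static async Task Start()
    {
        // Edge.Func compiles the inline Node.js function and exposes it
        // to .NET as a Func<object, Task<object>> delegate.
        var helloNode = Edge.Func(@"
            return function (data, callback) {
                callback(null, 'Node.js welcomes ' + data);
            }
        ");

        Console.WriteLine(await helloNode(".NET"));
    }

    static void Main(string[] args)
    {
        // Block on the async call; fine for a console sample.
        Start().Wait();
    }
}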

Note how the code uses the .NET async/await mechanism to support an asynchronous callback into a JavaScript function running in Node.js via Edge.js. This opens up several possibilities for calling server-side JavaScript from a .NET application using Edge.


Posted by Gigi Sayfan on June 16, 2016

In today's information-rich world, people read more than ever. We are constantly bombarded with text. Software developers, in particular, read a great deal. But, what part do books play in all this reading? Also, what is a book exactly these days?

I was always an avid reader. I read a lot in general, and software development-related books were my preferred channel for improving my knowledge and understanding. Back then, the Internet had barely started reaching the mainstream. Companies had libraries and developers had stacks of books on their desks with lots of post-it notes and highlighted sections. Browsing meant physically turning pages in a book. The equivalent of Stack Overflow was asking the department genius. Fast forward to the present, and developers have an overwhelming number of options for accessing information across all dimensions: programming languages, frameworks, databases and methodologies.

The pace of innovation in all of these areas seems to have increased as well. How can a developer make sense of this abundance? Many developers give up and don't try to understand things in depth. They focus on getting the job done, following architectures and patterns designed by others, using frameworks that encapsulate many operational best practices and assembling loosely-coupled components. When they need to address a specific problem, they look for a similar project on GitHub, a Stack Overflow answer or a blog post. This is not necessarily a bad thing: a small number of people write the foundational frameworks and libraries, and many other people reap the benefits. This shows maturity and advances in ergonomic design. The '90s holy grail of reuse is finally here. But, that leaves software development books in an awkward position. By and large, they are no longer a useful medium for the majority of developers.

There are some books that communicate general concepts well, but most software development books explain how to use a particular framework or tool. Paper books are disappearing fast. Even e-books don't seem to cover these needs. In the past, books tried to keep up-to-date by releasing new versions. But, there is a new trend of "live" books that are constantly updated. This may be the future of software books, but is it really a book anymore?


Posted by Sandeep Chanda on June 7, 2016

SonarQube is a popular open source platform for managing quality across the application life cycle. It covers the seven axes of quality around source code, namely: code clones, unit testing, complexity, potential sources of bugs, adherence to static analysis rules, documentation in the form of comments, and architecture and design. The beauty of SonarQube is not only its ability to combine metrics for better correlation and analysis, but also to mix them with historical results. SonarQube is extensible using plugins and provides out-of-the-box support for multiple languages, including C#. It also offers a plugin for MSBuild, letting you integrate SonarQube with Team Build definitions in TFS and making code-debt analysis part of your build definitions.

To configure SonarQube for TFS, first you can download SonarQube.

Next you can download the C# and MS Build plugins.

Note that you will need Java running on your system to configure and run SonarQube.

Extract the downloaded package to a local folder on your system and place the C# plugin jar file under the extensions\plugins directory. Run the StartSonar.bat file in the bin folder to start the SonarQube server. SonarQube runs on port 9000 by default. Once the server is started, you can navigate to http://localhost:9000 to access the SonarQube portal.

Next extract the MS Build plugin package to a local folder and verify that the sonar.host.url property in the SonarQube.Analysis.xml file has the correct SonarQube server address configured.

You are now ready to configure SonarQube analysis in your TFS team build definition. Modify your build definition to set the Pre-build script path (under the advanced properties) to the full path of the MSBuild.SonarQube.Runner.exe file. Also set the Pre-build script arguments to contain the following four arguments:

  • begin
  • /k: [the project key of the SonarQube project]
  • /n: [the project name]
  • /v: [the project version]

Also set the Post-test script path to the full path to MSBuild.SonarQube.Runner.exe, and the Post-test script arguments to contain the argument "end".
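Put together, the pre-build and post-test invocations amount to something like the following (the project key, name and version shown here are placeholders):

MSBuild.SonarQube.Runner.exe begin /k:"MyProject" /n:"My Project" /v:"1.0"
MSBuild.SonarQube.Runner.exe end

The begin step runs before compilation and the end step runs after the tests complete, which is when the analysis results are pushed to the SonarQube server.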

You are all set. Once you run the build, the build report will show the SonarQube analysis summary and a link to the analysis results that directs you to the dashboard.


Posted by Sandeep Chanda on May 27, 2016

Ever since the introduction of ASP.NET MVC, and subsequently Web API, there has been some confusion brewing in the .NET Web development community about the versioning practices followed by the platform developers within the realm of ASP.NET. ASP.NET MVC and Web API maintained version numbers separate from ASP.NET and continued releasing on their own schedules in spite of being part of ASP.NET.

The popularity of ASP.NET Web Forms also took a beating, given the ever-increasing demand for ASP.NET MVC and Web API for building enterprise-grade Web applications. This meant ASP.NET MVC and Web API garnered more attention from the platform developers, ultimately leading to more frequent releases. The announcement of ASP.NET 5 only added to the confusion, with vNext also being used interchangeably as a term.

Early this year, the ASP.NET platform development team decided to drop this nomenclature and rebrand the next version of ASP.NET as ASP.NET Core 1.0. This came quickly on the heels of rebranding .NET as .NET Core. It is no longer simply a newer, bigger and better version of an existing Web development framework; it is a brand new Web platform written from the ground up for .NET Core, and it is actually much more lightweight than ASP.NET 4.6.

While Core 1.0 is not as complete as 4.6, with the release of RC2 a few weeks back the framework is coming really close to general availability. There are still significant gaps between what 4.6 offers and what is available in Core 1.0, but it is nevertheless a unified platform, with MVC and Web API being part of it rather than branded as separate frameworks. This is very promising indeed!

The biggest change in RC2 is the new .NET CLI, which replaces DNX, the toolset previously used to run .NET applications on Windows, Mac, and Linux. RC2 also updates the hosting model so that an ASP.NET Core application is just a console app, giving developers more flexibility in controlling the way their Core app runs and making the tool chain consistent for both .NET Core and ASP.NET Core. ASP.NET Core provides a WebHostBuilder class that gives you the power to configure your Web application the way you want it, including the ability to optionally host it on IIS. In addition to these groundbreaking changes, RC2 also gives you the ability to host your ASP.NET Core applications in Azure.
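As a rough sketch of the new hosting model (assuming a conventional Startup class elsewhere in the project), the console entry point looks something like this:

using System.IO;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        // The app is a plain console app; WebHostBuilder chooses the server
        // and, optionally, enables IIS integration.
        var host = new WebHostBuilder()
            .UseKestrel()
            .UseContentRoot(Directory.GetCurrentDirectory())
            .UseIISIntegration()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}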

A paradigm shift in Web development is coming our way with ASP.NET Core, and at this point we are eagerly awaiting the RTM release!


Posted by Gigi Sayfan on May 18, 2016

Software is infamously hard. This notion dates back to the software crisis of the late 1960s and '70s and to Fred Brooks' "No Silver Bullet" paper. The bigger the system, the more complicated it is to build software for it. The traditional trajectory is to build a system to spec and then watch it decay and rot over time until it is impossible to add new features or fix bugs due to the overwhelming complexity.

But, it doesn't have to be this way. Robust software (per my own definition) is a software system that gets better and better over time. Its architecture becomes simpler and more generic as it incorporates more real world use cases. Its test suite gets more comprehensive as it checks for more combinations of inputs and environment. Its real-world performance improves as insights into usage patterns allow for pragmatic optimizations. Its technology stack and third-party dependencies get upgraded to take advantage of their improvements. The team gets more familiar with the general architecture of the system and the business domain (working on such a system is a pleasure, so churn will be low). The operations team gathers experience, automates more and more processes and builds or incorporates existing tools to manage the system.

The team develops APIs to expose the functionality in a loosely coupled way and integrates with external systems. This may sound like a pipe dream to some of you, but it is possible. It takes a lot of commitment and confidence, but the cool thing is that if you're able to follow this route you'll produce software that is not only high quality but also fast to develop and adapt to different business needs. It does take a significant amount of experience to balance the trade-offs between infrastructure and applications needs. If you can pull it off, you will be handsomely rewarded. The first step in your journey is to realize the status quo is broken.


Posted by Sandeep Chanda on May 16, 2016

The Azure Internet of Things team has recently open sourced the gateway SDKs that can be used to build and deploy applications for Azure IoT.

Two classes of SDKs have been made available: the device SDK, which allows developers to connect client devices to the Azure IoT Hub, and the service SDK, which enables management of the IoT service instances in your hub. The device SDK supports a range of OSes running on low-end devices that typically support network communication, can establish a secure communication channel with the IoT Hub, can generate secure tokens for authentication and have a memory footprint of at least 64 KB of RAM.

The device SDK is available in C, .NET, Java, Node.js, and Python, while the service SDK is currently available in .NET, Node.js, and Java. In order to register clients using the device SDK, you will first create an IoT Hub instance in Azure using the management portal and then use the connection string of your IoT Hub to register a new device. If you reference the .NET SDK, you can use the Microsoft.Azure.Devices.Client library, which exposes various methods to interact with the gateway, such as the Create method to create a DeviceClient and SendEventAsync to send an event to the hub. The Microsoft.Azure.Devices.Client library supports both the AMQP and HTTPS protocols. Messages can also be sent in batches using the SendEventBatchAsync method, which sends a collection of Message objects to the hub.
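A minimal sketch of sending a device-to-cloud message with the .NET device SDK might look like the following (the host name, device ID and key are placeholders):

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

class TemperatureSender
{
    public static async Task SendReadingAsync()
    {
        // Placeholder hub address and device credentials
        var deviceClient = DeviceClient.Create(
            "my-hub.azure-devices.net",
            new DeviceAuthenticationWithRegistrySymmetricKey("myDevice", "<device-key>"),
            TransportType.Amqp);

        var message = new Message(Encoding.UTF8.GetBytes("{ \"temperature\": 21.5 }"));
        await deviceClient.SendEventAsync(message);
    }
}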

The service SDK is available as the Microsoft.Azure.Devices library. You can use the RegistryManager class to register a device:

var registryManager = RegistryManager.CreateFromConnectionString(connectionString);
await registryManager.AddDeviceAsync(new Device(deviceId));

To receive device-to-cloud messages, you can create a receiver using the EventHubClient class and use the ReceiveAsync method to start receiving event data asynchronously.
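A sketch of the receiving side, assuming the EventHubClient class from the Microsoft.ServiceBus.Messaging namespace (WindowsAzure.ServiceBus package) and reading a single event from partition "0":

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

class DeviceToCloudReader
{
    public static async Task ReadOneAsync(string connectionString)
    {
        // "messages/events" is the IoT Hub's built-in, Event Hub-compatible endpoint
        var eventHubClient = EventHubClient.CreateFromConnectionString(connectionString, "messages/events");

        // Read from a single partition, starting from the current time
        var receiver = eventHubClient.GetDefaultConsumerGroup().CreateReceiver("0", DateTime.UtcNow);

        var eventData = await receiver.ReceiveAsync();
        if (eventData != null)
        {
            Console.WriteLine(Encoding.UTF8.GetString(eventData.GetBytes()));
        }
    }
}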

You can clone the IoT Gateway SDKs repository from GitHub and customize it for your own gateway solutions using Azure IoT.

The Azure IoT Gateway is promising because, while developers can connect their devices to IoT platforms, there are many scenarios that require edge intelligence, e.g., sensors that cannot connect to the cloud on their own. The IoT Gateway SDKs make it simple for developers to build on-premises custom computation wherever a standard solution doesn't work.


Posted by Sandeep Chanda on May 5, 2016

During the Build 2016 conference, Vittorio Bertocci, Principal Program Manager in the Microsoft Identity division, announced the availability of a new authentication library named MSAL (Microsoft Authentication Library). It is poised to become one unified library that provides a single programming model for different identity providers, such as Microsoft Accounts and Azure Active Directory.

MSAL finds its origins in ADAL, which was tailored to work exclusively with Azure AD and ADFS. MSAL improves on it by supporting apps regardless of whether the authority is a Microsoft Account (MSA) or any Azure AD tenant. It also provides better protocol compliance and overcomes some of ADAL's issues, such as working with the token cache in multi-tenant applications. Another feature that makes it a universal identity library is its support for standards-based scopes instead of the resource identifiers proprietary to Active Directory. With MSAL you don't need to know the underlying protocols, such as OAuth 2.0 and OpenID Connect; it provides the necessary wrappers for you to perform identity-related operations at a high level without knowing many of the protocol details. Notably, multi-factor authentication is supported out of the box. Overall, however, the most fascinating features of this library are the ability for the app to ask for permissions incrementally and its support for transparent refresh tokens.

The two primary client types exposed by MSAL are:

  1. PublicClientApplication — used for desktop clients and mobile apps
  2. ConfidentialClientApplication — used for server-side apps and other Web-based resources

You can start using MSAL against the new authority endpoint. Note that you need to register your app first and obtain the client ID. The new endpoint supports both personal and work accounts. During the authentication process you receive both the sign-in info and an authorization code that can be used to obtain an access token. In a single sign-on scenario, that token can be used to access other secured resources that are part of the same sign-in. The following code illustrates how the ConfidentialClientApplication type is used to fetch the token and access resources securely:

ConfidentialClientApplication clientApp = new ConfidentialClientApplication(
    clientId, null, new ClientCredential(appKey),
    new MSALSessionCache(userId, this.HttpContext));

You can then use the AcquireTokenSilentAsync method to get the token by asking for the scopes you need.
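As a rough sketch of that call (the scope URL is a placeholder, and the exact method signature may vary across the preview releases of the library):

var result = await clientApp.AcquireTokenSilentAsync(
    new[] { "https://graph.microsoft.com/mail.read" });
// The returned result carries the access token, which can then be attached
// as a Bearer header on requests to the secured API.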

MSAL aspires to provide an end-to-end identity solution, not just for your own and Microsoft APIs, but also for any third-party APIs that choose to leverage MSAL. Today it supports applications developed using .NET and cross-platform apps built using Xamarin. Future iterations will support native and JavaScript-based apps.


Posted by Sandeep Chanda on April 28, 2016

Alberto Brandolini has been evangelizing the idea of EventStorming for a while now. It is a powerful workshop format for breaking down complex, real-world business problems. The idea took shape from the Event Sourcing implementation style laid out by Domain-Driven Design (DDD). The workshop produces a model perfectly aligned with the ideas of DDD and lets you identify aggregate and context boundaries fairly quickly. The approach also uses easy-to-understand notation and doesn't require UML, which in itself can be a deterrent for workshop participants who are not familiar with UML notation.

The core idea of EventStorming is to make the workshop more engaging and evoke thought-provoking responses from the participants. A lot of the time, discovery is superficial and figuring out the details is deferred until later. EventStorming, by contrast, allows participants to ask some very deep questions about the business problem that were very likely already playing in their subconscious minds. It creates an atmosphere where the right questions can arise.

A core theme of this approach is unlimited modeling space. Modeling complex business problems is often constrained by space limitations (mostly the whiteboard), but this approach lets anything be used as a surface on which the problem can be modeled. You may pick whatever comes in handy and helps you get rid of the space limitations.

Another core theme of this approach is the focus on Domain Events. Domain Events represent meaningful actions in the domain with suitable predecessors and successors. Placing events on a timeline on a surface allows people to visualize upstream and downstream activities and model the flow easily in their minds. Domain events are further annotated with user actions that are represented as Commands. You can also color-code the representation to distinguish between user actions and system commands.

The next aspect to investigate is Aggregates. Aggregates here should represent a section of the system that receives the commands and decides on their execution. Aggregates produce Domain Events.

While Domain Events are key to this exploration technique, along the way you are also encouraged to explore Subdomains, Bounded Contexts and User Personas. Subsequently, you also look at Acceptance Tests to remove any ambiguity arising from edge-case scenarios.

