Posted by Sandeep Chanda on July 30, 2015

Agile and Scrum are great at delivering software in an iterative and predictable fashion, promoting development that stays aligned with the expected outcome by accepting early feedback. That said, the quality and longevity of an application are often driven by the sound engineering practices put in place during the course of development. This also means that while burn-down charts, velocity, and story-level progress measures have their value in providing a sense of completion, unless some process guidance is established to measure engineering success during the application lifecycle, it is difficult to be certain about the behaviour of the application at go-live and thereafter. Unpredictable behaviour does not instill confidence in the product, ultimately spoiling the reputation of the project team engaged in delivering quality software. The question, then, is: which metrics are key to reporting and measuring engineering work?

LOC vs LOPC

Traditionally, raw lines of code (LOC) was used as a measure of engineering productivity, but the approach is significantly flawed. A seasoned programmer can produce the same outcome in significantly fewer lines of code than a newbie, and it is the code that sticks around that matters. A better measure is lines of productive code (LOPC). Measuring LOPC over a timeline gives you a good idea of individual developer productivity during the course of development and empowers you to make decisions about optimizing the team composition. For example, you can plot every 100 LOPC checked in by a programmer on a time graph and use it to track trends: a developer is showing real improvement if he or she is taking, on average, less time to deliver 100 LOPC than at the beginning of the program.
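As a rough illustration of tracking this, here is a minimal TypeScript sketch that estimates the average number of days a developer takes to deliver 100 LOPC. The CheckIn shape and the data feeding it are illustrative assumptions, not part of any particular tool:

// Estimate the average calendar days a developer takes per 100 LOPC.
// CheckIn is a hypothetical record attributing productive lines to a commit.
interface CheckIn {
  date: Date;
  productiveLines: number; // LOPC attributed to this check-in
}

function daysPer100Lopc(checkIns: CheckIn[]): number {
  const sorted = [...checkIns].sort((a, b) => a.date.getTime() - b.date.getTime());
  const totalLopc = sorted.reduce((sum, c) => sum + c.productiveLines, 0);
  if (totalLopc === 0) return Infinity;
  const elapsedDays =
    (sorted[sorted.length - 1].date.getTime() - sorted[0].date.getTime()) / 86400000;
  return (elapsedDays / totalLopc) * 100;
}

Plotting this value per iteration shows whether a developer is trending toward fewer days per 100 LOPC, which is the improvement the time graph is meant to reveal.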

Code Churn

Code churn is another critical factor in measuring engineering success. Refactoring causes code churn: the team may be producing lots of lines of code, but if the gap between LOC and LOPC keeps widening, that indicates significant churn. This analysis helps you nudge a programmer who is not putting sufficient effort into writing quality code the first time around. Over a period of time, as team members get a better understanding of the requirements, the churn should reduce. If that is not the case, it is an indicator that you need to make changes to your team composition.
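Continuing the sketch above, churn per iteration can be expressed as the fraction of checked-in lines that do not survive as productive code. The per-sprint figures below are made-up inputs for illustration only:

// Churn ratio: the share of raw LOC that did not end up as productive code.
interface SprintStats {
  sprint: string;
  loc: number;  // raw lines of code checked in during the sprint
  lopc: number; // lines of productive code that survived
}

function churnRatio({ loc, lopc }: SprintStats): number {
  return loc === 0 ? 0 : (loc - lopc) / loc;
}

const history: SprintStats[] = [
  { sprint: "Sprint 1", loc: 4200, lopc: 2600 },
  { sprint: "Sprint 2", loc: 3900, lopc: 3100 },
];

// A ratio that keeps climbing, sprint over sprint, is the warning sign
// described above; it should fall as the team's understanding matures.
history.forEach(s => console.log(s.sprint, churnRatio(s).toFixed(2)));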


Posted by Sandeep Chanda on July 14, 2015

When it comes to enterprise data visualization, Power BI has been leading from the front. It not only allows you to create great visualizations from your datasets, transforming the way you spot trends and make decisions, but also provides a platform for developers to extend its capabilities. The Power BI REST API has been available for a while now. You can use it to retrieve datasets stored in Microsoft Azure and then create visualizations that suit your needs. You can also add the visualizations to your ASP.NET web application hosted in Azure, making them available to a bigger portion of your target audience. The Power BI team has taken a leap forward with the announcement of the availability of extensions in the form of Power BI Visuals.

The Power BI visuals project provides a set of visualizations that you can use to extend the capabilities of Power BI. The 20-odd out-of-the-box visualizations are ready to use with default Power BI capabilities such as selection and filtering. The visuals are built using D3.js, but you also have the choice of leveraging WebGL, SVG, and other graphical technologies. The project also provides the framework for you to build and test the visualizations. Everything is compiled down to JavaScript and runs in all modern browsers. The project also contains a playground to demonstrate the capabilities. You can run the project using Node.js, but you will also need Visual Studio 2013 (or above) and TypeScript 1.4 for Visual Studio to execute the sample solution.

Once you have cloned the repository from GitHub, you can use the npm install command to install the development dependencies. If you also want to test the visualizations, you will need the Chutzpah JavaScript test runner and Jasmine-JQuery placed in the src\clients\externals\thirdparty\jasminejquery folder inside the repository. You can then use the npm test command to run the tests.

The Power BI visualization lifecycle includes three methods on the IVisual interface that the project provides:

  1. init, called when the visual is first created.
  2. update, called whenever the host has an update for the visual.
  3. destroy, called when the visual is about to be disposed.

The project includes a cheer meter implementation as an example that demonstrates the Power BI visual extensions.
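To make the lifecycle concrete, here is a minimal TypeScript sketch of a custom visual implementing those three methods. The IVisual shape and the option types below are deliberately simplified stand-ins for the richer interfaces the powerbi-visuals project defines, so treat this as an illustration rather than drop-in code:

// Simplified stand-ins for the project's real option types (assumption).
interface VisualInitOptions { element: HTMLElement; }
interface VisualUpdateOptions { dataValues: number[]; }

interface IVisual {
  init(options: VisualInitOptions): void;
  update(options: VisualUpdateOptions): void;
  destroy(): void;
}

class SimpleBarListVisual implements IVisual {
  private root: HTMLElement;

  // Called once when the host first creates the visual.
  init(options: VisualInitOptions): void {
    this.root = options.element;
  }

  // Called whenever the host has new data (or a resize) for the visual.
  update(options: VisualUpdateOptions): void {
    this.root.innerHTML = options.dataValues
      .map(v => '<div style="width:' + v + 'px;background:#118DFF">' + v + '</div>')
      .join("");
  }

  // Called when the visual is about to be disposed; release resources here.
  destroy(): void {
    this.root.innerHTML = "";
  }
}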


Posted by Sandeep Chanda on July 6, 2015

You can now create ASP.NET Docker containers from within your Visual Studio IDE with the release of Visual Studio 2015 Tools for Docker. Note that the tool is still in preview.

This is definitely good news for those looking to run ASP.NET on Linux. You can also very easily host the container running on a Linux VM in Microsoft Azure. The tool installs the Docker command line interface (CLI) for full control of the container environment using PowerShell, and it provides an easy-to-use publish user interface that integrates with the web publishing mechanism available in Visual Studio. It also automatically generates the necessary certificates.

You can use the tool to configure a Docker container, based on a Linux VM, to host in Azure. Next, you can create a publishing profile for your ASP.NET 5 web application project. The publishing profile allows you to choose a Docker container as a publish target. Once you have configured the profile, you can right-click your web project to deploy updates to the configured container with a single click.

You can also automate publishing using MSBuild, PowerShell, or a Bash script from a Linux or Mac machine. You need to specify the publishing profile in your script of choice.

The following example illustrates using PowerShell to publish your ASP.NET 5 web application to a hosted or on-premises container as configured in the publishing profile:

.\aspnet5-Docker-publish.ps1 -packOutput $env:USERPROFILE\AppData\Local\Temp\PublishTemp -pubxmlFile .\aspnet5-Docker.pubxml

You also have the option to turn the container-creation step on or off from the publish profile: setting the DockerBuildOnly configuration value to true builds only the image in your Docker host, while false also creates the container.

Note that .NET Core is still being built for Docker, so for this release the tool uses the Mono runtime to provision your .NET applications.


Posted by Sandeep Chanda on June 26, 2015

Yesterday, the Microsoft Azure team announced the availability of the Azure Resource Usage and Rate Card APIs, which developers on the Azure platform can now leverage to programmatically retrieve usage and billing information. This feature will, in turn, allow enterprises to charge their customers based on usage. It was long overdue for multi-tenant systems hosted on Azure; it enables accurate tracking of cloud spend and makes the cost of your operations on the cloud more predictable to manage. Specifically, using the Billing API, there are two areas that you can query at your subscription level:

  1. Resource usage: The resource usage REST API allows you to get data consumption at a subscription level. The API acts as a resource provider as part of the Azure Resource Manager, and you can use the role-based access control features to allow or deny access to the data. The URI you call is the Usage Aggregates resource https://management.azure.com/subscriptions/{subscription-Id}/providers/Microsoft.Commerce/UsageAggregates. You need to pass the API version, the report start and end date-times, and a granularity value of daily or hourly. What you get back, amongst other things, is the usage start and end times representing the timestamp of the actual recorded usage, the meter category (storage or otherwise), the meter subcategory (indicating, for example, whether the storage is geo-redundant), and finally the quantity in units (typically GB). You can also send the show details flag as true, in which case the response will also show the region and the project using the resource.
  2. Rate card: The rate card REST API allows you to fetch pricing information by locale, currency, and region. The URI you call is the Rate Card resource https://management.azure.com/subscriptions/{subscription-Id}/providers/Microsoft.Commerce/RateCard. The response gives you the meter rates (based on the currency specified in the input) for all the available meter categories, such as cloud services, networking, and virtual machines.

The API can be used in various scenarios, such as calculating monthly spend, setting up alerts when usage crosses a specific threshold, and metering tenants based on usage.
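As a concrete example, here is a minimal TypeScript sketch of calling the Usage Aggregates resource for daily usage. It assumes you have already acquired an Azure AD bearer token for the subscription, and the query parameter names (api-version, reportedStartTime, reportedEndTime, aggregationGranularity, showDetails) should be verified against the current Billing API documentation:

// Query daily usage aggregates for a subscription (sketch, not production code).
async function getDailyUsage(subscriptionId: string, token: string): Promise<any> {
  const base =
    "https://management.azure.com/subscriptions/" + subscriptionId +
    "/providers/Microsoft.Commerce/UsageAggregates";
  const params = new URLSearchParams({
    "api-version": "2015-06-01-preview",  // assumed preview API version
    reportedStartTime: "2015-06-01T00:00:00Z",
    reportedEndTime: "2015-06-15T00:00:00Z",
    aggregationGranularity: "Daily",      // or "Hourly"
    showDetails: "true",                  // include region and project details
  });

  const response = await fetch(base + "?" + params.toString(), {
    headers: { Authorization: "Bearer " + token },
  });
  if (!response.ok) {
    throw new Error("Usage query failed with status " + response.status);
  }
  // The payload contains usage records with meter category, subcategory,
  // usage start/end times, and quantity in units.
  return response.json();
}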


Posted by Sandeep Chanda on June 17, 2015

The Command Query Responsibility Segregation (CQRS) pattern isolates the data-querying aspects of an application from the insert, update, and delete operations. There are limited use cases for this pattern, and it doesn't apply to request-response-style scenarios where the updated results may need to be displayed to the user immediately after an insert, update, or delete. You must carefully evaluate your requirements to determine whether CQRS is suitable to address your architectural needs. Typically, requirements that are more sophisticated than plain CRUD-driven information systems, such as information passing through different transient states of representation, are good candidates for CQRS.

Event Sourcing is a useful scenario in which the CQRS pattern can be leveraged. In an event sourcing scenario, the application stores state transitions as events in an event store. The read and write models may be in different states, but the application eventually arrives at a consistent current state by playing the events in sequence. A good example is a highly scalable hotel reservation system in which certain attributes of the reservation can be modified until midnight of the day before arrival. The query and command operations in this scenario can be dealt with separately using the CQRS pattern; the states may not be in sync at any given moment, but they eventually become consistent so the current state of the reservation can be determined.
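A minimal TypeScript sketch of the replay idea follows; the event names and fields for the reservation are illustrative, not taken from any specific system:

// Events recorded for a reservation, replayed in order to derive current state.
type ReservationEvent =
  | { type: "Reserved"; roomType: string; arrival: string }
  | { type: "RoomTypeChanged"; roomType: string }
  | { type: "Cancelled" };

interface ReservationState {
  roomType?: string;
  arrival?: string;
  cancelled: boolean;
}

function replay(events: ReservationEvent[]): ReservationState {
  let state: ReservationState = { cancelled: false };
  for (const event of events) {
    switch (event.type) {
      case "Reserved":
        state = { ...state, roomType: event.roomType, arrival: event.arrival };
        break;
      case "RoomTypeChanged":
        state = { ...state, roomType: event.roomType };
        break;
      case "Cancelled":
        state = { ...state, cancelled: true };
        break;
    }
  }
  return state;
}

// The write side only appends events; any read model can catch up later by
// replaying them, which is how the two sides become eventually consistent.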

CQRSLite is a useful and lightweight CQRS and Event Sourcing framework for getting your head around the pattern. To build a more robust architecture around CQRS, it often helps to add a complex event processing (CEP) tool or a bus. Event Store is an open source, high-performance, scalable, and highly available event store whose projections are written in JavaScript; client interfaces are provided for .NET in addition to the native HTTP API.


Posted by Sandeep Chanda on June 1, 2015

Last week's Google I/O 2015 event saw a slew of announcements, most notably Android M. A number of new features were also announced for the upcoming 7.5 version of Google Play Services. The version brings quite a few interesting features and optimizations to the entire Android ecosystem. The integration of Google Smart Lock with Android apps is a cool new addition to this version. Chrome allows you to save your OpenID- and password-based credentials using the Chrome Password Manager. You can now retrieve the stored credentials using the newly added Credential API in your Android apps. The credentials can be retrieved as part of the login process in apps running on any device.

The API automatically provides the necessary UI to prompt users to fetch and store credentials for future authentication. To store the credentials, you can use the Credential API's Auth.CredentialsApi.save() method, and to retrieve stored credentials, you can use the Auth.CredentialsApi.request() method. Beyond sign-in, you can also use the API to rapidly on-board users by partially filling in the sign-up form for the app.

Another interesting addition was the release of App Invites (beta). The feature allows you to share your app with people you know. You can create actionable invite cards and send them via email, enabling you to market your app better. The invitations can be sent via SMS as well, providing wider outreach. You can also personalize access to the app, for example by adding discount codes for certain invitees. You can send an invitation by creating an Intent using the AppInviteInvitation.IntentBuilder class. The Intent contains a title, the message, and deep-link data.

If the app is already installed on the user's device, the user can follow the invitation workflow generated by the deep-link data. If the app is not installed, they can choose to install it from the Play Store. The service is also available on iOS.


Posted by Sandeep Chanda on May 26, 2015

GitColony provides an easy-to-use, one-stop collaborative environment for your code reviews and QA processes. It integrates directly with your GitHub repository and provides an intuitive, gamified dashboard that helps you review code as you write it, instead of letting a large delta pile up for review over time. With GitColony, you don't have to wait for pull requests to review tons of code in one go. You can review code as it is written in the form of partial reviews, making your reviews more actionable. It also remembers the last review so that the same review need not be done twice.

More interestingly, you can set up business rules around code quality and, as code gets built, receive notifications about the critical paths you have identified. If there are any changes to files in a critical path, you will certainly know about it. Using the rules you can also enforce your code review policies. The process to request a review is very simple: tag the user you want to review your code in your commit message and a review request is created automatically.

Setting up GitColony is pretty easy. After registering your company and pointing GitColony to your GitHub account, you can set up the repositories to sync as shown in the following figure.

You can also configure the people in your team who will collaborate and participate in the review process.

This is, however, an optional step during configuration, and you can always come back later to add people. Note that the pricing option you select limits the number of people you can add to your GitColony account for collaboration. You can also set up the profile of each person collaborating on the platform.

You are now all set to use GitColony. The GitColony dashboard will display everything you need to know for monitoring the quality of your code. In addition, the dashboard will also display the incidents that are assigned to you.

GitColony not only allows you to establish a code review process, it also supports a robust QA process. It provides a QA plugin that allows you to automate tests by recording actions from the browser. It also provides an integrated Dev-QA environment by allowing the QA team to vote to approve or reject a live branch or a pull request. This can ensure that no code makes its way to production unless certified by the QA team.


Posted by Sandeep Chanda on May 11, 2015

At the recent Build 2015 event, Microsoft announced the launch of Vorlon.js, an open source, extensible tool for debugging JavaScript. Vorlon.js is powered by Node.js, and you can remotely connect to a maximum of 50 devices simultaneously and run your JavaScript code with a single click, then see the results in your Vorlon.js dashboard. The same team that brought WebGL to the JavaScript world with Babylon.js also created this powerful remote JavaScript debugging tool.

The idea behind Vorlon.js was to allow developers to better collaborate on JavaScript code and debug together. Code written by one person is visible to all, and the experience is browser agnostic. There are no dependencies: just plain JavaScript, HTML, and CSS running on the target devices.

The tool itself is a lightweight web server that can be installed locally or run from a server for the team to access. The dashboard acts as the central command centre and communicates with all the remote devices. The tool is also extensible, with a plug-in framework that developers can use to create their own plug-ins. It already comes with a few out-of-the-box plug-ins to view console logs, inspect the DOM, and display the JavaScript variable tree using the Object Explorer.

It is very easy to start using Vorlon.js. First, install the npm package using the Node.js console:

$ npm i -g vorlon

To start running the tool, type the vorlon command:

$ vorlon

Now you have it running on localhost, on port 1337 by default. To start monitoring your application, add a reference to the following script:

<script src="http://localhost:1337/vorlon.js/SESSIONID"></script>

Here, SESSIONID is any string that uniquely identifies the application. You can also omit it, in which case a default session identifier is used. You can see the output, DOM, and console log in the dashboard by navigating to the following URL:

http://localhost:1337/dashboard/SESSIONID

You are now all set to use Vorlon.js.


Posted by Sandeep Chanda on April 28, 2015

While the last few decades were dominated by client-server technologies, this one will be the decade of connected systems and devices operating on cloud platforms. Service orientation has paved the way for hosted APIs and Software as a Service (SaaS). Communication between publishers and subscribers of such services is orchestrated through the cloud, on a hosted and managed foundation, giving rise to a new world of software-defined infrastructure with programmable compute, networking, security, and storage. This means development teams can worry less about hosting the infrastructure and instead focus on optimizing the core, non-functional aspects of the system under development. The following figure illustrates the reorganized technology stack that you can expect to take shape in the near future:

At the infrastructure tier, virtual machines are now a thing of the past. The majority of the application stack will be managed by container technologies such as Docker. Containers are lightweight and can be built and deployed using scripts, making the GUI redundant. Microsoft Windows is already hitching a ride on container technology, making forays with the announcement of Nano Server. Container technologies will make it very easy for DevOps teams to automate release processes. Platforms such as Chef will be leveraged to turn your infrastructure into code and to automate deployments and testing. Microsoft is also working very closely with Chef to extend its capabilities on Nano Server.

Sitting a layer above will be APIs delivering Software as a Service. Platforms like Azure App Service and APIARY are already making it easier for developers to host their APIs and make them accessible via a marketplace. In addition, a variety of UI technologies are evolving that target devices with multiple form factors and allow consumption of the data from the APIs.


Posted by Sandeep Chanda on April 20, 2015

The Microsoft Azure team had, up until now, been treating the web and mobile app platforms independently, supporting a different hosting model for each. While Azure Mobile Services catered to the needs of the mobile-first approach, Azure Websites supported hosting of .NET, Node.js, Java, PHP, and Python based web applications with built-in scale. Underneath, these services were not that different from each other, and it was time the Azure team unified them under one platform. They did just that last month in the form of Azure App Service. Azure App Service brings together the Web App, Mobile App, Application Logic (Workflow App), and API services, which can easily connect with SaaS-based and on-premises applications, all with unified low-cost pricing.

The Logic App is a new feature that you can use to create a workflow and automate the use of data across services without having to write any code. You can log in to the Azure preview portal and create a new Logic App by navigating to the Web + Mobile group under the New menu.

Under the Web + Mobile menu, click the Logic App option to create a new Logic App. Specify a name for the app and select an App Service Plan.

You can also configure other options, such as the pricing package, resource group, subscription, and location. You can then click the Triggers and Actions tab to configure the trigger logic. The following figure illustrates using the Facebook connector to post a recurring message from the weather service to the timeline.

