Posted by Sandeep Chanda on August 24, 2015

The recently concluded ContainerCon generated quite a lot of excitement around container technologies. The updates from the Linux Container (LXC) project were particularly interesting. Canonical, the company spearheading Ubuntu's fast, dense and secure container management project, shared the concept of the Linux Container Hypervisor (LXD) last year. LXD is a new stratum on top of LXC that brings the advantages of legacy hypervisors into the modern world of containers. What is particularly important is that the LXD project provides RESTful APIs that can be used to start, stop, clone, and migrate containers on any given host. Hosts can also run LXD clusters, delivering cloud infrastructure at higher speed and lower cost. LXD can also run alongside Docker, allowing resources to be used as a pool of micro-services delivering complex web applications. The most fascinating aspect of LXD is that the underlying container technology is decoupled from the RESTful API driving the functionality, which allows it to serve as a cross-functional container management layer.

The RESTful API enables communication between LXD and its clients, with calls over HTTP encapsulated in SSL. A GET on / returns all the available endpoints, which also gives you the list of available API versions. You can then do a GET on /[version]/images/* to get the list of publicly available images. The API also supports a recursion argument to optimize queries against large collections.
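
To make this concrete, here is a minimal sketch of those calls in TypeScript on Node.js. It assumes an LXD daemon listening on 127.0.0.1:8443 and a client certificate pair the daemon already trusts; the host, port and file names are illustrative, not defaults you can rely on.

import * as fs from "fs";
import * as https from "https";

function lxdGet(path: string): Promise<any> {
  const options: https.RequestOptions = {
    host: "127.0.0.1",
    port: 8443,
    path,
    cert: fs.readFileSync("client.crt"),
    key: fs.readFileSync("client.key"),
    rejectUnauthorized: false, // LXD serves a self-signed certificate by default
  };
  return new Promise((resolve, reject) => {
    https
      .get(options, (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(JSON.parse(body)));
      })
      .on("error", reject);
  });
}

// GET / lists the endpoints and supported API versions (e.g. ["/1.0"]).
lxdGet("/").then((r) => console.log(r.metadata));

// GET /[version]/images lists public image URLs; recursion=1 expands each
// entry in place instead of returning just its URL.
lxdGet("/1.0/images?recursion=1").then((r) => console.log(r.metadata));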

A GET operation on the /[version]/containers API fetches the list of containers; it also specifies the authentication and the operation type. A POST operation on the same endpoint creates a container, returning either a background operation or an error. There are also a number of management operations you can perform on each container using the /[version]/containers/[name] API, as sketched below.
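
Listing and creating containers follows the same pattern. A hedged sketch, reusing the lxdGet helper from the previous snippet (the container name and image alias here are illustrative):

function lxdPost(path: string, payload: object): Promise<any> {
  const body = JSON.stringify(payload);
  const options: https.RequestOptions = {
    host: "127.0.0.1",
    port: 8443,
    path,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    cert: fs.readFileSync("client.crt"),
    key: fs.readFileSync("client.key"),
    rejectUnauthorized: false,
  };
  return new Promise((resolve, reject) => {
    const req = https.request(options, (res) => {
      let data = "";
      res.on("data", (chunk) => (data += chunk));
      res.on("end", () => resolve(JSON.parse(data)));
    });
    req.on("error", reject);
    req.end(body);
  });
}

// GET /[version]/containers returns the list of container URLs.
lxdGet("/1.0/containers").then((r) => console.log(r.metadata));

// POST to the same endpoint creates a container; LXD answers with a
// background operation (pollable under /1.0/operations) or an error.
lxdPost("/1.0/containers", {
  name: "web01",
  source: { type: "image", alias: "ubuntu" },
}).then((r) => console.log(r.operation));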


Posted by Gigi Sayfan on August 14, 2015

What is the best programming language across the board? There is no such thing, of course. Each programming language has different strengths and weaknesses, different design goals and an entire ecosystem surrounding it that includes community, libraries, tools, and documentation — all of which are partially dependent on how long the language has been around.

Alright, so there isn't a best programming language. Let's just use the old adage and pick the best tool for the job. I'm sorry to disappoint you again. There isn't a "best tool for the job" either. It is very easy for each particular project to rule out a lot of languages, and sometimes you'll end up with just a single choice. But more often than not, your choice (or your short list of choices) will be dictated by external constraints and not by the "superiority" of the language you end up with.

Most large software development organizations have a limited number of programming languages with which they work (sometimes just one). Sure, for some standalone tools you may use whatever you want, but we're talking here about your bread and butter. The big enterprise systems, your cash cow.

Those systems often have tens, if not hundreds, of man-years invested in them. They have special processes, tools, build systems, test systems, deployment systems and operational experience. Introducing a new programming language into the mix will have so much upfront cost that it is pretty much out of the question. This is especially true when the upside is that you'll be able to use a cool new language and get to experience all its rough edges firsthand.

But suppose you were able to combat all that: you've hypnotized everyone and persuaded upper management to let you rewrite the organization's system from scratch using the "best" language ever. Cool, what now?

Am I against trying new languages? Quite the contrary. The recent trend toward SOA and microservices provides a great escape hatch. Those services depend on each other at the API level only. If your architecture is already service-oriented, you can easily experiment by implementing a small new service in whatever language you choose, or by migrating a small existing service. It may still take a lot of effort, because you'll need to put the necessary infrastructure in place, but it may be acceptable for a small, non-critical service not to have the best initial infrastructure.

The other case for trying new languages is of course starting a greenfield project or a whole new company, where you really start from scratch.

I will not discuss specific languages today, but here is a short list of some relatively new and very promising languages I have my eye on:

  • Go
  • Rust
  • Elm
  • Julia
  • Scala
  • Clojure

A special mention goes to C#, which is not new, but seems to be the only language that manages to add new features and capabilities and push the envelope of mainstream programming languages without becoming bloated, inconsistent and confusing.


Posted by Sandeep Chanda on August 11, 2015

Last month, Microsoft finally announced the general availability of Visual Studio 2015, along with Update 5 for Visual Studio 2013 to enable support for some of the updated framework features and the latest .NET Framework 4.6. While the release of Visual Studio 2015 was quite a fanfare event, the release of Framework 4.6 has been marred by some inherent issues in the framework. We are still awaiting announcements from Microsoft as to when those issues will be resolved.

As far as Visual Studio 2015 is concerned, it comes with some cool new features, a bunch of which are illustrated by Scott Guthrie in his blog post. There are significant tooling improvements in terms of support for JavaScript technologies such as Node.js. Configuration is now managed through JSON-based configuration files, and the JSON editor is rich, even including support for Bower. The JavaScript editor now supports rich syntax for Angular.js and Facebook's React.js framework.

I liked the out-of-the-box integration with Azure Application Insights (much as they did when integrating Azure Web Sites with an ASP.NET Visual Studio template some time back). It is a nifty add-in for instrumenting user behaviour in your application without having to program it in. Of course, if you want more, you can still program against the Data Collection API, but this out-of-the-box integration gives you immediate traceability.

The update also brings ASP.NET 5.0 preview templates. You can now create an ASP.NET 5.0 application (a radical shift from traditional ASP.NET web applications): an open source, cross-platform framework for building modern web applications. ASP.NET 5 applications run on the .NET Execution Environment (more on this in the next post), which allows them to run cross-platform, equally efficiently on Mac, Linux and Windows.

After creating an ASP.NET 5.0 project, you will see a bunch of new additions when you look at the solution. There is the new Startup.cs class, which defines the ASP.NET pipeline using configuration. You will also notice a number of .json files that package different components and configuration information, along with configuration for task-based JavaScript runners such as Gulp, as sketched below.
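
For a flavor of what such a task runner does, here is a minimal Gulp task sketch. The task name and paths are illustrative rather than the template's defaults, and the Gulp typings are assumed to be installed.

import * as gulp from "gulp";

// Copy client-side scripts into wwwroot, from which static files are served.
gulp.task("copy:scripts", () =>
  gulp.src("Scripts/**/*.js").pipe(gulp.dest("wwwroot/js"))
);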

Another item you will find is the wwwroot folder. It represents the root location from which HTTP requests are served, and it holds the application's static content.

Edition-wise, there are certain changes. You now get a fully loaded Enterprise edition, which replaces the Ultimate edition of the outgoing platform. More on the tools, and especially the .NET Execution Environment, in future posts!


Posted by Gigi Sayfan on August 7, 2015

Hi. My name is Gigi Sayfan and I'll be writing here weekly. In this first post I would like to tell you a little bit about me and what I plan to cover. I'm also very interested to know what you care about and I'll gladly take ideas, requests and suggestions.

I'm a passionate software developer. Over the past 20 years, I have worked for large corporations, small startups and everything in between. I have written production code in many programming languages and for many operating systems and devices. My current role is director of software infrastructure at Aclima. We design and deploy large-scale distributed sensor networks for environmental quality and hope to make our planet a little healthier. I still write code every day and, in addition, I like to write technical articles about software development. I used to write for Dr. Dobb's and these days I write regularly for DevX.

I have a lovely wife and three kids that I tried to infect with the programming bug (see what I did there) with varying degrees of success. When I don't code or read and write about software development I lift weights, play basketball and do some Brazilian Jiu Jitsu.

What am I going to write about? Well, pretty much everything. There are so many cool topics to talk about. I'm getting excited already.

Programming Languages

Theoretically, all Turing-complete programming languages are equivalent, but we all know the difference between theory and practice. There are so many new developments in programming languages, such as Modern C++, ECMAScript 6, Go, Rust, Elm, C#, and Python 3, to name a few. I have always loved programming languages and I plan to do deep dives, comparisons, reviews of new versions, and more.

Databases

For a long time, databases were considered a solved problem. You designed your relational schema, put your data in, got a DBA or two to optimize your queries, and you were good to go. But everything changed when the data didn't fit into a single database anymore. NoSQL was born, with its many variations, and companies started to innovate significantly around data storage.

Build Systems

Build systems are so important for enterprise development. I created a couple of build systems from scratch and I believe that a good build system used properly is critical to the success of enterprise-scale projects.

Automation

Automation of pretty much any system is key. You'll never be able to scale by just adding people. The nice thing about automation is that it is such a simple concept: whatever you do, just have a program or a machine do it for you. It takes some hard work and imagination to automate certain aspects, but there is also a great deal of low-hanging fruit.

Testing

Testing is yet another pillar of professional software development. Everyone knows that, and these days many actually take it seriously and practice it. There is so much you can test, and how you go about it pretty much dictates your speed of development. There are many dimensions and approaches to testing that I plan to explore in detail with you.

Deployment

Deployments, such as of a database, used to be fairly straightforward. Now that systems are often extremely complicated, a large deployment is not so simple anymore, with private hosting, cloud providers, private clouds, containers, virtual machines, etc. An abundance of new technologies and approaches now exists, each with its own pros, cons and maturity level.

Distributed Systems

Distributed systems are another piece of the puzzle. With big data you need to split your processing across multiple machines. We will explore lots of options and innovation in this space as well.

Development Life Cycle

The software development life cycle is another topic that never ceases to generate a lot of controversy. There are multiple methodologies and everybody seems to have their own little nuances. Agile methods dominate here. However, in certain domains such as life-/mission-critical software and heavily regulated industries, other methods are more appropriate.

Open Source

Open source keeps growing, and even companies such as Microsoft, once considered as distant as possible from open source, are now fully on board. The penetration of open source into the enterprise is a very interesting trend.

Web Development

The web is still a boiling pot of ideas and disruption. New languages, new browsers, new runtimes: there is a lot to observe and discuss in this space.

Hiring

The so-called talent wars and the difficulty in finding good software engineers are very real. It appears to just get worse and worse.

Culture

A company's culture is often not very tangible, but when you take a step back and look at successful companies, it is evident that culture is real and can make or break you. Big undertakings can flop for no clear reason other than culture. A prominent recent example would be Google+.

The Past and The Future

There is a lot to learn from our history. Luckily, the history of software is relatively short and well documented. The phrase "Those who don't study history are doomed to repeat it," is just as appropriate for software.

We live in the future. You can see the change happening in real time. Science fiction turns into science faster and faster.


Posted by Sandeep Chanda on July 30, 2015

Agile and Scrum are great for delivering software in an iterative and predictable fashion, promoting development that is aligned with the expected outcome by accepting early feedback. That said, the quality and longevity of an application are often driven by the sound engineering practices put in place during development. While burn-down charts, velocity, and story-level progress measures have their value in providing a sense of completion, unless some process guidance is established to measure engineering success during the application lifecycle, it is difficult to be certain about the behaviour of the application at go-live and thereafter. Unpredictable behaviour does not instill confidence in the product, ultimately spoiling the reputation of the project team engaged in delivering quality software. The question then is: what metrics are key to reporting and measuring engineering work?

LOC vs LOPC

Traditionally, raw lines of code (LOC) were used to quantify engineering productivity, but the approach is significantly flawed: a seasoned programmer can produce the same outcome in significantly fewer lines of code than a newbie, and what really matters is code that sticks around. A better measure, in that case, is lines of productive code (LOPC). Measuring LOPC over a timeline gives you a good idea of individual developer productivity during the course of development and empowers you to optimize the team composition. For example, you can plot every 100 LOPC checked in by a programmer on a time graph, which helps you predict behaviour: a developer shows significant improvement if he or she is taking, on average, less time to deliver 100 LOPC than at the beginning of the program.
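
As a toy TypeScript sketch of that measurement (the shape of the check-in records is illustrative; in practice you would derive them from your version control history):

interface CheckIn {
  date: Date;
  lopc: number; // lines of productive code landed in this check-in
}

// Returns how many days each successive 100 LOPC took to deliver;
// a downward trend over the program suggests improving productivity.
function daysPerHundredLopc(checkIns: CheckIn[]): number[] {
  const intervals: number[] = [];
  const msPerDay = 24 * 60 * 60 * 1000;
  let carried = 0;
  let lastMilestone = checkIns[0].date;
  for (const c of checkIns) {
    carried += c.lopc;
    while (carried >= 100) {
      carried -= 100;
      intervals.push((c.date.getTime() - lastMilestone.getTime()) / msPerDay);
      lastMilestone = c.date;
    }
  }
  return intervals;
}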

Code Churn

Code churn is another critical factor in measuring engineering success. Refactoring causes code churn: the team may be producing lots of lines of code, but if the gap between LOC and LOPC keeps increasing, that indicates significant churn. This analysis helps you nudge a programmer who is not putting sufficient effort into writing quality code the first time around. Over a period of time, as team members gain a better understanding of the requirements, the churn should reduce. If that is not the case, it is an indicator that you need to make changes to your team composition.
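
The churn itself reduces to the gap between the two counts. A sketch:

// Churn as the fraction of raw lines that did not survive as productive
// code: 0 means everything stuck; values near 1 mean heavy rework.
function churnRatio(loc: number, lopc: number): number {
  return loc === 0 ? 0 : (loc - lopc) / loc;
}

console.log(churnRatio(1200, 800)); // 0.33..., a third of the code was reworked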


Posted by Sandeep Chanda on July 14, 2015

When it comes to enterprise data visualization, Power BI has been leading from the front. It not only allows you to create great visualizations from your datasets, transforming the way you spot trends and make decisions, it also provides a platform for developers to extend its capabilities. The Power BI REST API has been available for a while now. You can use it to retrieve datasets stored in Microsoft Azure and then create visualizations that suit your needs. You can also add the visualizations to your ASP.NET web application hosted in Azure, making them available to a wider portion of your target audience. Now the Power BI team has taken a leap forward by announcing the availability of extensions in the form of Power BI Visuals.

The Power BI visuals project provides a set of visualizations that you can use to extend the capabilities of Power BI. The 20-odd out-of-the-box visualizations are ready to use with default Power BI capabilities such as selection and filtering. The visuals are built using D3.js, but you also have the choice of leveraging WebGL, SVG, and other graphical technologies. The project also provides the framework to build and test your own visualizations. Everything compiles down to JavaScript and runs on all modern browsers. The project also contains a playground to demonstrate the capabilities. You can run the project using Node.js; however, you will also need Visual Studio 2013 (or above) and TypeScript 1.4 for Visual Studio to execute the sample solution.

Once you have cloned the repository from GitHub, you can use the npm install command to install the development dependencies. If you also want to test the visualizations, you will need the Chutzpah JavaScript test runner and jasmine-jquery placed in the src\clients\externals\thirdparty\jasminejquery folder inside the repository. You can then use the npm test command to run the tests.

The Power BI visualization lifecycle includes three methods on the IVisual interface that the project provides.

  1. init, called when the visual is first created.
  2. update, called whenever the host has an update for the visual.
  3. destroy, called when the visual is about to be disposed.

A cheer meter implementation has been provided here as an example to demonstrate the Power BI visual extensions.
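
As a minimal illustration of this lifecycle, here is a skeletal visual in TypeScript. This is a sketch only: the real framework supplies far richer option types than the simplified shapes used here.

class MinimalVisual /* implements the project's IVisual */ {
  private container: HTMLElement;

  // init is called once, when the host first creates the visual.
  public init(options: { element: HTMLElement }): void {
    this.container = options.element;
  }

  // update is called whenever the host has new data or a resize for the visual.
  public update(options: { dataViews: any[] }): void {
    this.container.textContent = "data views: " + options.dataViews.length;
  }

  // destroy is called when the visual is about to be disposed.
  public destroy(): void {
    this.container.textContent = "";
  }
}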


Posted by Sandeep Chanda on July 6, 2015

You can now create ASP.NET Docker containers from within your Visual Studio IDE with the release of Visual Studio 2015 Tools for Docker. Note that the tool is still in preview.

This is definitely good news for those looking to run ASP.NET on Linux. You can also very easily host the container on a Linux VM in Microsoft Azure. The tool installs the Docker command line interface (CLI) for full control of the container environment using PowerShell, and it provides an easy publishing user interface that integrates with the web publishing mechanism available in Visual Studio. It also automatically generates the necessary certificates.

You can use the tool to configure a Docker container based on a Linux VM to host in Azure. Next, you can create a publishing profile on your ASP.NET 5 web application project. The publishing profile will allow you to choose a Docker container as a publish target. Once you have configured the profile, you can right-click your web project to deploy updates to the configured container in literally a single click.

You can also automate the publishing using MSBuild, PowerShell or Bash script from a Linux or Mac machine. You need to specify the publishing profile in your choice of script.

The following example illustrates using PowerShell to publish your ASP.NET 5 web application to a hosted or on premise container as configured in the publishing profile:

.\aspnet5-Docker-publish.ps1 -packOutput $env:USERPROFILE\AppData\Local\Temp\PublishTemp -pubxmlFile .\aspnet5-Docker.pubxml

You also have the option to turn the container-creation step of the publish profile on or off: by setting the DockerBuildOnly configuration to true or false, you can choose to only create the image on your Docker host.

Note that .NET Core is still being built out for Docker, so for this release the tool uses the Mono runtime to provision your .NET applications.


Posted by Sandeep Chanda on June 26, 2015

Yesterday, the Microsoft Azure team announced the availability of the Azure Resource Usage and Rate Card APIs, which developers on the Azure platform can now leverage to programmatically retrieve usage and billing information. This will allow enterprises, in turn, to charge their customers based on usage. It was long overdue for multi-tenant systems hosted on Azure, and it enables accurate tracking of cloud spend while making the cost of your cloud operations more predictable to manage. Specifically, using the Billing API, there are two areas you can query at the subscription level:

  1. Resource usage: The resource usage REST API allows you to get data consumption at a subscription level. The API acts as a resource provider within Azure Resource Manager, so you can use its role-based access control features to allow or deny access to the data. The URI you call is the Usage Aggregates resource https://management.azure.com/subscriptions/{subscription-Id}/providers/Microsoft.Commerce/UsageAggregates. You need to pass the API version, the reported start and end date-times, and a granularity value of daily or hourly. What you get back, amongst other things, is the usage start and end times representing the timestamp of the actual recorded usage, the meter category (storage or otherwise), the meter subcategory (for example, whether the storage is geo-redundant), and finally the quantity in units (typically GB). You can also send the show details flag as true, in which case the response also shows the region and the project using the resource (see the sketch after this list).
  2. Rate card: The rate card REST API allows you to fetch the pricing information by locale, currency, and region. The URI you would call is the Rate Card resource https://management.azure.com/subscriptions/{subscription-Id}/providers/Microsoft.Commerce/RateCard. The response will give you the meter rates (based on the currency specified in the input) for all the available meter categories like cloud services, networking, virtual machines, etc.
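
As a sketch of what a call to the usage API looks like from TypeScript (the api-version shown was the preview value at the time of writing; the subscription ID and the Azure AD bearer token are assumed to be in hand):

const subscriptionId = "<subscription-id>";
const accessToken = "<azure-ad-bearer-token>"; // acquired separately from Azure AD

const query = [
  "api-version=2015-06-01-preview",
  "reportedStartTime=" + encodeURIComponent("2015-06-01T00:00:00+00:00"),
  "reportedEndTime=" + encodeURIComponent("2015-06-26T00:00:00+00:00"),
  "aggregationGranularity=Daily",
  "showDetails=true",
].join("&");

fetch(
  "https://management.azure.com/subscriptions/" + subscriptionId +
    "/providers/Microsoft.Commerce/UsageAggregates?" + query,
  { headers: { Authorization: "Bearer " + accessToken } }
)
  .then((res) => res.json())
  .then((usage) => {
    // Each aggregate carries the usage window, meter category and quantity.
    for (const item of usage.value) {
      const p = item.properties;
      console.log(p.usageStartTime, p.meterCategory, p.quantity, p.unit);
    }
  });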

The APIs can be used in various scenarios, such as finding the monthly spend, setting up alerts if usage varies against a specific threshold, and metering tenants based on usage.


Posted by Sandeep Chanda on June 17, 2015

The Command Query Responsibility Segregation (CQRS) pattern isolates the data querying aspects of an application from the insert, update and delete operations. The use cases for the pattern are limited, and it doesn't apply to request-response style scenarios where updated results must be displayed to the user immediately after an insert, update or delete. You must carefully evaluate your requirements to determine whether CQRS addresses your architectural needs. Typically, requirements that are more sophisticated than CRUD-driven information systems, such as information passing through different transient states of representation, are good candidates for CQRS.

Event Sourcing is a useful scenario in which the CQRS pattern can be leveraged. In an event sourcing scenario, the application stores state transitions as events in an event store. The read and write models may be in different states at any given moment, but the application eventually arrives at a consistent current state by playing the events back in sequence. A good example is a highly scalable hotel reservation system in which certain attributes of the reservation can be modified until midnight of the day before arrival. The query and command operations in this scenario can be handled separately using the CQRS pattern; the states may not be in sync, but they will eventually become consistent in determining the current state of the reservation.
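
A toy TypeScript sketch of the event sourcing side of this: the reservation's current state is never stored directly, but recovered by replaying the event log in order (the event types and fields below are illustrative).

type ReservationEvent =
  | { kind: "Booked"; room: string; arrival: string }
  | { kind: "RoomChanged"; room: string }
  | { kind: "Cancelled" };

interface ReservationState {
  room?: string;
  arrival?: string;
  cancelled: boolean;
}

// Replaying the full log always converges on the current state; a reader
// that lags behind simply sees an older-but-valid state in the meantime.
function replay(events: ReservationEvent[]): ReservationState {
  let state: ReservationState = { cancelled: false };
  for (const e of events) {
    switch (e.kind) {
      case "Booked":
        state = { ...state, room: e.room, arrival: e.arrival };
        break;
      case "RoomChanged":
        state = { ...state, room: e.room };
        break;
      case "Cancelled":
        state = { ...state, cancelled: true };
        break;
    }
  }
  return state;
}

console.log(replay([
  { kind: "Booked", room: "101", arrival: "2015-07-20" },
  { kind: "RoomChanged", room: "204" },
])); // { cancelled: false, room: "204", arrival: "2015-07-20" }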

CQRSlite is a useful, lightweight CQRS and Event Sourcing framework for starting to wrap your head around the pattern. To build a more robust architecture around the CQRS pattern, it is often useful to add a complex event processing (CEP) tool or a bus. Event Store is an open source, high-performance, scalable and highly available event store with complex event processing in JavaScript; client interfaces are provided in .NET in addition to native HTTP.


Posted by Sandeep Chanda on June 1, 2015

Last week's Google I/O 2015 event saw a slew of announcements, most notably the announcement of Android M. A number of new features were also announced in the upcoming 7.5 version of Google Play Services, which brings quite a few interesting features and optimizations to the entire Android ecosystem. The integration of Google Smart Lock with Android apps is a cool new addition in this version. Chrome allows you to save your OpenID- and password-based credentials using the Chrome Password Manager. You can now retrieve the stored credentials using the newly added Credential API in your Android apps. The credentials can be retrieved as part of the login process on apps running on any device.

The API automatically provides the necessary UI to prompt users to fetch and store credentials for future authentication. To store credentials, you can use the Credential API's Auth.CredentialsApi.save() method, and to retrieve stored credentials, the Auth.CredentialsApi.request() method. Beyond sign-in, you can also use the API to rapidly on-board users by partially filling in the sign-up form for the app.

Another interesting addition was the release of App Invites (beta). The feature allows you to share your app with people you know. You can create actionable invite cards and send them via email, enabling you to market your app better. The invitations can be sent via SMS as well, providing wider outreach. You can also personalize access to apps, such as adding discount codes for certain invitees. You send an invitation by creating an Intent using the AppInviteInvitation.IntentBuilder class. The Intent contains a title, the message, and deep-link data.

If the App is already installed on the user's device, the user can follow the invitation workflow generated by the deep link data. If the app is not installed, then they can choose to do so from the Play Store. The service is also available on iOS.

