Posted by Gigi Sayfan on October 6, 2015

The value of college education in general is a hotly debated topic these days, especially in the US. I'll focus here on computer science education. There is little question that the average (and even most of the less than average) comp sci graduates will have no problem landing a job right out of school. The shortage of good software engineers is getting more urgent as more and more of our world is run by software. So, economically it may be a good decision to pursue a comp sci degree, although many vocational programming schools have popped up and apparently place most of their graduates.

But, how much value does the comp sci education provide to the aspiring software engineer once they're out of school and successfully land their first job? How much value does an organization derive from the academic training of a college educated software engineer?

In my opinion, not much at all. I have a computer science degree. My son just started a comp sci program. I interview a lot of software engineering candidates with and without degrees.

The curriculum hasn't changed much since my days (more than 25 years ago). The material is appropriate academically and computer science is a fascinating field, but what is being taught has very little bearing on the day-to-day tasks of a software engineer.

The real world is super nuanced. Take, for example, the very important issue of performance. The performance of a system has so many dimensions and can be improved in myriad ways: changing the requirements, improving perceived performance, providing approximate results, trading off space vs. speed vs. power, trading off flexibility vs. hard coding, selecting libraries, deciding how much security and debugging support to throw in, selecting file formats and communication protocols, hardware, caching and more. Then, there is of course algorithmic complexity, but even there, most of the time, it is about intelligently mixing together existing algorithms and data structures. In all my years in the industry, developing high-performance production systems and working with other engineers who developed low-level code and inner-loop algorithms, I don't recall a single case where formal analysis was used. It was always about empirical profiling, identifying hotspots and making appropriate changes.

Note that pure computer science is very important for advancing the field, and it is applied by a very small number of people who do basic research and build core technology. It's just not especially relevant to the day-to-day work of the vast majority of software developers.

Posted by Sandeep Chanda on September 29, 2015

The SQL elastic database pool allows you to run SQL databases against a private pool of resources dedicated for that purpose. Azure SQL Database capabilities have been significantly enhanced recently to support high-end scalability, allowing management of fairly large-scale databases with huge amounts of compute and storage.

While cloud services in Azure were built to scale from the get-go, there were limitations around scaling the SQL database, especially if you were building a multi-tenant application. Not anymore. With the elastic database pool you can isolate the database needs for each customer and charge them based on consumption of actual resources.

It is very typical of SaaS-based applications to use a separate database for each tenant. Without the elastic pool, you either allocated more resources than needed from the start, not knowing what actual customer consumption would be, or you started with a low allocation of resources and risked poor performance. With the SQL elastic database pool, you don't have this problem anymore. You can create a private pool of resources (compute, I/O, etc.) and then run multiple isolated databases against it. You can set the SLAs for each database for peak and low usage depending on the predicted customer usage. You can leverage the management APIs to script the configuration of the databases. In addition, you can run queries that span multiple databases (pretty cool!).

The elastic database pool has three pricing tiers: Basic, Standard and Premium. These offer a pretty wide range of pricing and resource choices to set up your database pool. You can also migrate very easily between the pricing tiers, giving you the flexibility to gradually move to a higher tier as usage grows.

Posted by Gigi Sayfan on September 25, 2015

Innovation in programming language design has been very visible over the last two decades. Java was the catalyst. It introduced many novel concepts, such as garbage collection, the Java Virtual Machine and real cross-platform development, to the mainstream. It also had cool applets. A few years later Microsoft came out with C#, and a little later the scene exploded with dynamic languages like Python and Ruby gaining a lot of traction and real-world success. Then, a lot of other languages piggybacked on the JVM, and of course JavaScript became the de facto language of the web. But in the frenzy, good old static, compiled languages were left behind. C++ took its time trying to get with the program and there were no other contenders for the main stage (D is nice, but never became popular).

Enter Mozilla, a company that has always created innovative stuff. Firefox was built in C++ on a technology called XPCOM (Cross Platform Component Object Model), which took Microsoft's very successful COM technology and created a cross-platform version from scratch. A couple of cool independent products were even developed on top of it (such as ActiveState's Komodo IDE for Python). But it was a very complicated piece of software.

Fast forward to today, and Mozilla is building a new prototype browser using a new language of its own making, known as Rust. Rust is unique. It brings memory management to the forefront and, in the process, also takes care of concurrent programming. It is able to detect a slew of issues at compile time that traditionally were discovered only at runtime. It is said, tongue in cheek, that if your program compiles, it is correct. The problem is that getting a Rust program to compile is a non-trivial adventure. I played with Rust a little bit and it requires a lot of persistence. Right now, version 1.3 is out. There is an enthusiastic community around Rust and a lot of things are done right. There is a lot of emphasis on documentation, there is support for projects, and packaging is not an afterthought (Python's Achilles' heel). Rust has great integration with C and other languages, so you can leverage many existing libraries.

I believe Rust is going to be a major player where critical, secure and performance-sensitive code is required. Give it a try, but don't count on it for serious production code just yet.

Posted by Sandeep Chanda on September 22, 2015

Webhooks are a popular pattern that has been around for a while now and is already exposed by some popular service providers such as DropBox, GitHub, PayPal and MailChimp, amongst others. A Webhook is a simple Pub/Sub model that allows a consumer to subscribe to events published by a service. For example, you can subscribe to an event in DropBox that fires whenever a new file is created or an existing file is updated. Similarly, in GitHub you can subscribe to code commit events.

The .NET web development team recently announced support for Webhooks in ASP.NET. You can now both send and receive Webhooks with ASP.NET MVC 5 and Web API 2. Although still in preview, it provides out-of-the-box support for DropBox, PayPal, Slack, WordPress, etc. The receiver model allows you to receive Webhooks from these providers as well as custom Webhooks that you may have created (such as other ASP.NET MVC 5 web applications or Web API 2 services).

On the sender side, it provides support for storing and managing subscriptions. You can also use the sender and receiver parts in isolation. The way the Webhooks functionality works in ASP.NET is that the Webhooks server exposes the event subscription information and then sends an HTTP POST, along with the message payload, to each subscriber URI that matches the filters describing the event.
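
To make the pattern itself concrete, here is a minimal sketch of a generic receiver endpoint in Node.js with Express. This is not the ASP.NET receiver described above; the /webhooks/incoming route, the x-signature header and the shared secret are illustrative assumptions, since real providers each define their own signing scheme:

var crypto = require('crypto');
var express = require('express');

var app = express();
var SECRET = process.env.WEBHOOK_SECRET; // shared secret agreed upon with the sender (assumption)

// Capture the raw request body so the signature can be verified against the exact bytes sent.
app.use(function (req, res, next) {
  var chunks = [];
  req.on('data', function (chunk) { chunks.push(chunk); });
  req.on('end', function () {
    req.rawBody = Buffer.concat(chunks);
    next();
  });
});

app.post('/webhooks/incoming', function (req, res) {
  // Hypothetical HMAC check; header names and signing algorithms vary by provider.
  var expected = crypto.createHmac('sha256', SECRET).update(req.rawBody).digest('hex');
  if (req.get('x-signature') !== expected) {
    return res.sendStatus(401);        // reject payloads that were not signed by the sender
  }
  var event = JSON.parse(req.rawBody); // the message payload POSTed by the Webhook server
  console.log('Webhook event received:', event);
  res.sendStatus(200);                 // acknowledge quickly; do any heavy processing asynchronously
});

app.listen(3000);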

For the receiver side, you must install the appropriate NuGet packages for receiving the Webhooks. For example, you must install the Microsoft.AspNet.WebHooks.Receivers.GitHub package to receive Webhooks from GitHub. If you want to create a receiver from a custom ASP.NET site, then you must install the Microsoft.AspNet.WebHooks.Receivers.Custom package.

A good use case for Webhooks is to subscribe to events happening in SalesForce. Although SalesForce doesn't directly expose Webhooks, the SOAP services inside can be configured for the Webhooks pattern to publish events. The article here describes the details of configuring SalesForce for Webhooks.

Posted by Gigi Sayfan on September 15, 2015

One of the never ending flame wars involves the use of programmer editors vs. integrated development environments (IDEs). Most developers have a proclivity for one over the other. A small percentage use both. Let's examine the terrain first.

On the text editor side, we have the longtime competitors VI (or VIM) and Emacs, which are used pretty much exclusively in *nix environments (although there are Windows ports, of course). Then, there are recent and modern cross-platform editors such as Sublime Text and Atom.

On the IDE side, the offerings used to be language specific, but recently most of the big players support multiple languages, often via plugins. Visual Studio is often considered the Holy Grail, but the various offerings from JetBrains are right up there. There are also Eclipse, NetBeans and many more.

One group I'll ignore here is simple text editors like Notepad or Nano. No doubt some people use them for serious programming and can be very successful, but those are few and far between. Editing files is just a small part of the whole development lifecycle.

Modern software development of enterprise-scale systems involves managing multiple projects (often tens or hundreds) that use multiple programming languages and tech stacks, along with source control, testing and deployment, integration with third-party services, lots of open source, and communication with multiple people and other teams.

Activities such as browsing and searching the source code and refactoring require structured support. Integrated debugging, including remote debugging, is a big concern.

In the end, your development environment must support your development process. The difference between the customizable programmer editor crowd and the IDE crowd boils down to the integration question. Do you want to glue together various command-line tools, install and upgrade independent plugins and script your way to total control? Or do you prefer to let someone else do most of the work in the form of an IDE that still allows you to customize and script it if needed?

I am firmly on the IDE side. I've used Visual Studio for a long time, and at times I've used Eclipse and NetBeans, but these days PyCharm is where I spend most of my time. I almost never install any plugins (a notable exception was ReSharper), and I often use VIM when I need to edit files on remote systems. I get many productivity benefits from the IDE (in particular interactive debugging), but it does limit my ability to adopt new languages for serious development projects until they get adequate IDE support.

Posted by Sandeep Chanda on September 8, 2015

PubNub is a global data stream network providing real-time communication for IoT, mobile and web. It has a wide range of solutions for the most common event and data stream needs, ranging from home automation and device signalling to wearables, geolocation and financial data streaming.

Recently the company released an open source framework called EON for creating charts and maps with real-time data stream animation. You can create analytical dashboards that reflect the true nature of the changing data, thereby allowing you to take action in the real world based on what you see.

To get started, include the EON JS and CSS files in your page to leverage the charting and map features:

<script type="text/javascript" src="http://pubnub.github.io/eon/lib/eon.js"></script>
<link type="text/css" rel="stylesheet" href="http://pubnub.github.io/eon/lib/eon.css" />

To create your own custom charts or maps, you can clone the repository, install the bower dependencies and then compile using gulp.

EON charts are based on C3.js, and you use the standard C3 chart generation config to configure your charts with data. The C3 config is supplied as the generate parameter to the eon.chart() method:

var channel = "c3-bar" + Math.random();

eon.chart({
  channel: channel,
  generate: {
    bindto: '#chart',
    data: {
      labels: true,
      type: 'bar'
    },
    bar: {
      width: {
        ratio: 0.5
      }
    },
    tooltip: {
      show: false
    }
  }
});

You can then use the PubNub publishing feature to publish a data stream to the channel created above.

var pubnub = PUBNUB.init({
  publish_key: '<publish key>',
  subscribe_key: '<subscribe key>'
});

setInterval(function(){
  pubnub.publish({
    channel: channel,  // same channel the chart subscribes to
    message: {
      columns: [
        ['Austin', Math.floor(Math.random() * 99)],
        ['New York', Math.floor(Math.random() * 99)],
        ['San Francisco', Math.floor(Math.random() * 99)],
        ['Portland', Math.floor(Math.random() * 99)]
      ]
    }
  });
}, 1000);

Similarly for maps, you can embed a map using eon.map():

var channel = 'eon-map';      // PubNub channel carrying the location stream

eon.map({
  id: 'map',                  // DOM element that will host the map
  mb_token: '<mapbox token>', // Mapbox access token
  channel: channel,
  connect: connect            // callback (defined elsewhere) invoked once the map is connected
});

Then use the PubNub publishing feature to publish the map data stream:

var pubnub = PUBNUB.init({
  publish_key: '<publish key>',
  subscribe_key: '<subscribe key>'
});

setInterval(function(){
  pubnub.publish({
    channel: channel,  // same channel the map subscribes to
    message: [
      {"latlng": [31, -99]},
      {"latlng": [32, -100]},
      {"latlng": [33, -101]},
      {"latlng": [35, -102]}
    ]
  });
}, 1000);

Posted by Sandeep Chanda on August 31, 2015

The Microsoft .NET team has lately been putting forth a great deal of effort to increase the footprint of the .NET Framework in the world of cross-platform and open source applications. The .NET Execution Engine (DNX) is the result of that effort. It is a cross-platform, open source software development kit that hosts .NET Core and the runtime needed to run .NET applications effortlessly on Windows, Mac and Linux based systems. The behaviour is not altered, nor is the functionality reduced, if you migrate your applications from one platform to another, making your applications universal and platform agnostic. The SDK was built primarily targeting ASP.NET 5 applications; however, it can run any .NET based application, such as a console app.

The engine completely takes care of the bootstrapping aspects of running a .NET application, making it extremely easy for you to develop one application that runs with equal ease on all three main operating systems. In addition, the engine leverages the package management benefits of NuGet, thereby allowing you to build and distribute modules easily and efficiently. Not only can it automatically cross-compile the packages for each environment, it can also output NuGet packages for distribution. It also allows runtime editing of the source and in-memory compilation, letting you switch dependencies without having to re-compile the entire application.

Unlike older .NET applications, which use XML-based project and solution files, a DNX project is simply a folder with a project.json file. The folder contains the necessary artefacts and the JSON file holds the project metadata, dependencies and target framework information. That is all you need to run the application (other than the application configuration files and binaries, of course).

This also makes versioning of dependencies pretty easy. Dependencies between projects are resolved using a global.json file that lives at the solution level. The project.json configuration file also supports commands that you can use to execute a .NET entry point with arguments. For example, the command used to host the application on the web is simply a configuration entry in your project.json file. You can also distribute commands using NuGet and then use the engine to install them globally on a machine.
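
As a rough sketch of what this looks like (the package names, versions and command line below are illustrative of the beta-era tooling rather than definitive), a minimal project.json might contain:

{
  "version": "1.0.0-*",
  "dependencies": {
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-beta7"
  },
  "commands": {
    "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://localhost:5000"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}

Running a command such as dnx web from the project folder would then execute the matching entry against whichever of the declared frameworks is active.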

Posted by Sandeep Chanda on August 24, 2015

The recently concluded ContainerCon generated quite a lot of excitement around container technologies. The updates from the Linux Containers (LXC) project were particularly interesting. Canonical, the company spearheading the work behind Ubuntu's fast, dense and secure container management project, shared the concept of the Linux container hypervisor (LXD) last year. LXD is a new stratum on top of LXC that brings the advantages of legacy hypervisors into the modern world of containers. What is particularly important is that the LXD project provides RESTful APIs that can be used to start, stop, clone and migrate containers on any given host. Hosts can also run LXD clusters, delivering cloud infrastructure at higher speed and lower cost. LXD can also run alongside Docker, which allows resources to be used as a pool of micro-services delivering complex web applications. The most fascinating aspect of LXD is that the underlying container technology is decoupled from the RESTful API driving the functionality, which allows it to be used as a cross-functional container management layer.

The RESTful API allows communication between LXD and its clients. Calls are made over HTTP, encapsulated in SSL. You can do a GET on / to get all the available endpoints, which also gives you the list of available API versions. You can then do a GET on /[version]/images/* to get the list of publicly available images. The API also supports a recursion argument to optimize queries against large collections.

A GET operation on the /[version]/containers API gets the list of containers; it also specifies the authentication and the operation type. A POST operation on the same API allows you to create a container; the return type is either a background operation or an error. There are a bunch of management operations you can perform on each container by using the /[version]/containers/[name] API.
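
As a minimal sketch of talking to this API from a client (assuming LXD has been configured to listen over HTTPS on its default port 8443 with client certificate authentication; the host name and certificate paths are placeholders):

// List the containers on a remote LXD host via its RESTful API.
var https = require('https');
var fs = require('fs');

var options = {
  host: 'lxd-host.example.com',        // placeholder LXD host
  port: 8443,                          // LXD's default HTTPS port
  path: '/1.0/containers',             // GET returns the list of containers
  method: 'GET',
  key: fs.readFileSync('client.key'),  // client certificate authentication
  cert: fs.readFileSync('client.crt'),
  rejectUnauthorized: false            // LXD typically presents a self-signed certificate
};

https.request(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    // The response metadata holds the URLs of the individual containers.
    console.log(JSON.parse(body));
  });
}).end();

A POST to the same path with a JSON body describing the desired container would kick off the background creation operation mentioned above.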

Posted by Gigi Sayfan on August 14, 2015

What is the best programming language across the board? There is no such thing, of course. Each programming language has different strengths and weaknesses, different design goals and an entire ecosystem surrounding it that includes community, libraries, tools, and documentation — all of which are partially dependent on how long the language has been around.

Alright, so there isn't a best programming language. Let's just use the old adage and pick the best tool for the job. I'm sorry to disappoint you again. There isn't a "best tool for the job" either. It is very easy for each particular project to rule out a lot of languages, and sometimes you'll end up with just a single choice. But more often than not, your choice (or your short list of choices) will be dictated by external constraints and not by the "superiority" of the language you end up with.

Most large software development organizations have a limited number of programming languages with which they work (sometimes just one). Sure, for some standalone tools you may use whatever you want, but we're talking here about your bread and butter. The big enterprise systems, your cash cow.

Those systems often have tens, if not hundreds, of man years invested in them. They have special processes, tools, build systems, test systems, deployment systems and operational experience. Introducing a new programming language into the mix will have so much upfront cost that it is pretty much out of the question. This is especially true when the upside is that you'll be able to use a cool new language and get to experience all its rough edges first hand.

But suppose you were able to combat all that: you've hypnotized everyone and persuaded upper management to let you rewrite the organization's system from scratch using the "best" language ever. Cool, what now?

Am I against trying new languages? Quite the contrary. The recent trend towards SOA and microservices provides a great escape hatch. Those services depend on each other at the API level only. If your architecture is already service-oriented, you can easily experiment with implementing small new services in whatever language you choose, or you can migrate a small existing service. It may be a lot of effort because you'll still need to put the necessary infrastructure in place, but it may be acceptable for a small, non-critical service not to have the best initial infrastructure.

The other case for trying new languages is of course starting a greenfield project or a whole new company, where you really start from scratch.

I will not discuss specific languages today, but here is a short list of some relatively new and very promising languages I have my eye on:

  • Go
  • Rust
  • Elm
  • Julia
  • Scala
  • Clojure

A special mention goes to C#, which is not new, but seems to be the only language that manages to add new features and capabilities and push the envelope of mainstream programming languages without becoming bloated, inconsistent and confusing.

Posted by Sandeep Chanda on August 11, 2015

Last month, Microsoft finally announced the general availability of Visual Studio 2015, along with Update 5 for Visual Studio 2013 to enable support for some of the updated framework features and the latest .NET Framework 4.6. While the release of Visual Studio 2015 was quite the fanfare event, the release of Framework 4.6 has been marred by some inherent issues in the framework. We are still awaiting announcements from Microsoft as to when the issues are expected to be resolved.

As far as Visual Studio 2015 is concerned, it comes with some cool new features, a bunch of which are illustrated by Scott Guthrie in his blog post. There are some significant tooling improvements in terms of supporting JavaScript frameworks such as Node.js. Configuration is now managed through JSON-based configuration files, and the JSON editor is rich, with support for Bower. The JavaScript editor now supports rich syntax for Angular.js and Facebook's React.js framework.

I liked the out-of-the-box integration with Azure Application Insights (much as they did with integrating Azure Web Sites with an ASP.NET Visual Studio template some time back). It is a nifty add-in to instrument user behaviour in your application without having to program it in. Of course, if you want more, you can still program against the Data Collection API, but this out-of-the-box integration gives you immediate traceability.

The release also offers ASP.NET 5.0 preview templates. You can now create an ASP.NET 5.0 application (a radical shift from traditional ASP.NET web applications) using an open source, cross-platform framework for building modern web applications. ASP.NET 5 applications run on the .NET Execution Environment (more on this in the next post), which allows them to run cross-platform (equally efficiently on Mac, Linux and Windows).

After creating an ASP.NET 5.0 project, you will see a bunch of new additions in the solution. You have the new Startup.cs class to define the ASP.NET pipeline using configuration. You will also notice a bunch of .json files for packaging different components and configuration information, as well as configuration for task-based JS runners like Gulp.

Another item you will find is the wwwroot folder. It represents the root location from which HTTP requests are handled, and it holds the site's static content.

Edition-wise, there are certain changes. You now get a fully loaded Enterprise Edition, which replaces the Ultimate edition of the outgoing platform. More on the tools, and especially the .NET Execution Environment, in future posts!
