
Posted by Gigi Sayfan on October 6, 2015

The value of a college education in general is a hotly debated topic these days, especially in the US. I'll focus here on computer science education. There is little question that the average (and even most of the less-than-average) comp sci graduates will have no problem landing a job right out of school. The shortage of good software engineers is getting more urgent as more and more of our world is run by software. So, economically it may be a good decision to pursue a comp sci degree, although many vocational programming schools have popped up and apparently place most of their graduates.

But, how much value does the comp sci education provide to the aspiring software engineer once they're out of school and successfully land their first job? How much value does an organization derive from the academic training of a college educated software engineer?

In my opinion, not much at all. I have a computer science degree. My son just started a comp sci program. I interview a lot of software engineering candidates with and without degrees.

The curriculum hasn't changed much since my days (more than 25 years ago). The material is appropriate academically and computer science is a fascinating field, but what is being taught has very little bearing on the day-to-day tasks of a software engineer.

The real world is super nuanced. Take, for example, the very important issue of performance. The performance of a system has many dimensions and can be improved in myriad ways: changing the requirements, improving perceived performance, providing approximate results, trading off space vs. speed vs. power, trading off flexibility vs. hard coding, selecting libraries, deciding how much security and debugging support to throw in, selecting file formats and communication protocols, hardware, caching and more. Then, there is of course algorithmic complexity, but even there, most of the time, it is about intelligently mixing together existing algorithms and data structures. In all my years in the industry, developing high-performance production systems and working with other engineers who developed low-level code and inner-loop algorithms, I don't recall a single case where formal analysis was used. It was always about empirical profiling, identifying hotspots and making appropriate changes.
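That empirical approach can be sketched in miniature (the function names here are purely illustrative): instead of formal analysis, time candidate implementations on representative input and keep whichever one profiles faster.

```javascript
// Two candidate implementations of the same task: summing 1..n.
function sumLoop(n) {
  var s = 0;
  for (var i = 1; i <= n; i++) s += i;
  return s;
}

function sumFormula(n) {
  return n * (n + 1) / 2;
}

// Crude wall-clock timing; a real profiler would give per-function hotspots.
function timeIt(fn, arg) {
  var start = Date.now();
  var result = fn(arg);
  return { ms: Date.now() - start, result: result };
}

var a = timeIt(sumLoop, 10000000);
var b = timeIt(sumFormula, 10000000);
console.log(a.result === b.result); // true: same answer, very different cost
```

In practice you would run this on realistic workloads, not toy input, but the principle is the same: measure, find the hotspot, change it, measure again.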

Note that pure computer science is very important for advancing the field, and it is applied by the small number of people who do basic research and core technology. It's just not especially relevant to the day-to-day work of the vast majority of software developers.

Posted by Gigi Sayfan on September 30, 2015

Open source development has never been more prevalent and successful than it is today. The biggest players regularly publish the latest and greatest technology with permissive licenses. Some companies even do their whole development in public — inviting anyone to fork, download and send pull requests.

Google has always been a strong advocate of open source and even funded a lot of non-Google open source projects through their Summer of Code program. Facebook publishes not just software, but even the specs to their servers.

Microsoft recently joined the party and open-sourced key parts of its technology stack; the company even develops core pieces openly on GitHub.

The big question is whether or not the agile development style mixes well with open source. On the face of it, they are polar opposites. Agile evolved from tightly knit co-located teams. Open source is all about strangers who have never met collaborating across the globe.

All the same, there are many similarities and shared principles between the two methodologies. Both paradigms put a great deal of emphasis on testing and automation. Both favor small, quick iterations. Obviously, co-located agile teams can publish the results of their work as open source. The more interesting case is an agile team that is not co-located but can still successfully take advantage of open source methodologies such as rigorous source control policies, pull requests and continuous integration tools.

Posted by Sandeep Chanda on September 29, 2015

The SQL Elastic Database Pool allows you to run SQL databases within a private pool of resources dedicated to that purpose. Azure SQL Database capabilities have been significantly enhanced recently to support high-end scalability, allowing management of fairly large-scale databases with huge amounts of compute and storage.

While cloud services in Azure were built to scale from the get-go, there were limitations around scaling the SQL database, especially if you were building a multi-tenant application. Not anymore. With the elastic database pool you can isolate the database needs for each customer and charge them based on consumption of actual resources.

It is very typical of SaaS-based applications to use a separate database for each tenant. Without the elastic pool, you either allocated more resources than you needed from the start, not knowing what actual customer consumption would be, or you started with a low allocation and risked poor performance. With the SQL elastic database pool, you don't have this problem anymore. You can create a private pool of resources (compute, I/O, etc.) and then run multiple isolated databases in it. You can set SLAs for each database for peak and low usage depending on predicted customer demand, leverage the management APIs to script the configuration of the databases and even run queries that span multiple databases (pretty cool!).
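To make the database-per-tenant idea concrete, here is a minimal routing sketch; the server name and the naming convention are hypothetical, and a real application would hand the resulting connection string to an actual SQL client rather than just build it.

```javascript
// Hypothetical elastic pool server; all databases live in the shared pool.
var poolServer = 'myserver.database.windows.net';

// Every tenant gets its own isolated database inside the pool, so the app
// only needs to route each request to the right database name.
function connectionStringFor(tenantId) {
  return 'Server=' + poolServer + ';Database=tenant_' + tenantId + ';';
}

console.log(connectionStringFor('contoso'));
// Server=myserver.database.windows.net;Database=tenant_contoso;
```

The pool then balances compute and I/O across the tenant databases, so a quiet tenant's unused resources can absorb another tenant's peak.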

The elastic database pool has three pricing tiers: Basic, Standard and Premium. These offer a pretty wide range of pricing and resource choices to set up your database pool. You can also migrate between the pricing tiers very easily, giving you the flexibility to move gradually to a higher tier as usage grows.

Posted by Gigi Sayfan on September 25, 2015

Innovation in programming language design has been very visible over the last two decades. Java was the catalyst: it brought concepts such as garbage collection, the virtual machine and real cross-platform development to the mainstream. It also had cool applets. A few years later Microsoft came out with C#, and a little later the scene exploded as dynamic languages like Python and Ruby gained a lot of traction and real-world success. Then a lot of other languages piggybacked on the JVM, and of course JavaScript became the de facto language of the web. But in the frenzy, good old static, compiled languages were left behind. C++ took its time trying to get with the program, and there were no other contenders for the main stage (D is nice, but never became popular).

Enter Mozilla. Mozilla has always created innovative stuff. Firefox was built in C++ on a technology called XPCOM (Cross Platform Component Object Model), which took Microsoft's very successful COM technology and recreated it as a cross-platform version from scratch. A couple of cool independent products were even developed on top of it (ActiveState's Python IDE, Komodo). But it was a very complicated piece of software.

Fast forward to today, and Mozilla is building a new prototype browser using a new language of its own making, known as Rust. Rust is unique. It brings memory management to the forefront and, in the process, also takes care of concurrent programming. It's able to detect at compile time a slew of issues that traditionally were discovered only at runtime. It is said, tongue in cheek, that if your Rust program compiles, it is correct. The problem is that getting a Rust program to compile is a non-trivial adventure. I played with Rust a little bit and it requires a lot of persistence. Right now, version 1.3 is out. There is an enthusiastic community around Rust and a lot of things are done right: there is a strong emphasis on documentation, there is support for projects, and packaging is not an afterthought (Python's Achilles' heel). Rust has great integration with C and other languages, so you can leverage many existing libraries.

I believe Rust is going to be a major player where critical, secure and performance-sensitive code is required. Give it a try, but don't count on it for serious production code just yet.

Posted by Sandeep Chanda on September 22, 2015

Webhooks are a popular pattern that has been available for a while now and is already exposed by popular service providers such as Dropbox, GitHub, PayPal and MailChimp, amongst others. Webhooks are a simple Pub/Sub model that allows a consumer to subscribe to events published by services. For example, you can subscribe to an event in Dropbox whenever a new file is created or an existing file is updated. Similarly, in GitHub you can subscribe to code commit information.

The .NET web development team recently announced support for Webhooks in ASP.NET. You can now both send and receive Webhooks with ASP.NET MVC 5 and Web API 2. Although still in preview, the feature provides out-of-the-box support for Dropbox, PayPal, Slack, WordPress and others. The receiver model allows you to receive Webhooks from these providers, as well as custom Webhooks that you may have created (such as from other ASP.NET MVC 5 web applications or Web API 2 services).

On the sender side, it provides support for storing and managing subscriptions. You can also use the sender and receiver parts in isolation. The way the Webhooks functionality works in ASP.NET is that the Webhooks server exposes the event subscription information and then issues an HTTP POST to each matching subscriber URI, based on the filters describing the event, with the message payload.

For the receiver side, you must install the appropriate NuGet packages for receiving the Webhooks. For example, you must install the Microsoft.AspNet.WebHooks.Receivers.GitHub package to receive Webhooks from GitHub. If you want to create a receiver from a custom ASP.NET site, then you must install the Microsoft.AspNet.WebHooks.Receivers.Custom package.

A good use case for Webhooks is to subscribe to events happening in SalesForce. Although SalesForce doesn't directly expose Webhooks, the SOAP services inside can be configured for the Webhooks pattern to publish events. The article here describes the details of configuring SalesForce for Webhooks.

Posted by Gigi Sayfan on September 15, 2015

One of the never ending flame wars involves the use of programmer editors vs. integrated development environments (IDEs). Most developers have a proclivity for one over the other. A small percentage use both. Let's examine the terrain first.

On the text editor side, we have the long-time competitors VI (or VIM) and Emacs, which are used pretty much exclusively in *nix environments (although there are Windows ports, of course). Then, there are more recent, modern cross-platform editors such as Sublime Text and Atom.

On the IDE side, the offerings used to be language specific, but recently most of the big players support multiple languages, often via plugins. Visual Studio is often considered the Holy Grail, but the various offerings from JetBrains are right up there. There are also Eclipse and NetBeans, and many more.

One group I'll ignore here is simple text editors like Notepad or Nano. No doubt some people use them for serious programming and can be very successful, but those are few and far between. Editing files is just a small part of the whole development lifecycle.

Modern software development of enterprise-scale systems involves managing multiple projects (often tens or hundreds) using multiple programming languages and tech stacks, along with source control, testing and deployment, interaction with other teams, integration with third-party services, lots of open source and communication with multiple people and teams.

Activities such as browsing and searching the source code and refactoring require structured support. Integrated debugging, including remote debugging, is a big concern.

In the end, your development environment must support your development process. The difference between the customizable programmer-editor crowd and the IDE crowd boils down to the integration question. Do you want to glue together various command-line tools, install and upgrade independent plugins and script your way to total control? Or do you prefer to let someone else do most of the work in the form of an IDE that still allows you to customize and script it if needed?

I am firmly on the IDE side. I've used Visual Studio for a long time, and sometimes Eclipse and NetBeans, but these days PyCharm is where I spend most of my time. I almost never install any plugins (a notable exception was ReSharper), and I use VIM often when I need to edit files on remote systems. I get many productivity benefits from the IDE (in particular, interactive debugging), but it does limit my ability to adopt new languages for serious development projects until they get adequate IDE support.

Posted by Sandeep Chanda on September 8, 2015

PubNub is a global data stream network providing real-time communication for IoT, mobile and web. It has a wide range of solutions for the most common event and data stream use cases, from home automation and device signalling to wearables, geolocation and financial data streaming.

Recently the company produced an open source platform called EON for creating charts and maps that provides real-time data stream animation. You can create analytical dashboards that reflect the true nature of the changing data, thereby allowing you to take action in the real world based on what you see.

For starters, you can include the EON JS and CSS files in your page to leverage the charting and map features.

<script type="text/javascript" src="http://pubnub.github.io/eon/lib/eon.js"></script>
<link type="text/css" rel="stylesheet" href="http://pubnub.github.io/eon/lib/eon.css" />

To create your own custom charts or maps, you can clone the repository, install the bower dependencies and then compile using gulp.

EON charts are based on C3.js, and you use the C3 chart generation config to configure your charts with data. The C3 config is supplied as the generate parameter to the eon.chart() method from the framework:

var channel = "c3-bar" + Math.random();
eon.chart({
  channel: channel,
  generate: {
    bindto: '#chart',
    data: { labels: true, type: 'bar' },
    bar: { width: { ratio: 0.5 } },
    tooltip: { show: false }
  }
});

You can then use the PubNub publishing feature to publish a data stream to the channel created above.

var pubnub = PUBNUB.init({
  publish_key: '<publish key>',
  subscribe_key: '<subscribe key>'
});
setInterval(function() {
  pubnub.publish({
    channel: channel,
    message: {
      columns: [
        ['Austin', Math.floor(Math.random() * 99)],
        ['New York', Math.floor(Math.random() * 99)],
        ['San Francisco', Math.floor(Math.random() * 99)],
        ['Portland', Math.floor(Math.random() * 99)]
      ]
    }
  });
}, 1000);

Similarly for maps, you can embed a map using eon.map():

eon.map({
  id: 'map',
  mb_token: '<token>',
  channel: channel,
  connect: connect
});

Then use the PubNub publishing feature to publish the map data stream:

var pubnub = PUBNUB.init({
  publish_key: '<publish key>',
  subscribe_key: '<subscribe key>'
});
setInterval(function() {
  pubnub.publish({
    channel: channel,
    message: [
      { "latlng": [31, -99] },
      { "latlng": [32, -100] },
      { "latlng": [33, -101] },
      { "latlng": [35, -102] }
    ]
  });
}, 1000);

Posted by Gigi Sayfan on September 4, 2015

Pair programming is one of the most controversial agile practices and is also the least commonly used in the field, as far as I can tell. I think there are very good reasons this is the case, but perhaps not the reasons everybody thinks about.

Pair programming consists of two programmers sitting side-by-side, working on a given task. One is coding; the other is observing, suggesting improvements, noticing mistakes and assisting in any other way, such as by looking up documentation.

The benefits are well known. For more information, you can download a PDF of The Costs and Benefits of Pair Programming.

But why didn't it take off like so many other agile practices that have become mainstream staples? The reason often mentioned is that managers don't like seeing two expensive engineers sitting together all day working on the same code. That may be true for some companies. But often it's the developers themselves who dislike pair programming.

There are many reasons some developers dislike pair programming. Many developers are simply loners who prefer to focus on the task at hand, and their flow is disrupted by the constant interaction. Many like to work unconventional hours, or from home or a coffee shop, and that makes them difficult to pair. The original extreme programming called for a 40-hour work week in which everybody arrived and departed at the same time, but in today's flexible work environment this is not always the case.

I, personally, have never seen full-fledged pair programming practiced and it was never even on the table as a viable alternative. My experience is based on many years of working for various startups that used many other agile practices. I tried to institute pair programming myself in a few companies, but it never caught on.

So, is pair programming a niche practice that can only be used by agile zealots who follow the letter of the law? Not necessarily. There are several situations where pair programming is priceless.

The most common one is debugging. I've used pair debugging countless times. Whenever I get stuck and can't make sense of what's going on I'll invite a fellow developer and together we are usually able to figure out the issue relatively quickly. The act of explaining what's going on (often referred to as "rubber ducking") is sometimes all it takes.

Another typical pair programming scenario is when someone is showing the ropes to a new member of the team. This is a quick way to take the newcomer through each and every step involved in completing a set task, showcasing all the frameworks, tools and shortcuts that can be used.

What are your thoughts on pair programming?

Posted by Sandeep Chanda on August 31, 2015

The Microsoft .NET team has lately been putting forth a great deal of effort to increase the footprint of the .NET Framework in the world of cross-platform and open source applications. The .NET Execution Engine (DNX) is the result of that effort. It is a cross-platform, open source software development kit that hosts .NET Core and the runtime to effortlessly run .NET applications on Windows, Mac and Linux systems. The behaviour is not altered, nor is the functionality reduced, if you migrate your applications from one platform to another, making your applications universal and platform agnostic. The SDK was built primarily targeting ASP.NET 5 applications; however, it can run any .NET-based application, such as a console app.

The engine completely takes care of the bootstrapping aspects of running a .NET application, making it extremely easy to develop one application that runs with equal ease on all three major operating systems. In addition, the engine leverages the package management benefits of NuGet, allowing you to build and distribute modules easily and efficiently. It can not only automatically cross-compile the packages for each environment, but also output NuGet packages for distribution. It also allows runtime editing of the source and in-memory compilation, letting you switch dependencies without having to re-compile the entire application.

Unlike older .NET applications, which use XML-based project and solution files, a DNX project is simply a folder with a project.json file. The folder contains the necessary artefacts, and the JSON file holds the project metadata, dependencies and target framework information. That is all you need to run the application (other than the application configuration files and binaries, of course).

This also makes versioning of dependencies pretty easy. The dependencies are referenced in projects using a global.json file that resides at the solution level. The project.json configuration file also supports commands that you can use to execute a .NET entry point with arguments. For example, the command to host the application on the web is a configuration entry in your project.json file. You can also distribute commands using NuGet and then use the engine to load them universally on a machine.
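A project.json along these lines illustrates the shape described above; the dependency names and version numbers are only illustrative:

```json
{
  "version": "1.0.0-*",
  "dependencies": {
    "Microsoft.AspNet.Mvc": "6.0.0-beta7"
  },
  "commands": {
    "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  }
}
```

The frameworks section is what drives the cross-compilation: the engine restores and builds the project once per listed target, and the web entry under commands is the kind of named entry point you can invoke from the DNX tooling.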

Posted by Gigi Sayfan on August 26, 2015

Decision-making is at the heart of any organized activity and, as such, carries significant risks and costs. The higher up you are on the totem pole, the more risk, cost and impact are associated with every decision you make. For example, if you are the CTO and decide to switch from private hosting to the cloud, that has enormous ramifications. Obviously, such a switch is not a simple process; it will entail a thorough evaluation, a prototype and a gradual migration. This is often the reason that many large organizations seem to move at such a glacial pace. But there are many decisions that can be made and acted upon quickly, and yet often take a very long time.

This is often tied to the reporting and approval structure in the organization. The level of delegation and the freedom of underlings to make decisions on their own without approval is often the key factor.

There are many good reasons for managers to require approval: maintaining control, ensuring that good decisions are being made, staying up to date and informed on higher-level decisions. The flip side is that the more a manager is involved in the decision-making process, the less time he or she has to interact and coordinate with other managers and his or her own superiors, study the competition, think of new directions and attend to many other management activities. This is all understood, and every manager eventually finds the right balance.

What many managers miss is the impact on their subordinates. Very often, a delay in decision-making costs much more than a quick bad decision would. Let's start from the ideal situation: your employees always make the right decision. In this case, any delay due to the need to ask for approval is a net loss. The more control a manager maintains, and the more direct reports he or she manages, the more loss accrues.

But what about the bad decisions that such a process prevents? This is obviously a win in terms of one less bad decision, but the downside is that in the long run your subordinates will not feel accountable. They'll expect you to be the ultimate filter.

If you're aware of this, then the path forward is pretty clear: delegate as much as you feel comfortable with (or even more). Let your underlings make mistakes and help them improve over time. Benefit from streamlined productivity and focus on the really critical decisions.

Another important aspect is that not all bad decisions or mistakes are equal. Some mistakes are easily fixed. Decisions that may result in easily reversible mistakes are classic candidates for delegation. If the cost of a bad decision is low, just stay out of the loop.
