Posted by Gigi Sayfan on September 4, 2015

Pair programming is one of the most controversial agile practices and is also the least commonly used in the field, as far as I can tell. I think there are very good reasons this is the case, but perhaps not the reasons everybody thinks about.

Pair programming consists of two programmers sitting side-by-side, working on a given task. One codes while the other observes, suggests improvements, notices mistakes and assists in other ways, such as looking up documentation.

The benefits are well known. For more information, you can download a PDF of "The Costs and Benefits of Pair Programming."

But why didn't it take off like so many other agile practices that have become mainstream staples? The reason often mentioned is that managers don't like seeing two expensive engineers sitting together all day, working on the same code. That may be true for some companies, but often it's the developers themselves who dislike pair programming.

There are many reasons some developers dislike pair programming. Many developers are simply loners who prefer to focus on the task at hand, and constant interaction disrupts their flow. Many like to work unconventional hours, or from home or a coffee shop, which makes them difficult to pair. The original Extreme Programming called for a 40-hour work week in which everybody arrived and departed at the same time, but in today's flexible work environment that is rarely the case.

Personally, I have never seen full-fledged pair programming practiced; it was never even on the table as a viable option. My experience is based on many years of working for various startups that used many other agile practices. I tried to institute pair programming myself at a few companies, but it never caught on.

So, is pair programming a niche practice that can only be used by agile zealots who follow the letter of the law? Not necessarily. There are several situations where pair programming is priceless.

The most common one is debugging. I've used pair debugging countless times. Whenever I get stuck and can't make sense of what's going on, I'll invite a fellow developer over, and together we are usually able to figure out the issue relatively quickly. The act of explaining the problem out loud (often referred to as "rubber ducking") is sometimes all it takes.

Another typical pair programming scenario is showing the ropes to a new member of the team. It is a quick way to take the newcomer through each step involved in completing a given task while showcasing all the frameworks, tools and shortcuts that can be used.

What are your thoughts on pair programming?


Posted by Sandeep Chanda on August 31, 2015

The Microsoft .NET team has lately been putting a great deal of effort into increasing the footprint of the .NET Framework in the world of cross-platform and open source applications. The .NET Execution Environment (DNX) is the result of that effort. It is a cross-platform, open source software development kit that hosts .NET Core and the runtime needed to run .NET applications on Windows, Mac and Linux systems. The behaviour is not altered, nor is the functionality reduced, if you migrate your applications from one platform to another, making your applications universal and platform agnostic. The SDK was built primarily targeting ASP.NET 5 applications; however, it can run any .NET-based application, such as a console app.

The engine completely takes care of the bootstrapping aspects of running a .NET application, making it extremely easy for you to develop one application that runs with equal ease on all three major operating systems. In addition, the engine leverages the package management benefits of NuGet, allowing you to build and distribute modules easily and efficiently. It can not only automatically cross-compile the packages for each environment, but also output NuGet packages for distribution. It also allows runtime editing of the source and in-memory compilation, letting you switch dependencies without having to re-compile the entire application.

Unlike older .NET applications, which use XML-based project and solution files, a DNX project is simply a folder with a project.json file. The folder contains the necessary artefacts, and the JSON file holds the project metadata, dependencies and target framework information. That is all you need to run the application (other than the application configuration files and binaries, of course).

This also makes versioning of dependencies pretty easy. Dependencies are referenced across projects using a global.json file that lives at the solution level. The project.json configuration file also supports commands that you can use to execute a .NET entry point with arguments. For example, the command for hosting the application on the web is a configuration entry in your project.json file. You can also distribute commands using NuGet, and then use the engine to load them universally on a machine.
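
As a rough sketch, a minimal project.json might look something like the following. The package names and versions here are illustrative assumptions and will vary by project:

    {
      "version": "1.0.0-*",
      "dependencies": {
        "Microsoft.AspNet.Hosting": "1.0.0-beta7"
      },
      "commands": {
        "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel"
      },
      "frameworks": {
        "dnx451": { },
        "dnxcore50": { }
      }
    }

Here, the web entry under commands is what you would invoke with dnx web, and the frameworks section lists the targets the project compiles for.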


Posted by Gigi Sayfan on August 26, 2015

Decision-making is at the heart of any organized activity and, as such, carries significant risks and costs. The higher up you are on the totem pole, the more risk, cost and impact are associated with every decision you make. For example, if you are the CTO and decide to switch from private hosting to the cloud, that has enormous ramifications. Obviously, such a switch is not a simple process. It will entail a thorough evaluation, a prototype and a gradual migration. This is often the reason that many large organizations seem to move at such a glacial pace. But many decisions that could be made and acted upon quickly still often take a very long time.

This is often tied to the reporting and approval structure of the organization. The key factor is usually the level of delegation: how much freedom underlings have to make decisions on their own, without approval.

There are many good reasons for managers to require approval: maintaining control, ensuring that good decisions are being made, and staying up to date on higher-level decisions. The flip side is that the more a manager is involved in the decision-making process, the less time he or she has to interact and coordinate with other managers and superiors, study the competition, think of new directions and handle many other management activities. This is all understood, and every manager eventually finds the right balance.

What many managers miss is the impact on their subordinates. Very often, a delay in decision-making costs much more than a quick bad decision would. Let's start from the ideal situation, in which your employees always make the right decision. In that case, any delay due to the need to ask for approval is a net loss. The more control a manager maintains, and the more direct personnel he or she manages, the more loss accrues.

But what about the bad decisions that such processes prevent? Each one is obviously a win in the sense of one less bad decision, but the downside is that in the long run your subordinates will not feel accountable. They'll expect you to be the ultimate filter.

If you're aware of this, then the path forward is pretty clear: delegate as much as you feel comfortable with (or even more). Let your underlings make mistakes and help them improve over time. You will benefit from streamlined productivity and be able to focus on the truly critical decisions.

Another important aspect is that not all bad decisions or mistakes are equal. Some mistakes are easily fixed. Decisions that may result in easily reversible mistakes are classic candidates for delegation. If the cost of a bad decision is low, just stay out of the loop.


Posted by Sandeep Chanda on August 24, 2015

The recently concluded ContainerCon generated quite a lot of excitement around container technologies. The updates from the Linux Containers (LXC) project were particularly interesting. Canonical, the company spearheading Ubuntu's fast, dense and secure container management work, shared the concept of the Linux Container Hypervisor (LXD) last year. LXD is a new stratum on top of LXC that brings the advantages of legacy hypervisors into the modern world of containers. What is particularly important is that the LXD project provides a RESTful API that can be used to start, stop, clone and migrate containers on any given host. Hosts can also run LXD clusters, delivering cloud infrastructure at higher speed and lower cost. LXD can also run alongside Docker, which allows resources to be used as a pool of micro-services delivering complex web applications. The most fascinating aspect of LXD is that the underlying container technology is decoupled from the RESTful API driving the functionality, which allows LXD to be used as a cross-functional container management layer.

The RESTful API handles communication between LXD and its clients. The calls are made over HTTP, encapsulated in SSL. A GET on / returns all the available endpoints, including the list of available API versions. You can then do a GET on /[version]/images/* to get the list of publicly available images. The API also supports a recursion argument to optimize queries against large collections.

A GET operation on the /[version]/containers endpoint returns the list of containers; the response also specifies the authentication and operation type. A POST operation on the same endpoint lets you create a container; the return value is either a background operation or an error. There is also a set of management operations you can perform on each container using the /[version]/containers/[name] endpoint.
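
As a minimal sketch, here is how a Node.js/TypeScript client might issue these GET requests. The host, port and certificate file names are assumptions for illustration; LXD expects a trusted client certificate, and its server certificate is self-signed by default.

    // Minimal sketch of querying the LXD REST API from Node.js.
    import * as fs from "fs";
    import * as https from "https";

    const base: https.RequestOptions = {
      host: "localhost",                    // assumed LXD host
      port: 8443,                           // assumed HTTPS port
      cert: fs.readFileSync("client.crt"),  // client cert trusted by LXD
      key: fs.readFileSync("client.key"),
      rejectUnauthorized: false,            // LXD's server cert is self-signed
    };

    function get(path: string): Promise<unknown> {
      return new Promise((resolve, reject) => {
        https
          .get({ ...base, path }, (res) => {
            let body = "";
            res.on("data", (chunk) => (body += chunk));
            res.on("end", () => resolve(JSON.parse(body)));
          })
          .on("error", reject);
      });
    }

    // GET / lists the supported API versions, e.g. ["/1.0"].
    get("/").then(console.log);
    // GET /1.0/containers lists the containers on this host.
    get("/1.0/containers").then(console.log);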


Posted by Gigi Sayfan on August 17, 2015

The Agile style of development is considered best for many organizations and projects. There is plenty of criticism too, but overall it has gained significant mind share and developers use it everywhere. The big question is, "What is Agile development?" You could look at the Agile Manifesto, which is a great collection of values and principles. However, the Agile Manifesto is not a recipe. If you want to create an Agile process out of it, you have a lot to figure out. Let's take a look at some of the common, or at least well-known, Agile processes out there: Extreme Programming, Scrum, Kanban, Lean Software Development and Crystal Clear. They are all quite similar, actually, with different emphases here and there, but overall there is a sense of a common core.

The other important thing about Agile development in general is that it prefers lightweight methods over precisely prescribed processes (one of the core values). As such, Agile methods are in a difficult place, because they can either stay on the fuzzy/fluffy edge of the spectrum or succumb to the grind and actually recommend specific practices and processes.

I've been practicing Agile development in the trenches for more than 15 years, and it's never done by the book. In the field, Agile is usually some adaptation of these methods, with or without the actual terminology.

Some concepts and practices made it to the mainstream big time, including automated testing, continuous integration, short iterations and refactoring; others, not so much, or not uniformly.

The most important practice, in my opinion, is short iterations. Why is that? I discovered that if you practice short iterations almost everything else falls into place: You deliver frequently, which means you need to focus on what's most important, which means you need to define done criteria, which means you need automated builds and testing and continuous integration, which means you can refactor with confidence, etc., etc.

A short iteration is unambiguous. If you commit that you will deliver something every two weeks, you're golden. It's easy to keep track of what's going on (only a two week horizon to look at) and it is easy to measure progress and team velocity (you have milestones every two weeks). Of course it's easy to respond to change, because every two weeks you start from scratch. By definition, you can't be more than two weeks into anything.


Posted by Gigi Sayfan on August 14, 2015

What is the best programming language across the board? There is no such thing, of course. Each programming language has different strengths and weaknesses, different design goals and an entire ecosystem surrounding it that includes community, libraries, tools, and documentation — all of which are partially dependent on how long the language has been around.

Alright, so there isn't a best programming language. Let's just use the old adage and pick the best tool for the job. I'm sorry to disappoint you again. There isn't a "best tool for the job" either. It is very easy for each particular project to rule out a lot of languages, and sometimes you'll end up with just a single choice. But more often than not, your choice (or your short list of choices) will be dictated by external constraints and not by the "superiority" of the language you end up with.

Most large software development organizations have a limited number of programming languages with which they work (sometimes just one). Sure, for some standalone tools you may use whatever you want, but we're talking here about your bread and butter: the big enterprise systems, your cash cow.

Those systems often have tens, if not hundreds, of man-years invested in them. They have special processes, tools, build systems, test systems, deployment systems and operational experience. Introducing a new programming language into the mix carries so much upfront cost that it is pretty much out of the question. This is especially true when the upside is that you'll be able to use a cool new language and experience all its rough edges first-hand.

But suppose you were able to combat all that: you've hypnotized everyone and persuaded upper management to let you rewrite the organization's system from scratch using the "best" language ever. Cool, what now?

Am I against trying new languages? Quite the contrary. The recent trend toward SOA and microservices provides a great escape hatch. Those services depend on each other at the API level only. If your architecture is already service-oriented, you can easily experiment with implementing small new services in whatever language you choose, or you can migrate a small existing service. It may still be a lot of effort, because you'll still need to put the necessary infrastructure in place, but it may be acceptable for a small non-critical service not to have the best initial infrastructure.

The other case for trying new languages is of course starting a greenfield project or a whole new company, where you really start from scratch.

I will not discuss specific languages today, but here is a short list of some relatively new and very promising languages I have my eye on:

  • Go
  • Rust
  • Elm
  • Julia
  • Scala
  • Clojure

A special mention goes to C#, which is not new, but seems to be the only language that manages to add new features and capabilities and push the envelope of mainstream programming languages without becoming bloated, inconsistent and confusing.


Posted by Sandeep Chanda on August 11, 2015

Last month, Microsoft finally announced the general availability of Visual Studio 2015, along with Update 5 for Visual Studio 2013 to enable support for some of the updated framework features and the latest .NET Framework 4.6. While the release of Visual Studio 2015 was accompanied by plenty of fanfare, the release of Framework 4.6 has been marred by some inherent issues in the framework. We are still awaiting word from Microsoft as to when those issues are expected to be resolved.

As far as Visual Studio 2015 is concerned, it comes with some cool new features, a bunch of which are illustrated by Scott Guthrie in his blog post. There are significant tooling improvements in terms of support for JavaScript platforms such as Node.js. Configuration is now managed through JSON-based configuration files, and the JSON editor is rich, even including support for Bower. The JavaScript editor now supports rich syntax for Angular.js and Facebook's React.js framework.

I liked the out-of-the-box integration with Azure Application Insights (much as was done with integrating Azure Web Sites into an ASP.NET Visual Studio template some time back). It is a nifty add-in for instrumenting user behaviour in your application without having to program the instrumentation in yourself. Of course, if you want more, you can still program against the Data Collection API, but this out-of-the-box integration gives you immediate traceability.

The update also brings ASP.NET 5.0 preview templates. You can now create an ASP.NET 5.0 application (a radical shift from traditional ASP.NET web applications) on an open source, cross-platform framework for building modern web applications. ASP.NET 5 applications run on the .NET Execution Environment (more on this in the next post), which allows them to run cross-platform (equally efficiently on Mac, Linux and Windows).

After creating an ASP.NET 5.0 project, when you look at the solution you will notice a bunch of new additions. There is the new Startup.cs class, which defines the ASP.NET pipeline through configuration. You will also notice a number of .json files for packaging different components and holding configuration information, as well as configuration for JavaScript task runners like Gulp.

Another item you will find is the wwwroot folder. It represents the root location from which HTTP requests are served, and it holds the application's static content.

Edition-wise, there are some changes. You now get a fully loaded Enterprise Edition, which replaces the Ultimate edition of the outgoing release. More on the tools, and especially the .NET Execution Environment, in future posts!


Posted by Gigi Sayfan on August 7, 2015

Hi. My name is Gigi Sayfan and I'll be writing here weekly. In this first post I would like to tell you a little bit about me and what I plan to cover. I'm also very interested to know what you care about and I'll gladly take ideas, requests and suggestions.

I'm a passionate software developer. Over the past 20 years, I have worked for large corporations, small startups and everything in between. I have written production code in many programming languages and for many operating systems and devices. My current role is director of software infrastructure at Aclima. We design and deploy large-scale distributed sensor networks for environmental quality, and hope to make our planet a little healthier. I still write code every day and, in addition, I like to write technical articles about software development. I used to write for Dr. Dobb's and these days I write regularly for DevX.

I have a lovely wife and three kids whom I have tried to infect with the programming bug (see what I did there), with varying degrees of success. When I'm not coding or reading and writing about software development, I lift weights, play basketball and do some Brazilian Jiu-Jitsu.

What am I going to write about? Well, pretty much everything. There are so many cool topics to talk about. I'm getting excited already.

Programming Languages

Theoretically, all Turing-complete programming languages are equivalent, but we all know the difference between theory and practice. There are so many new developments in programming languages, such as Modern C++, ECMAScript 6, Go, Rust, Elm, C#, and Python 3, to name a few. I have always loved programming languages and I plan to do deep dives, comparisons, reviews of new versions, and more.

Databases

For a long time, databases were considered a solved problem. You designed your relational schema, put your data in, got a DBA or two to optimize your queries and you were good to go. But everything changed when the data didn't fit into a single database anymore. NoSQL was born, with its many variations, and companies started to innovate significantly around data storage.

Build Systems

Build systems are so important for enterprise development. I created a couple of build systems from scratch and I believe that a good build system used properly is critical to the success of enterprise-scale projects.

Automation

Automation of pretty much any system is key. You'll never be able to scale by just adding people. The nice thing about automation is that it is such a simple concept: whatever you do, just have a program or a machine do it for you. It takes some hard work and imagination to automate certain aspects, but there is also a great deal of low-hanging fruit.

Testing

Testing is yet another pillar of professional software development. Everyone knows that, and these days many actually take it seriously and practice it. There is so much you can test, and how you go about it pretty much dictates your speed of development. There are many dimensions and approaches to testing that I plan to explore in detail with you.

Deployment

Deployments, such as with a database, used to be fairly straightforward. Now that systems are often extremely complicated, a large deployment is not so simple anymore, with private hosting, cloud providers, private clouds, containers, virtual machines, etc. An abundance of new technologies and approaches now exists, each with its own pros, cons and maturity level.

Distributed Systems

Distributed systems are another piece of the puzzle. With big data you need to split your processing across multiple machines. We will explore lots of options and innovation in this space as well.

Development Life Cycle

The software development life cycle is another topic that never ceases to generate a lot of controversy. There are multiple methodologies and everybody seems to have their own little nuances. Agile methods dominate here. However, in certain domains such as life-/mission-critical software and heavily regulated industries, other methods are more appropriate.

Open Source

Open source keeps growing, and even companies such as Microsoft that were once considered as distant as possible from open source are now fully on board. The penetration of open source into the enterprise is a very interesting trend.

Web Development

The web is still a boiling pot of ideas and disruption. New languages, new browsers, new runtimes: there is a lot to observe and discuss in this space.

Hiring

The so-called talent wars and the difficulty in finding good software engineers are very real. It appears to just get worse and worse.

Culture

A company's culture is often not very tangible, but somehow, when you take a step back and look at successful companies, it is evident that culture is real and can make or break you. Big undertakings often flop for no clear reason other than culture. A prominent recent example would be Google+.

The Past and The Future

There is a lot to learn from our history. Luckily, the history of software is relatively short and well documented. The phrase "Those who don't study history are doomed to repeat it" is just as appropriate for software.

We live in the future. You can see the change happening in real time. Science fiction turns into science faster and faster.


Posted by Sandeep Chanda on July 30, 2015

Agile and Scrum are great in terms of delivering software in an iterative and predictable fashion, promoting development that is aligned with the expected outcome by accepting early feedback. That said, the quality and longevity of the application are often driven by sound engineering practices put in place during the course of development. This also means that while burn-down charts, velocity and story-level progress measures have their value in providing a sense of completion, unless some process guidance is established to measure engineering success during the application lifecycle, it is difficult to be certain about the behaviour of the application at go-live and thereafter. Unpredictable behaviour does not instill confidence in the product, ultimately spoiling the reputation of the project team engaged in delivering quality software. The question, then, is: which metrics are key to reporting and measuring engineering work?

LOC vs LOPC

Traditionally, raw lines of code (LOC) were used as a measure of engineering productivity, but the approach is significantly flawed. A seasoned programmer can produce the same outcome in significantly fewer lines of code than a newbie, and what matters is code that sticks around. A better measure, in that case, is lines of productive code (LOPC). Measuring LOPC over a timeline gives you a good idea of individual developer productivity during the course of development and empowers you to optimize the team composition. For example, you can plot every 100 LOPC checked in by a programmer on a time graph, which helps you see the trend: a developer is showing significant improvement if he or she is taking, on average, less time to deliver 100 LOPC than at the beginning of the program.
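
As a minimal sketch, this trend is straightforward to compute from version-control history. The data shape and function below are hypothetical, and assume you can already attribute surviving (productive) lines to dated check-ins:

    // Hypothetical sketch: given a developer's dated check-ins with their
    // surviving ("productive") line counts, report how many days each
    // successive 100 LOPC took to deliver.
    interface CheckIn {
      date: Date;
      productiveLines: number;
    }

    function daysPerHundredLopc(checkIns: CheckIn[]): number[] {
      const sorted = [...checkIns].sort(
        (a, b) => a.date.getTime() - b.date.getTime()
      );
      if (sorted.length === 0) return [];
      const msPerDay = 86_400_000;
      const durations: number[] = [];
      let total = 0;
      let nextMilestone = 100;
      let windowStart = sorted[0].date.getTime();
      for (const c of sorted) {
        total += c.productiveLines;
        // Each time the running total crosses another 100 LOPC,
        // record the elapsed days and start a new window.
        while (total >= nextMilestone) {
          durations.push((c.date.getTime() - windowStart) / msPerDay);
          windowStart = c.date.getTime();
          nextMilestone += 100;
        }
      }
      return durations; // a downward trend suggests improving productivity
    }

Plotting the returned durations over time gives exactly the graph described above: shrinking values indicate a developer who is delivering productive code faster.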

Code Churn

Code churn is another critical factor in measuring engineering success. Refactoring causes code churn. The team may be producing lots of lines of code, but if the gap between LOC and LOPC keeps increasing, that indicates significant churn. This analysis helps you nudge a programmer who is not putting sufficient effort into writing quality code the first time around. Over a period of time, as team members gain a better understanding of the requirements, the churn should decrease. If that is not the case, it is an indicator that you need to make changes in your team composition.


Posted by Sandeep Chanda on July 14, 2015

When it comes to enterprise data visualization, Power BI has been leading from the front. It not only allows you to create great visualizations from your datasets, transforming the way you spot trends and make decisions, it also provides a platform for developers to extend its capabilities. The Power BI REST API has been available for a while now. You can use it to retrieve datasets stored in Microsoft Azure and then create visualizations that suit your needs. You can also add the visualizations to your ASP.NET web application hosted in Azure, making them available to a larger segment of your target audience. The Power BI team has now taken a leap forward with the announcement of extensibility in the form of Power BI visuals.

The Power BI visuals project provides a set of visualizations that you can use to extend the capabilities of Power BI. The 20-odd out-of-the-box visualizations are ready to use with default Power BI capabilities such as selection and filtering. The visuals are built using D3.js, but you also have the choice of leveraging WebGL, SVG and other graphical technologies. The project also provides the framework for you to build and test the visualizations. Everything is compiled down to JavaScript and runs in all modern browsers. The project also contains a playground to demonstrate the capabilities. You can run the project using Node.js; however, you will also need Visual Studio 2013 (or above) and TypeScript 1.4 for Visual Studio to execute the sample solution.

Once you have cloned the repository from GitHub, you can use the npm install command to install the development dependencies. If you also want to test the visualizations, you need the Chutzpah JavaScript test runner and Jasmine-JQuery placed in the src\clients\externals\thirdparty\jasminejquery folder inside the repository. You can then use the npm test command to run the tests.

The Power BI visual lifecycle includes three methods on the IVisual interface that the project provides, as sketched below:

  1. The init method is called when the visual is first created.
  2. The update method is called whenever the host has an update for the visual.
  3. The destroy method is called whenever the visual is about to be disposed.
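
As a minimal sketch, a skeleton visual following this lifecycle might look like the TypeScript below. The option shapes here are simplified assumptions for illustration, not the project's exact interface signatures, which vary between versions:

    // Skeleton of a custom visual built around the init/update/destroy
    // lifecycle. The option types are simplified stand-ins for the real
    // powerbi-visuals interfaces.
    class CheerMeterSketch {
      private root: HTMLElement | null = null;

      // Called once, when the host first creates the visual.
      public init(options: { element: HTMLElement }): void {
        this.root = options.element;
      }

      // Called whenever the host has new data or a viewport change.
      public update(options: { cheerLevel: number }): void {
        if (this.root) {
          this.root.textContent = "Cheer level: " + options.cheerLevel;
        }
      }

      // Called when the visual is about to be disposed; release
      // DOM references and other resources here.
      public destroy(): void {
        this.root = null;
      }
    }

The host drives all three calls, so a visual only has to react to them, which keeps it easy to test in isolation.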

A cheer meter implementation has been provided here as an example to demonstrate the Power BI visual extensions.

