Error-free Software Is in Reach, but Is Anyone Reaching?

Almost everyone nowadays knows what the terms “bluescreen” and “general protection fault” (GPF) mean. Their very familiarity attests that computer users have come to accept software errors as an unavoidable fact of life. But many computer professionals are now starting to ask whether this situation should be allowed to continue.

According to a report by the National Institute of Standards and Technology (NIST), software errors cost the U.S. economy approximately $60 billion per year in lost productivity, increased time to market, and higher transaction costs. If the problem is allowed to continue unchecked, those costs may grow much worse.

After all, over the last few decades hardware manufacturers have improved their processes to the point that we are surprised if a television or refrigerator malfunctions. Over the same period, we have grown accustomed to constant errors in software. The problem we now face is that software is just beginning to find its way into refrigerators, cars, televisions, stereos, and cell phones. While today’s cars operate mechanically, tomorrow’s cars will be controlled by software. The real danger is that software errors will destroy the reliability we have come to expect from these products. It is not unreasonable to think that in the not-so-distant future we will have to restart our cars and refrigerators to reboot their internal computers when software errors cause malfunctions.

These software-controlled devices are already starting to appear. My mobile device, the Pocket PC Phone Edition, is a good example. I have been using it for the past six months, and I’m sorry to say that I need to restart it at least half a dozen times a day. On numerous occasions I have been unable to receive an incoming phone call because the screen froze; a reset was required, the call was missed, and faulty software was to blame. Admittedly, I finally had enough and demanded that the vendor provide me with a newer model, which operates significantly better, although still not perfectly.

The Culprit
In an interview with Watts S. Humphrey, a Fellow of the Carnegie Mellon Software Engineering Institute (SEI) and former Director of Software Quality and Process for IBM, I learned that “even experienced engineers on average will inject a defect every 9 to 10 lines of code.” Humphrey says that the right development practices can cut this rate by 50 percent, and perhaps more, but it seems we will never realize the ideal of developers who make no mistakes. So instead we must find better means of identifying and eradicating defects.
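
To put that rate in perspective: a 100,000-line application would enter testing with something on the order of 10,000 defects, and even the 50 percent improvement Humphrey describes would still leave roughly 5,000 errors to be found and removed by other means.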

“The industry has not armed software developers with the testing tools necessary to meet the challenges of large-scale software development,” said Amitabh Srivastava, one of only 16 distinguished engineers at Microsoft and director of Microsoft Research. In other words, the tools necessary to ensure high-quality software have not evolved as quickly as the industry’s need for them. Srivastava says this is because much of the testing technology infrastructure is still in the academic research phase. Microsoft has since announced the formation of a “Think Tank” on Trustworthy Computing comprising numerous professors.

In the absence of proper testing tools, the industry has relied on human software testers, and human-based testing is only partially effective. Ironically, in a cost-constrained software development project, software testers are almost always the first to be pink-slipped. The result is an industry replete with software that has undergone minimal testing.

Managerial Misconceptions
To effectively address the problem of software quality we must first dismiss several misconceptions that permeate the business community. Management has come to believe the first and most important of these: that it is impossible to ship software devoid of errors in a cost-effective way. Indeed, even in university classrooms, professors teach that all software will have errors. This is simply untrue. Had the automobile industry sustained this belief, we would still be driving cars that break down every 5,000 miles. The truth is that errors can be avoided, and error-free software is not an unattainable goal. What’s more, the cost of shipping higher-quality software can actually be less than the cost of shipping the software with errors. With a little education and a few well-chosen products, software quality can improve radically.

The second misconception, that software development is an art, is as damaging to software quality as it is misleading. Software development is an applied science and must be treated as such. The computer sciences have their roots firmly planted in mathematics, and few would classify mathematics as an art form. Software development is not an art, and programmers are not artists, despite any claims to the contrary.

The current process of software testing is based on the third misconception: that software errors are unique. Many studies have concluded that software development groups make the same errors time and time again, irrespective of the software they are trying to build. Yet the QA process we use today focuses on curing existing errors rather than preventing new ones. Common sense tells us that an ounce of prevention is worth a pound of cure, but the process continues unchanged. And inevitably the QA department cannot fulfill even this curative role adequately, because human testers cannot exercise every combination and permutation of inputs; many paths of execution are left untested. In short, a testing process built on the assumption that errors are unique ends up reinforcing the first misconception: that software errors are unavoidable.
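
To see why exhaustive human testing is hopeless, consider that a routine with just 20 independent branch points has 2^20 (more than a million) possible execution paths; no human test team can exercise them all.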

Advances in Software Quality Through Error Prevention
The tendency to believe that software errors are unique precludes the notion that they can be found and eliminated in a generic fashion. As noted above, studies indicate the opposite: development groups make the same errors over and over, whatever they are building. Software errors are not unique, and this is excellent news, because it means there can be a generic way to modify the software development process so as to control the introduction of errors.

Some companies now incorporate automated error-detection tools into the development process. These tools put the burden of error detection on machines, whether by analyzing source code without executing it (static analysis) or by instrumenting a running program and applying an automated battery of tests that reveal errors as they occur. There are several such products on the market today, including Compuware’s BoundsChecker, Rational/IBM’s Purify, and Parasoft’s Insure++, but even automated testing is still testing after the fact. The true goal is to prevent errors before they happen.
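
To make this class of error concrete, consider a minimal C program, invented for this article, containing a one-byte heap overflow, exactly the kind of defect these tools exist to catch. The program will often appear to run correctly, which is why a human tester can miss it and an instrumented run cannot:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(5);   /* room for 4 characters plus '\0' */
        if (name == NULL)
            return 1;
        strcpy(name, "hello");    /* bug: "hello" needs 6 bytes, one too many */
        printf("%s\n", name);
        free(name);               /* tools also flag the corrupted heap block here */
        return 0;
    }

Run under a runtime error-detection tool, the strcpy call is reported immediately as a write past the end of an allocated block, even though the program usually prints its output and exits without visible complaint.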

Parasoft coined the phrase “Automatic Error Prevention” (AEP) to describe that preventative next step. AEP takes the results generated from automated testing and connects each error found to the practice that introduced it. AEP can then automate the procedures that prevent the error from recurring. The idea is that each error should occur once and only once.
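
To make the idea concrete, here is a deliberately simplified sketch of how one prevention rule might be automated; it is not Parasoft’s implementation, only an illustration. Suppose the overflow in the earlier example is traced back to the practice of using unbounded strcpy calls. A trivial checker can then scan every source file in the build and fail the build whenever the banned call reappears:

    /* A deliberately simplified sketch of an automated prevention rule;
       real AEP tools parse the code properly rather than matching text. */
    #include <stdio.h>
    #include <string.h>

    /* Scans one source file for the banned call and reports each
       offending line. Returns the number of violations found. */
    static int check_file(const char *path) {
        FILE *fp = fopen(path, "r");
        if (fp == NULL) {
            fprintf(stderr, "cannot open %s\n", path);
            return 0;
        }
        char line[1024];
        int lineno = 0, violations = 0;
        while (fgets(line, sizeof line, fp) != NULL) {
            lineno++;
            if (strstr(line, "strcpy(") != NULL) {
                printf("%s:%d: banned call strcpy(); use a bounded copy\n",
                       path, lineno);
                violations++;
            }
        }
        fclose(fp);
        return violations;
    }

    int main(int argc, char **argv) {
        int total = 0;
        for (int i = 1; i < argc; i++)
            total += check_file(argv[i]);
        return total > 0 ? 1 : 0;   /* nonzero exit fails the build */
    }

Wired into the build this way, yesterday’s defect becomes a class of defects that can never be reintroduced unnoticed, which is precisely the “once and only once” goal.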

In Japan, IBM’s Technical Competency group has been using Jtest from Parasoft to improve the quality of Java application development. Before implementing Jtest, the testing phase for a group of 20 to 30 programs took 3 to 4 man-weeks. Now the same testing process can be completed in less than half a day by a single individual.

Users not only find errors automatically but also ensure that test results feed back into the process, so quality improves progressively: AEP finds errors and automatically enforces the procedures that prevent their recurrence. An extensive pilot of the methodology succeeded in increasing both the productivity and the quality of application development, and IBM U.S. has since signed a global, corporate-wide license contract with Parasoft.

Microsoft has its own internal tools for improving quality, although these have not yet been productized. Nor are such products limited to large enterprises; hundreds of other companies are using tools of this kind. These error-prevention products are affordable; moreover, the return on investment can be realized overnight.

As a software development trainer I have had the opportunity to work with hundreds of software development teams around the world. More than 90 percent of these teams have been completely unaware that AEP tools exist. Software quality will continue to suffer until a commitment to quality becomes more than just a catch phrase. That will require a major change in mindset from senior management, since the commitment does not seem to be originating with developers themselves. For the time being, I will remain suspicious of devices that merge software and hardware.
