
Effective Build Management: Don’t Build a House of Cards


Let’s face it: developers are creative folk who generally like to focus on what they do best, which is developing code. In many ways, they are similar to master chefs who delight in the planning and preparation of a meal but have little time to focus on the actual cooking. Just as a chef must be able to set an oven to the right temperature and trust that the results will be predictable and accurate, a developer must be able to depend on a build system that is automated, reliable, unobtrusive, and adaptable.

Traditionally, build systems have been an unsophisticated collection of scripts and manual steps, developed in-house. But three key trends in application development are driving up the complexity of these systems to the point that a new approach to build management is required. First, complications have arisen from the growing need to support a variety of environments, platforms, and languages. Second, code reuse and refactoring is rapidly increasing the rate of change in software applications and technology. And, perhaps most importantly, end users are simply demanding higher quality applications.

A number of vendors have introduced enterprise-class build systems specifically designed to address these issues, and development organizations should take a good, hard look at their current build systems to determine whether they are sufficiently sophisticated to tackle complex architectures and requirements. In particular, you should pay close attention to features and benefits such as:

  • Ease of administration
  • Automated population of build areas
  • Automated build execution (across platforms)
  • Integration with multiple development environments
  • Comprehensive auditing and logging
  • Integration with configuration management (CM) systems
Editor’s Note: The author, Peter Raymond, is a principal software architect for Merant, a vendor of lifecycle management products, including build management tools. We have selected this article for publication because we believe it to have objective technical merit.

Simplifying the Process
In-house build systems are usually based on a standard when they are originally designed, but as time passes and technology and requirements change, these systems grow to include multiple scripts and makefiles, file transfer mechanisms, scheduling mechanisms, etc. These home-grown systems are generally poorly documented and frequently hinge on a single developer, the resident build “wizard,” to make sense of them. When modules of code are added or reorganized or technology changes, this individual is the only one with the knowledge of the build system and the necessary makefile/scripting language skill set to manually list dependencies, edit rules and configuration files, and so forth. Of course, when a build has grown to include thousands of lines of script across many individual script files, this can turn into a severe bottleneck in the development process.

For the benefit of those who aren’t build wizards, I’ll quickly run through the anatomy of a build script. Build scripts contain all knowledge about involved targets and dependencies, as well as rules to build and link the code (with various flags and settings and likely conditional content for each platform). Therefore, a typical makefile (even for a simple program) might look like Listing 1.
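To make that anatomy concrete, here is a minimal sketch, written in Python rather than make syntax and not taken from Listing 1, of the knowledge a build script carries: targets, their dependencies, and the rule used to produce each target. All names are illustrative.

```python
# Hypothetical representation of a build script: each target lists its
# dependencies and the rule (command template) used to rebuild it.
targets = {
    "app":    {"deps": ["main.o", "tree.o"], "rule": "link {deps} -o {target}"},
    "main.o": {"deps": ["main.c"],           "rule": "cc -c {deps} -o {target}"},
    "tree.o": {"deps": ["tree.c"],           "rule": "cc -c {deps} -o {target}"},
}

def build_order(target, seen=None):
    """Walk dependencies depth-first so every target is built after its deps."""
    if seen is None:
        seen = []
    for dep in targets.get(target, {}).get("deps", []):
        build_order(dep, seen)
    if target in targets and target not in seen:
        seen.append(target)
    return seen

print(build_order("app"))  # → ['main.o', 'tree.o', 'app']
```

A real makefile encodes exactly this graph, plus platform-specific flags and conditional sections, which is why hand-maintaining it scales so poorly.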


As you can imagine, hand-editing these scripts can be time-consuming and error-prone. Because enterprise-class build systems typically feature graphical front ends for build management, they provide a much simpler way of identifying dependencies, choosing rules and options, scheduling builds, and viewing progress, often without any scripts. This eliminates the need for “scripting and makefile hacking,” which not only improves the entire process but frees the “wizard” for the more important work of developing code.

Populating Build Areas
Many home-grown build systems rely on manual population of the filesystem with the assets needed for the build. At best, these systems attempt automation using ftp or a similar script-driven mechanism. Populating build areas in this way is very error prone and is hard to maintain as the system grows.
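Automated population can be sketched as follows, assuming a hypothetical manifest format that pairs each asset with its expected checksum; a mismatch stops the build instead of letting it proceed on the wrong bits.

```python
import hashlib
import pathlib
import tempfile

def populate(build_area, manifest):
    """Write each asset into the build area, verifying its digest first.
    `manifest` maps file name -> (contents, expected sha256 hex digest).
    The manifest format is invented here for illustration."""
    for name, (contents, expected) in manifest.items():
        if hashlib.sha256(contents).hexdigest() != expected:
            raise ValueError(f"{name}: digest mismatch; refusing to populate")
        (build_area / name).write_bytes(contents)

source = b"class Main {}"
manifest = {"Main.cs": (source, hashlib.sha256(source).hexdigest())}
with tempfile.TemporaryDirectory() as d:
    populate(pathlib.Path(d), manifest)
    print(sorted(p.name for p in pathlib.Path(d).iterdir()))  # → ['Main.cs']
```

The point is not the specific mechanism but that the check is automatic: an ftp script that copies whatever happens to be there offers no such guarantee.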

Exacerbating this problem is the dreaded “last-minute fix.” It is very tempting for developers to try to include their latest and greatest code in a build, and product managers are notorious for slipping in that last, absolutely-must-have feature or fix. This is fine, provided it is done in a controlled fashion. But often last-minute features are added after the code is retrieved and the build is started. You can add the code and restart the build, but how can you be 100 percent sure that the new code was built? As with any manual step, this process is very prone to user error.

You may be thinking that a potential workaround would be to ensure stability by performing a clean build, but this can impact performance. Alternatively, the system could be designed to rely on timestamps, but these can be very unreliable due to time zone differences and clock skew between machines.
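One way around unreliable timestamps is content-based change detection: compare a hash of a file's current contents against the digest recorded at the last build. The sketch below shows the idea; it is illustrative, not any particular vendor's mechanism.

```python
import hashlib

def digest(data: bytes) -> str:
    """Hash of the file contents, recorded at build time."""
    return hashlib.sha256(data).hexdigest()

def changed(current: bytes, recorded: str) -> bool:
    """Content-based change detection: unlike a timestamp comparison,
    a hash is immune to time-zone differences and clock skew."""
    return digest(current) != recorded

recorded = digest(b"int main() { return 0; }")
print(changed(b"int main() { return 0; }", recorded))  # False: no rebuild needed
print(changed(b"int main() { return 1; }", recorded))  # True: must rebuild
```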

It’s also important to keep in mind that most projects are not just built once. At various stages in a project’s lifecycle there are a number of different types of builds that may take place.

  • Individual developers might build their latest changes in order to catch compile errors and debug problems.
  • Unit test builds verify that a single unit or component of the project functions correctly.
  • Integration/system test builds ensure that all components hang together and the entire system functions as expected.
  • Production builds deploy the program to a production environment.

These stages are by no means fixed. They may differ between projects and organizations, but it’s usually some variation on the theme of develop, test, and release. There is clearly a logical progression of code and deliverables between build stages, and the quality of the code matures as it moves up the various stages.

These build stages typically use different flags or options, and build steps and rules will change (for instance, perhaps there is no recompilation between system test and production). In many instances, each build is done on separate physical machines. Traditionally, this process is managed with separate makefiles/scripts for each stage, but the source can be shared among the stages to consolidate activities.
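The per-stage variation in flags and rules can be captured as data rather than duplicated across separate scripts. The stage names and compiler flags below are invented for illustration.

```python
# Hypothetical per-stage compiler settings; a build tool would keep these
# in its configuration instead of in hand-edited makefiles per stage.
STAGE_FLAGS = {
    "unit_test":   ["-g", "-O0", "-DDEBUG"],
    "system_test": ["-O2"],
    "release":     ["-O2", "-DNDEBUG"],
}

def compile_command(stage, source):
    """Assemble the compile line for a given stage from shared data."""
    return ["cc", *STAGE_FLAGS[stage], "-c", source]

print(compile_command("release", "main.c"))
# → ['cc', '-O2', '-DNDEBUG', '-c', 'main.c']
```

With one source of truth, adding a stage or changing a flag is a data edit, not a script rewrite in several places.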


An enterprise-class solution, however, would be able to maintain these areas and (preferably) use search paths so that only the code needed for the current stage exists in the build area. It does this by looking for the code first in the current area and then working its way up the stages. For example, assume you have the build stages and versions of files shown in Table 1.

Build Stage   | Files and version numbers
------------- | ---------------------------------
UNIT TEST     | Main.cs-3
SYSTEM TEST   | Main.cs-2, Tree.cs-2
RELEASE       | Main.cs-1, Tree.cs-1, Banner.cs-1

Table 1. Build Stages

With a well-managed enterprise system, a build executed at the RELEASE stage would compile Main.cs-1, Tree.cs-1 and Banner.cs-1. If executed at the SYSTEM TEST stage it would compile Main.cs-2, Tree.cs-2 and Banner.cs-1. And finally at the UNIT TEST stage it would compile Main.cs-3, Tree.cs-2, and Banner.cs-1.
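The search-path resolution behind Table 1 can be sketched as follows; the data structures are illustrative, not a particular vendor's implementation.

```python
# Stages ordered from the current stage down to the final fallback.
STAGE_ORDER = ["UNIT TEST", "SYSTEM TEST", "RELEASE"]

# File versions present at each stage, as in Table 1.
STAGE_FILES = {
    "UNIT TEST":   {"Main.cs": 3},
    "SYSTEM TEST": {"Main.cs": 2, "Tree.cs": 2},
    "RELEASE":     {"Main.cs": 1, "Tree.cs": 1, "Banner.cs": 1},
}

def resolve(filename, stage):
    """Look in the current stage first, then work up toward RELEASE."""
    for s in STAGE_ORDER[STAGE_ORDER.index(stage):]:
        if filename in STAGE_FILES[s]:
            return STAGE_FILES[s][filename]
    raise FileNotFoundError(filename)

print([resolve(f, "UNIT TEST") for f in ("Main.cs", "Tree.cs", "Banner.cs")])
# → [3, 2, 1]
```

Run at SYSTEM TEST, the same lookup yields Main.cs-2, Tree.cs-2, and Banner.cs-1, matching the behavior described above.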

This also nicely demonstrates the importance of selecting a build system that integrates with your configuration management (CM) tool. This way, the CM tool will automatically populate the correct build areas as the code progresses through its lifecycle, which dramatically reduces the risk of incorrect versions of source code appearing in a build.

Multiplatform Support on Execution
These days, many builds aren’t delivered for just one platform. The system may need to run on a mobile phone or handheld, or in a Windows, Unix, or mainframe environment. For instance, imagine that you support both Windows and Unix. You might have a Microsoft Visual Studio project (or perhaps an NMake file) for the Windows platform and GNU makefiles to support Unix. With an in-house solution, each platform would have its own build system with its own “wizard” to maintain its scripts and configuration. Both build systems would then need to be maintained each time a change is made.

A much more efficient approach would be to have a single interface and consistent set of features across all platforms. With a next-generation build solution, build administrators can select what type of target is being built on which platform and the tool will automatically generate rules such as those in the makefile script shown in Listing 1.

Integration with Development Environments
If you’re shopping for a build solution, there are a few other key factors to keep in mind. For instance, most builds are invoked manually. A developer using an IDE triggers a build by selecting a menu option, or the “build wizard” runs the build script with the right arguments and parameters. These different types of build invocation change depending on the build stage, and at the higher stages the build manager may not even have physical access to the machine (the system test machine could be in a data center across the globe). Therefore, it is critical for build systems to manage remote builds and offer integration with multiple developer tools.


Comprehensive Auditing and Logging
In most script-based systems, a log of the script execution acts as a way of verifying exactly what has been done during the build, but this is normally very detailed and technical in nature. Sometimes these logs are even a straight list of compile and link lines that must be checked manually.

An enterprise-class system, on the other hand, can provide a variety of reports and logs for manual verification of the build. Audit reports can verify that the files in the build exactly match the approved configuration in the CM system before the build, and similar logs after the build can be used to verify exactly what has been done. And, if information regarding the build is captured in a relational database, graphical reports and dashboard metrics can be generated. An example of a useful measurement might be “rate of compilation failure of modules in a particular component of the system in the last two weeks.”
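If build results land in a relational database, a metric like the one just described reduces to a query. Here is a minimal sketch using SQLite and an invented schema with one row per module compilation.

```python
import sqlite3

# Invented schema: one row per module compilation, with a pass/fail flag.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE compiles (component TEXT, module TEXT, ok INTEGER)")
db.executemany("INSERT INTO compiles VALUES (?, ?, ?)", [
    ("parser", "Main.cs", 1),
    ("parser", "Tree.cs", 0),
    ("ui",     "Banner.cs", 1),
])

# Compile-failure rate per component, as a percentage.
rates = db.execute(
    "SELECT component, 100.0 * SUM(1 - ok) / COUNT(*) AS fail_pct "
    "FROM compiles GROUP BY component ORDER BY component"
).fetchall()
print(rates)  # → [('parser', 50.0), ('ui', 0.0)]
```

Scoping the query to a component and a two-week window is just a WHERE clause away, which is the advantage of structured capture over raw compile logs.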

More Benefits of CM Integration
Of course, this again points to a key issue: to achieve a truly repeatable and reliable build, it is essential to use a build system that integrates with your CM tool. The CM system can then be used to review and approve the changes before automatically pushing altered code out to the appropriate build area. It can also maintain a history of who completed the build and capture a baseline of the build area so that the contents can easily be reproduced.

Some of the best CM systems even manage the build itself (i.e., they can record dependencies, rules, compilers/tools, and options) so they don’t just baseline the build area, but provide a complete snapshot of the entire build environment.

Storing dependencies in the CM system not only helps make builds reproducible, it also provides the data necessary for complex impact analysis, which is almost a form of time travel. With comprehensive impact analysis capabilities, it’s possible to run “what-if” scenarios and actually answer the question “if I modify this piece of code, what components, deliverables, or customers will be affected?” before you change anything.
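With dependencies on hand, a basic “what-if” query is just a reverse lookup over the dependency graph. The graph below is invented for illustration; a CM system would supply this data from its records.

```python
# Hypothetical dependency data pulled from the CM system:
# each deliverable maps to the sources it is built from.
DEPENDS_ON = {
    "app.exe":  ["Main.cs", "Tree.cs"],
    "tool.exe": ["Tree.cs", "Banner.cs"],
}

def impacted_by(source):
    """Answer 'what deliverables are affected if I modify this file?'"""
    return sorted(t for t, deps in DEPENDS_ON.items() if source in deps)

print(impacted_by("Tree.cs"))    # → ['app.exe', 'tool.exe']
print(impacted_by("Banner.cs"))  # → ['tool.exe']
```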

Ensuring that outputs and deliverables are included in the impact analysis process is typically done in one of two ways: they can be stored in the CM system, or footprint information can be added to them. This information then serves as a reference back into the CM system, identifying which baseline or configuration the deliverable came from. By preserving deliverables in this manner, it is possible to backtrack to the list of fixes and enhancements made in that release of the product.
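The footprint idea can be sketched like this, assuming an invented record format appended to the deliverable; real tools each define their own scheme.

```python
import json

def add_footprint(deliverable: bytes, baseline: str) -> bytes:
    """Append a footprint record (invented format) naming the CM baseline
    the deliverable was built from."""
    record = json.dumps({"baseline": baseline}).encode()
    return deliverable + b"\n#FOOTPRINT:" + record

def read_footprint(deliverable: bytes) -> str:
    """Recover the baseline name from a stamped deliverable."""
    record = deliverable.rsplit(b"#FOOTPRINT:", 1)[1]
    return json.loads(record)["baseline"]

stamped = add_footprint(b"binary-contents", "PROJ_BUILD_42")
print(read_footprint(stamped))  # → PROJ_BUILD_42
```

Given the baseline name, the CM system can then list exactly which fixes and enhancements went into that release.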

Finally, flexibility is essential. Depending on the size of your project and team, you may have varying requirements for a build system, but be sure to choose a solution that can grow and change with your future needs.
