Here’s the average developer’s newest dilemma: Intel and other CPU manufacturers have moved full steam ahead into creating multi-core chips to speed up computing. These chips increase processing speed not by making a single CPU faster, as new chips traditionally have, but by adding more CPUs: two, four, or (soon) eight per chip. The idea is that developers can (potentially) double, quadruple, or octuple the speed of their applications by writing code that splits operations across all the available CPUs. That’s great, in theory. In practice, however, few developers are comfortable writing and debugging even multi-threaded code running on a single CPU, much less code intended to run in parallel on separate CPUs. Moreover, the languages most working developers use don’t yet contain constructs for targeting multiple CPUs, nor are most debuggers multi-CPU aware. Nonetheless, hardware manufacturers are already touting the speed increases businesses should expect from their software when it’s running on the new chips. And no doubt, managers will soon be asking why their new dual-CPU hardware runs their existing applications no faster than the old hardware did.
As usual, the hardware cart has been put before the software horse. Sure, the C and C++ guys can find some first-generation tools for writing multi-core code. But for business software, the miracles won’t start appearing until multi-core tools arrive for the mainstream languages working developers actually use.