Multi-Core Mythbusters

Some pervasive myths about running Java applications on multi-core systems are misleading developers, and it's time to shine the bright light of truth on these falsehoods.


If you're like many engineering-minded folks, chances are you've heard of the Discovery Channel show "Mythbusters." In a typical episode, the Mythbusters team takes a popular urban myth, such as the drop-a-penny-from-the-top-of-the-Empire-State-Building-and-kill-somebody myth, and uses hard scientific experiments to prove it either true or false. It's a fascinating display of special effects wizardry and science. Plus, let's admit it, they love to blow things up, and what hard-core geek doesn't like to see stuff get blown up?

Unfortunately, application developers have accepted several myths of our own. Myths about running Java applications on multi-core systems, in particular, are pervasive among developers. While not as popular or ubiquitous as the ones debunked on Mythbusters, they're no less false or misleading.

We admittedly don't have the same kind of experience blowing stuff up, nor do we have a crash test dummy to subject to our experiments (poor Buster). However, we can still analyze some of the more common assumptions in the multi-core space today and show why they are, in fact, pure myths.



The "My App's Poor Performance Will Be Saved by Moore's Law" Myth
No other concept indirectly contributes more to poorly performing applications than Moore's Law (commonly cited as "processor speeds double every eighteen months"). To the developer who wants to focus on "getting it to work before getting it fast," Moore's Law is the magic that will make any application—no matter how slow on today's hardware—run twice as fast in just a year and a half. This is not only a common misrepresentation of Moore's Law, but as many developers are starting to notice, it doesn't hold true the way it once did.

Moore's Law actually states that the number of transistors that can be placed on a particular processor or chip is what doubles every eighteen months. For years, this indirectly led to the doubling of processor speeds, as a look back at the processors sold during those years will show. But since 2001 or so, those steadily climbing speeds have flattened, and it doesn't take much empirical evidence to see it. In August of that year, the average CPU speed in a high-end desktop or low-end server was around 2GHz, meaning that six years and four iterations of Moore's Law later we should be seeing processors running at 16GHz or more. A quick glance through online product descriptions from any major PC supplier will show that similar-ranged machines are hovering around the 3.0-3.6GHz range, more than a bit short of that 16GHz mark.
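To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Java. The class name is made up, and the baseline figures are just the illustrative numbers from the paragraph above, not benchmark data:

public class ClockSpeedProjection {
    public static void main(String[] args) {
        double baseGHz = 2.0;              // typical high-end desktop CPU, circa August 2001
        double years = 6.0;                // 2001 through 2007
        double doublings = years / 1.5;    // one doubling every eighteen months
        double projectedGHz = baseGHz * Math.pow(2, doublings);
        System.out.printf("Projected clock speed: %.0f GHz%n", projectedGHz); // prints 32 GHz
        System.out.println("What shipped instead: roughly 3.0-3.6 GHz");
    }
}

Run it, and the popular misreading of Moore's Law predicts clock speeds an order of magnitude beyond anything actually on the shelves.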

Interestingly enough, Moore's Law hasn't failed, but the widespread perception is that it has. The number of transistors continues to double roughly every year and a half, but instead of trying to increase the raw speed of the processors themselves, the chip manufacturers are essentially "scaling out"—putting more CPU cores into the chip. In a sense, this means that developers are staring down the barrel of a multi-CPU machine every time they build an application. Even some of the lowest-end server hardware and many laptops are now multi-CPU systems.
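You don't have to take the hardware vendors' word for it; the JVM will tell you directly how many cores it has to schedule threads onto. A trivial check (the class name here is just illustrative):

public class CoreCount {
    public static void main(String[] args) {
        // Reports the number of processors (cores) available to the JVM
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available cores: " + cores);
    }
}

On most recent laptops and low-end servers this prints a number greater than one, which is the whole point.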

So barring any major scientific breakthroughs in the next couple of years, developers can expect to see systems with multiple 2GHz processors rather than single-CPU machines running at 16, 32, or 64GHz. This means they will have to deal with concurrency in order to deliver greater performance.
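What dealing with concurrency looks like varies by application, but as a rough sketch, assuming a computation that splits cleanly into independent chunks (the task and pool sizing below are purely illustrative), a Java program can spread work across one thread per core using the java.util.concurrent API instead of running everything in a single loop:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        // Split one large computation into one chunk per core
        long range = 100_000_000L;
        long chunk = range / cores;
        List<Future<Long>> results = new ArrayList<>();
        for (int i = 0; i < cores; i++) {
            final long start = i * chunk;
            final long end = (i == cores - 1) ? range : start + chunk;
            results.add(pool.submit(() -> {
                long sum = 0;
                for (long n = start; n < end; n++) sum += n;
                return sum;
            }));
        }

        // Combine the partial results as each task finishes
        long total = 0;
        for (Future<Long> partial : results) total += partial.get();
        System.out.println("Total: " + total);
        pool.shutdown();
    }
}

The lambda syntax requires Java 8 or later; on older JVMs the same pattern is written with anonymous Callable classes. Either way, the restructuring is the developer's job; waiting for a faster clock won't do it.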


