Multi-Core Mythbusters: Page 2

Some pervasive myths about running Java applications on multi-core systems are misleading developers, and it's time to shine the bright light of truth on these falsehoods.


The "My App Server Will Scale My Java Code for Me" Myth
This popular myth emerged almost simultaneously with the release of the J2EE specification. Although it is true that Java app servers provide a degree of concurrency, many Java developers are under the mistaken impression that the app server will take care of all of their scalability needs. The attraction is undeniable: in this worldview, developers need never think about hard problems like concurrency, transactional boundaries, or parallel processing. They can just think in terms of traditional Java applications and objects, and the app server will take whatever steps are necessary to ensure that everything just works and just scales. (For C++ developers, the lack of a standard app server means there is even more work to do to ensure proper application concurrency.)

This is probably one of the most cherished viewpoints of the J2EE server space. As such, it is one of the most tenacious in the face of arguments to the contrary. Fortunately, it doesn't take a great deal of logical reasoning to see its inherent flaws. The belief begins with the basic assumption that when a particular J2EE application does not scale, it is due to the CPU not running fast enough. For most of today's applications, however, the CPU is not being kept busy. A large part of the application server execution time is spent transmitting data back and forth across components, taking out locks, or waiting for locks to be released. Spending some quality time with a profiler can show you just how much time your application spends on these tasks, which cannot benefit from a faster CPU.
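As a rough illustration of what such a profile reveals, the JDK's own management API exposes per-thread blocked and waited times. The following is a minimal sketch, not a substitute for a real profiler, and it assumes the JVM supports thread contention monitoring:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: report how long each thread has spent blocked on
// monitors or waiting, rather than executing on the CPU. A faster CPU
// does nothing for these portions of the run.
public class LockTimeReport {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (threads.isThreadContentionMonitoringSupported()) {
            threads.setThreadContentionMonitoringEnabled(true);
        }

        // ... run or attach to the workload of interest here ...

        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            if (info == null) continue;
            System.out.printf("%-30s blocked=%d ms waited=%d ms%n",
                    info.getThreadName(),
                    info.getBlockedTime(),   // time contended on monitors (-1 if monitoring is off)
                    info.getWaitedTime());   // time parked in wait/join/park
        }
    }
}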

This is an issue that an app server has little control over. In fact, an app server can sometimes even contribute to the problem if it tries to maintain shared state across multiple machines, as that state will need to be transferred back and forth across the various nodes in the cluster in order to maintain the illusion of zero hardware affinity.



Thus neither the CPU nor the app server can save the developer from the need to design concurrency into the application architecture.
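A small, hypothetical sketch makes the point concrete. Plain Java threads stand in for an app server's worker pool; because every simulated request funnels through a single lock, adding worker threads (or cores) barely changes throughput until the application itself is redesigned:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical shared resource touched by every request handler. The
// container can dispatch requests on many threads, but the synchronized
// block forces them through one at a time; no container setting removes
// that bottleneck.
public class SharedStateBottleneck {
    private static final Object LOCK = new Object();
    private static long orderCount = 0;

    static void handleRequest() {
        synchronized (LOCK) {            // every "request" serializes here
            orderCount++;
            try {
                Thread.sleep(5);         // stand-in for real work done under the lock
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(16);  // "app server" worker threads
        long start = System.nanoTime();
        for (int i = 0; i < 200; i++) {
            pool.submit(SharedStateBottleneck::handleRequest);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Despite 16 worker threads, total time stays near 200 requests * 5 ms each.
        System.out.println("Handled 200 requests in ~" + elapsedMs + " ms, count=" + orderCount);
    }
}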

The "My Cluster, Bus Operating System, or App Server Will Automatically Provide Ordered Control" Myth
At the heart of any clustering or web farm approach to scalability lies the belief that if you can't get better performance by running on faster hardware, you can at least get better scalability by running the code in parallel across a bunch of machines. This works only up to a point.

Amdahl's Law is less well known than Moore's Law but equally important. It loosely states that the greater the percentage of sequential operations in a particular application or program, the less benefit can be derived from parallel execution. For example, even if only one percent of a particular program must be executed in sequence (an astoundingly small number; for most business applications the figure is much higher), the maximum benefit that can be derived from parallelism is 100X. That ceiling drops dramatically as the sequential fraction grows: the maximum speedup for code that is 90-percent parallel (meaning 10 percent of it must be executed sequentially) is 10X, and with 30-percent sequential code (which is still pretty good), regardless of the number of processors thrown at the problem, the maximum speedup is only a little over 3X.
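Working the formula directly shows where these ceilings come from. In the sketch below, speedup(n) = 1 / (s + (1 - s)/n), where s is the sequential fraction and n the number of processors; as n grows, the speedup approaches 1/s, which yields the 100X, 10X, and roughly 3X limits cited above:

// Amdahl's Law: speedup(n) = 1 / (s + (1 - s) / n). The limit as n
// grows without bound is 1 / s, so the sequential fraction alone caps
// the achievable speedup.
public class AmdahlDemo {
    static double speedup(double sequentialFraction, int processors) {
        return 1.0 / (sequentialFraction + (1.0 - sequentialFraction) / processors);
    }

    public static void main(String[] args) {
        double[] fractions = {0.001, 0.01, 0.10, 0.30};
        int[] cpus = {2, 8, 64, 1024};
        for (double s : fractions) {
            StringBuilder row = new StringBuilder(String.format("s=%5.1f%%:", s * 100));
            for (int n : cpus) {
                row.append(String.format("  %4d cpus -> %6.1fx", n, speedup(s, n)));
            }
            row.append(String.format("  limit -> %6.1fx", 1.0 / s));
            System.out.println(row);
        }
    }
}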

The implication of Amdahl's Law is that the returns from adding more processors diminish ever faster as the percentage of sequential code grows (see Figure 1).


Figure 1. Speedup per Processor Is Limited by the Percent of Serialized Code

The top line shows that when there is only 0.1-percent serialization, speedup per processor is relatively consistent for large numbers of processors. However, when you look at the bottom line with 30-percent serialization, diminishing returns are quite obvious.

The thinking behind this myth is that the app server (or OS, or some other piece of middleware) can automatically analyze your code to discover the sequential operations and rearrange them in some fashion to minimize their cost. But this implies that the application server could rewrite your application, reordering its execution to suit its own needs. Obviously that could have disastrous effects on the consistency and correctness of your code, and it is something no application server vendor is going to risk. The app server must execute your code exactly as it sees it. And while it might be able to spin off certain threads in certain places to improve parallelism, it will always be hamstrung by the basic requirement that it not break the order of execution of your code.
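That constraint is exactly why parallelism has to be expressed by the developer. In the hypothetical sketch below, the two lookups are declared independent and may run concurrently, while the pricing step is explicitly kept after both, because only the author of the code knows which ordering actually matters:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: the developer, not the container, marks which steps are
// order-independent. The two lookups run in parallel; the dependent
// pricing step waits for both.
public class ExplicitParallelism {
    static String loadCustomer(String id)      { return "customer-" + id; }
    static List<String> loadOrders(String id)  { return List.of("order-1", "order-2"); }
    static String price(String customer, List<String> orders) {
        return customer + " owes for " + orders.size() + " orders";
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        CompletableFuture<String> customer =
                CompletableFuture.supplyAsync(() -> loadCustomer("42"), pool);
        CompletableFuture<List<String>> orders =
                CompletableFuture.supplyAsync(() -> loadOrders("42"), pool);

        // The dependent step runs only after both parallel branches finish.
        String invoice = customer.thenCombine(orders, ExplicitParallelism::price).join();
        System.out.println(invoice);

        pool.shutdown();
    }
}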


