Parallel and Concurrency Futures for Microsoft Developers

Urging developers to build parallel-capable applications is useless until they have the tools to do so—and Microsoft intends to ensure that they have them.


Parallel computing and concurrent programming are rapidly becoming mainstream topics of discussion in the corporate world. These are not new ideas; in fact, they've been around for more than 30 years. However, like many long-running computer science concepts, they're only now becoming relevant to mainstream business developers due to changes in both hardware and the overall computing environment. The truth is, the raw speed of a single CPU, or core, has remained flat for several years. Instead of making cores faster, hardware manufacturers are increasing overall computing power by adding more cores to each computer. Therefore, the only way your application can improve performance is by exploiting the benefits of multi-core computers: through parallelism.

Like nearly all software vendors, Microsoft has relied on the rapid increase in processing power to enable new features in its products. Over the years, Windows, Office, and Microsoft's other applications, tools, and technologies have grown more powerful, but they have also evolved to consume more computing resources. Those products are now at the point where they must embrace parallel computing to take advantage of future increases in computing power. As the large software vendors enable parallelism in their own products, development tools, and platforms, developers will benefit from that work, because the new features for leveraging parallelism will become available to them as well. Of course, the sudden end of ever-increasing single-core horsepower could simply cause new feature development to plateau, but that doesn't seem to be happening. Instead, there is evidence that the user experience will continue to evolve in ways that require more computing power. Technologies such as WPF and Silverlight highlight the desire for a more polished graphical experience, often including rich animations. Add in the increased use of voice recognition, video, audio, location features, motion sensors, light sensors, and ever-increasing storage density, and it becomes clear that the future still holds rapidly increasing requirements for computing power.

Users' expectations of application functionality also continue to increase. More and more, users expect applications to not only allow editing of data, but also to show related data, results of data analysis, and other information, all updated in real time as the user interacts with the application. So, while basic data entry operations require no more processing than they did 20 years ago, user demand for peripheral data processing features continues to increase at a staggering rate. With CPU speeds remaining constant, developers need access to parallel computing to meet these increasing requirements and still provide acceptable performance.



Basic Parallelism

The idea of parallel or concurrent programming is pretty simple: a program should be able to do more than one thing at a time. Most programs work sequentially, doing one thing at a time as the computer follows the instructions in your code. Today's computers are almost all dual-core, and over the next few years machines with even more cores will become common. A sequential program can really use only one CPU, or one core. To use multiple cores effectively, a program needs to run multiple tasks in parallel. These parallel tasks run concurrently, on different cores in the computer. As a developer, you need to focus on this truism: sequential applications don't gain any benefit from multiple cores. In other words, until you begin programming parallelism into your applications, their performance will remain flat. Figure 1 illustrates how a sequential application might be scheduled on a single-core and then a dual-core computer.

To run multiple tasks in parallel, a program must be multi-threaded. The Windows operating system, like most modern operating systems, executes code on a thread. The OS schedules each thread to execute on a computer core. If your application runs code on one thread, it can run on only one core at a time. However, if your application uses multiple threads, the code on those threads can often be scheduled to run on different cores, all at the same time, as shown in Figure 2.
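The article makes this point conceptually rather than with code. As a rough illustration (not from the article), the following C++ sketch runs the same two independent pieces of CPU-bound work first on a single thread, then on two std::threads that the operating system is free to schedule on different cores. The busyWork function and its iteration count are placeholders invented for the example.

#include <chrono>
#include <iostream>
#include <thread>

// Placeholder for some CPU-bound work (invented for this example).
long long busyWork(long long iterations)
{
    long long total = 0;
    for (long long i = 0; i < iterations; ++i)
        total += i % 7;
    return total;
}

int main()
{
    using clock = std::chrono::steady_clock;
    using ms = std::chrono::milliseconds;
    const long long n = 200000000;

    // Sequential: both pieces of work run one after the other on the main
    // thread, so at most one core is ever busy.
    auto start = clock::now();
    long long a = busyWork(n);
    long long b = busyWork(n);
    auto sequential = std::chrono::duration_cast<ms>(clock::now() - start);

    // Parallel: each piece of work runs on its own thread, so the OS can
    // schedule the two threads on different cores at the same time.
    start = clock::now();
    long long c = 0, d = 0;
    std::thread t1([&] { c = busyWork(n); });
    std::thread t2([&] { d = busyWork(n); });
    t1.join();
    t2.join();
    auto parallel = std::chrono::duration_cast<ms>(clock::now() - start);

    std::cout << "sequential: " << sequential.count() << " ms (" << a + b << ")\n"
              << "parallel:   " << parallel.count()   << " ms (" << c + d << ")\n";
}

On a single-core machine the two timings come out roughly the same; on a multi-core machine the threaded version should finish in noticeably less time, which is exactly the behavior Figures 1 and 2 depict.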

Figure 1. Sequential Application: In the left-hand picture, the sequential program runs linearly, while on the right, even though it's running on multiple cores, it runs on only one at a time, thus its performance doesn't increase.

Figure 2. Multicore-Capable Application: On a single core (left image), a multicore program runs no faster than a sequential application; however, when more cores are available, the application can perform its work in less time (right image).

In Figure 2, notice how the application accomplishes the same amount of work in much less time on the dual-core machine, because some of the work occurs in parallel. The application splits its work into tasks and executes those tasks on different threads, so they can exploit both CPU cores.
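A minimal sketch of that task-splitting pattern, again invented for illustration in C++ (the data size and the summing workload are placeholders), divides one large job into per-core chunks, processes each chunk on its own thread, and then combines the partial results:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    // Illustrative workload: sum a large array of numbers.
    std::vector<double> data(50000000, 1.5);

    // Create one task per available core (hardware_concurrency can return 0,
    // so fall back to a single task in that case).
    unsigned taskCount = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(taskCount, 0.0);
    std::vector<std::thread> workers;

    std::size_t chunk = data.size() / taskCount;
    for (unsigned i = 0; i < taskCount; ++i)
    {
        std::size_t begin = i * chunk;
        std::size_t end = (i == taskCount - 1) ? data.size() : begin + chunk;
        // Each worker thread sums its own slice of the data.
        workers.emplace_back([&, i, begin, end] {
            partial[i] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }

    // Wait for all the tasks, then combine their partial results.
    for (auto& w : workers)
        w.join();
    double total = std::accumulate(partial.begin(), partial.end(), 0.0);

    std::cout << "total = " << total << '\n';
}

The important design point is the same one the figures make: the work only finishes faster because it has been divided into independent tasks that the operating system can schedule across the available cores.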


