Parallel and Concurrency Futures for Microsoft Developers

Urging developers to build parallel-capable applications is useless until they have the tools to do so—and Microsoft intends to ensure that they have them.


Typical Uses

A typical .NET business developer will most likely use Parallel LINQ (PLINQ), and might use some of the Coordination Data Structures. And of course, any code that uses the .NET thread pool will automatically gain the benefit of the major .NET 4.0 thread pool enhancements (you'll see more about that later in this article). A typical native Windows business developer might also use various features of the Parallel Pattern Library (PPL), including the new algorithms, primitives, and types provided by that library. Native developers might also interact directly with the task scheduler in the Concurrency Runtime as they schedule and execute tasks within their application.
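To make that concrete, here is a minimal C# sketch of the PLINQ style of usage. The data set and the query are hypothetical; the only parallel-specific piece is the AsParallel call, which hands the query to PLINQ to partition across the available cores.

using System;
using System.Linq;

class PlinqSketch
{
    static void Main()
    {
        // Hypothetical in-memory data set; in a real application this
        // might be orders, log entries, or any other collection.
        int[] values = Enumerable.Range(1, 1000000).ToArray();

        // AsParallel() routes the query through PLINQ, which partitions
        // the source and runs the filter and projection across cores.
        long total = values.AsParallel()
                           .Where(v => v % 2 == 0)
                           .Select(v => (long)v * v)
                           .Sum();

        Console.WriteLine(total);
    }
}

The appeal for a business developer is that the query still reads like ordinary LINQ; the partitioning, scheduling, and merging of results are handled by PLINQ and the layers beneath it.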

However, the majority of the new features in Visual Studio 2010 are designed to support framework designers, enabling them to create components that support parallel computing. Only a couple of the new features are intended for direct use by developers building business applications. In other words, the features in Visual Studio 2010 are mostly about laying the low-level groundwork needed to build higher level features in the future. The long-term goal is to allow the creation of frameworks and components that provide parallelism and concurrent behaviors automatically, so that typical business developers don't have to deal with the complexities and potential pitfalls inherent in parallel computing. To accomplish this goal, it is important to have a solid base on which to build these components and frameworks.

At the lowest level, this means efficiently managing how work gets scheduled across cores, which implies a component that can coordinate work at the Windows process level, or perhaps even across all processes on a computer. The resource management service layer provides this capability at the Windows process level, and Microsoft may broaden the scope in future versions of the Windows operating system. The purpose of the resource manager is to map work to threads, ensuring that there are an appropriate number of threads for the number of cores available to the application, and to assign tasks to those threads efficiently. The resource management service layer exists for both managed and unmanaged code, so there may be two managers for applications that use a mix of native and .NET components.
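The resource manager itself is not a public surface that a typical managed developer calls directly. As a rough illustration of the kind of information it works from, the following sketch simply queries the core count and the managed thread pool's current thread budget; it shows the inputs involved, not the resource manager API.

using System;
using System.Threading;

class ThreadBudgetSketch
{
    static void Main()
    {
        // The number of cores the process can see; keeping roughly this
        // many threads busy is the resource manager's goal.
        Console.WriteLine("Cores: " + Environment.ProcessorCount);

        int minWorker, minIo, maxWorker, maxIo;
        ThreadPool.GetMinThreads(out minWorker, out minIo);
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);

        Console.WriteLine("Worker threads: min {0}, max {1}", minWorker, maxWorker);
        Console.WriteLine("I/O threads:    min {0}, max {1}", minIo, maxIo);
    }
}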



The resource manager API is pretty low-level, so the Concurrency Runtime includes a task scheduler that provides a higher level of abstraction. Native Windows developers will create instances of the task scheduler and use them to queue up tasks to be scheduled across multiple threads and cores. On the managed side, the .NET 4.0 thread pool has been reimplemented to leverage something similar to the Concurrency Runtime task scheduler. This means that the thread pool now relies on a lower-level resource manager to coordinate the use of threads across the entire Windows process, not just a single .NET AppDomain within that process. It also means that the thread pool now understands how to work with tasks as defined by the Task Parallel Library (TPL), as well as traditional queued work item delegates.
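As a minimal sketch of the two styles the reworked thread pool now has to serve, the following queues a classic work item delegate and starts a TPL task; both end up running on pool threads.

using System;
using System.Threading;
using System.Threading.Tasks;

class SchedulingSketch
{
    static void Main()
    {
        // Traditional usage: queue a delegate to the thread pool.
        ThreadPool.QueueUserWorkItem(state =>
            Console.WriteLine("Classic work item on a pool thread"));

        // TPL usage: a Task is scheduled onto the same reworked pool,
        // but it is a first-class object you can wait on or compose.
        Task task = Task.Factory.StartNew(() =>
            Console.WriteLine("TPL task on a pool thread"));

        task.Wait();
        Thread.Sleep(100); // crude pause so the queued work item can finish
    }
}

The point of the reimplementation is that both of these paths flow through the same scheduling and resource management machinery rather than competing with each other.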

While most .NET developers will use PLINQ and some of the high-level concepts provided by the TPL, others may use the thread pool directly, as they do today. Either way, their code will gain the benefit of these thread pool enhancements, because the TPL uses the thread pool, and higher-level features such as PLINQ use the TPL to do their work. The TPL, PLINQ, and other managed features use the Coordination Data Structures, so they all share a consistent set of thread-safe data structures. One challenge to overcome is that authors of frameworks and components often invent their own thread-safe data structures—and perhaps even their own thread or task management. The result is that different components and frameworks can't work together, or at least can't make efficient use of available thread scheduling or data structures.
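As a small illustration, the following sketch uses one of the Coordination Data Structures, ConcurrentQueue<T>. The workload is hypothetical, but the point is that any component in the process can enqueue or dequeue without inventing its own locking scheme.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class SharedStructuresSketch
{
    static void Main()
    {
        // A thread-safe queue from the Coordination Data Structures.
        var queue = new ConcurrentQueue<string>();

        // Several parallel iterations enqueue concurrently with no explicit locks.
        Parallel.For(0, 10, i => queue.Enqueue("message " + i));

        string message;
        while (queue.TryDequeue(out message))
        {
            Console.WriteLine(message);
        }
    }
}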

While splitting work into parallel tasks is a powerful technique, it can be counterproductive to create more worker threads than the computer has CPU cores, because the operating system can waste a lot of time switching the cores from one thread to another, slicing up time so each thread gets to run for at least a little while. It is far more efficient to have the same number of busy worker threads as the computer has cores, because then each thread can keep running without being switched out for another thread. For scenarios where many threads block on I/O or on locks, it is ideal to have more total threads, so that there are still roughly as many busy threads as the machine has cores. If all frameworks and components share a common thread pool or task scheduler and resource manager, then those low-level service layers can coordinate the use of threads across the entire application. That helps avoid over-saturating the operating system or hardware.
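As a rough sketch of the "as many busy threads as cores" guideline for CPU-bound work, the following caps the degree of parallelism at the core count; the workload itself is simulated.

using System;
using System.Threading;
using System.Threading.Tasks;

class SaturationSketch
{
    static void Main()
    {
        // Capping parallelism at the core count avoids creating more
        // busy threads than the hardware can actually run at once.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.For(0, 100, options, i =>
        {
            Thread.SpinWait(500000); // simulated CPU-bound work item
        });

        Console.WriteLine("Processed 100 items on up to " +
                          Environment.ProcessorCount + " cores");
    }
}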

Similarly, sharing thread-safe data structures enables better interoperability. If all frameworks and components use the same thread-safe collections, stacks, queues, and other data structures, then an application can safely pass references to those structures from component to component to share data in memory, regardless of the components or frameworks in use.

As you can see, most of the features that will be introduced in Visual Studio 2010 focus on enabling component and framework developers in both native and managed code to leverage parallelism in a way that is more abstract, performs better, and is more interoperable than in the past. Future components and frameworks from Microsoft and many other vendors, building on this base, will provide more accessible and productive parallelism to business developers. And this is just the beginning.
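As a closing illustration of that kind of interoperability, here is a minimal C# sketch: two hypothetical components that know nothing about each other beyond the shared, thread-safe BlockingCollection<T> instance they are handed.

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class InteropSketch
{
    // Hypothetical producer component.
    static void Produce(BlockingCollection<int> pipe)
    {
        for (int i = 0; i < 5; i++) pipe.Add(i);
        pipe.CompleteAdding();
    }

    // Hypothetical consumer component.
    static void Consume(BlockingCollection<int> pipe)
    {
        foreach (int item in pipe.GetConsumingEnumerable())
            Console.WriteLine("Received " + item);
    }

    static void Main()
    {
        var pipe = new BlockingCollection<int>();
        Task producer = Task.Factory.StartNew(() => Produce(pipe));
        Task consumer = Task.Factory.StartNew(() => Consume(pipe));
        Task.WaitAll(producer, consumer);
    }
}

Because both components depend only on a standard thread-safe type rather than a home-grown one, either side could come from a different framework without breaking the exchange.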



Rockford Lhotka is the author of several books, including the Expert Visual Basic .NET and C# Business Objects books. He is a Microsoft Software Legend, Regional Director, MVP, and INETA speaker. He is a columnist for MSDN Online and contributing author for Visual Studio Magazine, and he regularly presents at major conferences around the world, including Microsoft PDC, Tech Ed, VS Live!, and VS Connections. Rockford is the Principal Technology Evangelist for Magenic Technologies, a company focused on delivering business value through applied technology and one of the nation's premier Microsoft Gold Certified Partners. For more information go to www.lhotka.net.