Effective Windows DNA Applications: Maximizing Performance with Object Pooling

Windows Distributed interNet Application (DNA) is an architecture for building n-tiered applications for the Microsoft Windows platform, based on the Microsoft Component Object Model (COM). Developers can generally choose the language in which they are most comfortable for building components, though you should be aware that not all languages are created equal. In this article we discuss the performance implications of development languages, as well as the impact of the database and the hardware on the overall performance and scalability of your application.





Examining the Misconceptions About MTS
As some of you may note, I am a frequent contributor to the MTS and COM+ newsgroups. I never cease to be amazed at the questions that appear on a daily basis as a result of the confusing information available for MTS. To quote Don Box, "No technology since the dawn of COM has been more misunderstood than MTS." If I could convey everything that I have come to understand about MTS in this article, I would. However, it would take several hundred pages and numerous code examples to adequately convey the understanding that I have arrived at.

With that said, let's move forward with the focus of this article.

Why Discuss the Misconceptions Surrounding MTS?
The experiences we have had implementing real-world, practical applications based on MTS have exposed shortcomings in the documentation, as well as in code samples provided by Microsoft and appearing in a number of third-party books. While the available material demonstrates the fundamental coding techniques necessary to get started, the in-depth information needed to build scalable systems with MTS is not readily available. This is not to say that all of the published material is incorrect, but it takes experience to separate the wheat from the chaff, and developers new to this framework simply do not have that experience yet.

MTS—Separating Fact from Fiction
Contrary to what the marketing literature would have us believe, MTS is not a cure-all. There are advantages to using MTS in some scenarios, as well as disadvantages. If you approach the development of a new project with an objective view based on an understanding of the technologies involved, then you are more likely to have a successful experience. Failing to understand the issues related to MTS (many of which are shared by COM+) that can have an adverse impact on performance and scalability can lead to frustration, and possibly failure of the project. Some of the most commonly misunderstood facets of developing with MTS are listed below:

MTS Applications Are Inherently More Scalable
Most developers working on n-tier applications are well-versed in coding transactions, either local (managed via a connection object) or via stored procedures. These transactions are normally performed with a transaction isolation level of 'Read Committed', which incurs minimal overhead due to contention for database resources. MTS works in conjunction with the Microsoft Distributed Transaction Coordinator (DTC) to manage transactions. In this environment, database connections are automatically enlisted in distributed transactions, which run at an isolation level of 'Serializable'. As any skilled developer can attest, serialization is BAD. A more common term is 'bottleneck', but no matter how you phrase it the end result is the same: a limit on system throughput at some level of usage.
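The throughput cost of serialization is easy to see in a toy model. The sketch below is illustrative only (a Python lock standing in for a serializable range lock, not real DTC or SQL Server behavior): when overlapping transactions must execute one at a time, total elapsed time grows linearly with the number of transactions instead of staying flat.

```python
import threading
import time

# Hypothetical toy model: a single lock plays the role of a serializable
# range lock, forcing overlapping "transactions" to run one at a time.
range_lock = threading.Lock()

def serializable_txn(work_seconds=0.05):
    with range_lock:          # every transaction waits for the previous one
        time.sleep(work_seconds)

def run(n):
    threads = [threading.Thread(target=serializable_txn) for _ in range(n)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

# Five 50 ms transactions serialize to at least 250 ms of wall-clock time,
# even though all five clients issued their requests concurrently.
elapsed = run(5)
```

Under 'Read Committed' the same five transactions could overlap and finish in roughly the time of one; that gap is the bottleneck the section describes.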

MTS Will Cut Up to 40% Off Development Time by Eliminating the Need to Code Transactions
The automatic enlistment of connections within distributed transactions is a powerful feature. It allows coordination of transaction boundaries across multiple databases, as well as across multiple components; these are powerful arguments for implementing the DNA framework. However, with distributed transactions comes the aforementioned issue of serialized data access. Many developers have little or no experience with the 'Serializable' transaction isolation level. If the database schema has not been designed with this in mind, then you will run into performance issues much sooner than expected. The time saved on initial development can be a drop in the bucket compared to the effort required to salvage a code base built on a schema that was not designed for this environment.

Object Pooling with MTS
No version of MTS running on Windows NT4 has ever implemented object pooling. Period. I realize that this statement contradicts publications by a number of noted "authorities" on MTS, as well as misinformation propagated in the newsgroups, but it is an indisputable fact.

JIT Activation / ASAP Deactivation
I recently read an explanation of JIT activation / ASAP deactivation that was so far removed from the actual implementation that I felt the need to address the topic. The root of the confusion stems from Microsoft documents that list object pooling as a feature of MTS. As I indicated above, this functionality was never implemented. However, many developers still lack a clear understanding of what JITA actually does under the hood, so I will use a simple object instantiation from a client to outline the flow of events. The client instantiates an object running within MTS, which activates the proxy/stub, not the object itself. When the client calls a method of the object, the Object Context is established, followed by the actual method invocation. If either SetComplete or SetAbort is called, the object is torn down when the public method goes out of scope, resulting in "ASAP deactivation". Objects deactivated in this fashion are referred to as "stateless", because all local data members are lost when the object goes out of scope; the proxy/stub remains, so the client "thinks" it is still connected. Failure to call SetComplete or SetAbort leaves the object "stateful", because the actual object (not just the proxy/stub) remains in memory and its local storage is preserved.
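The flow above can be sketched in miniature. This is a hedged toy model, not the COM/MTS machinery: the class names (`Proxy`, `RealObject`, `Context`) and the `set_complete` method are illustrative stand-ins for the proxy/stub, the real component, and the Object Context's SetComplete call.

```python
class Context:
    """Stand-in for the MTS Object Context (illustrative only)."""
    def __init__(self):
        self.done = False
    def set_complete(self):
        self.done = True

class RealObject:
    """Stand-in for the actual component instance."""
    def __init__(self):
        self.state = 0              # local data member, lost on deactivation
    def do_work(self, ctx):
        self.state += 1
        ctx.set_complete()          # signals "deactivate me ASAP"
        return self.state

class Proxy:
    """The client holds this; the real object may not exist yet."""
    def __init__(self):
        self._obj = None
    def do_work(self):
        if self._obj is None:       # JIT activation on first method call
            self._obj = RealObject()
        ctx = Context()
        result = self._obj.do_work(ctx)
        if ctx.done:                # ASAP deactivation: real object torn down
            self._obj = None        # proxy survives; client still "connected"
        return result

p = Proxy()
first = p.do_work()
second = p.do_work()
# Because set_complete was called, state resets between calls:
# both calls return 1, demonstrating why the object is "stateless".
```

Remove the `set_complete` call and the counter would climb across calls: that is the "stateful" case, with the real object pinned in memory.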

Database Connection Pooling
This is also a feature of ODBC (since 3.0) and the Microsoft Data Access Components (MDAC); components can utilize the services with or without MTS. The connection pooling feature is extremely beneficial in an MTS environment that has a fairly consistent rate of usage. However, in systems that experience intermittent usage there are frequently insufficient connections in the pool to satisfy the immediate demand. These connections must be opened against the database (which is an expensive activity) all at once, only for the majority to be discarded in short order. The connection pooling algorithm is very simplistic, and frequently results in the very connection thrashing it was designed to prevent. (As you shall see later, object pooling under COM+ can eliminate the inefficiencies associated with connection pooling.)
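A naive pool makes the burst problem concrete. The sketch below is illustrative, not the real ODBC/MDAC algorithm: every request that finds the pool empty pays the full cost of opening a new connection, so a sudden burst against an idle pool triggers a storm of expensive opens, exactly the thrashing described above.

```python
opened = 0  # counts the expensive open-against-the-database events

class Connection:
    def __init__(self):
        global opened
        opened += 1               # opening a connection is the costly step

class Pool:
    """A deliberately simplistic pool: reuse a free connection if one
    exists, otherwise open a new one on the spot."""
    def __init__(self):
        self._free = []
    def acquire(self):
        return self._free.pop() if self._free else Connection()
    def release(self, conn):
        self._free.append(conn)

pool = Pool()

# Burst of 10 simultaneous requests against an empty pool:
# all 10 connections must be opened at once.
burst1 = [pool.acquire() for _ in range(10)]
for c in burst1:
    pool.release(c)

# A second burst of 10 reuses every pooled connection: zero new opens.
burst2 = [pool.acquire() for _ in range(10)]
```

With intermittent traffic, the idle connections between bursts are typically aged out of the pool, so each new burst pays the full opening cost again.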

MTS Manages Connection Cleanup
If your goal is to be the skipper of the Titanic, then don't bother to properly close and release the database connections. MTS doesn't eliminate the need for good programming practices; if anything, it heightens the importance because the components are server-based. Applications running on a server need to be more robust than a desktop application. Rebooting a client workstation is an inconvenience, but the need to frequently reboot a server can be extremely expensive. To minimize resource leaks you should religiously free any resources that you have allocated, especially database connections.
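The cleanup discipline urged here has a standard shape: release in a `finally` block so that a failure mid-method cannot leak the connection. The sketch below uses a hypothetical `Connection` class for illustration; the pattern, not the class, is the point.

```python
class Connection:
    """Hypothetical stand-in for a database connection handle."""
    def __init__(self):
        self.closed = False
    def execute(self, sql):
        pass                      # real work would happen here, and may raise
    def close(self):
        self.closed = True

def do_business_work(conn_factory):
    conn = conn_factory()
    try:
        conn.execute("UPDATE ...")  # any step here may fail
        return conn
    finally:
        conn.close()              # runs even on error: no leaked handle

conn = do_business_work(Connection)
# The connection is closed on every exit path, success or failure.
```

In VB components of the era the equivalent was closing and setting the connection to Nothing in the error handler as well as the normal path; the principle is identical.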

Objects Perform Better When Running Under MTS
This statement qualifies as an urban legend among the Microsoft development community. How this father of all misconceptions came to be accepted as fact is beyond me, as simple logic readily reveals the shortcomings of this train of thought. All objects running within MTS are wrapped within an Object Context, which incurs overhead not only for the initial instantiation but for every method call. The amount of overhead varies, with the most expensive (in terms of time) being calls to a remote Server Package that is participating in an MTS-managed transaction. The principal goal of middleware is not to provide the best performance; rather, it is two-fold: a) to efficiently share a limited set of resources among a much larger pool of users, and b) to manage the complexity of deployment and system maintenance. For anyone wishing to examine performance characteristics, I recommend starting with WinDNAPerf.exe from the Platform SDK.
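The per-call cost can be modeled with a simple interception wrapper. This is a hedged sketch, not the actual COM/MTS interception machinery: the wrapper stands in for the Object Context, and the counter stands in for the context setup and teardown performed around each invocation.

```python
class BusinessObject:
    """Stand-in for a raw, unconfigured component."""
    def work(self):
        return "done"

class ContextWrapped:
    """Stand-in for an Object Context wrapping a configured component:
    every call passes through the wrapper before reaching the object."""
    def __init__(self, inner):
        self._inner = inner
        self.interceptions = 0
    def work(self):
        self.interceptions += 1   # per-call context setup/teardown cost
        return self._inner.work()

raw = BusinessObject()
wrapped = ContextWrapped(BusinessObject())

results = [wrapped.work() for _ in range(3)]
# Three calls, three interceptions: the overhead scales with call volume,
# while the raw object pays it zero times.
```

The wrapped object returns identical results, just never faster; with marshaling to a remote Server Package and a distributed transaction on top, the gap only widens.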

Why Benchmark COM+ / Object Pooling?
Of all the long-awaited features of Windows 2000, the one at the top of my list has been COM+ object pooling. Despite all of the changes incorporated into Win2K, the behavior of COM+ as compared to MTS has not changed radically. While the rough edges have been knocked off and performance is a little better, the same underlying issues that impacted the scalability of MTS are still lurking.

After the Windows DNA 2000 Readiness Conference in Denver, I returned to the office with an understanding of the impact that object pooling could offer. This did not stem from spending hours in sessions on the advantages of object pooling, nor even the advantages of COM+. If anything, what brought me to this conclusion was learning how the most impressive Microsoft benchmarks actually minimize resource thrashing. Lon Fulton presented tuning techniques employed in the Doculabs benchmarks (Web App Server Shoot-Out, PC Week, July 11, 1999): the VC++ implementation utilized an ISAPI extension DLL that pre-allocated all database connections and memory buffers, and database "transactions" utilized implicit commits in order to minimize overhead. Also, the ODBC API was used (instead of OLE DB or ADO, the data access technologies being pushed by Microsoft) as it provided better throughput.

Note that MTS was used in the VB benchmark, though the components were installed in a Library Package in order to maximize performance. Based on my knowledge of the characteristics of VB components within MTS, I question whether these components were using MTS transactions. It is not likely that the MTS objects could have delivered roughly two-thirds the throughput of the multi-threaded ISAPI extension if transactions of any style (much less distributed ones) were employed.

Another important factor pointed me toward the potential benefits of object pooling, particularly with transactional components. This was the information contained in the Full Disclosure Report (FDR) of the recently announced TPC-C benchmarks. Once again I found pre-allocation of resources and buffering techniques, along with the ODBC API. In the code listings (see Appendix A of the FDR) I did note that transactions were employed within stored procedures.

All information that I could find pointed to nailed-up resources offering a significant advantage. I realize that this is contradictory to information appearing in MSJ over the last year or so, but I performed an initial round of benchmarks that demonstrated the clear advantage of object pooling.
