Control Transaction Boundaries Between Layers

When developing a software solution on a three-layered architecture, an inevitable mismatch emerges between business transactions and system transactions. Business transactions stem from application requirements and are defined at the application/business layer, while system transactions (e.g., database transactions) are tied to a specific technology (SQL Server, MSMQ, etc.). No technology defines a business transaction; it is simply a description of a business process that either fails or succeeds as a whole. This description is typically given in domain-specific terminology and is best expressed in terms of object collaboration.

Since the layer where business transactions conceptually live has no direct access to the database, system transactions do not naturally map to business transactions.

In fact, the three-layered architecture requires you to encapsulate data-access code into a data layer to remove the database-access details from the business layer. You then use object-relational mapping routines (unless you use DataSets) on the border of the two layers to move data back and forth between the database and business objects.

Object-relational mapping routines undoubtedly make things easier by providing a bridge between objects and the database. However, they do not provide a robust, transparent way to define and control transactional boundaries at the business layer (it's just not their responsibility). If you do not anticipate this problem during your application design, your nice layered architecture will break once transactional requirements pop up. For instance, connection and database-related details (MSMQ details, and so on) creep from the data layer into the business layer, typically when ADO/ADO.NET connection and transaction objects are passed around among the business entities (an error-prone practice).

A naïve, yet typical solution to the problem is hard-coding the dependency order among objects at design time. This is a bad idea for all but the simplest applications, because object collaboration paths vary across different application tasks. An object can be the root of a business transaction in one situation and a child in another, depending on the scenario. Whenever a role switches, you'll have to chase down all the hard-coded assumptions in your code.

For these reasons, one of the main concerns application architects have is ensuring ahead of time a robust, manageable framework for applications to map database transactions to business transactions. This way, each business object can focus on its own tasks, committing or rolling back its own job, oblivious to its transactional context.

The COM+ Option
A powerful, yet easy way to define transactions at the business-layer level is using COM+ declarative transactions. You just flag your business objects with the proper transactional requirements, and the COM+/DTC infrastructure transparently enlists database (and possibly MSMQ) connections into a single unit of work. So you don't have to coordinate the work of the data layer; each data layer component opens its own connection and fires data changes to the database. The DTC will commit or roll back the business transaction as a whole at the application layer.
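As a sketch of this declarative style (the class and method names below are illustrative, not taken from any real application), a business component only states its transactional requirement via attributes, and Enterprise Services and the DTC do the rest:

```csharp
using System.EnterpriseServices;

// Illustrative sketch only: OrderProcessor and PlaceOrder are made-up names.
// The [Transaction] attribute declares the requirement; COM+ creates the
// DTC transaction and enlists every connection opened downstream.
[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent
{
    // [AutoComplete] votes to commit on normal return and to abort
    // if the method throws an exception.
    [AutoComplete]
    public void PlaceOrder(int customerId)
    {
        // Call data-layer components here; each opens its own connection,
        // and the DTC coordinates the overall outcome.
    }
}
```

This is a declarative fragment: it requires registration in a COM+ application to run, so it is shown for shape only.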

This approach has its benefits and liabilities. The benefits include:

  • It lets you compose business components into transactional units easily. You can plug new components into the business transaction without intervening in the existing ones.
  • DTC-based transactions can span process and machine boundaries.
  • DTC-based transactions can handle work distributed to more than one database server as a single transaction.

The liabilities include the following:

  • DTC-based transactions impose a performance overhead on the data-access layer, about 30-40 percent on average, because they are designed to handle transactions across different databases (more exactly, across different resource managers).

I’ve personally used the COM+ approach often while working with VB6 in the pre-.NET era. I gave up some performance to gain more flexibility in my application design, even when no distributed databases were involved.

Thanks to .NET, you can now easily design a small framework that the data-access layer uses to accommodate transparent management of database transactions without involving the DTC. This way, you can keep your focus on the business transaction scope without incurring the overhead that COM+ transactions impose.

Enter the Connection Broker
The basic idea behind a connection broker is enabling you to roll your own library that centralizes and manages access to database resources (like the DTC does). The approach this article proposes defines two main classes that mediate the application code’s access to the actual connection and transaction objects. It doesn’t try to bring two or more connections into the same transaction as the DTC does.

The two classes are a SmartConnection class and a SmartTransaction class. The former implements IDbConnection and the latter IDbTransaction, so that they are (almost) indistinguishable from the classes the different .NET managed providers expose. Specifically, they will manage a single connection (and its associated transaction) for each execution scope. A proper algorithm filters and monitors the access sequence to database resources, so that only the calls placed by the logical root object are actually forwarded to the underlying connection and transaction objects.

The business and data-access layers will use these two classes instead of the usual ones from .NET managed data providers. The SmartConnection class holds an internal reference to an actual connection, and the SmartTransaction class holds an internal reference to a transaction object. These classes blindly forward method calls via these internal references, except for the Open, Close, Commit, and Rollback methods. Within these four methods, the two classes work their magic to behave as database connection and transaction brokers within a given execution scope.

Before delving into the implementation details, some explanation regarding the term "execution scope": it encompasses, in its widest sense, all the work that different threads running on separate processes or machines perform. In a generic scenario, the transaction flow is a logical thread of execution that spans two or more processes. For example, consider a transaction that starts when a client calls into component C1 running on thread T1 in process P1. If component C1 creates and calls into a remote object C2 running in process P2, component C2 won't of course share the same process or the same thread with the caller. However, C1 and C2 share the same logical thread of execution (see Figure 1). COM+ can actually trace such an execution pattern, and it guarantees that the DTC-based transaction will flow across process (or network) boundaries.

Figure 1: Logical Thread of Execution

This article doesn’t cover the scenario above (even though it may come up in some large applications). Rather, it concentrates on the most common execution pattern in a three-layered application: the entire job performed by business and data objects hosted in a single process. The process hosting the business and data objects is a highly multithreaded environment, yet it guarantees thread isolation among objects activated by different clients, no matter which remoting technology you choose (Enterprise Services, .NET Remoting, or Web services). Each remote call executes within a random thread picked from a thread pool (the .NET thread pool or the MTA thread pool) (see Figure 2).

Figure 2: Invoking Objects Remotely

Typically, the root business object executes its tasks by creating and calling into other business and data objects. If you don’t explicitly create a new thread, these new objects will all share the same thread as the original one that received the remote call. As you can see, even in a highly multithreaded environment, a thread-level scope of execution is sufficient for typical three-layered applications. This is exactly the scope the example connection broker in this article supports.
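A minimal sketch can confirm this thread-sharing behavior (RootObject and ChildObject are illustrative names, not part of the article's download): an object created during a call, without explicitly spawning a thread, runs on its creator's thread.

```csharp
using System;
using System.Threading;

// A "child" object simply reports which managed thread it runs on.
class ChildObject
{
    public int DoWork() => Thread.CurrentThread.ManagedThreadId;
}

// The "root" creates a child and calls into it, without creating a thread.
class RootObject
{
    public bool ChildSharesMyThread()
    {
        int myThread = Thread.CurrentThread.ManagedThreadId;
        var child = new ChildObject();      // no explicit new thread
        return child.DoWork() == myThread;  // same execution scope
    }
}

class Program
{
    static void Main()
    {
        Console.WriteLine(new RootObject().ChildSharesMyThread()); // True
    }
}
```

This is the property the broker relies on: ThreadStatic state set by the root is visible to every child it creates in the same call.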

Author’s Note: Because connection objects cannot be marshaled by reference across processes and computers, COM+-based transactions are a must for transactions spanning more than one process. In fact, you can’t extend the proposed connection broker solution to handle this scenario.

Implementation Details
The downloadable code accompanying this article provides the entire implementation for the SmartConnection and SmartTransaction classes, along with an AuthorsBooks business object and a couple of data objects (DoAuthors and DoBooks). Note that the Author-Book business entity is implemented using an enhanced DataSet, which encapsulates its own validation logic.

Let’s examine the SmartConnection class. To separate database resources at the thread level, mark the underlying connection object that the SmartConnection class manages by using the ThreadStatic attribute. One would be tempted to write code such as the following:

[ThreadStatic()]
private static IDbConnection ms_RealConn =
    new SqlConnection();

This won’t work, because the CLR assigns the initial value only in the first thread that references the ms_RealConn variable. Whether this is a bug or not, you can work around the issue by modifying the code as shown below. Additionally, make sure that no code accesses the ms_RealConn_private variable directly, but only through the ms_RealConn property getter:

[ThreadStatic()]
private static IDbConnection ms_RealConn_private;
private static IDbConnection ms_RealConn
{
  get
  {
    if (ms_RealConn_private == null)
      ms_RealConn_private = new SqlConnection();
    return ms_RealConn_private;
  }
}
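The per-thread behavior of this pattern can be sketched in isolation (Holder is an illustrative stand-in for the connection-owning class): every thread that reads the property lazily gets its own instance.

```csharp
using System;
using System.Threading;

// Illustrative stand-in: a plain object plays the role of the connection.
class Holder
{
    [ThreadStatic] private static object ms_Value_private;

    public static object Value
    {
        get
        {
            if (ms_Value_private == null)
                ms_Value_private = new object(); // lazy, once per thread
            return ms_Value_private;
        }
    }
}

class Program
{
    static void Main()
    {
        object mainCopy = Holder.Value;
        object otherCopy = null;
        var t = new Thread(() => otherCopy = Holder.Value);
        t.Start();
        t.Join();
        // Different threads get different instances...
        Console.WriteLine(ReferenceEquals(mainCopy, otherCopy));    // False
        // ...while the same thread always sees the same one.
        Console.WriteLine(ReferenceEquals(mainCopy, Holder.Value)); // True
    }
}
```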

No synchronization is required, since the variable has thread scope. In the downloadable code included with this article, the SmartConnection class uses another framework-level component, named DataAccess, which returns only provider-independent resources (i.e., IDbConnection instead of, say, SqlConnection). As shown in the code snippet below, a basic implementation could use a ThreadStatic counter, so that the Open method opens the connection only when the counter is zero. Likewise, the Close method closes the underlying connection only if the counter is equal to one:

public class SmartConnection : IDbConnection
{
  ...
  public void Open()
  {
    if (ms_counter == 0)
      ms_RealConn.Open();
    ms_counter += 1;
  }

  public void Close()
  {
    if (ms_counter == 1)
      ms_RealConn.Close();
    ms_counter -= 1;
  }
  ...
}
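To make the counter-based scoping tangible, here is a self-contained sketch (FakeConnection and CountingBroker are illustrative names; a fake connection stands in for SqlConnection and simply records physical Open calls):

```csharp
using System;

// Records how many times the "physical" connection is actually opened.
class FakeConnection
{
    public int OpenCalls;
    public void Open()  { OpenCalls++; }
    public void Close() { }
}

// Counter-based broker: only the outermost Open/Close pair touches
// the underlying connection.
class CountingBroker
{
    [ThreadStatic] static FakeConnection ms_RealConn;
    [ThreadStatic] static int ms_counter;

    static FakeConnection RealConn =>
        ms_RealConn ?? (ms_RealConn = new FakeConnection());

    public void Open()
    {
        if (ms_counter == 0)
            RealConn.Open();   // only the root opens for real
        ms_counter += 1;
    }

    public void Close()
    {
        if (ms_counter == 1)
            RealConn.Close();  // only the root closes for real
        ms_counter -= 1;
    }

    public static int RealOpens => RealConn.OpenCalls;
}

class Program
{
    static void Main()
    {
        var root = new CountingBroker();
        var child = new CountingBroker();
        root.Open();
        child.Open();   // nested: no second physical open
        child.Close();
        root.Close();
        Console.WriteLine(CountingBroker.RealOpens); // 1
    }
}
```

Despite two broker instances and two Open calls, the physical connection opens exactly once per thread.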

The SmartTransaction class would handle transactions in a similar way. This approach has a serious drawback, however: you have no way to detect when a business or data object forgets, for example, to call Close after it called Open or, even worse, to call Commit (or Rollback) once it called BeginTransaction. In the latter case, the effects are really disastrous. For example, the root object could call Commit, but the transaction wouldn’t actually commit because the counter is not equal to 1, meaning the whole job is doomed to be rolled back by a transaction timeout later on, without any notification to the client.

To overcome this issue, replace the simple counter with a Stack object and assign a Guid to each SmartConnection instance to uniquely identify it. In the Open method, instead of incrementing the counter, push the specific SmartConnection Guid to the Stack:

public class SmartConnection : IDbConnection
{
  ...
  public void Open()
  {
    if (ms_RealConnStack.Count == 0)
      ms_RealConn.Open();
    ms_RealConnStack.Push(m_guid);
  }
  ...
}

In the Close method’s implementation, you try to match the Guid of the instance asking to close the connection with the Guid at the top of the Stack. If they match, everything is okay, and if the Stack is empty once you’ve popped the Guid, you close the underlying connection. If the Guid doesn’t match, the method raises an exception to signal that a child object called Open but forgot to call Close. Essentially, it’s just like pairing opening and closing brackets in a programming language. The following code shows the implementation of the Close method of the SmartConnection class:

public class SmartConnection : IDbConnection
{
  ...
  public void Close()
  {
    if ((Guid)ms_RealConnStack.Peek() != m_guid)
      throw new Exception("Invalid Sequence");
    ms_RealConnStack.Pop();
    if (ms_RealConnStack.Count == 0)
      ms_RealConn.Close();
  }
  ...
}
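The bracket-pairing check can be exercised in isolation (StackBroker is an illustrative, simplified stand-in that drops the real connection handling): a root object closing while a child's Guid is still on the stack triggers the exception.

```csharp
using System;
using System.Collections;

// Simplified sketch of the Guid/stack pairing; no real connection involved.
class StackBroker
{
    [ThreadStatic] static Stack ms_Stack;
    static Stack RealStack => ms_Stack ?? (ms_Stack = new Stack());

    readonly Guid m_guid = Guid.NewGuid();

    public void Open()
    {
        RealStack.Push(m_guid);
    }

    public void Close()
    {
        // Close must be called by the instance whose Guid is on top,
        // just like a closing bracket must match the innermost open one.
        if ((Guid)RealStack.Peek() != m_guid)
            throw new Exception("Invalid Sequence");
        RealStack.Pop();
    }
}

class Program
{
    static void Main()
    {
        var root = new StackBroker();
        var child = new StackBroker();
        root.Open();
        child.Open();
        try { root.Close(); }              // child forgot to call Close
        catch (Exception e) { Console.WriteLine(e.Message); }
    }
}
```

With the plain counter, the same mistake would go unnoticed until a transaction timeout; the stack surfaces it immediately.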

The implementation of the BeginTransaction, Commit, and Rollback methods follows the same logic to detect unmatched calls and sequence errors regarding the transaction start and outcome.

One of the last aspects to handle is detecting an incorrect call sequence within the same SmartConnection instance, such as when BeginTransaction is called before Open, or Open is called twice. You can easily account for this problem by defining an instance-level ConnBrokerStatusEnum value that the broker checks before manipulating the ThreadStatic objects (see Listing 1). The SmartTransaction‘s ms_doomed flag prohibits a parent object from calling Commit if a child object has called Rollback.
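A minimal sketch of such per-instance state checking might look like the following (ConnBrokerStatus and its members are illustrative; the article's actual enum appears in Listing 1):

```csharp
using System;

// Illustrative states; the real ConnBrokerStatusEnum is in Listing 1.
enum ConnBrokerStatus { Created, Opened, InTransaction, Closed }

// Each instance tracks its own state, so out-of-order calls fail fast
// before any ThreadStatic resource is touched.
class StatefulBroker
{
    ConnBrokerStatus m_status = ConnBrokerStatus.Created;

    public void Open()
    {
        if (m_status != ConnBrokerStatus.Created)
            throw new InvalidOperationException("Open called twice");
        m_status = ConnBrokerStatus.Opened;
    }

    public void BeginTransaction()
    {
        if (m_status != ConnBrokerStatus.Opened)
            throw new InvalidOperationException(
                "BeginTransaction called before Open");
        m_status = ConnBrokerStatus.InTransaction;
    }
}

class Program
{
    static void Main()
    {
        var broker = new StatefulBroker();
        try { broker.BeginTransaction(); } // wrong order: Open never called
        catch (InvalidOperationException e) { Console.WriteLine(e.Message); }
    }
}
```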

Small Glitch in the Classes
Finally, a small problem slightly breaks the polymorphism of the SmartConnection and SmartTransaction classes. In the data-layer code, you need to assign the connection object and eventually the transaction object to a command object at some point. While the signature of the command’s connection property accepts any object implementing IDbConnection, you cannot pass a SmartConnection instance. Each provider-specific command instance casts the provided connection internally, throwing an exception if it doesn’t receive a connection object of the same provider. Obviously, the same problem applies to the command’s transaction property.

Instead of exposing the internal connection and transaction objects, you can resolve the issue by implementing the following methods in the SmartConnection and SmartTransaction classes, respectively:

public void SetConnectionToCommand(
    IDbCommand p_command)
{
  p_command.Connection = ms_RealConnection;
}

public void SetTransactionToCommand(
    IDbCommand p_command)
{
  p_command.Transaction = ms_RealTransaction;
}
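A hypothetical data-object method (LoadAuthors and the SQL text are illustrative, not taken from the download) would then wire a command up like this:

```csharp
using System.Data;
using System.Data.SqlClient;

// Fragment only: SmartConnection and SmartTransaction come from the
// article's framework, so this does not compile stand-alone.
public DataSet LoadAuthors(SmartConnection p_conn, SmartTransaction p_tran)
{
    IDbCommand cmd = new SqlCommand(
        "SELECT au_id, au_lname FROM authors");

    // The brokers inject the real provider-specific objects, so the
    // SqlCommand's internal cast succeeds.
    p_conn.SetConnectionToCommand(cmd);
    p_tran.SetTransactionToCommand(cmd);

    DataSet ds = new DataSet();
    new SqlDataAdapter((SqlCommand)cmd).Fill(ds);
    return ds;
}
```

The data object never sees the underlying SqlConnection or SqlTransaction directly, preserving the broker's encapsulation.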

Business Transactions with Less Overhead
The connection broker approach to transaction management fits the bill in most three-layered architectures. It incurs much less overhead than transactions based on COM+, so consider it in your application design when transaction boundaries are contained within a single process and don’t involve access to more than one resource manager.

Don’t be afraid to adopt this approach simply because you might need COM+ transactions in the future. Provided you follow a stateless approach in your business-layer design, you can easily switch to COM+-based transactions later by inheriting your business objects from the ServicedComponent class and instructing the connection broker to ignore all calls related to transaction control.

