System.Transactions and ADO.NET 2.0

If you use DataAdapters, love their automatic connection-opening and closing capabilities, and ever use transactions, you should first understand what's really going on under the covers.


Making TableAdapters Transactional
Before I cause any confusion: TableAdapters are to DataAdapters what strongly typed DataSets are to DataSets. In Visual Studio 2005, when you design strongly typed DataSets, you also have the option to add TableAdapters, which give you a convenient way of storing query and persistence logic right beside the DataTable.
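To make the analogy concrete, here is a minimal, hypothetical contrast between the two. The connection string is a placeholder, and the typed classes shown in the comment are the ones the designer generates later in this walkthrough:

using System.Data;
using System.Data.SqlClient;

class AdapterContrast
{
    static void Main()
    {
        // Placeholder connection string; substitute your own Northwind connection.
        string connStr = "Data Source=.;Initial Catalog=Northwind;Integrated Security=True";

        // Plain DataAdapter: you supply the query and manage the plumbing yourself.
        DataTable customers = new DataTable();
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Customers", conn))
        {
            adapter.Fill(customers);
        }

        // TableAdapter: the same query and the persistence logic are baked into
        // the strongly typed class the designer generates, for example:
        //   CustomersDataSetTableAdapters.CustomersTableAdapter tblAdap =
        //       new CustomersDataSetTableAdapters.CustomersTableAdapter();
        //   tblAdap.Fill(typedCustomers);
    }
}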

Let us quickly put together a data-driven console application leveraging strongly typed DataSets and TableAdapters. This application will demonstrate using TableAdapters to perform select and update operations on the Customers table in the Northwind database.

  1. Create a new console application called TblAdapTx.
  2. Within Visual Studio, go to the Server Explorer window. If it is not visible, you can show it with Ctrl+Alt+S, or from the View menu choose "Server Explorer."
  3. If you do not have a connection to the Northwind database already, go ahead and create one. Make sure the authentication information you specify has sufficient privileges.
  4. In your application, right-click the project, and from the shortcut menu choose "Add" and then "New Item." Choose "DataSet" and name it CustomersDataSet.xsd. Visual Studio 2005 creates a new strongly typed DataSet and opens it in Design view.
  5. From the Server Explorer, find the Customers table and drag it onto the surface of the strongly typed DataSet designer. You should now see a strongly typed DataSet with one DataTable and one TableAdapter created for you (see Figure 2).

     Figure 2: Simple Console Application: This simple console application contains a TableAdapter and DataTable in a strongly typed DataSet.

  6. To use the strongly typed DataSet and the newly added TableAdapter, add the following code to your console application:

     CustomersDataSet.CustomersDataTable customers =
         new CustomersDataSet.CustomersDataTable();
     CustomersDataSetTableAdapters.CustomersTableAdapter tblAdap =
         new CustomersDataSetTableAdapters.CustomersTableAdapter();

     using (TransactionScope txScope = new TransactionScope())
     {
         tblAdap.Fill(customers);
         customers.Rows[0]["ContactName"] = "Maria Velasquez";
         tblAdap.Update(customers);
         txScope.Complete();
     }

  7. Compile and run the application. It seems to work, but this is a really bad application.
If it works, how can it be a bad application? Well, when you are designing a transactional update application, your application needs to perform reliably in concurrent scenarios. In other words, if both you and I attempt to give the database conflicting instructions, one of us should be refused and one of us should succeed. Also, whatever mechanism the application uses to guarantee this must do so with the best possible performance.

Unfortunately, the above application will offer you terrible performance, and in the event of a conflict both parties will fail. This is why the application above is bad.

Now this is quite a tall claim, so let me take you behind the scenes of how this application works and prove it.

Dissecting the Application
The above code never opens the connection. The TableAdapter, and hence the underlying DataAdapter, opens the connection for you. Also, since both the "Fill" and "Update" operations are within the same transaction, the data is kept consistent for you between the Fill and Update operations.

But the TableAdapter and the underlying DataAdapter closed the connection after the data was filled. So with a closed connection, how can your application ensure that it will keep the data consistent between the Fill and Update operations? There seems to be something fishy going on here.

The answer lies in the fact that the connection really isn't closed! Connection pooling keeps it open. Worse, because your application thinks the connection is closed, it also thinks it has not locked any resources; in reality, the connection is still open and holds locks on the rows you read. The worst part is that you have no way to unlock those rows. They unlock only when you commit or roll back, which can get interesting in concurrent scenarios, as you will see shortly.

A picture is worth 1024 words, so set a breakpoint at the Fill statement. With the breakpoint set, let's see what connections are really open on the database. You can do this using the following T-SQL. Do note that dbid = 6 is Northwind in my case.

SELECT spid, status, program_name, cmd
FROM master.dbo.sysprocesses WITH (NOLOCK)
WHERE dbid = 6

Figure 3 shows the results of sp_who2 before Fill has executed, and Figure 4 shows the results of sp_who2 after Fill has executed.

 
Figure 3: Open connections before executing Fill.

Figure 4: Open connections after executing Fill.
As you can see from the figures, even after Fill has finished executing, you still have a connection with spid = 54 that is sitting there locking resources. Now add a partial class to your project that extends the definition of CustomersTableAdapter as shown below.



public partial class CustomersTableAdapter
{
    public bool IsConnectionOpen()
    {
        return Connection.State == System.Data.ConnectionState.Open;
    }
}

Right after "Fill", if you call IsConnectionOpen, it will indeed tell you that your SqlConnection is closed. So at this point, you have no control over the sysprocess with spid=54, which is now locking the entire Customers table.

You think that is bad? Wait until you run the above query right after the Update statement. You get yet another connection as shown in Figure 5.

 
Figure 5: Open connections after executing Update.
Now, because you have yet another database connection (a second resource manager) enlisted in the same transaction, your entire transaction's isolation level is bumped up to Serializable, and the transaction is promoted to MSDTC. So your transaction ends up being more expensive, especially in concurrent scenarios.
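You can actually watch this promotion from code. The following is just a sketch, reusing the earlier objects and assuming a using System.Transactions directive: the ambient transaction's DistributedIdentifier stays Guid.Empty while the transaction is still a lightweight local one, and becomes a real GUID once the transaction is promoted to MSDTC.

using (TransactionScope txScope = new TransactionScope())
{
    tblAdap.Fill(customers);

    // Still a local (PSPE) transaction at this point: prints an empty GUID.
    Console.WriteLine(
        Transaction.Current.TransactionInformation.DistributedIdentifier);

    customers.Rows[0]["ContactName"] = "Maria Velasquez";
    tblAdap.Update(customers);

    // If the Update enlisted a second connection as described above, this now
    // prints a non-empty GUID: the transaction has been promoted to MSDTC.
    Console.WriteLine(
        Transaction.Current.TransactionInformation.DistributedIdentifier);

    txScope.Complete();
}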

In a development environment, however, where you are working in a single-user scenario, you wouldn't even notice this. The Update will work because no second user is conflicting with your updates. Now consider what happens when a second user attempts to concurrently modify a row in the Customers table. Figure 6 shows the sequence of events.

 
Figure 6: System.Transactions and TableAdapters in concurrent environments.
As you can see from Figure 6, the first user executes Fill, and without his knowledge the underlying connection locks up all the rows for a default period of one minute (the transaction's timeout). Meanwhile, the second user reads from the very same table, opening a second connection.
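Incidentally, that one-minute window is only the default. If you are stuck with this pattern, a TransactionScope constructor overload lets you specify a shorter timeout explicitly; the sketch below reuses the earlier objects and assumes 15 seconds is acceptable for your scenario:

using (TransactionScope txScope = new TransactionScope(
           TransactionScopeOption.Required, TimeSpan.FromSeconds(15)))
{
    tblAdap.Fill(customers);
    // Locks acquired on behalf of this transaction are now held for at most
    // 15 seconds instead of the one-minute default.
    txScope.Complete();
}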

When the second user attempts to issue an update, the database has those rows locked, so the update attempt from user 2, which runs on connection #3, heads toward a command timeout, unless of course user 1 commits or rolls back his transaction before then. At the same time, the isolation level of the transaction has been bumped up to Serializable, exclusively locking the relevant rows. Locking is a bit like going to your favorite fast food restaurant's drive-thru window: you ask for what you want at the broken speaker, but what you get may be entirely different. In the scenario above, you can't be sure of exactly what is locked; that is the database's decision.

When user 1 attempts to save his changes on connection #4, he is deadlocked by connection #3, which is itself waiting on (and will eventually be timed out by) connection #1.

So let's get this straight: user 1 (connection #4) is deadlocked by user 2 (connection #3), which is blocked until timeout by user 1 (connection #1).

So user 1 is ultimately stuck behind user 1?

SQL Server will identify connection #4 as a deadlock victim, and user 1's update will fail. User 2's update will also fail, because it will eventually time out waiting on connection #1.

So the data didn't get corrupted, but you have a concurrency scheme that not only performs much worse than a simple mechanism using SqlTransaction, it also rejects both users' changes in the event of a conflict. The behavior gets exponentially worse as you add more users to the system. Note that this behavior is specific to a SqlConnection instance connected to SQL Server 2005, which supports PSPE (Promotable Single Phase Enlistment).

So What Is the Solution?
Well, a good lesson learned from this dissection is that you need to understand how things work behind the scenes. Ideally, resource managers should abstract all these details away from you, but you still need to understand what they do. As a best practice, you should avoid letting the DataAdapter manage the connection for you when using it with System.Transactions; pass in an open connection instead. Also, if possible, avoid using System.Transactions with DataAdapters at all. The example above used a TransactionScope, which commits or rolls back at the end of a using block. If you use CommittableTransaction instead, which gives you a raw transaction (you choose what and when to enlist, and when to commit or roll back), you could do some serious damage to your application.
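To illustrate the "pass in an open connection" advice, here is a minimal sketch that reuses the objects from the walkthrough. It assumes the generated Connection property is reachable from your calling code (it is within the same project, just as in the partial class shown earlier):

using (TransactionScope txScope = new TransactionScope())
{
    tblAdap.Connection.Open();   // open the connection yourself, up front

    try
    {
        tblAdap.Fill(customers);
        customers.Rows[0]["ContactName"] = "Maria Velasquez";
        tblAdap.Update(customers);
        txScope.Complete();
    }
    finally
    {
        tblAdap.Connection.Close();   // release it only when the work is done
    }
}

Because the adapter finds the connection already open, it leaves it open and reuses it for both Fill and Update, so only one connection enlists in the transaction.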

A better way to wrap TableAdapters in transactional behavior is to author a partial class with a BeginTransaction method. This technique gives you a convenient mechanism for wrapping all commands using one SqlTransaction object. See Listing 1 for an example.
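Listing 1 is not reproduced here, but the idea looks roughly like the sketch below. It assumes the members the Visual Studio 2005 designer generates for a TableAdapter (Connection, Adapter, and CommandCollection), all of which are visible to a partial class because it is part of the same type:

using System.Data;
using System.Data.SqlClient;

// Place this partial class in the same namespace as the generated
// CustomersTableAdapter (CustomersDataSetTableAdapters under your project's
// default namespace).
public partial class CustomersTableAdapter
{
    public SqlTransaction BeginTransaction()
    {
        if (Connection.State != ConnectionState.Open)
            Connection.Open();

        SqlTransaction transaction = Connection.BeginTransaction();

        // Enlist every SELECT command the TableAdapter owns.
        foreach (SqlCommand command in CommandCollection)
        {
            command.Connection = Connection;
            command.Transaction = transaction;
        }

        // Enlist the insert/update/delete commands, if they were generated.
        if (Adapter.InsertCommand != null)
        {
            Adapter.InsertCommand.Connection = Connection;
            Adapter.InsertCommand.Transaction = transaction;
        }
        if (Adapter.UpdateCommand != null)
        {
            Adapter.UpdateCommand.Connection = Connection;
            Adapter.UpdateCommand.Transaction = transaction;
        }
        if (Adapter.DeleteCommand != null)
        {
            Adapter.DeleteCommand.Connection = Connection;
            Adapter.DeleteCommand.Transaction = transaction;
        }

        return transaction;
    }
}

A caller can then invoke tblAdap.BeginTransaction(), perform its Fill and Update calls, and finish with Commit or Rollback on the returned SqlTransaction, so everything runs on a single connection and a single SqlTransaction.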

As you can see, this gives you a clean mechanism for enlisting all of the SqlCommands within a TableAdapter in the same transaction, without excessive locking or performance penalties.

Writing reliable code is not a choice; it is a necessity. As our lives get more and more automated, we trust software systems to do more and more critical jobs. Remember, the data your program manages is the reason you are writing the program, not the other way around; nobody produces data just because you feel like writing a program. It thus becomes critical to understand how to work with data in any application. In this article, I showed you a paradigm shift introduced by .NET 2.0: System.Transactions and ADO.NET 2.0 present exciting and interesting new ways of writing reliable code. The topic is bigger than I can cover in a single article, but I hope I have piqued your interest in this revolutionary way of writing reliable code. Happy coding.



Sahil Malik is a Microsoft MVP, INETA speaker, .NET author, consultant, and trainer, and a well-rounded, overweight geek. He has a passion for SharePoint 2007, data access, and application architecture. Sahil loves interacting with fellow geeks in real time; his talks are full of humor and practical nuggets, and tend to be highly charged, fast moving, and highly interactive. Be sure to check out his blog.