Store and Retrieve XML-based Business Objects with SQL Server and .NET (Part II)

Improve performance and gain convenience by using custom business objects serialized to and from XML rather than forcing your presentation-tier developers to work directly with DataSets.


Efficiency Considerations
XML serialization is an efficient process, except for the first time you initialize the XmlSerializer object. After that, the serialization of a custom object to XML is extremely fast, and the overhead is usually insignificant. The big advantage of this method is that you can commit the entire order within a stored procedure executed right at the database. Submitting an order using the DataAdapter's Update method is equivalent to executing a separate INSERT statement for each detail line; for a large order, this may result in hundreds of trips to the database. You can avoid these unnecessary trips by preparing a stored procedure that acts on the nodes of an XML document passed to it as an argument.
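To make the process concrete, here's a minimal sketch of the client side, assuming a simplified Order class and a stored procedure named InsertOrderXML (both are placeholders, not the actual identifiers used in the sample code):

Imports System.IO
Imports System.Data
Imports System.Data.SqlClient
Imports System.Xml.Serialization

' Simplified stand-ins for the article's business objects.
Public Class OrderDetail
    Public ProductID As Integer
    Public Quantity As Integer
    Public UnitPrice As Decimal
End Class

Public Class Order
    Public CustomerID As String
    Public Details() As OrderDetail
End Class

Public Module OrderSubmitter
    Public Sub SubmitOrder(ByVal ord As Order, ByVal connString As String)
        ' Serialize the business object to an XML string.
        Dim serializer As New XmlSerializer(GetType(Order))
        Dim writer As New StringWriter()
        serializer.Serialize(writer, ord)

        ' Pass the entire document to the stored procedure as one argument:
        ' a single round trip, no matter how many detail lines the order has.
        Dim conn As New SqlConnection(connString)
        Dim cmd As New SqlCommand("InsertOrderXML", conn)
        cmd.CommandType = CommandType.StoredProcedure
        cmd.Parameters.Add("@OrderXML", SqlDbType.NText).Value = writer.ToString()
        conn.Open()
        Try
            cmd.ExecuteNonQuery()
        Finally
            conn.Close()
        End Try
    End Sub
End Module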

The sample code includes a VB.NET application that creates random orders and submits them to the Northwind database using ADO.NET objects (see the IdentityTransactions project included in the sample code). Figure 1 shows the application's main form. The Add Order button creates a random order (an order with 3 to 20 detail lines containing random product IDs and random quantities). The Show New Orders button displays the new order in the Output window. Finally, the Commit Order button submits the new order to the database. The program also lets you create multiple orders and submit them to the database at once; the code submits each order in the context of a separate transaction, of course.

Figure 1: The IdentityTransactions project demonstrates how to perform transactional updates with DataAdapters.
The IdentityTransactions project uses a DataSet with two DataTables, Orders and Order Details, and populates them with random data. It then initiates a transaction on a Connection object, first calls the Update method of the DataAdapter that corresponds to the Orders table to insert the order's header, and then calls the Update method of the DataAdapter that corresponds to the Order Details table to insert the order's details. I won't discuss the project's code here, but you can open it with Visual Studio and read the comments throughout the code. Listing 3 shows the code of the Commit Order button, which submits the order created by the Add Order button.
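In outline, the transactional update looks like this (a condensed sketch with illustrative names; Listing 3 contains the project's actual code):

' Assumes ds is a DataSet holding the Orders and Order Details DataTables,
' and that ordersAdapter and detailsAdapter are DataAdapters configured
' with InsertCommands for the two tables (illustrative names).
Dim conn As New SqlConnection(connString)
conn.Open()
Dim trans As SqlTransaction = conn.BeginTransaction()
' Every command the adapters execute must enlist in the transaction.
ordersAdapter.InsertCommand.Transaction = trans
detailsAdapter.InsertCommand.Transaction = trans
Try
    ' Insert the order's header first, then its detail lines.
    ordersAdapter.Update(ds, "Orders")
    detailsAdapter.Update(ds, "Order Details")
    trans.Commit()
Catch ex As Exception
    ' If any detail line fails, undo the header as well.
    trans.Rollback()
    Throw
Finally
    conn.Close()
End Try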

I timed the two approaches, hoping that the overhead of the XML serialization process wouldn't introduce a substantial performance penalty, and was surprised to find that the XML-based approach is considerably faster for all but the smallest orders. Table 1 shows the average time (in milliseconds) to commit each detail line of an order. The first column is the number of detail lines in the order, and the other two columns show the time it took to commit each detail line (the time taken to commit the entire order divided by the number of detail lines in the order). To time the operation of committing an order to the database, I created 500 random orders with the same number of detail lines and took the average. The Orders and Order Details tables of the Northwind database grew quite large during the tests, so the measurements weren't computed against a (practically) empty database.
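The per-line figures were derived along these lines (a sketch, not the article's actual timing code; CreateRandomOrder and SubmitOrder are hypothetical helpers):

Const Runs As Integer = 500
Dim detailLines As Integer = 100      ' detail lines per order in this run
Dim start As DateTime = DateTime.Now
For i As Integer = 1 To Runs
    SubmitOrder(CreateRandomOrder(detailLines), connString)
Next
Dim totalMs As Double = DateTime.Now.Subtract(start).TotalMilliseconds
' Average time per detail line = total time / (orders * lines per order).
Console.WriteLine("ms per detail line: " & (totalMs / (Runs * detailLines)).ToString("0.00"))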

Table 1: The average time (in milliseconds) required to commit orders to the Northwind database.
Number of detail lines    DataAdapter (ms per line)    OpenXML (ms per line)
5                         4.07                         7.58
10                        3.76                         4.54
25                        3.20                         2.42
50                        2.93                         1.41
100                       2.63                         1.27
500                       2.58                         0.73
1000                      2.74                         0.49
1500                      3.61                         0.54

You won't be able to create large orders against the Northwind database as shipped, because its Products table contains only 77 products, and the order ID together with the product ID form a unique key in the Order Details table (in other words, an order can't contain two detail lines that refer to the same product). I had to create random products and random customers to build even a moderately sized database. As you can see, the XML-based approach performs better as the number of detail lines in the order increases, and here's why.

The first order involves substantial overhead: a temporary assembly is created (the XmlSerializer class requires it) and the application establishes a connection to the database. Committing the first order to the database takes a few seconds. After that, orders are created and committed to the database within a few dozen milliseconds, depending on the number of detail lines they contain. Using the DataAdapter's Update method takes up to twice as long for orders with up to 100 detail lines, and up to seven times as long for very large orders. I used a low-end server to time the operations (a RAID 5 system with a single 1 GHz processor and 1 GB of RAM), so the absolute times are meaningless; only their ratios matter.
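The temporary-assembly cost is paid once per serialized type, so keeping a single XmlSerializer instance around confines the hit to startup. (The framework itself caches the generated assembly for this constructor overload; the shared field below is a common idiom that makes the intent explicit, not something taken from the article's code.)

Public Class OrderSerializer
    ' Shared field: the code-generation hit occurs once, when the type loads.
    Private Shared ReadOnly Serializer As New XmlSerializer(GetType(Order))

    Public Shared Function ToXml(ByVal ord As Order) As String
        Dim writer As New StringWriter()
        Serializer.Serialize(writer, ord)
        Return writer.ToString()
    End Function
End Class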

The XML-based approach performs better with larger orders and, for very large orders, it can commit each detail line in a few tenths of a millisecond. This is very impressive, and in retrospect it shouldn't surprise us. The stored procedure can insert hundreds of detail lines with a single INSERT statement, while any other technique that uses ADO.NET objects must execute an INSERT statement for every detail row. The advantage of performing the transaction from within a stored procedure more than offsets the overhead of XML serialization.
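The heart of such a stored procedure is an INSERT ... SELECT over OPENXML. The batch below, sent from VB.NET for illustration, shows the shape of that statement; the table, column, and element names are assumptions based on the Order class sketched earlier, not the article's actual procedure:

' conn is an open SqlConnection; orderXml holds the serialized order and
' newOrderID the header's identity value (both assumed from earlier steps).
Dim sql As String = _
    "DECLARE @doc int" & vbCrLf & _
    "EXEC sp_xml_preparedocument @doc OUTPUT, @OrderXML" & vbCrLf & _
    "INSERT INTO [Order Details] (OrderID, ProductID, Quantity, UnitPrice)" & vbCrLf & _
    "  SELECT @OrderID, ProductID, Quantity, UnitPrice" & vbCrLf & _
    "  FROM OPENXML(@doc, '/Order/Details/OrderDetail', 2)" & vbCrLf & _
    "       WITH (ProductID int, Quantity int, UnitPrice money)" & vbCrLf & _
    "EXEC sp_xml_removedocument @doc"

Dim cmd As New SqlCommand(sql, conn)
cmd.Parameters.Add("@OrderXML", SqlDbType.NText).Value = orderXml
cmd.Parameters.Add("@OrderID", SqlDbType.Int).Value = newOrderID
' One round trip inserts every detail line in the document.
cmd.ExecuteNonQuery()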

Changing Views of Data
To sum up, the XML-based approach is both faster and more elegant than any strictly ADO.NET approach based on DataSets and DataAdapters. A DataSet isn't the most accurate representation of a business object (an order in our case). The Orders class, on the other hand, is a very accurate model of an order. It contains all the information about the order and you don't have to worry about relating an order header to its associated order details. The order's ID becomes irrelevant, as it should, because it's a meaningless value generated by the database whose sole purpose is to establish a relation between the Orders and Order Details tables. At the presentation tier, it's much more intuitive, convenient, and robust to work with an Order object than a DataSet with two related tables. The fact that it's also faster is a pleasant bonus.

In the days of COM+, developers' efforts in the middle tier centered on dismantling Recordsets into business objects and passing collections of objects between layers. One positive side effect of using business objects to pass information between layers is that developers have to design the business objects and implement methods that expose the required functionality. Designing functional business objects requires a thorough understanding of the business processes modeled in the code. Those efforts resulted in applications with a solid architecture and distinct layers of functionality. But architectural issues shouldn't depend on the data access mechanism: regardless of whether applications pass data from the data tier to the middle tier as Recordsets or DataSets, the presentation tier's code should manipulate objects that represent specific business entities, as discussed in this article series.

Of course, not all applications will benefit from this approach, and no tool will address every requirement. The approach suggested here works nicely when it comes to submitting updates that involve parent/child tables, especially when you're inserting (or updating) hundreds of rows.

As XML becomes the glue between all of the technologies that make up the .NET Framework, I expect to see major changes in many of the technologies we use today. It's likely that a future version of SQL Server will let you define custom classes at the level of SQL Server and populate instances of these classes with T-SQL statements. XML support was added to SQL Server as an afterthought. If XML is the best method of exchanging data between databases, why shouldn't it also be the main mechanism for moving data in and out of a database? XPath and XQuery (XML query languages) are already close to SQL and may well become alternatives to it. After all, SQL was originally conceived and designed more than three decades ago.

In my opinion, our view of databases will soon change from a row/column organization (the relational view) to collections of business objects (a hierarchical view). Until this happens, you can take advantage of XML serialization to transform relational data into hierarchies of business objects in middle-tier components and provide presentation-tier developers with a better, more abstract view of the database. I'm not suggesting going back to hierarchical databases, but it's very convenient to be able to determine the format in which the database returns data, including custom formats that can be easily mapped to business objects.



Evangelos Petroutsos is a longtime VB developer. When he's not writing code, he writes programming books (most published by Sybex) and articles. Reach him by e-mail.