
A Pure Object-oriented Domain Model by a DB Guy, Part 5


As I write this, four parts have been published in this series (see the left column). Now is the time to focus on how to access the database.

As I said, what we are going to talk about today is the Persistence Access layer, found slightly to the right in Figure 1.

Those of you who are familiar with parts 1-4 will probably notice that I’ve changed the name of one of the layers in Figure 1. My old Application layer is now called Service layer instead so as to more closely follow Martin Fowler’s names in his PoEAA book [5]. By this I mean that the classes in my old Application layer were following the Service Layer pattern and so I found that Service layer would be a better layer name.

Figure 1 The layer model

The purpose of the Service layer [5] pattern is to provide a service-based view on a Domain Model [5]. It’s also the case that the Service layer classes will often contain some functionality of their own, and not only be a pass-through to the Domain Model.

I have to admit that changing the name of that layer after so many years was a painful decision to take, but I think it is important to adjust to what I think is, or at least will become, the de facto name standard.

The Persistence Access layer itself is pretty similar to what I had in my book [7] two years ago. The most important change is that I’m now working with a rich domain model.

Talking of changes, I’m also considering getting rid of the Shared methods in the Persistence Access layer. (The reason for doing this is that using Shared methods takes away the possibility of using interfaces and the power of inheritance. I think interfaces and inheritance come in handy here as a solution not only for extending the functionality but also for extending testability.) But I’ll leave this for now, and will come back to it next time.

What’s the Problem?
So, before we get started, let’s think for a minute about the problems we are trying to solve with this layer today. First of all, the main thing that we are trying to achieve with the Persistence Access layer is to encapsulate details about the database. That is what this kind of layer is for.

Another, and less common, purpose of this layer is to completely encapsulate and hide ADO.NET from the rest of the application. Sure, ADO.NET is the best data access API so far, and it is nothing to be ashamed of, but I don't like the "so far." Microsoft has a history of switching data access APIs pretty often. When the next generation arrives, we'll only have to make changes to one layer to switch to the new API, provided the ADO.NET-related code has been carefully encapsulated there.

We don’t have to change to a new API as soon as it arrives, of course. I still have several applications running with clients that use DAO and RDO. Those applications aren’t changed very often at all, but needless to say they still work.

One of the most important things we can do to improve scalability is to decrease the time locks are held. Therefore I want the Persistence Access layer to start its work on the database as late as possible, yet finish as soon as possible. As you will see, I'm trying hard to gather all the information needed for the work of the Data Tier up front. That way, when the Persistence Access layer starts its work against the database, nothing gets in between.

Likewise, I try to reduce roundtrips between the Persistence Access layer and the database, which is actually one reason why I prefer to call stored procedures from the Persistence Access layer. As a result I’ve got a good opportunity to reduce a couple of roundtrips. I could even go as far as to bunch together several stored procedure calls into one single SQL batch (which I did in my book [7]), but what I also have now is an alternate implementation that makes one roundtrip for each stored procedure call. It’s not all that obvious in .NET that using the SQL batch solution is preferable in every situation.

Please note that there is nothing in the solutions I’m discussing today that requires the usage of stored procedures. You can like all the rest of it and use the architecture even if you prefer dynamic SQL. Some of the work that I put in the stored procedures will of course have to move to the Persistence Access layer, but that’s it.

When talking about this layer in other architectures, it’s not unusual to find it called Data Access layer, which is a relatively small difference of course. A much more important difference is that in my architecture, the Persistence Access layer knows about the Domain Model layer and not the other way round, which is what usually happens. This means that the domain model itself has no idea at all about persistence. We will discuss this more later.

Main Parts of Solution
The layer we are about to discuss consists mainly of a whole bunch of Data Mappers [5]. Nevertheless, I’m not calling it the Data Mapper layer for the reason that I also host the Unit of Work [5] in this layer. Usually the Unit of Work executes the stored procedure calls it has gathered information about from this very layer. So, as we will see, today’s discussion is mainly about Data Mappers and Unit of Work.

The purpose of the Data Mapper [5] pattern is to map between a Domain Model [5] and the database, and the purpose of the Unit of Work [5] pattern is to gather information about all changes that should be persisted against the database later on in a use case.

Today and Tomorrow
I think a warning is in order before we get started. For now, there is a lot of code that has to be written in this layer. You can of course simply create a generator, but there will still be a lot of code. Tomorrow, when I’ve thought it through and decided how I want it, I will think about automating. I think I will try to develop an automated (non-handwritten code) solution for 80% of the cases and go for manual code in the more advanced situations. I will probably also use some code generation. The Generation Gap [6] pattern comes in handy here, until we have Partial Types as of V2 of the .NET Framework.

The purpose of the Generation Gap [6] pattern is to make it possible to regenerate code, without affecting the custom written code.
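As a rough illustration (sketched here in Python for brevity, rather than the VB.NET used in this series; the class names and the column list are hypothetical), the generated code goes in a base class that the generator may overwrite at will, while handwritten customizations live in a subclass that survives regeneration:

```python
# Generation Gap sketch: the generator owns this class and may
# overwrite it freely on every regeneration.
class GeneratedCustomerMapper:
    """Regenerated from the schema; never edited by hand."""

    def table_name(self):
        return "Customer"

    def columns(self):
        return ["Id", "Name", "CustomerNo"]


# Handwritten class: survives regeneration because it only subclasses
# the generated code and overrides what it needs to customize.
class CustomerMapper(GeneratedCustomerMapper):
    """Custom code lives here, safely apart from the generated code."""

    def columns(self):
        # Extend the generated column list with a hand-added column.
        return super().columns() + ["CreditLimit"]
```

Partial Types in V2 of the .NET Framework solve the same problem without the extra inheritance level, which is why the pattern is a stopgap here.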

And as I always say, you can obviously opt for one of the commercial solutions instead. That will save you a lot of work.

OK, let’s dive into the code.

SaveOrder() in Service layer again

In part 4 [4], I showed a Service layer method for saving an order, which you can see again in Listing 1.

Public Sub SaveOrder(ByVal order As Order)

Unit of Work

As you saw in Listing 1, the Service layer method adds information to a Unit of Work. If you compare this to Fowler’s description of Unit of Work [5], my usage of it is pretty different. Instead of gathering the information about, say, a changed property directly when it happens, I don’t gather any information until I’m preparing to contact the database. Anyway, the purpose is more or less the same, i.e. to gather information about what should be done against the database in one logical unit of work.

The main reason why I didn’t follow Unit of Work according to the way Fowler [5] discusses it is that I couldn’t come up with a good solution for how to move the Unit of Work instance over a possible network between Consumer and Service layer. Of course, it’s easily done explicitly as a parameter, but I wanted a more automatic solution. That made me decide that the solution we are about to discuss is a suitable one.

Let’s investigate my implementation of the Unit of Work a little closer. You can find my variant (or rather variants) in Figure 2.

Figure 2 The Unit of Work classes.

I think the purpose of BeginTransaction(), AddSprocCall(), AddParameter() and EndTransaction() in Figure 2 is pretty obvious. Execute() is used for the second stage when it’s time to execute all the commands whose information has been gathered when no resultset is expected (or at least not wanted) in return. ExecuteReturnDataReader() is used when you expect and want to deal with resultsets.
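To make the two-stage idea concrete, here is a minimal sketch in Python (only the method names come from Figure 2; the internal representation is my assumption, and there is no real ADO.NET code behind it): the Unit of Work merely records instructions, and nothing touches the database until the execute stage.

```python
class UnitOfWork:
    """Gathers stored procedure call information; executes later in one go.
    Sketch only: commands are kept as plain tuples instead of IDbCommands."""

    BEGIN_TRAN = "BEGIN_TRAN"
    END_TRAN = "END_TRAN"

    def __init__(self):
        self._commands = []  # ordered list of gathered instructions

    def begin_transaction(self):
        # Only a marker: the transaction starts when execution reaches it.
        self._commands.append((self.BEGIN_TRAN, None))

    def add_sproc_call(self, name):
        self._commands.append(("SPROC", {"name": name, "params": []}))

    def add_parameter(self, name, value):
        # Attaches to the most recently added sproc call.
        self._commands[-1][1]["params"].append((name, value))

    def end_transaction(self):
        self._commands.append((self.END_TRAN, None))

    def execute(self):
        # A real implementation would run the gathered commands against
        # the database here; this sketch just returns them for inspection.
        if any(kind == self.BEGIN_TRAN for kind, _ in self._commands) \
                and self._commands[-1][0] != self.END_TRAN:
            self.end_transaction()  # end implicitly if the caller did not
        return list(self._commands)
```

The important property is that between the first gathering stage and Execute(), no connection is open and no lock is held.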

As you saw in Figure 2, there are currently two implementations of the MustInherit (abstract in C#) UnitOfWork class. The ScriptUnitOfWork is similar to what I wrote about in my book [7]. It creates a single SQL batch/script that is sent to the database in one go via one single ADO.NET IDbCommand. The CommandUnitOfWork will use one ADO.NET IDbCommand for each stored procedure call instead.
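The difference between the two implementations can be sketched as two strategies over the same gathered calls (a simplified Python illustration; the real classes work through ADO.NET IDbCommand objects, and the helper names here are made up):

```python
def as_single_batch(sproc_calls):
    """ScriptUnitOfWork style: one SQL batch, one roundtrip."""
    statements = []
    for name, params in sproc_calls:
        args = ", ".join(f"{p}={v!r}" for p, v in params)
        statements.append(f"EXEC {name} {args}".rstrip())
    # The whole script would be sent via one single command object.
    return "\n".join(statements)


def as_separate_commands(sproc_calls):
    """CommandUnitOfWork style: one command (roundtrip) per sproc call."""
    return [f"EXEC {name}" for name, params in sproc_calls]
```

Same input, two execution shapes: the first trades roundtrips for a bigger batch, the second keeps each call separate.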

You might be wondering when to use each implementation. I’ll get back to that in a later article. For now, the current architecture is almost completely untuned and performance characteristics haven’t been sorted out. There are also other reasons for choosing one or the other, but it’s pretty much a matter of performance.

I think it’s very important to say a few words about the transaction handling when the Unit of Work is used. First of all, I had no explicit call to EndTransaction() in Listing 1. That is okay, because the Unit of Work will implicitly decide to end transactions that it has started. The second important thing is that I don’t say CommitTransaction() or RollbackTransaction(), but EndTransaction() (explicitly or implicitly). The reason for this is that the Unit of Work itself will decide which way to go. I can think of situations when you want to be in control of the transaction outcome, but in my case this is usually taken care of in the stored procedures.

Also note that BeginTransaction() isn’t starting a transaction. It’s just indicating that a transaction should be started at a certain place in the sequence of commands.

I could even skip calling BeginTransaction() explicitly and let the Unit of Work do that whenever there is more than one stored procedure call to execute. And if there is just one stored procedure call, but it still needs an explicit transaction, that is the responsibility of the stored procedure itself. If I went this route, it would mean I had to add a mechanism for dealing with less common situations, for example when I don't want a transaction to start before the first stored procedure call but only later. I will stay with the less automatic solution for now. Remember, I think it's okay to place high requirements regarding transaction programming and such on the programmer of the Business and Data tiers.

There’s something more I have to comment on. Namely, the Unit of Work isn’t trying to be smart about the transaction code as regards using the correct transaction isolation level and avoiding risks for deadlocks. That is completely up to the developer. We’ll discuss this at length in another article. You’ll also find a lot about it in my book [7].

Another thing to think about concerning Unit of Work is where to hold on to the instance. As you saw in Listing 1, I held the instance at the method scope. You could also hold on to it at the instance level (of the Service Layer object) if you find that appropriate for the specific use case. Note, however, that if you use Just In Time Activation (JITA) in Enterprise Services or Single-call in Remoting, this won’t work without paying extra attention and overhead.

A database access helper

One very important consequence that comes for free is that your Unit of Work will become a helper for database access. All of the code that works with ADO.NET will be encapsulated here (there is an exception, but more about this in a minute).

I don't think you should underestimate the value of a helper for database access; I personally find them very beneficial. So even if you don't think the Unit of Work is anything for you, you should think again before you skip using some sort of helper that encapsulates ADO.NET. As I said, ADO.NET is great, but I'm pretty sure that it's not the final data access API from Microsoft. What's more, you also don't want to write similar code against ADO.NET over and over again; instead, generalize it as much as possible, or at least as much as is suitable. (That goes for everything, of course.)

OK, let’s get back to the scenario. We were looking at what the code for saving an order looked like in the Service layer. Now it’s time to investigate the code in the Persistence Access layer.

Save() in Persistence Access Layer

As I said, the Service layer method in Listing 1 is talking to a Unit of Work, but it’s also talking to at least one Data Mapper class. In Listing 1, the Data Mapper class is called POrder. Figure 3 shows an example of classes in the Persistence Access Layer.

Figure 3 Some of the classes in the Persistence Access Layer

As you saw in Figure 3, all the Data Mapper classes inherit from the Layer Supertype [5] class DataMapperBase. In this case, the Layer Supertype exposes a lot of protected functionality to the subclasses.

The purpose of the Layer Supertype [5] pattern is to gather similar classes under a superclass, for example, to generalize code.

What the Data Mapper class POrder does in this particular example is actually only add information to the Unit of Work. However, the information that is typically database related is encapsulated, so that only the Data Mapper knows about it and, for example, the Domain Model classes don’t have to know about it at all. There is an example of a method in a Data Mapper in Listing 2.

Public Shared Sub SaveOrder(ByVal order As Order, _
ByVal unitOfWork As UnitOfWork)
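What a mapper method like this typically does can be sketched as follows (again in Python; the stored procedure name, the parameter names and the minimal Unit of Work stand-in are illustrative assumptions, not the article's actual code). The point is that only the mapper knows the database-related details:

```python
class Order:
    """Tiny domain object stand-in."""
    def __init__(self, order_id, customer_id):
        self.order_id = order_id
        self.customer_id = customer_id


class FakeUnitOfWork:
    """Minimal stand-in that just records what it is told."""
    def __init__(self):
        self.calls = []

    def add_sproc_call(self, name):
        self.calls.append({"name": name, "params": {}})

    def add_parameter(self, name, value):
        self.calls[-1]["params"][name] = value


class POrder:
    """Data Mapper: the only place that knows sproc and parameter names."""

    @staticmethod
    def save_order(order, unit_of_work):
        # Map domain state to sproc parameters; no database work happens
        # here, only gathering of information for the Unit of Work.
        unit_of_work.add_sproc_call("Order_Save")
        unit_of_work.add_parameter("@id", order.order_id)
        unit_of_work.add_parameter("@customerId", order.customer_id)
```

Because the mapper only records instructions, the Domain Model classes never see anything database-related at all.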

FetchCustomers() in Service layer again
Once again, I’m repeating code from part 4. This time the code from the Service layer for fetching customers is found in Listing 4.

Public Function FetchCustomers _
(ByVal customerQuery As CustomerQuery) As IList

Finally, the instantiation code was factored out into another private method of its own. This is found in Listing 8.

Private Shared Function _InstantiateCustomers _
    (ByVal unitOfWork As UnitOfWork, _
    ByVal expanded As Boolean) As ArrayList

Need for the Delegate Solution?
Well, there might be situations when you need the old solution with the delegate, but for the most part I don't think you do. If you do need it, you have to add back the methods that I have dropped from the Unit of Work classes: AddResultHandler(), ExecuteReturnEntity() and ExecuteReturnList().

Query Object
I touched upon the Query Object [5] several times in parts 3 [3] and 4 [4], where I showed how to use it for setting criteria in Query Object instances.

The purpose of the Query Object [5] pattern is to encapsulate all information for a query that is needed to execute the query against the database.

As you saw in Listing 7, I then take out the criteria information from the Query Object, and use the criteria values directly as parameters to my stored procedures. Of course, this is just an example and a simple one at that, but you can use this solution as the basis for solving much more complex problems too.

As you might remember, in part 3 [3] I also used the CustomerQuery for consumer-side filtering, so you don't only have to use Query Objects when accessing the database, of course.

My current implementation of Query Object classes such as CustomerQuery inherits from the MustInherit class QueryBase. You’ll find the QueryBase class and an example of a subclass (CustomerQuery) in Figure 4.

Figure 4 The QueryBase class

As you saw in Listing 7, Criteria() is used for taking out criteria values from the Query Object. Also note that the AddParameter() methods of the Unit of Work classes use named parameters (for example @myParameter='TheValue') when calling the stored procedures. Because of this, and thanks to default values for the parameters in the stored procedure definitions, it's okay not to set all parameters explicitly in the call.

How are criteria set? By setting WriteOnly properties of the subclasses, such as setting CustomerNo in Figure 4 to the value 5, for instance. Finally, to let the Query Object support consumer-side filtering too, the subclasses must override the Match() method.

So, to implement a Query Object, what you have to write is the code for the criteria properties and the Match() method. An example of that code is found in Listing 9.

Public WriteOnly Property Name() As String
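The shape of such a Query Object can be sketched like this (in Python; setter methods stand in for the VB.NET WriteOnly properties, and the criteria names and the dictionary-based customer stand-in are assumptions for illustration):

```python
class CustomerQuery:
    """Query Object sketch: criteria go in through write-side setters,
    Criteria() hands them out as named sproc parameters, and Match()
    applies the same criteria for consumer-side filtering."""

    def __init__(self):
        self._criteria = {}

    # Corresponds to a WriteOnly property in the VB.NET listing.
    def set_name(self, value):
        self._criteria["@name"] = value

    def set_customer_no(self, value):
        self._criteria["@customerNo"] = value

    def criteria(self):
        # Unset criteria are simply absent; the stored procedure's
        # parameter defaults cover them.
        return dict(self._criteria)

    def match(self, customer):
        # Consumer-side filtering: same criteria, applied in memory.
        name = self._criteria.get("@name")
        no = self._criteria.get("@customerNo")
        return ((name is None or customer["name"] == name)
                and (no is None or customer["customer_no"] == no))
```

Note how leaving a criterion unset makes it a non-restriction both against the database (via parameter defaults) and in Match().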

Why Not Persistence Access?
As you will have seen by now, and as I said earlier on, the Domain Model [5] is unaware of persistence access. Instead, the Service Layer [5] uses Data Mappers [5] for instantiating the Domain Model or for moving the data from the Domain Model to the database. One early decision I took with this new architecture was just this, that is, to move the persistence access code out from the Domain Model itself. I still think it was a good idea, but the main drawback was that I had to add a way to reach some of the private data of the objects, as you saw in Listing 8.

As you see, I thought I was getting a couple of advantages as well. Firstly, with this solution I can instantiate a deep object model with a single roundtrip to the database. For example, I can instantiate a customer, the orders of that customer and the items for each order with just a single stored procedure call (which in turn executes three separate SELECT statements) and a single IDataReader, if I think that is a good idea in certain specific situations. (For most situations, Lazy Load [5] is used here.) There are solutions to achieve the same thing when having persistence access inside the domain objects, but you have to use pretty twisted solutions for level three and beyond, which I don't think is such a good idea.

The purpose of the Lazy Load pattern [5] is to delay loading objects in an object graph that you might not need at all or only need later on.

You might wonder how I achieve Lazy Load when the domain model isn't supposed to know about persistence. That's a very good point. The current solution I use is to let some of the Service Layer classes implement the IExpander [2] interface. Then the Domain Model instance will have an IExpander instance that helps with the lazy loading. This solution is not only used for expanding a collapsed object, but also for lazy loading of sub-collections, for example.
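A minimal sketch of that arrangement (Python instead of VB.NET; only the IExpander name comes from the article, the method and member names are assumptions): the domain object holds only an abstract expander, while a Service layer class implements it.

```python
from abc import ABC, abstractmethod


class IExpander(ABC):
    """What the Domain Model sees: no persistence details at all."""

    @abstractmethod
    def expand_orders(self, customer):
        ...


class Customer:
    """Domain object that lazily loads its orders via the expander."""

    def __init__(self, expander):
        self._expander = expander
        self._orders = None  # not loaded yet

    @property
    def orders(self):
        if self._orders is None:  # lazy: load on first access only
            self._orders = self._expander.expand_orders(self)
        return self._orders


class CustomerService(IExpander):
    """Service layer class implementing IExpander. Here it fakes the
    database call with an in-memory list; the real one would go
    through a Data Mapper."""

    def __init__(self):
        self.load_count = 0

    def expand_orders(self, customer):
        self.load_count += 1
        return ["order-1", "order-2"]
```

The domain model stays persistence-ignorant, yet sub-collections are only fetched when first touched, and only once.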

I also believe that portability and testability will increase (I will discuss testability much more next time), and in a way this solution follows the Single Responsibility Principle (SRP) [8].

Multiple Statements
So far, we have only discussed very simple cases where all the stored procedure calls were added to the Unit of Work from a single Data Mapper method. It's just as simple to call a couple more Data Mapper methods that add to the very same Unit of Work instance. No real difference there. The only thing to think about is that you might have to keep the Unit of Work instance at the instance level of the Service layer object, as I discussed earlier. And of course, you have to take care with resultsets so your handler is prepared for what is coming.


What is trickier is if you find that you need several Unit of Work instances, but have to work within one single transaction. First of all, this is something I think you should try to avoid if possible, but there will probably be situations when you need this option. For instance, you might first need to fetch a value from the database and, depending upon what the value was (and perhaps some other logic), call one or another stored procedure afterwards. If/when you get into this situation, you can solve it easily by adding an option to read/write the connection and the transaction objects of the Unit of Work. Don't forget to take care when implicitly ending transactions, though. You should probably skip implicit ending of transactions in this case.
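One way that read/write option could look is sketched below (Python; everything here is an assumption about a possible design, not existing code): a second Unit of Work borrows the first one's connection and transaction, and only the owner ends the transaction.

```python
class SharedTransactionUnitOfWork:
    """Sketch: a Unit of Work that can hand its connection and
    transaction over to another instance so both run in one database
    transaction. Plain objects stand in for the real ADO.NET ones."""

    def __init__(self, connection=None, transaction=None):
        self.connection = connection or object()
        self.transaction = transaction  # None until begun
        # Only the instance that starts the transaction owns it.
        self.owns_transaction = transaction is None

    def begin(self):
        if self.transaction is None:
            self.transaction = object()  # stand-in for IDbTransaction
        return self.transaction

    def end(self):
        # A borrower must not end the shared transaction, which is why
        # implicit ending should be skipped in this scenario.
        return self.owns_transaction
```

The ownership flag is the key detail: it encodes exactly the "skip implicit ending" rule from the text.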

Enterprise Services transactions are an alternative, but unfortunately declarative transactions currently also mean distributed transactions. In my opinion distributed transactions should be used when distributed transactions are needed, and only then.

If you need to use several Unit of Work instances within the same Service Layer method, for example, but not within a single transaction, this is also simply solved. Just do it, so to speak. No extra complexity is added, and as long as a transaction doesn't have to span several Unit of Work instances, you don't have to be nearly as careful. Just make sure it is actually correct not to execute both of the Unit of Work instances within one single transaction. Remember, the number one goal of transactions is correctness, and nothing else.

Is the Persistence Access only for Databases?
No, not at all. Some data can be found in files, of course, and some via Web Services, for example. The Persistence Access layer can and should hide most of the details, but we have to think more about how to solve them. Currently, I think we should add more Unit of Work implementations as necessary.

Do you remember when I talked about letting one database transaction span several Unit of Work instances? Perhaps instead of chaining several Unit of Work instances, we should use the Command [9] pattern here, possibly combined with the Strategy [9] pattern. I haven't worked through this yet, but right now it feels like it will fit nicely into the picture.

The purpose of the Command [9] pattern is to make it possible to bunch together several commands and execute them one after the other. It's pretty common that each command knows how to undo itself. The Strategy [9] pattern is used when you want to easily switch between different algorithms for a certain task.
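The undo capability is what makes the pattern useful for compensation. A small sketch (Python; the file-writing command and the in-memory "filesystem" dictionary are my illustrative assumptions): each command can execute and undo itself, and a runner compensates by undoing the completed commands when a later one fails.

```python
class WriteFileCommand:
    """Command sketch: knows how to execute and how to undo itself.
    A dict stands in for the filesystem to keep the sketch runnable."""

    def __init__(self, fs, name, content):
        self.fs, self.name, self.content = fs, name, content

    def execute(self):
        self.fs[self.name] = self.content

    def undo(self):
        self.fs.pop(self.name, None)  # compensate: remove what we wrote


def run_all_or_compensate(commands):
    """Execute commands in order; on failure, undo the ones already
    done, in reverse order. This simulates a transaction over
    resources (files, Web Services) that offer none."""
    done = []
    try:
        for cmd in commands:
            cmd.execute()
            done.append(cmd)
        return True
    except Exception:
        for cmd in reversed(done):
            cmd.undo()
        return False
```

Unlike a real database transaction this gives no isolation, only best-effort atomicity, which is exactly the compromise compensation makes.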

It's worth mentioning that the Command pattern also comes in handy for creating compensating solutions that simulate transactions for files and Web Services. We can use this if we don't want to wait for Web Services transaction standards, and implementations of those standards, at least.

The Compensating Resource Manager (CRM) of Enterprise Services is another solution to this problem. It's very appropriate here, so I will consider that one too when I investigate this further.

My Blog

I will use my blog [10] for discussing this article series, among other things. Make sure you direct your RSS reader to the blog, if you want to follow the development of the architecture a bit closer than by just reading the articles.

References

[1] A pure object-oriented domain model by a db-guy. Part 1: Introduction
http://www.vb2themax.com/HtmlDoc.asp?Table=Articles&ID=550

[2] A pure object-oriented domain model by a db-guy. Part 2: One base class
http://www.vb2themax.com/HtmlDoc.asp?Table=Articles&ID=670

[3] A pure object-oriented domain model by a db-guy. Part 3: The consumer perspective
http://www.vb2themax.com/HtmlDoc.asp?Table=Articles&ID=690

[4] A pure object-oriented domain model by a db-guy. Part 4: The Application layer perspective
http://www.vb2themax.com/HtmlDoc.asp?Table=Articles&ID=700

[5] Martin Fowler; Patterns of Enterprise Application Architecture; Addison-Wesley 2002; ISBN 0-321-12742-0

[6] John Vlissides; Pattern Hatching; Addison-Wesley 1998; ISBN 0-201-43293-5

[7] Jimmy Nilsson; .NET Enterprise Design with Visual Basic .NET and SQL Server 2000; Sams 2001; ISBN 0-672-32233-1

[8] Robert C. Martin; Agile Software Development. Principles, Patterns and Practices; Prentice Hall 2002; ISBN 0-13-597444-5

[9] Erich Gamma et al; Design Patterns; Addison-Wesley 1994; ISBN 0-201-63361-2

[10] Jimmy Nilsson's weblog; http://www.jnsk.se/weblog/

[11] Mock Objects; http://www.c2.com/cgi/wiki?MockObject; http://www.mockobjects.com/
