A Pure Object-oriented Domain Model by a DB Guy, Part 4

I’d like to start today by getting rid of a myth. Just because we have decided to work with custom classes doesn’t mean that we also have to decide to use a stateful model on the application server. We can work in a similar way as with DataSets, so that everything is torn down at the application server after a request has been fulfilled. This is commonly known as a stateless model. The only real difference is that when using custom classes the data structure is typically smarter and more encapsulated than when using DataSets, since custom classes have custom behavior as well as the data, and won’t allow completely open access to the data.

You can inherit from a DataSet and add your custom behavior to the subclass, but the encapsulation is still weak. A better solution then is to wrap a DataSet in a custom class of your own. You can find more about that here [1].
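To make the wrapping idea concrete, here is a minimal sketch (in Java rather than VB.NET, and with all names invented) of a domain class that wraps a weakly typed row container instead of exposing it:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for a weakly typed row container (a DataRow, say).
class Row {
    private final Map<String, Object> values = new HashMap<>();
    Object get(String column) { return values.get(column); }
    void set(String column, Object value) { values.put(column, value); }
}

// A domain class that wraps the row instead of inheriting from it or
// exposing it. Consumers can only reach the data through methods that
// enforce the business rules.
class Customer {
    private final Row row;
    Customer(Row row) { this.row = row; }

    String getName() { return (String) row.get("Name"); }

    void rename(String newName) {
        // Custom behavior lives here; open access to the row is gone.
        if (newName == null || newName.isBlank()) {
            throw new IllegalArgumentException("A customer must have a name.");
        }
        row.set("Name", newName);
    }
}
```

The consumer can no longer reach the raw row; every change has to pass the rule in rename(), which is the encapsulation the inheritance approach fails to give you.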

The next question then might be: But isn’t this wasteful? Creating a lot of objects only to return them to the client and then tear them down? Well, sure, it is. But it’s no more wasteful than creating a lot of DataRows in a DataSet, returning the DataSet to the client and tearing down the DataSet at the application server. That is just as wasteful.

OK, what’s the alternative then? We can be stateful on the server, letting the objects stay alive between requests. We can even share objects between users, but this can be done both when DataSets are used and when custom classes are used for the domain data. The downside this time is that it very quickly gets much more complicated! To give a simple example, think about what happens when you find out that you need to add another application server to handle massive load. You then have a difficult problem to solve: keeping the shared, stateful objects up to date on two separate application servers, then on three, and so on. This is a hard problem, at least if you want a good solution. Another really tricky problem in real-world applications is the thread synchronization needed for shared objects. There are simple solutions to that problem, but they pay a high price as far as efficiency is concerned.

So, we can write stateless applications with server-side objects per request just as many of us are used to doing, even if we decide to go for a domain model based on custom classes. We don’t have to, but it’s a simple and highly efficient solution. That’s the path I’m taking in this series of articles.

OK, things aren’t as black and white as I’m making them sound here. For instance, I like read-only data to be both stateful and shared between requests. Since it doesn’t change, we don’t have the usual risk of caches getting out of sync.

Also, I often keep instances in the Application layer alive during the execution of a complete use case. That is, I often call several methods on an Application layer instance without re-instantiating it between calls. But the objects that are returned from the methods are most often not kept stateful at the application server and the objects are almost never shared between users or even requests.

My Favorite Style
One important part of my favorite default design is the layer I call the Application layer, which you can see in context of a complete package diagram shown in Figure 1.

Figure 1 The layer model

The idea of the Application layer is that it is the one and only gateway to the Business tier (and therefore also to the Data tier) from the consumers. I find this layer a great help in writing stateless server-side applications. It’s not essential, but is helpful. Other advantages are:

+        You will typically get a chunky interface instead of a chatty interface.
This is especially important if there is an AppDomain (often described as a logical process) boundary between the Consumer tier and the Business tier.

+        A placeholder for service-related functionality.
This layer is very much about service-orientation and this is where I think you should deal with such things as starting and ending database transactions (logically not physically), checking privileges, high level auditing, and so on.

+        Design-friendly.
In my opinion, analyzing and designing systems by starting with the use cases is a very efficient methodology. For example, the clients understand it and participate efficiently right away. In addition, if you have the use cases themselves represented in the code, then you have very good traceability from requirements to solution. I believe that helps maintainability a lot.

+        Encapsulation.
What happens behind the Application layer is completely encapsulated for the consumer tier, which means you can change it when you need to without having to change the consumer code. You have the chance of exposing a minimal and controlled interface to the consumers.+        A general interface.
You have a pretty good chance of being able to reuse the same Application layer from different types of consumer tier applications. You might have to add a thin wrapper in some cases, but it usually won’t be any worse than that. +        A placeholder for Transaction Scripts [2].
This is the natural place to have your Transaction Scripts, if you use them. For example, this is how to coordinate actions on several domain objects, interactions with the database, and so on. To a certain degree, I think Transaction Scripts are nice, just watch out so you don’t overuse them. As always the same rule applies: namely, everything in moderation.

The Transaction Script pattern means that a couple of statements are bunched together in a method and executed one after the other.
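As a rough illustration of the pattern (a Java sketch with invented names, not the article’s actual code), a Transaction Script for the order registration use case simply bunches its steps together in one method and runs them in sequence:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal Transaction Script sketch. The "database work" is recorded in
// an audit list so the sequence of steps is visible; in a real script the
// steps would be transaction control and SQL calls.
class OrderRegistrationScript {
    private final List<String> auditLog = new ArrayList<>();

    void registerOrder(String customerId, String productId, int quantity) {
        // Step 1: a coarse business rule, checked up front.
        if (quantity <= 0) {
            throw new IllegalArgumentException("Quantity must be positive.");
        }
        // Steps 2-4: begin transaction, write the order, commit - executed
        // one after the other, which is the essence of the pattern.
        auditLog.add("BEGIN TRAN");
        auditLog.add("INSERT Order(" + customerId + ", "
                + productId + ", " + quantity + ")");
        auditLog.add("COMMIT");
    }

    List<String> audit() { return auditLog; }
}
```

The appeal is the directness: the whole use case is readable top to bottom in one place. The risk, as noted above, is overusing it until behavior that belongs in the domain classes leaks into the scripts.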

I know you won’t believe me if I don’t give any disadvantages, so here are some:

–         Over design of simple applications.
For some simple applications, having this layer might feel like over design.

–         Risk of a less rich user interface experience.
You will move some behavior pretty far away from the consumer. This means the consumer might not get instant feedback from actions, but get feedback only after several actions have been made and submitted. On the other hand, since by default I let the domain model objects travel from application server to consumer, this risk is reduced quite a lot.

It is not the objects themselves but the state of the objects that is traveling. The illusion of traveling objects is created for us through .NET serialization.
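The point can be demonstrated with plain serialization. This Java sketch (standing in for .NET serialization, with invented names) shows that the receiver gets a new object carrying the same state, never the same instance:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A hypothetical domain object whose state can travel.
class Product implements Serializable {
    String name;
    Product(String name) { this.name = name; }
}

class Wire {
    // Round-trip an object through a byte stream, as a remoting boundary
    // would: serialize it, then deserialize a brand-new instance.
    static Product roundTrip(Product p) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(p);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Product) in.readObject();
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

After the round trip, the returned object compares equal in state but is a different instance, which is exactly the illusion of traveling objects described above.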

Is it necessary to have an Application layer with web applications, or is it only beneficial when you have an AppDomain boundary between the Consumer tier and the Business tier (which you generally don’t have in the case of web apps)? Well, first of all the word generally is a clue here. You might have an AppDomain boundary in the future, and even if you don’t, the other advantages apply.

Which is the best architecture?

Of course, I’m not even trying to persuade you to believe that the architecture I’m discussing today is the best whatsoever. There is of course no such architecture around today nor will there ever be. It’s like asking what the best meal/house/car etc is.

What I’m discussing here is what I personally think is the best default architecture for me and the kind of systems I usually develop. Hmmm. What’s the best default meal/house/car for me and the typical situations I’m in? I’m still oversimplifying, I guess.

Today’s work
So, let’s move on to today’s work. Do you remember the use case from part 3? Oops, I almost forgot. I should first direct any new readers to previous parts (part 1 [3], part 2 [4], part 3 [5]) if you haven’t read them already. With that out of the way, let’s get going.

Once more then, do you remember the use case from part 3? The example use case was about registering an order, but first locating a customer and the desired product. In part 3, we investigated the use case from a consumer perspective, and therefore I showed the code for how the consumer interacted with the domain model. There was also some, but very little, code for calling the Application layer to fetch and update a domain model subset.

Today, we will move one step to the right in figure 1 and focus on the Application layer, and before we end today look at the Consumer Helper layer too. The Consumer Helper layer is actually very important when it comes to Windows Forms applications, but we’ll start today with Web Forms applications. It’s important to note that, if possible, I want the Application layer to be the same for both (and other) types of applications. We’ll come back to this when it’s time for a context switch.

The Application Layer Class

In the previous article I created the OrderRegistration class shown in Figure 2 to support the example use case.

Figure 2 The OrderRegistration class, old version

All the services the consumer needs for the use case are provided in a single Application layer class.

Quite a lot of the required functionality is also provided in the domain model classes that are fetched from the Application layer class. That is, not all the functionality for the use case is found in the Application layer – as much as possible (while still adequate) should be in a rich domain model instead. Of course, the domain model classes are also often used a lot by the Application layer classes, especially for advanced operations.

Changes, Again

As you know, I’m creating the new architecture while I write these articles. Because of this I discover changes that I want to apply as I write each new part in this series. That is the case this time too.

I was stupid yesterday. I’m smart today!

Does this sound familiar? I mean, when you look back today at the code you wrote the other day, do you wonder how you could have been so stupid then? Luckily you feel very smart today.

Well, it could be worse. Think about it. You wouldn’t like it the other way round, looking at your old code and thinking that you were so smart before, but not now.

I guess this whole thing is a sign of a successful learning process. At least that is how I like to think of it. Or perhaps this is just a defence mechanism.

Well, the only change I’ve made this time is to skip the AndRefresh() style in the API. The idea with SaveAndRefreshOrder(), for example, was that it took an order as a parameter and sent back the same order object after it had been saved to the database and had had its values refreshed. This has its merits, especially as regards efficiency, because it cuts out one roundtrip if the consumer wants the updated object back for further processing. Anyway, after having thought some more about it, I have now decided to go back to my old way of doing this. That is, to have one method that does the save and another method that does the fetch afterwards. The advantages of this are:

+        Often little need for refresh in typical web applications.
When it comes to web applications, it’s often useless to get a refreshed object back. This is because you will have a new request to render the object, and by then the refreshed object is gone if you haven’t stored it in the Session object, say. We are simplifying by deciding on a new fetch instead.

+        No real need for refresh if the only reason is to grab primary key value after insert.
Often, the reason for the refresh is to get the newly created primary key value back for new rows, when identities (database-based autonumbering of some kind) are used. Since I typically use GUIDs, created at the client for new rows, this is not a problem. If you want to use integer primary keys, you can create a custom number generator that grabs blocks of values from the database and thereby avoid the risk of bottleneck that is so common for custom primary key generators. We also avoid other problems, because of delayed key generation, if we use GUIDs or some other custom solution, but of course the architecture will support database-based autonumbering too. For now, this is another story. Hopefully I will start to address it in Part 6.

+        More obvious API regarding what is really happening when Remoting is involved.
Remoting will not send the same object back, but rather a new one. We’ll return to this later, but for now I think we can agree that it can cause a problem or two for us. One at least, and a rather big one at that.

+        No ByRef in the interface.
Well, skipping ByRef isn’t a real benefit as long as ByRef isn’t causing more data than is necessary to go between AppDomains. It’s more of a feeling that ByRef is often a sign of weak design. Is this asking for a barrage of emails now? Hey, I do use ByRef and I use it quite often!

+        Pushing an asynch-friendly interface.
If possible, I think it’s a good idea to prepare for, or even start directly using an interface that is asynch-friendly. If you use fire and forget-methods like Save(), then you can let them add a request to a queue, and the request will be executed when appropriate, but not in real time. The user will see the benefit of this directly because he doesn’t have to wait for the request to execute, but of course, take care so you don’t add this fire and forget-style without investigating what implications you get in your specific application.

+        Only primitive operations in the API.
The methods of the Application layer class are simpler now. Each of the methods does one thing, and does it well. By trying to get simpler methods in the interface, you will usually find that you get a more flexible and reusable API. If you think that the old solution was simpler for the user interface developer, you can always let the Consumer Helper layer simplify and hide the new API and show a more granular interface.
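The block-grabbing key generator mentioned earlier in the list could be sketched like this (in Java, with the database roundtrip simulated by an in-memory counter; all names are invented):

```java
// Grab a block of integer keys from the database in one roundtrip, then
// hand them out from memory. This avoids the per-row roundtrip that makes
// naive custom key generators a bottleneck.
class BlockKeyGenerator {
    private final int blockSize;
    private int next;            // next key to hand out
    private int blockEnd;        // first key beyond the current block
    private int dbHighWater = 0; // simulated database-side counter

    BlockKeyGenerator(int blockSize) {
        this.blockSize = blockSize;
        this.next = 0;
        this.blockEnd = 0;       // forces a "roundtrip" on first use
    }

    private int nextBlockStart() {
        // In reality this would be one atomic database update, something
        // like: UPDATE KeyTable SET HighWater = HighWater + @blockSize
        int start = dbHighWater;
        dbHighWater += blockSize;
        return start;
    }

    synchronized int nextKey() {
        if (next == blockEnd) {  // block exhausted: one new roundtrip
            next = nextBlockStart();
            blockEnd = next + blockSize;
        }
        return next++;
    }
}
```

The trade-off is that keys from an unused block are lost when the application server restarts, so the generated sequence has gaps – usually acceptable for surrogate keys.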

In a way, only trying to have primitive methods in the interface, as I talked about above, is like following the Single Responsibility Principle (SRP) [6].
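The fire-and-forget Save() idea from the list above might look something like this sketch (Java, with an in-process queue standing in for a real request queue, and a stand-in for the database write; names are invented):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Fire-and-forget Save(): the call only enqueues the request; a worker
// executes it "when appropriate", not in real time.
class AsyncSaver {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> saved = new ArrayList<>();

    // Returns immediately; nothing has been persisted yet.
    void save(String order) {
        queue.add(order);
    }

    // Called later, e.g. by a background worker thread.
    void drain() {
        String order;
        while ((order = queue.poll()) != null) {
            saved.add(order); // stand-in for the real database write
        }
    }

    List<String> savedOrders() { return saved; }
}
```

Note how the sketch makes the caveat in the list concrete: between save() and drain() the consumer believes the order is saved while the database does not yet agree, so this style needs careful thought about failure handling before you adopt it.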

So, to conclude all this, I think I am moving to a simpler and more general API in using the change just discussed. The new OrderRegistration class now looks like the one in Figure 3.

Figure 3 The OrderRegistration class, new version

You might wonder why I think a larger API (more methods) is simpler. Well, the methods in themselves are simpler than before, with only one task each. Also, the old API probably would have needed FetchCustomer() and FetchOrder() methods too, so it would have become the larger one sooner or later.
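As a sketch of the direction just described (in Java, with an in-memory store, and with signatures that are my guess at the shape rather than the actual Figure 3), the primitive Save/Fetch pair replaces the combined SaveAndRefresh method:

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical, minimal domain object.
class Order {
    final String id;
    String status;
    Order(String id, String status) { this.id = id; this.status = status; }
}

// The Application layer class with only primitive operations: each method
// does one thing. A map stands in for the database here.
class OrderRegistration {
    private final Map<String, Order> db = new HashMap<>();

    void saveOrder(Order order) {
        // Store a copy, as a database would; the caller's instance does
        // not come back refreshed.
        db.put(order.id, new Order(order.id, order.status));
    }

    Order fetchOrder(String id) {
        return db.get(id);
    }
}
```

A consumer that really wants the refreshed object simply makes the second call itself: saveOrder() followed by fetchOrder(). That costs an extra roundtrip, which is the efficiency trade-off discussed above.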

The Application Layer Class
To be honest, the use case I started discussing in the last article is too simple (but that is good for the moment) which is apparent when we come to the Application layer class. All it has to do is interact with the Persistence Access layer. Anyway, in my opinion talking to the Persistence Access layer is extremely interesting in itself so let’s have a look at that now. In Listing 1 you’ll find the FetchCustomers() method.

Public Function FetchCustomers _
(ByVal customerQuery As CustomerQuery) As IList
    '... (body elided in the original listing)
End Function

Types of Parameters
One thing that I have avoided touching on so far is what kind of parameters to use when fetching single objects. The main reason for avoiding this was the ..AndRefresh() style of interface used last time. Now, when I have changed to, say, SaveCustomer()/FetchCustomer() instead, I have to decide on the parameter type for FetchCustomer() and FetchOrder() and so on.

My first idea was to use ordinary objects here too, such as Customer objects with present key values. Note, for example, that FetchCustomers() uses a query object as its parameter and the query object has all the important information (criteria) needed for the query to be executed. By sending a customer object as parameter to FetchCustomer(), the same pattern is used again and the customer object will have all the criteria information (the primary key value) that is needed.

I first thought I could argue for sending ordinary objects (such as Customer instances) by pointing to the Introduce Parameter Object refactoring [9], but I changed my mind about that. That refactoring is more about creating classes of related values that are used as parameters. The Customer class is not created just because of the parameter requirement. It’s already there and is designed to do other things than just be used as a parameter containing the key value.
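To show what the query-object pattern used by FetchCustomers() looks like, here is a minimal sketch (Java, with one invented criterion and an in-memory store standing in for the database):

```java
import java.util.ArrayList;
import java.util.List;

// The query object carries all the criteria needed to execute the query.
// The single criterion here is an illustrative assumption.
class CustomerQuery {
    final String nameStartsWith;

    CustomerQuery(String nameStartsWith) {
        this.nameStartsWith = nameStartsWith;
    }
}

// The fetch method only interprets the query object; consumers never see
// how the criteria are turned into data access.
class CustomerRepository {
    private final List<String> customers = List.of("Volvo", "Saab", "Scania");

    List<String> fetchCustomers(CustomerQuery query) {
        List<String> result = new ArrayList<>();
        for (String c : customers) {
            if (c.startsWith(query.nameStartsWith)) {
                result.add(c);
            }
        }
        return result;
    }
}
```

Because the criteria travel as one small object, only the query state crosses an AppDomain boundary, not a full-fledged domain object.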

So, what’s the problem with the object approach then? Well, it feels a bit wasteful to send a full-fledged object when only one single property is actually interesting. This feeling is especially strong if there is an AppDomain boundary to be passed, so that the whole object will be serialized and deserialized.

I could solve this by providing yet another form of unexpanded state of the object, having only the key value, but this adds to the complexity and it means a lot of work for a minor benefit. Another thing is that remoting creates a problem here because the object you get back is not the same as the one you sent.

I’ve mentioned the problem with remoting a couple of times and I’m still not done with it. As you’ve probably guessed already, this bothers me.

So, what is it that I’m really after? Well, the most important thing is that I don’t want to spread information about the key structure everywhere, especially not in the consumer tier. On the other hand, if all the keys are GUIDs, then it’s not that much of a problem to let the consumer know about the datatype. It’s not very likely that you will outgrow the GUID datatype, for example, and have to change it as a result. Even so, it’s nice to hide the key datatype from the consumer in case you do have to change it, especially if you start with a smaller datatype. Remember, one of the main reasons for the new architecture was to provide a higher degree of encapsulation of the database so that it can evolve more easily without affecting the rest of the application.

Locating the correct Persistence Class

As a matter of fact, locating the correct persistence class for filling domain objects or saving domain objects is just part of the problem of mapping the relational database to your domain model. You might find this trivial at first, but it’s not. For example, add inheritance to the picture and the problem quickly gets complicated.

Since I haven’t yet decided how I want to handle the whole Data Mapper [2] mechanism, I write the mapping code manually and explicitly for now.

The Data Mapper pattern discusses (typically) how an object-oriented domain model interacts with a relational database.

When I have decided how I want the Data Mapper code to work, I have several different mechanisms from which to choose. First I have to decide where to hold the metadata that I need for mapping between the domain model and a relational database:

  • Attributes
  • File
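As a sketch of the attributes option (using a Java annotation as the stand-in for a .NET custom attribute; the annotation and all names are invented), the mapping metadata can sit right on the domain class and be read by the mapper via reflection:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

// Hypothetical mapping metadata attached directly to the domain class.
@Retention(RetentionPolicy.RUNTIME)
@interface MapsToColumn {
    String value();
}

class MappedCustomer {
    @MapsToColumn("customer_name")
    String name;
}

class Mapper {
    // Derive the column name for a field from its metadata; fall back to
    // the field name when no mapping is declared.
    static String columnFor(Class<?> type, String fieldName) {
        try {
            Field f = type.getDeclaredField(fieldName);
            MapsToColumn m = f.getAnnotation(MapsToColumn.class);
            return m != null ? m.value() : fieldName;
        } catch (NoSuchFieldException e) {
            return fieldName;
        }
    }
}
```

The file option keeps the same metadata in an external document instead; the trade-off is that attributes keep class and mapping together, while a file lets the mapping change without recompiling the domain model.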

Interface or Not
In part 3 I accessed the Application layer class directly, without custom interfaces (Bridge pattern [10] or Separated Interface pattern [2]).

The way I see it, the Bridge and Separated Interface patterns are more or less what you get when you declare a variable as an interface type instead of as a class.

Working directly with the Application layer classes is easiest, and I also think there is otherwise a risk of finding that I have to create one interface per Application layer class. Been there, done that.

On the other hand, if you are going to talk with the Application layer over a remoting boundary, then the best practice is to use custom interfaces only [11]. One good thing about this is that you only have to distribute an assembly of interfaces to the consumer and not the assembly with the Application layer code itself. Plus, the added complexity is completely encapsulated by the Consumer Helper layer.

My friend Enrico Sabbadin [13][14] had this to say about only using custom interfaces when talking over a remoting boundary:

Watch out, this applies to SAOs only!

As you know, remote objects can be registered in two ways: as a Client Activated Object (CAO) or as a Server Activated Object (SAO). SAOs can be registered as singletons or as single-call objects.

If your object resides in assembly A and implements an interface defined in assembly B, AND the object is registered as an SAO, you can get a reference to it using the Activator.GetObject function with only the interface assembly (assembly B) deployed at the client side.

This does not apply to CAOs; GetObject can’t be used for a CAO. And don’t be fooled by the Activator.CreateInstance overloads that take an assembly name, a class name, and an array of activation attributes in which you put the remote URL. This does let your client code be unaware of the class type, BUT the class type has to be available to the runtime on the client side.

For instance the code below (client side) will fail at the Activator.CreateInstance line unless the assembly where CaObject.mycao resides is available to the client process.

Dim url As New Activation.UrlAttribute("tcp://arrakis3k:8082/Myapp")

Dim ff(0) As Object
ff(0) = url

Dim s As ObjectHandle = _
    Activator.CreateInstance("caobject", "CaObject.mycao", ff)

Dim ggg As Imyint = CType(s.Unwrap(), Imyint)

If Not RemotingServices.IsTransparentProxy(ggg) Then
    'No proxy was returned, so the type was activated locally.
    '... (rest of listing elided)
End If

Process Helpers
Because we have now added a possible remoting boundary to the picture, I’d like to end this article by discussing where to put code for business rules. This is a problem that crops up all the time, but let’s assume that the main rule is to put it in the domain classes. One specific problem I have struggled with is how to deal with code that I think should be in the domain model, but that I don’t want to execute on the client.

One reason might be that I know that a lot more data must be read from the database in order for the code to be executed and I don’t want to send the data over a slowish wire. Another reason might be that I want to execute the code in a secure and controlled environment, or perhaps where certain resources are available. A good example of this is when I want to run the business rules in a database transaction. (Database transactions are logically dealt with by the Application layer.) Yet another reason is that you don’t want to expose the algorithms at all by letting the client have the code. (If you let your domain objects travel, the client must have the code for the domain classes.)

A simple solution is, of course, to put the code in the Application layer classes, because those classes will never travel to the client. However, there is a risk that you will duplicate code over time since the same domain classes will typically be used from many different Application layer classes. The code really belongs to the domain classes, so to speak.

I still believe that some rules should be dealt with in the Data tier, in stored procedures, for example. It depends on the nature of the rules. See my book [7] for more information about putting rules in the Data tier.

I thought about subclassing the domain model classes and having the server-side logic in the subclasses only. I could then instantiate the subclass (CustomerEx, for example) at the server, upcast to the superclass (Customer, say), and send the superclass to the client. When the client sends the object back, it can be downcast to CustomerEx and the server-side code is available. I haven’t actually tried this, and it feels messy, complex and not very obvious. What’s more, the client must have the code for CustomerEx anyway, and can then do the downcast itself.
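A sketch of that subclassing idea (in Java, with an invented server-side rule; the class names follow the article’s example):

```java
// The consumer-visible domain class.
class Customer {
    final String name;
    Customer(String name) { this.name = name; }
}

// The subclass holding logic that is only meant to run at the server.
class CustomerEx extends Customer {
    CustomerEx(String name) { super(name); }

    // Hypothetical server-side rule.
    boolean isAllowedToOrder() {
        return name != null && !name.isEmpty();
    }
}

class Server {
    // The server instantiates the subclass but exposes the superclass
    // (implicit upcast) to the consumer.
    static Customer fetchCustomer(String name) {
        return new CustomerEx(name);
    }

    // When the object comes back, downcast to reach the server-side logic.
    static boolean checkOrderRule(Customer c) {
        return ((CustomerEx) c).isAllowedToOrder();
    }
}
```

The sketch also makes the weakness visible: the instance is a CustomerEx all along, so a client that has the CustomerEx code can perform the same downcast itself.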

Assume you can live with the client having the code; you just don’t want him to call certain methods of the domain model. One possible solution is to use Code Access Security to check where the call is coming from. You could also add a secret token as a parameter to the methods you don’t want to run at the client. You then keep that token unknown to the client, but well known to the Application layer. A good example of security by obscurity, right? It may feel quite a lot like a hack, but it works for some situations.
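A sketch of the token idea (in Java; the token value and all names are of course invented):

```java
// A hypothetical domain object with a method that is only meant to be
// called from the server side.
class SecuredOrder {
    private boolean approved = false;

    // The token guards the method: without the right value, the call fails.
    void approve(String token) {
        if (!"s3rver-only".equals(token)) {
            throw new IllegalStateException("Not callable from the consumer.");
        }
        approved = true;
    }

    boolean isApproved() { return approved; }
}

class ApplicationLayer {
    // The Application layer is the only place that knows the token.
    private static final String TOKEN = "s3rver-only";

    static void approveOrder(SecuredOrder o) {
        o.approve(TOKEN);
    }
}
```

As the article admits, this is security by obscurity: anyone who disassembles the Application layer can read the token, so treat it as a deterrent, not a real barrier.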

