I’d like to start today by getting rid of a myth. Just because we have decided to work with custom classes doesn’t mean that we also have to use a stateful model on the application server. We can work in a similar way as with DataSets, so that everything is torn down at the application server after a request has been fulfilled. This is commonly known as a stateless model. The only real difference is that with custom classes the data structure is typically smarter and more encapsulated than with DataSets, since custom classes carry custom behavior as well as the data, and won’t allow completely open access to the data.
The next question then might be: But isn’t this wasteful? Creating a lot of objects only to return them to the client and then tear them down? Well, sure, it is. But it’s no more wasteful than creating a lot of DataRows in a DataSet, returning the DataSet to the client and tearing down the DataSet at the application server. That is just as wasteful.
OK, what’s the alternative then? We can be stateful on the server, letting the objects stay alive between requests. We can even share objects between users, but this can be done both when DataSets are used and when custom classes are used for the domain data. The downside this time is that it very quickly gets much more complicated. To give a simple example, think about what happens when you find out that you need to add another application server to handle massive load. You then have a difficult problem to solve: keeping the shared, stateful objects up to date on two separate application servers, then on three, and so on. This is a hard problem, at least if you want a good solution. Another really tricky problem in real-world applications is the thread synchronization needed for shared objects. There are simple solutions to that problem too, but they pay a high price as far as efficiency is concerned.
So, we can write stateless applications with server-side objects per request just as many of us are used to doing, even if we decide to go for a domain model based on custom classes. We don’t have to, but it’s a simple and highly efficient solution. That’s the path I’m taking in this series of articles.
My Favorite Style
One important part of my favorite default design is the layer I call the Application layer, which you can see in the context of a complete package diagram in Figure 1.
Figure 1 The layer model
The idea of the Application layer is that it is the one and only gateway to the Business tier (and therefore also to the Data tier) for the consumers. I find this layer a great help in writing stateless server-side applications. It’s not essential, but it is helpful. Other advantages are:
+ You will typically get a chunky interface instead of a chatty interface.
This is especially important if there is an AppDomain (often called a logical process) boundary between the Consumer tier and the Business tier.
+ A placeholder for service-related functionality.
This layer is very much about service-orientation and this is where I think you should deal with such things as starting and ending database transactions (logically not physically), checking privileges, high level auditing, and so on.
+ A focus on use cases.
In my opinion, analyzing and designing systems by starting with the use cases is a very efficient methodology. For example, the clients understand it and participate efficiently right away. In addition, if you have the very use cases in the code, you have very good traceability from requirements to solution. I believe that helps maintainability a lot.
+ Encapsulation of everything behind the layer.
What happens behind the Application layer is completely encapsulated from the Consumer tier, which means you can change it when you need to without having to change the consumer code. You have the chance of exposing a minimal and controlled interface to the consumers.
+ A general interface.
You have a pretty good chance of being able to reuse the same Application layer from different types of consumer tier applications. You might have to add a thin wrapper in some cases, but it usually won’t be any worse than that.
+ A placeholder for Transaction Scripts.
This is the natural place to have your Transaction Scripts, if you use them. For example, this is how to coordinate actions on several domain objects, interactions with the database, and so on. To a certain degree, I think Transaction Scripts are nice; just watch out that you don’t overuse them. As always, the same rule applies: everything in moderation.
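The chunky-versus-chatty advantage is the easiest one to make concrete. The sketch below is illustrative only; the class and method names are mine, not the article’s, and the AppDomain boundary crossing is simulated by a counter. A chatty design pays one remote call per field, while a chunky design sends the whole order in a single call.

```java
// A hypothetical order DTO; in a chunky design the consumer fills it
// in locally and sends it across the boundary in one call.
class OrderData {
    String customerId;
    String productId;
    int quantity;
}

class ApplicationLayer {
    int remoteCalls = 0; // counts simulated boundary crossings

    // Chatty style: three boundary crossings for one logical operation.
    void setCustomer(String id)     { remoteCalls++; }
    void setProduct(String id)      { remoteCalls++; }
    void setQuantity(int q)         { remoteCalls++; }

    // Chunky style: one boundary crossing carrying the whole order.
    void saveOrder(OrderData order) { remoteCalls++; }
}

public class ChunkyVsChatty {
    public static void main(String[] args) {
        ApplicationLayer chatty = new ApplicationLayer();
        chatty.setCustomer("c1");
        chatty.setProduct("p1");
        chatty.setQuantity(3);
        System.out.println("chatty crossings: " + chatty.remoteCalls); // 3

        ApplicationLayer chunky = new ApplicationLayer();
        OrderData order = new OrderData();
        order.customerId = "c1";
        order.productId = "p1";
        order.quantity = 3;
        chunky.saveOrder(order);
        System.out.println("chunky crossings: " + chunky.remoteCalls); // 1
    }
}
```

The difference matters little in-process, but each crossing becomes a marshaled round trip once a real boundary sits between the tiers.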
I know you won’t believe me if I don’t give any disadvantages, so here are some:
– Over-design of simple applications.
For some simple applications, having this layer might feel like over-design.
– Risk of a less rich user interface experience.
You will move some behavior pretty far away from the consumer. This means the consumer might not get instant feedback from actions, but get feedback only after several actions have been made and submitted. On the other hand, since by default I let the domain model objects travel from application server to consumer, this risk is reduced quite a lot.
Is it necessary to have an Application layer with web applications, or is it only beneficial when you have an AppDomain boundary between the Consumer tier and the Business tier (which you generally don’t have in the case of web apps)? Well, first of all the word generally is a clue here. You might have an AppDomain boundary in the future, and even if you don’t, the other advantages apply.
So, let’s move on to today’s work. Do you remember the use case from part 3? Oops, I almost forgot. I should first direct any new readers to previous parts (part 1 , part 2 , part 3 ) if you haven’t read them already. With that out of the way, let’s get going.
Once more then: do you remember the use case from part 3? The example use case was about registering an order, but first locating a customer and the desired product. In part 3, we investigated the use case from a consumer perspective, and therefore I showed the code for how the consumer interacted with the domain model. There was also a small amount of code for calling the Application layer to fetch and update a domain model subset.
Today, we will move one step to the right in figure 1 and focus on the Application layer, and before we end today look at the Consumer Helper layer too. The Consumer Helper layer is actually very important when it comes to Windows Forms applications, but we’ll start today with Web Forms applications. It’s important to note that, if possible, I want the Application layer to be the same for both (and other) types of applications. We’ll come back to this when it’s time for a context switch.
The Application Layer Class
In the previous article I created the OrderRegistration class shown in Figure 2 to support the example use case.
Figure 2 The OrderRegistration class, old version
All the services the consumer needs for the use case are provided in a single Application layer class.
As you know, I’m creating the new architecture while I write these articles. Because of this I discover changes that I want to apply as I write each new part in this series. That is the case this time too.
Well, the only change I’ve made this time is to drop the AndRefresh() style in the API. The idea with SaveAndRefreshOrder() was that it took an order as a parameter, for example, and sent back the same order object after it had been saved to the database and had had its values refreshed. This has its merits, especially as regards efficiency, because it cuts out one roundtrip if the consumer wants the updated object back for further processing. Anyway, after having thought some more about it, I have now decided to go back to my old way of doing this: that is, to have one method that does the save and another method to do the fetch afterwards. The advantages of this are:
+ Often little need for refresh in typical web applications.
When it comes to web applications, it’s often useless to get a refreshed object back. This is because you will have a new request to render the object, and by then the refreshed object is gone if you haven’t stored it in the Session object, say. We are simplifying by deciding on a new fetch instead.
+ No real need for refresh if the only reason is to grab primary key value after insert.
Often, the reason for the refresh is to get the newly created primary key value back for new rows, when identities (database-based autonumbering of some kind) are used. Since I typically use GUIDs, created at the client for new rows, this is not a problem. If you want to use integer primary keys, you can create a custom number generator that grabs blocks of values from the database, and thereby avoid the bottleneck risk that is so common for custom primary key generators. We also avoid other problems caused by delayed key generation if we use GUIDs or some other custom solution, but of course the architecture will support database-based autonumbering too. For now, this is another story. Hopefully I will start to address it in Part 6.
+ More obvious API regarding what is really happening when Remoting is involved.
Remoting will not send the same object back, but rather a new one. We’ll return to this later, but for now I think we can agree that it can cause one or two problems for us. One at least, and a rather big one at that.
+ No ByRef in the interface.
Well, skipping ByRef isn’t a real benefit, as long as ByRef isn’t causing more data than necessary to travel between AppDomains. It’s more of a feeling that ByRef is often a sign of weak design. Is this asking for a barrage of emails now? Hey, I do use ByRef, and I use it quite often!
+ Pushing an asynch-friendly interface.
If possible, I think it’s a good idea to prepare for, or even start directly using, an interface that is asynch-friendly. If you use fire-and-forget methods like Save(), then you can let them add a request to a queue, and the request will be executed when appropriate, but not in real time. The user will see the benefit of this directly, because he doesn’t have to wait for the request to execute. Of course, take care that you don’t add this fire-and-forget style without investigating the implications for your specific application.
+ Only primitive operations in the API.
The methods of the Application layer class are simpler now. Each of the methods does one thing, and does it well. By trying to get simpler methods in the interface, you will usually find that you get a more flexible and reusable API. If you think that the old solution was simpler for the user interface developer, you can always let the Consumer Helper layer simplify and hide the new API and expose a less granular interface.
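The client-created GUID keys mentioned above can be sketched like this. The Customer class here is a hypothetical analogue (the article’s code is VB.NET; this is an illustrative Java translation of the idea): because the consumer creates the key itself, there is no refresh round trip needed just to learn the key after an insert.

```java
import java.util.UUID;

// Hypothetical domain class: the primary key is created in the
// constructor, on the client, instead of by database autonumbering.
class Customer {
    final UUID id = UUID.randomUUID(); // key exists before any save
    String name;
}

public class ClientCreatedKeys {
    public static void main(String[] args) {
        Customer c = new Customer();
        c.name = "Volvo";
        // The key is known before the row is ever saved, so there is no
        // need to fetch the row back just to learn a generated key.
        System.out.println("key known before save: " + c.id);
    }
}
```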
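The fire-and-forget point can also be sketched with a queue. All names here are illustrative, not from the article’s API: save() only enqueues the request and returns at once, and a worker drains the queue later, outside real time.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal fire-and-forget sketch: the consumer-facing save() returns
// immediately; a worker performs the actual work later.
class AsyncSaver {
    private final Queue<String> queue = new ConcurrentLinkedQueue<>();

    // Fire and forget: enqueue and return, no waiting for execution.
    void save(String orderId) {
        queue.offer(orderId);
    }

    // Worker side: take one queued request (null if the queue is empty).
    String processNext() {
        return queue.poll();
    }
}

public class FireAndForget {
    public static void main(String[] args) {
        AsyncSaver saver = new AsyncSaver();
        saver.save("order-1"); // returns immediately
        saver.save("order-2");
        // Later, a worker drains the queue in FIFO order.
        System.out.println("processed: " + saver.processNext());
        System.out.println("processed: " + saver.processNext());
    }
}
```

In a real application the worker would run on its own thread or process, and you would have to think about durability of the queue; this sketch only shows the interface shape.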
So, to conclude all this, I think I am moving to a simpler and more general API with the change just discussed. The new OrderRegistration class now looks like the one in Figure 3.
Figure 3 The OrderRegistration class, new version
You might wonder why I think a larger API (more methods) is simpler. Well, the methods in themselves are simpler than before, with only one task each. Also, the old API would probably have needed FetchCustomer() and FetchOrder() methods too, so it would have become the larger one sooner or later.
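To show what the primitive-operation style amounts to, here is a hedged sketch of the reworked class. Only the method names SaveCustomer()/FetchCustomer()/FetchCustomers()/FetchOrder() come from the text (SaveOrder is my assumed counterpart); the in-memory maps are stand-ins for the real persistence calls, not the article’s implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Stand-ins for the article's domain and query classes (illustrative).
class CustomerQuery { String nameStartsWith = ""; }
class Customer { UUID id = UUID.randomUUID(); String name; }
class Order { UUID id = UUID.randomUUID(); }

// Sketch of the reworked API: primitive operations only, with save
// and fetch as separate calls instead of the old ...AndRefresh() style.
class OrderRegistration {
    private final Map<UUID, Customer> customers = new HashMap<>();
    private final Map<UUID, Order> orders = new HashMap<>();

    List<Customer> fetchCustomers(CustomerQuery query) {
        List<Customer> result = new ArrayList<>();
        for (Customer c : customers.values())
            if (c.name != null && c.name.startsWith(query.nameStartsWith))
                result.add(c);
        return result;
    }
    Customer fetchCustomer(UUID id) { return customers.get(id); }
    void saveCustomer(Customer c)   { customers.put(c.id, c); }
    Order fetchOrder(UUID id)       { return orders.get(id); }
    void saveOrder(Order o)         { orders.put(o.id, o); }
}

public class PrimitiveApiDemo {
    public static void main(String[] args) {
        OrderRegistration reg = new OrderRegistration();
        Customer c = new Customer();
        c.name = "Volvo";
        reg.saveCustomer(c);                        // save is one call...
        Customer fetched = reg.fetchCustomer(c.id); // ...fetch is another
        System.out.println("fetched: " + fetched.name);
    }
}
```

Note how each method really does only one thing; a Consumer Helper could recombine these primitives into coarser calls if a particular user interface wants that.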
Inside the Application Layer Class
To be honest, the use case I started discussing in the last article is too simple (but that is good for the moment), which becomes apparent when we come to the Application layer class. All it has to do is interact with the Persistence Access layer. Anyway, in my opinion, talking to the Persistence Access layer is extremely interesting in itself, so let’s have a look at that now. In Listing 1 you’ll find the FetchCustomers() method.
Public Function FetchCustomers _
(ByVal customerQuery As CustomerQuery) As IList
Types of Parameters
One thing that I have avoided touching on so far is what kind of parameters to use when fetching single objects. The main reason for avoiding this was the ...AndRefresh() style of interface used last time. Now that I have changed to, say, SaveCustomer()/FetchCustomer() instead, I have to decide on the parameter type for FetchCustomer(), FetchOrder(), and so on.
My first idea was to use ordinary objects here too, such as Customer objects with the key values set. Note, for example, that FetchCustomers() uses a query object as its parameter, and the query object has all the important information (criteria) needed for the query to be executed. By sending a customer object as the parameter to FetchCustomer(), the same pattern is used again, and the customer object will have all the criteria information (the primary key value) that is needed.
So, what’s the problem with the object approach then? Well, it feels a bit wasteful to send a full-fledged object when there’s actually only one property that is interesting. This is especially true if there is an AppDomain boundary to be passed, so that the whole object will be serialized and deserialized.
I could solve this by providing yet another form of unexpanded state of the object, having only the key value, but this adds to the complexity and it means a lot of work for a minor benefit. Another thing is that remoting creates a problem here because the object you get back is not the same as the one you sent.
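One middle road, if you do want to hide the key datatype from the consumer, is an opaque key type. This is my own sketch, not something from the article: the consumer only ever handles the wrapper, so swapping the underlying GUID for something else later would not ripple out into the consumer tier.

```java
import java.util.UUID;

// Hypothetical opaque key type: the consumer passes CustomerId around
// without knowing whether a GUID, an integer, or something else is
// inside, so the underlying datatype can change without consumer changes.
final class CustomerId {
    private final UUID value; // could later be swapped for an int, say

    private CustomerId(UUID value) { this.value = value; }

    static CustomerId newId() { return new CustomerId(UUID.randomUUID()); }

    @Override public boolean equals(Object o) {
        return o instanceof CustomerId && ((CustomerId) o).value.equals(value);
    }
    @Override public int hashCode() { return value.hashCode(); }
    @Override public String toString() { return value.toString(); }
}

public class KeyEncapsulationDemo {
    public static void main(String[] args) {
        CustomerId a = CustomerId.newId();
        CustomerId b = CustomerId.newId();
        // The consumer compares and passes keys without seeing the datatype.
        System.out.println("distinct keys: " + !a.equals(b));
    }
}
```

The price is an extra little class per key type, which is exactly the kind of added complexity weighed in the paragraph above.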
So, what is it that I’m really after? Well, the most important thing is that I don’t want to spread information about the key structure everywhere, especially not in the consumer tier. On the other hand, if all the keys are GUIDs, then it’s not that much of a problem to let the consumer know about the datatype. It’s not very likely that you will outgrow the GUID datatype, for example, and have to change it as a result. Even so, it’s nice to hide the key datatype from the consumer in case you do have to change it, especially if you start with a smaller datatype. Remember, one of the main reasons for the new architecture was to provide a higher degree of encapsulation of the database, so that it can evolve more easily without affecting the rest of the application.

Locating the Correct Persistence Class
As a matter of fact, locating the correct persistence class for filling domain objects or saving domain objects is just part of the problem of mapping the relational database to your domain model. You might find this trivial at first, but it’s not. For example, add inheritance to the picture and the problem quickly gets complicated.
Since I haven’t yet decided how I want to handle the whole Data Mapper thing, I write the mapping code manually and explicitly for now.
When I have decided how I want the Data Mapper code to work, I will have several different mechanisms from which to choose. First, I have to decide where to hold the metadata that I need for mapping between the domain model and a relational database.
Interface or Not
In part 3 I accessed the Application layer class directly, without custom interfaces (the Bridge pattern or the Separated Interface pattern).
Working directly with the Application layer classes is easiest, and I also think there is otherwise a risk of finding that I have to create one interface per Application layer class. Been there, done that.
On the other hand, if you are going to talk with the Application layer over a remoting boundary, then the best practice is to use custom interfaces only. One good thing about this is that you only have to distribute an assembly of interfaces to the consumer, and not the assembly with the Application layer code itself. Plus, the added complexity is completely encapsulated by the Consumer Helper layer.
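The interface-only approach can be sketched as follows. In .NET terms the interface would live in a small interfaces-only assembly distributed to the consumer; here the idea is shown in Java with illustrative names, and the implementation body is a stub.

```java
// What gets distributed to the consumer: the interface only
// (in .NET terms, a small assembly containing nothing but interfaces).
interface IOrderRegistration {
    String fetchCustomerName(String customerId);
}

// What stays on the application server: the concrete class. The
// consumer never references this type directly, only IOrderRegistration.
class OrderRegistrationImpl implements IOrderRegistration {
    public String fetchCustomerName(String customerId) {
        // Stub standing in for a real Persistence Access layer call.
        return "customer " + customerId;
    }
}

public class SeparatedInterfaceDemo {
    public static void main(String[] args) {
        // The consumer works against the interface; in practice a factory
        // or remoting proxy would supply the implementation.
        IOrderRegistration reg = new OrderRegistrationImpl();
        System.out.println(reg.fetchCustomerName("42"));
    }
}
```

This is exactly why the Consumer Helper layer can hide the extra plumbing: it is the one place that knows how to obtain an implementation of the interface.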