Scalable Architectural Guidelines: Designing for Flexibility : Page 3

Both stateless and stateful components have a legitimate role in any scalable system architecture. This applies equally to client/server and n-tier architectures. This article covers basic and not-so-basic concepts related to scaling up and scaling out.



Delegate State Maintenance
The second way to address this is to delegate state maintenance to the base client rather than keeping it on the server. On successive calls, the client passes in the relevant state accumulated during the previous call(s), and the server operates on the state information it needs. Because state is passed back to the client and returned to the server on subsequent calls, successive client calls can be handled by any available server.
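As a rough sketch of this pattern (the function and field names here are purely illustrative, not from any particular framework), the server method receives the accumulated state as an argument, operates on it, and hands the updated state back to the client for the next call:

```python
def add_line_item(order_state: dict, item: str, qty: int) -> dict:
    """Stateless server method: all interim state arrives as an argument
    and is returned to the client; the server keeps nothing between calls."""
    items = list(order_state.get("items", []))
    items.append({"item": item, "qty": qty})
    return {**order_state, "items": items}

# The client accumulates the state and passes it back on each call, so
# each call could be handled by a different server replica.
state = {}                                   # first call: no prior state
state = add_line_item(state, "widget", 2)    # could be handled by server A
state = add_line_item(state, "gadget", 1)    # ...and this one by server B
```

Because every call is self-contained, the load balancer is free to route each one independently.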

There are a few factors that can help you determine which method to use. If security is an issue and you need the highest possible level of protection for the privacy and integrity of state information, then the most secure solution is obviously to store that state somewhere on the server rather than sending it back to the client. Additionally, if the volume of state information is large and the network link to the client may be slow (e.g., 56K dial-up), then for efficiency's sake it is prudent to store that state on the server, where it can be accessed quickly, rather than shuttling it back and forth across a slow link. Another factor to consider is that state accumulated at the client is available only at that particular client. If you'd like to persist state between sessions, you'll need to store it on the server, since there's no guarantee that the user will initiate every session from the same workstation.

OK, so we’ve solved the problem of interim state storage, and now all server methods can be stateless. Recall, however, that I mentioned two ramifications of load balancing we need to consider. The second ramification is one more obstacle to surmount before we can achieve our stated goal of real-time load balancing: server objects must be deactivated so that successive client calls (if applicable) can be routed to whichever server is available.

Did I say deactivate? Let me clarify up front that I’m not referring to MTS/COM+ deactivation via SetComplete. (I’ll use the term COM+ to refer to both MTS and COM+ from here on.) Contrary to conventional wisdom, COM+ deactivation provides very little in the way of preparing your software for scalability.

The problem with COM+ deactivation, the reason it is ineffective in this regard, is that while the object is indeed destroyed on the server (since VB6 objects are non-poolable), the client is blissfully unaware that the object has been destroyed. The client still holds a COM proxy (which it thinks is the actual COM object) pointing directly at a COM stub on a specific server. Any successive calls to the server object will therefore fail unless they are routed to the original server, which means successive calls can’t necessarily be handled by the most available server of the moment.

This is worth repeating: COM+ deactivation (or JIT, Just-In-Time Activation/Deactivation) is not a significant factor in improved scalability or performance. First of all, clients shouldn’t be making excessive calls out to server objects, since successive calls to server objects involve multiple trips across the network. To make matters worse, if the server implements COM+ JIT, this involves repetitive creation and destruction of the server object, which will most likely degrade performance rather than enhance it. What’s that? You say your server tier is deployed locally to its client, so network communication isn’t an issue? That’s good for now, but what if you eventually need to scale out? Are you prepared to redesign and redevelop your interfaces? Designing for the future is a critical factor in achieving a flexible, truly scalable system architecture.

The most important factor for both performance and scalability is that the client communicate with its inter-tier server objects in an efficient, one-call, 'get-in, get-out' manner. This minimizes the number of calls across the network and reduces to one the number of times a server object must be created to service a particular transaction. Naturally, a class that provides its entire service via a single method call is a stateless class, which the client can destroy after that single call completes. We’ll discuss this particular aspect of statelessness further in a moment.
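To illustrate the contrast (with made-up names, and sketched in Python rather than VB6), compare a chatty, stateful interface with a one-call stateless one:

```python
class ChattyOrder:
    """Stateful, chatty interface: each method is a separate network round
    trip, and every call must reach the same server instance that holds
    the accumulating state."""
    def __init__(self):
        self.customer = None
        self.items = []
    def set_customer(self, customer):
        self.customer = customer
    def add_item(self, item):
        self.items.append(item)
    def submit(self):
        return {"customer": self.customer, "items": self.items}

def submit_order(customer, items):
    """Stateless 'get-in, get-out' interface: one call carries everything
    needed, so any available server can handle it, and the server object
    can be destroyed as soon as the call returns."""
    return {"customer": customer, "items": list(items)}

result = submit_order("ACME", ["widget", "gadget"])
```

The stateless version does in one round trip what the chatty version needs four for, and leaves nothing behind on the server between calls.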

COM+ deactivation is primarily a mechanism for destroying a transactional object after its transaction has been committed. For non-poolable objects in particular, it does practically nothing to enhance performance on the server, and injudicious use of deactivation for an object that is called repetitively by its client can actually degrade performance. Serendipitously, the (mis)use of deactivation has resulted in the design and development of stateless classes that provide 'one-call' services to their inter-tier clients. It is precisely the one-call, 'get-in, get-out' nature of these objects that enables them to be scaled out properly; deactivation itself has very little to contribute in this regard. Quite a bit of information has already been published regarding the myths and facts surrounding COM+ deactivation for non-transactional, non-poolable (e.g., VB6) components.

Ted Pattison, whose books I have read and recommend highly, mentions this issue in both of his books:

Programming Distributed Applications with COM and Microsoft Visual Basic 6.0
Programming Distributed Applications with COM+ and Microsoft Visual Basic 6.0

While I’ve not seen this personally, I understand that Don Box and Tim Ewald also address this issue in their respective writings. All of these authors are well known and enjoy favorable reputations in the developer community. It would be well worth the time of any developer seeking more information on this subject to acquire and read one or more publications by any of these authors.

For more information available online, take a look at Ted Pattison's excellent article on the subject.


Now, while it’s true that successive calls must be locked to the server on which the object was originally created, I don’t wish to overstate or overdramatize the ramifications of this fact. If the client is itself an object that exists as a component of a transaction whose life span is measured in seconds (or even less), then this is not that big a deal. Ideally, inter-tier calls should involve a single stateless call, and the client should release the server object upon its return. If the unit of work can’t be accomplished in a single call, then the next best thing is probably to maintain the server component statefully on the server and lock the client to that specific server for all successive calls.

In this case, the client will be pinned to a specific server for a matter of milliseconds, or even seconds, which is a relatively long time. However, in this scenario we are dealing with a client that will be calling into the server class only a second or third time. I’d rather assume that the load on the server to which this client is pinned won't change all that dramatically over the next second or two than go through the definite overhead of having the client release and re-instantiate the object for every successive call, even though that would allow the load balancer to assign each call independently. Note that .NET introduces a new dynamic in this regard with the introduction of poolable objects for VB developers. If the object can be released to the pool for use by other clients, rather than destroyed outright, that might be a factor recommending deactivation, that is, release to the pool so the object isn't held idle between calls. Tread carefully, though: to pool objects successfully, you need to ensure that the overhead of pooling is not greater than the overhead of simply destroying and recreating them. But that’s another discussion for another day.
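The routing trade-off described above might be sketched like this (the server names and the load metric are invented for the example): stateless calls are each dispatched to the least-loaded server, while a stateful conversation is routed once and then pinned to that server for its remaining calls:

```python
servers = {"A": 0, "B": 0}   # server -> current load (here, just a call count)

def route_stateless():
    """Each stateless call is independently routed to the least-loaded server."""
    target = min(servers, key=servers.get)
    servers[target] += 1
    return target

pins = {}

def route_pinned(session_id):
    """A stateful conversation is routed once, then pinned: later calls in
    the same conversation must return to the server holding its state,
    regardless of the current load picture."""
    if session_id not in pins:
        pins[session_id] = min(servers, key=servers.get)
        servers[pins[session_id]] += 1
    return pins[session_id]

a = route_stateless()            # goes to the quieter server
b = route_stateless()            # the next call may go elsewhere
first = route_pinned("sess-1")   # first call of a stateful conversation
second = route_pinned("sess-1")  # same server, whatever the load is now
```

The pin lasts only for the life of the short conversation, which is why, as argued above, it is tolerable for second and third calls but not for long-lived clients.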

The one thing that should never be done (in terms of scalability) is to lock a persistent client (e.g., a Windows user application or a browser session) to a specific server. Since the client will be pinned to the server over a span of minutes, or even hours, conditions on that server could degrade markedly from what they were at the time the lock was established. Yet because the user’s client agent is pinned to a specific server, his or her session cannot take advantage of an alternate server that is currently under a lower load. Of course, if this is your only server at this point in time, then the point is moot, for now. If you do lock a persistent client to a particular machine, by storing state in the Session object for example, you will need to make substantial modifications to the state-management aspects of your application in order to scale out via a replicated deployment in the future.
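One common way around this, sketched here with a plain dictionary standing in for a database or distributed cache (the names are illustrative), is to keep session state in shared storage rather than in any one web server's memory, so that any server can handle any request:

```python
shared_store = {}   # stand-in for a database or distributed session cache

def handle_request(session_id, server_name, message):
    """Any server can serve any request: it loads the session's state from
    the shared store, updates it, and writes it back. No server holds the
    session in memory, so no pinning is required."""
    state = shared_store.get(session_id, {"history": []})
    state["history"].append((server_name, message))
    shared_store[session_id] = state
    return state

handle_request("u42", "server-A", "login")
final = handle_request("u42", "server-B", "checkout")  # a different server
```

The cost is an extra read and write to the shared store per request, but the session survives any individual server becoming busy or unavailable.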

Let’s return to an important point touched upon above. It is important for both performance and scalability that the client communicate with its remote server in an efficient, one-call, 'get-in, get-out' manner. This minimizes the number of calls across the network and reduces to one the number of times a server object must be created to service a particular transaction.

Object-oriented aficionados might well be wondering: 'Hold on a minute. Standard OO design mandates that classes encapsulate and maintain their own data and provide it to their clients through a defined interface.' Well, OK, that’s true, but there’s a time and place for everything. The object-oriented discipline is intended to address software-construction issues rather than issues of performance and scalability. As such, OO techniques are fine when developing client-side software, since workstation resources are relatively plentiful. They are also suitable for intra-tier class construction and communication. However, they are definitely not suitable for inter-tier class relationships. As a developer, I sometimes feel like I’m on an incredible journey. The feeling is not unlike being whisked up by a cyclone, whirled around, and deposited in a strange, faraway land. To paraphrase the young lady who preceded us on just this type of journey...

I don’t think we’re on the Client anymore, Toto!
