Scalable Architectural Guidelines: Designing for Flexibility

I am frequently asked the following question: “Should I be using stateless or stateful components for scalability?”

My answer is invariably an emphatic “Yes!”

What I mean by this is that both stateless and stateful components have a legitimate role in any scalable system architecture. This applies equally to both Client Server and n-tier architectures. (After all, a Client Server system is simply an n-tier system composed of 2 tiers. In the following discussion we’ll use the term n-tier to refer to Client Server as well as classic n-tier systems, which imply 3 or more tiers.)

Before we can evaluate the question properly, we need to define what we mean when we say scalable.

As developers we are quite comfortable with the notion of efficiency. Efficiency means you use a binary search to search through an array of sorted values. It means you use a keyed item in your WHERE clause for indexed, rather than sequential retrieval. It means you process all columns in an array on a single pass, rather than scanning through the array repetitively. Enough. I’m sure you can supply plenty of your own examples. We get the message.
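To make these efficiency points concrete, here is a minimal sketch (in Python purely for illustration; the function names are my own, not from the original article):

```python
from bisect import bisect_left

def binary_search(sorted_values, target):
    """O(log n) lookup in a sorted list, versus O(n) for a linear scan."""
    i = bisect_left(sorted_values, target)
    if i < len(sorted_values) and sorted_values[i] == target:
        return i
    return -1

def column_totals(rows):
    """Total every column in a single pass over the array, rather than
    re-scanning the whole array once per column."""
    totals = [0] * len(rows[0])
    for row in rows:
        for col, value in enumerate(row):
            totals[col] += value
    return totals
```

Both techniques reduce work per request; as the article goes on to argue, though, this kind of per-request efficiency is a baseline, not a scalability strategy.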

Efficiency establishes the baseline for your software’s performance. This baseline will naturally vary according to the supporting hardware. However, the efficiency baseline generally assumes that all of the resources of the supporting hardware, as well as the software itself, are dedicated and committed on behalf of a single user.

During the desktop era, and well on into the Client Server era, the bulk of software was indeed installed on users’ personal workstations. Thus the notion of efficiency was paramount. Efficient algorithms were sufficient to deliver excellent performance, running on a client platform dedicated exclusively to the needs of a single user. Naturally, if an old P90 wouldn’t deliver adequate performance, the workstation could be upgraded to a P133, P200 or better. This is what’s known as scaling up. But scaling up isn’t what most people are referring to when they mention scalability. We’ll touch again on this subject in a few moments.

Moving further along into the Client Server era, as software was humming briskly along on the client workstations, database administrators (the server guys) started noticing certain disquieting symptoms. Around 11:00am (that’s when the West Coast comes on-line) database performance started slowing to a crawl. This problem would manifest itself for about an hour and at about 12:00pm (that’s when the East Coast takes lunch) performance would improve somewhat. As additional offices were brought on-line with the new Client Server system, performance problems became more and more noticeable.

These DBAs were among the first to be impacted by problems of scale. Their database servers, properly proportioned for a certain user base, were inadequate to handle an increasing community of users. Unlike a client workstation, which supports a relatively fixed level of activity, the demand level on a server is not fixed, but rather grows along with its user base. Thus, server platforms face problems of scale which do not impact client workstations. In a nutshell: the level of performance a server will support is relative to the concurrent level of demand. Where demand is low, adequate support is provided. Where demand is too high, performance degrades to unacceptable levels.

Problems of scale occur when the capabilities (not necessarily the efficiency) of the software and its supporting hardware are exceeded by the demands of its growing user base. As we saw before, single user software installations are easy to scale (assuming the software is coded efficiently) since only a single user and a single supporting platform are involved. Thus, as long as the hardware can be scaled-up to accommodate the software all is well.

Multi-user systems can be scaled up as well. However the limits on scaling up are relatively finite. Go ahead, name your dream server. Budget is no problem, you’ve got the user base and revenue to justify it. Once you install this dream system you have maxed out. If your user base continues to grow, scaling-up no longer presents the solution since you are already scaled-up to the limit. It is true that at some point, perhaps next week or next month, more powerful hardware will inevitably become available. This is of relatively little comfort though, in supporting a user base which demands this additional power today.

Reality check: Scaling up is a very rational and reasonable approach for systems with a known user-base with limited growth potential. A departmental Intranet application is a good example of just such a scenario. Scaling up should be considered too limited though, for those systems which are exposed to a user-base with high, and perhaps virtually unlimited, growth potential (e.g. an Internet e-Commerce application).

A more enduring solution is to scale out. In contrast to scaling up, which is increasing the power of a single server, scaling out means adding additional servers to your supporting infrastructure. It’s easy to see how scaling out breaks the barrier presented by scaling up. Your user base keeps growing? No problem, just add more hardware. In contrast to the relatively limited options of scaling up, scaling out offers much greater growth potential. Just to give you an idea, the top-ranked (not the cheapest!) configuration for the most recent TPC-C benchmark, is composed of 32 database servers (8 CPUs each) and 4 DTC servers (4 CPUs each) for a total of 36 servers with 272 CPUs, in support of a single application!

Here are some interesting links for more details on this incredibly scaled-out system. It is gratifying to note that the supporting operating systems and server software are exclusively Microsoft Windows 2000 products.

  • www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=101091903
  • www.tpc.org/results/individual_results/Compaq/compaq.256p.091901.es.pdf
  • You might also wish to visit the Transaction Processing Performance Council at www.tpc.org.

    Let’s recap a few points about scale and scalability. Problems of scale are associated with server, rather than client platforms. This is due to the fact that client load is limited to the level generated by a single user, while server demand can grow to virtually limitless proportions with an expanding user base. Solutions to this problem, at the hardware level, can be implemented in two ways. Scaling up, or increasing the power of a particular hardware component, is a relatively limited solution. Scaling out by increasing the number of hardware components in the supporting hardware infrastructure is a much more enduring solution. 

    So far we’ve been focused on scalable solutions at the hardware level. Well, what about software? Good question, this is VB-2-The-Max after all! Let’s consider the software implications, and contrast the pros and cons, of scaling up vs. scaling out.

    Scaling up usually doesn’t require any adjustment to the application software. As it ran previously on a Pentium 450MHz, it will continue to run on an Athlon 1.4GHz, just a bit faster. If the application is memory intensive, simply adding additional memory should improve performance, assuming the supporting OS can make use of the additional RAM. If the application is I/O bound, upgrading to a faster disk subsystem should result in a performance gain. In all these cases, it is unlikely that application code will require any adjustment in order to take advantage of the new scaled up deployment.

    Scaling out, though, is a very different story, for obvious reasons. In order to scale out, we need to take a previously monolithic software installation and spread it out across multiple server platforms. The defining question in terms of scalability is ‘how do we architect our systems to facilitate a scaled out deployment?’

    When phrased in this manner, the difference between efficiency and scalability is eminently clear. The most efficient binary search or linked list manipulations are certainly useful algorithms when programming for efficiency. However, they simply do not address the critical design and development issues which must be addressed in order to facilitate a scaled out deployment.

    Scalability Issues
    In the following paragraphs I’ll identify various scalability issues and present proposals which address these issues through specific software construction practices. By putting these architectural concepts into practice you’ll be able to develop your own highly scalable applications.

    Outward Bound
    So you’ve got the additional hardware and you’re ready to scale out. Before we begin to spread the software around, we need to consider two basic strategies; these are Replication and Distribution. Replication involves replicating the software across multiple platforms. Some sort of a load balancing mechanism is generally employed so that incoming client requests are routed to the server which is under the least load, at any given moment. Distribution involves actually breaking out the software into tiers, and installing different tiers on different platforms. With this scheme, no load balancing is required since the client continues to be serviced by the single outermost tier. It is thus irrelevant to the client whether or not the software is distributed across multiple servers since this distribution is invisible to the client.

    Replication: This strategy is fairly common for really large scale systems. There are many benefits to replication. One advantage is that it leaves the architecture of each individual deployment unchanged. Each deployment is essentially an entire system (except for the centralized database) which operates independently of any other installations. Another benefit to replication is redundancy. A replicated system may be maintained and upgraded in a phased manner, since one server can be taken offline without shutting down the entire system, as long as at least one replicated server continues to operate. Similarly, the failure of a single server will not cause the entire system to fail.

    The disadvantage to replication is that it can be a bit more expensive than distribution. Replication requires the duplication (at least) of the existing application infrastructure (except for the database server) plus an additional component, the load balancer, in order to spread requests out among the replicated servers. Different load balancing methods are available, however the best load balancing arrangements use real-time load information to route requests to the server which is under the least stress at the time of the request. The implication of this is, that under this type of load balancing, successive requests by a specific client may be serviced on different machines. We will consider the software implications of this in a few moments.
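As a rough illustration of that real-time routing decision, here is a minimal sketch in Python. The server names and load figures are hypothetical; a production load balancer works from live health and stress feeds, not a static table:

```python
def route_request(server_loads):
    """Route to the replica under the least load at this moment.
    `server_loads` maps server name -> current load figure."""
    return min(server_loads, key=server_loads.get)

# Loads shift between requests, so successive calls from the same
# client may well be serviced by different machines.
loads = {"app-a": 0.72, "app-b": 0.31, "app-c": 0.55}
chosen = route_request(loads)
```

The important consequence for our discussion is exactly the one noted above: because `route_request` is evaluated per call, nothing guarantees that two successive calls land on the same server.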

    Distribution: The advantage to distribution is that distribution can be less expensive than replication. At the lowest end, distribution can be performed by spreading the software across the existing application and database servers. Distribution is usually taken as the first step to a scaled out deployment. Distribution without replication doesn’t require any load balancing since only one server is available for client connections. However, you don’t gain any of the redundancy advantages which come with scaled out replication.

    Distribution can also be advantageous depending on the specific profile of your transactions. If you can determine that the backward facing data communications between your Business/Data layer and the database is minimal compared to its forward facing communication with the UI layer, then you might wish to distribute the Business/Data layer, or specific transactions, onto the database server in order to minimize network traffic and reduce associated latency.

    Transaction: a unit of work measured from request to response. Not to be confused with an MTS/COM+ database transaction. To be sure, an MTS/COM+ database transaction qualifies as a transaction. But not every transaction is an MTS/COM+ database transaction.

    This type of deployment architecture is fairly common, for medium and smaller systems, especially with IIS applications, where you’ll frequently find the UI and Business layers deployed on the IIS server, with the Data layer installed on the database server.

    Naturally, this scaled out distribution assumes that the database server has sufficient resources to bear the load for both the database and the Business/Data software layers. Other scenarios which would benefit from a scaled out distribution might be where a particular component or logical tier consumes an inordinate amount of resources as compared to the rest of the system. In such a case, distributing that portion of the software onto its own dedicated server, depending on the specific circumstance, might very well alleviate the load on the system as a whole, resulting in increased system performance.

    One more example of where it might make sense to distribute your application layers is where you can determine that the database server is idling while your IIS server is under heavy load. In this case it probably makes sense to distribute your software layers between these two servers in order to share the load equally between the two servers. Again, this is a preliminary scalability solution which addresses the current load problem with the hardware already available.

    As you can see, scaled out distribution is a viable alternative for improving performance in a limited number of scenarios. Scaled out replication, on the other hand, duplicates practically the entire supporting hardware, which should result in an immediate 50% reduction in server load. (That is, on the application servers. It is true that this will probably result in an immediate load increase on the database server. However, if my experiences have been typical, a database server which is adequately supporting a single application server should have power to spare in order to support a second application server. Standard disclaimer: your mileage may vary!) Nonetheless, both of these options are available when scaling out, and indeed many scaled out hardware configurations will ultimately contain a mix of replicated and distributed software deployments. Let’s use the mixed deployment architecture presented below when considering the software construction ramifications for both of these types of scaled deployments.

    The system configuration depicted above shows a deployment configuration with both Replicated and Distributed deployments. As you can see, both the UI and Business layers are replicated with load balancing. You can see as well how the UI and Business layers are distributed over different physical servers. Browser based clients call into the IIS servers from across the Internet. Windows clients call straight into the business layer since they provide their own client-based user interfaces. Let’s assume also, that the Data layer is installed locally to the Database server for network efficiency as described above.

    A couple of points regarding this architecture. First, my intention is not to recommend this, or any particular deployment configuration. Obviously, no deployment strategy can be proposed without a careful and comprehensive analysis of the particulars of the specific application being considered. This is certainly not the case here, since I don’t know anything about the systems you work with. I’m simply proposing this configuration as the basis of our discussion since it contains a variety of the different scaled out deployment strategies we’ve mentioned.

    Second, architecture gurus will immediately note the lack of a replicated database server. It is true that while a large scale deployment will usually implement some sort of replication, I have omitted this from the diagram. (In my defense, I’d like to point out that most large scale deployments will also include some sort of offline backup device, yet I’ve omitted this from the diagram as well, since it, like the replicated database, is largely irrelevant to our discussion.) I’d like to confine our discussion to specific coding practices which can be helpful in developing software so that it can evolve through various deployment architectures. In this context neither the backup device nor the database seems relevant. The replicated databases which I’ve seen, have used specific vendor supplied software in order to perform cross-replication. I haven’t encountered any specific application coding practices which are necessary in order to address this and consequently I’ve chosen to omit the topic of replicated databases from our discussion.

    As we previously discussed, the ideal load balancing scenario allows any client request to be handled by the server with the lowest level of stress at the actual time of the transaction. It is therefore quite possible that subsequent client requests will be handled by separate servers. This has a couple of ramifications. First of all, it is easy to see how client / server relationships must be stateless (on the server) in order for this to be accomplished. If state is accumulated on Server A, it will be of absolutely no use if the next client request is serviced by Server B.

    There are two ways to address this. The first way to address this is to establish a state maintenance repository which is available to all machines to which it is relevant. This would most probably imply additional hardware for state storage at the tier, or global application level. State is generally stored in the repository with a unique key identifying a particular session or transaction. This key is delegated back to the client and returned to the server on successive calls so that the appropriate state information can be retrieved from the repository. The diagram below shows the server architecture expanded to include this new state repository.
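The key-based repository scheme can be sketched as follows (Python, purely illustrative; a real deployment would back the store with LDAP or a dedicated state server rather than an in-process dictionary):

```python
import uuid

class StateRepository:
    """Toy shared state store: every replicated server saves and fetches
    session state here by key, so any machine can service the next call.
    (An in-process dict is used for illustration only.)"""

    def __init__(self):
        self._store = {}

    def save(self, state):
        key = str(uuid.uuid4())      # key is handed back to the client
        self._store[key] = state
        return key

    def load(self, key):
        return self._store.get(key)

repo = StateRepository()
key = repo.save({"cart": ["widget"]})   # Server A stores the state...
state = repo.load(key)                  # ...Server B retrieves it on the next call
```

The client never holds the state itself, only the opaque key, which it returns with each subsequent request.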

    Theoretically, the application database can be used as the state repository; generally though, complex relational capabilities are not required for the relatively simple task of state maintenance. LDAP, the Lightweight Directory Access Protocol, is commonly used to implement a server side state repository. One product which I have worked with, Microsoft Site Server / Personalization & Membership, is based on LDAP.

    Delegate State Maintenance
    The second way to address this is to delegate state maintenance to the base client, rather than keeping it on the server. Thus, on successive calls, the client passes in the relevant state accumulated during the previous call(s) and the server can operate with the state information which it needs. Since state is being passed back to the client, and returned to the server on subsequent calls, successive client calls can be handled by any available server.
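A minimal sketch of client-delegated state, using a hypothetical shopping-cart method: the state travels out with the response and back in with the next request, so any server can handle any call:

```python
def add_to_cart(client_state, item):
    """Stateless server method (hypothetical): all required state arrives
    with the call, and the updated state is returned to the client, which
    holds it until the next request."""
    state = dict(client_state)
    state["cart"] = list(state.get("cart", [])) + [item]  # copy, don't mutate caller's state
    return state

s1 = add_to_cart({}, "widget")     # this call may land on Server A
s2 = add_to_cart(s1, "gadget")     # this one can land on Server B
```

Because the server retains nothing between calls, no repository and no server affinity are needed; the trade-offs (security, payload size, per-workstation state) are discussed next.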

    There are a few factors which can help you to determine which method to use. If security is an issue and the absolutely highest level of security is desired in order to protect the privacy and integrity of state information, then the most secure solution is obviously to store that state somewhere on the server, rather than sending it back to the client. Additionally, if the volume of state information is large, and the network link out to the client can be slow (e.g. 56K dial-up) then for efficiency’s sake it is prudent to store that state on the server where it can be accessed quickly, rather than shuttling that information back and forth across a slow network link. Another factor to consider is that state accumulated at the client will only be available at that particular client. If you’d like to persist state between sessions, then you’ll need to store it on the server since there’s no guarantee that the user will initiate all sessions from the same workstation.

    OK, so we’ve solved the problem of interim state storage, so now all server methods can be stateless. Recall, however, that I mentioned two ramifications of load balancing which we need to consider. The second ramification is one more obstacle which we must surmount before we can achieve our stated goal of real-time load balancing: server objects must be deactivated so that successive client calls (if applicable) can be routed to any server which is available.

    Did I say deactivate? Well, let me clarify up front that I’m not referring to MTS/COM+ deactivation via SetComplete. (I’ll use the term COM+ to refer to both MTS & COM+ from here on in.) Contrary to conventional wisdom, COM+ deactivation provides very little in the way of preparing your software for scalability.

    The problem with COM+ deactivation, that is to say the reason it is ineffective in this regard, is that, while the object is indeed destroyed on the server (since VB6 objects are non-poolable) the client is blissfully unaware that the object has been destroyed. The client remains with a COM proxy (which it thinks is the actual COM object) pointing directly at a COM stub on a specific server. This means that any successive calls to the server object will fail unless they are routed to the original server. This means that successive calls can’t necessarily be handled by the most available server of the moment.

    This is worth repeating. COM+ deactivation (or JIT – Just in Time Activation/Deactivation) is not a significant factor for improved scalability or performance. First of all, clients shouldn’t be making excessive calls out to server objects since successive calls to server objects will involve multiple calls across the network. To make matters worse, if the server is implementing COM+ JIT, this will involve repetitive creation and destruction of the server object which will most likely degrade performance, rather than enhance it.

    What’s that? You say your server tier is deployed locally to its client so network communication isn’t an issue? Well that’s good for now, but what if you eventually need to scale this out? Are you prepared to redesign and redevelop your interfaces? Designing for the future is a critical factor in achieving a flexible, and truly scalable system architecture.

    The most important factor for both performance and scalability is that the client communicate with its inter-tier server objects in an efficient ‘one-call, get-in, get-out’ manner. This mitigates the number of calls across the network, and reduces to one the number of times a server object needs to be created in order to service a particular transaction. Naturally, a class which provides its entire service via a single method call is a stateless class which can be destroyed by the client after that single method call completes. We’ll discuss this particular aspect of statelessness further in a moment.

    COM+ deactivation is primarily a mechanism for destroying a transactional object after its transaction has been committed. COM+ deactivation (especially in regard to non-poolable objects) does very little, practically nothing, to enhance performance on the server. Additionally, injudicious use of deactivation for an object which is called repetitively by its client can actually degrade performance. Serendipitously, the (mis)use of deactivation has resulted in the design and development of stateless classes which provide ‘one-call’ services to their inter-tier clients. It is precisely the one-call, ‘get-in get-out’ nature of these objects which enables them to be scaled out properly. Deactivation has very little to contribute in this regard.

    Quite a bit of information has already been published regarding the myths and facts surrounding COM+ deactivation for non-transactional, non-poolable (e.g. VB6) components.

    Ted Pattison, whose books I have read and recommend highly, makes mention of this issue, in both of his books;

    Programming Distributed Applications with COM and Microsoft Visual Basic 6.0
    Programming Distributed Applications with COM+ and Microsoft Visual Basic 6.0

    While I’ve not seen this personally, I understand that Don Box and Tim Ewald also address this issue in their respective writings. All of these authors are well known and enjoy favorable reputations in the developer community. It would be extremely worthwhile for any developer seeking more information on this subject to acquire and read one or more publications by any of these authors.

    For more information available on-line, you can take a look at this excellent article by Ted Pattison.

    www.microsoft.com/msj/1299/instincts/instincts1299.htm

    Now while it’s true that successive calls must be locked to the server on which the object was originally created, I don’t wish to over-state or over-dramatize the ramifications of this fact. If the client is itself an object which exists as a component of a transaction whose life-span is measured in seconds (or even shorter), then this is not that big a deal. Ideally, inter-tier calls should involve a single stateless call and the server object should be released by the client upon its return. If the unit of work can’t be accomplished in a single call, then the next best thing is probably to maintain the server component statefully on the server, and lock this client to the specific server for all successive calls.

    In this case, the client will be pinned to a specific server for a matter of milliseconds, or even seconds, which is a relatively long time. However, in this scenario we are dealing with a client which will be calling into the server class for a second or third time. I’d prefer to make the assumption that the load on the server to which this client is pinned will not change all that dramatically over the next second or two, rather than go through the definite overhead of having the client release and re-instantiate the object for every successive call, even though that would allow each successive call to be assigned by the load balancer.

    Note that .NET introduces a new dynamic in this regard with the introduction of poolable objects for VB developers. If the class can be released to the pool for use by other clients, rather than completely destroyed, this might be a factor to recommend object deactivation, that is, release to the pool so that the object is not held idly between calls. Tread carefully though. In order to pool objects successfully you need to ensure that the overhead of pooling is not greater than the overhead of simply destroying and recreating them. But that’s another discussion for another day.

    The one thing which should never be done (in terms of scalability), is to lock a persistent client (e.g. a Windows user application, or a browser session) to a specific server. Since the client will be pinned to the server over a span of minutes, or even hours, conditions on that server could degrade markedly from what they were at the time the lock was established. However, because the user’s client agent is pinned to a specific server, his or her session cannot take advantage of an alternate server which is currently under a lower level of load. Of course, if this is your only server at this point in time, then the point is moot – for now. If you do lock a persistent client to a particular machine, by storing state in the Session object for example, you will need to make substantial modifications to the state management aspects of your application in order to scale out via a replicated deployment in the future.

    Let’s return to an important point which we touched upon above. It is important for both performance and scalability that the client communicate with its remote server in an efficient ‘one-call, get-in, get-out’ manner. This mitigates the number of calls across the network, and reduces to one the number of times a server object needs to be created in order to service a particular transaction.

    Object Oriented aficionados might well be wondering: ‘Hold on a minute. Standard OO design mandates that classes encapsulate and maintain their own data and provide it to their clients through a defined interface.’ Well, OK, that’s true, but there’s a time and place for everything. The object oriented discipline is intended to address software construction issues rather than issues of performance and scalability. As such, OO techniques are fine when developing client side software, since workstation resources are relatively plentiful. They are also suitable for intra-tier class construction and communication. However, they are definitely not suitable for inter-tier class relationships.

    As a developer, I sometimes feel like I’m on an incredible journey. The feeling is not unlike being whisked up by a cyclone, whirled around and deposited in a strange, faraway land. To paraphrase the young lady who preceded us on just this type of journey…

    I don’t think we’re on the Client anymore, Toto!

    Don’t Get Chatty with the Foreigners
    Let’s discuss ‘chatty’ vs. ‘chunky’ relationships. The chatty client communicates with its server in an object oriented manner. It instantiates the object, and then proceeds to have a chatty back and forth dialog with the server by calling various methods, and by setting and/or retrieving various properties. After finishing its chatty conversation, the client finally terminates the object.

    The chunky style of communication is much more abrupt. The client instantiates the object. It then calls a minimal number of methods, ideally only a single method, which returns a whole slew of relevant information in a single large chunk. The client then immediately releases the object into oblivion.

    In a nutshell, while chatty calls are fine in an intra-tier situation, chunky calls are more appropriate for an inter-tier scenario.
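Here is a toy illustration of the difference, using a hypothetical customer service class that counts calls (each of which would be a network round trip in an inter-tier deployment):

```python
class CustomerService:
    """Hypothetical server class; the counter stands in for network
    round trips in an inter-tier deployment."""

    def __init__(self):
        self.calls = 0

    # Chatty style: one round trip per piece of data.
    def get_name(self, customer_id):
        self.calls += 1
        return "Acme Corp"

    def get_balance(self, customer_id):
        self.calls += 1
        return 1250.0

    # Chunky style: everything relevant in one round trip.
    def get_customer(self, customer_id):
        self.calls += 1
        return {"name": "Acme Corp", "balance": 1250.0}

svc = CustomerService()
name, balance = svc.get_name(42), svc.get_balance(42)  # chatty: 2 trips
record = svc.get_customer(42)                          # chunky: 1 trip
```

With a real object per transaction, the chunky client would also release the object immediately after the single call, keeping the relationship stateless.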

    In order to make OO developers feel a bit more comfortable, I’d like to point out a couple of dominant areas of inter-tier communication which are using the new chunky style of communication, and have been for quite some time already.

    SQL & ADO: Way back when, I used to program using COBOL and a non-relational IMAGE database (what a database!) on the HP-3000. Both the client programs as well as the database were local to each other on the HP-3000. The standard processing paradigm in those days was record oriented. We’d establish a path (keyed or sequential) into a dataset and we’d then begin reading, processing each record in turn.

    In contrast, SQL, and subsequently ADO, arrived on the scene designed to support clients which are remote to the database which they access. SQL is by nature set oriented. A given SQL statement need not operate on a single row, rather SQL is specifically designed to be chunky by supporting set oriented operations. ADO as well is quite chunky when you think about it. The disconnected recordset feature is specifically designed to marshal large chunks of data back and forth between server and client.

    HTTP: Communications between the browser and the web server are incredibly chunky. Consider a standard form POST. The entire form contents are sent up to the server along with the query string parameters and cookie data. All of this data is processed during this single transaction and subsequently an entire page, along with any cookie data, is returned back to the browser and the connection is closed. It just doesn’t get any chunkier than that!

    As n-tier developers, we need to take our cues from these two mechanisms which are actually two major mainstays of our own development environments. When it comes to inter-tier communication, chatty is out, chunky is in. We can also take a cue from the fact that, despite the chunkiness of the underlying data stream, both of these facilities provide objects at the receiving end of the data, in order to maintain the data and provide access to it. For example, ADO provides the Recordset and other objects, while IIS provides the Request and Response objects which encapsulate and provide access to the incoming and outgoing HTTP streams.

    Here are two basic design patterns which I use to develop chunky server interfaces.

    Complex Data Classes:
Let’s say we have a server class which provides a single, albeit complex, set of data. This is the type of scenario which, following the OO design patterns, would employ a stateful object with methods and properties to allow a chatty client to retrieve the information piecemeal and process it accordingly. Thus, it is the complex interface presented by the class which provides structured and convenient access to the data. For a chunky conversation we don’t need a complex interface, since we’ll only be making a single method call. What we need is for this single method to return a complex data structure containing all of the relevant data.

There are several ways to accomplish this. Of course, each approach must return a data type which will physically marshal between tiers. Otherwise, each access to a non-marshaling object will actually cross a physical tier boundary. If the data already exists in recordset format, I’ll generally return a disconnected recordset, which marshals nicely between physical tier boundaries. Otherwise, my favorite approach is to use XML, since XML is specifically designed to maintain any complex data structure in a simple string format. To accomplish this, the server class uses a DOM (either its own or via a helper class wrapper) to store the data, instead of using private class variables. The entire DOM can then be serialized via the .xml property and returned to the client as the functional return of the chunky method.
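The article’s examples are VB6/COM with MSXML; here is a minimal modern sketch of the same pattern in Python, using the standard library’s ElementTree as a stand-in for the DOM. The CustomerServer class and its data are hypothetical; the point is that the server stores its state in a document tree and returns the whole thing as one serialized string, in a single chunky call.

```python
import xml.etree.ElementTree as ET

class CustomerServer:
    """Chunky server class: one call returns the whole data set as XML."""

    def get_customer_xml(self, customer_id):
        # Build the data into a DOM instead of private member variables...
        root = ET.Element("customer", id=str(customer_id))
        ET.SubElement(root, "name").text = "Acme Corp"
        ET.SubElement(root, "phone").text = "555-0100"
        order = ET.SubElement(root, "order", number="1001")
        ET.SubElement(order, "total").text = "250.00"
        # ...then serialize the entire DOM to a string in one shot.
        return ET.tostring(root, encoding="unicode")

# The client makes a single chunky inter-tier call, then parses locally.
xml_blob = CustomerServer().get_customer_xml(42)
doc = ET.fromstring(xml_blob)
print(doc.findtext("name"))  # all further access is local (intra-tier)
```

Note that only the string crosses the (conceptual) tier boundary; everything the client does with the parsed tree afterward is local.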

The client is now free to manipulate this data as it wishes. For simpler structures it can simply load a DOM and access the various nodes. For more complex data structures, I usually provide some sort of helper class which both the server and the client can use to interface with the DOM. The server has a chatty conversation with the helper class, setting its properties in order to populate the DOM. This is fine, since it is an intra-tier relationship. The helper class provides a read/write .xml property for serializing the DOM to a string, or for loading the internal DOM from a string. After setting all of the information via the helper class interface, the server class retrieves the serialized XML string and returns it to the client. The client, for its part, instantiates its own helper class and loads the serialized XML into the helper class’s internal DOM. It then has its own chatty conversation with the helper class, retrieving various bits of information from the methods and properties of the helper class’s complex interface. Again, this chatty conversation is fine, since it is strictly intra-tier.
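The helper-class round trip described above can be sketched as follows. This is a Python illustration of the pattern, not the article’s VB6 code: the CustomerHelper class, its properties, and the sample data are all hypothetical. The same class is "installed" on both tiers; only the serialized string produced by its read/write xml property crosses the boundary.

```python
import xml.etree.ElementTree as ET

class CustomerHelper:
    """Shared helper class: a chatty interface wrapped around an internal DOM."""

    def __init__(self):
        self._dom = ET.Element("customer")

    def _set(self, tag, value):
        node = self._dom.find(tag)
        if node is None:
            node = ET.SubElement(self._dom, tag)
        node.text = value

    # -- chatty, intra-tier interface --------------------------------
    @property
    def name(self):
        return self._dom.findtext("name")

    @name.setter
    def name(self, value):
        self._set("name", value)

    @property
    def phone(self):
        return self._dom.findtext("phone")

    @phone.setter
    def phone(self, value):
        self._set("phone", value)

    # -- read/write .xml property for crossing the tier boundary -----
    @property
    def xml(self):
        return ET.tostring(self._dom, encoding="unicode")

    @xml.setter
    def xml(self, value):
        self._dom = ET.fromstring(value)

# Server side: chatty conversation with its helper (intra-tier, so fine)...
server_side = CustomerHelper()
server_side.name = "Acme Corp"
server_side.phone = "555-0100"
wire_data = server_side.xml      # one chunky string marshals across

# Client side: load the string, then have its own chatty conversation.
client_side = CustomerHelper()
client_side.xml = wire_data
print(client_side.name)          # → Acme Corp
```

The design choice mirrors the article’s "poor man’s marshaler": there is no custom marshaling machinery, just an explicit serialize on one end and an explicit load on the other.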

    A few quick points:

This helper class paradigm is an almost perfect match to an ADO disconnected recordset, which is serialized to marshal across physical tier boundaries, yet presents itself via a class hierarchy when it arrives at the other end. Of course, ADO provides its own custom marshaler which takes care of instantiating and loading the ADO classes automatically. Without writing a custom marshaler for our helper class, we have to take the poor man’s approach and explicitly load the class when the serialized data arrives at either end. The end result is the same, though: a serialized stream of information which marshals well across physical tier boundaries, along with the convenience of access through a defined interface at both ends.

In addition to the helper class’s defined interface, I always provide access to the internal DOM via a .DOM property. This allows new information to be added to the DOM quickly, without needing to modify the interface. The drawback is that this new data is not defined by the interface; it requires direct DOM access, as well as explicit agreement between the client and server developers as to how the item will be stored and accessed. But hey, sometimes time-to-market considerations just can’t be denied. Of course, these little idiosyncrasies are always corrected by evolving the interface to account for the new data item during the next scheduled development cycle.

Naturally, the helper classes must be installed on both the client and server tiers. There’s nothing out of the ordinary about this, nor does this indicate any deficiency in the application architecture. As part of the supporting application framework, helper and general utility classes are routinely installed across multiple tiers. Obviously, ADO itself must be installed on all tiers on which it is used. So must the VB runtime DLLs, or the .NET framework, or MTS, or COM+ for that matter.

    Repetitive Access Classes:
Another type of class is the repetitive access class, which provides a method which must be called multiple times in a single transaction. Depending on the complexity of the data which it returns, I’ll either use XML as described above, or a simple return array. The question is really how to engineer a single method call through which the client can provide sufficient data for multiple logical operations, so that the server method need only be called once for each transaction.

    My standard approach is to pass in parameters as arrays, rather than as simple data types. That way, the server has enough information on hand to iteratively process all information in a single method call. A variation on this approach, where the repetitive access is the exception rather than the rule, is to pass in the parameters as variants and have the server interrogate the parameters to see whether they are arrays or simple data types and then process them accordingly. (This provides convenience for clients which might wish to call these methods using simple data types rather than arrays.)

In either case, the server method is designed to operate iteratively on one or more sets of parameters, and to return the entire chunky result set. As long as the client is in possession of sufficient data up-front, the ability to pass in multiple sets of parameters means that the client can make multiple ‘logical’ calls to the server in a single method call.
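A compact Python sketch of both ideas: array parameters, plus the variant-style interrogation that lets a client pass a simple value when it only has one logical call to make. The price_items function and its catalog are hypothetical stand-ins for real server-side work.

```python
def price_items(item_ids):
    """Chunky server method: accepts one id or a list of ids, processes
    the whole batch server-side, and returns all results in one call."""
    # "Variant interrogation": allow a scalar for client convenience.
    if not isinstance(item_ids, (list, tuple)):
        item_ids = [item_ids]
    # Hypothetical per-item lookup, standing in for real processing.
    catalog = {101: 9.99, 102: 14.50, 103: 3.25}
    # Iterate server-side, so the client pays one inter-tier round trip
    # no matter how many logical calls it is making.
    return [catalog.get(i, 0.0) for i in item_ids]

print(price_items(101))          # scalar convenience → [9.99]
print(price_items([101, 103]))   # two logical calls, one chunky call
```

The key property is that the loop lives on the server: adding more logical calls grows the parameter array, not the number of inter-tier round trips.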

    In a Nutshell
    Partitioning software into logical tiers makes sense from both software construction and deployment perspectives. From a software perspective, a tiered design allows a single server tier to support multiple client tiers. For example, a business logic tier can service two types of UI tiers. One UI tier is designed to support browser access to the business objects, while the other UI tier might be a Windows UI client. From a deployment perspective, the properly implemented tiered approach provides options for scaled out deployment along tier boundaries.

The tiered approach can only be successful by minimizing points of dependency between tiers. Communication between inter-tier clients and servers should be chunky, rather than chatty. Information passed between client and server must be of a type which physically marshals between tiers. Always assume a physical separation between tiers. Even if this is not the case at the time of the initial product rollout, it might very well be the case if the software needs to be scaled out at some point in the future.

    While stateless, or more to the point, one-call, non-persistent, server components are ideal for providing chunky, inter-tier access, stateful, chatty objects may be freely used on a strictly intra-tier basis.

To be sure, it never pays to get carried away with excessive object oriented design. In a previous article I presented an actual case history of what can happen when excessively object oriented software is deployed on the server:

    www.devx.com/vb2themax/Article/19870

MTS/COM+ deactivation is generally not helpful in minimizing dependencies between clients and servers. Deactivation must, of course, occur for transactional components, since deactivation is part and parcel of successfully committing a transaction. For other non-transactional, non-poolable objects, deactivation serves practically no purpose. Instead, the client should keep the relationship as short as possible by instantiating the server object as late as possible and releasing it as soon as it is no longer needed.

Nothing can halt a scaled deployment in its tracks faster than machine-specific state maintenance. Storing state in the Session object or in any other machine-specific repository will lock all requests to a particular server for the duration of the session. When necessary, state should either be delegated back to the persistent client (e.g. via cookies for a browser client) or stored server side in an independent location which is accessible from any machine in the server farm.
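The farm-friendly alternative can be sketched as follows. This is a Python illustration under stated assumptions: shared_store is a plain dict standing in for an independent, farm-wide repository (in practice a database or cache server reachable from every web server), and the session token is what would travel back to the client in a cookie.

```python
import uuid

# Stand-in for an independent, farm-wide store; in a real deployment this
# would be a database or cache server, not an in-process dict.
shared_store = {}

def save_state(session_id, state):
    # Any server in the farm can write...
    shared_store[session_id] = state

def load_state(session_id):
    # ...and any other server can read, so requests need not be pinned
    # to the machine that created the session.
    return shared_store.get(session_id, {})

# "Server A" handles the first request and issues a cookie-sized token.
sid = str(uuid.uuid4())
save_state(sid, {"cart": ["widget"]})

# "Server B" handles the next request using only the token from the cookie.
print(load_state(sid)["cart"])
```

Because only the opaque token lives on the client and the state lives in the shared store, the load balancer is free to route each request to any machine in the farm.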

    By following these guidelines, you can produce software systems which are not only efficient, but which will also be capable of scaling out to support a growing user community and increasing levels of demand.

    If you’d like to download this article in Acrobat PDF format, please stop by my Web site at www.FPSNow.com and follow the links to my FTP server to download Scalability.PDF.
