Scalable Architectural Guidelines: Designing for Flexibility

Both stateless and stateful components have a legitimate role in any scalable system architecture. This applies equally to both Client Server and n-Tier architectures. This article covers basic and not-so-basic concepts related to scaling up and scaling out.


I am frequently asked the following question: "Should I be using stateless or stateful components for scalability?"

My answer is invariably an emphatic "Yes!"

What I mean by this is that both stateless and stateful components have a legitimate role in any scalable system architecture. This applies equally to both Client Server and n-Tier architectures. (After all, a Client Server system is an n-tier system composed of 2 tiers. In the following discussion we'll use the term n-tier to refer to Client Server as well as classic n-tier systems, which implies 3 or more tiers.)

Before we can evaluate the question properly, we need to define what we mean when we say scalable.

As developers we are quite comfortable with the notion of efficiency. Efficiency means you use a binary search to search through an array of sorted values. It means you use a keyed column in your WHERE clause for indexed, rather than sequential, retrieval. It means you process all of the elements of an array in a single pass, rather than scanning through the array repeatedly. Enough. I'm sure you can supply plenty of your own examples. We get the message.
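
To make the notion of efficiency concrete, here is a minimal sketch of the binary search just mentioned (written in Python purely for illustration). Each comparison halves the remaining search space, so a million sorted values can be searched in about twenty comparisons rather than, on average, half a million:

    # Binary search: O(log n) lookups against a sorted list, versus O(n)
    # for a sequential scan.
    def binary_search(sorted_values, target):
        low, high = 0, len(sorted_values) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_values[mid] == target:
                return mid           # found: return the index
            elif sorted_values[mid] < target:
                low = mid + 1        # discard the lower half
            else:
                high = mid - 1       # discard the upper half
        return -1                    # not found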

Efficiency establishes the baseline for your software’s performance. This baseline will naturally vary according to the supporting hardware. However, the efficiency baseline generally assumes that all of the resources of the supporting hardware, as well as the software itself, are dedicated and committed on behalf of a single user.

During the desktop era, and well on into the Client Server era, the bulk of software was indeed installed on users’ personal workstations. Thus the notion of efficiency was paramount. Efficient algorithms were sufficient to deliver excellent performance, running on a client platform dedicated exclusively to the needs of a single user. Naturally, if an old P90 wouldn’t deliver adequate performance, the workstation could be upgraded to a P133, P200 or better. This is what’s known as scaling up. But scaling up isn’t what most people are referring to when they mention scalability. We’ll touch again on this subject in a few moments.

Moving further along into the Client Server era, as software was humming briskly along on the client workstations, database administrators (the server guys) started noticing certain disquieting symptoms. Around 11:00am (that’s when the West Coast comes on-line) database performance started slowing to a crawl. This problem would manifest itself for about an hour and at about 12:00pm (that’s when the East Coast takes lunch) performance would improve somewhat. As additional offices were brought on-line with the new Client Server system, performance problems became more and more noticeable.

These DBAs were among the first to be impacted by problems of scale. Their database servers, properly proportioned for a certain user base, were inadequate to handle an increasing community of users. Unlike a client workstation, which supports a relatively fixed level of activity, the demand level on a server is not fixed, but rather grows along with its user base. Thus, server platforms face problems of scale which do not affect client workstations. In a nutshell: the level of performance a server will support is relative to the concurrent level of demand. Where demand is low, adequate support is provided. Where demand is too high, performance degrades to unacceptable levels.
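
A quick back-of-the-envelope calculation shows what "performance relative to concurrent demand" means in practice. All of the figures below are hypothetical, chosen only to illustrate the arithmetic:

    # Hypothetical capacity figures for a single database server.
    server_throughput = 200.0       # requests the server completes per second
    requests_per_user = 1.0 / 5.0   # each active user issues a request every 5s

    # The server saturates once aggregate demand meets its capacity.
    saturation_point = server_throughput / requests_per_user
    print(saturation_point)         # 1000.0 concurrent users

Below roughly 1,000 concurrent users this hypothetical server keeps up with demand; beyond that point requests queue up, and response times climb for everyone, East Coast and West Coast alike.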

Problems of scale occur when the capabilities (not necessarily the efficiency) of the software and its supporting hardware are exceeded by the demands of its growing user base. As we saw before, single user software installations are easy to scale (assuming the software is coded efficiently) since only a single user and a single supporting platform are involved. Thus, as long as the hardware can be scaled-up to accommodate the software all is well.

Multi-user systems can be scaled up as well. However, the limits of scaling up are reached relatively quickly. Go ahead, name your dream server. Budget is no problem; you've got the user base and revenue to justify it. Once you install this dream system you have maxed out. If your user base continues to grow, scaling up no longer presents a solution, since you are already scaled up to the limit. It is true that at some point, perhaps next week or next month, more powerful hardware will inevitably become available. This is of little comfort, though, to a user base which demands that additional power today. Reality check: scaling up is a very rational and reasonable approach for systems with a known user base with limited growth potential. A departmental Intranet application is a good example of just such a scenario. Scaling up should be considered too limited, though, for those systems which are exposed to a user base with high, and perhaps virtually unlimited, growth potential (e.g. an Internet e-Commerce application).

A more enduring solution is to scale out. In contrast to scaling up, which means increasing the power of a single server, scaling out means adding additional servers to your supporting infrastructure. It's easy to see how scaling out breaks the barrier presented by scaling up. Your user base keeps growing? No problem, just add more hardware. In contrast to the relatively limited options of scaling up, scaling out offers much greater growth potential. Just to give you an idea, the top-ranked (not the cheapest!) configuration for the most recent TPC-C benchmark is composed of 32 database servers (8 CPUs each) and 4 DTC servers (4 CPUs each), for a total of 36 servers with 272 CPUs, in support of a single application! Here are some interesting links with more details on this incredibly scaled-out system, followed by a small sketch of the scale-out idea. It is gratifying to note that the supporting operating systems and server software are exclusively Microsoft Windows 2000 products.

  • www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=101091903
  • www.tpc.org/results/individual_results/Compaq/compaq.256p.091901.es.pdf
  • You might also wish to visit the Transaction Processing Performance Council at www.tpc.org.

Let's recap a few points about scale and scalability. Problems of scale are associated with server, rather than client, platforms. This is because client load is limited to the level generated by a single user, while server demand can grow to virtually limitless proportions with an expanding user base. Solutions to this problem, at the hardware level, can be implemented in two ways. Scaling up, or increasing the power of a particular hardware component, is a relatively limited solution. Scaling out, by increasing the number of hardware components in the supporting infrastructure, is a much more enduring solution.

So far we've been focused on scalable solutions at the hardware level. Well, what about software? Good question; this is VB-2-The-Max after all! Let's consider the software implications, and contrast the pros and cons, of scaling up vs. scaling out.

Scaling up usually doesn't require any adjustment to the application software. As it ran previously on a Pentium 450MHz, it will continue to run on an Athlon 1.4GHz, just a bit faster. If the application is memory intensive, simply adding memory should improve performance, assuming the supporting OS can make use of the additional RAM. If the application is I/O-bound, upgrading to a faster disk subsystem should result in a performance gain. In all these cases, it is unlikely that the application code will require any adjustment to take advantage of the new scaled-up deployment.

Scaling out, though, is a very different story, for obvious reasons. In order to scale out, we need to take a previously monolithic software installation and spread it across multiple server platforms. The defining question in terms of scalability is: how do we architect our systems to facilitate a scaled-out deployment? When phrased in this manner, the difference between efficiency and scalability is eminently clear. The most efficient binary search or linked-list manipulations are certainly useful algorithms when programming for efficiency. However, they simply do not address the critical design and development issues involved in facilitating a scaled-out deployment.
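
As a preview of those design issues, consider the difference between a component that hoards per-user state in memory and one that receives everything it needs on each call. The stateless version below (hypothetical names, sketched in Python for brevity) can run on any server in a scaled-out pool, because no call depends on which machine handled the previous one:

    # Stateful style: the component remembers the user between calls, so every
    # follow-up request must return to the one server holding this dictionary.
    class StatefulCart:
        def __init__(self):
            self._items = {}          # per-user state lives in this process

        def add_item(self, user_id, item):
            self._items.setdefault(user_id, []).append(item)

    # Stateless style: state travels with the request (or lives in a shared
    # store), so any server in the pool can service any call.
    def add_item(cart_items, item):
        return cart_items + [item]    # no hidden state between calls

Both styles have their legitimate role, as noted at the outset; the point here is only that the stateless form places no constraint on which server handles the next request.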


