Stateful Session Clustering: Have Your Availability and Scale It Too

Stateful application design is just what Java developers need for clustering HTTP sessions. It is easier to code to, more scalable, and more cost-effective, and by combining it with network-attached memory, developers can avoid Java serialization while storing state in a central place.


Scenario 3: Cascading Failure
As mentioned previously, the most scalable session clustering architecture to date is the buddy-system algorithm. Using Figure 3 below, suppose three users are actively using the application. User 1 (in blue) is sticky to server 1, and his session is backed up to server 2. User 2 (in green) is sticky to server 2, and his session is backed up to server 3. User 3 (in red) is sticky to server 3 and backed up to server 1. Each application server contains two sessions and participates in responding to two of the three users at all times. Any session clustering algorithm based on cluster-wide replication would instead put all data on every server, and every server would participate in every HTTP response, bottlenecking on the network rather than on each application server's CPU.


Figure 3. Three Users Actively Using an Application

The buddy system seems like a good architecture for solving scalability problems, and buddy systems are indeed very good at high scale. They are of much less value, however, when clustering only a few servers. On a two-node cluster, every request hits both nodes. In a three-node cluster, each node handles two-thirds of the total work, not the one-third you might assume. The sketch below illustrates the placement scheme.
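To make the ring-style placement concrete, here is a minimal sketch in Java. The class and method names are hypothetical, and the hash-based placement stands in for the sticky routing a real load balancer would perform; treat it purely as an illustration of which server holds which copy.

import java.util.List;

public class BuddyPlacement {
    private final List<String> servers; // e.g. "server1", "server2", "server3"

    public BuddyPlacement(List<String> servers) {
        this.servers = servers;
    }

    // The server the user is sticky to; it holds the primary copy.
    public String primaryFor(String sessionId) {
        return servers.get(Math.floorMod(sessionId.hashCode(), servers.size()));
    }

    // The buddy: the next server in the ring holds the backup copy.
    public String backupFor(String sessionId) {
        int primary = Math.floorMod(sessionId.hashCode(), servers.size());
        return servers.get((primary + 1) % servers.size());
    }
}

Notice that with only two servers, primaryFor and backupFor between them always name both nodes, which is exactly why every request in a two-node cluster hits both machines.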



According to most folks in the community, however, cascading failure ends up causing instability in the buddy system. If server 1 fails, server 2 takes over User 1's session and is then primary for both User 1 and User 2. Server 3 becomes secondary for User 1, adding to its workload of being primary for User 3 and secondary for User 2. And as soon as User 3 executes an HTTP request, server 3 discovers that its secondary has failed and elects server 2 as its new secondary. Servers 2 and 3 now each carry one-third more workload, as depicted in Figure 4 and quantified in the short sketch that follows it. (More generally, each surviving server in the buddy system takes on one-nth more workload, where n is the number of application servers before the failure.)

Click to enlarge

Figure 4. Cascading Failure When Server 1 Fails
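A back-of-the-envelope calculation shows where the one-nth figure comes from, under the simplifying (and purely illustrative) assumption that each session costs one unit of work on its primary and one on its secondary:

public class CascadeLoad {
    public static void main(String[] args) {
        int n = 3;                    // servers before the failure
        double unitsPerServer = 2.0;  // one primary duty + one secondary duty
        // The failed node's duties are redistributed across the survivors,
        // so each remaining server picks up roughly 1/n more work:
        double afterFailure = unitsPerServer * (1.0 + 1.0 / n);
        System.out.printf("Per-server load after failure: %.2f (was %.2f)%n",
                afterFailure, unitsPerServer); // 2.67 vs. 2.00, a one-third jump
    }
}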

Now server 2 is unevenly loaded as primary for Users 1 and 2 and secondary for User 3, while server 3 is primary for User 3 and secondary for User 2. At real-world session loads, this imbalance could take server 2 down. Then server 3 becomes primary for all users, with no remaining node to elect as a secondary (see Figure 5). As a result, the cluster crashes. You can neither predict when this will occur nor stop the cascade, except by over-provisioning the cluster to four times its needed size.


Figure 5. Server 3 Becomes Primary for All Users

Tenet 1 provides the scalability of the buddy system without the cascading failure (a failure mode that cluster-wide replication avoids, but only by giving up scalability). A central repository for sessions acts as the buddy to every other node. When any node fails, any other node can take over its session workload and needn't elect a new buddy for those sessions: the buddy never changes. To avoid a single point of failure, a backup session repository is, of course, required. The sketch below shows the shape such a repository might take.
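The interface below is a hypothetical sketch of the central-repository idea, not Terracotta's actual API; the method names are assumptions made for illustration.

import java.util.Map;

// Every application server talks to this one logical buddy. A mirrored
// standby repository (not shown) removes the single point of failure.
public interface SessionRepository {

    // Store or update the authoritative copy of a session.
    void put(String sessionId, Object sessionState);

    // After a node fails, any surviving node fetches the session and
    // carries on; no new buddy is elected because the buddy never changes.
    Object get(String sessionId);

    // Tenet 2 refinement (see below): ship only the attributes that changed.
    void sendDelta(String sessionId, Map<String, Object> changedAttributes);
}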

You avoid bottlenecking on the session repository itself by leveraging Tenet 2: do not serialize sessions to the repository on each request. Instead, send only the deltas, keeping an identical clone of the session objects in both the application server and the central repository, as the sketch below illustrates.
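Here is one way the delta tracking could look, reusing the hypothetical SessionRepository sketched above. Network-attached memory actually tracks changes at the field level through instrumentation; this attribute-level wrapper is only a rough approximation of the idea.

import java.util.HashMap;
import java.util.Map;

public class DeltaTrackingSession {
    private final String sessionId;
    private final Map<String, Object> attributes = new HashMap<>();
    private final Map<String, Object> dirty = new HashMap<>(); // changed this request

    public DeltaTrackingSession(String sessionId) {
        this.sessionId = sessionId;
    }

    public void setAttribute(String name, Object value) {
        attributes.put(name, value);
        dirty.put(name, value); // record just the delta, not the whole object graph
    }

    public Object getAttribute(String name) {
        return attributes.get(name);
    }

    // At the end of the request, push only what changed. Nothing is
    // Java-serialized wholesale, and the repository's clone stays in sync.
    public void flush(SessionRepository repo) {
        if (!dirty.isEmpty()) {
            repo.sendDelta(sessionId, dirty);
            dirty.clear();
        }
    }
}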

Obey the Tenets, Reap Scalability and Availability

Tenet 1 combined with Tenet 2 provides scalability and availability that hold up in almost all cases. Tenet 1 is easy to understand, given that most developers and architects have experience with databases, LDAP servers, and other types of central storage. Tenet 2, however, dictates no marshaling or serialization, so you must define a repository and an associated storage protocol that simultaneously avoid marshaling and work in a centralized manner. That's where network-attached memory (NAM) comes in: databases, LDAP, and SAN/NAS would all require marshaling of some sort in order to store data.



Ari Zilka is founder and CEO of Terracotta, a developer of solutions for Java scalability.