Stateful Session Clustering: Have Your Availability and Scale It Too

Stateful application design is just what Java developers need for clustering HTTP sessions. It is easier to code to, more scalable, and more cost effective, and by combining it with network-attached memory, developers can avoid Java serialization while storing state in a central place.



Session Clustering Scenarios

Using the notion of a load balancer in a handful of example scenarios, the following sections illustrate how you can leverage session clustering when network-attached memory (NAM) is available. Each scenario refers to an example application cluster, which Figure 1 illustrates.


Figure 1. The Application Cluster for the Session Clustering Scenarios

Scenario 1: Round-Robin Load Balancing
In this scenario, an end user is running a single Web browser on a desktop computer. The end user is accessing your Web application from the Internet on the public side of the firewall. The load balancer has been configured to send requests to app server 1, then 2, then 3, and back to 1, in that order. (For simplicity's sake, assume only one end user.)



Tenet 1 dictates that you must have a central repository for sessions. Assume that the user's session is maintained on a separate server from all of the application servers. The first request goes to app server 1, which asks the session server for the session, based on the session cookie. The session server has no data, so server 1 creates the session and sends it back to the central repository.

The end user decides to log into the Web site. The next HTTP POST request goes to app server 2, which pulls the session from the central repository, confirms the user is not logged in, logs the user in, updates the session, writes it back to the repository, and sends back a successful HTTP response.

On the next request, the end user asks to see the status of some previous business transaction. Server 3 gets the HTTP GET request. That session is pulled from the central repository and all is good.
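The three requests above amount to a read-through lookup against a central session store. Here is a minimal sketch of that pattern; the `SessionRepository` and `AppServer` classes are invented for illustration (in a real deployment the repository would live on a separate session server, not in the app server's JVM):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical central session store. In practice this lives on a
// dedicated session server; a plain map stands in for it here.
class SessionRepository {
    private final Map<String, Map<String, Object>> store = new HashMap<>();

    // Returns the session for a cookie, or null if none exists yet.
    Map<String, Object> get(String sessionCookie) {
        return store.get(sessionCookie);
    }

    // Writes a session back to the central repository (Tenet 1).
    void put(String sessionCookie, Map<String, Object> session) {
        store.put(sessionCookie, session);
    }
}

class AppServer {
    private final SessionRepository repo;
    AppServer(SessionRepository repo) { this.repo = repo; }

    // Mirrors request 1: ask the repository first; create on a miss and
    // send the new session straight back to the central store.
    Map<String, Object> getOrCreateSession(String cookie) {
        Map<String, Object> session = repo.get(cookie);
        if (session == null) {
            session = new HashMap<>();
            repo.put(cookie, session);
        }
        return session;
    }
}
```

Because the session lives in the repository rather than in any one server's heap, any of the three app servers can serve the next request in the round-robin rotation.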

Figure 2 depicts the entire conversation flowing from request one (the greeting) through request three (executing the business transaction).


Figure 2. The Conversation Flowing from Request One Through Request Three

Tenet 2 says you cannot marshal the session object into an intermediate form. Why? Scalability is the key. If the central repository is hit for every request, you will bottleneck on that repository when you attempt to grow your business. However, if you can cache the session on each application server, you can get scalability back. But Figure 2 shows a hit to the session repository for every request. You need to augment the scenario.

If you add a fourth request, Tenet 2 allows you to assume that app server 1 will not need to pull a copy of the session from the repository since the repository has kept your session state in sync amongst all three application servers.

You have just been asked to make two leaps of faith:

  1. Java serialization is not needed in order to copy objects around the cluster.
  2. The central repository can keep track of every object in every JVM and knows which JVMs are out of sync with the cluster-master object state.

Assuming the existence of NAM, these two leaps are not unreasonable. If the session is cached on each application server, and the repository can keep the session up to date on all three nodes, NAM delivers easy stateful clustering while providing the availability of stateless architecture. The value to the developer and to the runtime is very high.

While Tenet 2 (no serialization) is important, Tenet 1 brings it all together. The central repository acts as the central view of the critical piece of the JVM's heap, where the end user's state is stored. So an update on any app server turns into an update to the repository, while a read on any app server can optionally turn into a centralized read if the repository detects that local memory copies are invalid.
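One way to picture this read/write split is as a write-through update paired with a version check on read. The sketch below is illustrative only: a version counter stands in for the repository's knowledge of which cached copies are stale, and all class names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a version counter stands in for the repository's change
// tracking; every update bumps the central version.
class VersionedRepo {
    static class Entry {
        Map<String, Object> data = new HashMap<>();
        long version = 0;
    }
    private final Map<String, Entry> store = new HashMap<>();

    // Write-through update: any app server's change goes to the center.
    synchronized void write(String id, Map<String, Object> data) {
        Entry e = store.computeIfAbsent(id, k -> new Entry());
        e.data = new HashMap<>(data);
        e.version++;
    }

    synchronized Entry read(String id) { return store.get(id); }

    synchronized boolean isCurrent(String id, long cachedVersion) {
        Entry e = store.get(id);
        return e != null && e.version == cachedVersion;
    }
}

// An app server caches the session locally and goes back to the
// repository only when its copy is no longer current.
class CachingAppServer {
    private final VersionedRepo repo;
    private Map<String, Object> cached;
    private long cachedVersion = -1;

    CachingAppServer(VersionedRepo repo) { this.repo = repo; }

    Map<String, Object> getSession(String id) {
        if (cached == null || !repo.isCurrent(id, cachedVersion)) {
            VersionedRepo.Entry e = repo.read(id);   // centralized read
            cached = e.data;
            cachedVersion = e.version;
        }
        return cached;   // local read when the copy is current
    }
}
```

Reads that hit a current local copy never touch the network; only invalidated copies trigger a centralized read, which is what restores scalability.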

In this scenario, app server 1 services the first HTTP request and keeps the session in cache. App server 2 logs the user in and keeps the session in cache. The session update (when your end user logged in) can get pushed to app server 1 in case it sees Web requests from this session later. The central repository knows a copy of the session is checked out on app server 1 and that this copy is otherwise stale.

Alternatively, server 1 can be notified that its session copy is stale, and when it sees another request, it can be forced to update local memory from the network (this is usually more scalable than network broadcasting). App server 3 joins the conversation on the third HTTP request. In the modified scenario, the session is resident on all three app servers, but you are not marshaling and, thus, the session is not causing network chattiness as long as your implementation of network-attached memory can issue fine-grained replication of object changes down to a field-level on any one object.
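Fine-grained, field-level replication of the kind described above can be sketched as a diff-and-apply on the session's fields: only the entries that actually changed cross the network. The `FieldDelta` class below is an invented illustration, not an API from any NAM product:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Sketch of field-level replication: instead of serializing the whole
// session, compute which fields changed and ship only those entries.
class FieldDelta {
    // Returns only the entries of 'after' that differ from 'before'.
    static Map<String, Object> diff(Map<String, Object> before,
                                    Map<String, Object> after) {
        Map<String, Object> delta = new HashMap<>();
        for (Map.Entry<String, Object> e : after.entrySet()) {
            if (!Objects.equals(e.getValue(), before.get(e.getKey()))) {
                delta.put(e.getKey(), e.getValue());
            }
        }
        return delta;
    }

    // Applies a received delta to another node's copy of the session.
    static void apply(Map<String, Object> session,
                      Map<String, Object> delta) {
        session.putAll(delta);
    }
}
```

If only one field of a large session changes, only that one entry is replicated, which is why the session does not cause network chattiness in this design.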

In this scenario, you have achieved the equivalent of a session grid without the implementation work. Plus, the session is the exact same session, cluster-wide. Round-robin load balancing is delivered with semantic correctness, meaning there is one object, cluster-wide, instead of object copies or clones. And the load balancing occurs at scale because every session update requires notifying only the central repository.

However, traditional stateful clustering will not scale in this scenario. Had you attempted to use a messaging layer, you would have ended up with all sessions resident in every app server, and you would have wasted bandwidth copying the session on app server 1's and 3's requests even when it did not change. Furthermore, messaging layers do not allow you to drop sessions from local memory in the face of an OutOfMemoryError, whereas keeping all sessions in a central repository allows any one application server's JVM to drop sessions from memory at will.

A simplistic stateless design would scale, but it would complicate your business logic. If sessions were clustered using a database, they should not also be cached in-heap, because the application becomes significantly more complex when database storage is combined with local in-memory caching. For example, checking out of an e-commerce site once and only once would require introducing optimistic caching of session information and a transaction engine to roll back on update collision. You can choose not to cache so that your application remains simple, but without Java-level caching the database becomes a scalability bottleneck.
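The once-and-only-once checkout problem is a classic case of optimistic concurrency control. Here is a minimal in-memory sketch of the version check and collision handling (all names invented; a database-backed version would use a conditional UPDATE on a version column instead):

```java
// Sketch of optimistic concurrency on a session-backed checkout flag.
// An in-memory version counter stands in for a database version column.
class CheckoutSession {
    private long version = 0;
    private boolean checkedOut = false;

    synchronized long readVersion() { return version; }

    // Commit succeeds only if no other node updated the session since
    // 'expectedVersion' was read; otherwise the caller must retry --
    // the "roll back on update collision" the text describes.
    synchronized boolean tryCheckout(long expectedVersion) {
        if (version != expectedVersion || checkedOut) {
            return false;   // collision or already checked out
        }
        checkedOut = true;
        version++;
        return true;
    }
}
```

Two app servers racing to check out the same cart read the same version, but only one commit can succeed; the loser must re-read and discover the cart is already checked out.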

Scenario 2: Large Session, Small Delta
Imagine a financial application designed to help manage a portfolio of trading instruments. The application is designed to load all potential instruments into session and then, before any trading actions can be taken, check the price of a particular instrument for consistency with the system of record and update the price into session. At first blush, this would seem a simple application to write. No developer in financial services builds applications this way, however. Why? Transactionality and scalability are at play here.

Assume the data in session takes 15MB of heap, and the delta to the trading price of any instrument is stored as a floating point number, which you can assume takes 16 bytes of heap. The application logs a user in and generates the session cache of all financial instruments in 45 seconds. After a user is logged in, all subsequent requests can be served in fewer than three seconds. Once session clustering is enabled, however, every page request takes 45 seconds, as session data gets serialized and replicated to a secondary application server.

This scenario violates Tenet 2 (no serialization). If the application server did not serialize and copy the session on every change, then replicating the 16-byte delta to a secondary application server would add no significant overhead to the three-second request, nor would it pose a significant new risk to the consistency of data between the system of record and the session cache. You are forced to introduce transactions if you want this application to work properly, but when executing trades against data that is stored in the database and cached in session, a rollback is far less likely to frustrate the end user when the trade completes in three seconds rather than 45.
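Rough arithmetic on the sizes quoted in this scenario shows why serializing the whole session is ruinous compared with shipping only the delta:

```java
// Back-of-the-envelope comparison of replicating the whole session vs.
// only the changed field, using the sizes quoted in the scenario.
class ReplicationCost {
    static final long SESSION_BYTES = 15L * 1024 * 1024; // 15MB session
    static final long DELTA_BYTES = 16;                  // one price update

    // How many times more data full-session replication moves per request.
    static long overheadFactor() {
        return SESSION_BYTES / DELTA_BYTES;
    }
}
```

Full-session serialization moves nearly a million times more data per request than the 16-byte price delta, which is exactly the gap between the 45-second and three-second response times.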


