Real-Time Tracking and Tuning for Busy Tomcat Servers

Apache Tomcat enjoys a reputation as a small but formidable servlet container capable of handling heavy production loads. Its most common role is probably as the application server for lightweight J2EE applications (servlets, plain old Java objects, and no EJBs). In that role, it often handles significant production loads, for which these lightweight applications are well suited.

Tomcat’s default installation, though, is configured for medium loads. Running an application on Tomcat in a high-load environment requires further tuning. I recently worked on a project that presented exactly this problem. We had to configure Tomcat servers to handle significant traffic (a maximum sustained load of 1,450 requests per second generated by 150 or more unique, concurrent visitors, divided across two to four servers) and determine how well Tomcat’s configuration suited changing load demands such as traffic spikes and continuously increasing loads.

Our key finding was that the configuration and size of Tomcat’s resource pools had a significant effect on the overall scalability and performance of the server. Tomcat with properly configured resource pools can handle heavy Web traffic with sustained performance. But how do you know the appropriate size for these pools, and how do you track their usage in real time? A pool that is too small becomes a bottleneck that directly affects the end-user experience. A pool that is too large consumes vital system resources such as CPU and memory, and can threaten the stability of the platform.

While trying to determine those crucial parameters and ratios, I developed an approach that helped us not only track and understand the most appropriate server capacity-related settings but also open the door to other similar real-time tracking and monitoring approaches. This article reviews the merits of my proposed technique and discusses the exact steps for implementing something similar in your applications.

Tomcat Resource Pools

First, a review of the types and roles of Tomcat component pools is in order. A resource pool is a pool of reusable objects that are vital for application processing yet expensive to instantiate on demand. Tomcat’s connector thread pool and database connection pool are two such pools. They have a direct impact on the overall throughput of the server as well as on the individual applications.

Connector Thread Pool

The connector pool is a pool of threads that accept connections on the standard Tomcat connector ports (e.g., 8080 for HTTP, 8009 for AJP) and hand them off to the processing components. This pool plays an essential role in Tomcat’s processing throughput capacity: every single request directed at Tomcat goes through the connector pool. If the pool is too small and too many requests get backlogged, requests get denied.

If this pool has too few active request-processing objects and the traffic increases dramatically, a delay in request processing results due to pool component instantiation. If the pool is too large (i.e., the number of ready threads is too high), CPU cycles and memory may become overused.

Therefore, knowing how to set the proper size and other parameters of this pool is crucial for the Web application to function properly under heavy loads. By default, the connector pool is preconfigured with the following settings:

  • Minimum number of spare threads (threads that are always in the pool): 4
  • Maximum number of threads in the pool: 200
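
For reference, these settings map to attributes on the Connector element in server.xml. A sketch using the Tomcat 5.x-era attribute names, with the default values cited above:

```xml
<!-- server.xml: connector thread pool sizing -->
<Connector port="8080"
           minSpareThreads="4"
           maxThreads="200" />
```
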
Database Connection Pool

The database connection pool is essential to all J2EE applications that use pooled JDBC connections, which in my experience is the majority of J2EE applications. The database connection pool, as provided by Tomcat’s implementation of Jakarta’s DBCP, follows a configuration pattern similar to the connector pool’s.

By default, the database pool always holds three idle connections and allows a maximum of 15 active connections. If the connections are exhausted, the application cannot get a connection to the database, causing runtime errors that are potentially fatal for the application.
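
These limits are set on the JDBC resource definition in the application’s context.xml. A sketch, assuming a DataSource named jdbc/testDB (the name used later in this article); the driverClassName and url values are placeholders, while maxActive and maxIdle correspond to the limits just described:

```xml
<!-- context.xml: JDBC connection pool sizing via Jakarta DBCP -->
<Resource name="jdbc/testDB"
          auth="Container"
          type="javax.sql.DataSource"
          driverClassName="org.hsqldb.jdbcDriver"
          url="jdbc:hsqldb:mem:testDB"
          maxActive="15"
          maxIdle="3" />
```
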

Resource Pools Under Heavy Loads

Both of these pools, under heavy loads, can quickly become major performance bottlenecks: a hard limiting factor in application scalability. At the same time, with the proper knowledge of real-time resource utilization, you can achieve amazing results in application responsiveness and scalability.

The challenge is performing appropriate data collection and monitoring of pooled resources under heavy loads in real time and with low overhead. The solution is a low-overhead, near-real-time monitoring and data collection component, which gives you the best picture of Tomcat’s pooled resources utilization. With that information, you can configure Tomcat’s pools to handle any projected demands with relative ease.

Before diving into the proposed solution, get to know the Tomcat components that will play key roles in the custom performance-monitoring component.

The Value of Tomcat Valves

The Valve is a core architectural component of the Tomcat server. Its main purpose is to provide a standard, flexible, and extensible way to tie server- or user-defined actions to requests coming into a Tomcat server, on a per-request basis. In that sense, Valves are similar to filters in Java Web applications: you can chain them together to perform some action upon each request. This per-request nature is the key ingredient of the custom performance-monitoring mechanism.

In a Web environment, it is far more meaningful to track server parameters on a per-request basis than on some predetermined time interval. Due to the erratic, spike-prone nature of the Web, the server may be idle one moment and flooded with requests the next. For that reason, a Valve provides far better and more pertinent information than regular interval tracking about how Tomcat’s resources are utilized on a request-by-request basis.
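
The chaining behavior can be illustrated with a self-contained sketch; this is plain Java mimicking the pipeline idea, not the actual org.apache.catalina.Valve API:

```java
import java.util.ArrayList;
import java.util.List;

// A self-contained sketch of Valve-style chaining: each valve does its
// own work on the request, then hands off to the next valve in the
// pipeline, so monitoring happens once per request.
public class ValveChainSketch {

    interface Valve {
        void invoke(List<String> requestLog, Valve next);
    }

    // A "monitoring" valve that records pool state before passing on.
    static class MonitoringValve implements Valve {
        public void invoke(List<String> requestLog, Valve next) {
            requestLog.add("monitor: sampled pool state");
            if (next != null) next.invoke(requestLog, null);
        }
    }

    // A terminal valve standing in for the request-processing pipeline.
    static class ProcessingValve implements Valve {
        public void invoke(List<String> requestLog, Valve next) {
            requestLog.add("processed request");
        }
    }

    // Runs one simulated request through the two-valve pipeline.
    public static List<String> handleRequest() {
        List<String> log = new ArrayList<>();
        new MonitoringValve().invoke(log, new ProcessingValve());
        return log;
    }

    public static void main(String[] args) {
        System.out.println(handleRequest());
    }
}
```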

Using Tomcat MBeans for Resource Monitoring

Tomcat server ships with a rich set of MBean objects (see JMX standard) that provide unprecedented access to the server’s control and management functions, as well as to server-managed resources and deployed applications.

While JMX-based server control and management functions are powerful and comprehensive, using JMX for managing and monitoring production servers can be controversial. Typical JMX-based management requires client software and remote-connectivity components. These components, such as JConsole and RMI-based JMX connector servers, can add overhead to already busy production machines. In addition, monitoring may require relaxing stringent security rules. For these reasons, the solution this article proposes does not require complete JMX support. (Working with Tomcat’s JMX features is beyond the scope of this article; see the previously published DevX article on best practices for monitoring Tomcat using MBeans.)

Instead of accessing MBeans through an external console, the monitoring Valve accesses the local MBeans directly on the server, without incurring any overhead or interfering with the security rules of the system. This solution is concerned only with the MBeanServer object and basic attribute retrieval from the resource pool MBeans. For the JDBC pool, it uses the MBean identified in Table 1.

DataSource MBean

  • name = "jdbc/testDB" (must match the name of the JDBC DataSource in the context.xml file)
  • class = "javax.sql.DataSource" (constant value)
  • host = "localhost" (name of the host server as defined in the server.xml file)
  • path = "/Test" (path of the Web application to which the JDBC source is attached)
  • type = "DataSource" (constant value)

Table 1. MBean Properties for JDBC Pool in Resource Monitoring Solution

For the Connector Threads Pool, it uses the MBean identified in Table 2.

Connector MBean

  • name = "jk-8009" (the connector that you want to track; two common connectors are jk-8009 for the Tomcat AJP connector and http-8080 for the HTTP connector. Use jk-8009 if Tomcat is attached to the Web server via mod_jk; otherwise, use http-8080.)
  • type = "ThreadPool" (constant value)

Table 2. MBean Properties for Connector Threads Pool in Resource Monitoring Solution

For the JDBC DataSource MBean, the solution reads out the following MBean attributes:

  • numActive: number of currently active JDBC connections in the pool
  • maxActive: maximum number of available connections

For the connector thread pool MBean, the solution reads out the following:

  • currentThreadsBusy: number of threads currently busy processing incoming requests
  • currentThreadCount: total number of loaded threads ready to process requests (These threads are available to get immediately busy with spikes of traffic.)
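
These attributes are read through the standard javax.management API. The same retrieval pattern works against the JVM’s built-in platform MBeans, which makes for a runnable illustration that needs no Tomcat server; the class name here is illustrative:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Demonstrates the attribute-retrieval pattern the monitoring Valve
// uses, but against the JVM's built-in platform MBeans. Tomcat's pool
// MBeans are read through the same MBeanServer API.
public class MBeanAttributeSketch {

    // Reads a single attribute from the MBean registered under the
    // given ObjectName and returns it as a String.
    public static String readAttribute(String objectName, String attribute) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            Object value = server.getAttribute(new ObjectName(objectName), attribute);
            return String.valueOf(value);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The Threading MBean is always present in a running JVM.
        System.out.println("live threads: "
            + readAttribute("java.lang:type=Threading", "ThreadCount"));
    }
}
```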
How It Works

    As each request comes into the Tomcat connector, the custom performance-tracking Valve gets invoked. This Valve gathers information from the MBeans about the utilization of the pooled resources and then prints that information to a log file, along with the timestamp and other potentially useful information about the nature of the request. This log file is available for monitoring in real time, as well as for later analysis of the server’s performance and capacity.

    For my solution, I chose to modify the AccessLogValve supplied with the Tomcat source code. AccessLogValve produces a log file formatted according to Web logging standards, so its output looks identical to the standard Apache logs. Since these log statements contain very useful information by themselves, I preferred to keep them and just append extra resource pool state information to the log output. Of course, you can choose whatever information you want logged.

    The following are the specific implementation steps for the solution:

    1. Implement a custom resource-tracking Valve. Tomcat supplies a predefined ValveBase class that you can extend for this purpose. It is located in the catalina.jar file. Add this jar file to your classpath.
    2. Declare your custom Valve as a subclass of ValveBase:

         public class ResourceTrackingAccessLogValve
             extends ValveBase
             implements Lifecycle {

      (Note: I also implemented the Lifecycle interface. That is not essential; it just marks the object as supporting extra manageability features.)

      When extending the base Valve, override the invoke method:

         public void invoke(Request request, Response response)
             throws IOException, ServletException {

      This method contains the code that queries the MBeans for resource utilization and logs that information to the log files.

    3. Inside the invoke method, access the resource pool MBeans to read the performance attributes as follows (see the accompanying source code and its comments for details):

         // Lazily instantiate the MBeanServer using Tomcat's core
         // MBeanUtils class
         if ( this.mbeanServer == null ) {
             this.mbeanServer = MBeanUtils.createServer();
         }//end if

         // Get the instance of the db pool MBean object by fetching it
         // through its ObjectName (connectionPoolMBeanID holds the
         // key properties from Table 1)
         if ( this.dbPoolObjectName == null ) {
             this.connectionPoolMBeanID.put( "name", "\"" + this.getJdbcName() + "\"" );
             this.connectionPoolMBeanID.put( "class", "javax.sql.DataSource" );
             this.connectionPoolMBeanID.put( "host", this.getHost() );
             this.connectionPoolMBeanID.put( "path", this.getApplicationPath() );
             this.connectionPoolMBeanID.put( "type", "DataSource" );
             this.dbPoolObjectName = ObjectName.getInstance( this.domain,
                 this.connectionPoolMBeanID );
         }

         // Read the numActive and maxActive attributes from the pool MBean
         MBeanInfo mBeanInfo = mbeanServer.getMBeanInfo( this.dbPoolObjectName );
         MBeanAttributeInfo attributeInfo[] = mBeanInfo.getAttributes();
         for ( int i = 0; i < attributeInfo.length; i++ ) {
             if ( attributeInfo[ i ].getName().equals( this.DB_POOL_NUM_ACTIVE_ATTRIBUTE ) ) {
                 this.dbConnectionCount = mbeanServer.getAttribute( dbPoolObjectName,
                     attributeInfo[ i ].getName() ).toString();
             }
             if ( attributeInfo[ i ].getName().equals( this.DB_POOL_MAX_ACTIVE_ATTRIBUTE ) ) {
                 this.dbMaxSize = mbeanServer.getAttribute( dbPoolObjectName,
                     attributeInfo[ i ].getName() ).toString();
             }
         }//end for
    4. Log out the performance information each time the Valve is invoked, as follows:

         result.append( " | busy threads: " + this.threadsBusyCount );
         result.append( " active threads: " + this.threadsCurrentCount );
         result.append( " active db connections: " + this.dbConnectionCount );
         result.append( " max db connections : " + this.dbMaxSize );

      These statements produce output in the log file that looks similar to this:

         [29/Aug/2006:12:58:05:265 -0400] 12 200 896 /Test/ GET - - - - | busy threads: 2 active connector threads: 51 active db connections: 8 max db connections : 30
    5. Register your Valve. Insert the following statement into Tomcat’s server.xml file, right before the closing tag:
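
      A sketch of such a Valve element, assuming the class is packaged under a hypothetical com.example.monitoring package; the directory, prefix, and suffix attributes follow the stock AccessLogValve configuration, and the jdbcName and applicationPath attributes are assumed to map onto the Valve's getJdbcName() and getApplicationPath() properties:

```xml
<!-- Registers the custom monitoring Valve; the className package,
     jdbcName, and applicationPath attributes are assumptions -->
<Valve className="com.example.monitoring.ResourceTrackingAccessLogValve"
       directory="logs"
       prefix="localhost_resource_log"
       suffix=".txt"
       jdbcName="jdbc/testDB"
       applicationPath="/Test" />
```
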

      This directive instructs Tomcat to enable your custom Valve, and it configures the Valve to produce standard Web server log output into a localhost_resource_log.txt file.

    6. Install the Valve. I chose to deploy the custom Valve as a jar file and place it inside the Tomcat’s server/lib directory. Putting this jar into the server’s classpath provides it access to all of the Tomcat internal classes that the custom monitoring process utilizes.

    7. Parse and interpret the logs. As you run load and stress tests against your application, ResourceTrackingAccessLogValve will fill the localhost_resource_log.txt file with invaluable information related to the resource consumption and pooled resources utilization at run time.

    This information will be of tremendous value for your production configuration efforts. Observe the maximum number of threads, and take note of the ratio between busy connector threads and active database connections. This ratio will tell you a lot about the health of your application, not only in terms of what your application consumes under various loads, but also how truly scalable your application is. Scalable applications tend to stay conservative on database connection usage as the number of Web connections grows.
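
    To put numbers on that ratio, the appended statistics can be pulled back out of the log lines. A throwaway parser sketch, assuming the log format shown in step 4 (the class and method names are illustrative):

```java
// A throwaway parser for the appended pool statistics, assuming the
// log format shown in step 4. It extracts the busy connector thread
// count and the active db connection count so the thread-to-connection
// ratio can be tracked over a test run.
public class PoolLogParser {

    // Extracts the integer that follows the given label in a log line.
    static int valueAfter(String line, String label) {
        int start = line.indexOf(label) + label.length();
        // Skip any spaces, then read the consecutive digits.
        while (start < line.length() && line.charAt(start) == ' ') start++;
        int end = start;
        while (end < line.length() && Character.isDigit(line.charAt(end))) end++;
        return Integer.parseInt(line.substring(start, end));
    }

    public static double threadsPerDbConnection(String line) {
        int busy = valueAfter(line, "busy threads:");
        int dbActive = valueAfter(line, "active db connections:");
        return (double) busy / dbActive;
    }

    public static void main(String[] args) {
        String sample = "[29/Aug/2006:12:58:05:265 -0400] 12 200 896 /Test/ GET "
            + "- - - - | busy threads: 2 active connector threads: 51 "
            + "active db connections: 8 max db connections : 30";
        System.out.println(threadsPerDbConnection(sample)); // prints 0.25
    }
}
```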

    Further Functions

    The technique described in this article provides a practical understanding of Tomcat’s resource utilization under heavy loads. In my experience, understanding key configuration parameters such as resource pools and understanding the real-time needs of production applications are essential to running scalable and responsive applications. However, you can use my proposed approach for more than just tracking resource utilization.

    If you examine the Valve included with the accompanying source code and spend some time learning the access method for MBeans, you will probably find it useful for some other needs such as security tracking and pure performance monitoring.

    I also strongly encourage you to use tools such as JConsole to explore the richness of Tomcat’s MBeans API and learn more about the internals of the server as well as the runtime behavior of the application.

