
Optimize ASP and IIS by Decoupling Long-running Requests


There's little doubt that the Internet and the Web have transformed services and content delivery more than any other recent technology innovation. In its initial form, the Web provided a reasonably standard mechanism for delivering formatted content. As it matured, the Web was used for increasingly complex forms of services delivery, and many developers applied traditional synchronous approaches to building the Web systems that provide those services. Synchronous delivery of these services causes problems when you attempt to scale such Web-based systems.

In this article, I will explain how synchronous access to slow and/or long-running requests for resources leads to diminished Web server throughput. I use Microsoft's IIS Web server running the ASP runtime environment to address a specific problem with ASP processing; however, the general background explanation and solution you'll find here apply to all Web servers. The explanation also includes details on how to detect the problem using Performance Monitor (PerfMon). I will then describe and briefly demonstrate a solution to the problem using Microsoft's Message Queuing Server (MSMQ).

It would be grossly misleading to claim that the solution described here will solve a majority of your Web server performance problems. Specifically, this article focuses on the unique problems caused when Web requests block for long periods of time. This problem transcends the gains you might obtain by adding memory and increasing cache sizes, using faster processors, and (in general) spending more on better hardware. To a limited extent, scaling up (increasing the number of processors) and scaling out (increasing the number of Web servers) can help, but for a truly scalable solution that can scale orders of magnitude higher, you need to consider a different architectural paradigm.

How IIS Allocates Threads
Microsoft's Internet Information Server is optimized for relatively short-running ASP pages and/or static content. While that works well in most situations, when you combine long-running requests for dynamic content with high traffic loads, the Web server's throughput can diminish rapidly. The architecture employed by IIS and ASP creates a predefined number of threads devoted to handling incoming requests. The primary issue that arises when providing synchronous access to long-running queries is that ASP's thread pool becomes a bottleneck: the volume of incoming requests outpaces the ability of ASP to process them.

When a Web server receives requests that it cannot process immediately, the excess requests are queued until resources become available to process them. For IIS, you can track the queued requests using the PerfMon counter named Active Server Pages/Requests Queued. In a normally functioning Web server environment, this performance counter should stay at or very close to zero. However, as the Web server loses ground in keeping up with the volume of incoming requests, the counter value begins to grow.

To manage incoming requests, IIS allocates a pool of threads named the Asynchronous Thread Queue (ATQ). These threads satisfy most requests, including static HTML files, graphic images, and ISAPI filters/extensions. Like all other requests, ASP page requests are first triaged by the ATQ but are then turned over and serviced using a separate thread pool. The fact that IIS uses two different thread pools for ASP and non-ASP requests provides a means of troubleshooting ill-functioning Web servers.

The ASP Thread Pool
IIS manages the ASP thread pool separately from the ATQ. The IIS metabase property ASPProcessorThreadMax specifies the number of threads per processor allocated for use by ASP. The default value is 25 threads per processor. The default setting allows 25 ASP scripts to execute simultaneously on a single-processor machine, 50 scripts on a dual-processor machine, etc. You can modify this value to allow even more scripts to execute, although Microsoft recommends a maximum value of 100 threads per processor. In my experience, any setting greater than 30-50 threads can begin to cause problems (see Table 1).

Table 1: The table shows the average number of responses per minute for IIS as the number of processors and the length of time required to process each response vary, assuming the default 25 threads per processor. The figures were calculated using the formula: (60 sec/min ÷ x sec/request per thread) × y threads/processor × z processors.

   Seconds per Request   1 Processor   2 Processors   3 Processors   4 Processors
   0.1                         15000          30000          45000          60000
   0.15                        10000          20000          30000          40000
   0.2                          7500          15000          22500          30000
   0.25                         6000          12000          18000          24000
   0.5                          3000           6000           9000          12000
   0.75                         2000           4000           6000           8000
   1                            1500           3000           4500           6000
   5                             300            600            900           1200
   10                            150            300            450            600
   15                            100            200            300            400
   30                             50            100            150            200
   45                             33             67            100            133
   60                             25             50             75            100
   90                             17             33             50             67
   120                            13             25             38             50

From Table 1, you can easily see that with the default setting, IIS has tremendous request capacity. Many images and concise ASP scripts complete in a quarter of a second or less. However, the table also illustrates that long-running requests can severely limit IIS's throughput. It follows that one primary goal for a scalable Web site is to reduce the amount of time spent processing each request.
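To make the arithmetic concrete, here's a minimal VBScript sketch of the formula behind Table 1. The function is purely illustrative; it is not part of IIS or ASP.

   '---- requests per minute = (60 / seconds per request) * threads per processor * processors
   Function RequestsPerMinute(secondsPerRequest, threadsPerProcessor, processors)
       RequestsPerMinute = (60 / secondsPerRequest) * threadsPerProcessor * processors
   End Function

   '---- example: 0.25-second requests with the default 25 threads on 2 processors
   '---- yields (60 / 0.25) * 25 * 2 = 12,000 requests per minute, matching Table 1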

Diagnosing the Problem
Because it uses a different thread pool for ASP and non-ASP requests, IIS can scale to huge numbers of requests for static content with little impact on its ability to process requests for ASP pages. However, that also means that where a majority of the content consists of ASP pages, you can exhaust the ASP request capacity even while you still have excess capacity to service other request types. In other words, requests for ASP pages can queue up (due to exhaustion of the ASP thread pool), causing longer response times for users, while static (non-ASP) content (provided via the ATQ) continues to be serviced as normal. Other common symptoms of this condition include a non-responsive or slowly responding Web server with low CPU utilization. The key is that the limitation isn’t caused by CPU, network, or memory capacity, but rather by the limitation in the number of threads available to handle the Web requests.

The Windows Performance Monitor can help you visualize the problem. To determine whether you are running out of ASP threads, open PerfMon and add two performance counters: Active Server Pages/Requests Executing and Active Server Pages/Requests Queued. In a healthy environment, the value of Requests Executing will stay well below the maximum allowed on the machine. The maximum number of Requests Executing is equal to the number of processors times the value of the IIS metabase property ASPProcessorThreadMax. Therefore, if you have a dual-processor Web server with the default value of 25 for ASPProcessorThreadMax, the maximum number of simultaneous requests will be 25 x 2, or 50. You should strive to keep the number of simultaneously executing requests well below 50 to maintain a healthy ASP server environment.

Likewise, Requests Queued should never rise above zero. When this value rises above zero, the pool of ASP threads allocated to processing ASP requests has been saturated; new requests that are received when the server is in this state will be placed in a first-in-first-out (FIFO) queue to be processed as ASP threads become available.

In Figure 1, notice that the Requests Executing counter has essentially flat-lined at a value of approximately 50. Because requests continue to arrive while the server is in this state, the Requests Queued value rises at a rate equal to the difference between the arrival rate of new requests and the processing rate of the server. In such a situation, unless something occurs to dramatically lower the response times of the ASP requests, the server has little chance of recovering.

Figure 1. Performance Monitor Counters: The figure shows the Requests Executing counter essentially “flat-lined” at 50, causing IIS to queue increasing numbers of new requests (Requests Queued).

The Solution
Most of the fixes for this problem cannot be implemented quickly, especially in a "live" context where you find your server suffering from these symptoms. The quickest, albeit most dangerous, solution is to simply modify the value of the ASPProcessorThreadMax metabase property. Using the adsutil.vbs script available in the inetpub\AdminScripts folder, you can modify that value easily; note that you'll need to restart the IIS service for the new setting to take effect. If the server capacity is being only slightly outpaced by the rate of incoming transactions, raising the value of ASPProcessorThreadMax may be enough to offset the IIS throughput shortfall. This is a reasonable short-term solution, but you should also consider solutions that can scale several orders of magnitude higher.
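For example, running commands like the following from the AdminScripts folder first displays and then changes the setting. This is a sketch only; the exact metabase path can differ slightly between IIS versions, so verify it against your server before making changes.

   cscript adsutil.vbs GET W3SVC/AspProcessorThreadMax
   cscript adsutil.vbs SET W3SVC/AspProcessorThreadMax 50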

Two other solutions involve throwing hardware at the problem. By adding Web servers or processors, you can achieve near-linear growth in capacity. The benefits of this approach are that it will likely require no fundamental change to the application architecture and that it can be implemented relatively rapidly; the only real cost is money. And because growth of this nature is generally a good thing for business, the expense can probably be cost-justified easily.

The solutions discussed so far may get you into the All-Star game batting lineup at the end of the season, but to get into the Hall of Fame requires an architectural paradigm shift. The key is to decouple the request from the response in a way that does not tie up valuable server resources while the long running resource is being accessed. Just as early operating system designers found that multitasking let processors execute other processes while one process was I/O-bound, the idea here is to use the ASP thread only long enough to queue a request for the long running resource and then disconnect; you can come back later to get the response from the resource.

Decoupling the request from the response requires three phases of execution. First, the Web client posts data to an ASP script that queues a request for the resource. Second, the client periodically polls the server for the response. Lastly, when the client finds that the response has arrived, it opens an ASP page that displays the response from the resource.

This is quite a shift from conventional thinking for Web page authors. First, it requires an identifier for individual requests. This identifier must uniquely identify every request, even requests executing on other servers, and it must be generated before sending the request to the resource; it serves as the voucher the client presents when polling for a specific response from the resource. A GUID (Globally Unique Identifier) is a good choice for this identifier, as you can safely generate GUIDs in staggering numbers across any number of servers. Microsoft guarantees that you will never generate a duplicate GUID, provided you meet several easily met conditions.
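If you prefer not to build a COM component just to generate the identifier, a common classic-ASP trick is to pull a GUID from the Scriptlet.TypeLib object. The following is a minimal alternative sketch, not the GUID_Generator component used in this article's sample code:

   '---- alternative: generate a GUID without a custom COM component
   Dim objTypeLib, strMsgID
   Set objTypeLib = Server.CreateObject("Scriptlet.TypeLib")
   '---- the Guid property returns the GUID followed by trailing null characters
   strMsgID = Left(objTypeLib.Guid, 38)
   Set objTypeLib = Nothing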

The system also requires a way to queue requests for the resource. The queuing mechanism must hold each request even after the ASP script that submitted it completes execution. A good (and free) choice is Microsoft's Message Queuing Server (MSMQ), which ships with all versions of Windows 2000 and higher (it was also available for Windows NT 4.0 via the Option Pack). MSMQ is both easy to configure and very simple to use.

The last requirement for the system is a means of holding the response from the resource until the Web client comes looking for it again. You could also use MSMQ for this purpose, but SQL Server (or any other database) is a much better choice. MSMQ is optimized for providing prioritized FIFO messaging, but is not optimized for providing random access to messages in a queue. The key design goal for storing responses is to optimize the retrieval of those responses from the system. An indexed database table provides this easily.

With all the requirements in place, it's time to walk through a sample data flow under the proposed model. The first step is to provide an ASP page that generates a GUID and, using MSMQ, submits the request for the long-running resource, attaching the GUID to the request; the page then redirects the client to an intermediate ASP page that checks for the response each time it refreshes. Next, you need an application that picks up the requests from MSMQ and satisfies them against the resource. When the application receives a response from the resource, it inserts the response into a SQL Server table, indexed by the previously generated GUID. The next time the intermediate ASP page executes after the response has been written to the SQL table, it redirects the browser to a Web page that displays the response to the user. Long-winded as the explanation may be, it really is quite a simple process.

Exploring the Code
The code for the solution is really quite simple. The code below demonstrates the ease with which you can queue a message using MSMQ. The first step is to create a GUID for the unique message ID. You’ll find the code in the GUID_Generator project in the GUIDGen class included in the downloadable source.

Here’s how the ASP script calls the NewGUID() method:

   '---- create a GUID to uniquely identify the message
   Set objGUID = Server.CreateObject("GUID_Generator.GUIDGen")
   strMsgID = objGUID.NewGUID()

Next, the script opens the MSMQ queue.

   '---- open MSMQ Queue for sending request message
   Set objQueueInfo = Server.CreateObject( _
      "MSMQ.MSMQQueueInfo")
   objQueueInfo.FormatName = _
      "DIRECT=OS:.\private$\ResourceServerReceiveQueue"
   Set MSMQ_QueueSend = objQueueInfo.Open( _
      MQ_SEND_ACCESS, MQ_DENY_NONE)

The preceding script fragment creates an MSMQ.MSMQQueueInfo reference using a direct format name to identify the queue; the "OS:." notation indicates that the queue is located on the local machine.

The last section of the code creates a message, sets the message body to the unique message ID, and then sends the message to the queue that was opened.

   '---- send message
   Set objMsg = Server.CreateObject("MSMQ.MSMQMessage")
   objMsg.Body = strMsgID
   objMsg.Send MSMQ_QueueSend

In practice, the GUID would comprise only a small portion of the request information being sent to the resource, but this example serves to illustrate a simple solution. Finally, the user is redirected to an intermediate page that polls for a response from the resource server.

   '---- redirect user to intermediate wait page
   Response.Redirect _
      "Response_Intermediate_Wait.asp?MsgID=" & strMsgID

For the purposes of illustrating a simple solution, I have included the code for a resource server that ‘listens’ on the queue that the requests are sent to. The server uses an event-driven mechanism provided by MSMQ to receive notifications of new messages in the queue. When a new message arrives, the server removes the message from the queue, and immediately writes the GUID to a SQL table, which will be the signal to the intermediate page that the resource server has responded to the request. Again, in practice, the data written to the table would be considerably more elaborate to support the needs of the application. You can download the code here.
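The downloadable sample contains the actual resource server, but the following Visual Basic sketch illustrates the general shape of such a listener. The queue name, table name, column names, and connection string are assumptions for illustration, not the article's exact code:

   '---- illustrative resource server listener (Visual Basic form module)
   '---- queue, table, and connection string names are assumptions
   Private WithEvents mqEvent As MSMQEvent
   Private mqQueue As MSMQQueue

   Private Sub Form_Load()
       Dim qInfo As New MSMQQueueInfo
       qInfo.FormatName = "DIRECT=OS:.\private$\ResourceServerReceiveQueue"
       Set mqQueue = qInfo.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)
       Set mqEvent = New MSMQEvent
       mqQueue.EnableNotification mqEvent   ' raise Arrived when a message lands
   End Sub

   Private Sub mqEvent_Arrived(ByVal Queue As Object, ByVal Cursor As Long)
       Dim q As MSMQQueue
       Dim objMsg As MSMQMessage
       Set q = Queue
       Set objMsg = q.Receive(ReceiveTimeout:=0)   ' remove the message from the queue
       If Not objMsg Is Nothing Then
           '---- call the long-running resource here, then record the response
           Dim cn As New ADODB.Connection
           cn.Open "Provider=SQLOLEDB;Data Source=(local);" & _
                   "Initial Catalog=WebResponses;Integrated Security=SSPI"
           cn.Execute "INSERT INTO Responses (MsgID, ResponseText) " & _
                      "VALUES ('" & objMsg.Body & "', 'response data')"
           cn.Close
       End If
       q.EnableNotification mqEvent         ' re-arm notification for the next message
   End Sub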

At this point, it is important to mention that the resource server application is the key to making the call asynchronous from the Web page's perspective. By allowing the ASP page to simply submit the request to an external process, in this case a Visual Basic application, the page is freed from waiting for the response to return, as it had to when the ASP script executed synchronously. This is very similar to the way operating systems block a thread that is waiting for an I/O read to complete. Blocking the waiting thread allows the operating system to go on executing other waiting threads; when the read completes, the thread is reawakened and allowed to continue executing. In a very similar way, this solution allows the system to make better use of its available resources and increase its overall throughput.

When running, the ASP pages generate the sequence of client-side displays shown in Figure 2, Figure 3, and Figure 4.

Figure 2. Make a Request: Clicking this button will run the ASP script that generates the unique message ID, and sends the message to the resource server using MSMQ.
Figure 3. Waiting for Response: After making a request, users see this page, which refreshes itself every 2 seconds, looking for its response from the resource server application.
Figure 4. Request Completed: When the server finishes processing the response, the “wait” page retrieves the content from the database and displays the results to the user.

Why Should You Bother?
One obvious question you should be asking about this solution is whether the effort involved is worth the tradeoff between tying up a single thread and polling an ASP page many times. The answer is that undeniably, it’s vastly more scalable to execute large numbers of extremely short requests (such as the polling page). Looking at Table 1, a single processor machine with requests taking, on average, 120 seconds to complete can process roughly 13 requests per minute. By quickly submitting the request, even if you poll for the response many times, you can increase the capacity of the server by at least several hundred times.

Another issue that should come to mind is how the system will respond to a large number of database queries from clients polling for their specific responses. First, consider that not all users will be actively executing requests for the long-running resources your site provides. In fact, on most sites the majority of ASP requests are satisfied by very short-running database queries and static content; those types of requests are not good candidates for this solution. Second, if there are so many active requests in polling mode that other system resources, such as database connections, become strained, consider using smaller tables optimized specifically for fast retrieval of response records.

There are still other ways of optimizing the resources required to build asynchronous connections to resources. While beyond the scope of this article, there are methods of optimizing MSMQ messaging to yield performance increases of at least one to two orders of magnitude.

Another way of improving performance would be to optimize the polling algorithm. By monitoring the average response time of the resource, the polling could be made more resource-friendly. The initial period of time the client waits before checking for the response could be set to an amount of time slightly higher than the average response time of the resource. If the initial delay was set properly, most of the requests would poll only once or twice.

When you start tweaking the wait times, a reasonable initial wait is the average response time plus one second. The second wait could add another half of the average response time, and so on. These times are only suggestions, and you should take into account how tightly grouped the response times are. For resources whose response times cluster closely around the average (small standard deviation), shorter secondary waiting intervals are appropriate; for resources with widely ranging response times (large standard deviation), longer secondary waiting periods are in order.
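Here's a minimal VBScript sketch of that schedule; the function name is purely illustrative, and avgResponseTime is assumed to be the measured average response time of the resource, in seconds:

   '---- illustrative wait-time schedule for the polling page
   '---- attempt 1 waits avgResponseTime + 1 seconds; each later attempt
   '---- adds another half of the average response time
   Function NextWaitSeconds(attempt, avgResponseTime)
       NextWaitSeconds = avgResponseTime + 1 + ((attempt - 1) * avgResponseTime / 2)
   End Function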

Remember that the key to increasing IIS's capacity is to design ASP scripts to run as quickly as possible. If a particular request has long response times, use the solution described in this article to decouple the request from the response. My real-world experience applying this solution in a high-capacity Web server environment has shown it to be a valuable cost-savings tool.
