Why Should You Bother?
One obvious question you should be asking about this solution is whether the effort involved is worth it: is it really cheaper to poll an ASP page many times than to tie up a single thread for the duration of the request? The answer is that, undeniably, it is vastly more scalable to execute large numbers of extremely short requests (such as the polling page). As Table 1 shows, a single-processor machine whose requests take, on average, 120 seconds to complete can process roughly 13 requests per minute. By submitting the request quickly and returning immediately, even if you poll for the response many times, you can increase the capacity of the server by at least several hundred times.
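The arithmetic behind those figures is worth making explicit. A minimal sketch follows; it assumes IIS's default pool of 25 ASP worker threads per processor (an assumption on my part, not stated in Table 1), and the 50 ms polling-page time is likewise an illustrative guess:

```python
# Back-of-the-envelope capacity math.
ASP_THREADS_PER_CPU = 25      # assumed IIS default ASP thread pool per processor
avg_request_seconds = 120.0   # the long-running request from Table 1

# Each thread completes 60/120 = 0.5 requests per minute, so the box tops out at:
requests_per_minute = ASP_THREADS_PER_CPU * (60.0 / avg_request_seconds)
print(requests_per_minute)    # 12.5, i.e. roughly 13 requests per minute

# A polling page that returns in ~50 ms (illustrative figure) instead yields:
poll_seconds = 0.05
poll_capacity = ASP_THREADS_PER_CPU * (60.0 / poll_seconds)
print(poll_capacity)          # 30000.0 short requests per minute
```

Under these assumptions, swapping one 120-second request for a handful of 50 ms polls raises raw request throughput by a factor of well over a thousand, which is where the "several hundred times" capacity claim comes from even after accounting for multiple polls per job.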
Another issue that should come to mind is how the system will respond to a large number of database queries from clients polling for their specific responses. First, consider that not all users will be actively executing requests for the long-running resources your site provides. In fact, in most sites, the majority of ASP requests are satisfied by very short-running database queries and static content; those requests are probably not good candidates for this solution. Second, if so many active requests are in polling mode that other system resources, such as database connections, are strained, consider moving the response records into smaller tables optimized specifically for fast retrieval.
There are still other ways to reduce the resources required to build asynchronous connections to resources. While beyond the scope of this article, there are methods of tuning MSMQ messaging that can yield performance increases of one to two orders of magnitude.
Another way to improve performance is to optimize the polling algorithm itself. By monitoring the average response time of the resource, you can make the polling more resource-friendly: set the initial period the client waits before checking for the response to slightly more than the resource's average response time. If the initial delay is set properly, most requests will poll only once or twice.
When you start tweaking the wait times, a reasonable initial wait is the average response time plus one second; the second wait might be the average response time plus one half the average response time, and so on. These times are only suggestions, and you should take into account how tightly grouped the response times are. For resources whose response times cluster close to the average (small standard deviation), shorter secondary waiting intervals are appropriate; for resources with widely ranging response times (large standard deviation), longer secondary waiting periods are in order.
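The heuristic above can be sketched in a few lines. This is only one possible reading of it: the function name and sample data are illustrative, the standard-deviation adjustment simply adds the spread to the second wait, and the growth rule for waits beyond the second (half the average per poll) is my own extrapolation of the "etc.":

```python
import statistics

def polling_waits(samples, polls=4):
    """Illustrative sketch of the wait-time heuristic.

    Initial wait: average response time plus one second.
    Second wait:  average plus half the average, stretched by the
                  standard deviation when response times vary widely.
    Later waits:  grow by half the average each poll (an assumption).
    """
    avg = statistics.mean(samples)
    sd = statistics.pstdev(samples)        # spread of the observed times
    waits = [avg + 1.0, avg * 1.5 + sd]
    while len(waits) < polls:
        waits.append(waits[-1] + avg / 2.0)
    return waits

# e.g. a resource averaging 120 s with tightly grouped responses
print(polling_waits([118.0, 120.0, 122.0]))
```

For the tightly grouped sample shown, the first check happens at 121 seconds, so most requests that finish near the average are answered on the very first poll, which is exactly the resource-friendliness the tuning is after.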
Remember that the key to increasing IIS's capacity is designing ASP scripts to run as quickly as possible. If a particular request has long response times, use the solution described in this article to decouple the request from its response. My real-world experience applying this solution in a high-capacity Web server environment has shown it to be a valuable cost-savings tool.