Serving files that clients can download over the Internet couldn't be easier, right? Just copy the downloadable files into your Web application's directory, distribute the links, and let IIS do all the work. Then again, serving files couldn't be more of a pain in the neck. You don't want your data to be accessible to the whole world. You don't want your server crowded with hundreds of static files. Maybe you even want to serve temporary files, creating them on the fly only when the client starts the download.
Unfortunately, that's not possible using IIS's default response to a download request. So normally, to gain control over the download process, developers link to a custom .aspx page, where they can check credentials, create the downloadable file, and push that file back to the client using the Response.WriteFile method.
And that's where the real troubles begin.
What's the Problem?
The WriteFile method seems perfect: given a file name, it streams the binary data for that file down to the client. Until recently, though, WriteFile was a notorious memory hog, loading the entire file into your server's RAM to serve it (actually, it can use up to twice the file's size). For large files, this causes severe memory problems and can recycle the ASP.NET process itself. But in June 2004, Microsoft solved that issue via a hotfix (see Knowledge Base Article 823409). This hotfix is now part of the .NET Framework 1.1 Service Pack 1.
Author's Note: If you haven't installed the .NET Framework version 1.1 Service Pack 1 (SP1), please do it now; SP1 provides numerous fixes and improvements.
Among other things, the hotfix introduced the TransmitFile method, which reads the disk file into a smaller memory buffer before transmitting it. Even though that solution solves the memory and recycling problems, it's unsatisfying. You have no control over the life cycle of the response: you can't tell whether the download completed properly, you have no way of knowing whether the download was interrupted, and (if you created a temporary file) you don't know if or when you can delete the file. Even worse, if the download does fail, the TransmitFile method restarts it from the top on the client's next attempt.
One possible solution, implementing the Background Intelligent Transfer Service (BITS), is not an option for most sites, because it would sacrifice browser and OS independence on the client side.
The basis for a satisfying solution came from Microsoft's first attempt to solve the memory-cluttering problems that WriteFile caused (see Knowledge Base Article 812406). The workaround in that article shows a chunk-wise downloading process, which reads data from a file stream. Before the server sends each chunk of bytes to the client, it checks whether the client is still connected, using the Response.IsClientConnected property. If so, it continues streaming bytes; otherwise, it stops, preventing the server from delivering unnecessary data.
That's the way to go, particularly when you're downloading a temporary file. When IsClientConnected returns false, you know that the download was interrupted and you must keep the file; whereas when the procedure completes successfully, you can delete it. In addition, to resume broken downloads, all you have to do is start streaming again from the point in the file where the client connection failed during the previous download attempt.