
Control Log File Growth to Avoid Out-of-Space Errors


The first step in my initiation as a SQL Server admin came after only a week or two on the job, when I was confronted with a message that made my palms sweat. The error read something like this:

Error: 1105, Severity: 17, State: 2:
Could not allocate space for the transaction log for
database 'Registration' because the device is full.
Dump the log or enlarge the device to create more space.

I learned that any error with a severity greater than 16 is a "showstopper," meaning the server won't work until the error is cleared. My distress only increased when my attempts to dump the log were refused because the log was out of space! Every change, including the dump operation itself, must be recorded in the transaction log, so there was no room left to log the very command that would have freed the space.

After much fear and loathing, Books Online gave me the instructions required to clear the error. Unfortunately, the instructions were almost as daunting as the error itself. They required changes that could be accomplished only by resetting server options and then selectively allowing and prohibiting writes to the system tables. I started to feel like the cure was as bad as the disease. Eventually, I was able to free space in the transaction log and clear the error; then I had to lie down for a while.

New and Improved SQL Server with AutoGrow
Fast-forward to SQL Server 2000. Instead of a device running out of space, we now have the ability to let data and log files "autogrow." SQL Server 2000's AutoGrow option is checked by default when a database is created, and because every new database inherits its options from the model system database, the settings in model determine how new databases are configured.

You can set a limit on how large the data or log files can grow and also set the increment by which they grow (either a fixed number of megabytes or a percentage of the file's size). However, leaving the database at its default settings or not specifying a maximum size carries a new risk: the database files can grow to the limit of available disk space.
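For example, here is a minimal sketch of capping both files at creation time. The database name, file names, paths, and sizes are illustrative only; pick values that fit your own environment:

CREATE DATABASE dev
ON PRIMARY
(
    NAME = dev_data,
    FILENAME = 'D:\MSSQL\Data\dev_data.mdf',
    SIZE = 100MB,
    FILEGROWTH = 25MB,   -- grow by a fixed amount rather than a percentage
    MAXSIZE = 500MB      -- cap the data file
)
LOG ON
(
    NAME = dev_log,
    FILENAME = 'E:\MSSQL\Logs\dev_log.ldf',
    SIZE = 25MB,
    FILEGROWTH = 10MB,
    MAXSIZE = 100MB      -- cap the log file
)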

The integration between the Windows operating system and SQL Server will keep an oversized transaction from locking you out: when the disk fills, Windows refuses the request for more space and SQL Server rolls the transaction back. Still, this event could be quite an inconvenience on your local 40GB drive and an even bigger embarrassment on your employer's RAID cage!

The moral of the story is to cap the size of your data and log files at creation, or the files will grow to the limits of your disk space. On the other hand, sizing the data and log files too small will degrade performance, because your server incurs the overhead of enlarging the files while the system is in use.

In this 10-Minute Solution, I will illustrate how to plan ahead for out-of-space errors with the appropriate setup, sizing, and maintenance to keep logs from growing out of control. Should such errors catch you by surprise, you’ll need to set up monitoring facilities that will catch them for you, which I will cover as well.



How do I keep my transaction log at a reasonable size so it doesn’t cause out-of-space errors?



Plan ahead with appropriate setup, sizing, and maintenance and set up monitoring facilities that will catch out-of-space errors for you.

The Hardest Working File in the Database
The transaction log has to record everything that happens in the database, both before and after transactions are committed. The log does get "emptied" after its transactions are written to the database, but the file remains the same size. Backing up the log truncates it; it doesn't back up the database. Think of the log as a sumo wrestler who's gone on a crash diet: once he loses the weight, his skin still hangs at its original size.

Although dumping the log and the DBCC ShrinkDatabase or DBCC ShrinkFile operations can change the log's size, it is useful to understand where truncation begins. The log is truncated from a point identified by a log sequence number (LSN), the place where the active log begins. Knowing this, you should dump the log and database as soon as the database is created, establishing the LSN pointer while the log is at its smallest. This allows you to shrink the log down to its minimum size (as specified in model) without having to pad it with dummy transactions later. Remember, the log has to retain only what hasn't yet been released by a backup, which is why the transaction log is the hardest working file in the database.
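A minimal sketch of that sequence, assuming a database named dev whose log file is named dev_log (both names are illustrative):

-- Establish the LSN while the log is small
BACKUP DATABASE dev TO DISK = 'E:\Backups\dev_full.bak'
BACKUP LOG dev TO DISK = 'E:\Backups\dev_log.bak'

-- Shrink the log file down to a 25MB target
DBCC SHRINKFILE (dev_log, 25)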

Additionally, SQL Server creates a new transaction log for you when you either attach or restore a database without a transaction log. This new log is useless, however, because it doesn’t know anything about the transactions in the database. If you find yourself in this position, be sure to take a full database backup before doing anything else.
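In SQL Server 2000, that attach can be done with sp_attach_single_file_db; a sketch, with an illustrative path:

-- Attach a data file that has no accompanying log;
-- SQL Server builds a fresh, empty log file
EXEC sp_attach_single_file_db
    @dbname   = 'dev',
    @physname = 'D:\MSSQL\Data\dev_data.mdf'

-- The new log has no transaction history, so take a
-- full backup before doing anything else
BACKUP DATABASE dev TO DISK = 'E:\Backups\dev_full.bak'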

One more caveat to the detach/attach approach is that it will work only on a system where the log is in one physical file. Still, setting the LSN on creation will help you establish the optimum start size for the log.

Choose the Appropriate Recovery Model
Out-of-space errors can arise from setting your database recovery model inappropriately. Picking the appropriate recovery model for your database depends on its purpose. Developers typically do not need to recover transactions, so they normally work with the simple model. You can think of the simple model as working without a net.

For recoverability, administrators need to consider how much time their restores will take and how much, if any, data loss their organizations can sustain. For up-to-the-minute recoverability, choose the full recovery model. If you perform a lot of imports or text manipulations or are tight on disk space, you can survive with the bulk logged model. The bulk logged model will give you recoverability at the cost of having to repeat an import or indexing operation. The benefit is smaller log files.
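Switching models is a one-line Alter Database command; here the database name dev is illustrative:

ALTER DATABASE dev SET RECOVERY FULL
-- or: ALTER DATABASE dev SET RECOVERY BULK_LOGGED
-- or: ALTER DATABASE dev SET RECOVERY SIMPLE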

Depending on the size of your files and the amount of time you have to recover, you can combine full database backups with differential and transaction log backups to keep an up-to-date picture of transactions synchronized with what has been written to the database. For the full recovery model to protect you, a few more conditions must be in place with respect to the physical location of the server executables, OS executables, data files, and log files. In essence, you must keep them on separate physical drives; if the data files and log files share a drive, a single disk failure takes out both the database and the log you would need to recover it.
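A sketch of such a rotation under the full model (paths and names are illustrative; run the log backup on a much more frequent schedule than the others):

-- Weekly: full database backup
BACKUP DATABASE dev TO DISK = 'E:\Backups\dev_full.bak'

-- Daily: changes since the last full backup
BACKUP DATABASE dev TO DISK = 'E:\Backups\dev_diff.bak' WITH DIFFERENTIAL

-- Hourly: capture and truncate the transaction log
BACKUP LOG dev TO DISK = 'E:\Backups\dev_log.bak'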

Table 1: SQL Server 2000 Recovery Models

Full
  Feature:  Complete protection
  Function: Point-in-time recovery
  Benefit:  All DB operations are logged, allowing full recovery (with potentially huge logs)

Bulk Logged
  Feature:  Weighty operations minimally logged
  Function: Minimally logged operations: Select Into, BCP, index operations, text operations
  Benefit:  Recovery up to the log backup holding the minimally logged items, with slight exposure to data loss for bulk data; creates smaller logs

Simple
  Feature:  Protection to the extent of the last full database backup
  Function: Eliminates reliance on the transaction log for recovery
  Benefit:  Transaction logs are of no use, as their active portions are truncated at each checkpoint

The bulk logged model works by minimally logging the big chunks of data created by fast bulk operations (see Table 1) and, therefore, leaves the system exposed to small data losses for the operations that are minimally logged. Using this model, you know which data may be inconsistent and which backup to apply. You will also save some disk space, because the types of transactions it skips hog quite a bit of transaction log space.

Recreating Error 1105
To understand how Error 1105 occurred in the first place, I recreated it: I "broke" my configuration so that it would fail and generate the error.

Figure 1. Recreating Error 1105: Unchecking the Automatically Grow File option disallows the DB from asking the OS for more space.

After setting the recovery model to simple and disabling AutoGrow, I forced the error by filling up the files. Unchecking the Automatically Grow File checkbox on both the data file and transaction log tabs prevents the database from asking the OS for more space (see Figure 1).

The next steps were to create a table called stuffit in the dev database and then write a routine to make the file run out of space. The condition While 1 = 1 makes the insert statement run forever, so the routine keeps going until the error is generated and the database stops accepting changes. After opening Query Analyzer and executing the insert statement (see Figure 2), I succeeded in generating the error, leaving the database dead in the water: it was still available for querying, but nothing else, because its data file was too full to accept any more inserts.
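The exact routine isn't reproduced here, but a sketch along these lines will do the job; the column definition is an assumption, chosen to fill data pages quickly:

USE dev

-- One char(8000) row consumes roughly a full 8KB data page
CREATE TABLE stuffit
(
    filler char(8000) NOT NULL
)

-- Runs until the data file fills and Error 1105 is raised
WHILE 1 = 1
    INSERT INTO stuffit (filler)
    VALUES (REPLICATE('x', 8000))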

I used the following resources to recreate the error:

• SQL Server log files
• Performance Monitor logs
• Alerts engine and job steps

You can use them to debug and document the out-of-space errors you encounter. Let's step through each one and examine the helpful information it can offer.

Log Files
The server creates a rolling set of log files that is available inside Enterprise Manager and in the SQL default directory. These files are the first line of defense in debugging most errors. Here is a log report for the activity that generated Error 1105:

2002-03-23 10:48:44.43 spid54    Error: 1105, Severity: 17, State: 2
2002-03-23 10:48:44.43 spid54    Could not allocate space for object 'stuffit' in database 'dev' because the 'PRIMARY' filegroup is full.

This is a straightforward report of what's gone wrong, but it's light on details about how to correct it. One solution is to enlarge the file with an Alter Database command, specifying how much to increase the file size.
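A sketch of that fix, assuming the dev database's data file is named dev_data (the new size must be larger than the current size):

ALTER DATABASE dev
MODIFY FILE (NAME = dev_data, SIZE = 200MB)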

Performance Monitor

Figure 2. Query Analyzer: Setting up Query Analyzer and hitting the Execute button generated the desired error.

Performance Monitor is a tool that comes with the operating system, which gives it a somewhat more inclusive perspective than the SQL error logs. When you install SQL Server, Performance Monitor receives several extra sets of counters specifically for the SQL Server DBMS. The extra counters log disk access, memory use, threads opened, and processor time expended. They give you additional depth in evaluating bottlenecks in your installation and guide you toward the resources that need to be expanded to eliminate them.

Performance Monitor's biggest advantage is that it is integrated with SQL Server's Alerts and Jobs objects, part of SQL Distributed Management Objects (SQL-DMO) (see Figure 3). An administrator can make a counter's threshold the trigger for an alert, which can e-mail or page the administrator or whomever is assigned to maintenance.

Figure 3. Performance Monitor: Performance Monitor is integrated with SQL Server's Alerts and Jobs objects.

Database Alerts
The Alerts engine is a SQL Server tool that integrates the output of Performance Monitor with the SQL-DMO Job object. It not only can find out about trouble, it can actually do something about it. It can be set to monitor for a specific error number (such as a showstopper like 1105) or to watch for a particular Performance Monitor threshold to be reached. Because the Alerts engine is integrated with the Job engine, once the error or event is detected it can initiate a SQL-DMO Job that executes a workflow you specify.

The workflow can take different actions based on the success or failure of its preceding steps, so the actions you specify can branch. The actions take the form of T-SQL commands you type into the Job steps held by the Alert. In Figure 3, I simply set an alert on the size of the log file so I would get a warning. The warning can come as a dialog, a page to an administrator, or an e-mail informing the recipient of the execution of the job.
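You can also script the same kind of alert in T-SQL; here is a sketch using the msdb alerting procedures, where the alert name and the 'DBA' operator are illustrative (the operator must already exist):

EXEC msdb.dbo.sp_add_alert
    @name = 'dev out of space',
    @message_id = 1105,
    @severity = 0,            -- must be 0 when a message ID is specified
    @database_name = 'dev',
    @notification_message = 'A data or log file in dev is full.'

EXEC msdb.dbo.sp_add_notification
    @alert_name = 'dev out of space',
    @operator_name = 'DBA',
    @notification_method = 1  -- 1 = e-mail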

Plan Ahead
The solution for out-of-space errors is to plan ahead for them with the appropriate setup, sizing, and maintenance to keep logs from growing out of control. You can eliminate surprises by setting up alerts and jobs that will catch, and potentially fix, the errors for you. As a last resort, you can make existing files larger or create additional files with the Alter Database command. Should you choose this solution, you may have to place the new files on a different drive that is visible to the server, or on a new filegroup, so the database can use them.
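A sketch of adding such a file on another drive; the names, path, and sizes are illustrative:

ALTER DATABASE dev
ADD FILE
(
    NAME = dev_data2,
    FILENAME = 'F:\MSSQL\Data\dev_data2.ndf',  -- a drive with free space
    SIZE = 100MB,
    MAXSIZE = 500MB,
    FILEGROWTH = 25MB
)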
