

Control Log File Growth to Avoid Out-of-Space Errors (Page 3)





Choose the Appropriate Recovery Model
Out-of-space errors can arise from choosing an inappropriate recovery model for your database. Picking the right recovery model depends on the database's purpose. Developers typically do not need to recover transactions, so they normally work with the simple model. You can think of the simple model as working without a net.
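On a development box, for example, you could switch a database to the simple model with a single statement (MyDevDB is a placeholder database name; substitute your own):

```sql
-- Switch a development database to the simple recovery model.
-- Under this model, the active portion of the log is truncated
-- at each checkpoint, keeping the log file small.
ALTER DATABASE MyDevDB SET RECOVERY SIMPLE;
```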

For recoverability, administrators need to consider how much time their restores will take and how much, if any, data loss their organizations can sustain. For up-to-the-minute recoverability, choose the full recovery model. If you perform a lot of imports or text manipulations or are tight on disk space, you can survive with the bulk logged model. The bulk logged model will give you recoverability at the cost of having to repeat an import or indexing operation. The benefit is smaller log files.
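The model can be changed at any time with the same ALTER DATABASE syntax, so some shops switch temporarily before a large import and switch back afterward (MyDB is a placeholder name; this is a sketch, not a prescription):

```sql
-- Drop to bulk-logged before a heavy import to keep the log small...
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

-- ...run the bulk operation here (BCP, SELECT INTO, index rebuild)...

-- ...then restore full, point-in-time recoverability.
ALTER DATABASE MyDB SET RECOVERY FULL;
```

Remember to take a log backup after switching back to full, so the recovery chain covers the bulk operation.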

Depending on the size of your files and the amount of time you have available to recover, you can combine full database backups with regular transaction log backups to keep an up-to-date record of transactions synchronized with what has been written to the database. For the full recovery model to work, a few conditions must be in place regarding the physical location of the server executables, OS executables, data files, and log files. In essence, you must keep them on separate physical drives; the executables, data files, and log files need to be on independent disks so that a single drive failure does not take out both the data and the log.
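A minimal backup routine under the full recovery model might look like the following. The database name and the backup paths are placeholders; in practice the log backups would run on a schedule matched to how much data loss you can tolerate:

```sql
-- Periodic (e.g., nightly) full database backup.
BACKUP DATABASE MyDB
    TO DISK = 'E:\Backups\MyDB_full.bak';

-- Frequent transaction log backups keep the recovery chain current
-- and allow the log's inactive portion to be truncated, which is
-- what keeps the log file from growing without bound.
BACKUP LOG MyDB
    TO DISK = 'F:\LogBackups\MyDB_log.trn';
```

Note that the backup destinations are on drives separate from the data and log files themselves, in keeping with the placement advice above.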

Table 1: SQL Server 2000 Recovery Models

Full
  Complete protection with point-in-time recovery. All database operations are fully logged, allowing full recovery, at the cost of potentially huge logs.

Bulk Logged
  Weighty operations are minimally logged. Minimally logged operations include:
  • SELECT INTO
  • BCP
  • Index operations
  • Text operations
  Recovery is possible up to the log backup holding the minimally logged items, with slight exposure to data loss for bulk data; produces smaller logs.

Simple
  Protection only to the extent of the last full database backup. Eliminates reliance on the transaction log for recovery; transaction logs are of no use because their active portions are truncated at each checkpoint.

The bulk logged model works by recording only minimal information for the big chunks of data created by fast bulk operations (see Table 1) and, therefore, leaves the system exposed to small data losses for those operations. Using this model, you know which data may be inconsistent and which backup to apply. You will also save some disk space, because the types of transactions that are minimally logged would otherwise hog quite a bit of transaction log space.
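To confirm which model a database is currently using before deciding on a backup strategy, you can query it directly (MyDB is again a placeholder):

```sql
-- Returns FULL, BULK_LOGGED, or SIMPLE for the named database.
SELECT DATABASEPROPERTYEX('MyDB', 'Recovery');
```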
