A Useful Tip For Dealing With A Full Transaction Log Drive

This is a problem all DBAs will run into at some point – the drive a database’s transaction log is on runs out of space. In most cases, this is caused by the transaction log autogrowing until there is no room left on the drive. If your alerts are set up correctly, you’ll get an out-of-disk-space alert when this happens. Of course, by then, the damage is done.
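
If you want to catch this before the drive actually fills, a query along these lines will show free space on the volumes hosting your log files. This is just a sketch – it assumes SQL Server 2008 R2 SP1 or later, where sys.dm_os_volume_stats is available, and the threshold and scheduling are up to you:

    -- Free space on each volume that hosts a transaction log file.
    -- Sizes in sys.master_files are in 8 KB pages, so size * 8 / 1024 = MB.
    SELECT
        DB_NAME(mf.database_id)      AS database_name,
        mf.name                      AS log_file,
        mf.size * 8 / 1024           AS log_size_mb,
        vs.volume_mount_point,
        vs.available_bytes / 1048576 AS free_space_mb
    FROM sys.master_files AS mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
    WHERE mf.type_desc = 'LOG'
    ORDER BY free_space_mb;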

The first steps you’ll try will likely fail. Kick off a transaction log backup to free up space in the log? Fail. The backup is a logged event itself, and if SQL can’t write to the log, it won’t work. Add another file on another drive to the transaction log? Guess what? That’s a logged operation too, and it will fail for the same reason.
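
For reference, these are the commands you’d normally reach for first. The database name and paths below are placeholders for illustration, not anything from a real system:

    -- The usual first responses. As noted above, both need to write to the
    -- log themselves, so with a full log drive they can fail.

    -- Attempt 1: back up the log to free space inside it.
    BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';

    -- Attempt 2: add a second log file on a drive that still has room.
    ALTER DATABASE SalesDB
    ADD LOG FILE
    (
        NAME = SalesDB_log2,
        FILENAME = N'E:\SQLLogs\SalesDB_log2.ldf',
        SIZE = 512MB
    );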

So what’s a DBA to do? One option is to truncate the log, but that means losing your point-in-time recovery options for a window – the log chain is broken until your next full or differential backup. Here’s a little trick I’ve used instead:

Set your autogrow size to something fairly large – 25 MB or larger. If you get a warning that your transaction log cannot grow because it is out of disk space, odds are there will actually be some space left on the disk – just not the full amount specified for autogrowth – maybe only 15 MB in our example. Now you can change your autogrow to something smaller, say 1 MB. That gives SQL the room to grow the log enough to perform a log backup or add another file and do whatever you need to do to resolve the situation. This also maintains your ability to perform a point-in-time recovery. (Don’t forget to change your autogrow setting back once you have cleared disk space.)
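
As a rough sketch in T-SQL – the database name, logical file name, and backup path here are placeholders:

    -- Step 1: shrink the autogrowth increment so the next growth
    -- fits in whatever sliver of disk is left.
    ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 1MB);

    -- Step 2: now the log backup (or adding another file) has room to succeed.
    BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_log.trn';

    -- Step 3: once disk space has been freed or added, put the original growth setting back.
    ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 25MB);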

This isn’t a foolproof method. Depending on what else is using the disk (or if the remaining free space happens to be an exact multiple of your autogrow increment), you could still end up with zero bytes available, and this method wouldn’t work. However, if you do this, you at least give yourself a fighting chance to recover while maintaining all your disaster recovery options.

2 thoughts on “A Useful Tip For Dealing With A Full Transaction Log Drive”

  1. Just for the record, I’d set the autogrowth to be larger than that – typically 128-256MB even for small databases. This helps avoid VLF problems:

    http://adventuresinsql.com/2009/12/a-busyaccidental-dbas-guide-to-managing-vlfs/

    Dave Stein also has a neat trick – he leaves a 5-10GB file on each drive, like an old backup. When he starts getting OS & SQL alerts that he’s low on space, he can delete that file and buy himself troubleshooting time.

  2. Good point about the VLFs. I’ve got a weekly routine that monitors for that, so I don’t run into that too often. I like the old file trick!!
