What annoys you? For me, a few things I find annoying are people talking on cell phones in restaurants, people chewing with their mouths open, and getting calls from telemarketers. Failing SQL jobs annoy me too. OK, maybe not as much as Chatty Cathy yakking with her friend on the phone during dinner, but it’s close.
I hate it when my maintenance jobs fail for some reason. I REALLY hate it when they fail intermittently. That means there is usually some strange situation that I didn’t think about when writing my code that pops its head up once in a while and crashes my routines. Unless it’s something serious, I can ignore it for a while. If it happens infrequently, it’s tempting to just let it be. But it always sticks in my craw, bothering me like a tiny sliver in my finger.
This is how I discovered the issue with TRY / CATCH blocks and restoring databases that I wrote about last time. I finally got so fed up with the seemingly random failures, I had to dig in and fix the thing once and for all.
This is also the driving force behind the iterative revisions of my backup routines. I got tired of my transaction log backup failing every time a new database was created or a database was restored onto a new server. Rather than letting the routine fail with a “no current database backup” error, I wrote code to check whether a full backup existed. If there wasn’t one, the script would make one. Problem solved.
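That check looks something like the following sketch. This is not my exact production code: the @Database variable and the backup path are assumptions, and the check relies on backup history in msdb.dbo.backupset (where type = 'D' marks a full database backup), which conveniently is also empty when a database has just been restored onto a new server.

```sql
-- Sketch: take a full backup if none is recorded in msdb backup history.
-- @Database and the backup path are placeholders; adjust for your environment.
DECLARE @Database sysname = N'MyDb';

IF NOT EXISTS ( SELECT 1
                FROM msdb.dbo.backupset
                WHERE database_name = @Database
                  AND type = 'D' )   -- 'D' = full database backup
BEGIN
    DECLARE @Sql nvarchar(max) = N'BACKUP DATABASE ' + QUOTENAME(@Database)
        + N' TO DISK = N''X:\Backups\' + @Database + N'.bak''';
    EXEC (@Sql);
END
```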
Or mostly solved. Occasionally, the script would still fail with the same error. Failures were much rarer, but they were still happening. It was annoying.
Some research led me to discover that the latest cause of failure was databases that had been switched from full recovery mode to simple and then back to full. If you try to make a log backup in this situation, the backup will fail with a “no current database backup” error. The question was: how can I determine whether the recovery mode of a database has changed?
The answer lies in the sys.database_recovery_status catalog view. It has one row for each database, and if the last_log_backup_lsn column is NULL, a full backup needs to be performed before a log backup can succeed.
So I revised my log backup script once again, this time adding this:
IF EXISTS ( SELECT 1
            FROM sys.database_recovery_status
            WHERE database_id = DB_ID(@Database)
              AND last_log_backup_lsn IS NULL )
    /* make a full backup, then continue */
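In context, the revised flow looks roughly like this. It is only a sketch of the idea, not my actual script: the database name and backup paths are invented placeholders, and the dynamic SQL is simplified.

```sql
-- Sketch: if no log backup chain exists, take a full backup first,
-- then take the log backup. Paths and names are hypothetical.
DECLARE @Database sysname = N'MyDb';

IF EXISTS ( SELECT 1
            FROM sys.database_recovery_status
            WHERE database_id = DB_ID(@Database)
              AND last_log_backup_lsn IS NULL )
BEGIN
    DECLARE @Full nvarchar(max) = N'BACKUP DATABASE ' + QUOTENAME(@Database)
        + N' TO DISK = N''X:\Backups\' + @Database + N'.bak''';
    EXEC (@Full);
END

DECLARE @Log nvarchar(max) = N'BACKUP LOG ' + QUOTENAME(@Database)
    + N' TO DISK = N''X:\Backups\' + @Database + N'.trn''';
EXEC (@Log);
```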
Now, at long last, my log backup script should be fairly bulletproof. It will detect if a database is new, if it has been restored from another server, or if its recovery mode has changed. None of these will cause it to fail with a “no current database backup” error. And I can’t think of another reason this error might be raised, so I am hopeful I will never see it again.
Note: The MSDN article about this view does not seem to document this behavior of the column. However, I’ve verified the behavior on SQL 2008 R2, and this tip was given to me by Paul Randal, so I’m pretty comfortable relying on it.
3 thoughts on “Determining If A Full Backup Is Needed For A Log Or Differential Backup”
If the database recovery mode is simple, last_log_backup_lsn is always NULL.
Another way to confirm that a database already has a full backup is to check page 0 of the mdf file.
If you find
DifferentialBaseGuid = 00000000-0000-0000-0000-000000000000
then there has been no full backup of this database. Otherwise you will see an actual GUID, like this:
DifferentialBaseGuid = 0b7689d7-6192-43da-bba6-4e32b9f1e18d
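The file header page can be inspected with the undocumented DBCC PAGE command, along these lines (MyDb is a placeholder; trace flag 3604 is needed to send DBCC output to the client):

```sql
DBCC TRACEON (3604);          -- route DBCC output to the client
DBCC PAGE ('MyDb', 1, 0, 3);  -- file 1, page 0 (file header), print option 3
-- Look for DifferentialBaseGuid in the output.
```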
Yes. I was assuming we had already confirmed the db was in full recovery mode by other means. You can also possibly use DBCC DBINFO (which is just DBCC PAGE looking at page 9, file 1). I noticed several values get reset when the mode changes from full to simple and back to full. For one, dbi_LastLogBackupTime gets reset to ‘1900-01-01 00:00:00’. A couple of other values get reset too, which is what prompted me to email Paul. He recommended the method I wrote about, so I figure that’s the safest way. Given that all these DBCC commands are undocumented and thus subject to change, it’s probably the better one to use.
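For reference, that check looks like this (MyDb is a placeholder, and again, the command is undocumented and its output format may change between versions):

```sql
DBCC TRACEON (3604);   -- route DBCC output to the client
DBCC DBINFO ('MyDb');  -- dumps boot page (file 1, page 9) fields
-- Check dbi_LastLogBackupTime; a value of 1900-01-01 00:00:00.000
-- indicates it has been reset.
```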
Thanks for sharing,
My problem was a faulty Log Shipping configuration.
I failed over a week or so ago, but did not clean up the Log Shipping settings.