Shaun J Stuart

Just another SQL Server weblog

Recently, I was going through all my servers and performing some basic health checks. One of these checks was to look for foreign keys and constraints that are not trusted. I figured this would be something of a rare occurrence and was completely surprised when I found out that roughly 75% of my servers had at least one database where foreign keys and / or constraints were not trusted.

I'm telling the truth. Trust me.

Why is this important? When these items are trusted, SQL can make some assumptions about the data in the tables and can use those assumptions to create more efficient query plans. If, however, the constraints are not trusted, SQL can't make any assumptions and must construct a query plan that may be more computationally intensive.

Before I get started, let me first explain what an untrusted foreign key or constraint is. (From here on, I'll use the term constraint to include both constraints and foreign keys.) When you define a constraint on a table, you are telling SQL Server to only allow certain data in certain columns. In the case of a foreign key, you are telling SQL Server that the value in a column in Table A must exist in Table B as a primary key. If you try to enter a value that is not in Table B, the insert will fail.

However, you can tell SQL Server to cheat and allow you to insert the value anyway. You can do this in a couple of ways. The most obvious is to disable the constraint and insert the data. You can then re-enable the constraint. Another way is to perform a bulk insert operation without specifying the CHECK_CONSTRAINTS option. This is often done to speed imports of large amounts of data.
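To make that concrete, here is a sketch of both approaches, using the dbo.Orders table and FK_Orders_Customers foreign key defined later in this post (the bulk load's data file path is made up):

-- Option 1: disable the constraint, load the data, then re-enable it
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers
-- ... data is inserted here with no foreign key checking ...
ALTER TABLE dbo.Orders CHECK CONSTRAINT FK_Orders_Customers

-- Option 2: a bulk load that omits the CHECK_CONSTRAINTS option,
-- so the foreign key is not checked during the import
BULK INSERT dbo.Orders FROM 'C:\Imports\orders.dat'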

Unfortunately, once you do this, SQL Server marks the constraint as "not trusted". Simply re-enabling the constraint will not change this. The constraint remains untrusted, even after being re-enabled. Re-enabling will prevent bad data from being inserted into the table again, but it does not validate the data that was inserted while the constraint was disabled. In order to make the constraint trusted, you need to tell SQL to validate the constraint against all the data that is currently in the table. I'll show how to do this later.

So who cares? If I know the data I am importing is valid, why not go ahead and disable the constraint, load the data, then re-enable the constraint? The problem is you know the data is valid, but SQL Server doesn't. And that can lead to sub-optimal performance.

Let me give a very simplified example. The following script will create two tables: Orders and Customers. There is a foreign key constraint on the Orders table that requires the value in Orders.CustomerNumber to be in the Customers table.


CREATE TABLE [dbo].[Orders]
       (
        [OrderNumber] [int] IDENTITY(1, 1)
                            NOT NULL
       ,[CustomerNumber] [int] NOT NULL
       ,[ProductNumber] [varchar](100) NOT NULL
       ,[Qty] [int] NOT NULL
       ,CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ([OrderNumber] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                  IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                  ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
       )
ON     [PRIMARY]

GO

CREATE TABLE [dbo].[Customers]
       (
        [CustomerNumber] [int] IDENTITY(1, 1)
                               NOT NULL
       ,[CustomerName] [varchar](100) NOT NULL
       ,[Address] [varchar](100) NOT NULL
       ,[City] [varchar](50) NOT NULL
       ,[State] [char](2) NOT NULL
       ,[ZipCode] [varchar](10) NOT NULL
       ,CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED ([CustomerNumber] ASC)
            WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                  IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                  ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
       )
ON     [PRIMARY]

GO

ALTER TABLE [dbo].[Orders]  WITH CHECK ADD  CONSTRAINT [FK_Orders_Customers]
FOREIGN KEY([CustomerNumber])
REFERENCES [dbo].[Customers] ([CustomerNumber])
GO

ALTER TABLE [dbo].[Orders] CHECK CONSTRAINT [FK_Orders_Customers]
GO

Now, let's insert a few rows of data. First, we'll create a customer record, then two order records that link to that customer.


INSERT  INTO Customers
        (CustomerName
        ,Address
        ,City
        ,State
        ,ZipCode)
VALUES  ('Big Spender'
        ,'123 Main Street'
        ,'Gotham'
        ,'NY'
        ,'10111')

INSERT  INTO Orders
        (CustomerNumber
        ,ProductNumber
        ,Qty)
VALUES  (1
        ,'ABC123'
        ,10)

INSERT  INTO Orders
        (CustomerNumber
        ,ProductNumber
        ,Qty)
VALUES  (1
        ,'ABC123'
        ,11)

Now, let's say we want to execute the following query:


SELECT  *
FROM    Orders
WHERE   Orders.CustomerNumber IN (SELECT    CustomerNumber
                                  FROM      Customers)

Not that great of a query, but this is just an example. Let's now run the query with the Include Actual Execution Plan option and see what query plan SQL came up with:

Notice that SQL doesn't even touch the Customers table. This is because the constraint we defined guarantees that every value in Orders.CustomerNumber exists in the Customers table.

Now, let's disable the constraint and re-run the same query:

ALTER TABLE Orders NOCHECK CONSTRAINT FK_Orders_Customers

Here's the query plan now:

Because the constraint is not trusted, SQL must construct a query plan that accesses the Customers table. This obviously requires SQL Server to do more work than the plan we got when the constraint was trusted.

Now let's re-enable the constraint and see what happens:

ALTER TABLE Orders CHECK CONSTRAINT FK_Orders_Customers

When we run the query again, here's the plan SQL generates:

The plan is exactly the same as the one we got when the constraint was disabled! This is because the constraint is still untrusted. Even though we did not add any data to the tables while the constraint was untrusted, the query engine does not know that and SQL leaves the constraint marked as untrusted. Therefore, the query optimizer cannot use the additional information the constraint provides when it optimizes the query.

So how do we get the constraint trusted again? By running the ALTER TABLE command to tell SQL to verify the constraint:

ALTER TABLE Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers

Note the double CHECK. This is required: the first CHECK is part of the WITH CHECK clause, which tells SQL Server to validate the existing data, and the second is part of CHECK CONSTRAINT, which enables the constraint. Now when we run our query, we get our initial execution plan again:

What happens if, while the constraint was disabled, someone did insert invalid data? In that case, the above statement would fail with an error message. If this happens, you need to fix the problem before the constraint can be re-trusted.
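Tracking down the offending rows is usually a simple anti-join. For the example tables above, something like this would list any orders pointing at a nonexistent customer:

-- Find Orders rows whose CustomerNumber has no matching Customers row
SELECT  o.*
FROM    dbo.Orders o
WHERE   NOT EXISTS ( SELECT 1
                     FROM   dbo.Customers c
                     WHERE  c.CustomerNumber = o.CustomerNumber )

DBCC CHECKCONSTRAINTS ('FK_Orders_Customers') will also identify rows that violate a specific constraint.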

Now this example was a bit contrived. We're dealing with two very simple tables with a total of three rows of data. Performance will not be an issue no matter which query plan we end up with. But imagine you have a large data warehouse with millions of records. Each week, there is a new bulk load of data and someone forgot to code the import process to use the CHECK_CONSTRAINTS option. Queries against that data warehouse could end up taking much longer than they should.

How can you tell if you have any tables in your databases that have untrusted constraints? The sys.foreign_keys catalog view contains a column named is_not_trusted. If the value in that column is 1, the foreign key is not trusted. For check constraints, the sys.check_constraints view contains a column with the same name and functionality.
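Here's a quick sketch of a query that lists both kinds of untrusted constraints in the current database (OBJECT_SCHEMA_NAME is available from SQL 2005 SP2 on):

SELECT  OBJECT_SCHEMA_NAME(parent_object_id) AS SchemaName
       ,OBJECT_NAME(parent_object_id) AS TableName
       ,name AS ConstraintName
       ,'FOREIGN KEY' AS ConstraintType
FROM    sys.foreign_keys
WHERE   is_not_trusted = 1
UNION ALL
SELECT  OBJECT_SCHEMA_NAME(parent_object_id)
       ,OBJECT_NAME(parent_object_id)
       ,name
       ,'CHECK'
FROM    sys.check_constraints
WHERE   is_not_trusted = 1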

Below is some code that will search through all the foreign keys in a database and attempt to make them trusted. Note that this will only look at foreign keys that are enabled but not trusted. If any are disabled, this will not try to enable them.

DECLARE @CorrectedCount INT
DECLARE @FailedCount INT
DECLARE UntrustedForeignKeysCursor CURSOR
FOR
        SELECT  '[' + s.name + '].' + '[' + o.name + ']' AS TableName
               ,i.name AS FKName
        FROM    sys.foreign_keys i
                INNER JOIN sys.objects o ON i.parent_object_id = o.OBJECT_ID
                INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
        WHERE   i.is_not_trusted = 1
                AND i.is_not_for_replication = 0
                AND i.is_disabled = 0
        ORDER BY o.name

DECLARE @TableName AS VARCHAR(200)
DECLARE @FKName AS VARCHAR(200)

SET @CorrectedCount = 0
SET @FailedCount = 0

OPEN UntrustedForeignKeysCursor
FETCH NEXT FROM UntrustedForeignKeysCursor INTO @TableName, @FKName
WHILE @@FETCH_STATUS = 0
      BEGIN
            /* SELECT 'ALTER TABLE ' + @TableName + ' WITH CHECK CHECK CONSTRAINT [' + @FKName + ']' */

            BEGIN TRY
                  /* This try-catch allows the process to continue when a
                     constraint fails to get re-trusted */
                  EXECUTE('ALTER TABLE ' + @TableName + ' WITH CHECK CHECK CONSTRAINT [' + @FKName + ']')
                  SET @CorrectedCount = @CorrectedCount + 1
            END TRY
            BEGIN CATCH
                  SET @FailedCount = @FailedCount + 1
            END CATCH

            FETCH NEXT FROM UntrustedForeignKeysCursor INTO @TableName,
                  @FKName
      END

CLOSE UntrustedForeignKeysCursor
DEALLOCATE UntrustedForeignKeysCursor
SELECT  CAST(@CorrectedCount AS VARCHAR(10)) + ' constraints re-trusted.'
SELECT  CAST(@FailedCount AS VARCHAR(10))
        + ' constraints unable to be re-trusted.'

This code will only look for foreign keys that are untrusted. If you want to also check for untrusted constraints, change the table in the cursor definition from sys.foreign_keys to sys.check_constraints. Everything else can stay the same. The code will report a count of constraints it has fixed and was unable to fix.

As I said before, I was completely surprised by the number of databases I had that contained untrusted foreign keys and constraints. I recommend taking a look at your systems to see how many there are in your environment.

(Standard code disclaimers apply - do not run unless you understand what the code is doing. This code has been tested against SQL 2005 and SQL 2008 R2 servers.)


Everyone knows SQL Server loves memory. It will happily gobble up all the RAM you throw at it. On physical boxes, this may not be a big deal, especially if you plan for it and properly configure your max and min memory settings within SQL Server. RAM makes SQL Server run faster and who doesn't want that?

Of course I want to super size that sandwich! And throw on some Doritos and squeeze cheese, while you're at it.

In a virtual environment, this RAM gluttony can be a detriment. If you are just beginning to experiment with virtualizing SQL Server, odds are, the first servers you virtualize are going to be the lesser used ones. You (or your network / VM people) will likely just do a P2V of the server and you'll soon find yourself holding the keys to a shiny new VM. Presto-chango, you've just virtualized your SQL Server and you are done!

Not so fast. Think about what just happened. The P2V process cloned your physical hardware and made a VM out of it, without giving any thought to whether that hardware was correct for the system. Suppose the system you just virtualized is a little-used system that was built on a server sized for more active applications in the past. Perhaps the heavily used databases had been migrated off of this server over time and now the server is hosting half or one-third of the load it was originally built for. You could end up with a server that is way overpowered for its current load.

In the virtual world, this can hurt you. Remember that each VM is sharing its resources with other VMs on the same set of host hardware. So if your VM is running with 12 GB of RAM and 8 CPUs, that's fewer resources available to the other VMs on that host.

I will take a timeout here to point out that VM hosts do provide tools to share RAM and CPU amongst all VMs as the load on each VM changes. For example, "ballooning" is a method where the hypervisor reclaims memory from one VM (via a balloon driver running inside the guest) so it can temporarily satisfy memory needs on another. Of course, all these sharing techniques come with a price - when they occur, performance degrades. I'm lucky at my company because the VM team here is very conservative with our SQL Server VM hosts. They never oversubscribe RAM and are very conscientious about CPU allocation. In short, I never really have to worry about resource contention amongst my SQL VMs.

Be a good corporate citizen. If you don't need so many resources, give them back. Your network and VM admins will love you. Everyone is ALWAYS bugging them for more resources. No one ever tells them "Hey, I've got too much. You can have some back." The trick is determining if you do have too many resources.

I'm going to focus on RAM only in this post, because this is a situation I found myself in recently. As part of my normal DBA monitoring processes, I was reviewing the page life expectancy of my SQL Servers using the Typeperf data collection tool I've written about previously. I noticed one SQL Server had an absurdly high page life expectancy:

This is what too much RAM looks like

This particular server has 12 GB of RAM. Three million seconds is just over 34 days. That's a long time for SQL to keep data in memory. Also, note the pattern. The drop offs are when the server was rebooted for patching. When the server comes back up, it basically loads data into memory and never needs to flush it out again.
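If you want to spot-check this counter yourself without setting up a full data collection process, a one-off sample from a command prompt looks something like this (the counter path assumes a default instance; a named instance uses MSSQL$InstanceName in place of SQLServer):

typeperf "\SQLServer:Buffer Manager\Page life expectancy" -si 15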

Now, of course, whether or not this represents a waste of resources depends on your situation. If this was for a heavily used report server, this could be a highly desired performance pattern. But in my case, this chart is for a SQL Server back-end of a phone system. There are no other databases on the system and it is not under a heavy load. Also remember what I said previously about my VM admins - they do not over-allocate RAM on SQL Server VMs. So I've clearly got RAM on this VM that could most likely be better utilized elsewhere.

So what do I do to correct this? Luckily, the solution is fairly easy. By changing SQL Server's maximum memory setting, I can restrict the amount of memory SQL Server can use to a lower value and see how that affects performance. Furthermore, this is a setting that can be changed on the fly, so no downtime is required. In my case, I configured SQL to use a maximum of 7 GB of RAM (which would reserve 1 GB for the OS on an 8 GB system) and am letting it run for a couple weeks. If no performance issues are noted, I will reconfigure this VM to have 8 GB of RAM instead of 12 GB and I will reallocate that 4 GB RAM to another one of my SQL Server VMs on that same host that I know can use more RAM. And if performance issues do crop up, it's a quick fix to add the RAM back by increasing SQL's max memory setting again. By contrast, changing the amount of RAM in a VM requires a reboot, so that is why I'm testing first by changing the SQL Server memory settings.
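For reference, capping SQL Server's memory is a quick sp_configure change. Something like this sketch sets my 7 GB cap (the value is in MB, and max server memory is an advanced option, so it has to be made visible first):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Cap SQL Server at 7 GB; takes effect immediately, no restart needed
EXEC sp_configure 'max server memory (MB)', 7168
RECONFIGURE
GO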


Microsoft released SP2 for SQL Server 2008 R2 a couple weeks ago and I've been applying it to my servers. Most of the time it installed without problems, but I encountered a very puzzling error on one server. When I ran the service pack installation, I saw a DOS window pop up and disappear quickly and nothing else happened. The temporary directory that the service pack process creates was deleted.

I managed to get a copy of the temporary directory from another server while I was installing the service pack there and moved it to my troublesome server, so I could see what was happening before it got deleted. I opened an administrative DOS prompt so I could see any errors without the window closing. When I ran setup.exe from the command prompt, all I saw was the copyright notice for the service pack:

Microsoft (R) SQL Server 2008 R2 Setup 10.50.4000.00
Copyright (c) Microsoft Corporation. All rights reserved.

Then I was dropped back to the command prompt. As far as I could tell, no log files were created. I checked the normal SQL installation log file location (C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap) but that directory did not exist. UAC was disabled on this machine. I cleared the IE cache, rebooted the machine, and even verified the Windows Installer service was running. I also checked Windows Update and applied all the patches the machine needed. None of that solved my problem.

This was very strange. Without a log, I didn't know how I was going to troubleshoot this. A couple suggestions from the forums at SQLServerCentral.com pointed me in the direction of .NET, so I went into Add / Remove Programs and did a Repair in the .NET installation. That completed, but did not solve the problem.

Not believing Microsoft wouldn't make a log file somewhere, I searched the hard drive for recently created files. Bingo! I found a log file at C:\Users\<username>\AppData\Local\Temp\SqlSetup.log. Opening that showed me some steps the installer was trying to do. The last few lines were:

08/02/2012 06:54:45.749 Attempt to initialize SQL setup code group
08/02/2012 06:54:45.751 Attempting to determine security.config file path
08/02/2012 06:54:45.763 Checking to see if policy file exists
08/02/2012 06:54:45.764 .Net security policy file does exist
08/02/2012 06:54:45.766 Attempting to load .Net security policy file
08/02/2012 06:54:45.772 Error: Cannot load .Net security policy file
08/02/2012 06:54:45.774 Error: InitializeSqlSetupCodeGroupCore(64bit) failed
08/02/2012 06:54:45.777 Error: InitializeSqlSetupCodeGroup failed: 0x80004005
08/02/2012 06:54:45.779 Setup closed with exit code: 0x80004005

Hmm. It seemed the problem was related to .NET after all. Someone else had a similar problem and posted about it at http://www.sqlservercentral.com/Forums/Topic1262389-391-4.aspx. The solution for that person was to reset the .NET security policy file using the caspol.exe utility. I tried that and it did not solve my problem. However, the error log still seemed to indicate this file was the issue, so I did some more digging. I found this post from Microsoft giving the location of the security policy files. The previous post said one way to restore your system to a usable state was simply to delete these files. So that's what I did. When I re-ran the SP2 installation, I had the same issue and, more surprisingly, the log file still included the line ".Net security policy file does exist".
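For those curious, the caspol reset takes roughly this form (run from an administrative prompt; the framework version folder may differ on your machine):

%windir%\Microsoft.NET\Framework64\v2.0.50727\caspol.exe -machine -reset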

So I searched the entire drive for all occurrences of Security.config and Security.cch and found another copy in the C:\Users\<username>\AppData\Roaming directory. Once I deleted that, the SP2 installation program was able to run.

Last time, I wrote about how to set up a basic maintenance plan to back up your databases on a regular basis to avoid having your transaction logs grow out of control and fill up your disk. As I mentioned at the end of that article, that routine creates backup files, but it does not delete them, so you could still end up running out of disk space. Today, I'll show you how to modify the maintenance plan we made to take care of this.

I'm going to repeat the same disclaimer I gave last time:  This tutorial is intended for accidental DBAs - people whose primary job role is something else, but ended up in charge of one or more SQL Servers. It will create a very basic backup plan that will prevent transaction logs from growing to eat up all your disk space and give you a basic level of data protection. It is not meant as a substitute for someone with database experience who can actively manage your environment.

Before we get into modifying the maintenance plan however, I want to give a brief overview of how SQL Server backups work. The maintenance plan we defined creates a full backup each Sunday, differential backups Monday through Saturday, and transaction log backups hourly. In order to determine what backup files we can delete, we need to understand what files SQL Server needs in order to restore a backup. Take a look at this calendar:

Suppose today is the 30th and we need to restore the backup that was taken at midnight on the 27th. The 27th was a Friday, so the backup taken that morning was a differential backup. In order to restore that, we need the full backup it was based on, namely the full backup taken on the 22nd. (Note SQL Server uses differential backups, not incremental backups. Therefore, in this scenario, we don't need to restore the backups taken on the mornings of the 23rd through 26th. Each differential backup contains all the changes made to the database since the last full backup was made.)
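To make that concrete, restoring to midnight on the 27th would look roughly like this (the database name and file paths are made up for illustration):

-- Restore the full backup from the 22nd, leaving the database
-- ready to accept further restores
RESTORE DATABASE Sales
FROM DISK = N'D:\Backups\Sales\Sales_Full_0122.bak'
WITH NORECOVERY

-- Apply only the differential from the 27th and bring the database online
RESTORE DATABASE Sales
FROM DISK = N'D:\Backups\Sales\Sales_Diff_0127.dif'
WITH RECOVERY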

Now we can figure out what backup files we need to retain and for how long. I'm going to assume our business requirements are that we need to be able to restore the databases to any day within the past four weeks. Additionally, we need to be able to restore the databases to the point of failure during the current day. In other words, our databases are used during the day - perhaps they are online transactional databases for taking product orders. At the end of each day, the orders are finalized and we no longer care about recovering to a point in time for that day. For instance, if today is the 27th, we will never need to restore to the 24th at 5:23 PM. We only would need to restore to either the 24th at midnight or the 25th at midnight. We may need to restore to the 27th at 9:12 AM however.

Given this, we can conclude that we need to retain four weeks' worth of full and differential backups and one day's worth of transaction log backups. So, how do we modify our maintenance plan to do this? Easy.

First, in SSMS, connect to your SQL Server and expand the Management node. Expand the Maintenance Plans node and you should see your maintenance plan. Right click it and choose Modify. (Click any screenshot to embiggen.)

This will open up the plan for editing inside SSMS. Across the top of the pane, you will see a list of your subplans. Recall that when we initially made this plan, the first subplan was for full backups, the second was for differential backups, and the last was for transaction log backups. We will be adding a Maintenance Cleanup Task to each of those subplans. First, let's change Subplan_1. When you first open the plan, you will probably see something like this:

You can move the existing task and enlarge it so you can see all the text. Drag the Maintenance Cleanup Task from the toolbox on the left into the main pane. Click on the Back Up Database (Full) task to select it. You will see an arrow appear at the bottom of the box. Drag the head of the arrow down to the new Maintenance Cleanup Task you just created. You should see something similar to this:

The green arrow tells SQL Server to continue to the Maintenance Cleanup Task if the Backup task successfully completes. (You can set up other tasks for cases of failure, but that is outside the scope of this tutorial.) Now, right click on the Maintenance Cleanup Task and choose Edit... You will be presented with the following screen:

Notice the items I circled in red. The path in the Folder field should be the path you are storing your backups in. The BAK extension is the default for SQL Server backup files. We need to check the Include first-level subfolders box because when we made the maintenance plan, we told the wizard to create a separate subfolder for each database. This check box tells SQL to recurse the folders one level deep when looking for files to delete. The option to delete files older than 4 weeks is the default setting and we don't need to change it. Click OK to accept these settings.

We've now made changes to the full backup portion of the maintenance plan to delete old backup files. The next step is to do the same thing for the differential backup files. At the top of the editing pane, click the Subplan_2 line to switch to editing that subplan. I'm going to make one change here to make the file maintenance process a bit easier. Once again, move and resize the Back Up Database (Differential) task so you can read it. Right click the task and choose Edit... You'll see the following screen:

Change the backup file extension field from the default of BAK to DIF. I'm doing this simply to make it easier to differentiate between the full and differential backup files because SQL Server uses the same extension for both by default. Click the OK button to accept this change. As we did previously, drag the Maintenance Cleanup Task from the toolbox to the editing pane and connect the arrow from the backup task to it:

Now, right click the Cleanup task and choose Edit... As we did before, we're going to specify the path where the backup files are located, the file extension (which we changed to dif), and tell SQL to recurse one level of subfolders. We can again accept the Delete files older than 4 weeks default.

Click OK to accept the changes. This completes our work on the differential backup subplan.

Click on Subplan_3 at the top of the editing pane to select the transaction log backup subplan, move and resize the backup task, drag out a new maintenance cleanup task, and connect it by arrow to the backup task.

Right click the Maintenance Cleanup Task and choose Edit... Make the following changes:

Note here we have to change the Delete Files Older Than setting from the default of 4 weeks to 24 hours. This is because our business needs say we only need to recover to a point in time for the current day. Click OK to accept.

We have now set up automatic deletion of our old backup files. If we had to, we could stop here. However, there are still two more things we need to manage - one is the text files the maintenance plan generates. We don't need those hanging around forever. The second is something the accidental DBA might not know about. SQL Server stores records of each backup it takes, history of each job that executes, and history of each time the maintenance plan runs. If you don't actively manage these, your MSDB database will grow. (That database contains the system tables where this information is stored.)

Let's tackle the second one first. Switch back to the first subplan by clicking Subplan_1 at the top of the edit pane. Drag the History Cleanup Task from the toolbox into the editing pane and connect it to the Maintenance Cleanup Task.

Right click the History Cleanup Task and choose Edit... to bring up the following screen:

The options shown are all defaults and can be kept. Click OK to accept.
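If you're curious what this task does under the covers, it boils down to calls against a few msdb stored procedures. A hand-run equivalent would look roughly like this (the four-week cutoff mirrors the defaults shown in the dialog):

DECLARE @Cutoff DATETIME
SET @Cutoff = DATEADD(wk, -4, GETDATE())

-- Remove backup and restore history older than the cutoff
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @Cutoff

-- Remove SQL Agent job history older than the cutoff
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @Cutoff

-- Remove maintenance plan run history older than the cutoff
EXEC msdb.dbo.sp_maintplan_delete_log @oldest_time = @Cutoff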

The final thing we need to do is to manage the text files the maintenance plan creates. First, we need to find the directory where the files are being written. Do this by clicking the Reporting and Logging button in the tool bar:

This will open up the window shown below. Make a note of the path specified as we will need it later.

Click Cancel to close the window without saving any changes. Now, drag a new Maintenance Cleanup Task into Subplan_1 and connect it with an arrow as shown:

Edit this task. This time, select the Maintenance Plan text reports radio button. Paste the path you found in the previous step into the Folder box:

Click OK to save the task. That completes our edits! Save the new plan by clicking the Disk icon.

If your backup policies require you to maintain backups for a different length of time than we set up here, it should be relatively straightforward to modify the times in this example to suit your needs. The important thing to watch out for is that you always have the last full backup needed to restore a differential backup.

You can also make the plan easier to read by changing the names of the tasks (the text in bold). You can do this by single clicking the task to select it, then single clicking the bolded text to change it.


I got an email the other day from a friend who needed some SQL help. He had a SQL Server with a database whose transaction log had grown and was filling the entire disk drive. This is a common problem that system administrators face in shops that do not have a DBA on staff. The cause, of course, is that the database was in full recovery mode and no transaction log backups were being made.

My friend was asking if it was safe to use BACKUP LOG with TRUNCATE_ONLY. This was advice he had found via Google. While that used to work, it is never a good idea. If you do that, you've just lost the ability to do a point in time recovery, so you've seriously compromised yourself should your system fail. Furthermore, the TRUNCATE_ONLY option was removed with SQL 2008, so this might not even have done anything on his system.

So I walked him through the process of backing up his log file and then shrinking it back down to a manageable size. (It turns out there hadn't been any backups made in 9 months.) I then walked him through creating a backup plan so that this wouldn't happen again. Given that this is a common problem, I thought it was worth showing step-by-step instructions on how to do this.
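The immediate fix for his server went roughly like this (the database name and logical log file name are placeholders; the real logical name comes from sys.database_files):

USE MyDatabase
GO

-- Back up the log so the inactive portion can be reused
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_Log.trn'

-- Shrink the physical log file back down to a manageable size (in MB)
DBCC SHRINKFILE (MyDatabase_log, 1024)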

Note: This tutorial is intended for accidental DBAs - people whose primary job role is something else, but ended up in charge of one or more SQL Servers. It will create a very basic backup plan that will prevent transaction logs from growing to eat up all your disk space and give you a basic level of data protection. It is not meant as a substitute for someone with database experience who can actively manage your environment.

This plan will take a full backup each Sunday night at midnight, a differential backup at midnight Monday through Saturday and transact log backups every hour. It should be fairly obvious how to adjust this schedule to suit your particular needs.

This plan was developed with SQL Server 2005, but the steps should be the same on SQL Server 2008, 2008 R2, and 2012, although the screens may be slightly different. You will likely need sa rights on the server and this will be built using the wizard in SQL Server Management Studio (SSMS).

The first step is to launch the maintenance plan wizard. In SSMS, you do this by connecting to the SQL Server, expanding the Management node, and right-clicking on the Maintenance Plans folder.

This will bring up a new window where you can name your plan and choose how you want to schedule the various parts of the plan. I've named my plan "Backup Plan," but you are free to name it whatever you want. (Except "Brittnie." That's just wrong.) Select the radio button to have separate schedules for each task, then click Next.

The next window that appears gives you a list of tasks you want the plan to perform. We're going to choose all the backup tasks.

The next screen lets you set the order the tasks are performed in. We're going to manually schedule each step, so there is no need to change anything here. Just click Next to accept.

Now you'll be presented with a screen to configure the full database backup task. From the Database drop down list, select All Databases and click OK.

Now, you need to tell the wizard where you want the database backups stored. Towards the bottom of the window, there is a Folder: field. Enter the path here. UNC paths are supported. Also, select the two checkboxes I have circled.

Now let's tell the wizard when we want the full backups made. Click on the Change... button near the bottom of the above window. You'll be presented with the screen below. We're going to perform full backups each Sunday at midnight, so set the options as shown, then click OK.

Click Next to proceed to setting up the differential backup task. For this task, we are only going to backup the user databases, so make the following selection from the Databases drop down list:

Once again, we need to tell SQL Server where we want these files stored:

And now we have to tell SQL Server when to take the differential backups. Click the Change... button to set the schedule. We'll take differential backups Monday through Saturday (full backups are being taken on Sunday, so we don't need to take a differential that day). Again, these will run at midnight.

Click OK to accept the schedule, then Next to move on to the transaction log backup task. Once again, choose All User Databases from the drop down list.

Again, define the save path and select the two check boxes.

Now we'll set up the times we want the transaction log backups to occur. This will be a bit different from the other two we set up because we want these to run every hour, not once a day. Also note we're setting the starting time to 1 AM. This is because our full and differential backup jobs run at midnight, so there is no need to also take a transaction log backup at that time.

Click OK to accept the schedule and then Next to move on to the next screen in the wizard. Here, we will specify that the backup jobs write their output to a text file. This is useful for troubleshooting purposes in case the job fails for some reason.

Click Next to get to the wizard summary screen:

Click Finish and the wizard will create your jobs. If you now refresh the Maintenance Plans node in the left pane of SSMS, you should see your new plan.

And finally, if you open the SQL Agent node and double-click on the Job Activity Monitor node, you'll get a list of jobs on the server. You'll see the one the wizard just created:

My jobs are disabled, which is why the icons are grey, but by default, the wizard enables the jobs when they are created, so they will run at the defined time.

You've now got backups being made and you are managing your transaction logs. Congratulations!

But this isn't all there is to do. The astute reader will realize that we have not set up any method of purging old backups or the job output text files. Left as is, these will just accumulate and fill up whatever disk you are storing them on. Next time, I'll show how to edit these maintenance plans to include steps to purge old files.

Another item to note: we selected all user databases to have differential and transaction log backups taken. This was to ensure that any newly created database automatically gets picked up by the plan. However, this can cause problems because a differential or transaction log backup cannot be taken until a full backup of the database has been taken. So if you have a developer who creates a new database on Tuesday, the transaction log and differential backup jobs will start failing until a full backup has been made of that database. The solution, of course, is to make a full backup of the new database, as shown below, and then the jobs will work.
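That one-time full backup is a single statement, along these lines (names are placeholders):

BACKUP DATABASE NewDatabase
TO DISK = N'D:\Backups\NewDatabase\NewDatabase_Full.bak'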

 
