Shaun J Stuart

Just another SQL Server weblog

As a database professional, I see a lot of instructions from vendors on how they want their SQL Server backend configured. Many times, the recommended configuration stinks. It's not their fault - the companies typically don't have anyone on staff whose job it is to learn how to configure each database platform they support for optimum performance.

Today, I came across the opposite situation. I got some recommended SQL Server configurations that were dead-on correct. This was the recommendation:

Do not activate the option "autoshrink" in the database.

And later on, in a different document, there was this:

Do not, by any means, activate the option "autoshrink" in the database.
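
And if you ever inherit a server where someone did turn it on, it's quick to find and fix. Here's a minimal sketch (YourDatabase is a placeholder name):

-- Find any databases with auto_shrink enabled (ideally, this returns no rows)
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_shrink_on = 1;

-- Turn it off for an offender
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;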

 

So kudos to you, UC4! You are making DBAs everywhere happy.

 

We recently upgraded our installation of Great Plains from version 10 to Dynamics GP 2010 and ran into some difficulties with the upgrade hanging. We're using SQL 2008 R2 as our database back end. Let me first state that I was not involved in this upgrade and it was our vendor who was working with Microsoft on resolving this issue.

According to our vendor, the Microsoft Senior Support Escalation Engineer she was working with has seen some problems with the upgrade hanging. Microsoft has narrowed the problem down to a change that was made to the way the GL history table was upgraded in the most recent service pack and he thought the cause might be related to duplicate records in the fiscal period setup table. We did not have any duplicate records, so this was not the cause of our problems. Microsoft recommended setting the MaxDOP of the SQL Server to 1 and increasing the size of the tempdb logfile to 50 GB and ensuring there was enough disk space for that to grow, if it needed to. We made those changes and the upgrade succeeded.
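
For anyone facing the same hang, here's roughly what those two changes look like in T-SQL. One assumption: templog is the default logical name for the tempdb log file - check sys.master_files if yours differs.

-- Note the current MaxDOP first so you can set it back after the upgrade
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Pre-grow the tempdb log to 50 GB (logical name templog is the default)
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 51200MB);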

I know. From a DBA viewpoint, I don't see how those changes should have any effect. Our tempdb log file was not that big on our failed upgrade attempts, but there was certainly room on the drive for it to grow that large. My log autogrow setting is 10 MB, so if it needed to grow, it should have grown to almost fill the disk before it failed, but I did not see that happening. I also fail to see how setting MaxDOP to 1 would help. (We were doing this on a test server that had no other load or databases on it.) The upgrade did take 7.5 hours to convert our 4 company databases. Our vendor said Microsoft felt this was a bit long, but if you are limiting yourself to one processor, I would expect it to take a while.

I googled this a bit and found nothing, so I'm posting this to get the information out there. I have no good explanation for why these changes worked, but they seemed to do the trick. After the upgrade, remember to change your MaxDOP setting back to what it was originally.

 

Update: The above was for a test server. Today, we are doing this on production, a system that has more RAM and more processors. Turns out, the issue is that we had a really big table (69 million rows) and the upgrade apparently copies that table to tempdb, manipulates the data, then copies it back for some reason. We started the production upgrade without changing MaxDOP to 1 and it took much longer to work on this table - 12 hours and counting, versus 8 in the test environment. I saw lots of CXPACKET waits, but nothing too horrible. The upgrade looked to be working in batches of 500,000 records. After 12 hours, I switched MaxDOP to 1 and it finished processing that table (although after 12 hours, it might have been reaching the end already...)

Everyone knows SQL Server loves memory. It will happily gobble up all the RAM you throw at it. On physical boxes, this may not be a big deal, especially if you plan for it and properly configure your max and min memory settings within SQL Server. RAM makes SQL Server run faster, and who doesn't want that?

Of course I want to super size that sandwich! And throw on some Doritos and squeeze cheese, while you're at it.

In a virtual environment, this RAM gluttony can be a detriment. If you are just beginning to experiment with virtualizing SQL Server, odds are, the first servers you virtualize are going to be the lesser used ones. You (or your network / VM people) will likely just do a P2V of the server and you'll soon find yourself holding the keys to a shiny new VM. Presto-chango, you've just virtualized your SQL Server and you are done!

Not so fast. Think about what just happened. The P2V process cloned your physical hardware and made a VM out of it, without giving any thought to whether that hardware was right for the system. Suppose the system you just virtualized was a little-used system that was built on a server that hosted more active programs in the past. Perhaps the heavily used databases had been migrated off of this server over time and now the server is hosting half or one-third of the load it was originally built for. You could end up with a server that is way overpowered for its current load.

In the virtual world, this can hurt you. Remember that each VM is sharing its resources with other VMs on the same set of host hardware. So if your VM is running with 12 GB of RAM and 8 CPUs, that's fewer resources available to the other VMs on that host.

I will take a timeout here to point out that VM hosts do provide tools to share RAM and CPU amongst all VMs as the load on each VM changes. For example, "ballooning" is a method where the host can reclaim idle RAM from one VM to temporarily satisfy memory needs on another. Of course, all these sharing techniques come with a price - when they occur, performance degrades. I'm lucky at my company because the VM team here is very conservative with our SQL Server VM hosts. They never oversubscribe RAM and are very conscientious about CPU allocation. In short, I never really have to worry about resource contention amongst my SQL VMs.

Be a good corporate citizen. If you don't need so many resources, give them back. Your network and VM admins will love you. Everyone is ALWAYS bugging them for more resources. No one ever tells them "Hey, I've got too much. You can have some back." The trick is determining if you do have too many resources.

I'm going to focus on RAM only in this post, because this is a situation I found myself in recently. As part of my normal DBA monitoring processes, I was reviewing the page life expectancy of my SQL Servers using the Typeperf data collection tool I've written about previously. I noticed one SQL Server had an absurdly high page life expectancy:

This is what too much RAM looks like

This particular server has 12 GB of RAM. Three million seconds is just over 34 days. That's a long time for SQL to keep data in memory. Also, note the pattern. The drop offs are when the server was rebooted for patching. When the server comes back up, it basically loads data into memory and never needs to flush it out again.
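
If you don't have a data collection process in place, you can spot-check this counter yourself. Here's a minimal query against the performance counter DMV (the LIKE handles named instances, whose object names differ slightly):

-- Current page life expectancy, in seconds
SELECT [object_name], counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
AND [object_name] LIKE '%Buffer Manager%';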

Now, of course, whether or not this represents a waste of resources depends on your situation. If this was for a heavily used report server, this could be a highly desired performance pattern. But in my case, this chart is for a SQL Server back-end of a phone system. There are no other databases on the system and it is not under a heavy load. Also remember what I said previously about my VM admins - they do not over-allocate RAM on SQL Server VMs. So I've clearly got RAM on this VM that could most likely be better utilized elsewhere.

So what do I do to correct this? Luckily, the solution is fairly easy. By changing SQL Server's maximum memory setting, I can restrict the amount of memory SQL Server can use to a lower value and see how that affects performance. Furthermore, this is a setting that can be changed on the fly, so no downtime is required. In my case, I configured SQL to use a maximum of 7 GB of RAM (which would reserve 1 GB for the OS on an 8 GB system) and am letting it run for a couple weeks. If no performance issues are noted, I will reconfigure this VM to have 8 GB of RAM instead of 12 GB and I will reallocate that 4 GB RAM to another one of my SQL Server VMs on that same host that I know can use more RAM. And if performance issues do crop up, it's a quick fix to add the RAM back by increasing SQL's max memory setting again. By contrast, changing the amount of RAM in a VM requires a reboot, so that is why I'm testing first by changing the SQL Server memory settings.
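
The change itself is a one-liner (7168 MB = 7 GB), and it takes effect immediately, no restart required:

-- Cap SQL Server at 7 GB, leaving roughly 1 GB for the OS on an 8 GB VM
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 7168;
RECONFIGURE;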

Microsoft released SP2 for SQL Server 2008 R2 a couple weeks ago and I've been applying it to my servers. Most of the time it installed without problems, but I encountered a very puzzling error on one server. When I ran the service pack installation, I saw a DOS window pop up and disappear quickly and nothing else happened. The temporary directory that the service pack process creates was deleted.

I managed to get a copy of the temporary directory from another server while I was installing the service pack there and moved it to my troublesome server, so I could see what was happening before it got deleted. I opened an administrative DOS prompt so I could see any errors without the window closing. When I ran setup.exe from the command prompt, all I saw was the copyright notice for the service pack:

Microsoft (R) SQL Server 2008 R2 Setup 10.50.4000.00
Copyright (c) Microsoft Corporation. All rights reserved.

Then I was dropped back to the command prompt. As far as I could tell, no log files were created. I checked the normal SQL installation log file location (C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap) but that directory did not exist. UAC was disabled on this machine. I cleared the IE cache, rebooted the machine, and even verified the Windows Installer service was running. I also checked Windows Update and applied all the patches the machine needed. None of that solved my problem.

This was very strange. Without a log, I didn't know how I was going to troubleshoot this. A couple suggestions from the forums at SQLServerCentral.com pointed me in the direction of .NET, so I went into Add / Remove Programs and did a Repair in the .NET installation. That completed, but did not solve the problem.

Not believing Microsoft wouldn't make a log file somewhere, I searched the hard drive for recently created files. Bingo! I found a log file at C:\Users\<username>\AppData\Local\Temp\SqlSetup.log. Opening that showed me some steps the installer was trying to do. The last few lines were:

08/02/2012 06:54:45.749 Attempt to initialize SQL setup code group
08/02/2012 06:54:45.751 Attempting to determine security.config file path
08/02/2012 06:54:45.763 Checking to see if policy file exists
08/02/2012 06:54:45.764 .Net security policy file does exist
08/02/2012 06:54:45.766 Attempting to load .Net security policy file
08/02/2012 06:54:45.772 Error: Cannot load .Net security policy file
08/02/2012 06:54:45.774 Error: InitializeSqlSetupCodeGroupCore(64bit) failed
08/02/2012 06:54:45.777 Error: InitializeSqlSetupCodeGroup failed: 0x80004005
08/02/2012 06:54:45.779 Setup closed with exit code: 0x80004005

Hmm. It seemed the problem was related to .NET after all. Someone else had a similar problem and posted about it at http://www.sqlservercentral.com/Forums/Topic1262389-391-4.aspx. The solution for that person was to reset the .NET security policy file using the caspol.exe utility. I tried that and it did not solve my problem. However, the error log still seemed to indicate this file was the issue, so I did some more digging. I found this post from Microsoft giving the location of the security policy files. The previous post said one way to restore your system to a usable state was simply to delete these files. So that's what I did. When I re-ran the SP2 installation, I had the same issue and, more surprisingly, the log file still included the line ".Net security policy file does exist".

So I searched the entire drive for all occurrences of Security.config and Security.cch and found another copy in the C:\Users\<username>\AppData\Roaming directory. Once I deleted that, the SP2 installation program was able to run.

 

 

Last time, I wrote about how to set up a basic maintenance plan to back up your databases on a regular basis to avoid having your transaction logs grow out of control and fill up your disk. As I mentioned at the end of that article, that routine creates backup files, but it does not delete them, so you could still end up running out of disk space. Today, I'll show you how to modify the maintenance plan we made to take care of this.

I'm going to repeat the same disclaimer I gave last time: This tutorial is intended for accidental DBAs - people whose primary job role is something else, but ended up in charge of one or more SQL Servers. It will create a very basic backup plan that will prevent transaction logs from growing to eat up all your disk space and give you a basic level of data protection. It is not meant as a substitute for someone with database experience who can actively manage your environment.

Before we get into modifying the maintenance plan, however, I want to give a brief overview of how SQL Server backups work. The maintenance plan we defined creates a full backup each Sunday, differential backups Monday through Saturday, and transaction log backups hourly. In order to determine what backup files we can delete, we need to understand what files SQL Server needs in order to restore a backup. Take a look at this calendar:

Suppose today is the 30th and we need to restore the backup that was taken at midnight on the 27th. The 27th was a Friday, so the backup taken that morning was a differential backup. In order to restore that, we need the full backup it was based on, namely the full backup taken on the 22nd. (Note SQL Server uses differential backups, not incremental backups. Therefore, in this scenario, we don't need to restore the backups taken on the mornings of the 23rd through 26th. Each differential backup contains all the changes made to the database since the last full backup was made.)
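
To make that concrete, here is roughly what restoring to midnight on the 27th looks like in T-SQL. The file names and paths are made up for illustration; yours will match whatever your maintenance plan generates.

-- Restore the full backup from the 22nd, leaving the database ready for more files
RESTORE DATABASE YourDatabase
FROM DISK = N'D:\SQLBackups\YourDatabase\YourDatabase_Full_0722.bak'
WITH NORECOVERY;

-- Apply the differential from the 27th and bring the database online
RESTORE DATABASE YourDatabase
FROM DISK = N'D:\SQLBackups\YourDatabase\YourDatabase_Diff_0727.dif'
WITH RECOVERY;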

Now we can figure out what backup files we need to retain and for how long. I'm going to assume our business requirements are that we need to be able to restore the databases to any day within the past four weeks. Additionally, we need to be able to restore the databases to the point of failure during the current day. In other words, our databases are used during the day - perhaps they are online transactional databases for taking product orders. At the end of each day, the orders are finalized and we no longer care about recovering to a point in time for that day. For instance, if today is the 27th, we will never need to restore to the 24th at 5:23 PM. We only would need to restore to either the 24th at midnight or the 25th at midnight. We may need to restore to the 27th at 9:12 AM however.
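
That last scenario adds one more step to the restore sketch above: restore the full and differential backups WITH NORECOVERY, then replay the hourly transaction log backups until you reach the one covering 9:12 AM and stop at that exact moment. Again, the file name here is made up.

-- Apply each hourly log backup WITH NORECOVERY, then stop inside the one covering 9:12 AM
RESTORE LOG YourDatabase
FROM DISK = N'D:\SQLBackups\YourDatabase\YourDatabase_Log_1000.trn'
WITH STOPAT = '2012-07-27 09:12:00', RECOVERY;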

Given this, we can conclude that we need to retain four weeks' worth of full and differential backups and one day's worth of transaction log backups. So, how do we modify our maintenance plan to do this? Easy.

First, in SSMS, connect to your SQL Server and expand the Management node. Expand the Maintenance Plans node and you should see your maintenance plan. Right click it and choose Modify. (Click any screenshot to embiggen.)

This will open up the plan for editing inside SSMS. Across the top of the pane, you will see a list of your subplans. Recall that when we initially made this plan, the first subplan was for full backups, the second was for differential backups, and the last was for transaction log backups. We will be adding a Maintenance Cleanup Task to each of those subplans. First, let's change Subplan_1. When you first open the plan, you will probably see something like this:

You can move the existing task and enlarge it so you can see all the text. Drag the Maintenance Cleanup Task from the toolbox on the left into the main pane. Click on the Back Up Database (Full) task to select it. You will see an arrow appear at the bottom of the box. Drag the head of the arrow down to the new Maintenance Cleanup Task you just created. You should see something similar to this:

The green arrow tells SQL Server to continue to the Maintenance Cleanup Task if the Backup task successfully completes. (You can set up other tasks for cases of failure, but that is outside the scope of this tutorial.) Now, right click on the Maintenance Cleanup Task and choose Edit... You will be presented with the following screen:

Notice the items I circled in red. The path in the Folder field should be the path you are storing your backups in. The BAK extension is the default for SQL Server backup files. We need to check the Include first-level subfolders box because when we made the maintenance plan, we told the wizard to create a separate subfolder for each database. This check box tells SQL to recurse the folders one level deep when looking for files to delete. The option to delete files older than 4 weeks is the default setting and we don't need to change it. Click OK to accept these settings.
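
As an aside, the Maintenance Cleanup Task is essentially a wrapper around the undocumented xp_delete_file procedure. The call the plan generates looks something like this (the folder and cutoff date here are examples):

-- Arguments: 0 = backup files, folder path, extension, cutoff date, 1 = include first-level subfolders
EXECUTE master.dbo.xp_delete_file 0, N'D:\SQLBackups', N'bak', N'2012-07-01T00:00:00', 1;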

We've now made changes to the full backup portion of the maintenance plan to delete old backup files. The next step is to do the same thing for the differential backup files. At the top of the editing pane, click the Subplan_2 line to switch to editing that subplan. I'm going to make one change here to make the file maintenance process a bit easier. Once again, move and resize the Back Up Database (Differential) task so you can read it. Right click the task and choose Edit... You'll see the following screen:

Change the backup file extension field from the default of BAK to DIF. I'm doing this simply to make it easier to differentiate between the full and differential backup files because SQL Server uses the same extension for both by default. Click the OK button to accept this change. As we did previously, drag the Maintenance Cleanup Task from the toolbox to the editing pane and connect the arrow from the backup task to it:

Now, right click the Cleanup task and choose Edit... As we did before, we're going to specify the path where the backup files are located, the file extension (which we changed to dif), and tell SQL to recurse one level of subfolders. We can again accept the Delete files older than 4 weeks default.

Click OK to accept the changes. This completes our work on the differential backup subplan.

Click on Subplan_3 at the top of the editing pane to select the transaction log backup subplan, move and resize the backup task, drag out a new maintenance cleanup task, and connect it by arrow to the backup task.

Right click the Maintenance Cleanup Task and choose Edit... Make the following changes:

Note here we have to change the Delete Files Older Than setting from the default of 4 weeks to 24 hours. This is because our business needs say we only need to recover to a point in time for the current day. Click OK to accept.

We have now set up automatic deletion of our old backup files. If we had to, we could stop here. However, there are still two more things we need to manage - one is the text files the maintenance plan generates. We don't need those hanging around forever. The second is something the accidental DBA might not know about. SQL Server stores records of each backup it takes, history of each job that executes, and history of each time the maintenance plan runs. If you don't actively manage these, your MSDB database will grow. (That database contains the system tables where this information is stored.)

Let's tackle the second one first. Switch back to the first subplan by clicking Subplan_1 at the top of the edit pane. Drag the History Cleanup Task from the toolbox into the editing pane and connect it to the Maintenance Cleanup Task.

Right click the History Cleanup Task and choose Edit... to bring up the following screen:

The options shown are all defaults and can be kept. Click OK to accept.
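
For reference, the History Cleanup Task boils down to three msdb stored procedure calls. A rough manual equivalent, assuming the default four-week retention shown in the dialog, would be:

-- Purge backup, job, and maintenance plan history older than four weeks
DECLARE @cutoff datetime;
SET @cutoff = DATEADD(wk, -4, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;
EXEC msdb.dbo.sp_maintplan_delete_log null, null, @cutoff;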

The final thing we need to do is to manage the text files the maintenance plan creates. First, we need to find the directory the files are being written to. Do this by clicking the Reporting and Logging button in the tool bar:

This will open up the window shown below. Make a note of the path specified as we will need it later.

Click Cancel to close the window without saving any changes. Now, drag a new Maintenance Cleanup Task into Subplan_1 and connect it with an arrow as shown:

Edit this task. This time, select the Maintenance Plan text reports radio button. Paste the path you found in the previous step into the Folder box:

Click OK to save the task. That completes our edits! Save the new plan by clicking the Disk icon.

If your backup policies require you to maintain backups for a different length of time than we set up here, it should be relatively straightforward to modify the times in this example to suit your needs. The important thing to watch out for is that you always have the last full backup needed to restore a differential backup.

You can also make the plan easier to read by changing the names of the tasks (the text in bold). You can do this by single clicking the task to select it, then single clicking the bolded text to change it.
