Shaun J Stuart

Just another SQL Server weblog


Microsoft released SP2 for SQL Server 2008 R2 a couple of weeks ago and I've been applying it to my servers. Most of the time it installed without problems, but I encountered a very puzzling error on one server. When I ran the service pack installation, a DOS window popped up and disappeared almost immediately, and nothing else happened. The temporary directory that the service pack process creates had also been deleted.

I managed to get a copy of the temporary directory from another server while I was installing the service pack there and moved it to my troublesome server so I could see what was happening before it got deleted. I opened an administrative DOS prompt so I could see any errors without the window closing. When I ran setup.exe from the command prompt, all I saw was the copyright notice for the service pack:

Microsoft (R) SQL Server 2008 R2 Setup 10.50.4000.00
Copyright (c) Microsoft Corporation. All rights reserved.

Then I was dropped back to the command prompt. As far as I could tell, no log files were created. I checked the normal SQL installation log file location (C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap) but that directory did not exist. UAC was disabled on this machine. I cleared the IE cache, rebooted the machine, and even verified the Windows Installer service was running. I also checked Windows Update and applied all the patches the machine needed. None of that solved my problem.

This was very strange. Without a log, I didn't know how I was going to troubleshoot this. A couple of suggestions from the forums at SQLServerCentral.com pointed me in the direction of .NET, so I went into Add / Remove Programs and ran a Repair on the .NET installation. That completed, but did not solve the problem.

Not believing Microsoft wouldn't make a log file somewhere, I searched the hard drive for recently created files. Bingo! I found a log file at C:\Users\<username>\AppData\Local\Temp\SqlSetup.log. Opening that showed me some steps the installer was trying to do. The last few lines were:

08/02/2012 06:54:45.749 Attempt to initialize SQL setup code group
08/02/2012 06:54:45.751 Attempting to determine security.config file path
08/02/2012 06:54:45.763 Checking to see if policy file exists
08/02/2012 06:54:45.764 .Net security policy file does exist
08/02/2012 06:54:45.766 Attempting to load .Net security policy file
08/02/2012 06:54:45.772 Error: Cannot load .Net security policy file
08/02/2012 06:54:45.774 Error: InitializeSqlSetupCodeGroupCore(64bit) failed
08/02/2012 06:54:45.777 Error: InitializeSqlSetupCodeGroup failed: 0x80004005
08/02/2012 06:54:45.779 Setup closed with exit code: 0x80004005

Hmm. It seemed the problem was related to .NET after all. Someone else had a similar problem and posted about it at http://www.sqlservercentral.com/Forums/Topic1262389-391-4.aspx. The solution for that person was to reset the .NET security policy file using the caspol.exe utility. I tried that, and it did not solve my problem. However, the error log still seemed to indicate this file was the issue, so I did some more digging. I found a post from Microsoft giving the location of the security policy files, which said one way to restore your system to a usable state was simply to delete those files. So that's what I did. When I re-ran the SP2 installation, I had the same issue and, more surprisingly, the log file still included the line ".Net security policy file does exist".

So I searched the entire drive for all occurrences of Security.config and Security.cch and found another copy in the C:\Users\<username>\AppData\Roaming directory. Once I deleted that, the SP2 installation program was able to run.


Last time, I wrote about how to set up a basic maintenance plan to back up your databases on a regular basis to avoid having your transaction logs grow out of control and fill up your disk. As I mentioned at the end of that article, that routine creates backup files, but it does not delete them, so you could still end up running out of disk space. Today, I'll show you how to modify the maintenance plan we made to take care of this.

I'm going to repeat the same disclaimer I gave last time: This tutorial is intended for accidental DBAs - people whose primary job role is something else but who ended up in charge of one or more SQL Servers. It will create a very basic backup plan that will prevent transaction logs from growing to eat up all your disk space and give you a basic level of data protection. It is not meant as a substitute for someone with database experience who can actively manage your environment.

Before we get into modifying the maintenance plan, however, I want to give a brief overview of how SQL Server backups work. The maintenance plan we defined creates a full backup each Sunday, differential backups Monday through Saturday, and transaction log backups hourly. In order to determine which backup files we can delete, we need to understand which files SQL Server needs in order to restore a backup. Take a look at this calendar:

Suppose today is the 30th and we need to restore the backup that was taken at midnight on the 27th. The 27th was a Friday, so the backup taken that morning was a differential backup. In order to restore it, we need the full backup it was based on, namely the full backup taken on the 22nd. (Note that SQL Server uses differential backups, not incremental backups. Therefore, in this scenario, we don't need to restore the backups taken on the mornings of the 23rd through 26th. Each differential backup contains all the changes made to the database since the last full backup.)
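In T-SQL terms, that restore sequence looks something like this (a minimal sketch; the database name and file paths are hypothetical):

-- Restore the full backup from the 22nd, leaving the database able to accept more restores
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB\SalesDB_full.bak'
WITH NORECOVERY;

-- Apply only the most recent differential (the 27th) and bring the database online
RESTORE DATABASE SalesDB
FROM DISK = N'D:\Backups\SalesDB\SalesDB_diff.dif'
WITH RECOVERY;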

Now we can figure out what backup files we need to retain and for how long. I'm going to assume our business requirements are that we need to be able to restore the databases to any day within the past four weeks. Additionally, we need to be able to restore the databases to the point of failure during the current day. In other words, our databases are used during the day - perhaps they are online transactional databases for taking product orders. At the end of each day, the orders are finalized and we no longer care about recovering to a point in time within that day. For instance, if today is the 27th, we will never need to restore to the 24th at 5:23 PM; we would only need to restore to either the 24th at midnight or the 25th at midnight. We may, however, need to restore to the 27th at 9:12 AM.
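For that 9:12 AM scenario, you would restore the full and differential backups WITH NORECOVERY, then replay the hourly transaction log backups, stopping partway through the one that spans 9:12. A hedged sketch, again with hypothetical names:

-- Apply log backups in sequence; on the one covering 9:12 AM, stop at that moment
RESTORE LOG SalesDB
FROM DISK = N'D:\Backups\SalesDB\SalesDB_log.trn'
WITH STOPAT = '2012-07-27 09:12:00', RECOVERY;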

Given this, we can conclude that we need to retain four weeks' worth of full and differential backups and one day's worth of transaction log backups. So, how do we modify our maintenance plan to do this? Easy.

First, in SSMS, connect to your SQL Server and expand the Management node. Expand the Maintenance Plans node and you should see your maintenance plan. Right-click it and choose Modify. (Click any screenshot to embiggen.)

This will open up the plan for editing inside SSMS. Across the top of the pane, you will see a list of your subplans. Recall that when we initially made this plan, the first subplan was for full backups, the second was for differential backups, and the last was for transaction log backups. We will be adding a Maintenance Cleanup Task to each of those subplans. First, let's change Subplan_1. When you first open the plan, you will probably see something like this:

You can move the existing task and enlarge it so you can see all the text. Drag the Maintenance Cleanup Task from the toolbox on the left into the main pane. Click on the Back Up Database (Full) task to select it. You will see an arrow appear at the bottom of the box. Drag the head of the arrow down to the new Maintenance Cleanup Task you just created. You should see something similar to this:

The green arrow tells SQL Server to continue to the Maintenance Cleanup Task if the Backup task successfully completes. (You can set up other tasks for cases of failure, but that is outside the scope of this tutorial.) Now, right click on the Maintenance Cleanup Task and choose Edit... You will be presented with the following screen:

Notice the items I circled in red. The path in the Folder field should be the path you are storing your backups in. The BAK extension is the default for SQL Server backup files. We need to check the Include first-level subfolders box because when we made the maintenance plan, we told the wizard to create a separate subfolder for each database. This check box tells SQL to recurse the folders one level deep when looking for files to delete. The option to delete files older than 4 weeks is the default setting and we don't need to change it. Click OK to accept these settings.
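For the curious, behind the scenes the Maintenance Cleanup Task boils down to a call to the undocumented xp_delete_file extended stored procedure. A rough sketch of what the task executes (the folder and cutoff date here are hypothetical):

-- First argument: 0 = delete backup files (1 = maintenance plan report files).
-- Then: folder to search, file extension, cutoff date, and
-- 1 = include first-level subfolders.
EXECUTE master.dbo.xp_delete_file 0, N'D:\Backups', N'bak', N'2012-07-01T00:00:00', 1;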

We've now made changes to the full backup portion of the maintenance plan to delete old backup files. The next step is to do the same thing for the differential backup files. At the top of the editing pane, click the Subplan_2 line to switch to editing that subplan. I'm going to make one change here to make the file maintenance process a bit easier. Once again, move and resize the Back Up Database (Differential) task so you can read it. Right-click the task and choose Edit... You'll see the following screen:

Change the backup file extension field from the default of BAK to DIF. I'm doing this simply to make it easier to differentiate between the full and differential backup files because SQL Server uses the same extension for both by default. Click the OK button to accept this change. As we did previously, drag the Maintenance Cleanup Task from the toolbox to the editing pane and connect the arrow from the backup task to it:

Now, right click the Cleanup task and choose Edit... As we did before, we're going to specify the path where the backup files are located, the file extension (which we changed to dif), and tell SQL to recurse one level of subfolders. We can again accept the Delete files older than 4 weeks default.

Click OK to accept the changes. This completes our work on the differential backup subplan.

Click on Subplan_3 at the top of the editing pane to select the transaction log backup subplan, move and resize the backup task, drag out a new maintenance cleanup task, and connect it by arrow to the backup task.

Right click the Maintenance Cleanup Task and choose Edit... Make the following changes:

Note here we have to change the Delete Files Older Than setting from the default of 4 weeks to 24 hours. This is because our business needs say we only need to recover to a point in time for the current day. Click OK to accept.

We have now set up automatic deletion of our old backup files. If we had to, we could stop here. However, there are still two more things to manage. The first is the text files the maintenance plan generates - we don't need those hanging around forever. The second is something the accidental DBA might not know about: SQL Server stores a record of each backup it takes, a history of each job that executes, and a history of each time the maintenance plan runs. If you don't actively manage these, your msdb database will grow. (That database contains the system tables where this information is stored.)

Let's tackle the second one first. Switch back to the first subplan by clicking Subplan_1 at the top of the edit pane. Drag the History Cleanup Task from the toolbox into the editing pane and connect it to the Maintenance Cleanup Task.

Right click the History Cleanup Task and choose Edit... to bring up the following screen:

The options shown are all defaults and can be kept. Click OK to accept.
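Under the covers, the History Cleanup Task runs roughly the following msdb procedures. A sketch using the default four-week retention:

-- Purge backup history, job history, and maintenance plan logs older than four weeks
DECLARE @cutoff DATETIME;
SET @cutoff = DATEADD(WEEK, -4, GETDATE());

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @cutoff;
EXEC msdb.dbo.sp_maintplan_delete_log @oldest_time = @cutoff;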

The final thing we need to do is manage the text files the maintenance plan creates. First, we need to find the directory the files are being written to. Do this by clicking the Reporting and Logging button in the toolbar:

This will open up the window shown below. Make a note of the path specified as we will need it later.

Click Cancel to close the window without saving any changes. Now, drag a new Maintenance Cleanup Task into Subplan_1 and connect it with an arrow as shown:

Edit this task. This time, select the Maintenance Plan text reports radio button. Paste the path you found in the previous step into the Folder box:

Click OK to save the task. That completes our edits! Save the new plan by clicking the Disk icon.

If your backup policies require you to maintain backups for a different length of time than we set up here, it should be relatively straightforward to modify the times in this example to suit your needs. The important thing to watch out for is that you always have the last full backup needed to restore a differential backup.

You can also make the plan easier to read by changing the names of the tasks (the text in bold). You can do this by single clicking the task to select it, then single clicking the bolded text to change it.


I got an email the other day from a friend who needed some SQL help. He had a SQL Server with a database whose transaction log had grown and was filling the entire disk drive. This is a common problem that system administrators face in shops that do not have a DBA on staff. The cause, of course, is that the database was in full recovery mode and no transaction log backups were being made.

My friend was asking if it was safe to use BACKUP LOG with TRUNCATE_ONLY, advice he had found via Google. While that used to work, it was never a good idea: you lose the ability to do a point-in-time recovery, so you've seriously compromised yourself should your system fail. Furthermore, the TRUNCATE_ONLY option was removed in SQL Server 2008, so on his system it might not even have done anything.

So I walked him through the process of backing up his log file and then shrinking it back down to a manageable size. (It turns out there hadn't been any backups made in 9 months.) I then walked him through creating a backup plan so that this wouldn't happen again. Given that this is a common problem, I thought it was worth showing step-by-step instructions on how to do this.
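The safe equivalent of what TRUNCATE_ONLY used to do is simply a real log backup followed by a shrink. A minimal sketch (the database name, logical log file name, and path are hypothetical):

-- Take a real transaction log backup, which preserves the recovery chain
BACKUP LOG OrdersDB TO DISK = N'D:\Backups\OrdersDB_log.trn';

-- Then shrink the log file back to a manageable size (here, about 1 GB)
USE OrdersDB;
DBCC SHRINKFILE (OrdersDB_log, 1024);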

Note: This tutorial is intended for accidental DBAs - people whose primary job role is something else but who ended up in charge of one or more SQL Servers. It will create a very basic backup plan that will prevent transaction logs from growing to eat up all your disk space and give you a basic level of data protection. It is not meant as a substitute for someone with database experience who can actively manage your environment.

This plan will take a full backup each Sunday at midnight, a differential backup at midnight Monday through Saturday, and transaction log backups every hour. It should be fairly obvious how to adjust this schedule to suit your particular needs.

This plan was developed with SQL Server 2005, but the steps should be the same on SQL Server 2008, 2008 R2, and 2012, although the screens may be slightly different. You will likely need sa rights on the server and this will be built using the wizard in SQL Server Management Studio (SSMS).

The first step is to launch the maintenance plan wizard. In SSMS, you do this by connecting to the SQL Server, expanding the Management node, and right-clicking on the Maintenance Plans folder.

This will bring up a new window where you can name your plan and choose how you want to schedule the various parts of the plan. I've named my plan "Backup Plan," but you are free to name it whatever you want. (Except "Brittnie." That's just wrong.) Select the radio button to have separate schedules for each task, then click Next.

The next window that appears presents a list of tasks the plan can perform. We're going to choose all the backup tasks.

The next screen lets you set the order in which the tasks are performed. We're going to manually schedule each step, so there is no need to change anything here. Just click Next to accept.

Now you'll be presented with a screen to configure the full database backup task. From the Database drop down list, select All Databases and click OK.

Now, you need to tell the wizard where you want the database backups stored. Towards the bottom of the window, there is a Folder: field. Enter the path here. UNC paths are supported. Also, select the two checkboxes I have circled.

Now let's tell the wizard when we want the full backups made. Click on the Change... button near the bottom of the above window. You'll be presented with the screen below. We're going to perform full backups each Sunday at midnight, so set the options as shown, then click OK.

Click Next to proceed to setting up the differential backup task. For this task, we are only going to back up the user databases, so make the following selection from the Databases drop down list:

Once again, we need to tell SQL Server where we want these files stored:

And now we have to tell SQL Server when to take the differential backups. Click the Change... button to set the schedule. We'll take differential backups Monday through Saturday (full backups are being taken on Sunday, so we don't need to take a differential that day). Again, these will run at midnight.

Click OK to accept the schedule, then Next to move on to the transaction log backup task. Once again, choose All User Databases from the drop down list.

Again, define the save path and select the two check boxes.

Now we'll set up the times we want the transaction log backups to occur. This will be a bit different from the other two we set up because we want these to run every hour, not once a day. Also note we're setting the starting time to 1 AM. This is because our full and differential backup jobs run at midnight, so there is no need to also take a transaction log backup at that time.

Click OK to accept the schedule and then Next to move on to the next screen in the wizard. Here, we will specify that the backup jobs write their output to a text file. This is useful for troubleshooting purposes in case the job fails for some reason.

Click Next to get to the wizard summary screen:

Click Finish and the wizard will create your jobs. If you now refresh the Maintenance Plans node in the left pane of SSMS, you should see your new plan.

And finally, if you open the SQL Agent node and double-click on the Job Activity Monitor node, you'll get a list of jobs on the server. You'll see the one the wizard just created:

My jobs are disabled, which is why the icons are grey, but by default, the wizard enables the jobs when they are created, so they will run at the defined time.

You've now got backups being made and you are managing your transaction logs. Congratulations!

But this isn't all there is to do. The astute reader will realize that we have not set up any method of purging old backups or the job output text files. Left as is, these will just accumulate and fill up whatever disk you are storing them on. Next time, I'll show how to edit these maintenance plans to include steps to purge old files.

Another item to note: we selected all user databases to have differential and transaction log backups taken. This ensures that any newly created database automatically gets picked up by the plan. However, it can cause problems because a differential or transaction log backup cannot be taken until a full backup of the database exists. So if a developer creates a new database on Tuesday, the transaction log and differential backup jobs will start failing until a full backup of that database has been made. The solution, of course, is to take a full backup of the new database, after which the jobs will work.
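Taking that initial full backup is a one-liner. A sketch with hypothetical names (WITH INIT overwrites any existing file at that path):

BACKUP DATABASE NewOrdersDB
TO DISK = N'D:\Backups\NewOrdersDB\NewOrdersDB.bak'
WITH INIT;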


I ran into a problem today with permissions and jobs. As I've mentioned previously, I'm in the middle of auditing access rights on our SQL Servers and removing unnecessary sa rights from logins that don't need them.

In our shop, we have developers who build SSIS packages and schedule them to run on our production servers. Since I took away their sa rights, they have not been able to edit some of those jobs. So I added them to the SQLAgentOperatorRole in msdb. According to this BOL entry, this is the most privileged of the SQL Agent roles in msdb.
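Adding a login to that role is a single call in msdb (the login name here is hypothetical):

USE msdb;
GO
EXEC dbo.sp_addrolemember
    @rolename = N'SQLAgentOperatorRole',
    @membername = N'DOMAIN\ssis_developer';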

Turns out, that doesn't quite do what I want. Members of this role can edit jobs they own, but not jobs they don't own. This doesn't really work in our situation. A bit of Googling shows this seems to be a common complaint. The only way to allow a user to edit jobs owned by anyone is to grant them sa rights or make them the job owner. In fact, if you look at the code of the system stored procedure sp_update_jobstep, you see the following check:

  -- Check permissions beyond what's checked by the sysjobs_view
  -- SQLAgentReader and SQLAgentOperator roles that can see all jobs
  -- cannot modify jobs they do not own
  IF (@job_owner_sid <> SUSER_SID()                   -- does not own the job
     AND (ISNULL(IS_SRVROLEMEMBER(N'sysadmin'), 0) <> 1))   -- is not sysadmin
  BEGIN
   RAISERROR(14525, -1, -1);
   RETURN(1) -- Failure
  END

(This is in both 2005 and 2008 R2. I haven't checked other versions.)
Hmm. I obviously don't want to give people sa rights. What I ended up doing was changing the job owner to the person who wanted to edit the job. This may have some implications for jobs that call xp_cmdshell or don't set the Run As: property on their job steps, but in my case, the jobs launch SSIS packages, which use their own connection credentials, so that doesn't affect me.
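Changing the owner is likewise a single call to sp_update_job (the job and login names below are hypothetical):

EXEC msdb.dbo.sp_update_job
    @job_name = N'Nightly SSIS Load',
    @owner_login_name = N'DOMAIN\ssis_developer';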


Last week, I wrote about what MaxDOP is and why you may not want to leave it at the default value of zero on your multi-processor systems. This week I'm going to talk about how to change it and give some results from my server where I made a change.

MaxDOP is an advanced configuration option, so if your server is not already configured to do so, you'll need to set it to display advanced configuration options:

EXEC dbo.sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

Now if you just issue the sp_configure command without any parameters, you can see what your current MaxDOP setting is:
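That is (the second form narrows the output to just the one option):

EXEC dbo.sp_configure;
GO
EXEC dbo.sp_configure 'max degree of parallelism';
GO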

You can see I've already changed my MaxDOP value from the default of zero to a value of 8. (The maximum of 64 shown is the largest value SQL Server will accept for this option, not the number of processors in this system.)

If you are more of a GUI person, you can see this info in SSMS by right clicking the server in Object Explorer and choosing Properties, then the Advanced tab:

Changing the setting is relatively easy and does not require a server restart to take effect. If you are using the GUI, you can enter a new value directly in the screen shown above. For T-SQL, use the following:

EXEC dbo.sp_configure 'max degree of parallelism', 8;
GO
RECONFIGURE;
GO

Substitute whatever value you want for 8 in the above command, of course.

Now that you know how to change the value, how do you know if you should change the value?

Introducing the CXPACKET wait type

SQL Server can spend time waiting on all sorts of things - data to be read from disk, data to be written to disk, locks to be acquired or released, or, of particular interest to us, the results of parallelized queries to be recombined. This latter wait is tracked via a wait type counter in SQL Server called CXPACKET. This counter is available in SQL Server 2000 through the DBCC sqlperf(waitstats) command; in SQL 2005 and later, you can also see it with the sys.dm_os_wait_stats dynamic management view.

The dynamic management view will return a value, in milliseconds, of how long SQL Server has spent waiting to combine the results of parallelized queries since the last time the server was restarted or the wait stats were cleared. Glenn Berry has, as part of his great diagnostic queries, a query that will list the top waits on your servers. His query is:

-- Isolate top waits for server instance since last restart or statistics clear
WITH Waits AS
(
    SELECT wait_type,
           wait_time_ms / 1000. AS wait_time_s,
           100. * wait_time_ms / SUM(wait_time_ms) OVER () AS pct,
           ROW_NUMBER() OVER (ORDER BY wait_time_ms DESC) AS rn
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN
          ('CLR_SEMAPHORE', 'LAZYWRITER_SLEEP', 'RESOURCE_QUEUE', 'SLEEP_TASK',
           'SLEEP_SYSTEMTASK', 'SQLTRACE_BUFFER_FLUSH', 'WAITFOR', 'LOGMGR_QUEUE',
           'CHECKPOINT_QUEUE', 'REQUEST_FOR_DEADLOCK_SEARCH', 'XE_TIMER_EVENT',
           'BROKER_TO_FLUSH', 'BROKER_TASK_STOP', 'CLR_MANUAL_EVENT',
           'CLR_AUTO_EVENT', 'DISPATCHER_QUEUE_SEMAPHORE',
           'FT_IFTS_SCHEDULER_IDLE_WAIT', 'XE_DISPATCHER_WAIT',
           'XE_DISPATCHER_JOIN', 'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
           'ONDEMAND_TASK_QUEUE', 'BROKER_EVENTHANDLER', 'SLEEP_BPOOL_FLUSH')
)
SELECT W1.wait_type,
       CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
       CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
       CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
FROM Waits AS W1
     INNER JOIN Waits AS W2 ON W2.rn <= W1.rn
GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
HAVING SUM(W2.pct) - W1.pct < 99 -- percentage threshold
OPTION (RECOMPILE);

There are a couple of nice things this query does. First, it converts the milliseconds value the DMV returns to seconds. Second, it orders the waits from largest to smallest and also provides a running total percentage.

So that's how you see what your CXPACKET wait is. But how do you know if you've got a problem? Microsoft suggests that if CXPACKET accounts for more than 5% of your total wait time, you may have a parallelism bottleneck. I will again point out that the usual "It Depends" SQL caveat applies here, as it does with all general SQL configuration advice.
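If you just want that one number, here is a quick sketch against the same DMV (note it doesn't filter out the benign system waits Glenn's query excludes, so treat the percentage as rough):

-- What fraction of total wait time is CXPACKET?
SELECT CAST(100.0 * SUM(CASE WHEN wait_type = 'CXPACKET'
                             THEN wait_time_ms ELSE 0 END)
            / SUM(wait_time_ms) AS DECIMAL(5, 2)) AS cxpacket_pct
FROM sys.dm_os_wait_stats;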

Tracking Your Wait Times

If you decide you need to change your MaxDOP setting, you're going to want a baseline of your pre-change system operation to compare against to determine if you've actually improved performance. One way to do that is to manually reset the wait stats, let the system run for a set period of time, make the MaxDOP setting change, reset the wait stats again, then let the system run for the same length of time again. Ideally, your two test periods should include times of similar usage.

At this point, I feel the need to point out that you should always test system configuration changes on a test system before making them on a production system.

You can reset your wait stats using the following command:

DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)

For my test, I created two jobs. The first job cleared the wait stats and ran at 8 AM on Monday morning. The second job executed Glenn Berry's query above the following Friday at 9 PM and wrote the results to a text file. I set up the schedules for the jobs to run twice. (I could have written the data to a table, but I was in a hurry setting this up, so I just wrote to a text file. If you do this, make sure you choose the Append option so you don't overwrite existing data with your second execution.) After the first week, I had my baseline data. When I came in on the second Monday, I made sure I changed the MaxDOP setting prior to 8 AM. (No restart is required when you change this setting.) When the following Friday rolled around, I had my second set of performance data, gathered after my change.

My Results

I should point out that the server I am testing is a 16-core system in a production environment. It is NUMA-enabled and has 8 processor cores per NUMA node. Based on Microsoft's recommendation, I determined I should set the MaxDOP value to 8. My baseline numbers:

wait_type               wait_time_s    pct    running_pct
OLEDB                     370135.54  31.68          31.68
CXPACKET                  295106.42  25.26          56.94
LCK_M_IS                  187728.33  16.07          73.01
WRITELOG                   98589.81   8.44          81.45
BACKUPIO                   43881.44   3.76          85.21
LATCH_EX                   38794.36   3.32          88.53
MSQL_XP                    28966.19   2.48          91.01
SOS_SCHEDULER_YIELD        24027.27   2.06          93.07
PAGEIOLATCH_SH             17086.65   1.46          94.53
ASYNC_NETWORK_IO           16296.86   1.39          95.92
BROKER_RECEIVE_WAITFOR     16032.44   1.37          97.30
BACKUPBUFFER                7999.12   0.68          97.98
IO_COMPLETION               5165.44   0.44          98.42
SLEEP_BPOOL_FLUSH           4474.87   0.38          98.81
PAGEIOLATCH_EX              2892.76   0.25          99.05

After I made the change, these were my results:

wait_type               wait_time_s    pct    running_pct
OLEDB                     383416.75  35.85          35.85
LCK_M_IS                  209511.89  19.59          55.44
CXPACKET                  134124.75  12.54          67.98
WRITELOG                  102343.21   9.57          77.55
ASYNC_NETWORK_IO           71632.59   6.70          84.24
BACKUPIO                   48025.45   4.49          88.73
MSQL_XP                    29184.14   2.73          91.46
SOS_SCHEDULER_YIELD        24138.60   2.26          93.72
BROKER_RECEIVE_WAITFOR     17342.30   1.62          95.34
PAGEIOLATCH_SH             14518.61   1.36          96.70
BACKUPBUFFER                7990.42   0.75          97.44
LATCH_EX                    7474.26   0.70          98.14
SLEEP_BPOOL_FLUSH           4436.36   0.41          98.56
IO_COMPLETION               3937.39   0.37          98.93
PAGEIOLATCH_EX              2875.57   0.27          99.20

So you can see, the time SQL Server spent waiting for parallelism to complete went down by 160,981.67 seconds - 44.7 hours! I just gave this server one man-week of additional computing time! It spent an additional 6 hours of that time waiting to acquire intent shared locks and an additional 3.7 hours waiting on OLE DB calls, but I'm completely OK with this because those wait times are indicative of a working system. An increase in OLEDB means the server is spending more time waiting for data to return from a call to a linked server, a bulk insert task, or some other process that uses the SQL Server Native Client OLE DB provider. Which, in turn, means SQL Server is processing more work than it used to.

Now, I'm not going to say that the OLEDB wait stat value is a good one. It now represents 35% of the time SQL Server spends waiting, and if I can trim that down, I can squeeze some more performance out of this server. But that will require some deeper digging to find out what OLE DB calls are being made and what, if anything, I can do about them.

If I take a step back to look at the overall picture and add up the total time spent waiting on the top 15 waits (which are the waits Glenn's query returns), I can see my total time spent waiting dropped from 321 hours to 294 hours!

I'm also not going to claim this is a scientifically accurate test. For one, I had no control over the load the server experienced during these weeks, so the two workloads may have been significantly different. However, by looking at an entire week, I can more or less average out any anomalies a large one-off query might introduce. I also know the nominal workload of my servers and can make a reasonable assumption that the workload was more or less the same. Further, as a working DBA, I'm more concerned with actual performance improvement than theoretical improvement. So all in all, for a first step in performance improvement, I'm very happy with this result!

Bonus: When analyzing performance data, I like to find alternative ways to validate my conclusions. My conclusion above was that this server was now spending more time processing requests. So if I use my PerfMon monitoring routine to look at the number of batch requests per second this server is handling, I should see an increase from the first week to the second week.

[Graph: Batch Requests/sec for the two test weeks]

Sure enough, if you look at the average low point over the two weeks, you'll see the lows were under 200 for the first week and over 200 for the second. Likewise, the highs are higher, going from typically under 600 to peaking over 600 multiple times.
