Shaun J Stuart

Just another SQL Server weblog

Blog reader cp spotted a pretty big bug in the last version of this routine: non-clustered indexes were being flagged as no longer present and were therefore not being compressed. The existence check I used, supplying a name to OBJECT_ID() and checking for a NULL result, always returns NULL for non-clustered indexes, because OBJECT_ID() can only resolve objects stored in sys.objects and non-clustered indexes are not listed there. So I modified the routine to check for the existence of non-clustered indexes by querying sys.indexes instead. Corrected code is below. Thanks again to cp for pointing this out in the comments here.
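
To make the fix concrete, here is a minimal, hedged illustration (dbo.MyTable and IX_MyTable_SomeColumn are placeholder names, not objects used by the procedure). OBJECT_ID() only resolves names that appear in sys.objects, so it returns NULL for an index name, while a lookup in sys.indexes keyed on the index name and the owning table's object_id works for any index:

/* Placeholder names, for illustration only */
SELECT OBJECT_ID('dbo.MyTable')            -- returns the table's object_id
SELECT OBJECT_ID('IX_MyTable_SomeColumn')  -- always NULL: indexes are not in sys.objects

IF EXISTS (SELECT 1
           FROM   sys.indexes
           WHERE  [name] = 'IX_MyTable_SomeColumn'
                  AND [object_id] = OBJECT_ID('dbo.MyTable'))
   PRINT 'Index still exists'
ELSE
   PRINT 'Index does not exist'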

Note that this script will generate an error if you have an index containing a timestamp column. The issue is a bug in Microsoft's sp_estimate_data_compression_savings procedure, as detailed here. It has been fixed in SQL 2012, but is still present in SQL 2008 and SQL 2008 R2. I've decided not to modify the code to handle this case, as it seems to be fairly rare and has been fixed in the latest version. Additionally, the index with the timestamp column can still be compressed; it's only the estimating procedure that fails. If you are running into this problem, you can either modify the code below to exclude those indexes or, if you don't want to get into that coding (and I'll admit that I don't), manually compress those indexes. Once they are compressed, this code will no longer pick them up and the errors will no longer be generated. Thanks once again to cp for finding this issue as well.
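
If you want to find the affected objects ahead of time, a quick query along these lines (a hedged sketch, run in the database you plan to compress; it is not part of the procedure) lists the user tables that contain a timestamp/rowversion column, whose indexes you may want to exclude or compress manually:

SELECT  s.name AS schema_name
       ,o.name AS table_name
       ,c.name AS column_name
FROM    sys.columns c
        JOIN sys.objects o ON c.object_id = o.object_id
        JOIN sys.schemas s ON o.schema_id = s.schema_id
        JOIN sys.types t ON c.system_type_id = t.system_type_id
WHERE   t.name = 'timestamp'   -- rowversion columns report this type name
        AND o.type = 'U';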

CREATE PROC [dbo].[up_CompressDatabase]
      (
       @minCompression FLOAT = .25 /* e.g. .25 for minimum of 25% compression */
      ,@MaxRunTimeInMinutes INT = 60
      ,@ExecuteCompressCommand BIT = 0  /* 1 to execute command */
      ,@DBToScan SYSNAME /* database to compress */
      )
AS
      SET nocount ON;
 
/*
    Original script by Paul Nielsen www.SQLServerBible.com March 13, 2008
    Modified by Shaun J. Stuart www.shaunjstuart.com February 27, 2013
 
  Sets compression for all objects and indexes in the database needing adjustment
  If estimated gain is equal to or greater than min compression parameter
    then enables row or page compression, whichever gives greater space savings
  If row and page have same gain then, enables row compression
  If estimated gain is less than min compression parameter, then compression is set to none
 
  - SJS 2/27/13
  - Added @MaxRunTimeInMinutes to limit run length (checked after each command, so
        may run longer) Note: This timer only applies to the actual compression process.
        It does not limit the time it takes to perform a scan of all the tables / indexes
        and estimate compression savings.
  - Sorted compression cursor to compress smallest tables first
  - Added flag to execute compression command or not
  - Added sort in tempdb option (always)
  - Re-wrote slightly to persist initial scan results. Will only re-scan after all tables
        have been processed
 
    - SJS 7/17/13
    - Modified to only look at data that is not already compressed
    - Modified so that any items with a null none_size, page_size, or row_size are no longer set as
        AlreadyProcessed and now have their sizes calculated (i.e. if process was cancelled before
        initial scan was completed)
    - Modified so that time limit now applies to compression estimate process as well as actual
        compression process
    - Only look at partitions that have at least one row
 
    - SJS 8/6/13
    - Changed name of dbEstimate table to dbCompressionEstimates for clarity
    - Added StatusText column to dbCompressionEstimates and added status messages
    - Modified to take database name as input parameter to allow use in utility database
        instead of the DB being compressed (Table dbCompressionEstimates, which stores sizes and
        compression estimates, is still created in the current database.)
    - Note: Compression estimates are only made for the database supplied as an input
        parameter. However, that database name is stored in the dbCompressionEstimates table and
        will be read from there and used when actually performing the compression. This allows you to
        create estimates for multiple databases first (using @ExecuteCompressCommand=0), then perform
        the compression across multiple databases later (with @ExecuteCompressCommand=1).
    - Removed all references to code that checked already compressed data since this routine now only
        looks at uncompressed data.
 
    - SJS 8/21/13
    - Put []s around table and index names in compression commands.
 
    - SJS 10/7/13
    - Added check to make sure table / index still exists before estimating or performing compression.
    - Fixed bug in cursor to determine compression estimates (left out db name in where clause)

    - SJS 1/31/14
    - Fixed bug where nonclustered indexes were always flagged as no longer present (Thanks to cp
        for pointing this out at
        http://shaunjstuart.com/archive/2013/10/enabling-data-compression-october-2013-update/comment-page-1/#comment)

*/
 
      IF ISNULL(@DBToScan, '') NOT IN (SELECT   [name]
                                       FROM     sys.databases)
         BEGIN
               SELECT   'Database ' + ISNULL(@DBToScan, 'NULL')
                        + ' not found on server.'
               RETURN
         END
 
      DECLARE @CompressedCount INT;
      SET @CompressedCount = 0;
 
      DECLARE @SQL NVARCHAR(MAX);
      DECLARE @ParmDefinition NVARCHAR(100);
      DECLARE @TestResult NVARCHAR(20);
      DECLARE @CheckString NVARCHAR(1000);
 
      DECLARE @StartTime DATETIME2;
      SET @StartTime = CURRENT_TIMESTAMP;
 
      DECLARE @CurrentDatabase SYSNAME;
      SET @CurrentDatabase = DB_NAME()
 
      IF OBJECT_ID('tempdb..##ObjEst', 'U') IS NOT NULL
         DROP TABLE ##ObjEst
 
      CREATE TABLE ##ObjEst
             (
              PK INT IDENTITY
                     NOT NULL
                     PRIMARY KEY
             ,object_name VARCHAR(250)
             ,schema_name VARCHAR(250)
             ,index_id INT
             ,partition_number INT
             ,size_with_current_compression_setting BIGINT
             ,size_with_requested_compression_setting BIGINT
             ,sample_size_with_current_compression_setting BIGINT
             ,sample_size_with_requested_compression_setting BIGINT
             );
 
      IF NOT EXISTS ( SELECT    1
                      FROM      sys.objects
                      WHERE     object_id = OBJECT_ID(N'[dbo].[dbCompressionEstimates]')
                                AND type IN (N'U') )
         BEGIN
               CREATE TABLE dbo.dbCompressionEstimates
                      (
                       PK INT IDENTITY
                              NOT NULL
                              PRIMARY KEY
                      ,DatabaseName SYSNAME
                      ,schema_name VARCHAR(250)
                      ,object_name VARCHAR(250)
                      ,index_id INT
                      ,ixName VARCHAR(255)
                      ,ixType VARCHAR(50)
                      ,partition_number INT
                      ,data_compression_desc VARCHAR(50)
                      ,None_Size INT
                      ,Row_Size INT
                      ,Page_Size INT
                      ,AlreadyProcessed BIT
                      ,StatusText VARCHAR(75)
                      );
         END
 
  /*
 If all objects have been processed, rescan and start fresh. Useful for catching
  added objects since last scan. But beware - this is I/O intensive and can take a while.
*/
 
      IF NOT EXISTS ( SELECT    1
                      FROM      dbo.dbCompressionEstimates
                      WHERE     AlreadyProcessed = 0
                                AND DatabaseName = @DBToScan )
         BEGIN
               DELETE   FROM dbo.dbCompressionEstimates
               WHERE    DatabaseName = @DBToScan;
 
               SET @SQL = 'USE [' + @DBToScan + '];
                        INSERT   INTO [' + @CurrentDatabase
                   + '].dbo.dbCompressionEstimates
                                (DatabaseName
                                ,schema_name
                                ,object_name
                                ,index_id
                                ,ixName
                                ,ixType
                                ,partition_number
                                ,data_compression_desc
                                ,AlreadyProcessed
                                ,StatusText)
                        SELECT  ''' + @DBToScan + '''
                                ,S.name
                               ,o.name
                               ,I.index_id
                               ,I.name
                               ,I.type_desc
                               ,P.partition_number
                               ,P.data_compression_desc
                               ,0 AS AlreadyProcessed
                               ,''Initial load'' AS StatusText
                        FROM    [' + @DBToScan + '].sys.schemas AS S
                                JOIN [' + @DBToScan
                   + '].sys.objects AS O ON S.schema_id = O.schema_id
                                JOIN [' + @DBToScan
                   + '].sys.indexes AS I ON o.object_id = I.object_id
                                JOIN [' + @DBToScan
                   + '].sys.partitions AS P ON I.object_id = P.object_id
                                                            AND I.index_id = p.index_id
                        WHERE   O.TYPE = ''U''
                                AND P.data_compression_desc = ''NONE''
                                AND P.rows > 0;'     -- only look at objects with data
               EXEC (@SQL)
         END
            -- Determine Compression Estimates
      DECLARE @PK INT
             ,@DatabaseName SYSNAME
             ,@Schema VARCHAR(150)
             ,@object VARCHAR(150)
             ,@DAD VARCHAR(25)
             ,@partNO VARCHAR(3)
             ,@indexID VARCHAR(3)
             ,@ixName VARCHAR(250)
             ,@ixType VARCHAR(50)
             ,@Recommended_Compression VARCHAR(10);
 
      DECLARE cCompress CURSOR FAST_FORWARD
      FOR
              SELECT    schema_name
                       ,object_name
                       ,index_id
                       ,partition_number
                       ,data_compression_desc
                       ,ixName
              FROM      dbo.dbCompressionEstimates
              WHERE     (None_size IS NULL
                         OR Row_Size IS NULL
                         OR Page_size IS NULL)
                        AND DatabaseName = @DBToScan;
 
      OPEN cCompress;
 
      FETCH cCompress INTO @Schema, @object, @indexID, @partNO, @DAD, @ixName;
 
      WHILE @@Fetch_Status = 0
            BEGIN
                            /* evaluate objects with no compression */
                  IF DATEDIFF(mi, @StartTime, CURRENT_TIMESTAMP) < @MaxRunTimeInMinutes
                     BEGIN
                            /* First, make sure the table / index still exists (in case this
                                 process is run over multiple days) */
 
                           IF @indexID = 0
                              BEGIN /* heaps */
                                    SET @CheckString = 'IF object_ID('''
                                        + @DBToScan + '.' + @Schema + '.'
                                        + @object
                                        + ''') IS NULL
                                    BEGIN
                                        SELECT @TestResultOUT = ''Does Not Exist''
                                    END
                                    ELSE
                                    BEGIN
                                        SELECT @TestResultOUT = ''Exists''
                                    END';
                              END
                           ELSE
                              BEGIN /* indexes */
                                    SET @CheckString = 'IF NOT EXISTS (SELECT 1 FROM ['
                                        + @DBToScan
                                        + '].[sys].[indexes] WHERE [name] ='''
                                        + @ixName + ''' AND OBJECT_ID('''
                                        + '[' + @DBToScan + ']' + '.['
                                        + @Schema + '].[' + @object + ']'''
                                        + ') = [object_id])
                                    BEGIN
                                        SELECT @TestResultOUT = ''Does Not Exist''
                                    END
                                    ELSE
                                    BEGIN
                                        SELECT @TestResultOUT = ''Exists''
                                    END';
                              END

                           SET @ParmDefinition = '@TestResultOUT varchar(20) OUTPUT';
                           EXECUTE sp_executesql
                            @CheckString
                           ,@ParmDefinition
                           ,@TestResultOUT = @TestResult OUTPUT;
                           IF @TestResult = 'Exists'
                              BEGIN
 
                                    IF @DAD = 'none'
                                       BEGIN
                                    /* estimate Page compression */
                                             SET @SQL = 'USE [' + @DBToScan
                                                 + '];
                                        INSERT  ##ObjEst
                                                (object_name
                                                ,schema_name
                                                ,index_id
                                                ,partition_number
                                                ,size_with_current_compression_setting
                                                ,size_with_requested_compression_setting
                                                ,sample_size_with_current_compression_setting
                                                 ,sample_size_with_requested_compression_setting)
                                                EXEC sp_estimate_data_compression_savings
                                                    @Schema_name = ' + @Schema
                                                 + '
                                                   ,@object_name = ' + @object
                                                 + '
                                                   ,@index_id = ' + @indexID
                                                 + '
                                                   ,@partition_number = '
                                                 + @partNO
                                                 + '
                                                   ,@data_compression = ''page'';'
                                             EXEC (@SQL)
 
                                             UPDATE dbo.dbCompressionEstimates
                                             SET    none_size = O.size_with_current_compression_setting
                                                   ,page_size = O.size_with_requested_compression_setting
                                                   ,StatusText = 'Compression estimate 50% complete'
                                             FROM   dbo.dbCompressionEstimates D
                                                    JOIN ##ObjEst O ON D.Schema_name = O.Schema_Name
                                                              AND D.Object_name = O.object_name
                                                              AND D.index_id = O.index_id
                                                              AND D.partition_number = O.partition_number;
 
                                             DELETE ##ObjEst;
 
                                    -- estimate Row compression
                                             SET @SQL = 'USE [' + @DBToScan
                                                 + '];
                                        INSERT  ##ObjEst
                                                (object_name
                                                ,schema_name
                                                ,index_id
                                                ,partition_number
                                                ,size_with_current_compression_setting
                                                ,size_with_requested_compression_setting
                                                ,sample_size_with_current_compression_setting
                                                 ,sample_size_with_requested_compression_setting)
                                                EXEC sp_estimate_data_compression_savings
                                                    @Schema_name = ' + @Schema
                                                 + '
                                                   ,@object_name = ' + @object
                                                 + '
                                                   ,@index_id = ' + @indexID
                                                 + '
                                                   ,@partition_number = '
                                                 + @partNO
                                                 + '
                                                   ,@data_compression = ''ROW'';'
                                             EXEC (@SQL)
 
                                             UPDATE dbo.dbCompressionEstimates
                                             SET    row_size = O.size_with_requested_compression_setting
                                                   ,StatusText = 'Compression estimate 100% complete'
                                             FROM   dbo.dbCompressionEstimates D
                                                    JOIN ##ObjEst O ON D.Schema_name = O.Schema_Name
                                                              AND D.Object_name = O.object_name
                                                              AND D.index_id = O.index_id
                                                              AND D.partition_number = O.partition_number;
 
                                             DELETE ##ObjEst;
                                       END /* end evaluating objects with no compression */
                              END
                           ELSE /* table or index no longer exists */
                              BEGIN
                                    UPDATE  dbo.dbCompressionEstimates
                                    SET     AlreadyProcessed = 1
                                           ,StatusText = 'Object no longer exists at compression estimate stage'
                                    WHERE   schema_name = @Schema
                                            AND object_name = @object
                                            AND index_id = @indexID
                                            AND partition_number = @partNO
                                            AND data_compression_desc = @DAD
                                            AND DatabaseName = @DBToScan;
                              END
 
                           FETCH NEXT FROM cCompress INTO @Schema, @object,
                                 @indexID, @partNO, @DAD, @ixName
                     END -- end time check block
                  ELSE
                     BEGIN
                           PRINT 'Max runtime reached. No compression performed. Compression estimate still processing. Exiting...';
                           CLOSE cCompress
                           DEALLOCATE cCompress
                           DROP TABLE ##ObjEst
                           RETURN
                     END
            END -- end while loop
 
      CLOSE cCompress
      DEALLOCATE cCompress
 
      PRINT 'Initial scan complete.'
         --END
  /* End evaluating compression savings. Now do the actual compressing. */
 
      PRINT 'Beginning compression.';
 
  /* Do not process objects that do not meet our compression criteria */
 
      UPDATE    dbo.dbCompressionEstimates
      SET       AlreadyProcessed = 1
               ,StatusText = 'Best compression method less than minCompression threshold'
      WHERE     (1 - (CAST(Row_Size AS FLOAT) / None_Size)) < @minCompression
                AND (Row_Size <= Page_Size)
                AND None_Size > 0
                AND AlreadyProcessed = 0;
 
      UPDATE    dbo.dbCompressionEstimates
      SET       AlreadyProcessed = 1
               ,StatusText = 'Best compression method less than minCompression threshold'
      WHERE     (1 - (CAST(Page_Size AS FLOAT) / None_Size)) < @minCompression
                AND (Page_Size <= Row_Size)
                AND None_Size > 0
                AND AlreadyProcessed = 0;
 
  /* Do not set compression on empty objects */
 
      UPDATE    dbo.dbCompressionEstimates
      SET       AlreadyProcessed = 1
               ,StatusText = 'No data in table to compress'
      WHERE     None_size = 0
                AND AlreadyProcessed = 0;
 
 -- set the compression
      DECLARE cCompress CURSOR FAST_FORWARD
      FOR
              SELECT    DatabaseName
                       ,schema_name
                       ,object_name
                       ,partition_number
                       ,ixName
                       ,ixType
                       ,CASE WHEN (1 - (CAST(Row_Size AS FLOAT) / none_Size)) >= @minCompression
                                  AND (Row_Size <= Page_Size) THEN 'Row'
                             WHEN (1 - (CAST(page_Size AS FLOAT) / none_Size)) >= @minCompression
                                  AND (Page_Size <= row_Size) THEN 'Page'
                             ELSE 'None'
                        END AS Recommended_Compression
                       ,PK
              FROM      dbo.dbCompressionEstimates
              WHERE     None_Size <> 0
                        AND (CASE WHEN (1 - (CAST(Row_Size AS FLOAT)
                                             / none_Size)) >= @minCompression
                                       AND (Row_Size <= Page_Size) THEN 'Row'
                                  WHEN (1 - (CAST(page_Size AS FLOAT)
                                             / none_Size)) >= @minCompression
                                       AND (Page_Size <= row_Size) THEN 'Page'
                                  ELSE 'None'
                             END <> data_compression_desc)
                        AND AlreadyProcessed = 0
              ORDER BY  None_Size ASC;      /* start with smallest tables first */
 
      OPEN cCompress
 
      FETCH cCompress INTO @DatabaseName, @Schema, @object, @partNO, @ixName,
            @ixType, @Recommended_Compression, @PK  -- prime the cursor;
 
      WHILE @@Fetch_Status = 0
            BEGIN
 
                  IF @ixType = 'Clustered'
                     OR @ixType = 'heap'
                     BEGIN
                           SET @SQL = 'USE [' + @DatabaseName + '];
                                ALTER TABLE [' + @Schema + '].[' + @object
                               + '] Rebuild with (data_compression = '
                               + @Recommended_Compression
                               + ', SORT_IN_TEMPDB=ON)';
 
                           SET @CheckString = 'IF object_ID('''
                               + @DatabaseName + '.' + @Schema + '.' + @object
                               + ''') IS NULL
                                BEGIN
                                    SELECT @TestResultOUT = ''Does Not Exist''
                                END
                                ELSE
                                BEGIN
                                    SELECT @TestResultOUT = ''Exists''
                                END';
                     END
 
                  ELSE  /* non-clustered index */
                     BEGIN
                           SET @SQL = 'USE [' + @DatabaseName + '];
                                ALTER INDEX [' + @ixName + '] on [' + @Schema
                               + '].[' + @object
                               + '] Rebuild with (data_compression = '
                               + @Recommended_Compression
                               + ',SORT_IN_TEMPDB=ON)';
 
                           SET @CheckString = 'IF NOT EXISTS (SELECT 1 FROM ['
                               + @DBToScan
                               + '].[sys].[indexes] WHERE [name] ='''
                               + @ixName + ''' AND OBJECT_ID(''' + '['
                               + @DBToScan + ']' + '.[' + @Schema + '].['
                               + @object + ']''' + ') = [object_id])
                            BEGIN
                                SELECT @TestResultOUT = ''Does Not Exist''
                            END
                            ELSE
                            BEGIN
                                SELECT @TestResultOUT = ''Exists''
                            END';
                     END

                  IF DATEDIFF(mi, @StartTime, CURRENT_TIMESTAMP) < @MaxRunTimeInMinutes
                     BEGIN
                           IF @ExecuteCompressCommand = 1
                              BEGIN
 
                            /* verify that table / index still exists before doing anything */
 
                                    SET @ParmDefinition = '@TestResultOUT varchar(20) OUTPUT';
                                    EXECUTE sp_executesql
                                        @CheckString
                                       ,@ParmDefinition
                                       ,@TestResultOUT = @TestResult OUTPUT;
                                    IF @TestResult = 'Exists'
                                       BEGIN
 
                                             UPDATE dbo.dbCompressionEstimates
                                             SET    StatusText = 'Compressing data...'
                                             WHERE  PK = @PK;
 
                                             PRINT 'Compressing table/index: '
                                                   + @Schema + '.' + @object
                                                   + ' in database '
                                                   + @DatabaseName;
                                             EXEC sp_executesql
                                                @SQL;
 
                                             UPDATE dbo.dbCompressionEstimates
                                             SET    AlreadyProcessed = 1
                                                   ,StatusText = 'Compression complete'
                                             WHERE  PK = @PK;
 
                                             SET @CompressedCount = @CompressedCount
                                                 + 1;
                                       END
                                    ELSE
                                       BEGIN
                                             UPDATE dbo.dbCompressionEstimates
                                             SET    AlreadyProcessed = 1
                                                   ,StatusText = 'Object no longer exists at compression stage'
                                             WHERE  PK = @PK;
                                       END
                              END
                           ELSE
                              BEGIN
                                    PRINT 'Command execution not enabled. Command is:'
                                          + @SQL;
                              END
                     END
                  ELSE
                     BEGIN
                           PRINT 'Max runtime reached. Some compression performed. Exiting...';
                           CLOSE cCompress
                           DEALLOCATE cCompress
                           DROP TABLE ##ObjEst
                           BREAK
                     END
 
                  FETCH cCompress INTO @DatabaseName, @Schema, @object,
                        @partNO, @ixName, @ixType, @Recommended_Compression,
                        @PK;
            END
 
      CLOSE cCompress;
      DEALLOCATE cCompress;
 
      PRINT 'Objects compressed: ' + CONVERT(VARCHAR(10), @CompressedCount);
      DROP TABLE ##ObjEst
      RETURN

Brent Ozar Unlimited had a post recently noting that Microsoft appears to have moved away from releasing service packs for SQL Server and is instead only releasing cumulative updates. Given that Microsoft itself warns against installing a cumulative update unless it solves a specific problem you are encountering, most DBAs, including myself, are somewhat reluctant to install them, preferring to wait for the (supposedly) more thoroughly tested service packs.

A recent Steve Jones editorial has picked up the call for service packs and includes links to Microsoft Connect items to vote for releasing service packs for SQL 2012, 2008 R2, and 2008. Please take a moment and vote for these.

Release SQL Server 2012 Service Pack 2

Release SQL Server 2008 R2 Service Pack 3

Release Final SQL Server 2008 and 2008 R2 Service Packs

 


Two months ago I blogged about a procedure I wrote that, if you are using Ola Hallengren's index maintenance routine, can look for indexes that are being repeatedly defragged. This usually indicates that the fill factor can be adjusted downward so that the index doesn't need to be defragged every time your maintenance routine runs. Why do you care if indexes are being repeatedly defragged? For one, the defragmentation process is logged, so it generates lots of log records that need to be backed up, which increases your backup times, restore times, and backup file sizes. If you are using log shipping or any other high availability solution that utilizes the log file, it simply creates more work for that process.

Without covering all of that again, let me just point you to my previous post which not only talks about this in a bit more detail, but also features unicorns.

I've made some improvements to the code:

  1. The old version generated commands to rebuild indexes using the ONLINE = ON argument. Online rebuilds are an Enterprise-only feature, so this new version detects whether you are running Enterprise edition and sets the option accordingly. (ONLINE = ON should also be generated on SQL 2008 Datacenter edition, but I do not have a copy of that to test with. If someone can verify this works, please let me know.)
  2. The generated ALTER INDEX code used the PARTITION = ALL argument. That argument was not introduced until SQL 2008, so the generated command would fail on a SQL 2005 server. Oops. The code now checks the server version and omits it on SQL 2005 machines (a sketch of both of these checks appears after this list).
  3. I've modified the subject line of the email that is generated to include the name of the database where excessive defragmentation was found. This is helpful when you are checking several databases on the same server.
  4. There were two locations where I forgot to put square brackets around the database name when generating commands, which caused errors when the database name included a period. This has been fixed.
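
For reference, here is a condensed, hedged sketch of the version and edition detection mentioned in items 1 and 2; the full procedure below performs essentially the same checks:

DECLARE @SQLVersion TINYINT
       ,@SupportsOnlineRebuilds BIT;

/* Major version number: 9 = SQL 2005, 10 = SQL 2008 / 2008 R2, 11 = SQL 2012 */
SET @SQLVersion = LEFT(CAST(SERVERPROPERTY('productversion') AS VARCHAR(15)),
                       CHARINDEX('.', CAST(SERVERPROPERTY('productversion') AS VARCHAR(15))) - 1);

/* Enterprise and Datacenter editions support ONLINE = ON index rebuilds */
SET @SupportsOnlineRebuilds =
    CASE WHEN CAST(SERVERPROPERTY('edition') AS VARCHAR(50)) LIKE 'Enterprise%'
              OR CAST(SERVERPROPERTY('edition') AS VARCHAR(50)) LIKE 'Datacenter%'
         THEN 1
         ELSE 0
    END;

/* PARTITION = ALL is only included when @SQLVersion > 9; ONLINE = ON only when
   @SupportsOnlineRebuilds = 1 */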

Be sure to revisit the original post for instructions on how to use this procedure. Pay special attention to the bits about how often to run this and how it needs to be correlated with how often you clear Ola's history table.

Known issues:

  1. The generated code will only work on non-partitioned indexes or indexes that are partitioned using aligned partitions (partitions based on equivalent partition functions). That's just how PARTITION = ALL works. However, even if you have non-aligned partitioned indexes, this routine will still identify them for you if they are being defragged constantly - you'll just have to write your own code to change the fill factor. (An example of the kind of command the routine generates appears after this list.)
  2. This only examines one database at a time. Eventually, I want to modify this so it will look at all databases in Ola's log table at once, but that requires more coding than I have time to do right now. It's on the To Do list though.
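
For reference, here is what a generated recommendation typically looks like on a SQL 2008 or later Enterprise server (the database, table, and index names are placeholders; the fill factor shown is just an example and depends on your parameters and the index's current setting):

USE [MyDatabase];
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable]
REBUILD PARTITION = ALL
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = ON, ONLINE = ON,
      ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 70);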

Let me know if you find this useful or have any suggestions for improvements!

 


CREATE PROCEDURE [dbo].[up_IndexDefragFrequencyCheck]
       (
        @DatabaseToInvestigate VARCHAR(100) = NULL
       ,@CommandLogDatabase VARCHAR(100) = NULL
       ,@CountThreshold TINYINT = 4
       ,@OverRebuildThresholdNewFF TINYINT = 70
       ,@OverReorgThresholdNewFF TINYINT = 80
       ,@MailProfileName sysname
       ,@MailRecipients VARCHAR(MAX) = NULL
	   )
AS /*
	
	This procedure scans the CommandLog table of Ola Hallengren's index defrag script
		(http://ola.hallengren.com/) and identifies indexes that are being rebuilt or
		reorganized regularly. The original code comes courtesy of Tim Ford, who blogged
		this at http://thesqlagentman.com/2013/10/whats-the-frequency-kenneth/. This routine
		expands his work. The procedure is meant to be run on a regular basis and will send 
		an email when it finds indexes being defragged more than a specified number of times.
		It will also suggest rebuilding the index with a new fill factor to reduce the number
		of rebuilds or reorgs. The new fill factor recommended is the one you specify if the current
		fill factor of the index is 0 or 100%; otherwise it is 10% less than the current
		fill factor. It will not recommend anything less than 50%.

		IMPORTANT!!! This routine looks at the log table and the _current_ fill factor setting
		of the indexes. Therefore, in order to get appropriate fill factor recommendations,
		this routine should NOT be run more often than the log table is cleared. FOR EXAMPLE,
		IF YOU KEEP 30 DAYS WORTH OF LOG RECORDS IN YOUR TABLE, THIS ROUTINE SHOULD NOT BE RUN
		MORE OFTEN THAN EVERY 30 DAYS. This is because if you change the fill factor of an index
		it may result in stopping rebuilds, but the log table will still have the records of the
		prior rebuilds, which may trigger another recommendation to reduce the fill factor, which
		would then be 10% less than what you just changed it to.

		Do not blindly run the commands in the email notification! They are meant to make it easier
		for a DBA to change the fill factor, but they should be evaluated first to ensure they
		are appropriate for your situation.
		
		Input parameters:

		@DatabaseToInvestigate - The database to scan for excessive defrags
		@CommandLogDatabase - Database holding the CommandLog table of Ola's routine
		@CountThreshold - indexes being defragged equal to or greater this number of times
							are flagged as being excessively defragged
		@OverRebuildThresholdNewFF - Initial starting fill factor suggestion for indexes that
							are being rebuilt more than @CountThreshold times. Only used when
							current fill factor is 0 or 100
		@OverReorgThresholdNewFF - Initial starting fill factor suggestion for indexes that
							are being reorganized more than @CountThreshold times. Only used when
							current fill factor is 0 or 100
		@MailProfileName - name of mail profile the routine can use to send email
		@MailRecipients - list of email addresses the notification will be sent to. Must be of
							the form xxx@xxx.xxx  Multiple addresses should be separated by a ;


		Shaun J. Stuart
		shaunjstuart.com

		v1.0 - 10/8/13
		v1.1 - 10/13/13 - SJS - Added support for the case where the current fill factor is between 100 and the
								min thresholds passed in.
		v1.2 - 1/16/14  - SJS - Added check for SQL 2005, which does not support the PARTITION = ALL option
							  - Added check for Enterprise edition to set ONLINE = ON or OFF
							  - Added square brackets around DB name where they were not before to
									handle db names with periods in them
							  - Added database name to email subject line

*/




       DECLARE @SQLstr VARCHAR(2000)
       DECLARE @EmailBody VARCHAR(MAX)
       DECLARE @CRLF CHAR(2)
       DECLARE @EmailSubject VARCHAR(200)
	   DECLARE @SQLVersion TINYINT
	   DECLARE @SupportsOnlineRebuilds BIT

       SET @CRLF = CHAR(13) + CHAR(10)
	   SET @SQLVersion = LEFT(CAST(SERVERPROPERTY('productversion') AS VARCHAR(15)),
                        CHARINDEX('.',
                                CAST(SERVERPROPERTY('productversion') AS VARCHAR(15)))
                        - 1)
	/* SQLVersion: 9 = SQL 2005, 10 = SQL 2008 or 2008 R2, 11 = 2012 */

	IF LEFT(CAST(SERVERPROPERTY ('edition')AS VARCHAR(50)),10) = 'Enterprise'
		OR LEFT(CAST(SERVERPROPERTY ('edition')AS VARCHAR(50)),10) = 'Datacenter'
	BEGIN
		SET @SupportsOnlineRebuilds = 1
	END
	ELSE
	BEGIN
		SET @SupportsOnlineRebuilds = 0
	END

/* Data validation */

       IF @DatabaseToInvestigate IS NULL
          OR @DatabaseToInvestigate = ''
          BEGIN
                SELECT  'A database to investigate must be specified.'
                RETURN
          END

       IF @CommandLogDatabase IS NULL
          OR @CommandLogDatabase = ''
          BEGIN
                SELECT  'A database holding the CommandLog table must be specified.'
                RETURN
          END

       IF @MailRecipients IS NULL
          OR @MailRecipients = ''
          BEGIN
                SELECT  'At least one email recipient must be specified.'
                RETURN
          END

       IF @MailRecipients NOT LIKE '%@%.%'
          BEGIN
                SELECT  'Email recipient not valid address format.'
                RETURN
          END



--============================================================================
-- INDEX REBUILD COUNT 
--============================================================================

       IF EXISTS ( SELECT   name
                   FROM     tempdb.sys.objects
                   WHERE    name = '##Recommendations' )
          BEGIN
                DROP TABLE ##Recommendations
          END

       IF EXISTS ( SELECT   name
                   FROM     tempdb.sys.objects
                   WHERE    name = '##Index_History' )
          BEGIN
                DROP TABLE ##Index_History
          END
 
       IF EXISTS ( SELECT   name
                   FROM     tempdb.sys.objects
                   WHERE    name = '##Indexes' )
          BEGIN
                DROP TABLE ##Indexes
          END

       CREATE TABLE ##Recommendations
              (
               CommandLine VARCHAR(MAX)
              );



       CREATE TABLE ##Index_History
              (
               the_database sysname
              ,the_schema sysname
              ,the_object sysname
              ,the_index sysname
              ,index_type VARCHAR(13)
              ,fill_factor TINYINT
              ,rebuild_count INT NULL
              ,reorg_count INT NULL
              );
 
       CREATE TABLE ##Indexes
              (
               the_schema sysname
              ,the_object sysname
              ,the_index sysname
              ,fill_factor TINYINT
              );
 
       SET @SQLstr = 'INSERT  INTO ##Indexes
        (the_schema
        ,the_object
        ,the_index
        ,fill_factor)
        SELECT  OBJECT_SCHEMA_NAME(SO.object_id, DB_ID('''
           + @DatabaseToInvestigate + ''')) AS the_schema
               ,SO.name AS the_object
               ,SI.name AS the_index
               ,SI.fill_factor
        FROM    [' + @DatabaseToInvestigate + '].sys.objects SO
                INNER JOIN [' + @DatabaseToInvestigate
           + '].sys.indexes SI ON SO.object_id = SI.object_id
        WHERE   SI.index_id > 0;'
		--PRINT @SQLstr
       EXEC (@SQLstr)
 
       SET @SQLstr = 'INSERT  INTO ##Index_History
        (the_database
        ,the_schema
        ,the_object
        ,the_index
        ,index_type
        ,fill_factor
        ,rebuild_count
        ,reorg_count)
        SELECT  C.DatabaseName
               ,C.SchemaName
               ,C.ObjectName
               ,C.IndexName
               ,CASE C.IndexType
                  WHEN 1 THEN ''Clustered''
                  ELSE ''Non-Clustered''
                END AS IndexType
               ,IX.fill_factor
               ,COUNT(C.ID) AS rebuild_count
               ,0 AS reorg_count
        FROM    [' + @CommandLogDatabase + '].dbo.CommandLog C
                LEFT JOIN ##Indexes IX ON C.SchemaName = IX.the_schema
                                          AND C.ObjectName = IX.the_object
                                          AND C.IndexName = IX.the_index
        WHERE   C.CommandType = ''ALTER_INDEX''
                AND C.ObjectType = ''U''
                AND C.Command LIKE ''%REBUILD%''
                AND C.DatabaseName = ''' + @DatabaseToInvestigate + '''
        GROUP BY C.DatabaseName
               ,C.SchemaName
               ,C.ObjectName
               ,C.IndexName
               ,IndexType
               ,IX.fill_factor;'
		--PRINT @SQLstr
       EXEC (@SQLstr) 
--============================================================================
-- INDEX REORGANIZE (ONLY) COUNT 
--============================================================================
       SET @SQLstr = 'INSERT  INTO ##Index_History
        (the_database
        ,the_schema
        ,the_object
        ,the_index
        ,index_type
        ,fill_factor
        ,rebuild_count
        ,reorg_count)
        SELECT  C.DatabaseName
               ,C.SchemaName
               ,C.ObjectName
               ,C.IndexName
               ,CASE C.IndexType
                  WHEN 1 THEN ''Clustered''
                  ELSE ''Non-Clustered''
                END AS IndexType
               ,IX.fill_factor
               ,0 AS rebuild_count
               ,COUNT(C.ID) AS reorg_count
        FROM    [' + @CommandLogDatabase
           + '].dbo.CommandLog C
                LEFT JOIN ##Indexes IX ON C.SchemaName = IX.the_schema
                                          AND C.ObjectName = IX.the_object
                                          AND C.IndexName = IX.the_index
                LEFT JOIN ##Index_History IH ON C.DatabaseName = IH.the_database
                                                AND C.SchemaName = IH.the_schema
                                                AND C.ObjectName = IH.the_object
                                                AND C.IndexName = IH.the_index
        WHERE   C.CommandType = ''ALTER_INDEX''
                AND C.ObjectType = ''U''
                AND C.Command LIKE ''%REORGANIZE%''
                AND C.DatabaseName = ''' + @DatabaseToInvestigate + '''
                AND IH.the_database IS NULL
        GROUP BY C.DatabaseName
               ,C.SchemaName
               ,C.ObjectName
               ,C.IndexName
               ,IndexType
               ,IX.fill_factor;'
		--PRINT @SQLstr
       EXEC (@SQLstr)
--========================================================
-- ACCOUNT FOR INDEXES BOTH REBUILT AND REORGANIZED
--========================================================
       SET @SQLstr = 'UPDATE  ##Index_History
					  SET     reorg_count = C2.reorganize_count
					  FROM    ##Index_History IH
					  INNER JOIN (SELECT  C.DatabaseName
					                     ,C.SchemaName
					                     ,C.ObjectName
					                     ,C.IndexName
					                     ,COUNT(C.ID) AS reorganize_count
                      FROM    [' + @CommandLogDatabase + '].dbo.CommandLog C
                      WHERE   C.CommandType = ''ALTER_INDEX''
                              AND C.ObjectType = ''U''
                              AND C.Command LIKE ''%REORGANIZE%''
                              AND C.DatabaseName = ''' + @DatabaseToInvestigate
           + '''
			          GROUP BY C.DatabaseName
                            ,C.SchemaName
                            ,C.ObjectName
                            ,C.IndexName
                    ) C2 ON IH.the_database = C2.DatabaseName
                            AND IH.the_schema = C2.SchemaName
                            AND IH.the_object = C2.ObjectName
                            AND IH.the_index = C2.IndexName
					  WHERE   IH.rebuild_count > 0'
--print @SQLstr
       EXEC (@SQLstr) 
--============================================================================
-- RETURN THE RESULTS
--============================================================================
 
       SELECT   the_database
               ,the_schema
               ,the_object
               ,the_index
               ,index_type
               ,ISNULL(CONVERT(VARCHAR(20), fill_factor), 'No longer exists') AS fill_factor
               ,rebuild_count
               ,reorg_count
       FROM     ##Index_History
       WHERE    rebuild_count >= @CountThreshold
                OR reorg_count >= @CountThreshold
       ORDER BY the_database
               ,rebuild_count DESC
               ,reorg_count DESC
               ,the_object
               ,the_index
 
       INSERT   INTO ##Recommendations
                (CommandLine)
                SELECT  'USE [' + the_database + ']; ' + @CRLF
                        + 'ALTER INDEX [' + the_index + '] ON [' + the_schema
                        + '].[' + the_object + '] ' + @CRLF
                        + 'REBUILD ' + 
							CASE 
								WHEN @SQLVersion = 9 THEN ' '
								WHEN @SQLVersion > 9 THEN  'PARTITION = ALL '
							END
						+ 'WITH (PAD_INDEX = OFF, '
                        + 'STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = ON, ' +
							CASE @SupportsOnlineRebuilds 
								WHEN 1 THEN 'ONLINE = ON, '
								WHEN 0 THEN 'ONLINE = OFF, '
							END
                        + @CRLF
                        + 'ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = '
                        + CASE WHEN fill_factor = '0'
                               THEN CAST(@OverRebuildThresholdNewFF AS VARCHAR(3))
                               WHEN fill_factor = '100'
                               THEN CAST(@OverRebuildThresholdNewFF AS VARCHAR(3))
                               WHEN fill_factor <= @OverRebuildThresholdNewFF
                               THEN CAST(@OverRebuildThresholdNewFF - 10 AS VARCHAR(3))
							   WHEN (fill_factor > @OverRebuildThresholdNewFF AND 
								fill_factor <100)
                               THEN CAST(@OverRebuildThresholdNewFF AS VARCHAR(3))
                               ELSE NULL /* unknown case */
                          END + '); '  + @CRLF AS RebuildCommand
                FROM    ##Index_History
                WHERE   rebuild_count >= @CountThreshold
                        AND (fill_factor = 0
                             OR fill_factor >= 60)
                UNION ALL
                SELECT  'USE [' + the_database + ']; ' + @CRLF
                        + 'ALTER INDEX [' + the_index + '] ON [' + the_schema
                        + '].[' + the_object + '] ' + @CRLF
                        + 'REBUILD ' +
							CASE 
								WHEN @SQLVersion = 9 THEN ' '
								WHEN @SQLVersion > 9 THEN  'PARTITION = ALL '
							END
						+ 'WITH (PAD_INDEX = OFF, '
                        + 'STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = ON, ' +
							CASE @SupportsOnlineRebuilds 
								WHEN 1 THEN 'ONLINE = ON, '
								WHEN 0 THEN 'ONLINE = OFF, '
							END						
                        + @CRLF
                        + 'ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = '
                        + CASE WHEN fill_factor = '0'
                               THEN CAST(@OverReorgThresholdNewFF AS VARCHAR(3))
                               WHEN fill_factor = '100'
                               THEN CAST(@OverReorgThresholdNewFF AS VARCHAR(3))
                               WHEN fill_factor <= @OverReorgThresholdNewFF
                               THEN CAST(@OverReorgThresholdNewFF - 10 AS VARCHAR(3))
							   WHEN (fill_factor > @OverReorgThresholdNewFF AND 
								fill_factor <100)
                               THEN CAST(@OverReorgThresholdNewFF AS VARCHAR(3))
                               ELSE NULL /* unknown case */
                          END + '); ' + @CRLF AS RebuildCommand
                FROM    ##Index_History
                WHERE   reorg_count >= @CountThreshold
                        AND (fill_factor = 0
                             OR fill_factor >= 60)

       IF EXISTS ( SELECT   1
                   FROM     ##Recommendations )
          BEGIN
                SET @EmailBody = @CRLF
                    + 'Analysis of the index defrag log table has been performed and it appears '
                    + 'there are indexes that are being repeatedly defragged. Repeated defragging may '
                    + 'indicate the indexes need a lower fill factor to reduce page splits. Repeated '
                    + 'defragging also wastes resources and generates large amounts of potentially '
                    + 'unnecessary transaction log entries. The parameters used to generate this report '
                    + 'are: ' + @CRLF + @CRLF + 'Server: ' + @@SERVERNAME
                    + @CRLF + 'Database: ' + @DatabaseToInvestigate + @CRLF
                    + 'Min number of rebuilds / reorgs flagged as excessive: '
                    + CAST(@CountThreshold AS VARCHAR(3)) + '. ' + @CRLF
                    + 'Starting suggested fill factor for excessive rebuilds: '
                    + CAST(@OverRebuildThresholdNewFF AS VARCHAR(3)) + '. '
                    + @CRLF
                    + 'Starting suggested fill factor for excessive reorgs: '
                    + CAST(@OverReorgThresholdNewFF AS VARCHAR(3)) + '. '
                    + @CRLF + @CRLF + @CRLF
                    + 'Below are the suggested commands to change the fill factor on the indexes in question. '
                    + 'These commands should be carefully evaluated before being run to ensure they are '
                    + 'appropriate for the situation! In particular, if this routine is run more frequently '
                    + 'than the defrag log table is cleared, it may result in inappropriate fill factor '
                    + 'recommendations.' + @CRLF + @CRLF + @CRLF


                IF EXISTS ( SELECT  1
                            FROM    ##Index_History
                            WHERE   (rebuild_count >= @CountThreshold
                                     OR reorg_count >= @CountThreshold)
                                    AND fill_factor <= 50 
									AND fill_factor > 0)
                   BEGIN
                         SET @EmailBody = @EmailBody + @CRLF + @CRLF
                             + 'Note: Some indexes were found that are being defragged regularly and have a fill factor '
                             + 'setting of 50 or less. Reducing the fill factor further is not generally recommended '
                             + 'and further investigation is warranted.'
                             + @CRLF + @CRLF

                         INSERT INTO ##Recommendations
                                (CommandLine)
                                SELECT  '/* Database [' + the_database
                                        + '], index [' + the_index + ']'
                                        + ' on [' + the_schema + '].['
                                        + the_object + '] '
                                        + 'is being repeatedly rebuilt and its fill factor '
                                        + 'is already set to 50 or less. This likely is worth a deeper '
                                        + 'investigation. */' AS RebuildCommand
                                FROM    ##Index_History
                                WHERE   rebuild_count >= @CountThreshold
                                        AND fill_factor <= 50
										AND fill_factor > 0 
                                UNION ALL
                                SELECT  '/* Database [' + the_database
                                        + '], index [' + the_index + ']'
                                        + ' on [' + the_schema + '].['
                                        + the_object + '] '
                                        + 'is being repeatedly reorganized and its fill factor '
                                        + 'is already set to 50 or less. This likely is worth a deeper '
                                        + 'investigation. */' AS RebuildCommand
                                FROM    ##Index_History
                                WHERE   reorg_count >= @CountThreshold
                                        AND fill_factor <= 50
										AND fill_factor > 0

                   END
                SET @EmailSubject = 'Excessive Index Defragmenting found on '
                    + @@Servername + ', database ' + @DatabaseToInvestigate

                EXEC msdb.dbo.sp_send_dbmail
                    @profile_name = @MailProfileName
                   ,@recipients = @MailRecipients
                   ,@subject = @EmailSubject
                   ,@body = @EmailBody
                   ,@query = 'SELECT * FROM ##Recommendations'
                   ,@query_result_width = 32767
                   ,@query_no_truncate = 1


          END

--============================================================================
-- CLEANUP THE MESS
--============================================================================

IF EXISTS ( SELECT  name
            FROM    tempdb.sys.objects
            WHERE   name = '##Index_History' )
   BEGIN
         DROP TABLE ##Index_History
   END
 
IF EXISTS ( SELECT  name
            FROM    tempdb.sys.objects
            WHERE   name = '##Indexes' )
   BEGIN
         DROP TABLE ##Indexes
   END

IF EXISTS ( SELECT  name
            FROM    tempdb.sys.objects
            WHERE   name = '##Recommendations' )
   BEGIN
         DROP TABLE ##Recommendations
   END


At my company, it's time for our yearly performance evaluations. The first step of this process is for the employee to perform a self-evaluation. This includes rating yourself from Needs Improvement to Outstanding in 5 areas our company has deemed important. More importantly, it also includes a section where you list your specific achievements and strengths for the past year. While the first part is simply a bunch of checkboxes, this last section is where you can really demonstrate to management how awesome you are and how valuable you are to the company.

When filling out this section, I like to include not only specific actions I have taken that helped the company, but also any steps I have taken to further develop myself professionally. I believe this helps communicate to management that you take your role seriously and you are concerned with staying current in your field. This has the potential to turn into a nice circle of positive reinforcement where the company recognizes the value in furthering your education and becomes willing to spend more training dollars on you.

My 2013 self-appraisal reads:

Accomplishments

  • Retired 1 of 3 remaining SQL Server 2000 servers.
  • Reduced number of SQL Server 2005 servers from 38 to 28.
  • Virtualized several production and development SQL Servers
  • Implemented database compression on our two largest SQL Servers. This saved 1.2 TB of expensive SAN storage while improving database performance.
  • Tuned SQL queries for <reporting database> resulting in query runtimes decreasing from 1.5 hours to less than 1 second.
  • No SQL Server-related actionable items found by audit for the second year in a row.

Professional Development

If you were writing a self-appraisal for 2013, how would yours read?


Lately, I noticed that SSMS was taking longer and longer to start. When I started at my company a couple years ago, SSMS would come up fairly quickly. Now, it was taking about 3 minutes to start up. As the time gradually increased, I initially chalked it up to newer versions of SQL - in my time here, I've gone from using the SSMS version that comes with SQL 2005 to 2008 R2 to 2012. We all know software experiences bloat over time as more features get added to newer releases. But today, during an unrelated issue, I happened to be going through my computer's System Event log and saw a bunch of DCOM errors:

[Screenshot: DCOM errors in the Windows System Event Log]

I decided to track down the cause. The timestamps all seemed to indicate that the problem occurred shortly after my computer booted. I spent too much time chasing down processes that ran on my machine during the Windows boot process before I finally realized my problem was in how I use SQL Server Management Studio:

[Screenshot: decommissioned server entries in my SSMS Registered Server list]

As I decommissioned servers, I would move their entries in my SSMS Registered Server list to a special folder. I kept the entries around because I connect to some of my servers via Windows authentication, some use SQL authentication, some are referenced by DNS name, some by IP address, etc. I wanted to save all that connection configuration info in case I needed it later for some reason.

As it turns out, this was what was causing SSMS to take so long to launch. When SSMS launches, it tries to query all the servers in the Registered Server list to see if they are offline or online so it can display the little green arrow or red square icon next to the server name. By leaving the decommissioned server entries in the list, SSMS was trying to connect to servers that no longer existed. Thus, on launch, I had to wait for several DCOM timeouts to occur before SSMS was available for use. As soon as I removed these entries, SSMS started quickly once more and the DCOM errors no longer appeared in my Windows event log.

As my length of employment here has increased, the number of decommissioned servers has increased, which explains the slowly increasing SSMS startup delay. Why did the errors always seem to occur when my system booted? SSMS is the first program I launch when I power up and log in each morning. Mystery solved!
