One of the problems DBAs encounter when working with very large databases (VLDBs) is an obvious one – lack of disk space. These databases grow over time, seeming to swallow whatever disk space you can throw at them. As they get bigger, it also takes longer to perform basic maintenance tasks. If the database gets big enough, it’s possible that the time it takes to perform the maintenance exceeds the scheduled maintenance window. When that happens, it’s tempting to just stop performing maintenance, especially if you have never encountered any problems with the database in the past. This is a recipe for disaster.

As I went through all the databases here at my new company, I discovered a VLDB that was about 600 GB. This particular server runs on local disk drives, not a SAN, and there are no more drive bays available to add disk space. The former DBA here, in an effort to stop running out of space, implemented a job to shrink the database weekly. That temporarily solved the disk space issue but created others. Notably, performance dropped. Not only does shrinking the database completely fragment the indexes, it hurts performance because every time the database needs to grow, it has to zero out roughly 60 GB of disk space. (The autogrowth setting was 10% of the file size.)
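If you suspect you have a similar setup, the autogrowth configuration is easy to check. A minimal sketch (run in the context of the database in question; column math converts 8 KB pages to MB):

```sql
-- Inspect file sizes and autogrowth settings for the current database.
-- is_percent_growth = 1 means growth is a percentage of the file size --
-- the setting that caused the large zero-out pauses described above.
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files;
```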

Did I also mention there was no maintenance being performed (other than the shrink)?

This database was a mess. The first thing I did was kill the shrink job. I then changed the autogrowth setting to something more reasonable – a fixed 500 MB. This particular database didn’t seem to be growing as much as I initially thought it would, so I started looking at getting some index maintenance going to fix all the fragmentation I was sure was present. As I dove into this more deeply, I discovered the database had over 65,000 tables, each with about 2 to 5 indexes. Checking them, I did indeed find nearly 99% fragmentation in almost all of the indexes. I also found something else – the page fullness of many of the indexes was between 8% and 12%! Now, if this database were handling thousands of DML statements daily, that might be an appropriate value. But after looking at the server load, it was clear to me this was not the case. There was no need for the index pages to be that sparsely populated. So as part of my index maintenance, I also rebuilt the indexes with a fill factor of 90%. (It also helped that this was an Enterprise edition of SQL Server, so online index rebuilds were supported and I was therefore not so chained to my maintenance window.)
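The individual steps look roughly like this. The database, table, and index names below are placeholders, and in practice you would loop over the fragmentation results rather than rebuild one index by hand:

```sql
-- Switch autogrowth from 10% to a fixed 500 MB increment
-- (YourDatabase / YourDataFile are placeholder names).
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDataFile, FILEGROWTH = 500MB);

-- Find fragmented indexes and their page fullness. SAMPLED mode is needed
-- because avg_page_space_used_in_percent is not populated in LIMITED mode.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent   -- the "page fullness"
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild an index online with a 90% fill factor
-- (ONLINE = ON requires Enterprise edition).
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
REBUILD WITH (FILLFACTOR = 90, ONLINE = ON);
```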

Turns out, this simple maintenance solved two problems at once. Not only did performance of the database improve due to the index defrags, but increasing the page density of the indexes freed up a ton of space, helping to curb the need for additional drive space. Before I started my maintenance, the database was 605 GB in size with 75 GB of free space. After maintenance was complete, the database was still 605 GB in size, but it now had 102 GB of free space. (Now granted, the log file backups and differential backup sizes increased during the few days I was performing my maintenance – all those index rebuilds are logged, after all – but those are temporary increases that will drop off the hard disks as the backup files age.) This free space is internal to the database files, so the available disk space hasn’t changed, but at least now the database shouldn’t need to grow for a while.
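You can measure that internal free space yourself by comparing each data file's allocated size to the pages actually in use. A minimal sketch, run in the context of the database:

```sql
-- Allocated size vs. space actually used inside each data file.
-- FILEPROPERTY(..., 'SpaceUsed') returns used pages; * 8 / 1024 converts to MB.
SELECT name,
       size * 8 / 1024                                     AS allocated_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024          AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';   -- data files only; omit to include the log
```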

So keep in mind that routine maintenance not only helps performance, it can also help manage disk space. Do not neglect basic maintenance just because it takes a long time. That will only cause you more trouble down the road!
