
DBA Question of the Day: What Changes have occurred to DBCC CHECKDB in SQL 2012?

  • 24 August 2012
  • Author: BradleyBall



As a DBA, some of the most interesting features of a new SQL Server release are the bits that changed in the code, not necessarily those in the marketing brochure.  In SQL 2008 R2 that was the lock-hash key algorithm change.  If you're not familiar with that, just wait; it will be the next DBA Question of the Day.

In SQL Server 2012, one of the changes we should be aware of is the set of improvements made to DBCC CHECKDB.  It is all internal and under the covers.  Bob Ward detailed these changes in posts on the MSDN blogs: Part 1 talks briefly about the issue they were facing, and Part 2 really dives under the covers to explain how and why things changed.

In short, Bob and Andrew Cherry worked on an effort to speed up DBCC CHECKDB for VLDBs.  They changed the way that “object batching” and “page batching” occur within DBCC CHECKDB.  These changes specifically target how DBCC CHECKDB runs with the PHYSICAL_ONLY option.  However, Bob goes on to point out that you could potentially see improvements in all areas of DBCC CHECKDB, “could” being the operative word.  Their goal was a simple one: increase the disk throughput that DBCC CHECKDB could use, making more effective use of disk I/O.
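As a quick illustration, here is what a PHYSICAL_ONLY check looks like (the database name YourVLDB is a placeholder, not something from Bob's posts):

```sql
-- Limited check of page-level physical consistency; skips the deeper
-- logical checks.  This is the mode the 2012 improvements specifically target.
DBCC CHECKDB (YourVLDB) WITH PHYSICAL_ONLY, NO_INFOMSGS;
```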

Normally DBCC CHECKDB reads pages in batches.  This is by design; for more on that, read Paul Randal’s CHECKDB From Every Angle: Complete Description of all CHECKDB stages and How does DBCC CHECKDB WITH ESTIMATEONLY WORK? for all the deep-down SQL fun.

Tempdb is used to store the “fact tables” that are created by the CHECKDB process.  CHECKDB tries to minimize the space required in tempdb by reading objects in batches, closing a batch once it contains 512 or more indexes or once the total estimate of the facts for the batch exceeds 32 MB.  In a TB-sized database you can see how that could be a choke point.  Enter Object Batching.
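If you want to see how much tempdb space those fact tables would need before committing to a full run, CHECKDB can estimate it for you (again, the database name is a placeholder):

```sql
-- Reports the estimated tempdb space needed for the CHECKDB fact tables
-- without actually performing the consistency checks
DBCC CHECKDB (YourVLDB) WITH ESTIMATEONLY;
```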

Object Batching changes the way that IAM chains and allocation bitmaps are read in.  CheckAlloc reads in all the allocation bitmaps, but CheckTable can page some of them out of cache, and re-reading them required small but frequent 8 KB page reads that paused the process.  Now 64 KB reads are done instead, which dramatically reduces the time CHECKDB runs in (if the DB is large enough that allocation pages would have been pushed out of memory).  Under Trace Flag 2562, Object Batching goes further and reads the entire database and its objects in ONE BATCH instead of multiple batches.  This causes tempdb growth and will require a tempdb of at least 5% or more of the size of the user DB, so tempdb sitting on a slower I/O system could prove to be a bottleneck.
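A sketch of enabling the single-batch behavior for one run, assuming you have first verified that tempdb has the headroom (roughly 5% or more of the user database, per the guidance above):

```sql
-- Enable Trace Flag 2562 globally so CHECKDB processes the database
-- as a single batch -- watch tempdb growth while this runs!
DBCC TRACEON (2562, -1);

DBCC CHECKDB (YourVLDB) WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- Turn the flag back off once the check is done
DBCC TRACEOFF (2562, -1);
```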


After Bob and Andrew tackled object batching, they were still looking for ways to improve performance.  They had noticed a high number of waits on DBCC_MULTIOBJECT_SCANNER, a non-page latch that occurs when DBCC CHECKDB is run in parallel.  They found the latch was not being used in the most effective manner.  Enter the concept of “Page Batching”.
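If you want to see whether your own parallel CHECKDB runs are spending time on this latch, the waits show up in the latch-statistics DMV (a quick sketch; note the counters are cumulative since the last restart or stats clear):

```sql
-- Cumulative waits on the latch class used by parallel DBCC CHECKDB
SELECT latch_class, waiting_requests_count, wait_time_ms
FROM sys.dm_os_latch_stats
WHERE latch_class = 'DBCC_MULTIOBJECT_SCANNER';
```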


Page Batching changes the way that the worker threads in a parallel DBCC CHECKDB run operate.  It allows more workers to read more pages at a time and to hold the latches for less time, letting the workers get more work done more quickly.  This does not require a trace flag and is built into SQL 2012.


These improvements are also available downlevel: in CU9 for SQL 2008 SP2, CU4 for SQL 2008 SP3, CU11 for SQL 2008 R2, and CU4 for SQL 2008 R2 SP1, and they are built into SQL 2008 R2 SP2 as well.  Find more detail in Bob Ward's posts mentioned above.
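If you are unsure which build a server is on before relying on those fixes, a quick version check looks like this (a sketch):

```sql
-- Returns the exact build, service-pack level, and edition of the instance
SELECT SERVERPROPERTY('ProductVersion') AS build,
       SERVERPROPERTY('ProductLevel')   AS service_pack,
       SERVERPROPERTY('Edition')        AS edition;
```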


As Always Thanks for reading!



