Business Intelligence Blogs

Executing DBCC for SQL Server Analysis Services 2016

In the upcoming release of SQL Server Analysis Services 2016, one of the new features you’ll see is the ability to perform a database consistency check against your SSAS cubes and Tabular models. Just like in the database engine side of things, DBCC for SSAS checks for corruption across the entire database or individual objects within the database.

The DBCC command is shaped like the XMLA Process command, so there's not a lot of complexity to it. Below, you can see the basic syntax for the SSAS DBCC command. It's worth noting that the syntax of the command looks the same whether you're running it against an SSAS multidimensional database or a Tabular model.
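The original screenshot of the syntax isn't reproduced here, so here is a sketch of the general shape of the DBCC XMLA command; the IDs (MyDatabase, MyCube, and so on) are placeholders you would swap for your own object IDs:

```xml
<DBCC xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyDatabase</DatabaseID>
    <CubeID>MyCube</CubeID>
    <MeasureGroupID>MyMeasureGroup</MeasureGroupID>
    <PartitionID>MyPartition</PartitionID>
  </Object>
</DBCC>
```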


To run the DBCC command, just open a new MDX query window and use the code seen above. Enter the IDs of your database, cube, measure group, and/or partition.

When you're running the DBCC command against a Tabular model, there are a couple of things I'd like to point out.

In the element for the CubeID, you’ll need to specify the ID of the Model. And in the element for the MeasureGroupID, specify the ID for the table you want to check.

[Image: DBCC XMLA command for SSAS]

If you want to check the whole database or model for consistency, simply remove the lower-level elements. For example, if I wanted to check the whole model, I would just leave out the elements for MeasureGroupID and PartitionID.
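As a sketch, with hypothetical database and model IDs, a whole-model check would look something like this:

```xml
<DBCC xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyTabularDatabase</DatabaseID>
    <CubeID>MyModel</CubeID>
  </Object>
</DBCC>
```

Remember that for a Tabular model, CubeID holds the ID of the model itself.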

To find the MeasureGroupID (Table ID) or PartitionID in a Tabular model, just navigate to the Properties for that object.

[Image: Find the SSAS Tabular MeasureGroup ID or Table ID]

To find the Partition ID in a Tabular model, right click the table and select Partitions. Then highlight the partition you want to check and click the Settings icon.

[Image: Find the SSAS Tabular partition ID]

If you run SQL Server Profiler against SSAS while executing the DBCC command, you can see the individual checking of the columns, tables, database and more.

[Image: SSAS Tabular Profiler trace of DBCC]

I also ran a trace against my SSAS 2016 OLAP instance to watch each segment of…

Read more

Hello Operator: Key Lookup

  • 11 June 2013
  • Author: Kathi Kellenberger

The T-SQL Tuesday topic this month, “Hello Operator”, is about execution plan operators. I would like to talk about the Key Lookup Operator.

So, the first thing you may be wondering about is if it is good or bad to see the Key Lookup operator in your plan. Well, that is one of those “it depends” kind of questions. Sometimes it is OK, and sometimes it is very, very bad.

Let’s take a look at why the Key Lookup operator would be used. Say you have a query with a WHERE clause, and there is a nonclustered index that the optimizer determines can be used to locate the rows. One of the benefits of nonclustered indexes is that, if all the columns required for the query results are part of the nonclustered index, the underlying table does not have to be accessed. An index like this is called a "covering" index.

I’ll steal a great analogy (or is it metaphor?) from my friend, Jason Strate. Travel back in time when the public library catalog was paper based. To find a book, you looked at cards in cabinets. If you wanted to find the names of all the books written by Stephen King, you could get that information from the catalog. If you wanted to actually read one of those books, you would have to walk to the appropriate shelf to get the book.

SQL Server works the same way. If all the information required for the query is contained within the nonclustered index, it is not necessary to access the actual table. If some of the required columns are not found in the nonclustered index, SQL Server will have to pull data from some of the pages of the table. To see this in the execution plan, look for the Key Lookup operators.  If the nonclustered index is very selective and there are just a handful of rows returned, this is not so bad. But if the index is not selective or there are hundreds of thousands of rows, then this is bad. Sometimes it would be more efficient to just scan the clustered index or heap instead of using the index.

Here is an example using the AdventureWorks database. The Sales.SalesOrderDetail table has a nonclustered index on ProductID, but the index doesn’t contain the OrderQty column.

SELECT OrderQty, ProductID
FROM Sales.SalesOrderDetail
WHERE ProductID = 897;

This query returns two rows, but still 98% of the effort is expended by the Key Lookup. If I change to a ProductID that returns many more rows, the optimizer might do a clustered index scan instead, along with an index suggestion.

SELECT OrderQty, ProductID
FROM Sales.SalesOrderDetail
WHERE ProductID = 707;

The two queries are ad-hoc queries. If the queries were instead parameterized, such as within a stored procedure, we now have a problem called “parameter sniffing”. The optimizer will store the plan for the first time the proc is called and use that plan even if it would be better to use a different plan. The next example demonstrates this.

--Create the proc
CREATE PROC usp_GetQty @ProductID INT AS
SELECT OrderQty
FROM Sales.SalesOrderDetail
WHERE ProductID = @ProductID;
GO
--Call with 897
EXEC usp_GetQty @ProductID = 897;

The call to the stored procedure uses the Key Lookup. That is what we expect since the Key Lookup is efficient when only two rows are returned. Now call the proc with 707.

EXEC usp_GetQty @ProductID = 707;

In this case, the optimizer reuses the original plan, which is not the best option for a ProductID of 707. Using the SET STATISTICS IO option, we can compare the IO required to return the rows with the two different methods.

SET STATISTICS IO ON;
SELECT OrderQty, ProductID
FROM Sales.SalesOrderDetail
WHERE ProductID = 707;
EXEC usp_GetQty @ProductID = 707;

How can we solve this problem? There are a few ways involving query hints, but in this case I would just add the OrderQty column to the ProductID index as an included column, especially if this query runs frequently. That eliminates the need to get the OrderQty values from the table. The index suggested along with the clustered index scan should not be used; it would actually create a duplicate index, but that is a story for another day...
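A sketch of that fix, assuming the AdventureWorks index on ProductID is named IX_SalesOrderDetail_ProductID (verify the actual index name on your instance before rebuilding it):

```sql
--Rebuild the existing ProductID index with OrderQty as an included column,
--so the index covers the query and no Key Lookup is needed
CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID
ON Sales.SalesOrderDetail (ProductID)
INCLUDE (OrderQty)
WITH (DROP_EXISTING = ON);
```

WITH (DROP_EXISTING = ON) replaces the existing index in one statement instead of requiring a separate DROP INDEX.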

NOTE: There is also an operator called RID Lookup that works the same way for heaps (tables without a clustered index).





Categories: SQL Server