Business Intelligence Blogs

View blogs by industry experts on topics such as SSAS, SSIS, SSRS, Power BI, Performance Tuning, Azure, Big Data and much more! You can also sign up to post your own business intelligence blog.


Data Warehouse from the Ground Up at SQL Saturday Orlando, FL on Oct. 10th

SQL Saturday #442 is upon us and yours truly will be presenting in Orlando, Florida on October 10th alongside Mitchell Pearson (b|t). The session is scheduled at 10:35 AM and will last until 11:35 AM. I’m very excited to be presenting at SQL Saturday Orlando this year, as it’ll be my first time presenting this session in person and my first time speaking at SQL Saturday Orlando! If you haven’t registered yet for this event, you need to do that. This event will be top notch!

My session is called Designing a Data Warehouse from the Ground Up. What if you could approach any business process in your organization and quickly design an effective and optimal dimensional model using a standardized step-by-step method? In this session I’ll discuss the steps required to design a unified dimensional model that is optimized for reporting and follows widely accepted best practices. We’ll also discuss how the design of our dimensional model affects a SQL Server Analysis Services solution and how the choices we make during the data warehouse design phase can make or break our SSAS cubes. You may remember that I did this session a while back for Pragmatic Works via webinar. I’ll be doing the same session at SQL Saturday Orlando but on-prem! ;)

So get signed up for this event now! It’s only 11 days away!

Read more

Create Date Dimension with Fiscal and Time

Here are three scripts that create a Date dimension and a Time dimension and can add fiscal columns as well. First, run the Create Dim Date script to create the DimDate table; make sure you change the start date and end date in the script to your preference. Then run the Add Fiscal Dates script to add the fiscal columns; make sure you alter the fiscal script to set the date offset amount. The comments in the scripts will help you with this.

This zip file contains three SQL scripts.

Create Dim Date

Create Dim Time

Add Fiscal Dates

These will create a Date dimension table and allow you to run the Add Fiscal Dates script to add the fiscal columns if you desire. The Create Dim Time script will create a Time dimension with every second of the day for those that need actual time-of-day analysis of their data.

Make sure you set the start date and end date in the Create Dim Date script, and set the date offset in the fiscal script.
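If you’d like a feel for what the scripts do before downloading them, here is a minimal sketch of a date dimension load with a fiscal offset. The table name, column names, and the month-based offset are hypothetical stand-ins; the actual scripts in the download are far more complete.

-- Hypothetical minimal sketch of a date dimension load; adjust the dates to taste.
DECLARE @StartDate DATE = '2010-01-01';
DECLARE @EndDate   DATE = '2020-12-31';
DECLARE @FiscalMonthOffset INT = 6;  -- e.g. a fiscal year that starts in July

CREATE TABLE dbo.DimDate
       (DateKey      INT  NOT NULL PRIMARY KEY  -- e.g. 20151010
       ,FullDate     DATE NOT NULL
       ,CalendarYear INT  NOT NULL
       ,MonthOfYear  INT  NOT NULL
       ,FiscalYear   INT  NOT NULL);

WHILE @StartDate <= @EndDate
BEGIN
       -- Shift the calendar date by the fiscal offset before extracting the fiscal year.
       INSERT INTO dbo.DimDate (DateKey, FullDate, CalendarYear, MonthOfYear, FiscalYear)
       VALUES (CONVERT(INT, CONVERT(CHAR(8), @StartDate, 112))
              ,@StartDate
              ,YEAR(@StartDate)
              ,MONTH(@StartDate)
              ,YEAR(DATEADD(MONTH, @FiscalMonthOffset, @StartDate)));
       SET @StartDate = DATEADD(DAY, 1, @StartDate);
END;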

Download the script here:


Read more

Excel Tip #29: Forcing Slicers to Filter Each Other when Using CUBE Functions

As I mentioned in my original post, Exploring Excel 2013 as Microsoft’s BI Client, I will be posting tips regularly about using Excel 2013 and later.  Much of the content will be a result of my daily interactions with business users and other BI devs.  In order to not forget what I learn or discover, I write it down … here.  I hope you too will discover something new you can use.  Enjoy!


You have gone to all the trouble of building out a good set of slicers which allow you to “drill” down to details based on selections. In my example, I have created a revenue distribution table using cube formulas such as:

=CUBEVALUE("ThisWorkbookDataModel",$B6, Slicer_Date, Slicer_RestaurantName, Slicer_Seat_Number, Slicer_TableNumber)


Each cell with data references all the slicers. When working with pivot tables or pivot charts, the slicers will hide values that have no matching reference. However, since we are using cube formulas, the slicers have no ability to cross-reference each other. For example, when I select a date and a table, I expect to see my seat list shrink, but it does not. All of my slicers are set up to hide options that have no data. There are two examples below. In the first, you can see that the seats are not filtered; however, this may be expected. In the second example, we filter a seat, which should cause the tables to hide values, and it does not work as expected either.



As you can see in the second example, we are able to select a seat that is either not related to the selected table or has no data on that date. Neither of these scenarios is user friendly, and neither directs our users to where the data matches.

Solving the Problem with a “Hidden” Pivot Table

To solve this issue, we are going to use a hidden pivot table. In most cases we would add this to a separate worksheet and then hide the sheet from the users. For the sake of this example, I am going to leave the pivot table in plain sight.

Step 1: Add a Pivot Table with the Same Connection as the Slicers

In order for this to work, you need to add a pivot table using the same connection you used with the slicers. The value you use in the pivot table should only be “empty” or have no matches when that is the expected result. You want to make sure that you do not unintentionally filter out slicer options when data exists. In my example, I will use the Total Ticket Amount as the value. That will cover my scenario. In most cases, I recommend looking for a count type value

Read more

SQL Saturday #453–Minnesota 2015 Session Recap–A Window into Your Data

SQL Saturday Minnesota

T-SQL Window Functions

Thanks for attending my session on T-SQL Window Functions. I hope you learned something you can take back and use in your projects or at your work. You will find a link to the session and the code I used below. If you have any questions about the session, post them in the comments and I will try to get you the answers.

The presentation can be found here:

The code was put into a Word document that you can get here:

This session is also backed by an existing blog series I have written.

T-SQL Window Functions – Part 1: The OVER() Clause

T-SQL Window Functions – Part 2: Ranking Functions

T-SQL Window Functions – Part 3: Aggregate Functions

T-SQL Window Functions – Part 4: Analytic Functions
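
If you’d like a quick taste of what the series covers, here is a minimal example that combines a ranking function with a windowed aggregate; the dbo.Orders table and its columns are hypothetical.

-- Hypothetical example: sequence each customer's orders and keep a running total.
SELECT  CustomerID
       ,OrderDate
       ,OrderAmount
       ,ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate) AS OrderSeq
       ,SUM(OrderAmount) OVER (PARTITION BY CustomerID ORDER BY OrderDate
                               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS RunningTotal
FROM dbo.Orders;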


MSDN Resources:

Read more

Thank You for Attending my #SQLSatOrlando Session! Slides, Resources, Recording

SQL Saturday #477 in Orlando, FL has come and gone, but what a turnout! The event was excellent, we had a great turnout for our session, and we had a blast! And as a bonus, the BBQ lunch, baked beans, coleslaw, mac n cheese, and dessert were amazing. Seriously one of the best lunches I’ve had at a SQL Saturday event! Plus, the Lego name tags were epic! 100% without a doubt the coolest name tag ever.

Thank you to everyone that attended my session this past weekend! I apologize for the lack of space, but we had quite a turnout: people were sitting in every aisle, piled up in the front, and standing along the back walls and windows. You all had some really great questions and some very valid points. Because of you, our session ended up being a great discussion! Thank you so much!

Standing room only!

Download the Session Materials

If you’d like to download the PowerPoint slide deck that I used during the session, you can find the link down below. Also, if you’d like to download the notes Mitch and I used to prep for and during the session, you’ll find that link below as well.

Download Dustin’s and Mitch’s PowerPoint Slide Deck for Data Warehouse from the Ground Up

Download Dustin’s and Mitch’s Notes

Also, in the past I presented this material during an online webinar for Pragmatic Works, so if you missed my session or the event entirely, you can watch the session recording for free!

Watch Dustin’s and Mitch’s Webinar Recording for Data Warehouse from the Ground Up

Data Warehouse Design Resources

There are two books that I highly recommend if you’re looking to learn the tenets of designing a perfect star schema data warehouse database. These books are excellent and should be in every data warehouse professional’s library, in my opinion!

The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling
Star Schema: The Complete Reference


Thank you for all the great feedback we received during and after our session. As speakers and

Read more

Power BI Tips, Tricks & Best Practices Webinar Recording & Materials Now Available


Thank you to everyone that attended my Power BI webinar last month, September 29th. Sorry it’s taken me a while to finally make the information available, but my schedule has been crazy lately! The good news is, however, that the recording is available! So if you weren’t able to watch the webinar live, you can still catch the recording anytime you like.

View the Power BI webinar recording

Also, if you’d like to play along or create your own Power BI dashboards using the temperature and precipitation data I used in the webinar, I’ve made the files available to download in the following link. I’ve also included the .pbix file which includes a few example dashboards I developed using the data. Feel free to download the materials and have a go at it!

Download the Power BI model files and data

Questions & Answers

I received a ton of questions during the webinar and there’s no possible way I can answer all of them, but I figured I’d at least take a look at a few.

Q: Is Power BI Desktop free?

A: It sure is! And you can download it here.

Q: Where do we begin to learn more information on the setup/security?

A: At this point Power BI is changing so fast that the best resource we have available is the Power BI Knowledge Base, which you can find here. There’s a Power BI whitepaper floating around out there, but even that is quickly becoming out of date. Microsoft has recommended that the Power BI Knowledge Base be used as the best source of information on Power BI.

Q: Is there a possibility for drill-down in the visualizations?

A: Since my webinar, this feature has been added! Just download the latest version and get crackin’! You can learn more about the drill down functionality here.

Q: How is data refreshed on the Power BI site?

A: Data can be refreshed manually by selecting Refresh Now, or the refresh operation can be scheduled. In order to schedule a data refresh for on-prem data sources, it’s important to note that the Power BI Personal Gateway must be installed on the server where the data exists. To learn more about refreshing your data in Power BI, read this.


Thank you again for attending my webinar and viewing my recording. If you have any questions regarding the webinar or Power BI, feel free to leave a question below!

Read more

SQL Internals Reading Data Records Part 5: Variable Offset Array

  • 6 July 2012
  • Author: BradleyBall


Welcome back, Dear Reader, to Part 5 of our series on how to read a data record.  In Part 1 we covered the Tag Bytes.  In Part 2 we covered the Null Bitmap Offset.  In Part 3 we covered the Fixed Data Portion of a record.  And in Part 4 we talked about the Null Bitmap.  Today we will be discussing the Variable Offset Array.  This part of the record is another optimization: the variable offset array will only show up IF we have variable-length data types in our table and they are not NULL.  If they are all NULL, then we will not get an offset array.


“So Balls”, you say, “Why no offset array if it is NULL?”


Excellent question, Dear Reader, and the answer is in the name itself.  Just about as clear as mud?  Let’s get to the demos and clear this up.




First let’s update our chart so we know what part of the data record we are tackling.  Once again these images come by way of Paul Randal (@PaulRandal | Blog), the MCM Video series on Data Structures, and the good people from Microsoft.



What the variable offset array does is house the numeric offset to the end of each variable-length value that we want to read.  So if we have three variable-length columns set to a max of 15, 20, and 25, but we only use a portion of them (8 out of 15, 10 out of 20, and 20 out of 25), then we would have unused portions, right?  Wrong.


If we used the meta data and just read the full declared length of the columns, we would get things wrong and wade into bytes we do not need; not to mention that, by acting like fixed-length data, our records would bloat and there would always be wasted space.  This is VARIABLE length data; we only use what we need.  Think of this as the Green Data Type.  All this recycling is cool and all, but it means we need some extra data to help us read the record efficiently.  And voila, we get our Variable Length Offset Array.  It tells us where each variable-length value ends, so we go to the end of the previous value and read forward.


So without further ado let’s get right to it.  We’ll use the same code that we used on day 4, and we’ll create our table and one record of data.


DROP TABLE dataRecord2
CREATE TABLE dataRecord2
                     (myID INT
                     ,myfixedData CHAR(4)
                     ,myVarData1 VARCHAR(6)
                     ,myVarData2 VARCHAR(6)
                     ,myVarData3 VARCHAR(6))
INSERT INTO dataRecord2(myID, myfixedData, myVarData1, myVarData2, myVarData3)
VALUES (7, 'XXXX', NULL, NULL, NULL)


Now let’s do a DBCC IND and get our page numbers.


DBCC IND(demoInternals, 'dataRecord2',  1)



Remember that page type 10 is an allocation page, and we are reading a data page, so look for the page number that has PageType=1.  We’ll follow that up with a DBCC PAGE on page 296, remembering to turn on trace flag 3604 so that we get our output in our SSMS window.  *Remember, your page numbers may be different than mine.
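
For reference, turning the trace flag on looks like this:

-- Send DBCC PAGE output to the SSMS messages window instead of the error log.
DBCC TRACEON(3604);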


DBCC PAGE('demoInternals', 1, 296, 1)



Now that we’ve got our page, let’s take a look at it.  I’m only going to post the relevant output.

Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP    Record Size = 15


Memory Dump @0x0000000010E3A060


0000000000000000:   10000c00 07000000 58585858 05001c             ........XXXX...


You may be wondering where our variable offset array is.  The answer is: there isn’t one.  Our record had only NULL values in its variable-length columns, and so it only took up the space that it needed.  No variable-length data means no need for a variable column offset array.  So let’s update one of those columns and see what happens.


UPDATE dataRecord2
SET myVarData2='WWWWWW'
DBCC PAGE('demoInternals', 1, 296, 1)



Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS

Record Size = 27                   

Memory Dump @0x000000000A03A060


0000000000000000:   30000c00 07000000 58585858 05001402 0015001b  0.......XXXX........

0000000000000014:   00575757 575757                               .WWWWWW


We have several values in our variable offset array.  First we have 0200, then 1500, and finally 1b00.  As you might have guessed, these are byte-swapped hex pairs.  Our first pair tells us the number of variable-length columns in use, in this case 0200, which translates to 0x0002, or just 2.  We have three variable-length columns, but remember we just updated the middle column, so we have to account for the column in front of it but not the one behind it, because we do not yet need that space.  1500 is our next hex pair, which translates to 0x0015; in this case we are translating hex to decimal and not binary (remember to use our handy tool), and this translates to 21.


So how do we get to 21?  2 bytes for our tag bytes + 2 bytes for the null bitmap offset + 8 bytes of fixed-length data + 3 bytes for the null bitmap (a 2-byte column count plus 1 byte of bitmap) + 6 bytes for the variable offset array itself (a 2-byte variable column count plus two 2-byte offsets), or simply (2+2+8+3+6) = 21.  And at offset 21 there is nothing, because our first variable column is NULL.  Our remaining value is 1b00, or 0x001b, also known as 27.  Remember, we inserted six W’s into our variable-length field, so just add 6 to 21 and we get 27, the end of our current record.
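
Incidentally, if you don’t have a hex-to-decimal tool handy, SQL Server will do the translation for you once you un-swap the byte pairs:

-- Translate the byte-swapped pairs from the dump (1500 -> 0x0015, 1b00 -> 0x001b).
SELECT CONVERT(INT, 0x0015) AS FirstVarColumnEnd   -- 21
      ,CONVERT(INT, 0x001B) AS SecondVarColumnEnd; -- 27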


Just to double check, let’s insert some data into our final variable-length column and look at the updates.


UPDATE dataRecord2
SET myVarData3='BBBB'
DBCC PAGE('demoInternals', 1, 296, 1)


Remember, I’m just putting out the relevant data.

Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS

Record Size = 33                   

Memory Dump @0x000000000A03A060


0000000000000000:   30000c00 07000000 58585858 05000403 0017001d  0.......XXXX........

0000000000000014:   00210057 57575757 57424242 42                 .!.WWWWWWBBBB


Right away we see our values have changed: 0300, 1700, 1d00, and 2100 are our new values.  We now have 3 variable-length columns in use, so we see that reflected in 0x0003, or 3.  Our first offset incremented by 2, to 0x0017 or 23, because the addition of our 3rd column grew the offset array by one 2-byte entry.  Our second column’s offset has incremented as well, from 0x001b to 0x001d, or 29.  The new value for our 3rd column is 2100, which translates to 0x0021, or 33.  If we add 4, for the four B’s we added to column 3, to 29, which is the ending point of column 2, we get 33 (4+29=33).  So one last update and we’ll have gotten rid of all the NULLs.


UPDATE dataRecord2
SET myVarData1='SSSS'
DBCC PAGE('demoInternals', 1, 296, 1)

Once again I’m only going to copy out the relevant output.


Record Type = PRIMARY_RECORD        Record Attributes =  NULL_BITMAP VARIABLE_COLUMNS

Record Size = 37                   

Memory Dump @0x000000001398A060


0000000000000000:   30000c00 07000000 58585858 05000003 001b0021  0.......XXXX.......!

0000000000000014:   00250053 53535357 57575757 57424242 42        .%.SSSSWWWWWWBBBB


And now we’ve shifted our values one more time.  The relevant values are 0300, 1b00, 2100, and 2500.  We still have 3 columns, 0x0003, but now our first variable-length column has grown: its ending point is 0x001b, or 27.  The ending point of our second variable-length column is 0x0021, or 33, and the ending point of our 3rd and final column is now 0x0025, or 37.


And that, as they say, Dear Reader, is that.  I hope you enjoyed the read, and as always, thanks for stopping by.





Categories: Analysis Services