Business Intelligence Blogs

View blogs by industry experts on topics such as SSAS, SSIS, SSRS, Power BI, Performance Tuning, Azure, Big Data and much more! You can also sign up to post your own business intelligence blog.


Power BI Publish to Web for Anonymous Access is Here

Earlier this week, the Microsoft Power BI team made an incredibly exciting announcement and released Power BI “publish to web” as a preview feature. This is HUUUUGE news! This was probably the most requested feature, and it’s finally here thanks to the hard work and dedication of the Microsoft Power BI team!


Power BI “publish to web” allows you to easily expose a Power BI report to the world through an iframe that can be embedded wherever you like.

To publish your Power BI report to the web, log into your Power BI site.

Find the report that you want to share and click File in the top left.

You’ll see a pop-up message box. Click the yellow button to create the embed code.

This is where you’ll see a very important warning!
WARNING: Reports that you expose through the “publish to web” feature will be visible to everyone on the internet! This means NO AUTHENTICATION is required to view the report that is embedded in your application.

Once you do that, you’ll receive an embed code that you can then use to expose your Power BI report within your blog!
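To give you an idea of what the embed code looks like, here is a rough sketch. A publish-to-web embed code is simply an iframe; the `src` URL below is a placeholder, not a real embed link, and the dimensions are just example values:

```html
<!-- Sketch of a Power BI "publish to web" embed code.
     The src value is a placeholder; your actual embed code
     will contain a unique token generated by Power BI. -->
<iframe
    width="800"
    height="600"
    src="https://app.powerbi.com/view?r=YOUR_EMBED_TOKEN"
    frameborder="0"
    allowFullScreen="true">
</iframe>
```

You can paste this snippet into any HTML page or blog post that allows embedded iframes.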

The embedded report maintains all the interactivity features of Power BI. And as your Power BI report updates and changes, those changes will be reflected in your embedded Power BI reports!

Pretty awesome!

Additional Resources

Read the Power BI “publish to web” announcement here.

Read the Power BI “publish to web” documentation here.


Let me know what you think of this feature or if you have any questions. Leave a comment down below.



Non Empty vs NonEmpty

Hey everyone, in this blog I want to address a very common MDX question: what is the difference between the NON EMPTY keyword and the NONEMPTY function? And, to take it a step further, which one should you use?

NON EMPTY Keyword vs. NONEMPTY Function

The big difference between the NON EMPTY keyword and the NONEMPTY function is when the evaluation occurs in the MDX. The NON EMPTY keyword is the last thing evaluated: after all axes have been evaluated, the NON EMPTY keyword executes to remove any empty space from the final result set. The NONEMPTY function, by contrast, is evaluated when the specific axis it appears on is evaluated.

Should I use the NON EMPTY keyword or the NONEMPTY function?

Ok Mitchell, so you told me when each of these is evaluated, but really you haven’t told me anything up until this point. Can you tell me which one I should use already? Well, unfortunately, it depends. Let’s walk through an example of each using the BOTTOMCOUNT function.
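The first query, reconstructed here as a sketch against the AdventureWorks sample cube (the measure, hierarchy, and cube names are assumptions, since the original screenshot is not available), uses BOTTOMCOUNT with no empty-space handling at all:

```mdx
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    BOTTOMCOUNT(
        [Product].[Product].[Product].MEMBERS,
        10,
        [Measures].[Internet Sales Amount]
    ) ON ROWS
FROM [Adventure Works]
```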


In this example I’m returning the bottom ten selling products for internet sales. Notice that the result includes products that have no internet sales at all. This is not necessarily a bad thing; maybe you want to return products that don’t have sales.


However, if you don’t want to return these products, then we can try using the NON EMPTY keyword. In the example below you can see the results when I add NON EMPTY to the ROWS axis.
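A sketch of the same query with the NON EMPTY keyword added to the ROWS axis (again assuming AdventureWorks names):

```mdx
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    NON EMPTY
    BOTTOMCOUNT(
        [Product].[Product].[Product].MEMBERS,
        10,
        [Measures].[Internet Sales Amount]
    ) ON ROWS
FROM [Adventure Works]
```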


WHOOOAAA, what happened?? A lot of people would have expected the results here to show the bottom ten products that DID have sales. However, that is not the case. Remember that I said the NON EMPTY keyword is evaluated LAST, after all axes have been evaluated. This means the bottom ten selling products, which have $0 in sales, are returned first, and then the NON EMPTY keyword removes all that empty space from the final result.

BOTTOMCOUNT Function with the NONEMPTY Function

So let’s try this again: if you want to return the bottom ten products that had sales, then the empty space must be removed before the BOTTOMCOUNT function is applied. Take a look at the code below:
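A sketch of this version, with the NONEMPTY function applied to the set before BOTTOMCOUNT evaluates it (AdventureWorks names assumed, as above):

```mdx
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    BOTTOMCOUNT(
        NONEMPTY(
            [Product].[Product].[Product].MEMBERS,
            [Measures].[Internet Sales Amount]
        ),
        10,
        [Measures].[Internet Sales Amount]
    ) ON ROWS
FROM [Adventure Works]
```

Because NONEMPTY runs when the ROWS axis is evaluated, the zero-sales products are filtered out before BOTTOMCOUNT picks the bottom ten.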


In this code we first remove the empty space before using the BOTTOMCOUNT function. The result is that we return the bottom ten products that actually had internet sales. Once again, neither approach is right or wrong here; it just depends on what you want in your final result.

NON EMPTY Keyword vs. NONEMPTY Function – Performance

There is a very common misconception that the NONEMPTY…

Read more

Getting Started with Data Quality Services (DQS) 2012

  • 29 March 2012
  • Author: cprice1979

Data Quality Services is a new and powerful feature that is available in SQL Server 2012. Called a knowledge-driven data quality product, DQS allows you to build knowledge bases that handle the traditional data quality tasks such as profiling, correction, enrichment, standardization and de-duplication.  In this blog series we will dive into DQS and explore how its numerous features and capabilities can improve and enrich your critical and valuable business data. 


DQS Blog Series Index

Part 1: Getting Started with Data Quality Services (DQS) 2012

Part 2: Building Out a Knowledge Base

Part 3: Knowledge Discovery in DQS

Part 4: Data Cleansing in DQS

Part 5: Building a Matching Policy in DQS

Part 6: Matching Projects in DQS

Part 7: Activity Monitoring, Configuration & Security in DQS


Installing the DQS Server and Data Quality Client

Data Quality Services consists of two components: the DQS Server and the Data Quality Client. Both of these components are installed by the Data Quality Server Installer.

To install the DQS server and client, select the 'Data Quality Server Installer' shortcut from the Microsoft SQL Server 2012 RC0 > Data Quality Services folder on the All Programs menu of the Start button.


Once the installer starts, you will be prompted to enter a password for the database master key. This key will be used to encrypt the contents of the DQS databases. 


The process to install the DQS components may take several minutes. During this process, three databases and an out-of-the-box knowledge base are created. The three databases are:

  • DQS_MAIN - This database contains all the DQS stored procedures, the DQS engine and published knowledge bases
  • DQS_PROJECTS - Contains the data associated with data quality projects created in the Data Quality Client
  • DQS_STAGING_DATA - As the name implies, this is a staging area into which you can copy data to perform DQS operations on it, and from which you can export processed data.


The installation process also creates several DQS server logins (##MS_dqs_db_owner_login## and ##MS_dqs_service_login##) and DQS database roles (dqs_administrator, dqs_kb_editor and dqs_kb_operator). To handle DQS initialization, a stored procedure is created in the master database. It should also be noted that if the installer finds a Master Data Services database on the same server, it creates a user, maps it to the MDS login, and grants that user administrator access to the DQS_MAIN database.


Once the installer finishes, you will be prompted to press any key to exit and the installation is complete. 

Miscellaneous Notes:

  • The Microsoft .NET Framework 4.0 is required to run the Data Quality Client.
  • To log in to the Data Quality Client, a user must be in one of the DQS roles. If a user is in the sysadmin server role, it is not necessary to add them to the DQS roles.
  • If you are running the client on a separate computer, TCP/IP must be enabled on the instance hosting the DQS server.

First Look at the Data Quality Client

To begin working with DQS, open the Data Quality Client from the All Programs menu on the Start button. When the application launches, you will be prompted to enter the name of the instance that hosts your DQS server. Before you click 'Connect', note that you have an option to 'Encrypt connection', which will use an SSL connection for the communications between the client and server.


Once you are connected you will notice three distinct areas: Knowledge Base Management, Data Quality Projects and Administration. We will dive deeper into these areas in the next blog post, but I just want to cover some of the high-level concepts that are important in the DQS world.


  • Knowledge Base - a collection of data domains
  • Domain - contains domain values and their status, domain rules, term-based relations, and reference data. Domains can be either single or composite
  • Knowledge Discovery - analyzes organizational data to build knowledge that can be used in cleansing, matching and profiling
  • Cleansing - the process of using a knowledge base to propose data corrections
  • Matching Policy - rules used to perform data de-duplication. These rules can be fine-tuned using matching results and profiling, which can inform additional matching policies
  • Reference Data - data that can be used to validate and enrich your data. Reference data providers are available in the Azure Marketplace Data Market, or you have the option of connecting directly to your provider

In the next blog post we will put DQS to use by building out a knowledge base with domains and business rules, running through the knowledge discovery process, and then building out a matching policy.

Till next time!! 


Categories: Analysis Services