Business Intelligence Blogs

View blogs by industry experts on topics such as SSAS, SSIS, SSRS, Power BI, Performance Tuning, Azure, Big Data, and much more!

Power BI Publish to Web for Anonymous Access is Here

Earlier this week, the Microsoft Power BI team made an incredibly exciting announcement and released Power BI “publish to web” as a preview feature. This is HUUUUGE news! This was probably the top requested feature, and it’s finally here thanks to the hard work and dedication of the Microsoft Power BI team!

Read Getting Started with R Visuals in Power BI

Power BI “publish to web” allows you to easily expose a Power BI report to the world through an iframe that can be embedded wherever you like.

To publish your Power BI report to the web, log into your Power BI site.

Find the report that you want to share and click File in the top left.
[Screenshot: Power BI publish to web]

You’ll see a pop-up message box similar to the one below. Click the yellow button to create the embed code.
[Screenshot: Power BI publish to web preview]

This is where you’ll see a very important warning!
WARNING: Reports that you expose through the “publish to web” feature will be visible to everyone on the internet! This means NO AUTHENTICATION is required to view the report that is embedded in your application.

Once you do that, you’ll receive an embed code that you can then use to expose your Power BI report within your blog as seen below!

https://msit.powerbi.com/view?r=eyJrIjoiYTNjNzcwNjctNTczMy00ZDMxLWFlMGUtMDViODA1NGZiNmI0IiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9
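The embed code itself is just an iframe wrapping a view URL like the one above. Here is a minimal sketch of what it looks like; the width and height values are my own placeholder assumptions, so adjust them to fit your page:

    <!-- Publish-to-web embed: anyone who can reach this page can view the report -->
    <iframe width="800" height="600"
        src="https://msit.powerbi.com/view?r=eyJrIjoiYTNjNzcwNjctNTczMy00ZDMxLWFlMGUtMDViODA1NGZiNmI0IiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9"
        frameborder="0" allowfullscreen></iframe>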

As you can see, the report maintains all the interactivity features of Power BI. And as your Power BI report updates and changes, those changes will be reflected in your embedded Power BI report!

Pretty awesome!

Additional Resources

Read the Power BI “publish to web” announcement here.

Read the Power BI “publish to web” documentation here.

Feedback

Let me know what you think of this feature or if you have any questions. Leave a comment down below.



Better Know A SSIS Transform – Conditional Split

  • 7 November 2009
  • Author: DevinKnight

This is part 3 of my 29-part series called Better Know A SSIS Transform.  Hopefully you will find the series informative.  I will tell you a little about each transform and follow it up with a basic demo you can do on your own.

The Conditional Split provides a way to evaluate incoming rows and separate those rows using an expression you design.  After these rows are separated, they are sent to different outputs so they can be cleansed, loaded separately, or used to detect changing data (a good substitute for the Slowly Changing Dimension).  I will walk through some scenarios where you may need the Conditional Split for each of these reasons and show how you would use it.  There are of course other possible reasons to use the Conditional Split, but these are what I typically use it for.

Cleansing Data Example

The scenario is that I have a package that loads Company A's customers.  The data that we receive is not always complete, though.  Often I will have a zip code for a customer but no city or state.  Because this is a known issue, the IT department has purchased a zip code extract that lists all zip codes and their associated cities and states.

  • Add a Flat File Source pointing to the incoming customer data
  • Ensure all zip codes are standardized with a Derived Column Transform
  • Use a Conditional Split to separate out data that does not have a city and state
  • Send rows without a city and state to a Lookup Transform that will match zip codes and return the missing city and state.  If it doesn’t find a match, send the output to a table so the rows can be corrected by hand.
  • Use a Union All to combine the original good data with the corrected data from the Lookup Transform.
  • Send the combined rows to the destination table.

Conditional Split Configuration


  • The condition trims any blank spaces in the columns and checks whether the City and State columns are empty.  If they are empty, those rows are sent to a Bad Data output (see the expression sketch after this list).
  • All rows that don’t meet this condition are sent to the default output, Good Data.
  • Another method could be to convert these blank spaces to NULL before the Conditional Split and then just check for NULL in the Bad Data condition.
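As a rough sketch, the Bad Data condition might look something like the following SSIS expression, assuming the incoming columns are named City and State (use || instead of && if you want to catch rows missing either value):

    TRIM(City) == "" && TRIM(State) == ""

The default output needs no expression of its own; anything that doesn’t match the Bad Data condition falls through to Good Data.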

Load Data Separately Example

The scenario is that I have a package that loads customer mailing lists.  Company B sends out promotions and wants to separate those mailing lists depending on a customer's education level.  Those with some college or a high school education or less will be more likely to receive my promotion to attend a career college.

  • Add an OLE DB Source to bring in data from my customer table
  • Use a Conditional Split to separate customers by education level
  • Connect the outputs to Flat File Destinations to create the mailing lists

Conditional Split Configuration

  • The Completed College output checks the EnglishEducation column for a string value of either Bachelors or Graduate Degree (see the expression sketch after this list).
  • The Some College output checks the EnglishEducation column for a string value of Partial College.
  • All other rows are sent to the default output, named High School Education or Less.
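Using the column and values named above, the two conditions might look something like these SSIS expressions (a sketch, assuming EnglishEducation is a string column):

Completed College:

    EnglishEducation == "Bachelors" || EnglishEducation == "Graduate Degree"

Some College:

    EnglishEducation == "Partial College"

Everything else falls through to the High School Education or Less default output, which needs no expression.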

Detecting Changing Data Example

This common scenario uses an alternative method to the Slowly Changing Dimension.  I have incoming records from Company C's ecommerce system that need to be loaded into my data warehouse.  Before these records get loaded, I need to check whether they are new, updated, or duplicate records.

  • Add an OLE DB Source pointing to the ecommerce database
  • Use a Lookup Transform on the destination table, joining by the table's primary key, and rename all output columns Target_(column_name).  Tell the transform to ignore failure when no matches are found.  A better method is to use either a Checksum or a Hashbyte column for comparison, but this is a good starting method.  The Checksum or Hashbyte method creates a unique identifying number for each row, so instead of comparing every column of a row you can compare just one column to detect a change.
  • Use a Conditional Split to determine which records are new, updates, or duplicates
  • Send new records to the final destination table
  • Send updates to a staging table
  • Use an Execute SQL Task in the Control Flow to process the updated rows into the destination table (this method is much faster than using the OLE DB Command; see the sketch after this list)
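To give a feel for that Execute SQL Task, here is a minimal T-SQL sketch.  The table and column names (stgCustomerUpdates, DimCustomer, CustomerKey, City, State) are hypothetical stand-ins, not from the original package:

    -- Apply the staged changes to the destination in one set-based statement.
    -- All object and column names here are hypothetical examples.
    UPDATE d
    SET    d.City  = s.City,
           d.State = s.State
    FROM   dbo.DimCustomer AS d
    JOIN   dbo.stgCustomerUpdates AS s
           ON d.CustomerKey = s.CustomerKey;

    -- Optionally clear the staging table for the next run.
    TRUNCATE TABLE dbo.stgCustomerUpdates;

A single set-based UPDATE like this is what makes the staging-table approach faster than the OLE DB Command, which would fire one UPDATE statement per row.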

Conditional Split Configuration


  • The New Record output checks to see if the Target_(primary_key) is null.  If it is null, then we know it’s a new record.
  • If the Target_(primary_key) is not null, then the Update output compares each column to the destination table's columns to see if there are any differences, which tells us the record needs to be updated.  Again, the best method for doing this would be to use either Checksum or Hashbyte to create a unique number that represents a row, and then compare just that one column instead of all columns.
  • Anything that doesn’t meet these conditions is a duplicate that we do not want to load.  Just don’t connect the Duplicate output to anything, and those rows will not be loaded (see the expression sketch after this list).
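As a hedged sketch, the two conditions might look something like the following SSIS expressions, assuming a primary key named CustomerKey and a couple of hypothetical comparison columns (City, State); the real expressions depend on your table:

New Record:

    ISNULL(Target_CustomerKey)

Update:

    !ISNULL(Target_CustomerKey) && (City != Target_City || State != Target_State)

Rows that match neither condition fall through to the default Duplicate output, which stays unconnected so those rows are simply dropped.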