Business Intelligence Blogs

View blogs by industry experts on topics such as SSAS, SSIS, SSRS, Power BI, Performance Tuning, Azure, Big Data and much more! You can also sign up to post your own business intelligence blog.


Power BI and Big Data

If you’ve worked in the wide and diverse field of information technology for almost any amount of time, it probably hasn’t taken you long to discover that the one constant in IT is that the technologies and strategies involved change faster than you can learn them. And if you work in business intelligence like I do, you don’t have to look very far at all to see change. The Microsoft Power BI team rolls out a software update every month! If I want to stay current on the technology, I have to really be on top of things.

About ten years ago, when Hadoop was first being developed at Yahoo, I don’t think anyone could have anticipated the size of the ripples (more like cannonball-sized splashes) that access to Big Data could and would have on the IT industry. Hadoop (and other advances in hardware and software technologies) gave us something we never had before: the ability to access and report on data in real time at a scale never previously imagined, which lets an organization identify and understand trends and patterns in the data and gain previously unknown insights. The organizations that are able to leverage big data will be the ones that leave their competition in the dust.

Set Up and Configure the Hortonworks Sandbox in Azure

Not only does Power BI Desktop give us the ability to connect to the Hadoop Distributed File System (HDFS) for reporting, we can also mash that data up with other, more traditional and structured data sources with minimal effort. But that’s not what this blog post is all about. This post is about setting up a virtual machine in Azure running Hadoop and connecting to our Hortonworks Sandbox with Power BI Desktop :).

The first thing to do, if you don’t have access to a Hadoop cluster, is to set up the Hortonworks Sandbox on Azure. The good news is it’s free (for the duration of the trial) and it’s super easy. Just follow the instructions at this link to set up the Hortonworks Sandbox.

Hadoop in Azure

Once that’s set up, you’ll need to add a mapping for the IP address and host name to your hosts file. Devin Knight has a blog post on this that you’ll find helpful.

Connecting to Hadoop with Power BI Desktop

Once your Hortonworks Sandbox is set up, you’re ready to create your connection to Hadoop with Power Query. Start up Power BI Desktop and click Get Data. Scroll down, select Hadoop File (HDFS) and click Connect.

Get Data with Power BI

From there you can follow the rest of the wizard to load the data into the semantic model.

Load Data with Power BI

Once the data is loaded, you’ll need to modify the query to navigate to the data you wish to use in your model.

In Power BI Desktop, go to the Home ribbon and click Edit Queries.
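Behind the scenes, the wizard is just building a Power Query (M) query over the HDFS source. Below is a minimal sketch of what that query might look like once you point it at the sandbox; the host name, port, path and file name are placeholders, and the exact source step the connector generates can differ.

let
    // Source step created by the Hadoop File (HDFS) connector;
    // host name, port and path are placeholders for your sandbox
    Source = Hdfs.Files("http://sandbox.hortonworks.com:50070/webhdfs/v1/user/hue"),

    // Keep just the file we want to report on (file name is hypothetical)
    TargetFile = Table.SelectRows(Source, each [Name] = "sample_data.csv"),

    // Parse the file contents as CSV and promote the first row to headers
    Parsed = Csv.Document(TargetFile{0}[Content], [Delimiter = ","]),
    Promoted = Table.PromoteHeaders(Parsed)
in
    Promoted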

Read more


Three Best Practices for Power BI

Since the release of Power BI Desktop this past week, I’ve been spending my extra time digging into the application, focusing on learning and experimenting as much as I can. When my wife has been watching Law and Order: SVU reruns at night after the rug rats are in bed, I’ve been right there next to her designing Power BI dashboards like the total data nerd that I am. When my kids have been taking their naps during the weekend, I’ve been writing calculations in the model for my test dashboards. And when I’ve been riding in the car back and forth to work, I’ve been thinking of new things to do with Power BI Desktop.

Since I’ve been spending a decent amount of time with Power BI Desktop, I thought I’d take a moment to share three things to know and remember when designing your Power BI models and dashboards that I think will help you make the most of this tool and be effective at providing the data your business needs to succeed.

1. Optimize your Power BI Semantic Model

It probably hasn’t taken you long to figure this one out if you’ve built Power Pivot or Tabular models, and if you haven’t, it won’t take long once you start developing Power BI dashboards. The visualizations in Power BI and Power View are heavily metadata driven, which means that column names, table or query names, formatting and more are surfaced to the user in the dashboard. So if you’re using a really wacky naming convention in your data warehouse for your tables, like “dim_Product_scd2_v2”, and the column names aren’t much better, those names are going to be shown to the users in the report visualizations and field list.

For example, take a look at the following report.

Power BI Dashboard without formatting

Notice anything wonky about it? Check the field names, report titles and number formatting. Not very pretty, is it? Now take a look at this report.

Power BI Dashboard with formatting

See the difference a little cleaned-up metadata makes? All I did was spend a few minutes giving the fields user-friendly names and formatting the data types. This obviously makes a huge difference in the way the dashboard appears to the users. By the way, I should get into the movie production business. ;)

My point is that the names of columns, formatting, data types, data categories and relationships are all super important to creating clean, meaningful and user-friendly dashboards. The importance of a well-defined semantic model cannot be overstated, in my opinion. A good rule of thumb is to spend 80% to 90% of your time on the data model (besides, designing the reports is the easy part).

I’d also like to mention the importance of the relationships between the objects in the semantic model. Chances are you will have a small group of power users who will want to design their own dashboards to meet their job requirements, and that’s one of the beauties of Power BI. But when users begin developing reports, they may query your model in unexpected ways that generate unexpected behaviors and results. I only mention this because the relationships between the objects in the model will impact the results your users see in their reports. Double check your relationships and ensure that they are correct, especially after you add new objects to the model since the

Read more

Power BI Fantasy Football Player Stats Dashboards for Download

Every year at Pragmatic Works a group of coworkers, including consultants, marketing staff, support team members, software developers and project managers, takes part in a company fantasy football league. And with the recent release of the new Power BI Desktop, I thought: what better way to prepare to completely annihilate my coworkers and friends in an imaginary, nonsensical game than by creating some nifty Power BI dashboards based on last year’s player stats as recorded by Yahoo! Sports? So I thought I’d walk you through some of the steps I followed to use the Yahoo! Sports NFL player stats page as a data source, along with some of the query transformations I applied to prepare the data for reporting.

Power BI dashboard with Power BI Desktop

Click here to download my Fantasy Football Dashboards Power BI .pbix file.

If you’re completely new to Power BI Desktop, I highly suggest you watch my video walkthrough of Power BI Desktop or read my blog post, which walks you through each step of creating your first Power BI dashboards with Power BI Desktop. Last Friday, I also blogged about my three best practices for designing a killer Power BI solution, so take a look at that.

To create these dashboards, I simply navigated to the Yahoo! Sports NFL stats page, found the page for each position I’m interested in for this fantasy football season, and copied the URL to my clipboard. In Power BI Desktop, click Get Data and then use the Web data source option. Then all you have to do is paste the URL into the text box and click OK.

Get data from web with Power BI Desktop

Then select the HTML table that contains your data and click Edit. We need to edit our query because there are some issues with the data. By clicking Edit, we can apply transformations to the query, which allows us to do things like rename columns, remove unwanted columns, modify data types, create custom columns and much more.

Get data from web with Power BI Desktop

One thing you’ll notice in the above screen grab is that the column names are in the first row, so we need to fix that.

On the Home ribbon of the Query Editor, just click the Use First Row As Headers button. Pre
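Every one of those clicks is recorded as a step in the underlying Power Query (M) query. A minimal sketch of what the generated query might look like is below; the stats URL is whatever you pasted from Yahoo! Sports, and the table index and column names are placeholders, not the exact names Power BI will produce.

let
    // Pull the stats page and grab the HTML table that holds the player stats
    Source = Web.Page(Web.Contents("https://sports.yahoo.com/nfl/stats/")),
    StatsTable = Source{0}[Data],

    // The real column names are in the first row, so promote them to headers
    Promoted = Table.PromoteHeaders(StatsTable),

    // Clean up the columns: set numeric types and rename (column names are placeholders)
    Typed = Table.TransformColumnTypes(Promoted, {{"Yds", Int64.Type}, {"TD", Int64.Type}}),
    Renamed = Table.RenameColumns(Typed, {{"Yds", "Passing Yards"}})
in
    Renamed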

Read more

Power BI Tip: Use the Treemap Chart as a Colorful Slicer

Power BI Desktop has been generally available for over a week now, and some of the pros out there have already come up with some pretty cool tricks.

But if you’re looking for a way to spice up your report filtering with a little color, try using the Treemap chart as a slicer for fields that only contain a few unique values. At this point, Power BI doesn’t offer any customization options for the Slicer visualization (although I’m sure that is coming down the pipe in a future release). This trick won’t work terribly well if the field you’d like to use as a slicer has more than a dozen or so unique members, but you can experiment with it and see what you can come up with. Here’s my Treemap slicer in action.

tree map slice in action

To multi-select tiles in the Treemap slicer, just hold Ctrl as you click. To reselect

This little trick relies on the natural cross filtering between data regions in the Power BI dashboards. First I created a measure that calculates the distinct count of the field that I wish to use as my slicer. In this case the field is Genre.

Power BI Distinct Count DAX calculation

Then I added a Treemap chart to the report using the field Genre as the Group value and the measure Distinct Count Genre as the Values.

image

Then just resize the Treemap visualization so that the squares are about evenly sized. There are a few ways you can arrange it, but just play around with it and see what you can come up with.

Power BI Dashboard with Treemap Slicer

Feedback?

What do you think? Leave me a comment below and let me know. Or if you’ve got a neat Power BI trick you’d like to share, let me know, as well. I love to hear new ideas! Thanks for reading!


Read more

Importing Excel Power View Dashboards into Power BI

If your organization is now a Power BI customer, congratulations. You’re ready to create some very cool dashboards, integrate disparate and disconnected data sources, take advantage of Power BI’s ability to modify and transform your data, build interactive and dynamic dashboards, and then share them with your team and organization. But until you build new dashboards that take advantage of the new visualization types and other improvements, you can easily import any existing Power View sheets in Excel into your Power BI site.

Power View dashboard in Excel ready for Power BI goodness

Above you’ll see an example of a Power View dashboard that I will import into my Power BI site.

Importing Power View into Power BI

To import Power View sheets into Power BI, navigate to your team’s Power BI site and click the Get Data button at the bottom left.

image

Then select where your data exists. In my case, I have my Power View reports in an Excel workbook saved in my OneDrive folder, so I’ll select Files.

image

image

You’ll have to log into your OneDrive account. Once you’ve done that, just navigate to where your Excel file is and click Connect.

After the import is complete, you’ll see your new Report and Dataset on the left in your My Workspace explorer. Open the new Report based on your Power View sheet. Now I can browse and interact with my Power View dashboard within my Power BI site.

image

I can also pin visualizations from my Power View report to a Power BI dashboard alongside other visualizations.

image

And now that we have our Power View report imported and pinned to a dashboard, we can use Q&A to quickly generate visualizations using the imported dataset.

image

Pretty cool stuff! It’s nice to know that I can easily jump from my legacy Power View dashboards to the new Power BI.

It’s important to note that in order to import Excel workbooks into your Pow

Read more

#PowerBI and #SSAS Tabular: A Natural Fit with the Power BI SSAS Connector

In late June, the Microsoft Power BI team released the Microsoft Power BI Analysis Services Connector. The Power BI SSAS Connector allows your deployed Power BI reports to utilize your on-prem SSAS data sources. It’s super easy to set up and can be downloaded for free! And who doesn’t love “free”?

Download the MS Power BI SSAS Connector here

Why Use the Power BI SSAS Connector?

One of the advantages of using Power BI is that the tool has the ability to connect to an incredibly wide variety of data sources, including SQL Server Analysis Services instances. With that in mind, what’s the purpose of using the Power BI SSAS Connector? Why not just load our SSAS data into our Power BI semantic model like we do with our Access, Excel, CSV and web data and then schedule the Power BI semantic model to refresh? That’s a good question.

Power BI: Live & Prime Time with SSAS Tabular

First of all, by utilizing the Power BI SSAS Connector we get a live connection to our SSAS instance. This means that every time a user interacts with a filter, slicer, chart or other data visualization, Power BI generates a DAX query behind the scenes that is sent to your on-prem SSAS Tabular model. Currently, Power BI users are restricted in how often data sources are refreshed: free Power BI users are limited to one data refresh per day, and Power BI Pro users are limited to hourly data refreshes. By leveraging the live connection to your SSAS Tabular instance, you can update the data in your Tabular model as often as you are able.

Because the Power BI SSAS Connector gives you a live connection to your SSAS Tabular model, your users also experience less latency between updates to their data. Without the live connection to SSAS, each day the users would have to wait for the SSAS Tabular model to be processed and then for the Power BI semantic model to be refreshed. With the live connection, as soon as the Tabular model has finished processing, the Power BI users have access to the most current data, with potentially much less time between data refreshes.

Currently, the Power BI SSAS Connector only supports live connections to SSAS Tabular instances, although I would expect a future update to support live connections to SSAS multidimensional cubes.

Enterprise Data in Power BI

Currently, Power BI semantic models are restricted by a data capacity limit: free Power BI users are limited to 1 GB per user, and Power BI Pro users are limited to 10 GB per user. This can be a

Read more

Twitter Analysis with #PowerBI & Plus One

Earlier this week Christopher Finlan put together this awesome Datazen dashboard using Plus One. Christopher has been doing a lot of cool things with Datazen, so I recommend that you do like I did and subscribe to his blog. Christopher’s cool work with Plus One inspired me to create my own social media dashboard using Plus One, as well.

Plus One has created a nifty little desktop application that you can download and install on your computer. Once you’ve set the app up, all you need to do is enter a search query. In my case, I wanted to see what people were doing and saying with Power BI on Twitter. Plus One can only recover the previous seven days of data, so you’ll need to periodically refresh your search or schedule the search, which you can do easily with the Plus One application.

After running the search, the results of your query are saved in an Access database on your machine, in the folder C:\Users\username\Documents\Plus One Social. Then all you have to do is use Power BI Desktop to pull the data into your Power BI semantic model and start building some awesome dashboards.
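In Power BI Desktop, that just means pointing the Access connector at the Plus One database. A minimal sketch of the resulting M query is below; the database file name and table name are placeholders, since Plus One names those for you.

let
    // Open the Access database that Plus One writes to (file name is a placeholder)
    Source = Access.Database(
        File.Contents("C:\Users\username\Documents\Plus One Social\PlusOneSocial.accdb"),
        [CreateNavigationProperties = true]),

    // Pick up the table holding the tweets returned by the search (table name is a placeholder)
    Tweets = Source{[Schema = "", Item = "SearchResults"]}[Data]
in
    Tweets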

Here’s the report and dashboard I created in Power BI. I haven’t scheduled the Plus One application to refresh the data so I don’t have any trend reports yet, but I did create some snapshot visualizations to gain insights into who is talking about Power BI and in what context by analyzing the accompanying hash tags. Pretty cool stuff!

image

image

 

Resources

Here are the resources:

Feedback?

Leave your feedback down below! I’d love to see what kind of dashboards you can come up with using Plus One so feel free to leave a comment and a link to your blog, as well!


Read more

Cleaning Your #PowerBI Power Query Code

Over the weekend I found this nifty tool called Power Query Management Studio. Someone shared it on Twitter, and you’ve probably seen the link to download the tool on TechNet. Basically, this tool is a fancy Excel workbook that allows you to easily clean up your Power Query code and insert it back into your Excel workbook or Power BI semantic model. It’s pretty nifty and easy to use, so I figured I’d give you a quick rundown on using it to clean up the Power Query code in my Fantasy Football & NFL stats Power BI model, which you can download here.

To begin using the Power Query Management Studio, download it here.

I want to use this tool to clean up the Power Query code in my Power BI model, so the first thing I’ll do is open my Power BI model in Power BI Desktop. Next, we need to capture the Power Query queries; to do this, I’ll click the smiley face icon at the very top of Power BI Desktop and click Send a Frown.

image

A little Send Feedback dialog box will pop up. Uncheck Include Screenshot (since we really don’t care about that) and leave the Include Formulas box checked. This will allow us to see the Power Query queries. Click OK.

image

When you click OK, an email will open with your Power Query queries in the body. The code is broken up into queries separated by semicolons, so you can easily see each query.
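If you haven’t looked at raw Power Query code before, that email body is an M section document: a section declaration followed by each query as a shared member, each ending with a semicolon. A stripped-down sketch is below; the query names and sources are placeholders standing in for the queries in my Fantasy Football model.

section Section1;

// Each query in the model appears as its own "shared" member ending in a semicolon
shared Quarterbacks = let
    Source = Web.Page(Web.Contents("https://sports.yahoo.com/nfl/stats/")),
    Data = Source{0}[Data],
    Promoted = Table.PromoteHeaders(Data)
in
    Promoted;

shared RunningBacks = let
    Source = Web.Page(Web.Contents("https://sports.yahoo.com/nfl/stats/")),
    Data = Source{0}[Data],
    Promoted = Table.PromoteHeaders(Data)
in
    Promoted;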

image

I copied everything below the line “section Section1;”. Once you’ve copied that code to your clipboard, open the Power Query Management Studio Excel workbook. Clear the sheet called CodePaste (but don’t delete the table) and paste your Power Query queries into the table like so. Then click the Refresh All button on the Data ribbon of Excel.

image

After a few moments, the Excel workbook will have worked its magic. There are a few sheets in the workbook I’ll point out that you’ll find useful.

image

CommentTransfer: T

Read more

Here’s the New #Excel 2016 Chart Types!

The Office 2016 Public Preview is now available for download! Included in the preview of Excel 2016 are a handful of new chart types and since I’m a huge fan of awesome data visualizations, I thought I’d take a few moments to play around with them and share my experience with you so you can have a better idea of what to expect in the next version of Excel. But to be honest, if you’re a data & visualizations nerd like me, you’re probably pretty excited!

Now, one thing to be aware of with these new chart types: if you attempt to create one of these charts on top of data in a pivot table, you’ll get a warning like the one seen here. To use these charts, you’ll need to create them on top of data that is not in a pivot table (at least for the time being).

Box and Whisker Chart

The Box & Whisker chart is a really nice visualization for getting a quick look at the distribution of data, including outliers, mean, range and quartiles. In the chart below, I pulled in some data from NFL.com with Power Query and performed some analysis on the yards per game for the top four running backs from last season.

image

You also have control over the chart formatting through some options specific to the Box & Whisker chart type.

image

Waterfall Chart

The Waterfall chart was just added to Power BI, so you’ve probably already had a look at that visualization. The neat thing about the Waterfall chart is that it allows us to see how the small pieces of a whole contribute to the total. For instance, below I have a Waterfall chart that shows the play-by-play stats from the New England Patriots’ first drive in their conference championship game against the Indianapolis Colts, which resulted in a touchdown. Using this chart I can see how each play contributed to the total yards gained on the drive. Pretty cool!

image

A Waterfall chart could be really useful for monitoring changes in inventory or for viewing balance sheet data.

Sunburst Chart

The Sunburst chart is good for viewing hierarchical data. So if you wanted to view how individual accounts contribute to their parent accounts in a balance sheet, the Sunburst chart could be a really interesting way to visualize that type of data.

Here I’m using the Sunburst chart to analyze the receivers of a few different teams by player and position.

Read more


Refreshing Excel Power Query & Pivot Tables with SSIS and Task Factory

With SSIS 2014 and earlier, there is currently no native way to refresh an Excel workbook that includes Power Query queries. That functionality is rumored to be coming with SQL Server 2016, but if you’re currently running SQL Server 2014 or 2012 you are out of luck. That’s why Pragmatic Works put together the Excel Power Refresh component for SSIS.

Configure the Excel Power Refresh Task in SSIS

Configuring the Excel Power Refresh Task is pretty straightforward. There’s not a lot of complexity to this component, which is a good thing.

First, create a Connection Manager to your Excel 2013 file that includes your Power Query queries. In my case, I have an Excel workbook with some Power Query queries that pull data from NFL.com, which I used for a blog post on the new Excel 2016 chart types.

Once you create your connection to the Excel file, use the Data Connections and Pivot Table Sheets to select the queries and pivot table sheets that you wish to refresh.

image

And now I can schedule the refresh of any Power Query data connections or Pivot Tables with SSIS.

image

Very cool!

Resources

Download the free trial of Task Factory here.

Feedback?

If you have any questions or would like some more information on Task Factory, feel free to send me an email or leave a comment below. Thanks!


Read more

Power BI Tip: Use a Scatter Chart to Create a Calendar Report

Power BI Desktop Scatter Chart

The Scatter Chart in Power BI and Excel is a very useful chart for visualizing three different metrics in tandem. But with a little bit of work, you can use a Scatter Chart to create a calendar-style chart for visualizing your metrics across the days of an individual month.

New to Power BI Desktop? Start here!

To configure a Scatter Chart to mimic a calendar-type report, you need the following:

1) An attribute for the day number of the week (1, 2, 3, 4, 5, 6, 7).
2) An attribute for the day number of the month (1, 2, 3 … 29, 30, 31).
3) An attribute for the week number of the month (1, 2, 3, 4, 5, 6).
4) An attribute for sorting the week numbers in reverse order.
5) A business metric you wish to represent in the report.

Most of these items you can get from a traditional date dimension. In this example, I’m utilizing the Adventure Works DW database which has a date dimension table.

To set up the Scatter chart correctly, configure the visualization as seen in this screen shot from Power BI Desktop.

image

I want to point out a couple things here. First, you can optionally add a field to the Legend to differentiate between the weekend and weekdays or to identify holidays, as seen below.

Power BI Desktop Scatter Chart

Secondly, in order to display the calendar in the correct order, we actually need to reverse the order of the week numbers so that the first week of the month sorts numerically higher than the last week of the month. To do this, I used a T-SQL CASE expression to populate the new computed column in the Adventure Works date dimension table:

[WeekNumberOfMonth]  AS 
    ((datepart(week,[FullDateAlternateKey])
    -datepart(week,dateadd(month,datediff(month,(0),[FullDateAlternateKey]),(0))))+(1)),
[WeekNumberOfMonthReverse]  AS 
    (case (datepart(week,[FullDateAlternateKey])-datepart(week,dateadd(month,datediff(month,(0),[FullDateAlternateKey]),(0))))+(1) 
        when (1) then (6) 
        when (2) then (5) 
        when (3) then (4) 
        when (4) then (3) 
        when (5) then (2) 
        when (6) then (1) 
    end)
Read more

Setting Up an HDInsight Cluster (No Scripts Required)

Let me start by saying I am not a fan of scripting. It definitely has its place, and a lot of my peers really like it. It is often the easiest way to get functionality out from software vendors such as Microsoft, and PowerShell is an incredibly powerful tool that can do just about anything. However, therein lies the problem for me. Scripting solves a lot of problems, but I just wanted to set up and use a basic HDInsight cluster to create some Power BI demos (posts coming soon). So I started down the path of finding the scripts and trying to understand the syntax, and then I went to the Azure Portal instead. Here is what I did to set up my cluster and load data with no scripting required. My goal was to get a working demo platform up. Would I necessarily recommend this path for production work? I’m not sure yet. But now I can work with HDInsight with considerably less work required to set up the environment.

HDInsight Cluster No Script Setup Requirements

You need an Azure account. You can go to http://azure.microsoft.com to sign up for a free account if you like. If you have an MSDN subscription you should have some time available as well.

HDInsight Cluster No Script Setup

Once you have your account created, go to http://portal.azure.com; we will be doing our setup from there. During the process we will be creating a storage account (if this is your first time in Azure, you may choose to set up a Resource Group as well) and the HDInsight cluster. Be aware that the cluster has compute costs and the storage has storage costs. At the end, we will remove the cluster to save your compute time.

Create the Storage Account

This step can be done during the HDInsight cluster creation, but doing it there limits your ability to share data across clusters. If you are just trying it for fun, you can do this during the cluster setup.

Click the + symbol on the portal, then Data + Storage, then Storage Account. This will open a blade with the set up instructions for a storage account.

image

When you create your account you will have some options to fill in:

  • Name: this name will need to be a unique name, e.g., joescoolhdinsight
  • Pricing tier: The pricing tier is really important if you are using a limited plan or if you plan to keep the data for a long time. If you are planning to use this as a demo, I would select Locally Redundant as that is the lower cost plan.
  • Resource Group: The resource group lets you organize your Azure assets. This is for your benefit, so if you want to keep all of the HDInsight components together, you could create a group for that or stick with the default.
  • Subscription: This lets you choose the subscription you want to use.
  • Location: Be sure to select a location close to you that supports HDInsight. Check http://azure.microsoft.com/en-us/regions/ to see what Azure services are supported in each region.
  • Diagnostics: This is optional. If you are looking into the diagnostics or need to prep for production, you will find this useful. In most cases, we would not turn this on for demos.

Click Create and it will create your storage account. This may take a few minutes. The notifications section on the portal will alert you when this has been completed. Once that is complete, we will continue with setting up the cluster.

Create a SQL Database for a Metastore

Read more

Uploading Files to an HDInsight Cluster (No Scripting Required)

As I noted in my first post, I am not a fan of scripting. In that post we set up a cluster without using scripts. Now we are going to look at how to upload files without scripts. While this will work for our demo and learning purposes, I would encourage you to use scripting to handle production-level loads or even if you want to upload a lot of files. While I am not a fan, that does not mean scripting isn’t the better overall tool for those jobs. However, when I am trying to learn the functionality or work with a system using other tools (in this case Power BI), I find that methods such as these help me be more productive sooner.

Prepping to Load Data Into Your New HDInsight Cluster

A key difference between standard Hadoop and HDInsight is file management. With HDInsight, you can load files into Azure Storage and they can be consumed by the HDInsight cluster. Keeping with the No Scripting Required mantra, we will be using a graphical interface to load files into Azure Storage. There are a number of options out there; you need one of them installed. For our example, we will be using the freeware version of CloudBerry Explorer for Azure Blob Storage. Once you have your tool of choice installed, you are ready to get some files.

At this point, you need some files to load. I am using some data I created for another demo. My data is in 7 files of daily receipts for my restaurant for a week in March. Once you have the data, we can load that into the cluster.

Loading Data Into Your New HDInsight Cluster

As noted above, the next steps use CloudBerry Explorer to load our data. In this case, I just copied the folder with my files over to Azure Storage once I had connected the tool to Azure.

image

Once that is done, we will look at working with the data in Hadoop and with Hive.

Creating an External Hive Table and Querying It

You can create two types of tables using Hive – internal and external. An internal table loads the data into a Hive database. An external table applies a schema to the data without moving it. I will be creating an external table. I like this concept because it applies schema to the files that have been uploaded and allows other tools to interact with that data using HiveQL. When you drop an external table, the data remains because the table represents structure only.

In order to help everyone through this (in particular me), the next sections walk through the steps I took to create my table and select data from it. (This is not a detailed look at Hive, but rather a focus on the process of making HDInsight data available using HiveQL.)

Understanding the Files

The first step was to document the structure of the data in the files. Here is the data that I had in each of the files in column order:

  • Ticket Number – int
  • Ticket Date – date
  • Hour of the Day – int
  • Seat Number – int
  • App Amount – int
  • Entrée Amount – int
  • Non Alcoholic Amount – int
  • Alcoholic Amount – int

My structure was fairly simplistic. Each file represented a day.

Creating the Table

Now that I had the structure, I needed to work out the table DDL. (Reference: htt

Read more

Using Power BI with HDInsight Part 1: Power Query and Files

With the rise of HDInsight and other Hadoop based tools, it is valuable to understand how Power BI can help you take advantage of those big data investments. If you need to set up a cluster to work with, check out my previous posts on Setting Up an HDInsight Cluster and Loading Data Into Your New HDInsight Cluster. These posts show how to do this with no scripting required. If you prefer to script, there are a number of resources with sample scripts on doing the same work.

In this article, I will focus on using Power Query to get data from the Hadoop file structure in HDInsight. I will be using Excel 2013 with the Power Query Add-In. I will also be using the restaurant data I loaded as noted in the three previous posts. If you need to create a cluster and load data I encourage you to check the following blog posts:

These posts walk through the process of creating a cluster and loading up data.

Connecting to HDInsight

First, open a new Excel workbook and click the Power Query tab. Once there, you can find the Azure HDInsight source in the From Other Sources dropdown. Select that option to open the following dialog:

image

You will need your storage account name in order to continue, and then the storage account key. Once you have added the key, you will see the Navigator open in Excel on the right.

image

It should show the name of your cluster and the default container name. Double-click the container name and it will open the Power Query window, showing all the files available in the container. Even though we have them organized in folders, the view shows all the files. If you have a large number of files and you don’t want to scroll to find them, you can click the down arrow on the Folder Path column and use the text filter to find the folder you are looking for.

image

Now I have the files I want to use in Power Query. If you click the binary link it will open a copy of the file. However, this is not how we want to work with the data as we have multiple files. (If you did this, remove steps up to the Filtered Rows step in the Applied Steps section.) I now have the files I uploaded showing.

image

In order to work with them all together we need to Combine Binaries.
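For reference, here is roughly what the M behind those steps ends up looking like once the folder filter and Combine Binaries are applied. This is only a sketch: the storage account, container, and folder names are placeholders, and the source function the Azure HDInsight connector generates may differ between Power Query versions.

let
    // Source step created by the Azure HDInsight connector (account URL and container are placeholders)
    Source = HdInsight.Contents("https://mystorageaccount.blob.core.windows.net/"),
    Container = Source{[Name = "defaultcontainer"]}[Data],

    // Keep only the files in the folder we uploaded (folder name is a placeholder)
    ReceiptFiles = Table.SelectRows(Container, each Text.Contains([Folder Path], "receipts")),

    // Combine Binaries: concatenate the daily files and parse them as one CSV table
    Combined = Csv.Document(Binary.Combine(ReceiptFiles[Content]), [Delimiter = ","])
in
    Combined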

Read more


Microsoft Integration Stack

  • 14 February 2013
  • Author: MarkGStacey

Note:

This blog post has moved to: http://markgstacey.net/2013/02/12/microsoft-integration-stack/

Microsoft Integration Stack

I am often asked which Microsoft technology to use when integrating systems, and it’s a very relevant question, as there are so many different technologies. The good news is that each does have its place: there is some overlap, and most of these can be shoehorned into the other roles, but for the most part it’s fairly clear-cut.

First, a list of the technologies

  • BizTalk – SOA integration
  • .NET/WCF – Custom SOA integration. Not really an integration technology; you’re writing code.
  • MSMQ – Message queuing
  • SSIS – ETL
  • Sync Framework – “Online” integration, especially mobile
  • Service Broker – Message queuing; high-performance integration
  • Stream Insight – Complex event processing

Of these, I have implemented production versions of SSIS, Sync Framework and Service Broker solutions, and dabbled in WCF during my development work. I’m currently prototyping a StreamInsight application, and I have never seen BizTalk.

So let’s start with what I’m not really familiar with: not coincidentally, the technologies not provided as part of the SQL stack.

BizTalk is a real-time service integration technology. Do your apps talk to each other in a real-time manner? Are the data loads small? Do they need to go via HTTP? You may need BizTalk, but I have no idea how it helps you over coding a web service. I’m sure it’s great, though. WCF, of course, will also give you the option of using MSMQ as your comms channel, and WCF over MSMQ vs Service Broker is one of the discussions that has real merit.

The choice, as I see it, needs to be based on whether the channels will be consumed at a data layer or an application layer. If processing and actions are going to occur from the messages directly, then WCF/MSMQ is the correct channel. However, if the messages are going to be used in conjunction with other data, then Service Broker may be a better choice. But more on Service Broker soon.

SSIS

SQL Server Integration Services, the successor to DTS, is the most used and most widely known of the Microsoft integration stack.

I have most commonly encountered it loading data warehouses in an ETL (Extract/Transform/Load) capacity, and this is its ideal scenario: large data loads, relatively homogeneous data (i.e., the structure does not change frequently), typically loading from flat files or databases, and almost always loading into databases or, more rarely, flat files.

While SSIS does support web services to some extent, it is not designed for this. It does include a “Script task” that lets you do anything you can do in C# or VB.NET (including loading custom assemblies), and this is typically how I query web services from SSIS.

SSIS provides a large array of data sources, and even more transforms, including various lookups, fuzzy matching, deduplication, and other tools, and is even more extended by third party tools. Notification of load status using email, SMS, web service call and other methods are all supported.

What it is NOT suited for is any type of real time integration, or integration with data sources whose structure is dynamic. Also, in most use cases, it is a one-way integration, although of course, one could implement both directions.

Skill Pre-Requisites: T-SQL, SSIS

Overlaps: Not really

Example Application: Data Warehouse, Scheduled Integration between OLTP systems

Sync Framework

Microsoft Sync Framework is designed for distributed and occasionally connected application architecture.

A great example is a field engineer with a mobile device, using an app which will make updates on the local DB, and have them sync’d automatically to the main DB when connectivity is available.

The Sync Framework allows for two-way synchronisation of a database, by default between SQL Server and SQL Server Compact (if you are using an Express or higher edition on the mobile device, a little more work is required), and handles conflict resolution when updates are made on both the mobile and the central DB. And of course, multiple mobile devices are fully supported, and the sync code itself can run on either the mobile or the central server (there are performance trade-offs concerning what goes over the wire).

Advantages: If you have a distributed architecture, Sync Framework will definitely save you development time. In addition, it does not require a SQL Server licence; it can sync between Express and Compact or between two Express instances.

Disadvantages: Sync Framework is suited for neither high volume nor low latency applications.

Skill Pre-Requisites: T-SQL, .NET 2.0

Overlaps: Service Broker or WCF for high performance or Low Latency. Easier to implement though

Example App: Occasionally Connected Mobile Application

Service Broker

Service Broker is a high-performance message queuing technology built into the SQL stack. The essence of a message queue is that a message is pushed onto a queue (on a server) and popped off the queue (possibly on a different server, and possibly with different messages handled by different servers).

As it is optimised for this uni-directional data flow, it is *MUCH* faster than writing to a table, even a heap. In addition, you retain the transactional consistency of SQL Server: once data has been written to a queue, it is guaranteed to be in the queue, and will only be deleted from the source server once it has been received.

As an in-depth example of how this can be used to enable high-performance applications, one of the systems we developed had a requirement that every single change to the DB be audited in a manner that allowed for rollback, along with traces of the user, the machine used and some other information. This turned out to mean that the audit DB was growing at about four times the rate of the operational database.

So the solution was to push the audit messages onto the Service Broker queue, and have them written by separate servers (setting up a receiver on each)

In addition, as the receivers for one queue can sit on multiple machines, another solution is to only run the receivers on the machine writing to the queue when that machine is not under load. The receivers can be throttled.

Another scenario is real-time loading of information to a data warehouse or OLTP system.

Caveat: Data sent in a service broker message is NOT compressed. It is a good idea to compress your messages if you are working on a geographically distributed solution, and are not CPU bound.

Another use of Service Broker is to perform some calculations/aggregations on the data as it comes off the queue, prior to writing it to the DB. This has benefits, but loses out to StreamInsight on processing speed. However, if you are already using Service Broker, the simplicity of using only one technology would win out. And StreamInsight does NOT provide a transport layer.

Advantages: Fast communication. Scalability (scale out)

Disadvantages: No native compression; of the other technologies, only WCF provides that.

Skill Pre-Requisites: T-SQL, Service Broker.

Overlaps: MSMQ. Stream Insight, but only a little

Example App: High Performance Distributed Auditing. Real Time Data Loads. Scale Out Architecture

Stream Insight

Firstly, let me make this very clear: Stream Insight does not provide a transport layer. Stream Insight is for Complex Event Processing, but the events themselves must be provided to the Stream Insight process.

Having said that, hooking it up to transport layers is easy enough: owing to its three-piece architecture (Input Adapter, Query Layer and Output Adapter), wiring the transport into the Input Adapter is as easy as building the Input Adapter.

And on to the next point: Stream Insight is not an application, it is a technology. You will be writing C# code (and no other .NET language, mind you, ONLY C#) for your Input Adapters, Output Adapters, and the LINQ queries bound to both.

Stream Insight provides a mechanism to do calculations and aggregations on events as they come in, and all the data is stored in memory. An output adapter can provide the results of these calculations/aggregations to a UI, a web service, or persist it to a database – actually anything you want it to do really, since you have to write the code.

A good example of the usage of this is monitoring a rev counter on an engine. For every revolution, a single event is sent (a “Point” event in CEP terms), and storing each of these in a database will quickly lead to huge volumes. A more efficient approach may be to aggregate the revolution events, and at a set interval (a minute, or 10 seconds, or every second even), write the Maximum, Minimum, Count, Average and Mean values to a table. A vastly smaller footprint, with not much information value lost. Of course, choosing the aggregations, intervals and calculations correctly is essential – information such as the rate of acceleration is important when monitoring driving style and would be lost with the above set of calculations.

My next pass with this is comparing the values above against a predictive model, and sending alerts for outliers. For instance, if the engine was on a car, a vastly different driving style on this particular vehicle may indicate that it has been stolen.

Combining separate streams is the next area: multiple sensors can be combined in one of these queries.

Advantages: The performance benefit of using the CEP streams is immense over trying to use a DBMS. In addition, the separation of Input, Output and Query allows for the creation of multiple applications where User defined queries based on the needs of the moment (for example dashboards, alerts etc) can be issued without changing the underlying code. This creates the space for an ecosystem of partners to develop based on StreamInsight, as the DBMS of CEP, vs other CEP providers who supply a complete solution.

Also, a built in persistence model to a CE database is great.

Disadvantages: It’s like installing SQL Server: you do not have an application, you have a technology with which to create an application. Also, advanced .NET coding skills are required; .NET 2.0 knowledge will not suffice.

The Web and Standard editions only support up to 5,000 events per second, and the Data Center edition is expensive.

Skill Pre-Requisites: T-SQL, .NET 3.5, Stream Insight

Overlaps: Service Broker, but only a little. Definite use case for both in one application

Example App: Realtime Sensor monitoring, High Speed Investment Trading, Monitoring of Web Service Feeds

Quick and dirty diagram

 Integration diagram

Cheat Sheet – SQL Technologies

  • SSIS – High volume, batch loading
  • Sync Framework – Two-way synchronisation, low volume
  • Service Broker – Low latency, high or low volume
  • Stream Insight – Calculations, used in conjunction with a transport layer
