
The Baker’s Dozen: 13 Miscellaneous C#, SQL, and Business Intelligence Development Tips


No, the title isn't a misprint: this installment of the Baker's Dozen will visit both sides of the planet.

These days, many .NET user group meetings focus on database and business intelligence topics as well as hardcore .NET content. Over the last several months, I’ve spent roughly half my time modifying my own development framework for WCF. The result is some basic but functional factory classes I’d like to share. The other half of the time, I’ve been looking at different capabilities in the SQL Server 2005 and 2008 Business Intelligence stack, and solving requirements that BI developers often face. So rather than pick one side and make the other side wait two months, I decided to combine the two.

Perspectives

Recently I had a conversation with a highly skilled .NET programmer about SQL Server Transaction Isolation Levels. While the entire conversation is too long to repeat, the short version is that the person wasn't very interested in talking about them. Not long after that, I had a talk with a DBA who had even less interest in talking about .NET assemblies designed for SQL Server.

Sure, individuals can't have an interest in everything. Or should they? Consider this, an old baseball story about Willie Mays. Early in Mays' career, his manager, Leo Durocher, would ask Willie such questions as, "There's one out, you're on second base, and there's a left-handed hitter at the plate who can't get around on the pitcher's fastball. How long of a lead should you take?" Mays' reply was, "I don't need to know; that's for the manager to tell me. I'm the player." Durocher replied, "No, Willie, you need to know these things, and when you develop an understanding, it'll make you a better ballplayer."

OK, it's not practical to be an expert in everything. But in the Microsoft computing world, being a Jack (or Jill) of many disciplines will ultimately increase the value that you bring to the table. (And it doesn't preclude you from being a master at one or two.) Of all the buzzwords you hear every day, "value" is one to always keep in the front of your mind.

What’s on the Menu?

This installment will start out with a set of C# tips for WCF, and then cover a few miscellaneous tips for using C# with both SQL Server and other tools in the Microsoft BI stack. Finally, we’ll look at some powerful capabilities in the BI stack. So, here are the menu items for today!

The Baker’s Dozen Spotlight:

  • Creating some basic WCF factory classes (in three parts)
  • Setting up SQL Server 2005 stored procedures as HTTP endpoints and then calling them from .NET
  • Executing SQL Server Integration Services (SSIS) packages from within .NET
  • Generating email dynamically from inside SSIS
  • Revisiting some basic productivity classes in C# for handling Crystal Reports.NET data
  • Reading a Microsoft Analysis Services OLAP database from C#
  • Defining roles in an OLAP/Analysis Services environment so that certain users/roles (e.g., brand managers) will see only certain dimension members
  • Building an OLAP date dimension and writing some MDX code to perform seasonal analysis of sales across two years' Easter seasons
  • MDX Named Sets in SQL Server 2008
  • Searching data dynamically in SQL Server without using dynamic SQL
  • Using the PerformancePoint Service Web Service from .NET

Some Background for the First Three Tips…

Windows Communication Foundation (WCF) has been out for roughly two years now. Of all the new .NET features in the last few years, WCF is one of my favorites. I never enjoyed maintaining multiple code bases for .NET Remoting and web services, and even wrote some factory classes in the June/July 2005 issue of CoDe Magazine to provide a single model for different communication protocols. (In retrospect, those classes were a VERY poor man’s WCF!)

Even though WCF may seem like "old news" to some, there are many developers who are still learning WCF. I have a community session (a beginner session) called "Crash Course on WCF Programming" that is well attended. Because so many developers are still seeking information to get started on WCF, I'll provide a brief walkthrough of a solution in Tips 1-3.

However, I also want to provide some value to those who have been through the basics of WCF. So I’ll provide a client-side factory class that you can use to avoid repeated WCF calls to ChannelFactory and CreateChannel. I’ll also talk about some issues you may encounter when hosting WCF services over HTTP.

Editor’s Note: This article was first published in the January/February 2009 issue of CoDe Magazine, and is reprinted here by permission.

Tip 1: Creating Basic WCF Factory Classes, Building Contracts (Part 1 of 3)

To begin this three-part tip, I’ll dive right in and discuss the fairly simple requirements for this little WCF exercise: a back-end function to validate a user ID and password. While that may seem like a mundane function, the process allows you to see the basics of WCF in action. So I’ll cover those basic steps in this order:

  1. Create a class called LoginResult as a WCF DataContract, with WCF DataMember properties for UserName, ActiveFlag, LoginMessage, and a Boolean OKToLogin property that controls whether a user should be allowed in the system. The LoginResult class will reside in a separate assembly with other DataContract classes.
  2. Create an interface called IUser as a WCF ServiceContract, with a WCF OperationContract called ValidateUserID.
  3. Create a service that implements the IUser Service Contract, and then host the service in IIS.
  4. Create a client-side remote factory called GetRemoteFactoryChannel that returns a WCF channel for the type of service being referenced.

This tip will cover steps 1 and 2 of the exercise and then the next two tips will cover steps 3 and 4.

I'll start by creating a class library project called DataContracts. In the project, I'll add a .NET reference to System.Runtime.Serialization. This reference is required because the DataContract represents the default serialization model in WCF. In the project, I'll add the class below, which is essentially a basic class with four properties, all marked with WCF DataContract and DataMember attributes. I'll compile this as DataContracts.DLL:

   // Basic entity class for result set, as a WCF DataContract
   using System;
   using System.Collections.Generic;
   using System.Runtime.Serialization;

   namespace StoneWorksJob.ResultSets
   {
      [DataContract]
      public class clsLoginResult
      {
         [DataMember]
         public string UserName { get; set; }

         [DataMember]
         public int ActiveFlag { get; set; }

         [DataMember]
         public string LoginMessage { get; set; }

         [DataMember]
         public bool OKToLogin { get; set; }
      }
   }

Next, I'll create a second class library project called ServiceContracts. In the project, I'll add a .NET reference to the DataContracts.DLL assembly, as well as the main WCF namespace, System.ServiceModel. In this project, I'll add the following class, which is essentially a .NET interface with a ValidateUserID method that returns a clsLoginResult. Similar to the preceding class, I've marked the interface and method with WCF ServiceContract and OperationContract attributes. I'll compile this as ServiceContracts.DLL:

   // Interface, as a WCF Service Contract
   using System;
   using System.Collections.Generic;
   using System.ServiceModel;
   using System.ServiceModel.Channels;
   using StoneWorksJob.ResultSets;

   namespace StoneWorksJob.Contracts
   {
      // WCF ServiceContract and OperationContract attributes
      [ServiceContract]
      public interface IUser
      {
         [OperationContract]
         clsLoginResult ValidateUserID(string UserID,
            string PassWord, int DatabaseKey);
      }
   }

Tip 2: Creating Basic WCF Factory Classes, Hosting a Service in IIS (Part 2 of 3)

Now that I've built a data contract and a service contract, I can create a service that implements the service contract IUser. To do this, I'll create a third class library project called Services. In this project, I'll add .NET references to the DataContracts.DLL and ServiceContracts.DLL assemblies, as well as the main WCF namespace, System.ServiceModel.

In the project, I’ll add the following class, named svcUser, which implements the IUser contract, and contains the OperationContract method ValidateUserID. I’ll compile this as Services.DLL:

   // Basic service that implements the IUser ServiceContract
   using System;
   using System.Collections.Generic;
   using System.ServiceModel;
   using System.ServiceModel.Channels;
   using StoneWorksJob.ResultSets;

   namespace StoneWorksJob.Services
   {
      public class svcUser : IUser
      {
         public clsLoginResult ValidateUserID
            (string UserID, string Password, int DatabaseKey)
         {
            // bzUser is the business-tier class (not shown here)
            // that performs the actual validation
            bzUser User = new bzUser();
            clsLoginResult LoginResult =
               User.ValidateUserID(UserID,
               Password, DatabaseKey);
            return LoginResult;
         }
      }
   }

So at this point I have three DLLs: DataContracts.DLL, ServiceContracts.DLL, and Services.DLL. Next I’ll use IIS to host the Services.DLL so I can access the service from a client.

Hosting a WCF service in IIS is a breeze, most of the time! A while back, I had to host a WCF service on a shared web hosting provider. When I tried to test my service, I received the following error: "This collection already contains an address with scheme http. There can be at most one address per scheme in this collection."

Many web service providers have more than one host header bound to the site. Fortunately, two blogs (http://www.darkside.co.za/Default.aspx and http://www.robzelt.com/blog) offer excellent workarounds for this problem. The workaround is to override the ServiceHostFactory with code to check the number of Uri base addresses.

The following code shows a class called CustomServiceHostFactory that overrides the system ServiceHostFactory class, and creates a new ServiceHost object from either the first base address or the second base address. You may want to modify this class so it iterates through the baseAddresses collection for the one you want:

   // Custom class that inherits from ServiceHostFactory
   using System;
   using System.ServiceModel;
   using System.ServiceModel.Activation;

   class CustomServiceHostFactory : ServiceHostFactory
   {
      protected override ServiceHost CreateServiceHost
         (Type serviceType, Uri[] baseAddresses)
      {
         ServiceHost serviceHost;
         if (baseAddresses.Length > 1)
            serviceHost = new ServiceHost(
               serviceType, baseAddresses[1]);
         else
            serviceHost = new ServiceHost(
               serviceType, baseAddresses[0]);
         return serviceHost;
      }
   }

The Service.SVC file for the web project that references the CustomServiceHostFactory is as follows:

   <%@ ServiceHost Language="C#" Debug="true"
       Service="StoneWorksJob.Services.svcUser"
       Factory="CustomServiceHostFactory" %>
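The service's endpoint and binding still come from configuration. As a minimal sketch (the binding choice and contract name here are assumptions based on the projects above, not reprinted from the original article), the hosting web.config might contain:

   <system.serviceModel>
     <services>
       <service name="StoneWorksJob.Services.svcUser">
         <endpoint address=""
                   binding="wsHttpBinding"
                   contract="StoneWorksJob.Contracts.IUser" />
       </service>
     </services>
   </system.serviceModel>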

Tip 3: Creating Basic WCF Factory Classes, The Client (Part 3 of 3)

At this point, I can build a client piece to call the WCF service. In this example, I’ll call the service from a .NET Windows Forms application. In the past, I might have done something like this:

   ChannelFactory<IUser> UserFactory =
      new ChannelFactory<IUser>("User");
   IUser userProxy =
      UserFactory.CreateChannel();

For years, some have made fun of me for trying to combine two lines of code into one. Guilty as charged! I'm usually interested in simplifying things, especially if I can create a simpler interface where others (or I myself) don't have to worry about the exact syntax. So given that any client-side call to a WCF service requires the two lines of code above, I'd like to combine them into one, for all the different services that I might call.

Here’s a small factory class (in namespace ClientRemoteAccess) called GetRemoteFactoryChannel:

   // Factory class to create a "generic" WCF Factory channel
   public T GetRemoteFactoryChannel<T>()
   {
      Binding oBinding = this.GetBinding();
      ChannelFactory<T> oChannelFactory =
         new ChannelFactory<T>(oBinding);
      oChannelFactory.Endpoint.Address =
         new EndpointAddress(this.ServerURL + "/" +
         this.ServiceName);
      oChannelFactory.Credentials.Windows.ClientCredential.UserName
         = this.UserName;
      oChannelFactory.Credentials.Windows.ClientCredential.Password
         = this.PassWord;
      return oChannelFactory.CreateChannel();
   }

   private Binding GetBinding()
   {
      if (this.UsingTCP == true)
         return new NetTcpBinding();
      else
         return new WSHttpBinding();
   }

All I have to do is specify the contract interface that I'm using as the type parameter, and the function will return a factory channel that I can use to call my service in a strongly typed fashion:

   // Using the Factory class to call the back-end service
   public clsLoginResult ValidateUserID(string UserID,
      string Password, int DatabaseKey)
   {
      // Define object of the type of the interface (contract)
      IUser oUser;

      // Create instance of the Remote Access library
      ClientRemoteAccess oRemoteAccess = new ClientRemoteAccess();

      // Create instance of oUser by calling the factory
      // and specifying the type of IUser
      oUser = oRemoteAccess.GetRemoteFactoryChannel<IUser>();

      // Now that we have oUser as a proxy class, we can call it
      clsLoginResult oResult = oUser.ValidateUserID(
         UserID, Password, DatabaseKey);
      return oResult;
   }

Tip 4: SQL Server 2005 Stored Procedures, HTTP Endpoints, and Web Services

While this next tip may seem like "old news" to black-belt SQL developers, many people are just now getting into the new features in SQL Server 2005. Here's a feature that doesn't receive as much attention as it deserves: exposing stored procedures through HTTP endpoints. This means you can make SQL Server 2005 an HTTP listener, and consume your stored procedures as a web service.

I’ll do two things to make this happen. First, I’ll define an HTTP endpoint in SQL Server 2005. The following code shows where I’ve exposed the AdventureWorks stored procedure uspGetEmployeeManagers. Note the names provided for the endpoint, the web method, and the namespace:

   -- Create HTTP endpoint for a stored procedure
   -- Set endpoint for any request
   CREATE ENDPOINT TestEmployeeEndPoint
   STATE = STARTED
   -- Specify as HTTP
   AS HTTP (
      -- Name of path (virtual path)
      PATH = '/Employees', AUTHENTICATION = (INTEGRATED),
      -- If using HTTPS, change CLEAR to SSL
      PORTS = (CLEAR), SITE = 'localhost')
   FOR SOAP (
      WEBMETHOD 'uspGetEmployeeManagers'
      -- Name of stored proc
      (NAME = 'AdventureWorks.dbo.uspGetEmployeeManagers',
       SCHEMA = STANDARD),
      WSDL = DEFAULT,
      BATCHES = ENABLED,
      DATABASE = 'AdventureWorks',
      -- Endpoint namespace
      NAMESPACE = 'http://AdventureWorks:8080/Employees')

Second, I’ll create a web service reference using the URL http://localhost/Employees?wsdl. After I do that, I can call the stored procedure as a web service as shown below:

   // Test C# code to consume the web service
   // .NET code to call stored procedure via the HTTP endpoint
   // Make sure the project contains a .NET web service reference
   // to the same endpoint definition from the HTTP endpoint
   WS_Employee.TestEmployeeEndPoint oEmployee =
      new WS_Employee.TestEmployeeEndPoint();

   // Must pass in credentials to authenticate to use the service
   oEmployee.Credentials =
      System.Net.CredentialCache.DefaultNetworkCredentials;

   // Result comes back as an object array, must cast result
   object[] oResult = oEmployee.uspGetEmployeeManagers(1);
   DataSet dtTest = (DataSet)oResult[0];

In the preceding code, note that it may be necessary to set Credentials based on the current network credentials. Additionally, the account associated with the web application must have connection permissions to the endpoint.
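Granting that connection permission is a single T-SQL statement. Here's a sketch, with a placeholder for the web application's account name:

   -- Grant the web application's account permission to connect
   -- to the endpoint (the account name is a placeholder)
   GRANT CONNECT ON ENDPOINT::TestEmployeeEndPoint
      TO [MYDOMAIN\WebAppAccount];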

Tip 5: Executing SSIS Packages from .NET

When I started using SQL Server Integration Services (SSIS) 2005, it was love at first sight. With each passing day, I love it more.

Sometimes I’ve needed to execute an SSIS 2005 package from within .NET. Listing 1 shows the code you need to execute an SSIS package, regardless of whether the package was deployed to the file system or to SQL Server. Note that you must add a .NET reference to Microsoft.SQLServer.ManagedDTS.dll.
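Since Listing 1 isn't reprinted here, the following minimal sketch shows the same basic pattern using the managed DTS runtime. The package name, file path, and server below are placeholders, and production code would also inspect the package's Errors collection:

   // Sketch: executing an SSIS 2005 package from C#
   // (requires a reference to Microsoft.SQLServer.ManagedDTS.dll)
   using Microsoft.SqlServer.Dts.Runtime;

   Application dtsApp = new Application();

   // Load a package deployed to the file system...
   Package oPackage =
      dtsApp.LoadPackage(@"C:\Packages\LoadSales.dtsx", null);

   // ...or, alternatively, one deployed to SQL Server:
   // Package oPackage = dtsApp.LoadFromSqlServer(
   //    "LoadSales", "localhost", null, null, null);

   // Execute and check the overall result
   DTSExecResult oResult = oPackage.Execute();
   bool blnSuccess = (oResult == DTSExecResult.Success);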

Tip 6: Dynamic Email Generation in SSIS 2005

A frequent SSIS question is how to set properties dynamically. For example, SSIS contains an email task so that you can send emails and email attachments after a task has completed. Oftentimes, the email name and attachment file locations will be dynamic, and must be read from variables. Fortunately, this is easy to do and I’ll strip it down to the basics.

I’ll start by creating SSIS variables for email attachment location and file, as well as Contact information (see Figure 1). Then in the SSIS Send Mail task, I need to do two things (both in Figure 2):

Figure 1. SSIS Variables: These variables define email attachment location, file names, and contact information.
Figure 2. Overriding Send Mail Task Properties: Set DelayValidation to true and map variables to the corresponding properties.

First, because I want to set email properties for runtime evaluation, I need to set the DelayValidation property to true. Second, I can go into the Expression mode and map my variables to the corresponding properties. Note that the email subject line is a concatenation of two variables.
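As a sketch of what such property expressions look like (the variable names here are illustrative, not the exact ones in Figure 1), the FileAttachments property might concatenate the location and file variables, and the Subject property might concatenate two variables as well:

   @[User::AttachmentPath] + @[User::AttachmentFile]

   @[User::SubjectPrefix] + " - " + @[User::ContactName]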

Figure 3. Script Task Editor in SSIS 2008: As you can see, SSIS 2008 adds the option to change the script language.

Before leaving the topic of SSIS, here's some important news on SSIS 2008. I've had to write a few SSIS scripts over the last year. In SSIS 2005, scripts support only VB. Alas, I am a C# developer, and while I can write VB code, I (and others) have longed for the ability to write C# code in SSIS scripts. Fortunately, SSIS 2008 allows you to choose which language you want for scripts! Figure 3 demonstrates the language option in SSIS 2008.

Tip 7: Crystal Reports.NET Redux (redux)

No matter how many times I’ve blogged or written about reusable classes to set data sources in Crystal Reports .NET, many developers still aren’t sure about setting runtime data sources for reports in Crystal.

(Before I continue, I want to clarify that by Crystal Reports.NET, I’m referring to the version of Crystal Reports that comes with the higher-end versions of Visual Studio 2005 and 2008.)

In the January/February 2007 issue of CoDe Magazine, I wrote an article that presented a reusable library (cgsCrystalTools) with some productivity classes for working with Crystal Reports.NET. One of the functions in the main class shows how to push the contents of any DataSet into a Crystal Reports object: the function takes care of all the tables in the main report object, as well as any tables in any sub-report objects. I've reprinted it here:

   // Code to return a Crystal Reports.NET object with data
   // Primary method, PushReportData, to push a dataset
   // into a report object
   // Method will handle any subreport objects automatically
   public ReportDocument PushReportData(
      DataSet DsReportData, ReportDocument oReport)
   {
      this.SetData(DsReportData, oReport);
      foreach (ReportDocument oSubReport in oReport.Subreports)
         this.SetData(DsReportData, oSubReport);
      return oReport;
   }

   private void SetData(DataSet DsReportData, ReportDocument oReport)
   {
      foreach (Table oTable in oReport.Database.Tables)
         oTable.SetDataSource(
            DsReportData.Tables[oTable.Name.ToString()]);
   }

You can simply create your own class and include this code, and then call the PushReportData function from your application.
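For instance, a usage sketch might look like the following (the data-access call, report class, and the class that houses PushReportData are placeholder names for your own types):

   // Hypothetical usage of PushReportData (names are placeholders)
   DataSet dsOrders = oDataAccess.GetOrderData();   // typed DataSet from the data tier
   rptOrders oReport = new rptOrders();             // report class from Reports.DLL
   CrystalTools oTools = new CrystalTools();        // class containing PushReportData
   oReport = oTools.PushReportData(dsOrders, oReport);
   crystalReportViewer1.ReportSource = oReport;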

The following code shows how you can push a .NET collection (i.e. a List<> class) into Crystal Reports:

   // Code to supply a custom list to Crystal Reports
   // First, create an instance of a List collection
   // for the entity
   List<TestReportInfo> oList = new List<TestReportInfo>();

   // Add some sample data
   oList.Add(new TestReportInfo("Kevin Goff", 90000));
   oList.Add(new TestReportInfo("John Goff", 95000));
   oList.Add(new TestReportInfo("Mike Smith", 99000));

   // Create an instance of the report
   CrystalReport1 oRpt = new CrystalReport1();

   // Finally, set the report data source to the list
   oRpt.SetDataSource(oList);
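The snippet assumes a TestReportInfo entity that isn't shown above; a minimal sketch might look like this (the property names are my assumptions):

   // Hypothetical entity behind the custom-list example
   public class TestReportInfo
   {
      public string FullName { get; set; }
      public decimal Salary { get; set; }

      public TestReportInfo(string fullName, decimal salary)
      {
         this.FullName = fullName;
         this.Salary = salary;
      }
   }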

Despite the fact that it’s possible to build reports in Crystal Reports against custom lists, I still recommend strongly-typed ADO.NET DataSets as the best practice for applications that make moderate to heavy use of Crystal Reports. I’ve written about the methodology that I use in my book, Pro VS 2005 Reports with Crystal Reports and .NET. In a nutshell, here’s the approach:

  • I’ll start by designing a physical model for my typed DataSets, in a separate DLL (e.g. DataSets.DLL). If I have many typed DataSets, sometimes I will generate the XSD schemas from SQL Server.
  • I’ll design the reports using the Crystal Reports designer and point to the XSD for the design-time data source.
  • I’ll build the reports as a separate DLL (Reports.DLL).
  • At run time, I’ll set references in my .NET application to both DataSets.DLL and Reports.DLL.
  • I’ll also set a reference to the Crystal productivity library cgsCrystalTools that I mentioned earlier in this section. This allows me to generate reports to the screen if I’m working in a client/UI tier, or directly to a PDF or other storage if the report is executing in an application layer unattended.

I’ll grant that other methodologies will work, though at the cost of a workaround. For instance, using a custom entity class as a design-time data source means that the class (or a proxy) must reside in the same project as the report. So to this day, I continue to recommend strongly-typed DataSets as the best overall approach for reporting applications in .NET.

Tip 8: Defining Data Access Roles in Analysis Services/OLAP Environment

Do any of the following scenarios sound familiar?

  • User X should see data only for Accounts A, B, and C.
  • User Y should see data only for Market A and Product B.
  • For Product A, User Z can see all Markets. But for Product B, user Z should see data only for Market C.

Organizations expect software systems to filter data based on user roles and definitions, and will react very harshly if users are allowed to see data beyond their authorization levels! In relational database systems using SQL Server, developers usually must build their own authorization methodology: SQL Server does not contain any built-in functionality for row-level authorization. That means developers must modify SQL queries to take into account any user definitions.
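For example, a hand-rolled approach might join every query to an authorization table, along these lines (the table and column names are hypothetical):

   -- Sketch of a hand-rolled row-level filter: each user
   -- sees only the markets they are authorized for
   SELECT s.ProductID, s.MarketID, s.SaleAmount
   FROM dbo.FactSales s
   JOIN dbo.UserMarketAuth a
      ON a.MarketID = s.MarketID
   WHERE a.UserName = SUSER_SNAME();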

By contrast, Microsoft Analysis Services 2005 makes it a breeze to define user authorization roles in OLAP databases. Here’s an illustrated walkthrough for defining access to the Bikes product category in the AdventureWorks DW database:

  1. Connect to SQL Server Analysis Services (in SQL Server Management Studio). Then navigate in Object Explorer to the OLAP database you want to access, expand the database, and right-click on the Roles list to add a new role (see Figure 4).
  2. In the edit role dialog box (see Figure 5), provide a Role Name and a description. Also, set the database permissions (for most users, only Read Definition will be checked).
  3. On the Membership page (see Figure 6), add the users and/or user groups that will be part of this role. (The user names/groups come from the list of active Windows user names.)
  4. Select the database and cubes for access (see Figure 7 and Figure 8).
  5. Important! Go to the Dimension Data page (see Figure 9) and define the dimension/attribute/dimension member associated with the role. In this example, I've chosen the Bike category. (I also could have selected the Geography dimension to further refine access by any level in the geography hierarchy.)
  6. Finally, I can also restrict access to specific measures in the OLAP cube (see Figure 10).

Figure 4. New Role: New Analysis Services database roles appear in the tree view.
Figure 5. Defining a Role: Fill out the dialog to add a new role.
Figure 6. Adding Users/Groups: To add users and/or user groups for the role, enter or select them in this dialog.
Figure 7. Setting Access: You can restrict access levels to the data source.
Figure 8. Cube Access: You can restrict access levels for specific cubes.
Figure 9. Dimension Member Access: You can even define access levels to specific dimension members (such as Bikes and all data under Bikes).
Figure 10. Access to Pre-Defined Measures: For each role, you can allow or deny access to pre-defined measures.

For anyone who has ever had to write custom SQL code to implement user authorizations, the capabilities in Analysis Services to utilize the UDM (Unified Dimensional Model) make this task a breeze!

Tip 9: Custom OLAP Date Dimension and Handling Seasonality with MDX

Without question, one of the many reasons that organizations seek OLAP functionality is to analyze performance over a period of time. Therefore, period (i.e., date)-handling functions in MDX are critical.

MDX contains several functions, such as PeriodsToDate, ParallelPeriod, LastPeriods, etc., for analytical purposes. The AdventureWorks DW sample database contains examples of these functions, as well as a good example of multiple date hierarchies (Year, Quarter, Month, for both a fiscal calendar and a regular calendar).

Those capabilities are sufficient for many organizations. However, suppose a company analyzes data by seasonality, where the exact dates differ every year. For example, Thanksgiving (and the famous shopping day, "Black Friday") falls on a different date every year, sometimes closer to December 1 than in other years. Companies might want to analyze sales across years for the week before/week after. Similarly, sporting events such as the Super Bowl and World Series fall on different dates every year.

An even larger example is Lent, which begins at a different time every year. A company that tracks sales by Lent (e.g., fish products) cannot simply compare sales from February/March of 2007 to those in February/March of 2006.

To solve such problems, those responsible for maintaining OLAP databases must define additional information in the source of a date calendar dimension. One solution would be to add a LentWeekNum numeric column into the source staging area for a date calendar, where the value represents the Lent Week number associated with that specific date/week.
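As a sketch of what that staging change might look like (the table and column names are hypothetical, and Ash Wednesday's date must be supplied for each year):

   -- Add the seasonal column to the date staging table
   ALTER TABLE dbo.StageDateCalendar ADD LentWeekNum int NULL;

   -- Populate it for one year (Ash Wednesday fell on 2/21 in 2007);
   -- Lent runs 46 days, from Ash Wednesday through Holy Saturday
   DECLARE @AshWednesday datetime;
   SET @AshWednesday = '20070221';

   UPDATE dbo.StageDateCalendar
      SET LentWeekNum =
         (DATEDIFF(day, @AshWednesday, CalendarDate) / 7) + 1
      WHERE CalendarDate BETWEEN @AshWednesday
         AND DATEADD(day, 45, @AshWednesday);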

With all the supporting data in place, a developer could write an MDX query like the one below, which lines up Lent Weeks for 2007 and 2006 using the LentWeekNum and the MDX ParallelPeriod function.

   // MDX code to handle seasonality (Lent weeks)
   WITH MEMBER [LastYearLent] AS
      ([Measures].[Sale Amount],
         ParallelPeriod([Date Calendar].
            [Bz Calendar Hierarchy].[Year], 1))
   SELECT {[Measures].[Sale Amount], [LastYearLent]}
      ON COLUMNS,
      [Date Calendar].[Lent Week Num].[1] :
      [Date Calendar].[Lent Week Num].[8] ON ROWS
   FROM [Test Cube]
   WHERE [Date Calendar].[Bz Calendar Hierarchy].[2007]

   Lent Week   Sale Amount   LastYearLent
   =========   ===========   ============
   1           213468        (null)
   2           210668        131795
   3           (null)        198342
   4           180849        (null)
   5           184585        166994
   6           (null)        191179
   7           192178        (null)
   8           174046        213207

Tip 10: MDX Named Sets in SSAS 2008

T-SQL developers making the transition to OLAP will often try to seek counterparts for stored procedures and user-defined functions. Some discover that OLAP/MDX calculated members serve as a nice counterpart for scalar user-defined functions.

The counterpart for stored procedures and table-valued user-defined functions is a very different story. MDX provides named sets, which developers can use to define stored MDX expressions that return a set of information. Developers can refer to a stored MDX expression with an alias, as needed.

For example, suppose you want to create a named set that will return the 10 most profitable product categories from the AdventureWorks DW database and show the ranking number for each category.

Figure 11. Named Set: This MDX expression uses TOPCOUNT to retrieve the 10 most profitable product categories.

Figure 11 shows an example of an MDX named set called [Top10ProductsByProfit].

Here’s the MDX expression in Figure 11. Note that it uses the TOPCOUNT function to retrieve the 10 most profitable product categories:

   TOPCOUNT(
      [Product].[Product Categories].[Product].Members,
      10, [Measures].[Internet Gross Profit])

A developer can enter the code for the named set in Visual Studio for a project associated with the OLAP cube (as shown in Figure 11), or directly into the OLAP cube (e.g. in SSMS 2005).

Next, the following code (an MDX calculated member) calculates the ranking number for each product within the top 10 set. Note that the second parameter for the MDX RANK function is the actual named set alias from above.

   RANK([Product].[Product Categories].CurrentMember,
      [Top10ProductsByProfit])
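In the cube's MDX script, that calculation would be registered as a calculated member. Here's a sketch, assuming the member name [ProductProfitRank] that the queries below reference:

   CREATE MEMBER CURRENTCUBE.[Measures].[ProductProfitRank] AS
      RANK([Product].[Product Categories].CurrentMember,
           [Top10ProductsByProfit]);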

A developer can then use the named set on the ROW axis of any MDX query, as follows:

Figure 12. Static Named Set: This static named set has no additional dimension slicing (correct results).

   // Note the predefined named set and
   // calculated member
   SELECT
      {[Measures].[Internet Gross Profit],
       [Measures].[ProductProfitRank]}
      ON COLUMNS,
      [Top10ProductsByProfit] ON ROWS
   FROM [Adventure Works]

This produces the result set in Figure 12. So far, so good. However, the query above generated the top 10 products by profit for all geographies as an aggregate.

Now suppose an analyst wants to see only the top 10 selling products for the country of France:

Figure 13. Static Named Set for France: This query added additional dimension slicing, but produces incorrect results, as shown by the low gross profit figures.

   SELECT
      {[Measures].[Internet Gross Profit],
       [Measures].[ProductProfitRank]}
      ON COLUMNS,
      [Top10ProductsByProfit] ON ROWS
   FROM [Adventure Works]
   WHERE [Customer].[Country].[France]

The preceding query generates the result set in Figure 13. The product profit rank column appears "correct," but the gross profit numbers are suspiciously low. In fact, a closer examination reveals that the result set contains the same 10 products that the query displayed in the first result set (for all geographies).

This example demonstrates a serious issue with permanently stored MDX named sets in Analysis Services 2005. In 2005, named sets are static; in other words, SSAS evaluates the MDX expression once, and ignores any subsequent dimension slicing that a user might perform. So essentially, SSAS never sees the slicing for the country of France. While the result set correctly shows the profit dollars for each product for France, the ranking and Top 10 logic are incorrect.

Some are unaware of this issue in SSAS 2005 (and may become aware of it only after someone points out incorrect numbers on a report). Others are aware, and use the only real workaround: Include the MDX named set expression “inline” in a query, thereby losing out on reusability.

One of the top enhancements in SQL Server Analysis Services 2008 is dynamic named sets. As implied from the name, dynamic named sets perform as developers expected all along. Figure 14 shows how you can define a named set as dynamic in the Visual Studio interface.
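Under the covers, the MDX script definition gains the DYNAMIC keyword; here's a sketch of the SSAS 2008 syntax, using the set from earlier in this tip:

   CREATE DYNAMIC SET CURRENTCUBE.[Top10ProductsByProfit] AS
      TOPCOUNT(
         [Product].[Product Categories].[Product].Members,
         10, [Measures].[Internet Gross Profit]);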

Figure 14. Defining a Dynamic Named Set: You can change a named set to be dynamic in SSAS 2008.
Figure 15. Correct Results: After defining a dynamic named set, the results for France are correct.

After executing the same code, Figure 15 shows the correct results, demonstrating that dynamic named sets will recognize any subsequent dimension slicing.

Tip 11: Reading OLAP Data with .NET

.NET developers may need to query OLAP data from within a .NET application. Fortunately, developers can use the same OleDbConnection, OleDbDataAdapter, and OleDbCommand objects that they use when querying relational data. Here’s an example of querying an OLAP cube, using MDX as the command text:

   // Reading data from an OLAP source
   // (uses the System.Data and System.Data.OleDb namespaces)
   OleDbConnection oConn = new OleDbConnection();
   oConn.ConnectionString =
      "provider=msolap;Data " +
      "Source=localhost;" +
      "initial catalog=Adventure Works DW;";
   OleDbCommand oCmd = new OleDbCommand();
   oCmd.Connection = oConn;
   oCmd.CommandText = "SELECT [Internet Sales Amount] ON COLUMNS, ";
   oCmd.CommandText += " DESCENDANTS ( [Date].[Fiscal].[FY 2004], ";
   oCmd.CommandText += " [Date].[Fiscal].[Month] , ";
   oCmd.CommandText += " SELF_AND_BEFORE ) ON ROWS ";
   oCmd.CommandText += " FROM [Adventure Works] ";
   DataSet dsCube = new DataSet();
   OleDbDataAdapter oDA =
      new OleDbDataAdapter(oCmd.CommandText, oConn);
   oDA.Fill(dsCube, "CubeData");

Additionally, developers may want to “discover” the entire dimension model of an OLAP cube. Using the Microsoft Analysis Services ADOMD Client, a developer can open an AdomdConnection and then iterate through the Cubes, Dimensions, Hierarchies, and Levels collections:

   // Using the Microsoft Analysis Services ADOMD client
   // (requires a reference to Microsoft.AnalysisServices.AdomdClient)
   AdomdConnection oConn = new AdomdConnection(
      "Data Source=localhost;" +
      "initial catalog=Adventure Works DW");
   oConn.Open();
   DataTable dtCubeInfo = new DataTable();
   dtCubeInfo.Columns.Add("Cube", typeof(System.String));
   dtCubeInfo.Columns.Add("Dimension", typeof(System.String));
   dtCubeInfo.Columns.Add("Hierarchy", typeof(System.String));
   dtCubeInfo.Columns.Add("Level", typeof(System.String));
   foreach (CubeDef oCube in oConn.Cubes)
      foreach (Dimension oDimension in oCube.Dimensions)
         foreach (Hierarchy oHierarchy in oDimension.Hierarchies)
            foreach (Level oLevel in oHierarchy.Levels)
               dtCubeInfo.Rows.Add(
                  oCube.Name,
                  oDimension.Name,
                  oHierarchy.Name,
                  oLevel.Name);

Developers can use this approach to construct custom query/browser-like interfaces for the OLAP dimension hierarchy.

Tip 12: Searching Dynamically in SQL Server without Using Dynamic T-SQL

Not only are stored procedures often preferable, they are also often necessary. Many companies lock down tables in SQL Server (or whatever back-end database applies) so that stored procedures are the only access to data.

Critics of stored procedures claim that stored procedures aren't as flexible as other approaches, such as dynamic SQL. Take, for example, a search screen for an end-user customer representative: the screen prompts the user for name, address, city, state, zip, etc. The end user may need to search on a full name and a partial address, or a name that starts with "SMI" and an address that ends in "AVE," etc. (Presumably, the screen might have a small pull-down beside each field, with options for equals, begins with, ends with, and contains.)

Again, those advocating dynamic SQL will construct the query based on the fields the user filled in, and feel that stored procedures aren’t as flexible. The code here challenges that opinion, using a stored procedure that:

  • Accepts all the possible lookup options as parameters
  • Sets any NULL parameters (i.e. fields that the user left blank) to '%' (the SQL Server wildcard character)
  • Uses a series of LIKE @ColumnParameter comparisons in the WHERE clause, so any unspecified parameter becomes a simple wildcard match against its column, which the optimizer handles efficiently

Here’s the stored procedure code:

   CREATE PROCEDURE [dbo].[GetCustomers]
      @LastName varchar(50) = null,
      @FirstName varchar(50) = null,
      @Address varchar(50) = null,
      @City varchar(50) = null,
      @State varchar(2) = null,
      @Zip varchar(13) = null
   AS
   BEGIN
      SET NOCOUNT ON;

      SET @LastName = ISNULL(@LastName, '%')
      SET @FirstName = ISNULL(@FirstName, '%')
      SET @Address = ISNULL(@Address, '%')
      SET @City = ISNULL(@City, '%')
      SET @State = ISNULL(@State, '%')
      SET @Zip = ISNULL(@Zip, '%')

      SELECT * FROM Customers
      WHERE (LastName LIKE @LastName) AND
            (FirstName LIKE @FirstName) AND
            (Address LIKE @Address) AND
            (City LIKE @City) AND
            (State LIKE @State) AND
            (Zip LIKE @Zip)
   END

All the application must do is place the SQL Server wild card character (%) at the beginning and/or end of any search string, based on how the user wanted to apply the search for that column. Here are some examples:

   -- Last name starts with SMITH,
   -- lives in NY
   EXEC [dbo].[GetCustomers]
      @LastName = 'SMITH%',
      @State = 'NY'

   -- First name is LAURA,
   -- lives on an AVE, in PA
   EXEC [dbo].[GetCustomers]
      @FirstName = 'LAURA',
      @Address = '%AVE%',
      @State = 'PA'

Assuming you have SQL indexes on all the columns (names, address, etc.), the execution plan demonstrates that the query costs reside in index seeks and key lookups. I’ve used this approach for customer databases with nearly a million rows, and the searches are usually instantaneous.
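For reference, the supporting indexes might look like this sketch (the index names are illustrative):

   -- One nonclustered index per searchable column
   CREATE NONCLUSTERED INDEX IX_Customers_LastName
      ON dbo.Customers (LastName);
   CREATE NONCLUSTERED INDEX IX_Customers_FirstName
      ON dbo.Customers (FirstName);
   CREATE NONCLUSTERED INDEX IX_Customers_Address
      ON dbo.Customers (Address);
   CREATE NONCLUSTERED INDEX IX_Customers_City
      ON dbo.Customers (City);
   CREATE NONCLUSTERED INDEX IX_Customers_State
      ON dbo.Customers (State);
   CREATE NONCLUSTERED INDEX IX_Customers_Zip
      ON dbo.Customers (Zip);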

Tip 13: The PerformancePoint Server Web Service API

Microsoft PerformancePoint Server 2007 is one of the more exciting products for Business Intelligence. While PPS 2007 is a version 1.0 product, Microsoft will surely enhance it to position it firmly as a capable performance monitoring, scorecarding, and planning application.

While the user base for PPS 2007 is primarily OLAP developers and power users, .NET developers can take advantage of the PPS 2007 web service to create/access scorecards and dashboards programmatically. Figure 16 shows some of the available methods in the PPS web service, accessed via the PPS web service URL (http://localhost:40000/WebService/pmservice.asmx).

Figure 16. PPS Web Service Methods: The figure shows some of the methods available to developers through the PPS web service URL.

You can find the entire source code for this article on my web site. Check my blog for any follow-up tips, additional resources, and notes on Baker’s Dozen articles.
