Who Has Busy Files? Identifying Biggest Performance Users and Bottlenecks (Part 2)

FEATURED AUTHOR
Merrill Aldrich

This is Part 2 of a guest blogging series by longtime DBA, blogger, and all-around SQL Server enthusiast Merrill Aldrich. (Part 1 of this series is available here.) You can find Merrill on Twitter as @onupdatecascade.

This is the second part of a blog series about using out-of-the-box SQL Server, SSIS, and SSRS components to sample and collect IO performance data for all the database files across your enterprise. In the first installment, we created a table to store samples of the IO counters on each source system. In this one, we will start a walkthrough showing how to build an Integration Services package that automatically collects that data into a central repository for reporting.

Note: This walkthrough assumes you know the basics of SSIS, including how to create a package, pull transforms from the toolbox and link them together, and set the properties of objects. If you need those skills, check out a beginning SSIS tutorial, and then come back here for the intermediate-level goodies.

Building an SSIS Package to Collect the IO Performance Data

The package I actually have deployed for this function collects three sets of data about SQL Server files and disks, as shown below. From left to right, I am gathering the IO counters on a per-file basis, file sizes and space used, and the size of disks and available disk space:

Complex Package

For this walkthrough, to keep things simple, we are going to make a version that collects only the IO statistics, as shown in the left-most column. (The other data collections in the package above just repeat the same design, so if you’d like to build those out too, it’s a matter of getting one working, then repeating the process for the other storage metrics.)

Simple Package - Complete

Step 1: Tables in a Central Database

First we need to create a centralized table to store the IO counter data for reporting. This table is similar to the ones out on the SQL instances, but with a server name column added to identify which instance each row came from:

USE [Your Central Admin Database]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE dbo.TrackFileIO(
      AsOfDate datetime NOT NULL,
      server_name nvarchar(128) NOT NULL,
      database_name sysname NOT NULL,
      physical_name nvarchar(260) NOT NULL,
      sample_duration_sec decimal(4, 2) NOT NULL,
      num_of_reads bigint NOT NULL,
      avg_num_of_reads_per_sec bigint NOT NULL,
      num_of_KB_read bigint NOT NULL,
      avg_num_of_KB_read_per_sec bigint NOT NULL,
      io_stall_read_ms bigint NOT NULL,
      avg_io_stall_read_ms_per_sec bigint NOT NULL,
      num_of_writes bigint NOT NULL,
      avg_num_of_writes_per_sec bigint NOT NULL,
      num_of_KB_written bigint NOT NULL,
      avg_num_of_KB_written_per_sec bigint NOT NULL,
      io_stall_write_ms bigint NOT NULL,
      avg_io_stall_write_ms_per_sec bigint NOT NULL,
      io_stall bigint NOT NULL,
      avg_io_stall_per_sec bigint NOT NULL,
CONSTRAINT PK_TrackFileIO PRIMARY KEY CLUSTERED 
(
      AsOfDate ASC,
      server_name ASC,
      physical_name ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO

This table will be the destination for all the data from the SSIS package.

Step 2: Create a New SSIS Project and Package

In Visual Studio, create a new SSIS project, then create and open a new, empty package.

Step 3: Variables for Connection Information

Ultimately, this package will loop through a list of SQL Server instances, and gather data from your ‘admin’ database on each one. In order to control that looping, we’ll need three variables: one holds the name of the source SQL Server instance, one is the database on that instance where the source data is, and the third is a container that will eventually hold the list of SQL Server instances we are iterating through. In the Variables panel, create the variables shown here:

Variables 1

Make sure to start from the Control Flow tab in the package, so that the Scope column shows the package name; that means these variables will be accessible from everywhere in the package. In my case, the package name is “GatherDiskStatsBlg.” Yours can be named differently, as long as the Scope column shows your package name.

For simplicity, we’ll get one server working first, then add the functionality to loop over multiple instances later. It’s good practice to start simple, test, then add complexity. Set the value of the CurrentSourceDB string to the name of one source database you will be collecting from to test, and set the CurrentSourceServer string to the name of that SQL Server instance. These values should be the same ones you would use in a connection string to connect to that database. Create, but ignore, the InstanceList variable for now. Later in the process, that will hold an object that contains a list of the SQL instances from which we are collecting.
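To recap the screenshot above in text form, here are the three variables (the data types are my assumptions: String for the two name variables, and Object for the variable that will eventually hold the list of instances):

    CurrentSourceServer    String    the SQL Server instance to collect from
    CurrentSourceDB        String    the 'admin' database on that instance
    InstanceList           Object    will eventually hold the list of instances to loop over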

Step 4: Create Two OLE DB Connection Managers

In the Connection Managers panel, add two OLE DB connection managers. One will act as the source connection to pull rows from (and eventually will be re-pointed to collect from each SQL instance in turn), and the other will be the destination connection, used to insert rows into the central database from Step 1:

Connection Managers

Name these objects generically, replacing the default Server/Database names generated by Visual Studio—in this case, the DBAdmin_Enterprise connection manager is for my central database, and the DBAdmin_Source connection manager is for the source data out on all the monitored instances. That way the connections can be re-pointed, and the connection manager names will still make sense.

For the source connection manager, DBAdmin_Source, use the Properties panel to set expressions for InitialCatalog and for ServerName to be the values from the variables we created in Step 3:

Connection Manager Expressions

This pair of expressions, when we get further along, will enable the source connection manager to change from one SQL Server instance to the next. This works because SSIS will recompute the connection string for the connection manager from these two expressions, inserting the values from the variables at run time. If we change the variables’ values, the connection string will change, too.
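If you can’t make out the screenshot, the two expressions are nothing more than the variables themselves, in standard SSIS expression syntax (a sketch, assuming the variable names from Step 3):

    ServerName      :  @[User::CurrentSourceServer]
    InitialCatalog  :  @[User::CurrentSourceDB]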

Step 5: Make a Data Flow

We will use a data flow to pull the rows from the source database and move them to the central database. First, though, add one more variable to the variables list, to store a date boundary. This will enable us to pull across only the newest rows from the source system that don’t already exist in the central database:

Variables 2
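This is the variable referred to later as User::LastIOSample. A sketch of how I set it up (the DateTime data type is my assumption; any date/time type that can hold the AsOfDate values will do):

    LastIOSample    DateTime    the newest AsOfDate already present in the central database for the current server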

Next, create a new dataflow, and open it. In my package I’ve called the dataflow “Collect New IO Stats.” When complete, the dataflow will be a very simple source/destination setup like this:

Samples Dataflow

In the new dataflow, create the OLE DB source first. In the properties of the source, we need a query against the source table including a parameter for the boundary date, represented by the ‘?’ placeholder:

Source Query

Two quick notes: First, this query pulls all the columns from the TrackFileIO table at the source, but adds the server / instance name, so that in the central database we can tell from where the data came. Second, note the ‘?’ in the query—this is how to pass a parameter in a query using the OLE DB source. In our case, we’ll be using a date boundary, stored in the variable created above, to isolate the newest rows from the source system. Set the ‘?’ parameter to use the SSIS variable by clicking the Parameters button and adding it, like this:

Source Query Parameters

For this type of data source, parameters are represented by question marks in the query text and by associated numbered parameters, in the same order, in the parameter list. Parameter0 maps to the first question mark in the query text. (Named parameters would be easier to manage than ordered parameters, but, alas, this is what we are stuck with for an ad hoc OLE DB query.)
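Putting those notes together, here is a sketch of the source query (the column list follows the TrackFileIO table from Part 1, and @@SERVERNAME is just one convenient way to add the instance name; adjust both to match your own source table):

SELECT
      @@SERVERNAME AS server_name,
      AsOfDate,
      database_name,
      physical_name,
      sample_duration_sec,
      num_of_reads,
      avg_num_of_reads_per_sec,
      num_of_KB_read,
      avg_num_of_KB_read_per_sec,
      io_stall_read_ms,
      avg_io_stall_read_ms_per_sec,
      num_of_writes,
      avg_num_of_writes_per_sec,
      num_of_KB_written,
      avg_num_of_KB_written_per_sec,
      io_stall_write_ms,
      avg_io_stall_write_ms_per_sec,
      io_stall,
      avg_io_stall_per_sec
FROM dbo.TrackFileIO
-- Parameter0 (the '?') will be mapped to the date boundary variable
WHERE AsOfDate > ?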

Next, create an OLE DB Destination. Choose the DBAdmin_Enterprise connection manager, set the data access mode to Table or View – Fast Load, and select the dbo.TrackFileIO table from Step 1 to direct all the rows into the central database.

Step 6: Try It!

I like designing these things iteratively, working from simple to complex. It probably makes sense to try this package now and make sure this much of it works, as long as you are working against a test server and it’s safe to do so. If you run it repeatedly, you will have to truncate the destination table between runs, because the primary key on dbo.TrackFileIO will otherwise reject the duplicate rows.

Step 7: Limit the Date Range

Next, we’ll want to make that date variable “intelligent,” so that the package can incrementally move only source rows that are newer than the newest rows in the central database. Return to the Control Flow tab, and bracket the data flow with two Execute SQL Tasks, like this:

Add Date Boundaries

Both of these new SQL tasks will have the same function: They query the central database to find the last datetime where data is present from the source server and put that value into our SSIS user variable called LastIOSample.

This takes three steps to set up. First, set the Execute SQL Task connection to the central database, and set the SQL Statement to query for the MAX( AsOfDate ) for some server, again using a ‘?’ placeholder for server name:

Start Date Query
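Here is a sketch of that query (the lastsample column alias matters, since the Result Set mapping below refers to it by name; the ISNULL fallback date is my own addition, so the variable still gets a value the first time a new server is collected):

SELECT ISNULL(MAX(AsOfDate), '19000101') AS lastsample
FROM dbo.TrackFileIO
-- Parameter0 (the '?') will be mapped to User::CurrentSourceServer
WHERE server_name = ?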

Next, we have to map the User::CurrentSourceServer SSIS variable to that ‘?’ placeholder, so that the query will be passed that value at run time. Use the Parameter Mapping panel to provide CurrentSourceServer as Parameter0. (Again, these are ordered—parameter 0, 1, 2, etc.—in the same way as the OLE DB Source query was, above.)

Start Date Parameters

Finally, and this is different from a typical OLE DB source transformation, we want to snag the result from that query and store it in a variable, rather than having results pass down a dataflow. To retain a value from an Execute SQL Task query, you can provide a variable in the Result Set panel:

Start Date Result

This works because the query returns a column called lastsample. That column name is entered as the Result Name, and the value from that column is mapped to the SSIS variable User::LastIOSample in the package. When this runs, the query returns a value for lastsample, and then the package stores that value in User::LastIOSample.

Now repeat exactly the same setup, with all the same settings, for the second Execute SQL Task. What we want is a query to the central database before the data flow runs, and an identical query after it. The effect is that before the data flow, the variable holds the max( AsOfDate ) of the data already in the central database, and after the new rows are added it advances to the max( AsOfDate ) of the newly inserted data.

Step 8: Purge the Source DB

This is the last step we’ll tackle in this installment of the blog series; next time, we will wrap this process in a loop such that it can collect from a whole list of SQL Instances.

To complete the work of moving the latest rows from the source system to the central database, we need to delete the rows that we just copied over from the source database. The logic for this could be “Delete the rows on the source system that are older than the last date in the central database for that SQL instance.” The date variable will provide that boundary.

Add another Execute SQL Task at the end of our chain:

Add Purge

I’m somewhat conservative, and this system will run basically unattended, so what I would actually like to have is a one-day sliding window for this data, in case there is ever an issue with the data transfer. So the logic I have is almost the same as that outlined above, but in fact it says “Delete the rows on the source system where the AsOfDate is more than one day older than the latest row in the central database for the SQL instance.” This just keeps 24 hours of data on the source system.

In the new Execute SQL Task, set the connection to use the source database connection manager, which is where we are purging. Then set the SQL Statement to this DELETE, with a date parameter ‘?’:

Purge SQL Statement
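A sketch of that DELETE, with DATEADD providing the one-day sliding window described above (the '?' gets mapped to the LastIOSample variable in a moment):

DELETE FROM dbo.TrackFileIO
-- Parameter0 (the '?') will be mapped to User::LastIOSample
WHERE AsOfDate < DATEADD(day, -1, ?)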

Again, use the Parameter Mapping panel, but in this case set the parameter to use the LastIOSample variable. This will pass in the value of LastIOSample, which in the previous step was set to the last datetime from the data in the central database:

Purge Parameter

Important: Note the type of the parameter. Counter-intuitively, it should be DATE and not DateTime, due to conflicts in naming between all the different data types in SSIS and various source systems.

Step 9: Try Against One Source Server

That’s about it in terms of collecting data from one instance. If this works correctly, the newest rows should “move,” over time, from the source system to the central database, leaving about 24 hours of overlapping data as a safety net.

Give it a try, and get this much working smoothly. In the next installments of the series, we’ll add the looping to collect from multiple servers, and also the logic to roll up the results into a summary table.

2 Comments

  1. Mike Morin, Apr 4, 2012, 6:32 PM

     Very nice pedagogical approach. I look forward to the remainder of the series!

  2. Merrill Aldrich, May 5, 2012, 12:05 AM

     Oh, thanks Mike! Great to see you over here!