
    T-SQL Tuesday #005: On Technical Reporting


    Reports. They’re supposed to look nice. They’re supposed to be a method by which people can get vital information into their heads.

    And that’s obvious, right? So obvious that you’re undoubtedly getting ready to close this tab and go find something better to do with your life. “Why is Adam wasting my time with this garbage?” Because apparently, it’s not obvious.

In the world of reporting we have a number of different types of reports: business reports, status reports, analytical reports, dashboards, TPS reports… The list goes on and on. But they’re all reports. And generally speaking, someone did a good job of making them look nice and paid some attention to timing (more on that in a bit). But consider purely technical reports: server up-time reports, activity audits, server resource reports, and the like. Oftentimes I see this kind of information represented in lists or blocky charts: formats we would never send to a customer, but that are considered fine for internal consumers. And worse, no one has considered basic issues like the frequency with which these reports should be sent.

    Are internal reports really that much less important than what is being sent outside company walls? What kind of message are we sending by failing to take the time to format our output? And even worse, what happens when we fail to consider timing and frequency?

The answer to the first question is that in many cases, internal reports are even more important than those highly polished pieces distributed by the sales and marketing departments. These flashy documents are generally designed to generate a call back from a customer or potential customer so that a sale can be pushed. Internal reports, on the other hand, are all about keeping the doors open, the business humming, the wheels spinning, and of course the metaphors flowing. They must contain real, actionable information. Stuff you, as an employee, actually care enough about to bother paying attention to.

    To address the second and third questions, briefly think back on your career. Do any of the following scenarios sound familiar?

    • Every day an e-mail arrives containing a list of jobs that ran yesterday. Some succeeded, some failed. There are some that fail every day, and they’re always in the list. Error messages are listed in-line.
    • A report on server activity and up-time is published on a monthly basis for internal utilization. It is a simple spreadsheet containing daily metrics for the past three months.
    • The company’s internal Web site has a dashboard with a list of applications, with a red light next to those that need attention. It’s unclear what the red light actually means.

    Each of these scenarios is real, taken from my own experience. And each highlights a common problem with internal reporting.

In the first case, there are several problems. The first is frequency: the report is being sent too often. People get used to seeing the e-mail in their inbox and learn to ignore it in favor of more pressing issues. The second problem is too much information. People don’t want to see information that’s not actionable. Showing a list of jobs that did run successfully may have seemed like a more complete solution, but it’s simply not interesting and makes people even more likely to start ignoring the report. Exacerbating the situation is the presence of jobs that fail on a daily basis, and have been failing for years. This makes a real failure all the more difficult to spot.

    A slightly more subtle problem with this report is that it is too technical. It may seem like a good idea to put error messages right there in the report to save time and energy, but it creates more problems than it solves. In addition to making the report more difficult for project managers and other nontechnical consumers to digest, it also wastes time, because invariably these people will want a full breakdown on the whys and hows of the error. While I love teaching, there is a time and a place for everything, and in the middle of trying to fix a production error the last thing that I want to do is explain it in gruesome detail because a report included a message it probably should not have.
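The fixes for this first scenario can be sketched in a few lines of logic. Here is a minimal, hypothetical sketch in Python (the job names, the data, and the chronic-failure list are all invented for illustration, not taken from any real system): filter the day's job outcomes down to new, actionable failures only, dropping successes and suppressing jobs that are already known to fail every day.

```python
# Sketch: report only actionable failures from yesterday's job runs.
# All job names and data below are hypothetical examples.

# Jobs that have been failing daily for years. A real failure hiding
# among them would be invisible in the report, so they are tracked
# (or fixed!) separately rather than re-reported every day.
CHRONIC_FAILURES = {"LegacyETL_Nightly", "OldPartnerFeed"}

def actionable_failures(job_runs):
    """Return only the failures someone should actually act on.

    job_runs: iterable of (job_name, succeeded) tuples.
    """
    return [
        name
        for name, succeeded in job_runs
        if not succeeded and name not in CHRONIC_FAILURES
    ]

runs = [
    ("SalesWarehouse_Load", True),
    ("LegacyETL_Nightly", False),   # fails every day; not news
    ("CustomerExport", False),      # new failure; this IS news
]

failures = actionable_failures(runs)

# Only send the e-mail when there is something to act on; an empty
# report trains people to ignore the inbox entry.
if failures:
    print("Jobs needing attention:", ", ".join(failures))
```

The key design choice is that silence is the default: no e-mail at all when there is nothing actionable, so that the report's arrival itself carries information.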

In the second case–the server activity spreadsheet–the problem is simple: too much information, presented in a manner that’s not conducive to quick digestion. No one can mentally make sense of 30 different metrics multiplied by 90 days’ worth of activity. Information overload means that the first time I look at the report I might take the time to attempt to digest it. Maybe create a chart. But the second time I won’t even open the spreadsheet.

In the third case–the dashboard–its creator actually took some time to make the thing look pretty nice. But there were still problems: frequency and actionability. Setting the dashboard as the default Web page in each employee’s browser seemed like a great idea. Everyone will look at it all day when they go to check for Twitter updates. Right? Wrong. People quickly learn to hit the stop button or simply ignore what’s on the screen. After all, what do those red lights actually mean? Is there something we can do about this situation or not? And how severe is the issue, really?

Fixing the problems is not that difficult. Internal reports should not be dashed off quickly and without forethought. The information is important enough to warrant creation of a report, and if it’s not, there should be no report. Employees should be alerted and reports pushed when there are problems or issues that require attention, but never when there is no action to be taken. Daily pushed reports and virtually all pulled reports will simply be ignored after the first few times they’re used. If you want to get someone’s attention, send them something they haven’t seen before, or haven’t seen often. Finally, make the reports look nice. Spend a few hours with a graphing and charting package. A set of ten thousand data points is nearly impossible to digest in numeric form. Put it into a chart and anyone can understand the overall gist in twenty seconds.
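The last point about digesting thousands of raw numbers can be illustrated with a small sketch (the metric names and values below are made up): collapsing a pile of daily readings into one min/avg/max summary row per metric turns a wall of numbers into something a person can scan, or feed straight into a charting tool.

```python
# Sketch: collapse many days of raw readings into one summary row per
# metric (min / avg / max). Metric names and values are invented
# purely for illustration.
from statistics import mean

daily_metrics = {
    "cpu_pct":      [41, 38, 95, 44, 40],
    "disk_free_gb": [120, 118, 117, 115, 114],
}

summary = {
    name: {"min": min(vals), "avg": round(mean(vals), 1), "max": max(vals)}
    for name, vals in daily_metrics.items()
}

for name, s in summary.items():
    print(f"{name}: min={s['min']} avg={s['avg']} max={s['max']}")
```

Even this crude summary makes the one interesting fact (a CPU spike to 95%) jump out, where the raw daily list buried it.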

    Effective technical reporting is a cornerstone of a well-run IT organization and I hope that this essay will help some teams establish appropriate reporting guidelines. Thank you for reading, and thank you to Aaron Nelson for hosting this month’s T-SQL Tuesday, of which this post is a part.

    Adam Machanic helps companies get the most out of their SQL Server databases. He creates solid architectural foundations for high performance databases and is author of the award-winning SQL Server monitoring stored procedure, sp_WhoIsActive. Adam has contributed to numerous books on SQL Server development. A long-time Microsoft MVP for SQL Server, he speaks and trains at IT conferences across North America and Europe.


1. Good post!! I ran into some of these issues early in my career when I was a database developer. One of the struggles we had was trying to find a balance between what management wanted alerts for and what was manageable. It took creating meaningful, useful reports for us to get away from mega-alerting via SQL Server.

2. Good post! … but really wanted to say THANKS for the #tsql2sday event, it is a GREAT idea to force some "focus" on a topic for a day, as well as give folks that blog less than others a reason to write. Thanks for putting it together!
