
Addressing User Comments About Performance Issues

The experience of answering e-mail for SQL Server Pro on DevX has made me appreciate the limits of my knowledge and experience. Some questions are funny because they are so broad: “I need to design a Web site using ADO and SQL Server. Please post the code.” Others send me scurrying for my tried-and-true texts, such as Chris Date’s An Introduction to Database Systems (what an understatement), Groff and Weinberg’s Using SQL, or Fleming and von Halle’s Handbook of Relational Database Design. Still others are not so easily resolved, because I plain don’t know the best way to deliver audio from the database, or why DTS behaves the way it does in a particular instance. However, I felt that this month’s 10-Minute Solution would have such wide appeal to database administrators that I would be remiss if I didn’t write about it, so here we go…

How do you address user comments such as “Performance is slow” or “This takes a long time”? The challenge stems from the subjective language users employ to describe their problems. You could answer the question with a question, but replying “Slow compared to what?” or “What is a short time for this job?” will only make you look like a wise guy and cast you, the database administrator, in an unfavorable light in the questioner’s eyes.

Here’s how to be helpful without being defensive: be prepared in advance for these questions with data that describes the response of your system. I hesitate to call this “metadata” (because, strictly speaking, it is not), but there is data you can collect about performance: query response times, resource consumption, the number of transactions in a unit of time. Let’s call these “performance statistics” for lack of a better term. (By the way, check your license agreements; your database vendor may prohibit you from publicizing or discussing your performance test results in public!)

If you maintain a history of your readings, you will build up statistics that document system response against fixed, measurable criteria. Better still, these statistics come from your own hardware, your own configuration, and your own real-life production queries.

Let’s start off with what I think is the most direct way to get information from your installation: executing DBCC PERFMON. This generates a report that highlights three key areas of your server installation:

  • Input and output statistics
  • Least recently used buffer information
  • Network statistics

This snapshot is extremely useful because it packs a wealth of information into one report, and we can use its output as a template to hack our own “diary” out of the resulting database statistics.
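Producing the report takes a single statement from ISQL or ISQL-w:

DBCC PERFMON
GO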

This technique can be extended to other jobs: monitoring output from SQL Trace files or the ODBC trace dialog, working from the NT log, or working from the Performance Monitor. The concepts behind it are pretty simple; the approach just takes a little getting used to:

  1. Find or create a text log file that has labels and values.
  2. Create a table that mimics the text file’s structure.
  3. BCP the text file into the table.
  4. Examine the data.
  5. Graph the results.

That’s the procedure; let’s see how it works with our first foray into trapping statistics, the DBCC PERFMON output.

Step 1: Get the statistics.
According to our work plan, the first thing to do is find or create the text file that contains the statistics we want to trap. As I said above, we’re going to use the one-shot report generated from DBCC PERFMON, so fire up ISQL. (By the way, this is a SQL Server 6.5 plan of attack, because 7.0 already does a lot of this for you, but I’m sure you’ll find a use for the general technique regardless of the version you use.)

The first three rows of my output look like this:

Statistic                        Value
---------------------------------------
Log Flush Requests               40.0
Log Logical Page IO              19.0
Log Physical IO                  25.0

After you examine your output, be sure to save the query result to a file. I called mine DBCC.TXT. Watch out for saving it as the default, DBCC.RPT, because Crystal Reports has usurped .RPT as its report file extension. I’m not certain whether .RPT files have anything special going on inside of them (special characters and the like), but name your text file with the .TXT extension just to be on the safe side. This report is grist for our future statistics mill.
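If you’d rather not save the result by hand, you can capture the report straight to a file from the command line. Here’s a sketch using the same server and login that appear in my BCP example later on; adjust them for your site:

isql -SSPI23B -Usa -P -Q"DBCC PERFMON" -oDBCC.TXT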

Step 2: Create a table to store the stats in.
Once you’ve executed the command and generated the report, it’s time to trap the results in a table. (What self-respecting database person would do otherwise?) To get the right data types and lengths for the table, I counted dashes to figure out how big to make the columns and wrote the following statement to create a “bucket” for my DBCC PERFMON output:

CREATE TABLE MyStats (STATISTIC VARCHAR(32), VALUE NUMERIC(10,6))

Step 3: Fill the table with data.
BCP is a black art. It is built with plenty of flexibility, but it is an old command-line utility, and working with it is much like working with a grumpy old man: mind the case of your switches on the command line and pay attention to the structure of your input file. To get BCP to work right, you may find yourself prepping your input files. For example, I’ve found it best to make sure the text file is tab-delimited, with a CHAR(13) at the end of each line. You can check this in a word processor or text editor that has a “show codes” feature. Confirming that the file is formatted this way will save you a lot of grief.

Once you are certain of the above, confirm that the structure of the text file matches the table definition. The only other thing to remember is to use the -c switch on your BCP line so that BCP imports your text file in character mode. (There is such a raft of issues with BCP that I cannot explore them today; they’re a series all by themselves.)

The statement you need to execute is something along the lines of what I show below. (I’m hedging because you must make adjustments to the command line for your server name, user name, and password.) From a command box prompt, enter this:

BCP pubs.dbo.mystats in dbcc.txt  -c -SSPI23B -Usa -P 

This will load the table with the contents of the text file. Alternatively, if you prefer working from ISQL-w and have the rights to do so, you can run this:

EXEC master..xp_cmdshell "BCP pubs.dbo.mystats in dbcc.txt -c -SSPI23B -Usa -P"

That will do nicely as well.

Step 4: Examine the data.
You are now on the threshold of being able to read the entrails. But as you examine your work, you may see a cloud brewing on the horizon: there is no date or time attached to your readings. Some log files, like the NT application log or the SQL Trace file, give you this information as part of their data, but ours did not.

To ease the data import, we mimicked the structure of the text file exactly. Now, though, we’d like a date and time attached to each reading, so our findings have a place in history and time becomes part of our system analysis. Rather than edit the original data file to include the date, we can use the built-in GETDATE() function to populate a date column in a new table, one that looks almost like MyStats except for an extra column holding the date and time. Remember that column order counts as we finesse the population of MyStats2 with INSERT SELECT, like this:

CREATE TABLE MyStats2
(
    STATISTIC CHAR(32),
    VALUE     NUMERIC(10,6),
    STATDATE  DATETIME NULL DEFAULT GETDATE()
)

Now that we’ve created our new bucket, we can fill it up with the contents of our DBCC PERFMON report:

INSERT MyStats2 SELECT *, GETDATE() FROM MyStats 

If your SELECT names just the two source columns, you’ll get an error that the supplied values don’t match the target table definition; with the star, though, you can append GETDATE() to supply the missing third column. This loads the table with each statistic’s name, its value, and the date and time of the load. Granted, that’s not exactly the moment you ran the statistics, but if this were bound into a stored procedure, I can’t see the latency invalidating the results, unless we’re doing lunar module docking or maybe heart bypass surgery.
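For the record, there’s another way to get the same effect: name the target columns on the INSERT itself, and the DEFAULT constraint on STATDATE fires to stamp the date for you. A quick sketch:

INSERT MyStats2 (STATISTIC, VALUE)
SELECT STATISTIC, VALUE
FROM MyStats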

Now you can examine your work with:

SELECT * FROM MyStats2
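After a few loads have accumulated, the history starts to pay off. For instance, this pulls one reading’s trend over time (I’m using a statistic name from my sample output; substitute whichever reading interests you):

SELECT STATDATE, VALUE
FROM MyStats2
WHERE STATISTIC = 'Log Flush Requests'
ORDER BY STATDATE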

To refresh MyStats2 with new data, you go through the procedure again (without recreating MyStats2, of course):

  1. Create your measurements by running DBCC PERFMON.
  2. Save its output to a plain text file.
  3. BCP that text into a staging table. (In our example, that’s MyStats.)
  4. INSERT SELECT the contents of the staging table into your statistics table. (In our example, that’s MyStats2.)

This lets you trap the date with your observations.
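Once you’ve saved a fresh DBCC PERFMON report over DBCC.TXT, steps 3 and 4 can be bound into a single call. What follows is only a sketch: the server name, login, and file name are the ones from my example, you need rights to xp_cmdshell, and the procedure name is my own invention.

-- Reload the staging table and append a date-stamped batch to the history.
CREATE PROCEDURE LoadPerfStats
AS
    -- MyStats is a staging area; it should hold only the latest report.
    TRUNCATE TABLE MyStats

    -- Pull the saved report into the staging table in character mode.
    EXEC master..xp_cmdshell "BCP pubs.dbo.mystats in dbcc.txt -c -SSPI23B -Usa -P"

    -- Stamp the batch with the load date and add it to the history.
    INSERT MyStats2
    SELECT *, GETDATE() FROM MyStats
GO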

Step 5: Graph the data.
Now that you know how to accumulate log data, it won’t be long before you ask, “Why has the cache hit ratio been falling?” or “How has the ratio of logical to physical reads changed since we bought more RAM for you?” While you can answer these questions from reports or by inspecting the data in ISQL-w, it is much more compelling to take your data out to Excel and graph it. A graph really animates your measurements and makes it easier to follow trends over time.
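As a taste of what the graphing step can feed on, here is a sketch of a query that lines up logical and physical log I/O side by side for each collection date, ready to paste into a spreadsheet. (Again, the statistic names come from my sample output; substitute your own.)

SELECT STATDATE,
       SUM(CASE STATISTIC WHEN 'Log Logical Page IO' THEN VALUE ELSE 0 END) AS LogicalIO,
       SUM(CASE STATISTIC WHEN 'Log Physical IO' THEN VALUE ELSE 0 END) AS PhysicalIO
FROM MyStats2
GROUP BY STATDATE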

Well, that’s this month’s 10-Minute Solution. I think you’ll agree that it is the tip of the iceberg.

Next month I’ll look at other ways to apply this technique and show you what you can learn about your system with it.
