
Diagnose Server Performance Problems with SQL Server’s SQL Profiler

Databases are complicated because they have so many interdependent elements. The components’ interdependence is most evident when trying to improve server performance. When things slow down and performance seems to flag, organizations tend to look at the server as the source of the problem and recommend hardware enhancements to improve things. The list of solutions includes multiprocessor machines, buying more RAM, adding more RAID spindles, or adding a network segment to move data off the server.

Throwing more processors, RAM, spindles, and NICs at the server is a seductive way to address a performance problem. Hardware enhancements seem to be the most direct and timely solution, especially for production systems that can’t tolerate downtime. I believe an assessment of the database’s design can yield a comparable advantage. Granted, the man-hours spent on design are more costly than hardware, but if you can spare the design time and use what’s between your ears, you can often fix the problem and deliver the desired behavior.

Common Performance Problems
The most common performance problems are associated with the time it takes to accomplish insert, update, and query tasks. Updates and inserts are probably the most irritating to users because they are more closely tied to the user’s time. Querying involves less user interaction, and users expect less from a query than from the save of an insert or update. More complex queries are usually shunted off to a reporting server, where they can run with minimal interference to the day-to-day operation of the database. These three scenarios (query optimization, insert tuning, and update tuning) account for the lion’s share of performance issues.

SQL Server offers two great tools to help you reach your database optimization goals: System Monitor and SQL Profiler. In this 10-Minute Solution, I present a few preparatory steps for using Profiler to monitor performance problems. I discuss some techniques, tools, and commands that reveal information about server behavior, concentrating on reducing the time it takes to get results.



How do I handle performance problems with SQL Server’s out-of-the-box tools?



Use SQL Profiler to help you reach your database optimization goals.

The first condition for optimization is to establish a baseline definition of performance so you can measure the server under a typical load. The second condition is an understanding of index design, because the query optimizer’s behavior depends on the structure of the available indexes and the way in which you send search arguments to it.

You can create your baseline by starting a trace from Profiler. Trace templates are a good way to begin the investigation of your server. These templates have the appropriate events, data columns, and filters already set for you, so you don’t really need to know much about them when you start out. You need only know what your goal is and then select the trace template that represents it. The templates are named for some very common tasks. Figure 1 shows the list from the file selection box in Profiler.

Chances are that most of your needs will be met by the preconfigured traces provided in these templates. When you open one, you see a four-page dialog that gives you a little more control over the details of the trace. Of particular importance are the server you want to trace, the destination of the trace result, the format of the trace result, and the limits you set on the trace result.

Figure 1 | The list from the file selection box in Profiler

Select the Server You Want to Trace
From New on the File menu, select the button with the ellipsis to get a list of SQL Servers on your network. You’ll need to be a member of Administrators to run a trace on a remote server. While you’re at it, take advantage of the feature in Profiler that limits the impact of a trace on the server being traced: follow the prompts to designate the server to be traced, but then redirect the trace output to a different machine so you relieve the target server of minding the trace output. To build your repository of trace data, create a separate database on your own workstation so you can collect the statistics in a centralized place and develop your history away from the server(s) you monitor. This helps make you a good guest on the server being traced.
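If you want to set that repository up ahead of time, a minimal sketch follows; the database name is purely illustrative, not anything Profiler requires:

-- A hypothetical repository database on your own workstation;
-- Profiler creates the destination tables for you when you point
-- a trace at this database.
CREATE DATABASE TraceRepository
GO
-- Then direct each trace to a table here, for example
-- TraceRepository.dbo.[test trace 1], so the history accumulates
-- away from the servers you monitor.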

Along with redirecting the trace to a different machine, storing the resulting traces in a table is also helpful. Once they are there, you’ll have the advantage of running select statements against the trace table so you can summarize them, something you can’t do if you save the trace results to a file. Designating a table instead of a file for the collection also enables you to summarize the results to keep track of your work. For example, sometimes I will create more than one table on the destination machine so I can make adjustments to the database object I’m watching on the source machine. This lets me incrementally measure the effects of the changes I’ve made and document the improvement (I hope) from those changes. A little code fragment collects the various results for me, so I can do the aggregations against the summarized traceresult table:

USE dev
SELECT * INTO traceresult FROM
(SELECT * FROM [test trace 1] WHERE reads > 200
 UNION ALL
 SELECT * FROM [test trace 2] WHERE reads > 200
 UNION ALL
 SELECT * FROM [test trace 3] WHERE reads > 200
 UNION ALL
 SELECT * FROM [test trace 4] WHERE reads > 200) X

The Union All is required because a plain Union performs an implicit Select Distinct, which is not allowed on the text column the test traces collect; Union All neatly sidesteps that issue.
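Once the combined traceresult table exists, you can run the aggregations. Here’s a rough sketch, assuming the traces captured the standard TextData, Reads, and Duration columns; because TextData is a text column, it’s converted to varchar before grouping:

-- Summarize the combined trace results by statement text
SELECT CONVERT(varchar(4000), TextData) AS Statement,
       COUNT(*)      AS Executions,
       SUM(Reads)    AS TotalReads,
       AVG(Duration) AS AvgDuration
FROM traceresult
GROUP BY CONVERT(varchar(4000), TextData)
ORDER BY TotalReads DESC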

Figure 2 | The preset template selections in Profiler

You probably won’t want to change the events and filters until you get a little experience with the tool, because you could easily eliminate something you need out of ignorance. (I’ve done it myself.) That’s why it’s handy to work with your own copy of the templates, so that you avoid hosing the selections that were preset there for you (see Figure 2). In the event that you do hose your template selections, you’ll need to skulk around the network to find another SQL Server installation and copy the trace templates back over to your machine; those are the files with the .tdf extension. After a few tries, you will be creating your own templates tailored to your own requirements, but while you’re learning, it’s best to be cautious about changing the stock ones. Limit yourself to creating copies of them until you are confident in your selections.

Using the Filters

Figure 3 | On the Filter page, set the number of reads and the time in milliseconds that must pass before the event is logged

In addition to limiting events and data columns, Profiler helps you get more pointed information with the power to filter the events you want to trace and to refine your selection further by setting thresholds on those events. The trace templates are sometimes a little too rich in their selection of events, and after a few tries you may want to reduce the information that you log. You can lighten the selections in your copy of a trace template by limiting the events to only the SQL statement or batch events and including the number of reads and the duration they take to complete. Then, on the Filter page, you can enter the number of reads you are interested in and the time in milliseconds you want to pass before you log the event (see Figure 3). Additionally, you can exclude system events to screen out a ton of extraneous messages from the trace, letting you focus only on the duration of long-running SQL statements.
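If you’ve saved a trace to a file rather than a table, you can still apply these thresholds after the fact with the fn_trace_gettable function. A sketch, assuming a hypothetical file path:

-- Read a saved .trc file as a rowset and filter it like any table
SELECT TextData, Duration, Reads
FROM ::fn_trace_gettable('C:\Traces\baseline.trc', default)
WHERE Reads > 200
  AND Duration > 1000  -- milliseconds on SQL Server 2000; check your version's units
ORDER BY Duration DESC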

Figure 4 | The trace output with the filters not set

With the filters not set, the trace output can be quite verbose (see Figure 4), so it’s really easy to get confused by the blizzard of information Profiler produces. You’ll really need to set its limits.

As a last consideration, there are some automation alternatives for using Profiler. If you’re a dyed-in-the-wool Books Online aficionado, you know you have a bunch of stored procedures you can invoke to run traces (sp_trace_create, sp_trace_setevent, and sp_trace_setfilter to get them set up, and sp_trace_setstatus to start and stop them). But if you are after the creature comforts, you can schedule your trace selections as jobs by having Profiler create the scripts for them. This is infinitely more convenient, not to mention less frustrating, because the system stored procedures that control traces are very particular about the way they accept parameters. Then again, if you weren’t very particular, you wouldn’t be using the stored procedures in the first place!
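For the curious, a server-side trace built with those procedures might look roughly like this. Treat it as a sketch: the file path is hypothetical, and the event ID (12 for SQL:BatchCompleted) and column IDs (1 for TextData, 13 for Duration, 16 for Reads) should be verified against the reference tables in Books Online for your version.

-- These procedures are strict about parameter types, hence the
-- explicitly typed variables.
DECLARE @traceid int, @on bit, @reads bigint
SET @on = 1
SET @reads = 200
-- Create the trace; the .trc extension is appended automatically
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\baseline'
EXEC sp_trace_setevent @traceid, 12, 1,  @on   -- BatchCompleted: TextData
EXEC sp_trace_setevent @traceid, 12, 13, @on   -- BatchCompleted: Duration
EXEC sp_trace_setevent @traceid, 12, 16, @on   -- BatchCompleted: Reads
-- Filter: Reads >= 200 (the @value type must match the column's type)
EXEC sp_trace_setfilter @traceid, 16, 0, 4, @reads
EXEC sp_trace_setstatus @traceid, 1   -- start the trace
-- Later: EXEC sp_trace_setstatus @traceid, 0 to stop it,
--        then EXEC sp_trace_setstatus @traceid, 2 to close it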

Using the scripting facility to create scheduled jobs is an enormously helpful feature. Once you know what you want, you can have the jobs engine start and stop traces for you. The only change you have to make to the script file is the name of the target directory where the output will be saved. Note that you can pepper your trace sessions with the panoply of Show Plan variants, so that you’re sure to get the information you need to determine how things are working (or not working).
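As a quick illustration of those Show Plan variants, here’s a sketch you can run in a query window while the trace is collecting; the query against the pubs sample database is just an example:

-- Estimated plan only; the statement is compiled but not executed
SET SHOWPLAN_TEXT ON
GO
SELECT au_lname, au_fname FROM authors WHERE au_lname = 'Ringer'
GO
SET SHOWPLAN_TEXT OFF
GO
-- Actual reads and timings for an executed statement
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT au_lname, au_fname FROM authors WHERE au_lname = 'Ringer'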
