
DB2 Explains Itself: A Roadmap to Faster Query Runtime


Have you ever wondered why some of your queries take so long to run? Or how DB2 interprets your requests?

As with all high-level programming languages, SQL must be translated into a set of smaller operations, which can be performed in turn. Because the efficiency of each operation can vary greatly with the nature of the data, considerable attention should be paid to selecting the best order of operations.

You can compare this with shopping. There may be many paths from your home to the shop you need to visit. You might want to park as close to the shop as you can, or perhaps park further away to avoid parking station costs and queuing to leave. There may be a smaller shop closer to home that has what you want. You may want to avoid a busy road. The best shopping path isn’t always obvious, and you could waste hours through poor planning.

To shorten a query’s runtime, you need to locate any weaknesses in the operation planning. How? IBM’s online documentation boasts: “DB2 provides the most comprehensive explain facility in the industry with detailed optimizer information on the access plan chosen for an explained SQL statement.” The explain facility outlines how the database will go about your work. This allows you to identify and correct any inefficiencies in your query planning.



How can I make my queries faster?



Use DB2’s EXPLAIN facility to view and evaluate how it plans to access your data.

Background: Plans and Costs
DB2 builds plans. A plan is basically the set of directions that will be followed to reach the data (or the shops). To save time, some items on the plan can be performed in parallel (if you have a spouse, they might buy meat while you buy fruit).

Static plans are built once and used many times; DB2 stores them when you BIND a collection of static SQL. It’s like establishing a favorite path for your shopping: you don’t have to think about it every time, because you have already decided on a good plan. Plans for dynamic SQL, however, must be built each time. Every plan can be evaluated for its overall cost, which takes into account the nature of the data and DB2’s understanding of the system on which it’s running. Critically important in this evaluation are the runtime statistics: the size of your tables, the cardinality of various columns, and the clustering of the indexes.
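
As a rough sketch of the difference, consider how each kind of plan comes into being (the program and file names below are hypothetical). A dynamic statement is optimized when it is prepared at run time:

   db2 "SELECT lastname FROM employee WHERE workdept = 'D11'"

Static SQL embedded in a program is optimized once, when the package is bound to the database; its plans are stored there until the next BIND:

   db2 prep myapp.sqc bindfile
   db2 bind myapp.bnd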

Figure 1. Color Coding: Here is a sample tree from “Visual Explain.”
Figure 2. Drilling Down: Here is a sample of the details of a node from “Visual Explain.”


Show Me the Plan
There are several tools centered around explaining DB2’s plans:

  • Visual Explain is a versatile graphical tool available in the Control Center. It is well suited to examining single queries, whether static or dynamic. Each operation shows as a color-coded node in a tree structure. Clicking on a node allows you to view the arguments, statistics, and cost estimate of the node. You can also jump directly to DB2’s documentation on that node type.
  • Visual Explain can also be run from the command line as db2vexp.exe, though it then lacks static SQL ability.
  • db2expln is the “bare bones” tool, giving text output from static SQL packages only. This can optionally include a character-mode graph.
  • dynexpln gives a text-mode analysis of a dynamic SQL query. It actually packages the dynamic query and calls db2expln to do the work. Listing 1 shows an example of the text output from dynexpln, including a graph.
  • db2exfmt is a formatter for previously stored explain data.
All these tools use a set of data known as the “explain tables.” Visual Explain is capable of creating these tables on demand. With the other tools, you’ll need to run a script such as in Listing 2:



C:\PROGRA~1\SQLLIB\bin>cd ..\misc

C:\PROGRA~1\SQLLIB\misc>db2 connect to sample

   Database Connection Information

 Database server        = DB2/NT 7.2.2
 SQL authorization ID   = GRNASH
 Local database alias   = SAMPLE

C:\PROGRA~1\SQLLIB\misc>db2 -tf explain.ddl

******* IMPORTANT **********
USAGE: db2 -tf EXPLAIN.DDL
******* IMPORTANT **********

DB20000I  The UPDATE COMMAND OPTIONS command completed successfully.
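
Once the explain tables exist, one way to capture and format a plan without the GUI is to set the CURRENT EXPLAIN MODE special register, run the statement, and then call db2exfmt. The following is only a minimal sketch, assuming the SAMPLE database and a made-up query; while the register is set to EXPLAIN, statements are explained and captured but not actually executed, and db2exfmt’s option letters can differ slightly between releases:

   db2 set current explain mode explain
   db2 "SELECT empno, lastname FROM employee WHERE workdept = 'D11'"
   db2 set current explain mode no
   db2exfmt -d sample -1 -o plan.txt

The -1 option asks db2exfmt to use its defaults (the most recently explained statement), and the formatted plan lands in plan.txt.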

Now you can assess the results.

The Plan Explained

Figure 3. Examining the Nodes: This larger graph shows some nodes feeding multiple higher nodes (using temporary tables).

Whichever explain tool you use, you will get an estimate of the cost of the query in “timerons.” A timeron is an abstract unit of measure: it doesn’t directly equate to any actual elapsed time, but it gives a rough relative estimate of the resources (cost) the database manager needs to execute an access plan. You can try different variations of a query, different optimization levels, or other changes, and look for a lower cost as an indication of a faster result. Working through the entire plan from the bottom up may help you find specific inefficiencies. The explain lists and graphs show how the required data is located, collated, and prepared for return. Note that the graph isn’t always a tree: DB2 takes sensible shortcuts if that saves performing the same operation twice. At each node, you can examine the arguments to the operator, the estimated cost of the node, and the cumulative cost of all operations up to that point.
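
One simple experiment is to re-explain the same statement under a different optimization class and compare the total cost. Here is a sketch, reusing the explain-mode approach shown earlier; class 5 is the usual default, lower classes such as 1 or 3 spend less effort optimizing, and 7 or 9 consider more alternatives:

   db2 set current query optimization 3
   db2 set current explain mode explain
   db2 "SELECT empno, lastname FROM employee WHERE workdept = 'D11'"
   db2 set current explain mode no
   db2exfmt -d sample -1 -o plan_opt3.txt

Comparing the total cost in timerons between the two reports shows whether the optimizer expects the change to pay off.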

Here are the main node types:

  • UPDATE, DELETE, INSERT: perform the operations named.
  • FETCH: retrieves data from a table, given a specific pointer to the record.
  • TBSCAN: A table scan is an examination of an entire table. It’s a bit like visiting every aisle in a supermarket and examining every item. It’s necessary at times, but can be most inefficient when you want only a small portion of a large table (or shop).
  • IXSCAN, EISCAN, RIDSCN: Index scanning. Indexes are pointers to rows, sorted for fast access. It’s like having a store directory, so you can go directly to the products you want. EISCAN operates with multiple ranges of values. RIDSCN retrieves the row pointers (Row IDs) from a set of index records. (The sketch after this list contrasts an index scan with a table scan.)
  • FILTER: reduces a set of records to only those required.
  • UNIQUE: removes duplicates.
  • Joins (NLJOIN, MSJOIN, HSJOIN, UNION, IXAND): join two record sets. The method chosen depends on the query and on the relationships between the tables involved. IXAND combines the results of index scans.
  • GRPBY: Group By. Groups rows, and performs column function calculations on the groups.
  • TEMP: stores a set of results for re-use elsewhere in the query.
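
For example, the two queries below, run against the SAMPLE database’s EMPLOYEE table, will typically produce different operators. This is only a sketch; the operators actually chosen depend on your indexes and statistics:

   -- EMPNO is the primary key, so an IXSCAN followed by a FETCH is likely.
   SELECT empno, lastname, salary
   FROM employee
   WHERE empno = '000010';

   -- There is normally no index on COMM, so a TBSCAN of the whole table is likely.
   SELECT empno, lastname, comm
   FROM employee
   WHERE comm > 2000;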

That covers the basic operators. There’s a lot more information than just the node types and estimated cost. You can find it in IBM’s “Description of db2expln and dynexpln Output.”

Reducing the Cost
Now that you can read the price tags, it’s time to reduce the cost. Here are some guidelines:

1.) Better-informed decisions. DB2’s optimizer must be well informed to make the best decisions.

  • Use the RUNSTATS utility to update the statistics. If your data has grown, or you’ve added or removed indexes since the last RUNSTATS, DB2’s estimates will be inaccurate. (See the sketch after this list.)
  • If your tables aren’t yet fully populated, the runtime statistics will be misleading. Under such conditions you’ll often find table scans in your plans, in spite of sensible indexes. You can tell the optimizer to anticipate extra data by marking certain tables as VOLATILE (use the Control Center, or the ALTER TABLE command).
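
A minimal sketch of both steps from the DB2 command window, using a hypothetical table GRNASH.ORDERS:

   db2 runstats on table grnash.orders with distribution and detailed indexes all
   db2 alter table grnash.orders volatile

The first command refreshes the table, distribution, and index statistics; the second tells the optimizer that the table’s size can vary widely at run time, so it should favor index access rather than trusting the current (possibly near-empty) cardinality.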

2.) Don’t buy junk. It’s a simple rule – don’t buy what you don’t need. This includes:

  • Don’t use “SELECT *” when you don’t need all the columns.
  • Use a WHERE clause when possible. A common beginner’s mistake is to use HAVING instead of WHERE. HAVING is like analyzing every item in a supermarket, then discarding those you don’t want. Sometimes that’s what you need, but it’s quicker if you can say “I only want to look at the breakfast cereals.” (See the sketch after this list.)
  • If you only want the first row(s) of a large result set, use FETCH FIRST (see my previous solution, “Mastering Top-N and OLAP Queries in DB2”).
  • Look carefully at your joins. The sooner (in the plan) you can narrow down the records you want, the less work it will be. Compare Listing 3 and Listing 4. They represent the same query, but by re-ordering the joins, the cost has been halved. (The reason for the cost difference relates to how many rows are in each table).
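
The sketch below illustrates two of these points against the SAMPLE database’s EMPLOYEE table: filter rows early with WHERE rather than discarding groups later with HAVING, and ask only for the rows you need with FETCH FIRST.

   -- WHERE removes rows before grouping; reserve HAVING for conditions
   -- on the grouped results themselves (e.g. HAVING AVG(salary) > 20000).
   SELECT workdept, AVG(salary) AS avg_salary
   FROM employee
   WHERE job = 'CLERK'
   GROUP BY workdept;

   -- FETCH FIRST limits a sorted result to the rows you actually want.
   SELECT empno, lastname, salary
   FROM employee
   ORDER BY salary DESC
   FETCH FIRST 5 ROWS ONLY;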

3.) Keep an index handy. An index can include many columns, and the order of the columns is significant. You might want more than one index on a table.

  • DB2 provides a clever tool, the SQL Advise Facility. It’s available from the Control Center (Indexes – Create – Index using wizard), and from the command line as db2advis. The advisor takes a query, examines the generated plan, attempts to identify additional indexes that may help, and re-evaluates the cost with and without each of the indexes. The end result is a list of potentially helpful indexes, with estimates of their size and helpfulness.
  • You can also INCLUDE unsorted data in an index, to avoid fetching from the table at all in some instances. (See the sketch after this list.)
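
Here is a sketch of both ideas against the SAMPLE database. The exact db2advis options can differ between releases, the index name is made up, and EMPLOYEE already has a primary-key index on EMPNO, so the second statement is purely illustrative:

   db2advis -d sample -s "SELECT lastname, salary FROM employee WHERE workdept = 'D11' ORDER BY salary DESC"

   db2 "CREATE UNIQUE INDEX emp_no_name ON employee (empno) INCLUDE (lastname)"

The INCLUDE column rides along in the unique index without becoming part of the key, so a query that needs only EMPNO and LASTNAME can be answered from the index alone, with no FETCH node in the plan.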

4.) Use a Summary Table. See my solution “Tame Beastly Data with Summary Tables.”
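
As a reminder of the idea, here is a minimal sketch of a deferred-refresh summary table over a hypothetical SALES table; see that article for the details and restrictions:

   CREATE SUMMARY TABLE sales_by_dept AS
     (SELECT deptno, SUM(amount) AS total_amount, COUNT(*) AS sale_count
      FROM sales
      GROUP BY deptno)
     DATA INITIALLY DEFERRED REFRESH DEFERRED;

   REFRESH TABLE sales_by_dept;

Queries can then be directed at the much smaller summary table instead of re-aggregating the detail rows.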

While there’s much more that can be learned on your way to writing efficient queries, you’ve got a good start in knowing how to measure the impact of your efforts.
