Speed Up Your SQL Inserts

Database performance is one of the most critical characteristics of almost any application, and for the developer or DBA it usually depends on how fast they can retrieve data. Hence, many performance optimization and tuning books discuss ways to make queries faster. Even RDBMS makers understand the need for fast data retrieval and provide different tools (indexes, configuration options, and so on) to facilitate it. However, database performance is not always a matter of data retrieval speed. In some situations, it depends on the speed of data inserts.

Suppose you need to design a control measuring system or survey site that stores results in a database. The task seems to be pretty straightforward, but a closer look at the specifications reveals that everything is not as simple as you might assume:

  1. You need to process and insert a very high volume of incoming transactions into a database.
  2. You have to provide 24×7 availability.
  3. The number of fields (parameters) that have to be stored in the tables can vary widely. For example, the number of questions differs from survey to survey (i.e., the number of controls or elements on the Web pages), or the number of detectors or measuring devices can differ for different processes in the control measuring system.
  4. The application will be used not only for data inserts but also for data retrieval, though not very intensively. (For extensive analyses, you can create an OLAP system, but that is another story.)

Comprehensive analyses of all the possible solutions for this scenario are beyond the scope of this article. However, you can generally attack the requirements from different directions. For example, you can use all or some of the following options:

  • High-speed Internet and fast networks
  • Powerful servers with fast CPUs and lots of RAM
  • Fast disks and high-performance RAID(s)
  • Load balancing for the Web servers
  • Failover clustering for database servers
  • Table partitioning
  • Lots of storage space (SAN), and so on

All the above solutions, except for partitioning, are quite expensive. And even if you are ready to make a big investment, you still need to put together a few puzzles. For instance, any database needs to be maintained on a regular basis: indexes, backups, purges of old data, defragmentation, and so on. If you supply your data from the application (Web) server(s) directly, then during database maintenance you can lose some of your ready-for-insert data or crash your application (or database) server. So you need to provide some kind of buffer that can temporarily hold your data during heavy resource consumption (which means slow inserts) on a database server. Of course, you can get plenty of maintenance time by adding more database servers. But in that case, you create another problem: consolidating data from different servers into one database.

A good choice for a buffer would be Berkeley DB, which is very efficient for repetitive static queries and stores data in key/value pairs. (Recall that the survey site or control measuring system examples submit data as control (element) name/value or detector position/value pairs.) But no buffer can grow endlessly, and if you can't transfer data to a regular database quickly enough, your servers will still end up crashing.

Thus, the speed of inserts becomes one of the most critical aspects of the example applications.

How to Store Data?

The design of your database can significantly affect the performance of inserts, so you should choose your database (storage) structure carefully. For example, you might want to store data as XML. That choice is very attractive, but in this case it will slow down the inserts and occupy a lot of storage space. You may also want to build the database in the best traditions of database design: each table represents a real-world object, and each column in the table corresponds to one of the object's properties. But in this case (a survey site or control measuring system), the number of properties (columns) is dynamic. It can vary from tens to thousands, making the classical design unsuitable.

You will most likely settle on the following solution: storing your data in name/value pairs, which map perfectly onto the HTML controls' (elements') name/value pairs and onto Berkeley DB key/value data. Since most survey (control device) data values can be interpreted as integers, you will probably find it convenient to split the data by type. You can create two tables, let's say tbl_integerData and tbl_textData. Both tables have exactly the same structure with only one exception: the data type of the "value" column is integer in the first table and text (varchar) in the second.
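For illustration, the two tables might look something like this (the column names and sizes here are just placeholders):

    -- A minimal sketch; column names and sizes are placeholders, not a prescribed schema
    CREATE TABLE tbl_integerData (
        submissionID int          NOT NULL,   -- which survey/measurement submission the pair belongs to
        name         varchar(100) NOT NULL,   -- control (element) name or detector position
        value        int          NOT NULL    -- the integer value
    )

    CREATE TABLE tbl_textData (
        submissionID int           NOT NULL,
        name         varchar(100)  NOT NULL,
        value        varchar(7500) NOT NULL   -- same structure; only the value type differs
    )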

Comparing Inserts

There are many ways to insert data into a table. Some of them are ANSI-compliant, while others are RDBMS-specific. But they all are either one-row inserts or many-rows inserts. Needless to say, a many-rows insert is much faster than repetitive one-row inserts, but how much faster? To figure that out, run the test in Listing 1.

Run all the batches from Listing 1 separately. Batch 1 creates and loads data into the table testInserts. The first insert (before the loop) loads 830 rows, selecting OrderID from the table Northwind..Orders. (If you are using SQL Server 2005 (SS2005) and haven’t installed the Northwind database yet, you can download it from the Microsoft Download Center.) Then each loop’s iteration doubles the number of rows in the testInserts table. The final number of rows for two iterations is 3,320.
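In outline, Batch 1 does something like the following sketch (the column name is a placeholder; see Listing 1 for the actual code):

    -- Sketch of Batch 1: create a heap table and load it from Northwind
    CREATE TABLE testInserts (OrderID int NOT NULL)
    GO

    -- Load 830 rows from Northwind..Orders
    INSERT INTO testInserts (OrderID)
    SELECT OrderID FROM Northwind..Orders

    -- Each iteration doubles the row count: 830 -> 1,660 -> 3,320
    DECLARE @i int
    SET @i = 1
    WHILE @i <= 2
    BEGIN
        INSERT INTO testInserts (OrderID)
        SELECT OrderID FROM testInserts
        SET @i = @i + 1
    END
    GO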

To test one-row inserts, copy the result of Batch 3 into a new window in Query Analyzer or Management Studio and then run it. In my tests on a few boxes with different hardware configurations, the execution time of the many-rows insert (Batch 2) was about 46 ms; the execution time of the one-row inserts (produced by Batch 3) was approximately 36 seconds. (These numbers relate to SS2000.) Thus, the many-rows insert is many times faster than the repetitive one-row inserts.
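For reference, Batches 2 and 3 boil down to something like this sketch (the target table names t1 and t2 are placeholders; the actual listing may measure and generate the statements somewhat differently):

    -- Sketch of Batch 2: a single many-rows insert, timed in milliseconds
    CREATE TABLE t1 (OrderID int NOT NULL)
    GO
    DECLARE @start datetime
    SET @start = GETDATE()

    INSERT INTO t1 (OrderID)
    SELECT OrderID FROM testInserts

    SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms
    GO

    -- Sketch of Batch 3: generate 3,320 one-row INSERT statements as text;
    -- copy the resulting rows into a new window and run them to time the one-row inserts
    CREATE TABLE t2 (OrderID int NOT NULL)
    GO
    SELECT 'INSERT INTO t2 (OrderID) VALUES (' + CAST(OrderID AS varchar(10)) + ')'
    FROM testInserts
    GO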

A number of factors make repetitive one-row inserts slower. For example, the total number of locks, execution plans, and execution statements issued by SQL Server is much higher for repetitive one-row inserts. In addition, each insert (batch) needs to obtain object permissions, begin and commit a transaction, and write data into the transaction log (even with the simple recovery model).

The following are just a few results that I got by using the Profiler and tracing the inserts:

  • BEGIN…COMMIT transaction pairs: 7,265 for one-row inserts versus one pair for the many-rows insert
  • Writes to the transaction log: 11,045 and 6,360, respectively
  • Locks: 26,986 and 11,670, respectively

You also should remember that SQL Server has a pretty complicated mechanism for finding the space for new rows. For heap tables, as in Listing 1, SQL Server uses IAM (Index Allocation Map) and PFS (Page Free Space) pages to find a data page with free space among the pages already allocated to the table. If all the pages are full, SQL Server, using GAM (Global Allocation Map) and SGAM (Shared Global Allocation Map) pages, tries to find a free page in a mixed extent or assign a new uniform extent to the table. For the Listing 1 example, which has a heap table and no deletes, SQL Server inserts data at the end of the last page allocated to the table. This may produce a "hot spot" at the end of the table in a multi-user environment or when a few application servers are talking to one database.

Thus, for repetitive one-row inserts, SQL Server will launch the allocation mechanism as many times as you have inserts. For the many-rows insert, the space will be allocated immediately to accommodate all the inserted rows. For tables with indexes, you can additionally expect splits of data pages for clustered indexes and/or the index updates for nonclustered indexes.

How to Make Your Inserts Faster

To make your inserts faster, the obvious solution is replacing the repetitive one-row inserts with many-rows inserts. Using the example in Listing 2, this section demonstrates how to do that.

To trace the inserts, I created an INSERT trigger on the table tmpInserts. Every time the trigger fires, it just prints the word Hello. To transform one-row inserts into a many-rows insert, I ran an INSERT...SELECT statement, where the SELECT part consists of many simple SELECT statements connected by UNION (ALL). I placed everything in a string variable and executed it dynamically. As you can see, for row-by-row inserts, the trigger fired as many times as I made inserts (three, in this example). For the many-rows insert, the trigger fired only once.
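In outline, Listing 2 does something like this sketch (the column and trigger names are placeholders; see the actual listing for details):

    -- Sketch of Listing 2: trace how often the INSERT trigger fires
    CREATE TABLE tmpInserts (val int NOT NULL)
    GO
    CREATE TRIGGER tr_tmpInserts ON tmpInserts FOR INSERT
    AS
        PRINT 'Hello'
    GO

    -- Three repetitive one-row inserts: the trigger fires three times
    INSERT INTO tmpInserts VALUES (1)
    INSERT INTO tmpInserts VALUES (2)
    INSERT INTO tmpInserts VALUES (3)
    GO

    -- One many-rows insert built as a dynamic string: the trigger fires only once
    DECLARE @sql varchar(1000)
    SET @sql = 'INSERT INTO tmpInserts ' +
               'SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6'
    EXEC (@sql)
    GO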

So, how can you apply this insert technique to an application for a control measuring system or a Web site (e.g., a survey site with a very high volume of transactions)? Well, when a user submits the form, the application (Web) server receives it as a sequence of name-value pairs, corresponding to the controls (elements) on the form. All you need to do now is slightly modify that sequence and forward it to a database server, which will take care of the inserts.

The examples in Listing 3 and Listing 4 show how the string, transferred to a database server, should look and how it can be processed and inserted into the table(s).

I used the table testInserts that I created and loaded in Listing 1 (Batch 1). The value of the variable @numElements defines the number of name-value pairs per string and, hence, the length of the string that will be generated. The letter x serves as a placeholder. (I'll explain its purpose later.)
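In outline, Listing 3 does something like the following sketch: it numbers the rows of testInserts, concatenates them into strings of @numElements name-value pairs separated by the x placeholder, and prints one procedure call per string. (This is only a rough approximation; the actual listing generates its values somewhat differently.)

    -- Sketch of Listing 3: build the call script for the string-insert procedure
    SET NOCOUNT ON

    DECLARE @numElements int, @rowNum int, @orderID int, @str varchar(8000)
    SET @numElements = 250
    SET @str = ''

    -- Number the rows so each pair gets a sequential "name"
    SELECT IDENTITY(int, 1, 1) AS rowNum, OrderID
    INTO #numbered
    FROM testInserts

    DECLARE pairs CURSOR FAST_FORWARD FOR
        SELECT rowNum, OrderID FROM #numbered ORDER BY rowNum
    OPEN pairs
    FETCH NEXT FROM pairs INTO @rowNum, @orderID

    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF @str = ''   -- the first pair in each string carries the a=/b= aliases
            SET @str = 'a=' + CAST(@rowNum AS varchar(10)) + ',b=' + CAST(@orderID AS varchar(10))
        ELSE           -- subsequent pairs are appended with the x placeholder
            SET @str = @str + 'x' + CAST(@rowNum AS varchar(10)) + ',' + CAST(@orderID AS varchar(10))

        IF @rowNum % @numElements = 0   -- emit one procedure call per @numElements pairs
        BEGIN
            PRINT 'spu_InsertStrings "' + @str + '"'
            PRINT 'GO'
            SET @str = ''
        END
        FETCH NEXT FROM pairs INTO @rowNum, @orderID
    END

    IF @str <> ''   -- emit the remaining partial string, if any
    BEGIN
        PRINT 'spu_InsertStrings "' + @str + '"'
        PRINT 'GO'
    END

    CLOSE pairs
    DEALLOCATE pairs
    DROP TABLE #numbered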

Listing 4 is a stored procedure that will process and insert data submitted to a database server.
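In outline, such a procedure might look like the following sketch (the actual Listing 4 may include additional validation and error handling; the target table t3 is the test table created in the steps below):

    -- Sketch of the string-insert procedure
    CREATE PROCEDURE spu_InsertStrings
        @str varchar(8000)
    AS
    BEGIN
        SET NOCOUNT ON
        DECLARE @sql varchar(8000)

        -- 'a=1,b=10249x2,10251x3,10252' becomes
        -- 'INSERT INTO t3 SELECT a=1,b=10249 UNION ALL SELECT 2,10251 UNION ALL SELECT 3,10252'
        SET @sql = 'INSERT INTO t3 SELECT ' + REPLACE(@str, 'x', ' UNION ALL SELECT ')
        EXEC (@sql)
    END
    GO

Note that the a= and b= prefixes in the incoming string double as column aliases for the first SELECT, so the string needs no further massaging before it is executed.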

Here's the whole trick. You need to replace each placeholder (x) in the string parameter with the phrase UNION ALL SELECT, and then execute this modified string. Now you can test the solution as follows:

  1. Create and load the table testInserts, if you don't have it yet (see Listing 1, Batch 1).
  2. Run the script in Listing 3.
  3. Create the test table t3 and the stored procedure spu_InsertStrings (Listing 4).
  4. Copy and paste the result of step 2 into a new Query Analyzer (Management Studio) window. You will get something like the following script:
    SET NOCOUNT ON
    GO
    SET QUOTED_IDENTIFIER OFF
    spu_InsertStrings "a=1,b=10249x2,10251x . . . . . x249,11071x250,10250"
    GO
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    spu_InsertStrings "a=3251,b=44050x3252,44053x . . . x3319,44289x3320,44292"
    GO

Don't forget to include the two SET statements for NOCOUNT and QUOTED_IDENTIFIER at the very beginning of the script. Then run the script and make a note of the execution time.

Using the string-inserts technique, I was able to insert 3,320 rows into the table t3 in 2 seconds. That was 18 times faster than the repetitive one-row inserts. (These numbers relate to SS2000. With SS2005, I saw improvements in the 60-70 percent range.)

Some Limitations

The many-rows insert has one serious side effect: it can lock a table for a long time, which is unacceptable in multi-user environments. However, this is not a concern in the example scenario, where you insert just a few hundred rows in one shot, which won't cause a locking problem.

The 8,000-byte limit on the length of a varchar variable produces another inconvenience. However, you can solve that problem by storing incoming strings in a separate table and running another process that checks for completed sets of strings belonging to the same survey and user submission. Then you can insert each such set into the working table asynchronously.
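For example, the buffer table and the completeness check might look roughly like this sketch (the table and column names, and the chunk-numbering scheme, are assumptions made for illustration):

    -- Sketch of a buffer table for oversized submissions
    CREATE TABLE stagingStrings (
        submissionID int           NOT NULL,                     -- identifies the survey/user submission
        partNo       int           NOT NULL,                     -- position of this chunk within the submission
        totalParts   int           NOT NULL,                     -- how many chunks the submission was split into
        chunk        varchar(7500) NOT NULL,                     -- one piece of the name-value string
        receivedAt   datetime      NOT NULL DEFAULT GETDATE()
    )
    GO

    -- A background process picks up submissions whose chunks have all arrived
    -- and passes them, chunk by chunk, to the insert procedure
    SELECT submissionID
    FROM stagingStrings
    GROUP BY submissionID, totalParts
    HAVING COUNT(*) = totalParts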

In SS2005, where the varchar(max) data type can store up to 2 GB, you have much more flexibility. You can adjust the length of the string to any size up to 2 GB and try to get the optimal performance of the string inserts.

One last note: validate data in the body of your stored procedures. Although it will make your stored procedures heavier, your string inserts will still be much faster than repetitive one-row inserts.
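For instance, in the spu_InsertStrings sketch shown earlier, you might add a simple character whitelist before building the dynamic statement (again, only a sketch):

    -- Inside the procedure, before the dynamic EXEC: reject a string that contains
    -- anything besides digits, commas, the x placeholder, and the a=/b= aliases
    IF @str LIKE '%[^0-9abx,=]%'
    BEGIN
        RAISERROR ('Invalid characters in input string.', 16, 1)
        RETURN
    END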
