

Speed Up Your SQL Inserts : Page 4

Traditionally, database performance has been measured by the speed of data retrieval, but retrieval is not always the bottleneck. In some situations, database performance depends on the speed of data inserts.





Some Limitations

The many-rows insert has one serious side effect: it can lock a table for a long time, which is unacceptable in multi-user environments. That is not a concern in the example scenario, however, where you insert just a few hundred rows in one shot; a batch that small can't cause a locking problem.
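For reference, a many-rows insert of the kind discussed here can be sketched as a single statement built from UNION ALL selects (the Answers table and its columns are hypothetical; this syntax works on SQL Server 2000/2005, since the multi-row VALUES constructor only arrived in SQL Server 2008):

```sql
-- Hypothetical working table for survey answers
CREATE TABLE Answers (
    userID   INT,
    surveyID INT,
    answerID INT
);

-- One statement inserts all the rows in a single operation,
-- instead of paying per-statement overhead for each row.
INSERT INTO Answers (userID, surveyID, answerID)
SELECT 1, 10, 101 UNION ALL
SELECT 1, 10, 102 UNION ALL
SELECT 1, 10, 103;
```

The trade-off is exactly the one noted above: the single statement holds its locks for the duration of the whole batch, which matters only when the batch is large.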

The 8,000-byte limit on the length of a varchar variable is another inconvenience. You can work around it by storing the incoming strings in a separate staging table and running another process that checks for completed sets of strings belonging to the same survey and user submission. Once a set is complete, you can insert it into the working table asynchronously.
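One way to sketch that workaround (the table and column names here are hypothetical, not from the article's schema):

```sql
-- Staging table: each incoming piece of a long delimited string
-- is stored with enough keys to reassemble the full set later.
CREATE TABLE StagingStrings (
    userID   INT,
    surveyID INT,
    pieceNo  INT,           -- ordinal position of this piece
    totalPcs INT,           -- how many pieces the full set contains
    piece    VARCHAR(8000)  -- each piece stays under the varchar limit
);

-- An asynchronous job can then find the completed sets:
SELECT userID, surveyID
FROM StagingStrings
GROUP BY userID, surveyID, totalPcs
HAVING COUNT(*) = totalPcs;
```

Each completed set found this way can be reassembled, parsed, and inserted into the working table without the client ever sending more than 8,000 bytes per call.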

In SQL Server 2005, where the varchar(max) data type can store up to 2 GB, you have much more flexibility. You can adjust the length of the string to any size up to 2 GB and tune it for the best string-insert performance.
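In SQL Server 2005 the whole delimited list can simply live in one variable; a minimal illustration:

```sql
-- varchar(max) can hold up to 2 GB, so the entire delimited
-- string fits in one variable instead of 8,000-byte pieces.
DECLARE @list VARCHAR(MAX);
SET @list = '101,102,103';  -- potentially far longer in practice
```

With the staging-table workaround no longer required, the remaining tuning question is simply how long a string gives the best insert throughput for your workload.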

One last note: validate the data in the body of your stored procedures. Although the validation makes the procedures heavier, your string inserts will still be much faster than repetitive one-row inserts.
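A minimal sketch of validating inside the procedure before doing any insert work (the procedure and parameter names are hypothetical):

```sql
CREATE PROCEDURE InsertAnswers
    @userID INT,
    @list   VARCHAR(8000)
AS
BEGIN
    -- Validate up front: reject a missing or empty answer list
    -- before spending any time parsing or inserting.
    IF @list IS NULL OR LTRIM(RTRIM(@list)) = ''
    BEGIN
        RAISERROR('Empty answer list.', 16, 1);
        RETURN;
    END;

    -- ... parse @list and perform the many-rows insert here ...
END;
```

Checks like these run once per batch rather than once per row, which is why they leave the string-insert approach well ahead of row-by-row inserts.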

Alex Kozak is a senior DBA/analyst working for SAP Canada. He has more than 15 years of database and programming experience. Microsoft has included some of his articles in the MSDN Library.