The “ORA-01555: snapshot too old” error, which causes user transactions to fail, is a nightmare for Oracle DBAs and developers. It usually occurs after queries or batch processes have been running for a long time, so it can cost you many hours of processing.
In my previous 10-Minute Solution on this subject, “The ORA-01555 Error: A DBA Solution,” I explained the causes of this error and described two examples of how to avoid it from the DBA’s perspective.
In this 10-Minute Solution, I show you a third example: how to avoid the ORA-01555 error programmatically, that is, from the PL/SQL developer’s perspective.
How do you avoid the ORA-01555 error, which is hard to recreate and crops up inconsistently?
From the developer’s perspective, you can restructure your PL/SQL code to avoid fetching across commits, the practice that causes the ORA-01555 error.
Recently I’ve been working on optimizing the DeleteClassification stored procedure. It works mainly with two tables, ValidTransaction (parent) and Classification (child), which are in a one-to-many relationship via the TransactionId foreign key.
The purpose of this stored procedure is to delete Classification records for all ValidTransactions that fall within a specified period/month. Records in the date range are selected using the TransactionDate field of the ValidTransaction table. In my case, ValidTransaction contains data for 12 periods (one year) with 1.2 million rows total (100,000 rows per month on average). The Classification table has 10 rows on average for each row in the ValidTransaction table, with 12 million rows total. Cleaning up the Classification data for a specified period/month therefore means deleting 1 million rows from the Classification table.
This presents two major problems:
- Execution is slow. It takes about 30 minutes to delete 1 million Classification records from a total of 12 million records.
- The “ORA-01555: snapshot too old” error keeps cropping up. Although it happens only occasionally, it always occurs during the last six minutes of the run. The error is raised on the line where data is fetched from the cursor (shown in red in Listing 1):
fetch TransactToReset into v_TransactionId;
The TransactToReset cursor definition can be improved by eliminating the inefficient DISTINCT clause, which forces an extra step of sorting and removing duplicates. Moreover, the original query joins two tables, ValidTransaction and Classification, even though it returns data only from the ValidTransaction table.
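Listing 1 is not reproduced in this excerpt, but based on the description above, the original cursor and fetch loop likely followed this pattern (the table and column names come from the article; the exact code is an assumption):

```sql
-- Hypothetical reconstruction of the Listing 1 pattern: a DISTINCT join
-- cursor, fetched one row at a time, with commits inside the fetch loop
-- (the "fetch across commit" that triggers ORA-01555).
cursor TransactToReset (p_StartDate IN date, p_EndDate IN date) is
   select distinct T.TransactionId
     from ValidTransaction T, Classification C
    where T.TransactionId   = C.TransactionId
      and T.TransactionDate >= p_StartDate
      and T.TransactionDate <= p_EndDate;
...
loop
   fetch TransactToReset into v_TransactionId;
   exit when TransactToReset%notfound;
   delete from Classification
    where TransactionId = v_TransactionId;
   commit;  -- committing while the cursor is still open
end loop;
```

Each commit here frees rollback (undo) segments that the still-open cursor may need for its read-consistent view, which is exactly the condition under which ORA-01555 appears.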
I’ve decided to use a correlated subquery with an EXISTS clause instead of joining tables. This is much more efficient and results in significant performance gains.
Here is the modified version of the TransactToReset cursor from Listing 1:

cursor TransactToReset (p_StartDate IN date, p_EndDate IN date) IS
   select T.TransactionId
     from ValidTransaction T
    where T.TransactionDate >= p_StartDate
      and T.TransactionDate <= p_EndDate
      and exists (select 1
                    from Classification C
                   where C.TransactionId = T.TransactionId);

I’ve also decided to utilize some of Oracle 8i PL/SQL’s new features, such as BULK BINDS and BULK COLLECT (described in my previous 10-Minute Solution, “New PL/SQL Features in Oracle 8i: Part II”). These features improve performance and also increase the number of rows committed in one transaction, thus decreasing the number of transactions and lowering the probability of the ORA-01555 error.
Listing 2 shows the improved DeleteClassification stored procedure. Here the BULK COLLECT feature bulk-fetches TransactionId values from the TransactToReset cursor into the v_TransactionId collection (a PL/SQL table). The BULK COLLECT operation fetches the data, starting at index 1, and successively overwrites elements in the target collection (v_TransactionId) until it retrieves all rows.
Sidebar: Fetching Across Commits: Where Is the Source of the Problem?
Fetching across commits is a common and accepted practice in the Oracle programming world, even though it’s not an ANSI-supported operation.
I use the LIMIT clause to restrict the number of rows fetched from the cursor (see sidebar) to the p_CommitBatchSize value, which is passed as a parameter. With the default p_CommitBatchSize of 10,000, the 100,000 TransactionId values for a month are retrieved in 10 bulk fetches versus 100,000 single-row fetches in the original version. Moreover, using the BULK BINDS feature (the FORALL clause), those 100,000 TransactionId values are sent to the SQL engine in 10 batches, as opposed to 100,000 individual statements, as before. That leads to a significant performance gain.
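Listing 2 is not reproduced in this excerpt, but its core can be condensed into the following sketch (TransactToReset, p_CommitBatchSize, and the table names come from the article; the collection type and loop structure are assumptions):

```sql
-- Condensed sketch of the Listing 2 pattern: bulk-fetch TransactionId
-- values in batches of p_CommitBatchSize, then delete them with FORALL.
-- This fragment sits inside the DeleteClassification procedure body.
declare
   type t_IdTable is table of ValidTransaction.TransactionId%type
      index by binary_integer;
   v_TransactionId t_IdTable;
begin
   open TransactToReset(p_StartDate, p_EndDate);
   loop
      -- One bulk fetch retrieves up to p_CommitBatchSize rows at once.
      fetch TransactToReset bulk collect
         into v_TransactionId limit p_CommitBatchSize;
      exit when v_TransactionId.count = 0;
      -- One FORALL statement sends the whole batch to the SQL engine.
      forall i in 1..v_TransactionId.count
         delete from Classification
          where TransactionId = v_TransactionId(i);
      commit;  -- note: this still commits while the cursor is open
   end loop;
   close TransactToReset;
end;
```

Note that this first iteration still commits inside the fetch loop, which is why, as described next, it reduces but does not eliminate the ORA-01555 error.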
The improved procedure executes 20 percent faster and the ORA-01555 error occurs less frequently. However, I could not eradicate the error completely, even though I had reduced the number of transactions from 1,000 to 10. ORA-01555 errors still occurred inconsistently during periods of the day when other users’ transactional activity was high.
Modifying the DeleteClassification Procedure: Second Iteration
It’s clear that we can’t leave the cursor open for fetching while we keep processing and committing data changes. To eliminate this error completely, you need to modify the code to implement the following algorithm:
- Open the cursor.
- Fetch the whole cursor into a collection (no data manipulation or committing here).
- Close the cursor.
- Loop through the collection, process the data, and commit.
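The algorithm above can be sketched as follows (the names are reused from the earlier listings; the exact code of the second iteration is an assumption):

```sql
-- Sketch of the second iteration: fetch the entire result set first,
-- close the cursor, and only then delete and commit in batches.
open TransactToReset(p_StartDate, p_EndDate);
fetch TransactToReset bulk collect into v_TransactionId;  -- whole cursor, no LIMIT
close TransactToReset;  -- cursor is closed before the first commit

v_Start := 1;
while v_Start <= v_TransactionId.count loop
   v_End := least(v_Start + p_CommitBatchSize - 1, v_TransactionId.count);
   forall i in v_Start..v_End
      delete from Classification
       where TransactionId = v_TransactionId(i);
   commit;  -- safe: no open cursor depends on the rollback segments
   v_Start := v_End + 1;
end loop;
```

The trade-off is memory: the whole month’s worth of TransactionId values (about 100,000 in this case) must fit in the PL/SQL collection before any deletes begin.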
As you can see, restructuring PL/SQL code in combination with new Oracle 8i features, such as BULK BINDS and BULK COLLECT, allows you to eliminate the ORA-01555 error as well as significantly improve database performance.