Optimize the Massive DELETE Operation in Oracle, Part 1


Need Proof? Generate Test Data
To demonstrate the difference between direct-path and regular INSERTs, you need test data. Create a table named TestData with half a million rows, a primary key, one non-unique index, and a check constraint, using the all_objects view as the data source. You will use this table in all of the article's code and performance tests.

First, create the table with the following code:

drop table TestData;

create table TestData
(
   objectId     integer        not null,
   objectName   varchar2(30)   not null,
   objectType   varchar2(30)   not null,
   runId        integer        not null,
   createDate   date,
   status       varchar2(7),
   description  varchar2(255)
)
nologging;



Next, populate it with data using the following block. Since the all_objects view (Oracle 9.2) has 20K+ rows, the block performs the INSERT 25 times to load 500K+ rows into the TestData table:

declare
   vCount pls_integer := 25;
begin
   for nIndex in 1..vCount loop
      insert /*+ APPEND */ into TestData
      (
         objectId, objectName, objectType, runId,
         createDate, status, description
      )
      select object_id, object_name, object_type, nIndex,
             created, status, object_name
        from all_objects;

      commit;
   end loop;
end;
/
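The commit inside the loop is deliberate: after a direct-path INSERT, the same transaction cannot read or modify the loaded table until it commits, so each pass must commit before the next one runs. A minimal illustration of that restriction (the runId literal 99 is arbitrary, and the rollback keeps the row count intact):

insert /*+ APPEND */ into TestData
   select object_id, object_name, object_type, 99,
          created, status, object_name
     from all_objects;

select count(*) from TestData;

ORA-12838: cannot read/modify an object after modifying it in parallel

rollback;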

Check the row count:

select count(*) from TestData;

  COUNT(*)
----------
    547875

Add the primary key on the objectId and runId columns, a non-unique index on the objectName and objectType columns, and a check constraint on the runId column:

alter table TestData
   add constraint TestData_PK primary key (objectId, runId);

create index TestData_NameType on TestData (objectName, objectType);

alter table TestData
   add constraint TestData_CK check (runId > 0);

Gather the statistics using the following block:

begin
   dbms_stats.gather_table_stats
   (
      ownname    => user,
      tabname    => 'TestData',
      method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
      degree     => 4,
      cascade    => true
   );
end;
/
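If you want to confirm the statistics are in place, the data dictionary shows them; num_rows should match (or closely approximate) the row count above:

select num_rows, last_analyzed
  from user_tables
 where table_name = 'TESTDATA';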

Now, create two tables, TestData_Logging and TestData_Nologging, with structures identical to TestData but with no indexes or constraints:

create table TestData_Logging as
   select * from TestData where 1 = 0;

create table TestData_Nologging nologging as
   select * from TestData where 1 = 0;

TestData_Logging is in LOGGING mode, while TestData_Nologging is in NOLOGGING mode:

select table_name, logging
  from user_tables
 where table_name like 'TESTDATA\_%' escape '\';

TABLE_NAME                     LOGGING
------------------------------ -------
TESTDATA_LOGGING               YES
TESTDATA_NOLOGGING             NO

Take a snapshot of the redo size before doing a regular INSERT. (The v$mystat view reports statistics for the current session only, so run all of the measurement steps in the same session.)

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

NAME                 VALUE
----------------- --------
redo size         39867216

Perform the INSERT into the TestData_Logging table (Scenario 1):

set timing on

insert into TestData_Logging
   select * from TestData;

547875 rows created.

Elapsed: 00:00:08.28

Take a snapshot of redo and undo after the INSERT:

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'redo size';

NAME                 VALUE
----------------- --------
redo size         93019404

select used_ublk from v$transaction;

 USED_UBLK
----------
       238

commit;

Populating the TestData_Logging table with 547,875 rows in LOGGING mode took 8.28 seconds and forced the database to generate 50.69MB (93,019,404 - 39,867,216 = 53,152,188 bytes) of redo and 238 undo blocks.
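If you prefer to let the database do the conversion, a quick query turns the two snapshots into megabytes (the literals are the snapshot values captured above):

select round((93019404 - 39867216) / 1024 / 1024, 2) as redo_mb
  from dual;

   REDO_MB
----------
     50.69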


