Book Excerpt: Oracle9i The Complete Reference

This book covers critical relational, object-relational, and Web database concepts, as well as SQL, SQL*PLUS, PL/SQL, Java, JDBC, SQLJ, and XML programming. You'll also get full details on database administration techniques—and much more. Read all of Chapter 21: "Using SQL*Loader to Load Data."


Chapter 21: Using SQL*Loader to Load Data

In the scripts provided in Appendix A, a large number of insert commands are executed. In place of those inserts, you could create a file containing the data to be loaded and then use Oracle's SQL*Loader utility to load the data. This chapter provides you with an overview of the use of SQL*Loader and its major capabilities. Two additional data-movement utilities, Export and Import, are covered in Chapter 40. SQL*Loader, Export, and Import are described in great detail in the Oracle9i Database Utilities guide provided with the standard Oracle documentation set.

SQL*Loader loads data from external files into tables in the Oracle database. SQL*Loader requires two primary files: the datafile, which contains the information to be loaded, and the control file, which contains information on the format of the data, the records and fields within the file, the order in which they are to be loaded, and even, when needed, the names of the multiple files that will be used for data. You can also combine the control file information into the datafile itself, although the two are usually separated to make it easier to reuse the control file.
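
When the data is embedded in the control file, you specify INFILE * and place the records after a BEGINDATA marker. The following is a minimal sketch; the comma-delimited format and the sample rows are purely illustrative:

LOAD DATA
INFILE *
INTO TABLE BOOKSHELF
FIELDS TERMINATED BY ','
(Title, Publisher, CategoryName, Rating)
BEGINDATA
MY LEDGER,KOCH PRESS,ADULTNF,3
WONDERFUL LIFE,W.W.NORTON & CO.,ADULTNF,5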

When executed, SQL*Loader will automatically create a log file and a "bad" file. The log file records the status of the load, such as the number of rows processed and the number of rows committed. The "bad" file will contain all the rows that were rejected during the load due to data errors, such as nonunique values in primary key columns.
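
Both files can also be named explicitly when SQL*Loader is invoked from the operating-system prompt; the username, password, and file names in this sketch are placeholders:

sqlldr userid=practice/practice control=bookshelf.ctl log=bookshelf.log bad=bookshelf.bad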

Within the control file, you can specify additional commands to govern the load criteria. If these criteria are not met by a row, the row will be written to a "discard" file. The log, bad, and discard files will have the extensions .log, .bad, and .dsc, respectively. Control files are typically given the extension .ctl.
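
As a sketch of such a restriction, a when clause in the control file sends any row that fails the test to the discard file. The delimited input file name, discard file name, column, and value here are hypothetical:

LOAD DATA
INFILE 'bookshelf_delim.dat'
DISCARDFILE 'bookshelf_delim.dsc'
INTO TABLE BOOKSHELF
WHEN (CategoryName = 'ADULTNF')
FIELDS TERMINATED BY ','
(Title, Publisher, CategoryName, Rating)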

SQL*Loader is a powerful utility for loading data, for several reasons:

  • It is highly flexible, allowing you to manipulate the data as it is being loaded.
  • You can use SQL*Loader to break a single large data set into multiple sets of data during commit processing, significantly reducing the size of the transactions processed by the load.
  • You can use its Direct Path loading option to perform loads very quickly (sample invocations follow this list).
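
As a sketch, the last two points map directly to sqlldr command-line parameters: rows sets the number of rows per commit for a conventional path load, and direct=true requests a Direct Path load. The username, password, and values shown are placeholders:

sqlldr userid=practice/practice control=bookshelf.ctl rows=1000
sqlldr userid=practice/practice control=bookshelf.ctl direct=true
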
To start using SQL*Loader, you should first become familiar with the control file, as described in the next section.

The Control File



The control file tells SQL*Loader how to read and load the data: where to find the source data for the load, the tables into which to load it, and any other rules that must be applied during load processing. These rules can include restrictions for discards (similar to where clauses for queries) and instructions for combining multiple physical rows in an input file into a single row during an insert. SQL*Loader uses the control file to create the insert commands executed for the data load.
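
Conceptually, each record that passes the load criteria becomes an insert of the following form; the values shown are illustrative:

insert into BOOKSHELF (Title, Publisher, CategoryName, Rating)
values ('MY LEDGER', 'KOCH PRESS', 'ADULTNF', '3');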

The control file is created at the operating-system level, using any text editor that enables you to save plain text files. Within the control file, commands do not have to obey any rigid formatting requirements, but standardizing your command syntax will make later maintenance of the control file simpler.

The following listing shows a sample control file for loading data into the BOOKSHELF table:

LOAD DATA
INFILE 'bookshelf.dat'
INTO TABLE BOOKSHELF
(Title POSITION(01:100) CHAR,
 Publisher POSITION(101:120) CHAR,
 CategoryName POSITION(121:140) CHAR,
 Rating POSITION(141:142) CHAR)

In this example, data is loaded from the file bookshelf.dat into the BOOKSHELF table. The bookshelf.dat file contains the data for all four of the BOOKSHELF columns, with whitespace padding out the unused characters in those fields. Thus, the Publisher column value always begins at space 101 in the file, even if the Title value is less than 100 characters. Although this formatting makes the input file larger, it may simplify the loading process. No length needs to be given for the fields, since the starting and ending positions within the input data stream effectively give the field length.

The infile clause names the input file, and the into table clause specifies the table into which the data will be loaded. Each of the columns is listed, along with the position where its data resides in each physical record in the file. This format allows you to load data even if the source data's column order does not match the order of columns in your table.

To perform this load, the user executing the load must have INSERT privilege on the BOOKSHELF table.
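
If that privilege is missing, a user with sufficient rights (such as the table's owner) can grant it; the grantee name in this sketch is a placeholder:

grant INSERT on BOOKSHELF to practice;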


