Happy (Audit) Trails to Oracle E-records Management

Although electronic records have been around since the first databases, regulations about how to manage them are only now coming onto the scene. Legislation such as the Sarbanes-Oxley Act and HIPAA (the Health Insurance Portability and Accountability Act) sets standards for access, security of records, retention, and the replacement of paper-based records.

This article discusses the management of electronic records in Oracle using audit trails, mechanisms for preventing previously stored information from being altered or corrupted. An audit trail is a time-stamped record of the changes that users make to a record. It contains the name/identification of the user who performed the modification and typically also includes the reason for the change. An audit trail enables you to audit the system for a record of alterations, as well as to reconstruct the data at a specified point in time.

The following are the various facets of audit trails for electronic records:

  • Audit tools
  • Event selection (selective auditing)
  • Audit data generation
  • Automatic response
  • Audit analysis
  • Storage and archiving

This tutorial addresses the following issues from the system designer, developer, and IT staff points of view:

  • Constituents of an electronic record
  • Audit trails
  • Database design
  • Application security mechanisms
  • Bulk-data capture and processing issues
  • Background processing
  • Application, system, and data-migration
  • Oracle User Administration
  • System administration
  • Procedural mechanisms

Implementing Audit Trails in the Database
Database applications usually implement audit trails using triggers on the tables. This is the primary audit trail mechanism. In the trigger-based approach, each table in the database has a corresponding audit log table (also called a history table). This audit log table stores the audit trail data. The database table to be audited has ‘BEFORE INSERT’, ‘BEFORE UPDATE’, and ‘BEFORE DELETE’ triggers. The audit log table contains all the columns that the main table has, as well as the following additional columns:

  • INS_USER_ID
  • INS_TIMESTAMP
  • UPD_USER_ID
  • UPD_TIMESTAMP
  • DEL_USER_ID
  • DEL_TIMESTAMP

The INSERT triggers populate the INS_ columns, UPDATE populates the UPD_ columns, and the DELETE triggers populate the DEL_ columns. The MERGE statement also fires the appropriate triggers (INSERT or UPDATE).

Optionally, instead of columns for each operation, you may find just the following columns:

  • MOD_USER_ID
  • MOD_TIMESTAMP
  • MOD_OPERATION
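The single-column variant above can be sketched with one trigger that records every operation. This is a minimal, illustrative example: the ORDERS table and its ORDERS_AUD audit log table are hypothetical names, not part of any standard schema.

```sql
-- Sketch: one audit trigger covering all three operations, assuming a
-- hypothetical ORDERS table and an ORDERS_AUD log table with MOD_ columns.
CREATE OR REPLACE TRIGGER orders_audit
   BEFORE INSERT OR UPDATE OR DELETE ON orders
   FOR EACH ROW
BEGIN
   IF INSERTING THEN
      INSERT INTO orders_aud (order_id, mod_user_id, mod_timestamp, mod_operation)
      VALUES (:NEW.order_id, USER, SYSDATE, 'INSERT');
   ELSIF UPDATING THEN
      INSERT INTO orders_aud (order_id, mod_user_id, mod_timestamp, mod_operation)
      VALUES (:OLD.order_id, USER, SYSDATE, 'UPDATE');
   ELSE
      INSERT INTO orders_aud (order_id, mod_user_id, mod_timestamp, mod_operation)
      VALUES (:OLD.order_id, USER, SYSDATE, 'DELETE');
   END IF;
END;
/
```

The INSERTING, UPDATING, and DELETING predicates let a single trigger distinguish which DML statement fired it.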

The trigger-based approach may be best illustrated by PL/Gen, a code-generation utility written by PL/SQL author and instructor Steven Feuerstein. The following is an example of a template that may be used to generate audit-trigger code with PL/Gen:

   DROP TABLE [objname]_aud;

   CREATE TABLE [objname]_aud (
      [FOREACH]col
      [colname]_o [data_declaration],
      [ENDFOREACH]
      [FOREACH]col
      [colname]_n [data_declaration],
      [ENDFOREACH]
      [FOREACH]col
      [colname]_f CHAR(1),
      [ENDFOREACH]
      created_on DATE,
      created_by VARCHAR2(30)
   );

   CREATE OR REPLACE TRIGGER [objname]_upd_audit
      AFTER UPDATE ON [objname]
      FOR EACH ROW
   DECLARE
      audit_rec [objname]_aud%ROWTYPE;
   BEGIN
      [FOREACH]col
      IF :OLD.[colname] != :NEW.[colname] OR
         (:OLD.[colname] IS NULL AND :NEW.[colname] IS NOT NULL) OR
         (:OLD.[colname] IS NOT NULL AND :NEW.[colname] IS NULL)
      THEN
         audit_rec.[colname]_f := 'Y';
         audit_rec.[colname]_n := :NEW.[colname];
         audit_rec.[colname]_o := :OLD.[colname];
      ELSE
         audit_rec.[colname]_f := 'N';
      END IF;
      [ENDFOREACH]

      INSERT INTO [objname]_aud VALUES (
      [FOREACH]col
         audit_rec.[colname]_o,
      [ENDFOREACH]
      [FOREACH]col
         audit_rec.[colname]_n,
      [ENDFOREACH]
      [FOREACH]col
         audit_rec.[colname]_f,
      [ENDFOREACH]
         SYSDATE,
         USER
      );
   END;
   /

Another approach to auditing is the AUDIT statement (provided by Oracle since version 7.x). Setting the AUDIT_TRAIL and AUDIT_SYS_OPERATIONS parameters in the initialization file enables this option. The AUDIT-based approach is useful for all-pervasive, sure-fire (but blind) auditing. You can also audit administrator operations using the AUDIT statement. Such auditing is transaction independent. You can query the system views DBA_STMT_AUDIT_OPTS, DBA_PRIV_AUDIT_OPTS, and DBA_OBJ_AUDIT_OPTS for reporting on the auditing options in use. You can find out the default auditing options by querying ALL_DEF_AUDIT_OPTS.
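A minimal sketch of the AUDIT-based approach follows. The scott.orders table is a hypothetical example; the parameter values shown are one common configuration, not the only valid one.

```sql
-- Enable the audit trail via initialization parameters (takes effect
-- after an instance restart); values here are illustrative.
ALTER SYSTEM SET audit_trail = db SCOPE = SPFILE;
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;

-- Audit every INSERT, UPDATE, and DELETE on a hypothetical table.
AUDIT INSERT, UPDATE, DELETE ON scott.orders BY ACCESS;

-- Report on the object auditing options currently in effect.
SELECT * FROM dba_obj_audit_opts WHERE object_name = 'ORDERS';
```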

The trigger-based approach is applicable for specific, conditional, and intelligent auditing. The rules of transactions govern auditing with custom triggers. System-level triggers may also be used to audit usage, especially the administrator's or DBA's actions, or actions that a user might employ to tamper with application security and data integrity. A full comparison between custom auditing (using triggers) and system auditing (provided by Oracle) is available in the Oracle documentation in the Application Developer's Guide - Fundamentals.

Auditing large tables may result in a drop in performance. Before rolling out an auditing scheme, you should test the implications on response time.

Database Design Issues
For audit trails to be effective and useful, the following database design issues must be considered:

  • User identifiers
  • Schema design
  • Date and time handling
  • Bulk data transfer
  • Binary data storage
  • Archiving and restoring

User Identifiers
The database codes that identify users should be unique, not only when the user is active on the system but up until the records are retained. These identifiers (typically USER_IDs) should never be reallocated or deleted. USER_IDs that are not in use may be marked as retired or inactive.
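One common way to honor this rule is a status flag: a retired user is marked inactive rather than deleted. The APP_USERS table and STATUS column below are hypothetical names used for illustration.

```sql
-- Retire a user without deleting the identifier, assuming a hypothetical
-- APP_USERS table with a STATUS column.
UPDATE app_users
   SET status = 'INACTIVE'
 WHERE user_id = 'MNS-9953';

-- Never do this: it orphans the audit trail entries for that user.
-- DELETE FROM app_users WHERE user_id = 'MNS-9953';
```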

Clear policies about how to deal with records that were ‘owned’ by retired (or inactive) users should be established during the design stage. With long-term data retention policies and archives of records, the software system should provide explicit functions to reassign the retired (or inactive) user’s ‘work item’ to another user, since the original user may not be around to perform the operations.

Using a running sequence of numbers to allocate USER_IDs is not a good idea. With mergers and acquisitions, contract operations, and outsourcing, you cannot guarantee the uniqueness of the numbers. Cases where USER_IDs in two organizations are the same render audit trails worthless.

Appending a location or department in the USER_ID field is also a bad idea. For instance, the USER_ID UK-12326 may change when the employee transfers to another location. User location is clearly a separate attribute and should have no dependency on the USER_ID attribute. Also, the latter part of the identifier (12326) should never be subject to rollover. A better alternative may be the user's initials followed by the employee code (e.g., MNS-9953) or his or her joining date (e.g., MNS-20040512).

The USER_ID assigned to a user should never be updated. (In general, avoid updating primary keys altogether.) Systems with long data-retention policies (10-15 years) do not keep all their data online at all times. They archive some data. If the system allows such updates, only the records currently in the database get updated; the records that have been archived would still carry the old USER_ID.

The organization may use (or may decide to migrate to) authentication infrastructures like LDAP, NTLM, NIS, or Active Directory. System designers must take such possibilities into account and keep user identifiers in a format that is compatible with these technologies. The record management system should also blend into such infrastructures.

The USER_ID is also the key to attestation or electronic signatures. All attestation actions should be audited with a reason. Without this, the action will not be treated as an attestation and will not have the validity of a paper signature.

Schema Design and the Audit Trail
System designers need to consider reporting requirements for the audit data. When you de-normalize tables, you end up with columns that don’t really belong in the tables they occupy. When such tables get updated (only in these de-normalized columns), an audit trail shows up for the table. Users are likely to get confused when viewing reports from this audit trail.

A similar situation exists for fields that contain summary values. For example, if the LIBRARY_USER table has a BOOKS_ISSUED_SO_FAR column, it is subject to updates on every book issue. This generates an unnecessarily large audit trail for the LIBRARY_USER table. Such a column may be required to take care of restrictions like “books issued per user should be limited to 6”.

Audit trails in tables that are the result of many-to-many relationships tend to grow faster than audit trails of other tables. Reporting on such tables is also a tricky affair. For example, group membership changes to USERS, GROUPS, and GROUP_USERS tables will be reflected in the audit table for GROUP_USERS.

Date and Time Handling
Dates have always been a tricky issue with database applications. Books that deal with SQL always have a chapter or two dealing with dates and their difficulties in logic and operations. Dates are also important when considering audit trails and record management. Applications should be designed to correctly capture the date and time of an operation, as well as when the data was manipulated.

Date formats should remain uniform throughout the lifetime of the application, or else additional logic has to be built in when manipulating data. The date formatting should be unambiguous, say, YYYY-MM-DD. If a timestamp is also involved, it too should follow a uniform standard throughout the application. Applications where date formats changed for some reason, say, Y2K, require extra testing to ensure that historical data archived prior to the format change can be correctly imported into the system again. For the sake of reporting and consistency, the audit trail should consistently use the same time zone, across all the years the system has been in operation.
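The points above can be sketched with explicit format masks and a fixed time zone. This is one possible convention (UTC), not a mandate:

```sql
-- Render audit dates in one unambiguous format, and keep the audit trail
-- in a single time zone (UTC here) regardless of session settings.
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') AS audit_date,
       TO_CHAR(SYS_EXTRACT_UTC(SYSTIMESTAMP),
               'YYYY-MM-DD HH24:MI:SS') AS audit_ts_utc
  FROM dual;
```

Relying on explicit format masks, rather than the session's NLS_DATE_FORMAT, is what keeps the output uniform across clients.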

Some PDA-based (and some PC-based) applications support an offline mode where data entry takes place on the PDA and then the PDA data and the system data merge (or synch). In such cases, it is better to have multiple columns for RECORD_DT and MERGE_DT. This ensures that reports fired prior to the merge and after the merge are consistent.
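The two-column approach might look like the following sketch; the READINGS table and its columns are hypothetical names for illustration.

```sql
-- Separate capture and merge timestamps for offline-entry data.
CREATE TABLE readings (
   reading_id   NUMBER PRIMARY KEY,
   reading_val  NUMBER,
   record_dt    DATE NOT NULL,  -- when the value was captured on the PDA
   merge_dt     DATE            -- when the row was synched into the system
);
```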

Bulk Data Transfer
In an electronic record management system, the audit trail is as integral a part of the data as the data itself. Not only must audit trail information be correctly transferred, but (if the system specification so demands) the auditing operations should also be performed even in bulk transfer modes. When data is transferred to or imported from another system, provisions must be made for transferring audit information at the same point in time as the rest of the data as well. Because Fine Grained Audit Control (FGAC) interferes with export/import operations, the user who performs these operations must have privileges to access all the data in the system.

The manner in which audit trail information (and the data) is transferred should be consistent with the original NLS settings. All character string data, number data, and dates should be transferred in a manner that does not cause corruption. In Oracle, direct-path operations bypass Oracle Label Security and constraints, and NLS translation happens at the client.

Binary Data Storage
When designing an application that stores binary data (like PDF files), the choice is between BFILEs and BLOBs. Because BFILEs don’t reside in the database, revisioning is implemented on the file system itself. For audit trails (and data integrity), BLOBs are preferable to BFILEs.

Dealing with BLOBs has its own issues, though (not really related to audit trails, but related to record management in general). One issue relates to using SQL*Loader to load BLOBs. Failure to load a LOB does not reject the record. Instead, it results in an empty LOB in the record column. An integrity check should follow an SQL*Loader operation.
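Such an integrity check can be a simple query for empty or null LOBs. The DOCUMENTS table and DOC_BLOB column below are hypothetical names.

```sql
-- After a SQL*Loader run, flag rows whose LOB column ended up empty
-- (zero length) or null, assuming a hypothetical DOCUMENTS table.
SELECT doc_id
  FROM documents
 WHERE doc_blob IS NULL
    OR DBMS_LOB.GETLENGTH(doc_blob) = 0;
```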

Also, even for minor changes, you need to store the complete binary object again, resulting in much larger space usage.

Archiving and Restoring
Any system that maintains records usually comes with a utility to archive records that are not required for day-to-day operations. Various archiving strategies exist, such as the following:

  • Retaining only summary records and archiving detail records
  • Retaining the current version and archiving previous entries
  • Retaining records for the quarter and archiving those in previous quarters

The archival utility may also be used to archive the audit log table. Archiving programs should always archive a consistent state of the database records. Depending on the duration of the archiving program's run and the requirement to allow simultaneous normal operations, the developer may choose to set the TRANSACTION ISOLATION LEVEL. If the application runs 24×7, it is advisable to SET TRANSACTION ISOLATION LEVEL SERIALIZABLE and opt for SET TRANSACTION ISOLATION LEVEL READ ONLY only as a last resort. The serializable mode will allow inserts/updates/deletes, while the read-only mode will prevent any updates.
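An archiving run in serializable mode might look like the following sketch. The ORDERS and ORDERS_ARCHIVE tables and the three-month cutoff are hypothetical choices for illustration.

```sql
-- The serializable transaction sees one consistent snapshot of the data
-- while still being allowed to delete the rows it has archived.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

INSERT INTO orders_archive
   SELECT * FROM orders
    WHERE order_dt < ADD_MONTHS(SYSDATE, -3);

DELETE FROM orders
 WHERE order_dt < ADD_MONTHS(SYSDATE, -3);

COMMIT;
```

Because both statements run inside one serializable transaction, the set of rows copied and the set of rows deleted are guaranteed to match, even while normal operations continue.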

As with archiving, the restore should also ensure that the system transports from one consistent state to another consistent state. This includes restoring it with the audit trail too. Restoring only part of the data compromises the integrity of the data in the system.

Application Design Issues
To get the most from audit trails, the following application design issues must be considered:

  • The right username
  • Comparing binary data
  • Security features
  • Archiving and restoring
  • Migration

Audit Trails with the Right Username
Typically, Oracle stored procedures that execute application logic and processing take an input parameter for the user name. This is mainly to tide over issues like INVOKER and DEFINER rights. The username passed as a parameter is recorded in the audit trail for operations.

When the computer directly captures data from a data source (like a log file) or from a sensor, the audit trail for the table must report a special user called, say, APPLICATION. The same applies for data that is imported. The data should correctly indicate that a user called IMPORT or APPLICATION is importing it.
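One way to make this convention hard to get wrong is to default the username parameter in the stored procedure. The LOG_READING procedure and READINGS_AUD table below are hypothetical:

```sql
-- Sketch: automated callers pass no username and are recorded as
-- APPLICATION; interactive callers pass the acting user's USER_ID.
CREATE OR REPLACE PROCEDURE log_reading (
   p_reading_val IN NUMBER,
   p_user_id     IN VARCHAR2 DEFAULT 'APPLICATION'
) AS
BEGIN
   INSERT INTO readings_aud
      (reading_val, mod_user_id, mod_timestamp, mod_operation)
   VALUES
      (p_reading_val, p_user_id, SYSDATE, 'INSERT');
END;
/
```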

When a job is triggered using DBMS_JOB or an OS-based scheduler, the audit trail should report these operations as being carried out by APPLICATION. However, when a specific user carries out the configuration of the job, and (from the system designer’s viewpoint) the job runs on behalf of the user, the audit trail should report the user who configured the job.

A tricky situation arises when an automatic state change operation needs to be audited. Consider a case where a person accumulates some additional frequent flyer miles and the traveler is upgraded to the next level. The system carries out such a change automatically. Again, in such cases, the audit trail should report the upgrade operation as being carried out by APPLICATION rather than the person at the check-in counter. The check-in transaction will continue to have the audit record with the name of the person at the check-in counter.

Comparing Binary Data in Audit Trails
When storing binary data, reporting what changed between versions of the record is a challenge. It is difficult for a user to visually compare them or for a reporting engine to point out changes. Often, third-party utilities or components are required to compare the differences between, say, two Excel sheets or two Visio diagrams. The same is true when encrypted data is involved.

Application Security Features
Applications can make use of Oracle resource profiles to ensure security features like idle connection time (disconnect after n minutes of inactivity) and password management features like grace time, reuse time, and lock time. This way, you can age passwords, mandate change, and enforce security.
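An illustrative profile is sketched below; the profile name, user name, and limit values are hypothetical and should be tuned to local policy.

```sql
-- Disconnect idle sessions and enforce password aging and lockout.
CREATE PROFILE app_user_profile LIMIT
   IDLE_TIME             30     -- minutes of inactivity before disconnect
   PASSWORD_LIFE_TIME    90     -- days before a password expires
   PASSWORD_GRACE_TIME   7      -- grace period (days) after expiry
   PASSWORD_REUSE_TIME   365    -- days before a password may be reused
   FAILED_LOGIN_ATTEMPTS 5      -- failures before the account locks
   PASSWORD_LOCK_TIME    1;     -- days the account stays locked

ALTER USER app_user PROFILE app_user_profile;
```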

Applications should provide for creating and assigning privileges such that a distinction is made between a data-entry operator, a reviewer, and an updater. USER_ROLES, USER_PRIVILEGE, and USER_GROUPS are typical application mechanisms that satisfy this requirement.

Every database application has certain utilities that typically bypass some aspect of system security. Running these applications should be governed by clear procedural guidelines.

Many applications work with a single database user. Access to this user’s password should be restricted. Applications should evolve a mechanism that eliminates the requirement to hard-code this password, allowing it to be either a dynamic password (valid only for a time period) or a system-generated one based on some key that may be changed. The same applies to the reserved/system user within the application.

During the application logon process, the network traffic between the application and the database server should not be vulnerable to sniffing that would expose data.

Archiving and Restoring
Archiving and restoring are not just about audit trails, but they deal with the larger issue of electronic records. All archival utilities must be designed to archive metadata along with the data itself. Such utilities should also record which records were archived to validate all restoration operations. The archived data set should be accompanied by a checksum to ensure that it can be restored after a validity check. The checksum and the record of the archival operation can be used to determine whether the restore should be allowed or not. Checking against the record of archival operations may be relaxed if restoring is a means to perform the pre-loading of data after an install, but this is not a recommended practice.
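The checksum-then-verify cycle can be as simple as the following sketch. The archive file name is hypothetical, and sha256sum is one of several suitable checksum tools.

```shell
# Stand-in for a real archive file, created here for demonstration.
echo "archived records" > records_2004Q1.dmp

# At archive time, record a checksum alongside the archive.
sha256sum records_2004Q1.dmp > records_2004Q1.dmp.sha256

# Before restoring, verify the archive against the recorded checksum;
# a restore should proceed only if this check reports OK.
sha256sum -c records_2004Q1.dmp.sha256
```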

A critical issue in restoring old data is backward compatibility. Archived data must be in a format that is guaranteed to be readable throughout the system's years of operation. This very definition precludes binary formats, as a number of issues can crop up (e.g., 16-bit, 32-bit, and 64-bit compatibility, endianness, etc.). CSV and SYLK are two formats that have largely withstood the passage of time. XML is now emerging as the successor to these formats, especially in newer applications. In all cases, however, the format descriptions should be available, and application vendors should be mandated to include backward compatibility checks in their testing cycles and validation processes.

Such a validation process is very important because de-support is a potent mechanism that software and hardware vendors use these days to force customers to upgrade. Under no circumstances should a customer be forced to install and use a specific version of software just as a go-between to read the archived data. Backward compatibility of archived data is a promise that should be adhered to across all versions of the software, not just supported versions.

Application Migration
Applications usually provide a special mode called repair/maintenance mode in which the IT staff may run an executable file (or a script) that places the database and the application in a repair mode. They then can perform some modifications to the data (again, by means of some scripts/repair utilities) and restore the mode to normal. All runs of such scripts/utilities should be recorded.

Application and Oracle migration should be carried out in a regulated manner with records maintained for all the actions scripts perform. The site DBA should formally review these scripts for intent, mistakes, and effects on data integrity.

Deployment/Procedural Issue Checklist
This article has looked at some of the issues and precautions that must be taken to ensure that audit trails and data integrity lead to useful and usable data. The guidelines it has presented will help establish compliance with the various legal frameworks for Oracle-based database applications.

The following checklist helps address the deployment and procedural issues one might face when applying the guidelines:

  • A client/server or multi-tier application must always rely on the Database Server time as the time for the audit trail. The time across nodes and PCs also must be synchronized.
  • Physical access to the systems and the storage media where archives and backups are stored should be restricted.
  • Transfer of data to a third party, test system, or contractor should be done after certain obfuscation.
  • Maintenance of the system (either remotely or on-site) by application vendors should be performed under supervision and in the presence of the internal staff with complete records of activities.
  • The IT staff should play a minimal role in normal system operations. Most systems come with their own roles and security mechanisms. The IT staff should be granted only the minimal possible set of permissions for the application. They are the people who have superuser access to the OS and the database, and granting them rights to modify data in the application may lead to situations where system and data integrity can be compromised.
  • Regular database health-check scripts can prevent catastrophic errors that would result in data loss. DBAs usually configure periodic runs to check the parameters and also configure alerts to prevent situations such as disk space issues.
  • Alert logs and OS error logs should be periodically analyzed for errors like ORA-3113 and ORA-600. The possibilities of data loss or data inconsistency must be ruled out.
  • Often, administrators disable constraints and drop triggers before performing bulk data-transfer operations. The system designer should clearly mention policies to deal with such situations in the Administration guide.
  • IT staff and DBAs must recognize the importance of triggers and constraints in data integrity. As such, operations should not be permitted with invalid triggers and disabled constraints. The health-check scripts should also detect these conditions.
  • Utilities and reports must be developed to analyze the audit trail and, at times, formulate an automatic response (like an e-mail, a page, or an SMS alert).
  • Storage of audit trails should also be governed by a policy for longevity and active access.
  • The reserved Oracle users should have secure passwords, changed from the standard ones. Only the required users and roles must be enabled. (See CIRT.net’s default passwords page for more details.)
  • Backups are usually the safest way to back out of failed or aborted Oracle or application migrations.
  • A provision for emergency access to the data is a must. Applications may come with ‘become superuser’ scripts (or executables) that would enable the IT staff to access the application with full rights. All runs of this ‘become superuser’ script should be logged.
  • IT staff, not system users, should always carry out password management.
  • Password reset requests should not just change the password to a stock known password or a random system-generated password. They should also force the user to change his or her password during the next logon.
