To briefly recap the situation I described in “All the Changes, All the Time: Part I“, my assignment was to capture all the changes being made in one database and, upon approval by an administrator, apply those changes to another copy of the database in production. In Part I, I described how I created triggers to write an audit trail of all changes to a central table. In this 10-Minute Solution, I demonstrate how to use dynamic SQL to process those changes and apply them to the second database.
What Is Dynamic SQL?
Using SQL Server, you can populate a string with a proper SQL statement and then tell SQL Server to parse and execute the string. You can therefore dynamically change the command being passed to SQL Server by changing the value of the string. Following is a simple and rather useless snippet of code that will nevertheless illustrate this point before I go on to more complicated examples:
    use pubs
    go
    create procedure dynamic_demo (@tablename varchar(50))
    as
        exec ('select * from ' + @tablename)
    go

    dynamic_demo authors
SQL Server will dynamically return a different result set based on the parameter you pass to the procedure.

The Central Audit Table
Table 1 shows the fields in the audit log table that are relevant to this discussion. In order to move the changes stored in this table to production, I need to traverse each row and, depending on the type of change, dynamically build either an insert, an update, or a delete statement. It’s therefore natural to use a cursor (a rough sketch of that outer loop appears after Table 1). In the databases in question, the primary key (IID) for a table is never updated, so an update or delete statement can easily be formatted. For example, the update statement for any given update change takes the form “update [table_name] set [column_name] = [new_value] where iid = [iid]” (the bracketed items come from columns in the audit table).
Table 1. Audit Log Table Fields

| Column | Type | Description |
| --- | --- | --- |
| IID | Integer | Auto-generated primary key |
| CHANGEDATE | Datetime | Date the change was made |
| COMPANYID | Integer | ID of the company whose information was changed |
| ROWID | Integer | Primary key of the row that was changed (all tables in the database use a single integer column as the primary key) |
| CHANGETYPE | Char(10) | INSERTED, DELETED, or UPDATED |
| FIELDNAME | Varchar(300) | Name of the field that was changed (blank or 0 for inserted or deleted rows) |
| TABLENAME | Varchar(300) | Name of the table that was changed |
| OLD_VALUE | Text | Value before the change |
| NEW_VALUE | Text | Value after the change |
| PROCESSSTATUS | Int | 0 if not yet moved to the production database, 1 if moved |
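Every code fragment that follows runs inside a loop over this audit table. The article shows only the statement-building pieces, so here is a rough, hypothetical sketch of what that surrounding cursor loop might look like; the audit table name (auditlog), the cursor name, and the source/destination database names are my assumptions, not the actual code:

    declare @iid int, @rowid int, @changetype char(10),
            @tablename varchar(300), @fieldname varchar(300),
            @cmd varchar(6000)
    declare @dbsource varchar(50), @dbdest varchar(50)

    -- hypothetical database names used throughout these examples
    select @dbsource = 'editdb', @dbdest = 'productiondb'

    -- 'auditlog' is a placeholder name for the central audit table (Table 1);
    -- walk every change not yet moved to production, oldest first
    declare audit_rows cursor static for
        select iid, rowid, changetype, tablename, fieldname
        from auditlog
        where processstatus = 0
        order by iid

    open audit_rows
    fetch next from audit_rows
          into @iid, @rowid, @changetype, @tablename, @fieldname
    while @@fetch_status = 0
    begin
        -- build @cmd using the insert/update/delete logic shown below
        -- ...

        exec (@cmd)   -- apply the change to the production database

        -- mark the audit row as processed
        update auditlog set processstatus = 1 where iid = @iid

        fetch next from audit_rows
              into @iid, @rowid, @changetype, @tablename, @fieldname
    end
    close audit_rows
    deallocate audit_rows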
Here is the snippet of code that I’ll use to build the update statement:
    if @changetype = 'UPDATED'
    begin
        select @cmd = 'update ' + @dbdest + '..' + @tablename + char(13) +
                      'set ' + @fieldname + ' = new.' + @fieldname + char(13) +
                      'from ' + @dbsource + '..' + @tablename + ' new' + char(13) +
                      'where ' + char(13) +
                      @dbdest + '..' + @tablename + '.iid = ' + cast(@rowid as varchar) + ' and ' + char(13) +
                      'new.iid = ' + cast(@rowid as varchar)
    end
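To make the concatenation easier to follow, here is roughly what @cmd ends up containing for a hypothetical audit row; the table name (authors), column name (phone), row ID (42), and database names (editdb, productiondb) are invented for illustration:

    update productiondb..authors
    set phone = new.phone
    from editdb..authors new
    where
    productiondb..authors.iid = 42 and
    new.iid = 42

Note that the new value isn’t read from the audit table at all; the generated update simply copies the current value of the column from the row in the edit database into the matching row in production.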
Similarly, the delete statement is built with this code:

    if @changetype = 'DELETED'
    begin
        select @cmd = 'delete from ' + @dbdest + '..' + @tablename +
                      ' where iid ' + ' = ' + cast(@rowid as varchar)
    end
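For the same hypothetical row, the generated command would simply be:

    delete from productiondb..authors where iid = 42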
The insert statement is more complicated. It would be simpler if I could just build an insert statement of this form:

    insert into productiondb.dbo.[table_name]
    select * from editdb.dbo.[table_name]
    where iid = [iid]

That simple form won’t work here, though: a “select *” would try to copy every column, but values can’t be assigned to computed columns, and a few columns (changestatus, changetime, and username) are deliberately left out of the copy. The stored procedure below therefore builds the column list explicitly, using the metadata in information_schema.columns:
    create procedure sp_createinsert (
        @dbdest    varchar(50),
        @dbsource  varchar(50),
        @tablename varchar(50),
        @rowid     int,
        @cmd       varchar(6000) output
    )
    as
    set nocount on
    declare @column_name varchar(50),
            @insert varchar(2000),
            @select varchar(2000),
            @where varchar(500),
            @first int,
            @skip1 char(50),
            @skip2 char(50),
            @skip3 char(50)

    -- columns to exclude from the generated insert
    select @skip1 = 'changestatus', @skip2 = 'changetime', @skip3 = 'username'

    -- start building the three pieces of the statement
    select @insert = 'insert into ' + @dbdest + '..' + @tablename + '( '
    select @select = 'select '
    select @where = 'where iid = ' + cast(@rowid as varchar)
    select @first = 1

    -- walk the table's columns in order, skipping the excluded ones
    declare mycolumns cursor static for
        select column_name
        from information_schema.columns
        where table_name = @tablename
          and column_name not in (@skip1, @skip2, @skip3)
        order by ordinal_position

    open mycolumns
    fetch next from mycolumns into @column_name
    while @@fetch_status = 0
    begin
        -- computed columns can't be assigned values, so leave them out
        if COLUMNPROPERTY(object_id(@tablename), @column_name, 'IsComputed') = 0
        begin
            if @first = 1 -- first column shouldn't have a comma before it
            begin
                select @insert = @insert + '[' + @column_name + '] '
                select @select = @select + '[' + @column_name + '] '
                select @first = 0
            end
            else
            begin
                select @insert = @insert + ',[' + @column_name + '] '
                select @select = @select + ',[' + @column_name + '] '
            end
        end
        fetch next from mycolumns into @column_name
    end
    close mycolumns
    deallocate mycolumns

    -- assemble the final command
    select @insert = @insert + ')'
    select @select = @select + char(13) + ' from ' + @dbsource + '..' + @tablename
    select @cmd = @insert + char(13) + @select + char(13) + @where
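A quick usage sketch, with hypothetical parameter values (in the real process the procedure would be called from the cursor loop over the audit table for each INSERTED row):

    declare @cmd varchar(6000)
    exec sp_createinsert
         @dbdest    = 'productiondb',
         @dbsource  = 'editdb',
         @tablename = 'authors',
         @rowid     = 42,
         @cmd       = @cmd output

    -- @cmd now holds something along the lines of:
    --   insert into productiondb..authors( [iid] ,[au_lname] , ... )
    --   select [iid] ,[au_lname] , ...
    --    from editdb..authors
    --   where iid = 42

    exec (@cmd)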
In the end, for each row in the audit table, I generate the proper statement and store it in a variable named @cmd. Then, with a single line of code, exec (@cmd), I am able to move the change to production. You may not have had the time to wade through all the code given in these two 10-Minute Solutions, but hopefully you’ll take away the following ideas:
- The metadata stored in the information_schema views can be used in many situations to automate the creation of insert statements, the creation of triggers, and other repetitive tasks (a small example follows this list).
- Triggers are a very useful device for capturing changes for an audit trail.
- A single central audit table can store the changes made to all the tables in your database, instead of requiring a separate audit table for each one.
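To illustrate the first point, this is the kind of metadata query that sp_createinsert relies on; you can run it against any table (authors is just an example) to see the raw material for generating statements:

    select column_name, data_type, ordinal_position
    from information_schema.columns
    where table_name = 'authors'
    order by ordinal_position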