Of the many challenges in creating, distributing, and obtaining consensus for documentation, I find the chief one is keeping the statements in the documentation synchronized with changing business conditions, which shift under competitive forces in the marketplace (sales and acquisitions) or under regulatory changes.
Typically, someone above your head promulgates a change that demands your product comply with a new specification (e.g., Sarbanes-Oxley, HIPAA, or an acquisition). The first challenge is keeping that specification at hand and, if possible, stored alongside your product so that the two stay related.
The second challenge is similar but more complicated. The documentation preceding development is either foundational or prospective. The documentation supporting a report comes after the fact, so instead of guiding goals and behavior, it's used to remember the assumptions you applied to create an ad-hoc report. If the report is canned, then changes to its lineage are well understood, because it probably was run from a stored procedure, a stored job, or a DTS package. Since you can follow the chain out to the presentation layer (Crystal or Reporting Services), you can understand what created the report. If the report was in response to a special situation like an acquisition or a regulatory change, however, chances are there is no code base that represents the new problem. So you develop your solution in Query Analyzer (QA), save its text to VSS, and send your work to senior management. This presents another wrinkle in the synchronization problem: the work you did to create the result gets separated from the result. You then need to remember, or create a naming convention that helps you remember, how things were related at the time of the report.
Solutions to this problem range from version control software and COTS help-file generators all the way to homegrown documentation engines that work a lot like Javadoc (i.e., read tags in the source and create HTML from them). (As an aside, did ansi.sys and a batch file ever equate to IIS and HTML? End of aside, with apologies to Date <g>.)
The problem has also been broached from other directions, such as full-text indexing. These tools either help control code changes or help you find what is already there, but they do not help maintain the link between an ad-hoc report result and the code that produced it.
This 10-Minute Solution demonstrates how I took advantage of MS SQL Server's extended properties, storage areas attached to schema objects in which you can store strings specific to the table and column you want to document. You can name and describe them according to your own convention, but you have to fill in some required fields in the property definition (the names of the table and column, and whose schema they belong to anyway...grist for the mill, so the properties are associated with the right database object; see Table 2. String Creation Elements). By using extended properties, you can name and describe your objects according to the source specification, with minimal loss of fidelity to the topic. (You can consider other things to do with these strings later, like maybe executing them.)
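As a minimal sketch of attaching such a property, here is the system procedure SQL Server provides for the job, sp_addextendedproperty, run against a hypothetical dbo.Customer table with a TaxID column (the property name and value are illustrative, not prescribed by the technique):

```sql
-- Attach a documentation string to dbo.Customer.TaxID.
-- The level0/level1/level2 parameters are the required fields that
-- tie the property to the right database object. 'user' is the
-- SQL Server 2000 level-0 type; SQL Server 2005 and later use
-- 'schema' here instead.
EXEC sp_addextendedproperty
    @name       = N'HIPAA_Ref',                    -- your naming convention
    @value      = N'Masked per HIPAA privacy rule',-- text from the spec
    @level0type = N'user',   @level0name = N'dbo',
    @level1type = N'table',  @level1name = N'Customer',
    @level2type = N'column', @level2name = N'TaxID'
```

Companion procedures sp_updateextendedproperty and sp_dropextendedproperty take the same addressing parameters, so the property travels with the object through later revisions.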
The solution to the second problem, documenting ad hoc report source code, is to take advantage of the comment box in Excel: take the contents of the QA screen, copy them to the clipboard, and paste them into the comment. This lets you store the assumptions used to create a result with the result, so that they stay together and document exactly which filters were applied to the data to generate the result.
How do I maintain the synchronization between my documentation, database objects, and the code that produced a particular result?
Leverage SQL Server's extended properties (storage areas attached to schema objects in which you can store strings specific to the table and column you want to document) together with MS Excel's comment box.
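Once stored, the property strings can be read back with the system function fn_listextendedproperty; a sketch in SQL Server 2000 syntax, using the same hypothetical dbo.Customer table as an assumption:

```sql
-- List every extended property defined on columns of dbo.Customer.
-- 'default' in the first and last positions matches any property
-- name and any column, respectively.
SELECT objname, name, value
FROM ::fn_listextendedproperty(default, 'user', 'dbo',
                               'table', 'Customer', 'column', default)
```

Because this is an ordinary rowset, you can join it into a query or copy it straight into the Excel workbook alongside the report it documents.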