The JDBC API is powerful, but using it is daunting, especially for novice programmers. It does not have to be this way. Wouldn't it be nice to have an API that handled the heavy lifting of reading database metadata for you?
SchemaCrawler is just such an API. A free, open-source API available under the LGPL license, SchemaCrawler is written in Java, making it operating-system agnostic. Since it leverages JDBC, it is also database independent. It deliberately contains no RDBMS-specific code, which is why you won't find trigger metadata in it; there is no way to get trigger metadata through JDBC.
SchemaCrawler in Action
Take a look at some sample code that demonstrates just how easy it is to use SchemaCrawler. This code assumes that you have a javax.sql.DataSource that is capable of handing out database connections on demand. (If not, SchemaCrawler comes with an implementation of DataSource that you can use.) Getting the database schema is then a matter of a single call:
Schema schema = SchemaCrawler.getSchema(dataSource, SchemaInfoLevel.BASIC, new SchemaCrawlerOptions());
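If you don't have a DataSource handy and just want to experiment, a throwaway implementation backed by java.sql.DriverManager might look like the sketch below. The JDBC URL and credentials are placeholders for your own database; in a real application you would use a pooled DataSource from your container, or the implementation bundled with SchemaCrawler.
// Minimal DriverManager-backed DataSource, for experimentation only
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.logging.Logger;
import javax.sql.DataSource;

public class SimpleDataSource implements DataSource
{
  // Placeholder connection details; substitute those of your own database
  private final String url = "jdbc:hsqldb:mem:test";
  private final String user = "sa";
  private final String password = "";

  public Connection getConnection() throws SQLException
  {
    return DriverManager.getConnection(url, user, password);
  }

  public Connection getConnection(final String user, final String password)
    throws SQLException
  {
    return DriverManager.getConnection(url, user, password);
  }

  // The remaining DataSource methods are simple stubs
  public PrintWriter getLogWriter() { return null; }
  public void setLogWriter(final PrintWriter out) { }
  public void setLoginTimeout(final int seconds) { }
  public int getLoginTimeout() { return 0; }
  public Logger getParentLogger() throws SQLFeatureNotSupportedException
  {
    throw new SQLFeatureNotSupportedException();
  }
  public <T> T unwrap(final Class<T> iface) throws SQLException
  {
    throw new SQLException("Not a wrapper");
  }
  public boolean isWrapperFor(final Class<?> iface) { return false; }
}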
It is as simple as that. You get to decide how detailed you want the returned schema to be: with the basic schema information level, for example, you get tables and columns but not indexes. The schema information level also determines how much time the getSchema call takes. By setting the right SchemaCrawlerOptions, you can filter out certain tables and columns with regular expressions that are more powerful than SQL regular expressions, exclude tables of certain types (such as views) from the output schema, and sort tables, columns, and stored procedures in various ways.
Once you have the Schema object, you can iterate through the schema and find all the tables and columns using standard Java idioms:
Table[] tables = schema.getTables();
for (final Table table : tables)
{
  System.out.println(table);
  Column[] columns = table.getColumns();
  for (final Column column : columns)
  {
    System.out.println("-- " + column);
  }
}
The class diagram in Figure 1 gives you an idea of the richness of the SchemaCrawler object model. It offers much more than the diagram shows, though. SchemaCrawler provides information about the database system, stored procedures, and column and table privileges as well.
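For example, a loop over the stored procedures would look much like the table loop above. The getProcedures accessor and Procedure type shown here are assumptions based on the object model described in this article; verify them against the javadocs.
// Sketch: getProcedures() and Procedure are assumed from the object model described above
Procedure[] procedures = schema.getProcedures();
for (final Procedure procedure : procedures)
{
  System.out.println(procedure);
}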
Extend the API
If the simple code in the previous example is not sufficient for your needs, the SchemaCrawler API provides some well-defined extension points. By implementing the CrawlHandler interface, you can perform your own handling of each table or stored procedure as SchemaCrawler encounters it. The API is event driven, similar to the SAX API for XML. By writing your own crawl handler, you can do interesting things such as generating SQL scripts.
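As a sketch of the idea, a handler that emits a DROP script for every table it sees might look roughly like this. The callback names below (including Table.getName()) are guesses at the general shape of an event-driven handler, not the verified CrawlHandler interface, so check the SchemaCrawler javadocs before coding against it.
// Sketch only: the method names below illustrate the event-driven style;
// the real CrawlHandler interface may differ
public class DropScriptHandler implements CrawlHandler
{
  public void begin()
  {
    System.out.println("-- DROP script generated from the crawled schema");
  }

  public void handle(final Table table)
  {
    System.out.println("DROP TABLE " + table.getName() + ";");
  }

  public void end()
  {
    System.out.println("-- end of script");
  }
}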
SchemaCrawler is packaged as a single jar file; its only external dependencies are the JDK libraries and, of course, a JDBC driver of your choice.
Now, with all of that, go experiment!