

Java API Makes Database Metadata as Easily Accessible as POJOs : Page 2

SchemaCrawler is an open-source API that makes working with database metadata as easy as working with plain old Java objects.


Enter SchemaCrawler

The JDBC API is powerful, but using it is daunting, especially for novice programmers. It does not have to be this way. Wouldn't it be nice to have an API that delivered all the following:
  • A Table object holds a collection of Column objects, without requiring you to make additional calls.
  • A method like getType() works the same way whether called on a Column, a Table, or a Procedure object.
  • Table types are enumerations instead of strings that leave you guessing.
  • Lists are always java.util.List.
  • You get the same convenience as with object-relational mapping tools, with no need to worry about resources or exception handling.
  • You can navigate using standard Java methods from a Column object to its containing Table object, or from one Table object to another via a ForeignKey.

SchemaCrawler is just such an API. A free and open-source API available under the LGPL license, SchemaCrawler is written in Java, making it operating system agnostic. Since it leverages JDBC, it is also database independent. It deliberately doesn't have any RDBMS-specific code. That is the reason you won't find any triggers in it; there is no way to get trigger metadata using JDBC.

SchemaCrawler in Action

Take a look at some sample code that demonstrates just how easy it is to use SchemaCrawler. This code assumes that you have a javax.sql.DataSource that is capable of handing out database connections on demand. (If not, SchemaCrawler comes with an implementation of DataSource that you can use.) Getting the database schema is a matter of one single call:

Schema schema = SchemaCrawler.getSchema(dataSource, SchemaInfoLevel.BASIC, new SchemaCrawlerOptions());

It is as simple as that. You decide how detailed the returned schema should be; with the basic schema information level, for example, you get tables and columns but not indexes. The schema information level also determines how long the getSchema call takes. By setting the right SchemaCrawlerOptions, you can filter out certain tables and columns using regular expressions, which are more powerful than SQL LIKE patterns. You can exclude tables of certain types, such as views, from the output schema, and you can sort tables, columns, and stored procedures in various ways.
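As a rough sketch of what such filtering might look like, consider the following. (The option-setter and rule-class names here are illustrative assumptions, not the verified SchemaCrawler API; consult the Javadocs for your version for the exact names.)

```java
// Sketch only: setter names below are assumptions about the
// SchemaCrawlerOptions API, shown to illustrate the idea of filtering.
SchemaCrawlerOptions options = new SchemaCrawlerOptions();

// Keep only tables whose names match a regular expression,
// e.g. everything starting with "CUSTOMER".
options.setTableInclusionRule(new InclusionRule("CUSTOMER.*", ""));

// Restrict the output to base tables, excluding views.
options.setTableTypes("TABLE");

Schema schema =
  SchemaCrawler.getSchema(dataSource, SchemaInfoLevel.BASIC, options);
```

The same options object is also where sorting behavior would be configured.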

Once you have the Schema object, you can iterate through the schema and find all the tables and columns using standard Java idioms:

    Table[] tables = schema.getTables();
    for (int i = 0; i < tables.length; i++)
    {
      Table table = tables[i];
      Column[] columns = table.getColumns();
      for (int j = 0; j < columns.length; j++)
      {
        Column column = columns[j];
        System.out.println("-- " + column);
      }
    }
The class diagram in Figure 1 gives you an idea of the richness of the SchemaCrawler object model. It offers much more than the diagram shows, though. SchemaCrawler provides information about the database system, stored procedures, and column and table privileges as well.
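The object model also supports the navigation described earlier, from a column up to its table and across foreign keys to other tables. A sketch of that traversal might look like the following; the accessor names are assumptions drawn from the class diagram, not verified method signatures.

```java
// Sketch only: accessor names are assumptions based on the object model
// in Figure 1; check the SchemaCrawler Javadocs for the exact API.
Table table = schema.getTables()[0];  // for illustration, take the first table

// Walk each foreign key on this table to find the tables it references.
ForeignKey[] foreignKeys = table.getForeignKeys();
for (int i = 0; i < foreignKeys.length; i++)
{
  ForeignKeyColumnMap[] columnPairs = foreignKeys[i].getColumnPairs();
  for (int j = 0; j < columnPairs.length; j++)
  {
    // Each pair maps a column in this table to the referenced column;
    // the referenced column's parent is the referenced table.
    Column referencedColumn = columnPairs[j].getPrimaryKeyColumn();
    Table referencedTable = referencedColumn.getParent();
    System.out.println(table + " references " + referencedTable);
  }
}
```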


Figure 1. Partial Class Diagram

Extend the API

If the simple code in the previous example is not sufficient for your needs, the SchemaCrawler API provides some well-defined extension points. By implementing the CrawlHandler interface, you can perform your own handling of each table or stored procedure every time SchemaCrawler encounters one. The API also is event driven, similar to the SAX API for XML. By writing your own crawl handler, you can do interesting things such as generating SQL scripts.
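A minimal handler along those lines might look like the sketch below, which emits a DROP TABLE script. (The callback names are assumptions; the real CrawlHandler interface has additional lifecycle methods and may differ by version, so treat this as a shape rather than a drop-in implementation.)

```java
// Sketch only: the CrawlHandler callback names below are assumptions,
// shown to illustrate the event-driven, SAX-like style of the API.
public class DropScriptHandler implements CrawlHandler
{
  public void handle(Table table)
  {
    // Called once for each table as the crawler encounters it.
    System.out.println("DROP TABLE " + table.getName() + ";");
  }

  public void handle(Procedure procedure)
  {
    // This sketch generates no script for stored procedures.
  }
}
```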

SchemaCrawler is packaged as a single jar file; its only external dependencies are the libraries provided by the JDK and, of course, a JDBC driver of your choice.

Now, with all of that, go experiment!

Sualeh Fatehi is a professional Java developer with over 10 years of experience in software engineering. He has a graduate degree in software technology, and a background in construction engineering. He is the author of the SchemaCrawler API.