
Execute EJB JUnit Tests in Your Deployed Apps

Testing is an integral part of any development process. JUnit, an open-source, regression-testing framework written by Erich Gamma and Kent Beck, is a popular tool for building Java unit tests. JUnit provides a basic framework for developers to structure tests and test suites (test collections). Once in place, developers can use these tests for regression testing, either locally on their workstations or during the build and deployment processes.

Sometimes, however, tests are developed in a way that precludes them from being executed against a real deployed application. They run well enough inside local IDEs against test servers, but they won’t run in user acceptance test (UAT) or production-like environments. Test developers can end up making assumptions about the environment that limit the effectiveness and scope of their regression tests:

  • Tests rely on direct access to the DBMS.
  • JUnit test runner classes cannot be executed on the target deployment box because the box is inaccessible.
  • JUnit test runners can be used, but tests rely on access to application resources such as datasources that may not work outside the container.
  • Tests are not designed cleanly and leave unwanted data debris in the database after execution.

All of these contributing factors prevent tests from being automated. However, all is not lost. You can enable your tests to be executed without a lot of code refactoring or retooling. By building a simple JUnit test execution service that runs inside your application, you can support in-container test execution via the command line, over HTTP, or even expose test execution as a Web service. Rather than running tests standalone using JUnit or IDE tools, you can execute them inside a deployed application.

An EJB-Backed Test Service

Think of your test service as any other application business service. It should provide a simple, securable mechanism for executing JUnit tests and communicating the results. By using a stateless session bean to implement your service, you can leverage the transaction semantics of the container to tear down your data once your tests are done. That way, you don’t have to worry about test data messing up your system.



Problem: My JUnit tests run fine in development, but they won't run standalone against a real deployment.

Solution: Learn how a simple EJB JUnit test service can facilitate test execution in real deployment environments, not just on developer PCs.

Step 1: Define Your JUnit Test Service.

Define a basic Java class called TestService for executing JUnit tests. For simplicity’s sake, assume that the name of the test class and any required parameters are passed in via a map. Your service should contain a method for executing a test and returning the results. In this case, it returns the results inside a TestResultCollection class (which a later step explains).

Inside your executeTest() method, load the JUnit test class and create a JUnit TestSuite. The TestSuite will automatically inspect your class and figure out all the test cases and test methods that need to be run:

public class TestService {

    public TestResultCollection executeTest(String testClass, Map params) {
        // Create Test
        TestSuite suite = loadTestSuite(testClass);
        ...
    }

    protected TestSuite loadTestSuite(String testClass) {
        try {
            Class c = Class.forName(testClass);
            TestSuite suite = new TestSuite(c);
            return suite;
        } catch (Exception e) {
            throw new ApplicationException(e);
        }
    }
}

If an error occurs during class loading, you can propagate this back using some sort of standard application exception. This example communicates problems using a RuntimeException called ApplicationException.
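
The ApplicationException class itself isn't part of JUnit or shown in the article's listings. If your application doesn't already define one, a minimal sketch is simply an unchecked exception that wraps the original cause (the class name and package here are assumptions):

package au.com.ejbtest;

// Minimal unchecked wrapper exception, assuming no equivalent already exists
// in your application.
public class ApplicationException extends RuntimeException {

    public ApplicationException(String message) {
        super(message);
    }

    public ApplicationException(Throwable cause) {
        // Preserve the original stack trace for callers
        super(cause);
    }
}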

Step 2: Provide a Mechanism to Pass Parameters to Tests.

Sometimes your tests are context sensitive and rely on user-supplied parameters. However, because the test class is class-loaded dynamically, there is no way to pass the parameters into the tests. One technique to manage this is to load any parameters into a ThreadLocal-backed cache, which you can access from within the tests.

If you don’t have one already, define a ThreadLocal-backed ApplicationContext object for storing parameters:

public abstract class ApplicationContext {

    protected static ThreadLocal context = new ThreadLocal() {
        public Object initialValue() {
            return new HashMap();
        }
    };

    public static Map get() {
        return (Map) context.get();
    }

    public static void close() {
        Map map = (Map) context.get();
        map.clear();
        context.set(null);
    }
}

Inside your test service, ensure that the ApplicationContext is populated prior to test execution and cleaned up at the end. You don’t want short-lived parameter data hanging around in memory any longer than necessary:

public TestResultCollection executeTest(String testClass, Map params) {
    try {
        // Initialise context
        ApplicationContext.get().putAll(params);
        // Create Test
        TestSuite suite = loadTestSuite(testClass);
        ...
    } finally {
        ApplicationContext.close();
    }
}

Step 3: Provide a Mechanism to Pass Back Test Results.

When you execute TestSuites and TestCases, JUnit stores the results inside a special JUnit class called TestResult. Unfortunately, this class isn't serializable, and it is tailored for use by JUnit test runners. To pass the results outside the container, you have to define an analogous Data Transfer Object (DTO, as described in the Sun BluePrints patterns) for transferring the result data to the client.

Define a DTO class called TestResultCollection for storing test results. Provide a constructor to initialize a TestResultCollection from a JUnit TestResult instance:

public class TestResultCollection implements Serializable {

    protected List failures;
    protected List errors;
    protected int  runCount;

    public TestResultCollection(TestResult result) {
        failures = new ArrayList();
        errors = new ArrayList();
        copyResult(result);
    }

    protected void copyResult(TestResult result) {
        runCount = result.runCount();
        Enumeration e = result.errors();
        while (e.hasMoreElements()) {
            TestFailure tf = (TestFailure) e.nextElement();
            errors.add(tf.toString());
        }
        e = result.failures();
        while (e.hasMoreElements()) {
            TestFailure tf = (TestFailure) e.nextElement();
            failures.add(tf.toString());
        }
    }
}

Add the logic to your TestService to execute the JUnit TestSuite and return the results inside a TestResultCollection object:

public TestResultCollection executeTest(String testClass, Map params) {
    try {
        // Initialise context
        ApplicationContext.get().putAll(params);
        // Create Test
        TestSuite suite = loadTestSuite(testClass);
        // Execute test
        TestResult result = new TestResult();
        suite.run(result);
        // Copy results into serialisable collection
        return new TestResultCollection(result);
    } finally {
        ApplicationContext.close();
    }
}

Step 4: Wrap Your Test Service Inside an EJB.

You now have a working test service that can be embedded inside your server. The last step is to wrap and expose your test functionality using an EJB that remote clients can access.

Create a stateless session bean called EjbTestRunnerBean. Define one public remote method called executeTest() for executing unit tests. This method is very similar to your existing TestService::executeTest() method, with one difference: it has a tearDown parameter for controlling test data removal.

Inside your executeTest() method, instantiate and delegate test execution to your TestService:

public abstract class EjbTestRunnerBean implements javax.ejb.SessionBean {

    public TestResultCollection executeTest(String testClass, Map params,
            boolean tearDown) {
        TestService service = new TestService();
        TestResultCollection results = service.executeTest(testClass, params);
        ...
        return results;
    }
}
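
The bean class alone isn't callable from remote clients: EJB 2.x also requires a remote component interface and a home interface, which the listings in this article don't show. A minimal sketch, assuming the au.com.ejbtest package used by the client code later on (the home interface name is an assumption):

// Remote component interface (EjbTestRunner.java); mirrors the bean's executeTest().
public interface EjbTestRunner extends javax.ejb.EJBObject {
    TestResultCollection executeTest(String testClass, java.util.Map params,
            boolean tearDown) throws java.rmi.RemoteException;
}

// Remote home interface (EjbTestRunnerHome.java); clients call create() to obtain the bean.
public interface EjbTestRunnerHome extends javax.ejb.EJBHome {
    EjbTestRunner create() throws javax.ejb.CreateException, java.rmi.RemoteException;
}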

Step 5: Support Data Teardown Using Transaction Rollbacks.

To ensure that all data created by your tests during setup or execution is completely removed, regardless of whether the tests succeeded or failed, deploy your EJB executeTest() method as transactional with the Required (TX_REQUIRED) transaction attribute. Your tests will then execute inside a transactional context managed by the container. To tear down your data, rather than relying on individual test cleanup, roll back the transaction at the conclusion of the test: either throw a RuntimeException (which automatically induces a rollback) or throw a checked exception, which you can catch and handle explicitly using SessionContext::setRollbackOnly().
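
If you take the setRollbackOnly() route, the bean can mark the transaction for rollback and still return the results normally. A minimal sketch of that variant, assuming the bean stores the SessionContext it receives in setSessionContext() in a field named sessionContext (not shown in the listings in this article):

// Variant of executeTest() that uses setRollbackOnly() instead of a RuntimeException.
// Assumes a sessionContext field populated in setSessionContext(SessionContext ctx).
public TestResultCollection executeTest(String testClass, Map params,
        boolean tearDown) {
    TestService service = new TestService();
    TestResultCollection results = service.executeTest(testClass, params);
    if (tearDown) {
        // The container rolls back the transaction when the method completes,
        // but the results are still returned to the caller.
        sessionContext.setRollbackOnly();
    }
    return results;
}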

If you throw a RuntimeException out of your session bean method to force a rollback, you still need some way of communicating the test results to the caller. Just because the transaction is rolled back doesn’t mean that your tests failed. One way to pass the results is to throw a special-purpose exception that stores the results.

Define a runtime exception called TearDownException with a TestResultCollection attribute:

public class TearDownException extends RuntimeException {

    protected TestResultCollection results;

    public TearDownException(TestResultCollection results) {
        this.results = results;
    }

    public TestResultCollection getResults() {
        return results;
    }
}

Inside your EJB method, add the logic to throw a TearDownException if the tearDown parameter is set to true:

public TestResultCollection executeTest(String testClass, Map params,
        boolean tearDown) {
    TestService service = new TestService();
    TestResultCollection results = service.executeTest(testClass, params);
    if (tearDown) {
        throw new TearDownException(results);
    }
    return results;
}

In your client calling code, you can differentiate normal exceptions from TearDownExceptions and process the results appropriately.

Step 6: Test Your EJB Test Runner.

The final step is to create some JUnit test cases and a test client to execute them using the EjbTestRunnerBean.

Create a simple JUnit test case that has three test methods: one that fails, one that has an error, and one that succeeds:

public class SimpleTestCase extends TestCase {

    public SimpleTestCase() {
    }

    ...

    public void testOne() {
        fail("Failed!");
    }

    public void testTwo() {
        throw new ApplicationException("Error Broken!");
    }

    public void testThree() {
        Date currentDate = (Date) ApplicationContext.get().get("Current Date");
        assertNotNull(currentDate);
        System.out.println("Success! current date is " + currentDate);
    }
}

Ensure that your test class is deployed to your application server along with JUnit.

Create a simple command-line tester for executing SimpleTestCase. Resolve the EJB from JNDI and then call the executeTest method, passing in the fully qualified class name of SimpleTestCase and any required parameters. You can either hardcode the test class or pass it in from the command line; the same goes for test parameters:

public class EjbTestRunnerClient {

    ...

    public void testBean() throws Exception {
        au.com.ejbtest.EjbTestRunner myBean = getHome().create();

        Map params = new HashMap();
        params.put("Current Date", new Date());
        TestResultCollection results = myBean.executeTest(
                "au.com.ejbtest.tests.SimpleTestCase", params, false);

        printResults("au.com.ejbtest.tests.SimpleTestCase", results);
    }

    public static void main(String[] args) throws Exception {
        EjbTestRunnerClient test = new EjbTestRunnerClient();
        test.testBean();
    }
}
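
The getHome() helper used above isn't shown in the article's listings. A minimal sketch, assuming the EjbTestRunnerHome interface from Step 4, a JNDI binding of "ejb/EjbTestRunner" (server specific), and JNDI environment settings supplied via a jndi.properties file or system properties:

// Hypothetical helper: resolves the remote home interface from JNDI.
// The JNDI name and home interface are assumptions; adjust them for your server.
protected EjbTestRunnerHome getHome() throws Exception {
    javax.naming.Context ctx = new javax.naming.InitialContext();
    Object ref = ctx.lookup("ejb/EjbTestRunner");
    return (EjbTestRunnerHome) javax.rmi.PortableRemoteObject.narrow(
            ref, EjbTestRunnerHome.class);
}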

If your EJB is deployed successfully and you execute your test client, you should see something like this in your console:

TEST RESULTS FOR TEST (au.com.ejbtest.tests.SimpleTestCase) Run Count(3) Failures(1) Errors(1)
testOne(au.com.ejbtest.tests.SimpleTestCase): Failed!
testTwo(au.com.ejbtest.tests.SimpleTestCase): Error Broken!
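
The printResults() helper that produces this output isn't shown in the listings either. A minimal sketch, assuming you add getRunCount(), getFailures(), and getErrors() accessors to TestResultCollection (they aren't in the class as listed above):

// Hypothetical helper: prints a one-line summary followed by each failure and error.
// Assumes TestResultCollection exposes getRunCount(), getFailures(), and getErrors().
protected void printResults(String testClass, TestResultCollection results) {
    System.out.println("TEST RESULTS FOR TEST (" + testClass + ")"
            + " Run Count(" + results.getRunCount() + ")"
            + " Failures(" + results.getFailures().size() + ")"
            + " Errors(" + results.getErrors().size() + ")");
    java.util.Iterator it = results.getFailures().iterator();
    while (it.hasNext()) {
        System.out.println(it.next());
    }
    it = results.getErrors().iterator();
    while (it.hasNext()) {
        System.out.println(it.next());
    }
}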

If you elect to tear down your data, pass in true for the tearDown parameter. This means that any test results will be thrown back inside a TearDownException. Add a catch clause to your client code to unwrap any RemoteExceptions, and handle the results normally in the case of a TearDownException. The test results should be the same whether tearDown is true or false:

try {
    au.com.ejbtest.EjbTestRunner myBean = getHome().create();

    Map params = new HashMap();
    params.put("Current Date", new Date());
    TestResultCollection results = myBean.executeTest(
            "au.com.ejbtest.tests.SimpleTestCase", params, true);

    printResults("au.com.ejbtest.tests.SimpleTestCase", results);
} catch (ServerException se) {
    // Unwrap the real exception
    Throwable t = se.getCause().getCause();

    if (t instanceof TearDownException) {
        // OK: just process normally and print the results
        TestResultCollection results = ((TearDownException) t).getResults();
        printResults("au.com.ejbtest.tests.SimpleTestCase", results);
    } else {
        se.printStackTrace();
    }
}

EJB-Enabled Unit Test Execution

An in-container test service lets you leverage your existing JUnit investment. Your data can be seamlessly removed at the end of your tests; you don't have to rely on inconsistent teardown methods. Tests can be executed continually from development to deployment as you move through the development lifecycle. You can even expose your testing functionality externally over HTTP, via an online test screen (JSP), or as a Web service. Your tests are then easily accessible and produce highly visible results.
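
For example, a thin servlet front end could delegate to the same EJB over HTTP. The sketch below is hypothetical: it assumes the EjbTestRunnerHome interface from Step 4, the JNDI lookup shown in the client sketch, and the TestResultCollection accessors assumed earlier, and it handles only the no-teardown case for brevity:

import java.io.IOException;
import java.io.PrintWriter;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that runs a named test class via the EJB test runner.
public class TestRunnerServlet extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String testClass = request.getParameter("testClass");
        try {
            EjbTestRunner runner = getHome().create();
            Map params = new HashMap(); // populate from request parameters as needed
            TestResultCollection results =
                    runner.executeTest(testClass, params, false);

            response.setContentType("text/plain");
            PrintWriter out = response.getWriter();
            out.println("Run Count: " + results.getRunCount());
            out.println("Failures:  " + results.getFailures().size());
            out.println("Errors:    " + results.getErrors().size());
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }

    // Mirrors the JNDI lookup from the client sketch; the JNDI name is an assumption.
    protected EjbTestRunnerHome getHome() throws Exception {
        javax.naming.Context ctx = new javax.naming.InitialContext();
        Object ref = ctx.lookup("ejb/EjbTestRunner");
        return (EjbTestRunnerHome) javax.rmi.PortableRemoteObject.narrow(
                ref, EjbTestRunnerHome.class);
    }
}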

An in-container EJB test service won’t solve all your problems, however. Unit tests specific to your servlet tier should be executed in the servlet tier or an analogous environment rather than inside an EJB. Open-source projects such as Cactus and Spring offer alternative frameworks.
