The term "automation framework" is widely used in the world of software testing. Most people associate it with technology for UI-based automation, yet the term is frequently misused by those working in the testing arena. This is partly due to a misunderstanding of what an automation framework should do: it should be more than just a thin veneer on top of a UI technology such as Coded UI.
There are many commercial and open source test automation frameworks on the market. The main problems with these frameworks are the learning curve and the difficulty of tailoring them to the requirements of your automation project. Some legacy frameworks are widely used, but they have a number of disadvantages: they lack flexibility, depend on expensive third-party products, or, again, add a thin layer on top of an existing UI technology without providing much additional value.
A good test automation framework should be flexible, application-agnostic (independent of the system under test), technology-agnostic, and future-proof. It should support a structured and modular programming model that is easy to install and use. By "easy to use" we assume and expect that the users are experienced software developers. Automation is development, and the right software engineering skills are necessary to plan and implement an automation suite successfully.
The idea behind developing a UI-based functional automation system using this framework is to increase code reusability, stability, and maintainability. The code should be easy to write, debug, deploy, and run, and any failures that occur should be easy to analyze. Whether you use Ranorex, Coded UI, Telerik, or Selenium as the underlying automation technology, the design and implementation of the automation solution should be the same. The paradigms and patterns that we chose to recommend and enforce are best practices for any development project, but they are particularly useful for UI automation.
We have created automation frameworks for several different projects at our company, and some patterns recurred across all of them. The main problems with these projects stemmed from differences in approach and in reusability. In addition, different teams used different automation tools to access the tested application's functionality, which added to the overall difficulty.
Generally, an automation framework can be defined as a set of assumptions, concepts, and tools that provide support for automated software testing. It has the following functions:
- Defining scripting methodology to automate tests
- Providing mechanism for hooking into SUT (system under test)
- Executing tests and reporting the results
- Decreasing automation project bootstrapping time
- Establishing a common standard
Let's assume that we have a complex application with a rich user interface and a lot of controls, but only two screens. Because the application is complex, it may have dozens of manual functional test cases, all of which use the same two screens. So how can we increase the maintainability of such a test solution?
Layered Architecture Pattern
The idea of splitting a software system architecturally into separate layers is quite widespread. The first layer encapsulates presentation logic, the second contains business logic, and the third is in charge of data storage. This paradigm lowers the cost of application maintenance, since the components inside each layer can be changed with no impact on the other layers. The same approach can be applied to the testing system.
The test code can be split into three layers:
- the layer of UI automation tool interfaces for system under test (SUT) access
- the layer of functional logic
- the test case layer
Each layer performs a certain task, with the common goal of reducing test maintenance costs and making it easier to create new tests.
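As a minimal sketch of these three layers (the class and method names here are hypothetical, not from any real tool), the structure might look like this in Python:

```python
# Layer 1: SUT access. In a real project this would wrap the UI automation
# tool (e.g. a Selenium driver); here it just records the raw UI actions.
class LoginScreenMap:
    def __init__(self):
        self.actions = []

    def type_user(self, value):
        self.actions.append(("type_user", value))

    def type_password(self, value):
        self.actions.append(("type_password", value))

    def click_sign_in(self):
        self.actions.append(("click_sign_in", None))


# Layer 2: functional (business) logic, expressed only in terms of layer 1.
class LoginActions:
    def __init__(self, screen):
        self.screen = screen

    def log_in(self, user, password):
        self.screen.type_user(user)
        self.screen.type_password(password)
        self.screen.click_sign_in()


# Layer 3: the test case calls only business methods, never raw controls.
def test_login():
    screen = LoginScreenMap()
    LoginActions(screen).log_in("user@example.com", "secret")
    return screen.actions
```

Because each layer talks only to the layer below it, swapping the UI tool inside layer 1 leaves layers 2 and 3 untouched.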
Figure 1: Architectural archetype – multilayered architecture of the test system
Page object paradigm
Following the paradigm of separating test logic, business libraries, and the UI map repository gives us an exceptional ability to modify the test cases in the future.
Let's assume that our application is the Gmail web mail service and that one of its two screens is the login screen. The login flow is used in every test case (e.g. to get to the second screen, you first need to log into the application).
Figure 2: Google mail login control
Let's presume that something has changed in the UI, but not in the logic. In our specific case, each login to Gmail now requires entering a CAPTCHA.
Figure 3: Google mail login control with CAPTCHA
This means that each test case should now be updated with the new login flow. Generally, though, it would be more logical to update only one piece of code. This is where the power of the page object and functional method patterns becomes evident. If there is only one page object declared for the SignIn screen and one LogIn method that takes just two parameters (login and password), then only the body of this LogIn method needs to be updated to cover every case affected by the change.
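A rough Python sketch of this idea (the driver, control names, and CAPTCHA helper are all invented for illustration): the CAPTCHA change is absorbed entirely inside the body of log_in, so no test case has to change.

```python
class FakeDriver:
    """Stand-in for a real UI automation tool; records typed values."""
    def __init__(self, captcha_shown=False):
        self.captcha_shown = captcha_shown
        self.typed = {}

    def is_visible(self, control):
        return control == "captcha" and self.captcha_shown

    def type_into(self, control, text):
        self.typed[control] = text

    def click(self, control):
        pass


class SignInPage:
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, login, password):
        self.driver.type_into("login", login)
        self.driver.type_into("password", password)
        # The new CAPTCHA requirement is handled here, and only here:
        if self.driver.is_visible("captcha"):
            self.driver.type_into("captcha", self.solve_captcha())
        self.driver.click("sign_in")

    def solve_captcha(self):
        return "captcha-answer"  # hypothetical; real handling would go here
```

Every test case keeps calling `log_in(login, password)` exactly as before.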
The idea behind a fluent interface is that every method inside a page object returns another page object instance (the next SUT context screen). For example, the LogIn method in our sample returns the main application screen by default. This allows the steps of a functional test case to be written one after another using method chaining. As a result, the business methods themselves surface the next available steps to the test script developer through the IDE's IntelliSense facilities.
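A minimal fluent-interface sketch (the page and method names are invented for illustration): each business method returns the page object for the next screen, so steps chain naturally and the IDE can only suggest actions that are valid on the current screen.

```python
class MessagePage:
    def reply(self, text):
        self.last_reply = text
        return self  # stay on the same screen


class InboxPage:
    def open_message(self, subject):
        page = MessagePage()
        page.subject = subject
        return page  # navigation to the message screen


class SignInPage:
    def log_in(self, login, password):
        # ... drive the login controls here ...
        return InboxPage()  # the next SUT context screen


# The test script reads as a chain of steps:
message = SignInPage().log_in("user@example.com", "secret").open_message("Hello")
message.reply("Thanks!")
```

After `log_in(...)` the developer is handed an `InboxPage`, so auto-completion offers only inbox actions, not login actions.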
When it comes to crosscutting concerns, aspects prove especially valuable, for example when you need to wrap a particular method in a try-catch block or write a report log entry on method entry and exit. Implementing crosscutting concerns as aspects improves the automation code and makes it more readable and understandable.
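In a .NET stack this is typically done with an AOP library such as PostSharp; as a rough Python analogue (all names here are illustrative), a decorator can play the role of an aspect that wraps a business method in try-catch and logs entry/exit:

```python
import functools

report_log = []  # hypothetical report sink


def logged_step(func):
    """Aspect-like wrapper: enter/exit logging plus error capture."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        report_log.append(f"ENTER {func.__name__}")
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            report_log.append(f"ERROR {func.__name__}: {exc}")
            raise  # re-raise so the test still fails
        finally:
            report_log.append(f"EXIT {func.__name__}")
    return wrapper


@logged_step
def log_in(user, password):
    return f"logged in as {user}"
```

The business method body stays free of logging and error-handling noise; the crosscutting concern lives in one place.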
Build/Run time verifications
An automation framework is also often proposed as a corporate standard, and getting most developers to use the same automation approaches across different projects is a pressing concern. An automation framework can therefore also provide facilities to validate whether best practices are being followed while the business-level library and functional tests are implemented.
There are different approaches for this validation:
- Using IDE extensions/plugins with a possibility to set up custom build rules (example: ReSharper, StyleCop for Visual Studio)
- Writing your own extension for IDE
- Writing a mechanism that verifies at runtime whether a test carries the expected attributes, and fails it with descriptive errors if something is implemented incorrectly
Here is just a brief list of items that can be verified:
- Naming of the test scripts
- Proper comments/description attributes applied
- Return values/parameters for business methods
- Solution layering (validating that no cross-layer access violations are found in the automation code)
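As a sketch of the runtime-verification approach (the specific conventions checked here are our own inventions), a small check can inspect each test before execution and fail it with a descriptive error:

```python
def verify_test_conventions(test_func):
    """Fail fast, with descriptive errors, when a test violates the standard."""
    errors = []
    if not test_func.__name__.startswith("test_"):
        errors.append("name must start with 'test_'")
    if not test_func.__doc__:
        errors.append("a description (docstring) is required")
    if getattr(test_func, "owner", None) is None:
        errors.append("an 'owner' attribute is required")
    if errors:
        raise AssertionError(f"{test_func.__name__}: " + "; ".join(errors))


def test_login_flow():
    """Verifies that a registered user can log in."""


test_login_flow.owner = "qa-team"  # hypothetical required attribute
```

Run from a test-runner hook, this turns a silent convention violation into an immediate, explainable failure.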
Automation Framework "solid" components
Obviously, the framework cannot consist of best practices alone; no one would be able to follow them without infrastructure to support them. Below are some guidelines for a better understanding of automation framework implementation.
First of all, we need to run the tests somehow. In most cases, unit test frameworks are used to run functional tests and report the results. There is a wide selection of unit test frameworks for every technology/language, and they integrate smoothly with both functional test code and continuous integration (CI) systems.
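For example, in Python a functional test is just an ordinary unittest test, so the standard loader and runner (and therefore any CI system) can execute it unchanged; the business method here is a stub:

```python
import io
import unittest


def log_in(user, password):
    """Stand-in for a business-level method from the functional logic layer."""
    return bool(user and password)


class LoginTests(unittest.TestCase):
    def test_login_succeeds(self):
        self.assertTrue(log_in("user@example.com", "secret"))


def run_suite():
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginTests)
    # A CI system would consume this runner's output and exit code.
    return unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

The same pattern applies to NUnit/MSTest on .NET or JUnit on the JVM.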
Loading configuration for test run
Test configuration largely depends on the SUT domain and the specifics of the tests. For instance, in the majority of test flows, all the configuration (parameters, remote connection servers, etc.) can simply be hard-coded into the test script source.
In the case of data driven tests, when the same test can be run multiple times with different configurations (e.g. input parameters), unit test frameworks also provide facilities to pass the data from external storage to the test script.
Therefore, double-check whether custom configuration loading actually needs to be implemented in your automation framework.
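A data-driven sketch using unittest's subTest; the data table is inline here, but in a real project it would come from the external storage the framework provides (a CSV or XML file, a database, etc.), and the stub business method is our own invention:

```python
import io
import unittest

# Hypothetical credential table; in practice loaded from external storage.
LOGIN_CASES = [
    ("user@example.com", "secret", True),
    ("user@example.com", "wrong",  False),
    ("",                 "secret", False),
]


def check_credentials(login, password):
    """Stub standing in for the real business method against the SUT."""
    return login == "user@example.com" and password == "secret"


class DataDrivenLoginTests(unittest.TestCase):
    def test_login(self):
        # The same test body runs once per row of input data.
        for login, password, expected in LOGIN_CASES:
            with self.subTest(login=login, password=password):
                self.assertEqual(check_credentials(login, password), expected)
```

Each row is reported as a separate sub-result, so one bad input doesn't mask the others.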
Reporting test results
Reporting test results/debug test information is one of the most important features of an automation framework.
There are a few reasons for it:
- Report analysis simplifies test/application troubleshooting, so the more information you put into the report, the better support it will provide
- Reports are read by all project stakeholders.
If you want to track dynamic changes in test execution over a period of time, you should provide additional facilities to save test results into a persistent database so that they can be compared afterwards.
Don't forget to put a bit of fancy stuff into the report presentation layer (XML/HTML), like a company logo, structured output, etc. These things can enormously improve your karma with management. If you are providing an "over time" report, charts are also highly appreciated.
Again, most unit test frameworks already have mechanisms for making assertions/validations in test scripts. But it is good practice to create your own validation mechanism so as to:
- Abstract your test case assertions from a certain unit test framework
- Customize a set of validation methods for your needs
- Add specific logic into your validation methods, so that your validation result will be automatically put into the reports.
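A minimal custom validation layer along these lines (the class and method names are ours): it hides the underlying unit test framework behind its own methods and records every check in the report automatically.

```python
class Verify:
    """Framework-agnostic assertions that also feed the test report."""

    def __init__(self):
        self.report = []

    def are_equal(self, expected, actual, description):
        passed = expected == actual
        # Every validation, pass or fail, lands in the report.
        self.report.append({"check": description, "passed": passed,
                            "expected": expected, "actual": actual})
        if not passed:
            raise AssertionError(
                f"{description}: expected {expected!r}, got {actual!r}")

    def is_true(self, condition, description):
        self.are_equal(True, bool(condition), description)
```

If the team later switches unit test frameworks, only this class changes; the test scripts keep calling `are_equal` and `is_true`.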
Automation test solutions are usually very similar across different projects, so adding automation framework facilities to an existing solution should be a very simple task. This is important if you want existing projects to gain practical value from your framework with minimal migration effort.
Aspects are really helpful here: just add an aspect attribute definition to a test project and in a moment you will have an extensive reporting mechanism live in your test solution. Admittedly, this requires some advanced aspect implementation, but it's definitely worth the effort.
What about keywords?
You may find it interesting that we haven't mentioned any keyword-driven frameworks in this article. Keyword solutions form a separate group on the market, both commercial and open source, and hundreds of custom keyword-driven frameworks already exist. However, we find them incomplete for the following reasons:
- They don't address test script maintainability; most of them introduce a lot of duplication.
- Most keyword-driven frameworks are tightly bound to a certain automation tool (or are part of a UI automation tool), which leaves no room for changing tools during solution development.
In this article we described our own experience implementing automation frameworks. The principles highlighted here make it possible to analyze the code of a test solution in depth, and they proved efficient on multiple automation projects. On one of these projects we developed about 500 business methods and 110 test cases, with 30 steps per test case on average (note that one step can also consist of several business method calls). The described approach let us reach an average reuse of 36 calls per business method.

It's up to you to decide which framework to use on your automation project. Maybe it will be a simple record/playback UI tool with a bunch of scripts, or a set of keyword-driven Excel sheets. But when it comes to automating more than one hundred test cases, you will need a higher level of maturity to keep your test solution maintainable.
Oleksandr Reminnyi works as a software architect at SoftServe Inc., a leading global outsourced product and application development company. Oleksandr is responsible for establishing automation projects and processes for new and existing customers. He believes that automation success and failure are completely dependent on the established process and setting the right goals. Oleksandr is currently working on his PhD research dedicated to automation. He can be contacted here.
David Krauss has more than 30 years of experience in application and product design and delivery, with extensive programming and architecture experience across multiple platforms, technologies, and languages. Proficiency in legacy modernization, collaborative global development process, client/server and Internet platforms, test automation (one patent for automation, one patent pending for automation generation). Over twenty years specializing in test automation tools and paradigms, automation frameworks, and testing methodologies. He can be contacted here.