Futility-Based Testing: Bad Practices That Destroy Test Automation Efforts

By Clinton Sprauve

Think about it: mainstream, commercial test automation tools have been around since the late 1980s. Open source test automation frameworks and tools like FIT, FitNesse, Selenium, White, Sahi, and Cucumber are commonplace. Every up-and-coming development methodology puts a strong emphasis on testing (e.g., Feature-Driven Development, Test-Driven Development, Behavior-Driven Development, INSERT YOUR METHODOLOGY HERE-Driven Development, etc.). Test automation is no longer considered a nice-to-have, so why is it that most development organizations still can’t get it right?


The answer is bad practices across the spectrum of development and testing methodologies. Whether it’s traditional waterfall or Agile, the community-at-large still doesn’t get it. What are we doing wrong?

This article will explore a few of the bad practices and futile habits that prevent many organizations from reaping the true benefits of software test automation.

Assuming test automation is so simple, anyone can do it

I mentioned earlier that the current mainstream testing tools have been around since the late ’80s. Because of the record-and-playback nature of the tools, it is assumed that anyone who can use a mouse can do test automation. Part of this assumption can be blamed on the industry’s heavy reliance on subjective criteria (e.g., ease of use or user interface) rather than objective criteria for test automation software (e.g., built-in support for the technologies of the application under test, the ability to refactor test code, etc.). In order to sell test automation to the masses, it has to appear so easy that all you have to do is point-and-click your way into ROI heaven. Not true.

Test automation IS software development — period. Assuming that you can send the receptionist to test automation training and thereby build efficiencies into your software development process is ridiculous.

Solution: take the time to build a test automation team of people who understand software development and programming concepts. The testers don’t need to be your uber-developers, but they must have the wherewithal to understand, develop, and maintain test automation code.


Record and Playback

This relates heavily to the last assumption that anyone can build test automation. Why is record and playback so popular? Because it makes your team members feel productive, even though it’s really an exercise in futility. Let me explain.

Let’s say you had the mythical ability to record your morning drive to work and replay it. What happens when there is a wreck, or traffic, or a four-way stop? (You get the idea.)

Record and playback lets your team members create throw-away test scripts. The script doesn’t work? Don’t worry, launch the recorder and start over.  Item in the list moved from #5 to #29? No worries; launch the recorder and start over. And so on and so on…
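To make the “item moved from #5 to #29” problem concrete, here is a minimal sketch in Python. The menu list, the function names, and the scenario are all hypothetical; real record-and-playback tools drive a browser or desktop UI rather than a list, but the failure mode is the same.

```python
# Hypothetical example: the "UI" is just a list of menu labels.

menu = ["Home", "Reports", "Invoices", "Payroll", "Settings"]

def recorded_click(items, index):
    """What a recorder captures: a hard-coded position on the screen."""
    return items[index]

def robust_click(items, label):
    """What a maintained script encodes: a stable, semantic locator."""
    for item in items:
        if item == label:
            return item
    raise LookupError(f"no item labeled {label!r}")

# Both approaches pass against today's build...
assert recorded_click(menu, 2) == "Invoices"
assert robust_click(menu, "Invoices") == "Invoices"

# ...then the menu is reordered in the next release.
menu = ["Home", "Invoices", "Reports", "Payroll", "Settings"]

# The recorded script now clicks the wrong item without failing loudly;
# the locator-based script still finds what it wants.
assert recorded_click(menu, 2) == "Reports"
assert robust_click(menu, "Invoices") == "Invoices"
```

Note that the recorded version doesn’t even raise an error after the change; it silently exercises the wrong item, which is worse than a failing test.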

Again, take the time to build a test automation team of people who understand software development concepts. If automation were that easy, there wouldn’t be a need for test automation specialists.

‘Job Security’ Test Frameworks

Sometimes there is a downside to having an uber test automation specialist, or to having just one person responsible for building and maintaining a test framework for an army of testers. I’m sure many of us are familiar with the term “spaghetti code.” What happens when an uber-tester gets hit by a bus or leaves the company? We’ll miss him, but who is going to maintain the framework? Many times the test automation is scrapped upon a new release of the application and the test automation effort starts all over again.

Never rely on a single engineer to develop and maintain anything that another technical team member or new hire can’t quickly understand or easily pick up. It is imperative that your framework be simple to maintain and well documented, so that the company and team don’t lose the efficiencies the framework created in the first place.


Kumbaya (coom-bah-ya) Keyword-Driven Testing

OK, some of us “get it.” Test automation isn’t easy. Here’s an idea: let’s build a code library of table-based functions/action words so that Ronny, Bobby, Ricky, and Mike from accounting can help us automate the testing of our application. Now the entire QA team can help us automate! Really…

Let’s examine the pros, cons, and assumptions of KDT.

Pros:

- Less technical expertise is needed to create test automation.
- It attempts to involve BAs and subject matter experts in the test automation process.
- Automation engineers do the “heavy lifting.”
- It simplifies the link between testing and the requirements specification.

Cons:

- It can actually increase the amount of maintenance for test automation efforts rather than reduce it. For instance, Suzy from payroll is brought in to test the new accounting app. She’s been trained on the framework and how to build tests. She’s ready to go. While testing the app she gets an error: “object xyz failed to initialize. Shutting down.” She has to ping the test automation guru to understand the error. Is this an application problem, or a keyword framework problem? Who has to fix it? When will it be fixed? How many test cases does this affect? Then the crazy cycle starts over again.
- It is difficult to manage changes in the AUT as well as maintain automation expertise (i.e., who manages the framework when the creator leaves? Remember the “job security” framework?).
- It involves subject matter experts (SMEs), business analysts (BAs), and testers in the “wrong way”: the intentions are good, but it sets a trap for failure.

Let me clarify a key point: keyword-driven testing is NOT a bad thing. The problem is not necessarily how it is implemented but for whom it is implemented. Again, it goes back to my previous assertion: most people assume that test automation is so simple, anyone can do it. The entire team does not need to be involved in the test automation process. Keep it simple. Utilize those on the team who have the technical expertise to develop, maintain, and execute a keyword-driven framework. Remember, test automation is software development.
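For readers who want to see what the core of a keyword-driven framework actually is, here is a minimal sketch in Python. Everything in it is illustrative, not a real product’s API: the keyword names (OpenApp, EnterText, VerifyField), the dispatch table, and the state dictionary are all made up for this example. A test case is just a table of rows, each row a keyword plus its arguments.

```python
# Keyword implementations: each one takes the shared test state
# plus the arguments supplied in the table row.

def open_app(state, name):
    state["app"] = name

def enter_text(state, field, value):
    state.setdefault("fields", {})[field] = value

def verify_field(state, field, expected):
    actual = state.get("fields", {}).get(field)
    if actual != expected:
        raise AssertionError(f"{field}: expected {expected!r}, got {actual!r}")

# The dispatch table the automation engineers own and maintain.
KEYWORDS = {
    "OpenApp": open_app,
    "EnterText": enter_text,
    "VerifyField": verify_field,
}

def run_table(rows):
    """Execute a keyword table: each row is (keyword, *arguments)."""
    state = {}
    for keyword, *args in rows:
        # An unknown keyword raises KeyError -- someone technical
        # still has to diagnose and fix that.
        KEYWORDS[keyword](state, *args)
    return state

# The "test case" a less technical team member would author:
table = [
    ("OpenApp", "Accounting"),
    ("EnterText", "invoice_no", "INV-1001"),
    ("VerifyField", "invoice_no", "INV-1001"),
]
result = run_table(table)
```

The sketch also shows where the maintenance burden lands: when a row fails, someone still has to read a stack trace and decide whether the bug is in the application, the keyword implementation, or the table itself.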


If you are going to implement keyword-driven testing, take inventory of the skill set of your team. KDT works best when the entire team has the technical expertise to maintain, execute, and enhance the framework.

Take advantage of your non-technical team members by utilizing their skills as subject matter experts. Allow them to do exploratory testing. Allow them to validate the business rules of the application. Don’t set them up for failure by forcing them to be what they are not.

Software test automation is not easy. Building efficiencies into the development process is a difficult undertaking in itself. However, repeating the same mistakes over and over again will keep test automation on the “crazy cycle” of software development. Don’t look for a panacea for test automation, but rather a practical, realistic approach to building a robust and reusable automation library that will provide true ROI.

Author bio

Clint Sprauve is a Senior Solutions Engineer for the Silk Testing Solutions at Micro Focus. Clint has more than 15 years of experience in the software quality assurance industry, specifically in the areas of enterprise software development, test automation, and product strategy.  Previously he was the senior product marketing manager for Silk Testing Solutions at Borland Software and Segue Software, and served as a senior technical sales engineer for both companies. Clint also has been an independent consultant specializing in quality assurance.  Prior to Segue, Clint was the founder of two companies: Digiratix and Level Technologies, Inc., consulting firms specializing in test automation, test management and quality assurance best practices.

