Many organizations want the benefits of automated software testing. Most see the “next step” in testing as building automated versions of their manual tests. But is that what you need? In this blog, we explore which choice is right for you.
Over the last several years, I have worked with teams facing very similar problems. Each of their organizations was trying to “go Agile,” and for their leadership this meant “automate all the things,” but automated software tests aren’t always the answer.
The teams all faced similar problems. Many resulted from adding manual “functional” tests for newly developed features to the “regression suite.” There, those tests joined the other manual tests that had already accumulated. Some were simple “happy path” scripts intended only to confirm that certain functionality worked.
Some tests were legacy scripts with an unknown purpose; the people working with the software did not understand what they did. The teams had too many manual tests to run and not enough time to run them, which left no time to test new development properly.
One organization barely had time to run the tests on a single OS platform, let alone on the plethora of systems they needed to support. Their workaround was to run the suite against one OS/platform combination each cycle and rotate to a different combination the next time.
Automate Everything?
The solution to the teams’ problems appeared to be “automate all the tests, so they can run more efficiently.” This may be a reasonable approach sometimes, but before embarking on it, I suggest asking a clarifying question. If you know the purpose of each manual test, automating that test might be a good idea. The important question for each test is, “What do you expect this test to tell you?”
Many organizations have huge test suites. They include functional, regression, and performance tests, and sometimes tests with no known purpose. These same organizations talk about how many test cases or suites they run on a regular basis, but automating those suites can be very problematic. I’ve run into many clients who cite how many “tests” they have run, yet no one actually looks to see what those tests are doing.
I remember one client that boasted “Over 300 tests!” But when I walked through the tests with them, they all contained “Assert = True,” so they always passed. The tests did not actually check anything.
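To make the anti-pattern concrete, here is a minimal sketch of what such a vacuous test looks like next to one that checks real behavior. The function names, the `/login` route, and the pytest-style `client` fixture are assumptions made purely for illustration:

```python
# Vacuous test: it asserts a constant, so it passes no matter what the software does.
def test_login_page():
    assert True  # always "passes" and tells us nothing about the login page

# A meaningful version exercises the software and asserts on observed behavior.
def test_login_page_returns_200(client):  # `client` is a hypothetical HTTP test fixture
    response = client.get("/login")       # hypothetical route
    assert response.status_code == 200
```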
Choosing How to Test the Software
The question we need to ask about every test, manual or automated, is “Why are we running this test?” We may have a general idea of what we want to check regularly while focusing on whether the software’s core functions work under specific conditions. Identifying those conditions takes time and effort.
Before creating any tests, we must first determine what we need to test. When looking at a test, one question to consider is, “What can this test tell us that other tests won’t?”
Without that question, testers often add duplicate scenarios to test suites. These duplicated tests require review and refinement to keep the full test suite repository as relevant and atomic as possible. Having redundant tests for variations in implementation environment, platform, and OS may seem thorough. But is that a good use of time and computing resources? There are likely better options.
Sometimes, testers or Software Development Engineers in Test (SDETs) intend to exercise redundant tests in multiple environments and have at least some understanding of their purpose. Often, other tests are run simply because they appear in the list of scripts to be run.
Let’s look at a couple of different ways you can introduce automated testing into your practices.
Automating Existing Tests
One concern with the “automate everything” mindset is the presumption that the existing manual tests were carefully crafted for specific purposes. It also presumes that developers regularly review and maintain the existing test scripts. Very few scriptwriters will ask why you need these tests, nor will they ask how the tests differ from one another. In some cases, they may not understand what they are automating.
It’s likely (at least in the instances where I’ve seen the “automate everything” model implemented) that no one will ask any of these questions until long after the automated testing is in place.
When looking at functional testing, many organizations use a check-the-box approach. Leadership may not say it that way, but by pressuring teams to test faster, that is the message they send. In response, testers write a quick test covering a simple happy-path scenario and rarely write or execute anything beyond the stated requirement. These tests are easy to automate, and they often ignore potential risks. When I have asked testers and managers about such tests, the responses have focused on main functionality, not edge cases.
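As an illustration, here is a sketch contrasting a check-the-box happy-path test with a few risk-focused checks. The `apply_discount` function, its signature, and the chosen boundaries are assumptions made for the example, not taken from any real codebase:

```python
import pytest

from pricing import apply_discount  # hypothetical function under test

# Happy-path check: satisfies the stated requirement and nothing more.
def test_apply_discount_happy_path():
    assert apply_discount(price=100.0, percent=10) == 90.0

# Risk-focused checks: boundaries the happy path never touches.
@pytest.mark.parametrize("price, percent", [(100.0, 0), (100.0, 100)])
def test_apply_discount_boundaries(price, percent):
    result = apply_discount(price=price, percent=percent)
    assert 0.0 <= result <= price

# Invalid input: the kind of edge case a check-the-box suite usually skips.
def test_apply_discount_rejects_negative_percent():
    with pytest.raises(ValueError):
        apply_discount(price=100.0, percent=-5)
```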
Automating Exploratory Tests
We rarely find odd behavior along the simple, happy path. Exercising the application with an attitude of “what happens if this occurs?” uncovers unusual behavior. Experienced testers working with BAs or other business SMEs often discover scenarios worth including in testing, even if those scenarios weren’t considered in the original plan.
When these tests uncover problems, they can be added to the automated test suites. These cases require thought about how to create the scenario, set up the environment, and define the sequence of events that exercises the case of greatest value.
Using exploratory, experience-based testing to exercise these “what-if” scenarios often yields great benefits by revealing paths not covered by existing test scripts. By keeping a careful record of what you did, you have the basis for writing new automated scripts if the test results warrant it.
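For example, if an exploratory session showed that submitting the same cart twice created duplicate orders, that recorded sequence could become an automated regression check along these lines. The `OrderService` API here is a hypothetical stand-in for whatever the recorded steps actually touched:

```python
from orders import OrderService  # hypothetical module and class

def test_double_submit_creates_single_order():
    # Recreate the conditions noted during the exploratory session.
    service = OrderService()
    cart_id = service.create_cart(items=["widget"])

    # Reproduce the recorded sequence: the same submission sent twice.
    service.submit_order(cart_id)
    service.submit_order(cart_id)

    # The finding becomes an assertion: no duplicate order should exist.
    assert service.count_orders_for_cart(cart_id) == 1
```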
Suggestions for Strengthening Testing
Making any form of testing meaningful and valuable to the organization requires thoughtful consideration. Think about all tests – manual and automated – and what information they provide, and look for similar tests you can combine. Consider the intent behind each test, and check whether it is delivering on that intent.
Review tests regularly to make sure they remain relevant, to see whether newer tests provide similar information, and to learn whether the newer ones might be better than the older ones. Compare any newly created test against existing tests for overlap or redundancy.
Conclusion
Is automated software testing important? Yes, absolutely. It is invaluable for delivering software at a predictable cadence. Using the right tool for the purpose at hand is vital if you want any confidence in the results.
Good automated testing frees your technical knowledge workers to consider other scenarios or paths you might need to explore, so you can make sure your software always delivers for your company.