Code drives test automation. Even when using “codeless” automation tools, code exists under the covers, yet we often dismiss it as throw-away code or something less important than the application it tests. In this blog, we discuss why test code deserves the same attention as production code.
There is a great misunderstanding when it comes to software. Many developers, managers and other leaders share in this misunderstanding. They often see a differentiation between “production software” and everything else, including test code. People agree developers should carefully design, model and test “production” software, but some of those same people seem to think non-production software doesn’t need the same attention to detail.
If that ever was true, it’s certainly not true now.
Our customers and users expect production software to function properly. When we design and build software, we work hard to make certain our software addresses the needs of the people depending on it. We ensure we properly apply the design model, engage in common practices of pair and ensemble (sometimes called mob) development and code reviews, and review the design and examine database calls for accuracy. We test the code.
If we use test automation software to help test production software, should we not apply the same level of diligence to that software? We rely on the test code to correctly exercise and report on the production code. How is it not at least as important as the production code?
All software is created to support a business purpose. It may be for increased efficiencies, identifying possible opportunities, or other purposes. Modern software development enables common practices which improve stability and drive value. These include using common design patterns which help maximize code reusability and simplify future maintenance.
We model customer-impacting software and use modular design techniques to reuse code efficiently. These common development practices help with better delivery and better overall application quality. They contribute to a better experience for the software application users. At the very minimum, the software we use to test the software needs the same level of attention as the production, customer-facing application.
When some people think about testing software, they focus on the obvious scenarios – they see a “happy path” and create test cases to get one-to-one requirements coverage. They may look a little deeper and consider how much of the code the tests exercise. People often don’t consider the idea of line and branch coverage; a single trip through the possible branches is frequently the extent of in-depth testing. Many developers do not consider the conditions that might impact their testing. Rarely do they think about the environment they test in and the tools they use for testing until something fails.
This is a problem and presents one of the main challenges of testing.
The challenge of testing software is not filling out forms, creating test plans or scripts or cases, or building the tool to test the software. The challenge is the hard thinking about how to do those things.
Some people look at testing as breaking software or approach testing as a box to be checked. A few celebrate finding a problem in someone else’s work. At best, these ideas are counterproductive and weaken how teams work together.
I sometimes think this results from not having a definition of testing that broadly covers what software testing is and can be. A working definition I came up with several years ago is “software testing is a systematic evaluation of the behavior of a piece of software, based on some model.”
The focus is on the behavior of the application. How does it act? By keeping this focus, we can uncover unexpected behavior. It may not be wrong or incorrect. It’s simply not what we thought we’d see. This provides an opportunity to clarify understanding around that function.
Making Software for Testing
The cautions about testing production software apply to all software. The business purpose of test automation code is to rigorously apply testing principles against the application it is intended to test. You must evaluate the test automation code for its suitability to that purpose.
Software that is written to drive automation testing needs the same level of scrutiny as the software that you put in front of customers. Customers of the production software can be internal or external – they rely on that software to do their jobs, inform decisions, and make or place orders for products and services. The myriad functions software performs underpin business and leisure activities that would crumble without a good level of reliability.
Customers of test automation software are the teams who rely on software to relieve them of mundane tasks, so they can work on more complex tasks. They rely on that software to work as expected so that they can meet the needs of their customers. To make that happen with predictability, developers need to create the test automation code using the same rigorous standards as the production software.
Creators should build the automation code using an appropriate design pattern, making it as modular and reusable as possible. After all, production code will likely change according to business needs, and that does no good if you can’t easily modify the automation code supporting it.
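One widely used design pattern for this is the Page Object pattern, which isolates UI locators behind intent-level methods so that a UI change touches one class instead of every test. The sketch below is a minimal, self-contained illustration; the `FakeDriver` stand-in and the page and locator names are hypothetical, where a real suite would wrap a driver such as Selenium WebDriver.

```python
# Minimal sketch of the Page Object pattern for test automation code.
# FakeDriver is a hypothetical stand-in so the example is self-contained;
# real code would wrap an actual browser driver.

class FakeDriver:
    """Records interactions so we can see what the page object did."""
    def __init__(self):
        self.fields = {}

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        self.fields["clicked"] = locator


class LoginPage:
    """Encapsulates one page. Tests call intent-level methods, never raw
    locators, so a UI change means updating only the constants below."""
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#login-button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        self.driver.type(self.USERNAME, username)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)


# A test now reads as business intent and survives locator churn:
driver = FakeDriver()
LoginPage(driver).log_in("pat", "s3cret")
```

The payoff is maintainability: when the login form’s markup changes, the tests themselves do not.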
Business needs and expectations drive the development of production software. You should develop the test automation code in parallel with the production code for the greatest efficiency. When possible, I recommend the developers collaborate to combine their efforts.
Using Test-Driven Development (TDD) or Acceptance Test-Driven Development (ATDD) for both sets of code can drive shared understanding and reduce rework cycles. Using these approaches when working on both the production code and the test automation code supports conversation between teams.
This also gives you a greater opportunity to clarify expected behavior with the product owner or business analyst before your developers write any production or test automation code.
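The TDD rhythm described above can be sketched in a few lines. The helper name (`parse_order_total`) and its behavior are hypothetical, chosen only to show the order of work: the test is written first, capturing the behavior agreed with the product owner, and then just enough code is written to make it pass.

```python
# Sketch of the TDD rhythm applied to a test-automation helper.
# Step 1: the test exists first and fails (no implementation yet).
# Step 2: write the simplest implementation that makes it pass.

def test_parse_order_total():
    # Pins down the behavior the team agreed on before any code was written.
    assert parse_order_total("Total: $42.50") == 42.50
    assert parse_order_total("Total: $0.00") == 0.0

def parse_order_total(label: str) -> float:
    # Simplest implementation that satisfies the test above.
    return float(label.split("$")[-1])

test_parse_order_total()  # passes once the implementation exists
```

The same loop applies whether the function under test is production code or part of the automation framework itself.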
Testing Software for Testing
Software development is both a creative and social activity. The biggest challenge I have found to making well-crafted, stable, high-quality software is groups or individuals working in isolation. When I’ve worked with teams with a high degree of interaction, whether they were using TDD or not, I found the software they produced to be higher quality and more stable than those teams that worked in enclosed silos. Practices such as TDD or ATDD make the product better from the start, resulting in fewer problems to be resolved and reduced rework. This speeds up the delivery of the product.
These ideas apply to test automation software and production software. Testing using static analysis techniques and executing the code with known data will help reveal behavior, good and bad. If we can understand and predict behavior with any sort of reliability, we can test the software more broadly. Using exploratory and other investigative techniques gives us a framework for testing the software.
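Executing the test code with known data is often as simple as feeding an automation helper inputs whose correct answers are already known, including malformed ones. The helper below (`summarize_results`, a hypothetical name) is the kind of reporting utility a suite might contain; running it against known data reveals its behavior before we trust it against production software.

```python
# Sketch: exercising test-automation code itself with known data.
# summarize_results is a hypothetical reporting helper that classifies
# raw result rows from an automation run.

def summarize_results(rows):
    """Count passes, failures, and unrecognized rows in raw output."""
    summary = {"pass": 0, "fail": 0, "unknown": 0}
    for row in rows:
        status = row.strip().lower()
        if status in summary:
            summary[status] += 1
        else:
            summary["unknown"] += 1
    return summary

# Known data with predictable answers, including one malformed row:
known = ["PASS", "fail", "  Pass ", "bogus"]
print(summarize_results(known))  # {'pass': 2, 'fail': 1, 'unknown': 1}
```

If the helper mishandled the malformed row, the known-data run would expose it here rather than in a report the team has already acted on.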
When the team has a high degree of confidence in the test code, and they’ve reviewed, explored and tested it, they can use it together with the production code it was made to exercise. Items found in exploration can themselves be included in future test sessions. In general, the final test of the automation code is to exercise the production software and carefully examine the results.
Once you deploy your application or test code, you need to maintain it. As business needs or the environment change, both application and automation code need continual review. Too many times, I have seen application code changed and the test code ignored because it is “only for testing.” The “deploy once” or “fire and forget” mindset leads to out-of-date automation, which cannot support the application as it should. It also leads to greater effort when you need to modify the application code.
By maintaining both together, as you develop them, you can test them together and trap possible problems introduced by the changes. We must regularly review and update the code running our automated tests. This is particularly true for CI/CD test code and code running a standard regression test suite.
The software running our test automation is production software, and we need to treat and respect it as such.