People are fascinated by AI and ChatGPT and are looking for ways to use them. In the tech space, some people are discussing how it will change everything about software and testing, but they are missing something important: We were using AI in software testing long before ChatGPT.
We’ve cast artificial intelligence (AI) as the boogeyman countering the “technology makes everything better” trope in popular culture for decades, from HAL in “2001: A Space Odyssey” to Skynet in the Terminator movies to VIKI in “I, Robot.”
An uncertain future tends to unsettle, if not upset, most people. The scary dystopian futures those movies and books present, of technology run amok, speak to our fear of being replaced. AI-powered robots will take away everyone’s job, from assembly lines to call centers and now, apparently, to knowledge workers in software.
Still, for much of my career, many people have labeled me anti-automation. I find this interesting because I use various automation tools to assist my testing work, even if I don’t use the trending name-brand tools. The reason is simple – those tools often did not do what I needed without a lot of extra work.
I am wary of any new, buzzword-laden solution people hail as the next great thing to improve the software world. Early automation tools that did record and playback, and nothing else, were similarly acclaimed. The industry also praised the next several test automation tools that fixed or avoided the problems of earlier tools.
Before I go all-in on a revolutionary tool or approach, I want to see real, repeatable evidence that it works as described. I have seen too many people, teams and companies burned by trusting initial advertising.
The cool, new, attention-grabbing thing is ChatGPT and AI in software testing. So, let’s talk about it.
AI and ChatGPT
A combination of three factors powers AI: large volumes of data, significant computing power and an underlying model that determines how the AI learns. The model drives the learning algorithms that process the data and generate results.
Software professionals use internet searches to debug code or resolve other technical problems. We might search for a more efficient way to write a block of code, or for a better approach to a problem than the one we currently have.
Some might argue traditional search engines aren’t AI, but they are among the more common ways we use AI daily. The more we search for something, and the more often we search using similar keywords, the more granular and precise our results become.
As we repeat or hone our searches, the algorithms driving the search tool will refine and update presented results based on the results we open or click from previous, similar queries. That behavior, returning results based on what users select from previous keyword search results, is the essence of AI.
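That feedback loop can be sketched in a few lines. This is a minimal illustration, not how any real search engine works: the click log, queries and result IDs below are all invented, and production systems use far richer signals than raw click counts.

```python
from collections import defaultdict

# Hypothetical click log: query -> {result_id: click_count}.
click_counts = defaultdict(lambda: defaultdict(int))

def record_click(query: str, result_id: str) -> None:
    """Remember which result a user opened for a given query."""
    click_counts[query][result_id] += 1

def rerank(query: str, results: list[str]) -> list[str]:
    """Boost results users previously clicked for this query.

    Python's sort is stable, so unclicked results keep their
    original order while clicked ones float to the top.
    """
    clicks = click_counts[query]
    return sorted(results, key=lambda r: -clicks[r])

record_click("flaky tests", "blog-post-7")
record_click("flaky tests", "blog-post-7")
record_click("flaky tests", "docs-page-2")

print(rerank("flaky tests", ["docs-page-2", "wiki-1", "blog-post-7"]))
# → ['blog-post-7', 'docs-page-2', 'wiki-1']
```

Each click nudges future rankings for that query, which is exactly the “learn from what users select” behavior described above, just stripped to its skeleton.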
With a generative tool such as ChatGPT, instead of keywords driving selections that return results based on a data source and training, we get a created or built response based on prompts and queries. It is possible to use ChatGPT to do things like:
- Researching topics we are unfamiliar with to gain understanding
- Gathering topics for a blog content plan
- Creating starting point marketing content
- Debugging and explaining code (not client owned)
- Asking questions and using it as a brainstorming aid
Of course, results from ChatGPT can be reasonably accurate or confidently incorrect. So, it can act as an authority on a topic based on what it absorbs, but if that information is incorrect, it will not recognize the error.
What does that tell us? We should never use a response from ChatGPT without proper subject matter experts weighing in on the relevance and accuracy of the results. The results of any machine learning tool will only be as useful as the data we feed it. Any tool can give valuable or worthless results depending on how it learns, what data it consumes and what requests people make.
Coding and technical answers may give you starting points to consider. As a large language model, ChatGPT can combine pieces from various sources into a very confident – but occasionally very wrong – response. It is risky to rely on such a guide without other sources to verify code or confirm the response.
AI and Better Software Testing
Can AI help us with better testing and test automation? Of course. Used wisely, it can help with test case generation by analyzing code and evaluating possible scenarios. We can use AI to optimize test cases by identifying redundant or unnecessary tests and removing them from the test suite.
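One way to make “identifying redundant tests” concrete is coverage subsumption: if everything one test exercises is also exercised by another test, the smaller test adds nothing to the suite. The sketch below, with invented test names and line numbers, shows only this one narrow idea; commercial AI tools weigh many more signals, such as historical failure data.

```python
def prune_redundant(coverage: dict[str, set[int]]) -> list[str]:
    """Keep only tests whose coverage is not contained in another test's.

    A test is dropped when its covered lines are a strict subset of
    another test's lines, or an exact duplicate of an earlier-named test.
    """
    kept: list[str] = []
    for name, lines in coverage.items():
        subsumed = any(
            lines < other_lines or (lines == other_lines and other < name)
            for other, other_lines in coverage.items()
            if other != name
        )
        if not subsumed:
            kept.append(name)
    return kept

# Hypothetical suite: test name -> set of covered line numbers.
suite = {
    "test_login_happy_path": {10, 11, 12},
    "test_login_full_flow": {10, 11, 12, 13, 14},  # subsumes the one above
    "test_logout": {30, 31},
}
print(prune_redundant(suite))
# → ['test_login_full_flow', 'test_logout']
```

Even this toy version shows the trade-off a human still has to judge: the pruned test may have been faster, clearer or a better failure locator than the larger test that subsumes it.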
AI can also help with automated test execution by creating scripts that simulate user behavior and interact with the system under test. This step helps reduce manual effort and increases the speed and accuracy of testing. It can then monitor and analyze test results to identify patterns and trends that help improve the test suite and look for potential problem areas that might arise with different data or conditions.
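The “scripts that simulate user behavior” part looks something like the sketch below. A real tool would drive a browser or device; here a tiny invented Cart class stands in for the system under test, and the recorded flow is just a list of actions to replay.

```python
class Cart:
    """A toy stand-in for a system under test."""

    def __init__(self) -> None:
        self.items: list[str] = []
        self.checked_out = False

    def add(self, item: str) -> None:
        self.items.append(item)

    def checkout(self) -> None:
        if not self.items:
            raise ValueError("cannot check out an empty cart")
        self.checked_out = True

def run_flow(cart: Cart, actions: list[tuple]) -> list[str]:
    """Replay a recorded user flow, collecting a step-by-step log."""
    log = []
    for action, *args in actions:
        getattr(cart, action)(*args)  # ("add", "book") -> cart.add("book")
        log.append(f"{action}({', '.join(args)}) ok")
    return log

# A recorded user flow: the kind of thing a tool generates from watching a user.
flow = [("add", "book"), ("add", "pen"), ("checkout",)]
cart = Cart()
print(run_flow(cart, flow))
# → ['add(book) ok', 'add(pen) ok', 'checkout() ok']
```

The value is in the log: every replayed run produces comparable, machine-readable results that the analysis step can mine for patterns.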
Used this way, AI in software testing can also help us analyze test results and identify the root cause of defects, helping developers fix issues quickly.
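A simple version of that root-cause analysis is grouping failures by a normalized “signature.” The failure messages below are invented; the point is only that once volatile details are stripped, one underlying problem surfaces from what looked like many separate failures.

```python
from collections import Counter

# Hypothetical failure messages collected across a test run.
failures = [
    "TimeoutError: /api/cart took 30s",
    "TimeoutError: /api/cart took 31s",
    "AssertionError: expected 200, got 500",
    "TimeoutError: /api/cart took 29s",
]

def signature(message: str) -> str:
    """Strip volatile details (here, the trailing timing) so alike failures group."""
    return message.split(" took ")[0]

# The most common signature points at the likeliest shared root cause.
top = Counter(signature(m) for m in failures).most_common(1)
print(top)
# → [('TimeoutError: /api/cart', 3)]
```

Real tools normalize far more aggressively (stack traces, timestamps, IDs), but the principle is the same: cluster first, then hand a developer one cluster instead of a pile of individual failures.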
In time, it will help us make software itself better, similar to AI functions such as automatic braking in smart cars, operating autonomous vehicles or onboard avionics for aircraft.
Meanwhile, many tools are already available that incorporate AI into testing work today.
AI Testing Tools
Tools that use machine learning or AI to assist testing come in various forms, and they may help us do better, more efficient testing.
Here are some of them I’m looking into to learn more about:
UiPath originally built automation libraries. They expanded into a desktop tool for building test automation and then into robotic process automation (RPA), including an end-to-end RPA platform.
Their business automation platform offering combines process and task mining using pattern recognition to build out recommendations and suggestions based on user inputs. It also has natural language capability native to the tool.
Testim began as a mobile testing tool provider using a “low code” model to test mobile applications. It records the user flows entered, and the more a user works with it, the more it recognizes repeated patterns and offers suggestions. It has a visual editor to use for test creation, focusing primarily on mobile apps, but you can use it reasonably well with web projects. Tricentis acquired Testim in 2022.
Started in 2021, askUI claims to be able to automate everything from web to native desktop apps. They assert the tool can build out workflows to simulate human actions. The tool also appears to find any visible element on the device screen without an object ID, and it can do cross-device automation, for example, exercising a 2FA integration.
Mabl is another low-code automation tool that offers features from API testing to auto-healing. It also uses smart element locators you can use in different frameworks, which can lead to more powerful tests.
Virtuoso uses a natural language format to describe tests. It has an AI-powered self-healing feature intended to reduce flakiness in tests. For example, if a class or classpath has been changed, it will change the code to accommodate the change. It uses a low-code approach, so onboarding is easy. There are loads of technical integrations and support for scripts.
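The self-healing idea behind several of these tools reduces to “when the primary locator stops matching, try known alternates and report what healed.” The sketch below is not any vendor’s implementation: the page dictionary stands in for a real DOM, and all selectors are invented.

```python
def find_element(page: dict[str, str], selectors: list[str]) -> tuple[str, str]:
    """Try selectors in priority order; return (selector_used, element).

    Raises LookupError when no candidate matches, which is when a
    human actually needs to look at the test.
    """
    for sel in selectors:
        if sel in page:
            return sel, page[sel]
    raise LookupError(f"no selector matched: {selectors}")

# The id changed in a release, so the primary locator no longer matches.
page = {"#submit-btn-v2": "<button>Submit</button>"}
sel, el = find_element(page, ["#submit-btn", "#submit-btn-v2", "text=Submit"])
if sel != "#submit-btn":
    print(f"healed: primary locator failed, used {sel!r} instead")
# → healed: primary locator failed, used '#submit-btn-v2' instead
```

The healing report matters as much as the fallback: a tool that silently patches locators hides drift in the application, while one that logs each heal gives the team a cue to update the test or question the UI change.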
There are other tools currently available we might label AI in software testing, such as Applitools and Qyrus. The ones here are merely a sample of AI-driven or assisted tools that are available now.
On the surface, the technology might appear immature at this point. However, the pattern recognition aspects of the various tools are reasonably mature and improving rapidly. The challenge seems not to be in the technology but in our understanding of how to use these tools well.
Depending on how risk-averse your organization is, it may be wise to wait while the technology, and our understanding of how to use it, continue to mature. Others may benefit from experimenting with existing tools or even building their own framework to train a machine learning and AI environment.
Patience and pragmatic analysis are good starting points. Learn and understand what you want to do and why, then learn how to apply the tools to that end. You will still hit issues even when you, your environment and your toolset are ready; there may be problems in production. However, as the issues dwindle after the first few encounters, the work becomes mundane and the glossy newness fades. We keep using a tool because it works every time, without question.