Johnston Harris: “QA is the Wild West today, and we turn Cowboys into Sheriffs”

Sandra Parker
6 min read · May 2, 2023

How to test only what matters and bring QA and development teams back together.

Johnston Harris is the CEO & Co-Founder of Appsurify, Inc., a software testing company that uses AI and Machine Learning to automate testing processes and improve software quality. Appsurify aims to help companies accelerate their development processes while reducing costs and improving overall product quality.

We’ve discussed the risks and drawbacks of bloated test suites, the challenges QA teams face, ways to close the rift between testers and developers, shifting testing left, and relying on cutting-edge AI-powered tools alongside human creativity. Yes, human QA engineers are still in the game!

Tell me more about your AI-powered Risk-based testing tool. Which QA pain points does it address?

Our smart QA tool utilizes AI to identify the specific areas in which developers have made changes, and automatically selects and executes only the tests associated with those areas.

Traditionally, when a change is made to an application, QA engineers must run all tests, which can be time-consuming and costly. However, our AI-powered approach helps QA teams get instant test feedback on a per-change basis by just running the tests that matter to catch bugs earlier, increase developer output and velocity, shift left, and reduce infrastructure costs.
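
To make the idea concrete, here is a minimal sketch in Python of change-based test selection. The paths and the `AREA_TO_TESTS` table are hypothetical, and this illustrates the general technique rather than Appsurify’s actual model:

```python
# Minimal sketch of change-based test selection (illustrative only):
# given the files touched by a commit, run only the tests that are
# mapped to those areas of the application.
import subprocess

# Hypothetical mapping from source areas to the tests that cover them,
# e.g. learned from historical commits and test results.
AREA_TO_TESTS = {
    "src/checkout/": ["tests/test_checkout.py", "tests/test_payments.py"],
    "src/search/": ["tests/test_search.py"],
}

def changed_files(base: str = "origin/main") -> list[str]:
    """Files touched since the base branch, via git."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def select_tests(files: list[str]) -> set[str]:
    """Pick only the tests mapped to areas the change actually touched."""
    selected: set[str] = set()
    for path in files:
        for area, tests in AREA_TO_TESTS.items():
            if path.startswith(area):
                selected.update(tests)
    return selected

if __name__ == "__main__":
    tests = select_tests(changed_files())
    if tests:
        subprocess.run(["pytest", *sorted(tests)], check=False)
```

In practice the mapping is learned rather than hand-written, which is where the training step described below comes in.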

We recognize that while many AI-powered tools are available to help with test creation, such as low-code platforms that automate it, the sheer volume of tests can often create confusion and uncertainty. In such cases, it becomes challenging to know whether the proper tests are being run, whether they test what they should, and whether the outcomes are relevant to current development needs. It is the “Wild West” of testing, but we know how to turn Cowboys into Sheriffs.

Is your software one of those “making-human-QA-redundant” innovations?

Absolutely not. Our tool is designed to support and empower QA engineers. AI-powered tools in any industry put pressure on those stuck with outdated practices and help those who prioritize efficiency shine.

That said, our smart tool requires some initial training, during which we observe the developers’ activity for about a week. This allows us to link developer commits to tests and associate specific tests with specific areas of the application. Once we turn on the model in the CI/CD pipeline and a commit comes through, we know exactly where to focus test execution.
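
As an illustration of what such a training step could learn, one deliberately simplified heuristic (not the actual model) is to count how often a test’s outcome reacts to changes in a given file across historical commits; all paths and data below are made up:

```python
# Illustrative sketch: build a file-to-test co-occurrence table from a
# week of observed (changed_files, failed_tests) pairs per commit.
from collections import defaultdict

history = [
    (["src/checkout/cart.py"], ["tests/test_checkout.py"]),
    (["src/search/index.py"], ["tests/test_search.py"]),
    (["src/checkout/cart.py", "src/checkout/tax.py"], ["tests/test_checkout.py"]),
]

cooccurrence: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
for files, failures in history:
    for f in files:
        for t in failures:
            cooccurrence[f][t] += 1

# After training, a new commit touching cart.py points straight at the
# checkout tests, ranked by how often they reacted to that file.
ranked = sorted(cooccurrence["src/checkout/cart.py"].items(),
                key=lambda kv: -kv[1])
print(ranked)  # [('tests/test_checkout.py', 2)]
```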

While AI can automate many testing processes, there will always be a need for human testers. The key is to ensure that testers and developers are in sync and that some processes are not automated or delegated to AI until the trust is there.

There’s no substitute for human creativity when determining whether a test is testing what it should. Manual testing is still necessary in some cases to ensure the app functions as it should. The AI leverages repository metadata to supercharge the work of QA engineers and developers, and it can only optimize what has already been built.

Our tool can help eliminate distractions and prioritize testing in focus areas, but it’s up to human engineers to dig deeper and ensure the tests are meaningful. Ultimately, our goal is to empower QA teams to catch bugs earlier in the process and improve the efficiency of the automated testing process, not to replace them.

What can QA teams learn from your approach right away?

People often spend insufficient time on analysis and instead focus on creating and repeatedly running too many tests. Working under pressure, QA engineers can be easily distracted and overwhelmed by test failures that are not relevant, wasting time and delaying deployments.

One example of a common distraction is flaky tests, which pass or fail intermittently without any changes to the underlying code. Flakiness can occur due to various environmental factors, such as a pop-up notification appearing on a device, a sudden battery failure, or loss of network connectivity. The reasons for flakiness are endless, leading to false alerts or false positives that distract QA teams.

Flakiness is a significant challenge and can be frustrating, but many teams accept it as a given and do not approach it systematically. Using our tool, you leverage AI to identify and isolate flaky tests, minimizing distractions. With this approach, when a build comes through, we can quickly distinguish between test failures caused by actual bugs and test failures caused by flakiness, providing cleaner feedback and more streamlined workflows.
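
One common heuristic behind this kind of classification, sketched below in simplified form with a simulated test, is to rerun a failing test without any code change: an inconsistent outcome points to flakiness, while a consistent failure points to a real bug. This is a generic illustration, not Appsurify’s implementation:

```python
import random

def classify_failure(run_test, retries: int = 3) -> str:
    """Call after a test has already failed once; run_test() returns True on pass."""
    outcomes = [run_test() for _ in range(retries)]
    if any(outcomes):
        # Same code, different result: environmental noise, so quarantine it.
        return "flaky"
    # Deterministic failure across reruns: likely a real bug.
    return "bug"

def simulated_flaky_test() -> bool:
    # Stands in for a UI test that passes ~50% of the time.
    return random.random() < 0.5

print(classify_failure(simulated_flaky_test))  # usually prints "flaky"
print(classify_failure(lambda: False))         # always prints "bug"
```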

It’s worth noting that test suites built on popular frameworks like Selenium, Cypress, and Playwright can have a flakiness rate of up to 5%, 10%, or even 20%! Every day, QA teams must sift through numerous test failures to determine whether a problem is a real bug or flakiness, leading to significant wasted time and frustration. It’s essential to address this challenge systematically to minimize distractions and optimize QA resources.

What are the most compelling challenges QA teams face now?

One major challenge is communication and synchronization between developers and testers. There has always been some degree of separation because testers have to wait until the build process is complete, which can take anywhere from 10 minutes to 20 hours, depending on the size of the test suite. The longer the wait, the bigger the gap between developers and testers.

Another challenge is context switching. Developers move on to other tasks, and testing is often not done until the very end of a sprint. Tests can be so lengthy that they are not included in the CI/CD pipeline at all. For instance, QA teams may not be able to run UI tests because a single test could take 10 minutes; with 100 of them, the run could take over 16 hours to complete!

This issue often leads to teams removing end-to-end and UI tests from the pipeline because they take too long to run. As a result, they are not part of the development feedback loop and are not in sync with how developers work. This causes developers to switch to other tasks, and QA engineers fall out of context, leading to missed defects and an extended testing cycle.

Our goal is to bring robust regression feedback into the daily CI/CD feedback loop by executing only the UI and E2E tests impacted by recent developer changes. This keeps the CI/CD pipeline lean while bringing in that valuable feedback, so teams deploy faster and more confidently. We like to call it a “Dynamic Smoke Test,” or “Incremental Testing.”
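
As a rough sketch of how such a gate might be wired into a pipeline, per-change pushes run only an impacted subset while a scheduled full run keeps coverage honest. The `NIGHTLY` flag, paths, and smoke-directory fallback are assumptions for illustration, not a real Appsurify API:

```python
import os
import subprocess

def impacted_tests() -> list[str]:
    # Placeholder for a change-to-test selection step like the sketch
    # earlier; here it simply falls back to a fast smoke directory.
    return ["tests/smoke/"]

if os.environ.get("NIGHTLY") == "1":
    # Scheduled full regression: nothing stays permanently untested.
    subprocess.run(["pytest", "tests/"], check=True)
else:
    # Per-change "Dynamic Smoke Test": only the impacted subset.
    subprocess.run(["pytest", *impacted_tests()], check=True)
```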

What should QA specialists focus on to overcome these hurdles and stay in the game?

Keep your skills sharp and find ways to make your job easier and more efficient; this not only prevents burnout but also makes you look better in the eyes of management. Use cutting-edge technologies, such as ours, that run a smart subset of tests after each small change to get clean feedback faster, instead of running 100% of the tests.

Remember that communication with the development team should be continuous. Prioritize iterative testing to help developers ship higher-quality code and get to market faster. Parallel testing is also a good approach up to a point, but it soon hits the law of diminishing returns: it is extremely costly, tests begin to collide, and it carries additional downsides.

While breaking test suites out into functional services may seem attractive, it can leave you managing numerous test suites, which is challenging and inefficient to maintain. If you do decide to break out your test suites, be prepared to invest significant time and resources in managing them; handling the spillover effects between suites can become a full-time job.

To avoid these challenges, it may be more efficient to keep your test suites centralized, even if it means having a more complex suite.

All said, we at Appsurify are proud to help QA experts worldwide address these challenges with our AI-powered Risk-based testing tool. However, I want to emphasize that human testers will always be necessary, and communication between QA engineers and developers will always be crucial to a project’s success.


Sandra Parker

Head of Business Development at QArea. I’m passionate about new technologies and how digital changes the way we do business.