Test execution is the process of running tests to verify specific functionality in a system. It’s a great way to find bugs in our applications, but over time we realized we needed to improve the speed and efficiency of our test execution. Here’s how we did it.
After four years of automated test development, we now have a significant collection of tests we can run. These tests can be organized and executed on demand and provide us with valuable data about the current state of our system.
Most popular automated test development platforms offer us some level of control over test execution: parallel suites, for example, to reduce execution times. Some platforms even allow us to dynamically inject test cases during runtime, depending on the current system state.
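To illustrate the kind of parallelism such platforms provide, here is a minimal sketch of running suites concurrently with Python's `concurrent.futures`; the suite names and the `run_suite` function are hypothetical stand-ins, not our actual framework.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_suite(name):
    # Placeholder for invoking a real test suite; here we simply
    # report success for illustration.
    return f"{name}: passed"

suites = ["auth", "billing", "search"]  # hypothetical suite names

# Run the suites in parallel to reduce total execution time.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_suite, s) for s in suites]
    for future in as_completed(futures):
        print(future.result())
```

The platform-level equivalent hides the thread pool behind configuration, but the trade-off is the same: shorter wall-clock time in exchange for suites that must not interfere with each other.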
But what if it’s not enough? What if we need even more control over execution? What if we want to use mixed-type pipelines and dynamically change test data or execution pool thread capabilities?
We execute tests from several different IPs because some functionality can only be tested over a specific tunnel connection. This exposes us to Cloudflare accessibility problems, request limit issues, and, occasionally, authentication errors.
Some more complex scenarios require the alteration of test data. This can only be done via microservice-based endpoints. Some of those endpoints are only accessible from an internal network. After a tunnel connection is established with an external server, a test execution bot can no longer reach the internal resources required for this test run.
Another problem is the number of requests being generated during test runs. For security purposes, all environments have strict request limits, but our test activity can easily reach those limits. Dynamic IPs prevent us from whitelisting IP addresses, and it becomes impossible to execute all test collections from one IP address.
After several solutions failed, we finally came up with a test strategy that involved modifying test data upfront.
If access cannot be gained from a specific IP, we fetch access tokens before making the connection. If test data needs to be altered via internal endpoints, we do so before the test run. We also bypass request limits by switching IPs during the run.
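The ordering matters: everything that needs the internal network must happen before the tunnel makes it unreachable. The sketch below shows that sequencing under stated assumptions; the endpoint behavior, token format, and exit IP are all hypothetical.

```python
# Hypothetical preparation pipeline: do all internal-network work
# *before* establishing the tunnel, which cuts off internal access.

def fetch_access_tokens(services):
    # Assumption: tokens come from an internal auth endpoint;
    # here we fabricate them for illustration.
    return {svc: f"token-for-{svc}" for svc in services}

def alter_test_data(records):
    # Assumption: alteration happens via internal microservice
    # endpoints; here we just mark each record as prepared.
    return [dict(r, prepared=True) for r in records]

def open_tunnel(exit_ip):
    # Placeholder for establishing the tunnel connection.
    return {"exit_ip": exit_ip, "connected": True}

# 1. Gather tokens while the internal network is still reachable.
tokens = fetch_access_tokens(["payments", "profiles"])
# 2. Alter test data via internal endpoints.
data = alter_test_data([{"id": 1}, {"id": 2}])
# 3. Only now establish the tunnel and start executing tests.
tunnel = open_tunnel("203.0.113.7")  # hypothetical exit IP

print(tunnel["connected"], all(r["prepared"] for r in data))
```

Once step 3 runs, the executor only needs the tokens and prepared data it already holds, so losing internal access no longer blocks the run.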
All of this would be impossible if we did not design a more sophisticated test executor.
We had to design a system that allowed full control of dynamic test execution. The project goal was to have control over the parallel and serial execution of tasks, bound to one executor.
First, data gathering and alteration happen via internal endpoints. A tunnel connection is established, and then parallel test execution takes place to minimize execution time.
Some test suites generate more requests than others, so we must track how many requests are being made and how many suites run in each parallel segment. At some point, the IP address has to change, and a new set of test suites is executed, again in parallel. This pipeline continues until all tests have been executed.
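The batching logic above can be sketched as a simple request budget: group suites into parallel batches that each stay under the per-IP limit, and rotate the IP between batches. The limit, suite names, and request counts below are illustrative assumptions, not our production values.

```python
REQUEST_LIMIT = 100  # hypothetical per-IP request cap

def batch_by_budget(suites, limit=REQUEST_LIMIT):
    """Group (suite, request_count) pairs into batches that each
    stay within the per-IP request limit. Each batch runs in
    parallel from one IP before the executor rotates to the next."""
    batches, current, used = [], [], 0
    for name, requests in suites:
        if used + requests > limit and current:
            batches.append(current)  # budget exhausted: rotate IP
            current, used = [], 0
        current.append(name)
        used += requests
    if current:
        batches.append(current)
    return batches

suites = [("auth", 40), ("billing", 50), ("search", 30), ("profile", 60)]
print(batch_by_budget(suites))
# Each inner list is one parallel segment executed from a single IP.
```

A real executor would also account for retries and dynamically injected test cases inflating the count mid-run, which is why we track requests as they happen rather than relying only on upfront estimates.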
Thanks to this solution, we can take full control of the test execution pool and execution sequence. In practice, that means we are able to adapt to ever-changing security measures and still provide valuable test execution reports. Our tests allow us to identify bugs faster than ever, enhancing the security and efficiency of all our applications.