Robust regression test execution reports show how important these tests are and the value they bring to the product, even more so when they are automated and run continuously. A regression plan is made up of all the tests aimed at continuously validating the implementations and business flows. As the application grows, so will the regression plan, while other plans, such as smoke plans that check the main business flows, will remain unchanged as long as those flows exist.
For a sound process at the testing level, functionalities must first be covered at the code level through unit and API tests, while the business tests (smoke, regression) are carried out at a later stage to keep increasing the value of the tests run against the application.
However, on many occasions, the high value of regression tests comes with a high cost in maintenance time. Every QA automation engineer has faced hours and days of work digging through test execution logs and investigating code, trying to figure out whether a test error is due to a possible bug, a programming error, or a false negative or positive. In this article, we develop the main guidelines for correcting the causes that make our regression plan difficult to maintain.
(Re-) Consider Test Automation Strategy
One of the main reasons we generally struggle to maintain regression suites is that our tests tend to perform too many actions and checks at once while running through user flows in the application, many of them unnecessarily. This class of so-called end-to-end (e2e) tests tends to be, by definition, difficult to maintain, slow, and fragile. They mostly represent a black hole of time in regression maintenance.
Minimizing the number of e2e tests and maximizing the number of functional tests should always be the starting point for any regression suite.
Functional tests (feature tests) are the opposite of e2e tests; they allow us to verify that an implementation or feature of the application works, plain and simple. Since an application is made up of hundreds of small implementations, testing each of them separately guarantees coverage, maintainability, and reliable results on the state of the application.
Consider the following example:
Scenario: User downloads a bill
Given the user logs into the application
And the user creates a new bill
And the user opens created bill
When the user presses download option
Then the bill is downloaded
This is an example of a “flaky test.” The objective of this test is to check the ‘Download’ functionality of the application, and a bill is necessary as a precondition. But do we need to create a new bill just to download it? What happens if the bill-creation implementation fails? In that case, not only would a separate test for creating a bill fail, but this test would also fail because the bill could not be created, even though creating the bill is not the target of the test; downloading it is. We must reduce as much as possible any preconditions that are unnecessary for what we want to check.
Therefore, the previous example could be adjusted as follows:
Scenario: User downloads a bill
Given the user logs into the application
And the user opens bill ‘X1023’
When the user presses download option
Then the bill is downloaded
Given that in our hypothetical scenario it is still necessary to open the bill in order to download it, we must open it before pressing the option. In this case, however, the bill already exists in the system where we run the test, so we minimize the possibility that a precondition causes a failure in our test.
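As an illustration, here is a minimal sketch of how such a step might be implemented, assuming a Java framework with Cucumber and Selenium; the element IDs and page structure are hypothetical and depend entirely on the application under test.

import io.cucumber.java.en.Given;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class BillSteps {

    private final WebDriver driver;

    public BillSteps(WebDriver driver) {
        this.driver = driver;
    }

    // Opens a bill that is already seeded in the test environment,
    // so no creation flow runs as a precondition.
    @Given("the user opens bill {string}")
    public void theUserOpensBill(String billId) {
        driver.findElement(By.id("bill-search-input")).sendKeys(billId); // hypothetical ID
        driver.findElement(By.id("bill-search-button")).click();         // hypothetical ID
        driver.findElement(By.id("bill-row-" + billId)).click();         // hypothetical ID
    }
}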
Support Your Tests With API Calls to Your Advantage
Another possibility that tends to be forgotten is using API calls to support our UI tests; on other occasions, they are avoided outright under the argument of “keeping UI tests faithful to the user experience.” Obviously, UI tests should interact with the application as much as possible the way a real user would, but care must be taken when handling preconditions, as they are a potential source of unwanted errors that compromise the actual validation of our test. In addition, our regression suite would already contain tests whose final validation checks (faithfully to the user experience) what are merely preconditions in other tests.
Relying on API calls for our UI tests will not only make test execution much faster, but it will also not alter the behavior of the application: behind every form that interacts with the backend, there is always a call to an endpoint that passes it certain information. We can call that same endpoint directly, sending it the information we would otherwise submit through the form.
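As a sketch of what such support could look like, assuming the form posts JSON and using Java’s built-in HttpClient; the base URL and endpoint paths are hypothetical:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiSupport {

    private final HttpClient client = HttpClient.newHttpClient();

    // Sends the same JSON payload the UI form would send behind the scenes.
    public HttpResponse<String> post(String path, String jsonBody) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://app.example.com" + path)) // hypothetical base URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }
}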
Suppose the following scenario: in order to verify a user’s account in the system, the account must be new. Therefore, we cannot rely on a user account already stored in the system, as in the previous example.
Scenario: User verifies a created account
Given the user opens sign up form
And the user submits the sign up form
And the user logs into the application with created user
When the user verifies the account
Then the user sees a message of verification completed
To create a new account, the user must access the registration form, fill in all of its fields, and send the information to create the new user. Knowing the information we send through the endpoint, we can define the test as follows:
Scenario: User verifies a created account
Given a random account is created with the following data
| field | value |
| firstname | John |
| lastname | Doe |
| email | test+random_number@company.com |
| password | Test123456 |
And the user logs into the application with created user
When the user verifies the account
Then the user sees a message of verification completed
The Given step will call the same endpoint that the registration form uses, with the same information that would be sent through the form. At the end of the step, we obtain the generated email with which the user can access the system.
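One possible implementation of that Given step, sketched under the same assumptions as before (the /api/signup path is hypothetical, and the helper is the ApiSupport class sketched earlier): it reads the data table, makes the email unique, and posts the payload to the endpoint.

import io.cucumber.datatable.DataTable;
import io.cucumber.java.en.Given;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class AccountSteps {

    private String createdEmail; // shared with later steps, such as the login step

    @Given("a random account is created with the following data")
    public void aRandomAccountIsCreated(DataTable dataTable) throws Exception {
        // Turn the | field | value | rows into a map.
        Map<String, String> data = new HashMap<>();
        for (Map<String, String> row : dataTable.asMaps()) {
            data.put(row.get("field"), row.get("value"));
        }

        // One way to realize 'random_number': a timestamp makes the email unique per run.
        data.computeIfPresent("email",
                (k, v) -> v.replace("random_number", String.valueOf(System.currentTimeMillis())));
        createdEmail = data.get("email");

        // Naive JSON serialization; enough for flat string fields in a sketch.
        String json = data.entrySet().stream()
                .map(e -> "\"" + e.getKey() + "\":\"" + e.getValue() + "\"")
                .collect(Collectors.joining(",", "{", "}"));

        // Same endpoint the registration form calls behind the scenes.
        new ApiSupport().post("/api/signup", json);
    }
}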
Although implementing this API call in the automation framework may seem complex at first sight, the benefits of supporting UI tests with the application’s API greatly outweigh the initial investment in preparing the framework with this capability. It also opens up the possibility of carrying out backend tests in the same framework, something that is undoubtedly beneficial for guaranteeing the correct functioning of the system API.
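Once the framework has that capability, a plain backend check becomes a few lines. A sketch with JUnit 5, where the endpoint and the expected status code are assumptions about this hypothetical API:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class SignupApiTest {

    @Test
    void signupReturnsCreated() throws Exception {
        // Timestamped email so the account is new on every run.
        String json = String.format(
                "{\"firstname\":\"John\",\"lastname\":\"Doe\","
                + "\"email\":\"test+%d@company.com\",\"password\":\"Test123456\"}",
                System.currentTimeMillis());
        var response = new ApiSupport().post("/api/signup", json);
        assertEquals(201, response.statusCode()); // assumed 201 Created on success
    }
}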
Speak With Developers for a Consistent Locator Strategy: ID Over XPath
Undoubtedly, communication between the QA and development teams is key to achieving reliable and robust tests. Among the different ways of selecting the UI elements of an application, two stand out above all:
- ID
- XPath
IDs are unique identifiers of the UI elements of the application. If defined correctly, IDs are immutable names, easily accessible, and supported by the main software testing tools such as Selenium. However, they require explicit definition and maintenance by the development team.
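With Selenium in Java, for example, selecting an element by ID is a one-liner; the ‘download-button’ ID below is a hypothetical name agreed upon with the developers:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class BillPage {

    private final WebDriver driver;

    public BillPage(WebDriver driver) {
        this.driver = driver;
    }

    // Stable lookup: survives layout changes as long as the ID is kept.
    public void pressDownload() {
        WebElement downloadButton = driver.findElement(By.id("download-button"));
        downloadButton.click();
    }
}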
XPath (XML Path Language) is a language for building expressions that traverse and process an XML document. The DOM tree of an application is structured like an XML document, and tools like Selenium allow us to select the elements of the application through that structure without the explicit intervention of the development team, letting us automate tests quickly against those elements.
However, the use of XPath also has its drawbacks. Look at the example below: