When testing applications, it is important to consider different test environments. Properties of these environments, such as web-browser types and underlying platforms, may cause an application to exhibit different types of failures. As applications evolve, they must be regression tested across these environments. Because there are many environments to consider, this process can be expensive, resulting in delayed feedback about failures in applications.
In this work, four techniques and two hybrid techniques are proposed for providing a developer with faster feedback on failures when regression testing applications across different test environments. The proposed techniques draw on methods used in test case prioritization; in this case, however, test environments are prioritized based on information about recent and frequent failures.
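To make the idea concrete, the following is a minimal sketch of prioritizing test environments by their failure history. All names, the decay weight, and the scoring scheme are illustrative assumptions, not the paper's actual techniques: recent failures are weighted more heavily than old ones, so an environment that failed in the latest runs is tested first.

```python
# Hypothetical sketch: order test environments so that ones with
# recent and frequent failures run first. The scoring scheme and
# decay weight below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EnvHistory:
    name: str
    # One boolean per past regression run, oldest first:
    # True means at least one test failed in this environment.
    outcomes: list = field(default_factory=list)

def failure_score(env: EnvHistory, decay: float = 0.8) -> float:
    """Combine failure frequency and recency: each past failure adds
    weight, and the weight decays geometrically with age."""
    score = 0.0
    weight = 1.0
    for failed in reversed(env.outcomes):  # walk from most recent run
        if failed:
            score += weight
        weight *= decay
    return score

def prioritize(envs):
    """Return environments sorted so likely-failing ones come first."""
    return sorted(envs, key=failure_score, reverse=True)

histories = [
    EnvHistory("firefox-linux", [False, False, True, True]),   # failing lately
    EnvHistory("chrome-windows", [True, False, False, False]), # failed long ago
    EnvHistory("safari-macos", [False, False, False, False]),  # stable
]
print([e.name for e in prioritize(histories)])
# → ['firefox-linux', 'chrome-windows', 'safari-macos']
```

Because the score uses only pass/fail history, such an ordering can be recomputed cheaply after every CI run, unlike coverage-based prioritization, which requires instrumented test executions.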
The proposed techniques are especially effective for supporting Continuous Integration (CI) practices. In CI environments, there is a short time interval between runs of regression tests: developers frequently check their code into the mainline codebase, and regression tests relevant to that code need to be performed in applicable environments. Existing cost-effective regression testing techniques, which rely on code coverage, cannot keep up with the pace of change that occurs in such processes.
The proposed techniques are empirically studied on five non-trivial, popular open source web applications. The results show that the proposed techniques can be cost-effective: they generally detect more failures faster than two baseline approaches, in which test environments are either not prioritized or are randomly ordered. In addition, the proposed prioritization techniques are compared and analyzed to determine which are most cost-effective for each experiment object. Furthermore, this study considers developer interests in CI environments, showing that the proposed techniques can also give faster feedback on test environments whose test results are of interest to developers.