I just remembered an older Twitter thread about measuring test effectiveness and what to do with tests that are no longer valuable. What follows is my opinion.
Automation is a vital part of our test strategy. However, it is not something that is implemented once and then runs forever. While the application under test evolves, your automated test suite has to evolve along with it.
Any serious automation project should be treated like any other software project: it has to be refactored, extended, upgraded, or even ported to a completely different technology. During these processes, the test scenarios (and their implementations) need to be reviewed regularly.
Practices that come naturally when developing the application under test should be just as natural for its tests, e.g. code reviews, planning sessions, and retrospectives.
Tests can be more or less effective depending on different factors:
While all of those points are important, the main criterion for the most difficult decision, the one about the future of a test case, is maintenance cost versus value.
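To make that trade-off concrete, here is a minimal sketch of how such a decision could be expressed in code. The metrics (maintenance hours, defects caught) and the thresholds are invented for illustration; they are not from the original post.

```python
# Hedged illustration of the maintenance-versus-value trade-off.
# Metric names and thresholds are assumptions made for this example.

def recommend(maintenance_hours_per_month: float,
              defects_caught_per_month: float) -> str:
    """Return a rough recommendation for a test case's future."""
    if maintenance_hours_per_month == 0:
        return "keep"  # free to run, no reason to drop it
    ratio = defects_caught_per_month / maintenance_hours_per_month
    if ratio >= 1.0:
        return "keep"    # the test pays for itself
    if ratio >= 0.25:
        return "review"  # borderline: simplify or stabilize it
    return "retire"      # costs more than the value it delivers

# Example: a test needing 4 hours of upkeep but catching 8 defects
# per month is clearly worth keeping.
decision = recommend(4, 8)
```

The exact numbers will differ per team; the point is that the decision becomes explicit and repeatable instead of a gut call.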
So what does this mean for your tests?
It is important to establish a defined process for dealing with scenarios that need maintenance, and especially with flaky tests. Handling them only ad hoc will eventually lead to a test suite that is trusted less and less over time and is finally ignored completely.
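One way to make flaky-test handling explicit is to detect flakiness from recent run history and quarantine those tests instead of letting them erode trust in the whole suite. The sketch below is a simplified assumption of how that could look; the helper names and the flakiness rule are mine, not the author's.

```python
# Hypothetical sketch: detect flaky tests from recent run history.
# A test that both passed and failed within its recent runs is
# treated as flaky; a test that always fails is broken, not flaky.

def is_flaky(results: list[str], min_runs: int = 5) -> bool:
    """Flaky = inconsistent outcomes across enough recent runs."""
    if len(results) < min_runs:
        return False
    return "pass" in results and "fail" in results

def triage(history: dict[str, list[str]]) -> tuple[list[str], list[str]]:
    """Split tests into keep/quarantine buckets from their histories."""
    keep, quarantine = [], []
    for name, results in history.items():
        (quarantine if is_flaky(results) else keep).append(name)
    return keep, quarantine

history = {
    "test_login":    ["pass", "pass", "pass", "pass", "pass"],
    "test_checkout": ["pass", "fail", "pass", "fail", "pass"],  # flaky
    "test_search":   ["fail", "fail", "fail", "fail", "fail"],  # broken
}
keep, quarantine = triage(history)
```

Quarantined tests still run, but outside the trusted gate, until they are fixed or retired.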
Typically, we do it like this:
This is not the complete process; I just wanted to illustrate the main idea here.
Whenever a test fails multiple times and cannot be fixed quickly, it is reviewed against the criteria mentioned above.
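The "failing multiple times" trigger can be sketched as a small report over a failure log. The threshold and log format here are assumptions for the example, not a prescribed implementation.

```python
# Hedged sketch: flag tests that failed at least `threshold` times
# in the recent failure log, so they get a maintenance-vs-value review.
from collections import Counter

def needs_review(failure_log: list[str], threshold: int = 3) -> list[str]:
    """Return test names with >= threshold failures, most failures first."""
    counts = Counter(failure_log)
    return [name for name, n in counts.most_common() if n >= threshold]

# Example log: each entry is one recorded failure of a test.
failure_log = [
    "test_checkout", "test_search", "test_checkout",
    "test_checkout", "test_search", "test_login",
]
flagged = needs_review(failure_log)
```

Running such a report on a schedule turns the review into a routine rather than a reaction to frustration.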
This has to be done regularly, as it keeps the test suite fresh, fast, responsive, and meaningful.