Recently, I read a Tweet by Maaret Pyhäjärvi concerning the value of test automation. She claimed that "Lack of test automation moves known knowns to unknown knowns."
I strongly agree: if you don't have automation in place to "remind" you of known issues that might come up again, you will forget about them. In my view, test reporting plays an important role in this as well.
This is her complete tweet:
Lack of test automation moves known knowns to unknown knowns. We knew we knew. But when we don't keep testing if that is true under conditions of change, amnesia hits us.— Maaret Pyhäjärvi (@maaretp) April 6, 2020
As you may remember from my earlier blog post about Test automation in relation to exploratory testing, I stressed the point that test automation is merely a part of exploration and not something that should be viewed separately.
An essential outcome of exploratory testing - and ultimately a prerequisite for fixing the issues that are found through it - is a set of well-defined reproduction steps. If a software tester is unable to describe what happened before an issue occurred, it is unlikely that the issue can ever be deliberately reproduced and fixed.
In this regard, taking good notes while exploring is essential to keep track of which steps were performed. The same is true for test automation reporting.
It is not enough to just have an outcome - passed or failed - for a specific test case. It always has to be apparent which steps were traversed in which order and why something ultimately failed. A test report should also give you key data about the environment the test was executed in, the state of the application at the time of failure, the specific URLs and parameters that were accessed, the exact time each operation took, and so on.
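To make this concrete, here is a minimal sketch of what such a structured test result could look like as a data model. The class names, fields, and example data are all hypothetical illustrations, not the format of any particular reporting tool:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str
    duration_ms: int
    status: str  # "passed" or "failed"

@dataclass
class TestResult:
    name: str
    environment: str  # e.g. the browser and OS the test ran on
    url: str          # the page or endpoint that was accessed
    steps: list = field(default_factory=list)

    @property
    def status(self) -> str:
        # a test counts as failed as soon as any of its steps failed
        return "failed" if any(s.status == "failed" for s in self.steps) else "passed"

    def summary(self) -> str:
        # human-readable summary: outcome, environment, URL, then each step in order
        lines = [f"{self.name} [{self.status}] on {self.environment} ({self.url})"]
        for i, step in enumerate(self.steps, start=1):
            lines.append(f"  {i}. {step.description} - {step.status} ({step.duration_ms} ms)")
        return "\n".join(lines)

result = TestResult(
    name="Checkout with voucher",
    environment="Chrome 121 / Windows 11",
    url="https://shop.example.com/checkout",
    steps=[
        Step("Open checkout page", 412, "passed"),
        Step("Apply voucher code", 230, "passed"),
        Step("Submit order", 1050, "failed"),
    ],
)
print(result.summary())
```

Even this tiny model captures the order of steps, the environment, the accessed URL, and per-operation timings - exactly the context a reader needs to understand why something failed.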
All of this needs to be presented in a concise and well-structured format that does not overwhelm its viewers. If it is impossible to grasp at a glance what failed and why (e.g. if it is just a big blob of numbers and stack traces thrown at you), the report is likely to be disregarded for further exploration. This decreases trust in the reports and induces wilful ignorance.
A test report should never stand on its own. It is rarely possible to include all information in there. Instead, it should be linked to other important data sources that give people more background information such as screenshots and video recordings (in case of UI tests), detailed test logs, network logs, dashboards, etc.
Again, it helps a lot if these sources are easily accessible from the central report. Copying and pasting information from the test report into another source in order to find connected data should be avoided at all costs, as it disrupts a tester's flow and focus.
It also helps if the test report contains an overview of failed test scenarios on a dedicated page, giving you a big-picture view of failure clusters. This alone can hint at a common cause: if all tests that require a user to log in fail, it is very likely that authentication is broken.
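The clustering idea above can be sketched in a few lines: count how often each shared precondition appears among the failed scenarios, and the dominant one points at a likely common cause. The scenario names and tags here are hypothetical example data:

```python
from collections import Counter

# failed scenarios paired with the preconditions (tags) they depend on
failed_scenarios = [
    ("View order history", ["login"]),
    ("Change password", ["login"]),
    ("Add item to wishlist", ["login"]),
    ("Browse catalog", ["catalog"]),
]

# count how often each precondition appears among the failures
clusters = Counter(tag for _, tags in failed_scenarios for tag in tags)

# the most common precondition is the prime suspect for a shared root cause
tag, count = clusters.most_common(1)[0]
print(f"{count} of {len(failed_scenarios)} failures share precondition '{tag}'")
```

Here three of the four failures share the login precondition, so a broken authentication flow would be the first thing to investigate.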
It is crucial that the test reports are accessible to everyone who needs them, not only the testers. It helps immensely if they are stored on a centrally accessible system so they can be shared, linked, and quoted.
I hope this gives you an idea of why test reports play an important role in exploration and why they should not be underestimated.
If you are interested in how we do it, check out this example report from our open source reporting solution Cluecumber.