How to tell if security test results are useful, misleading or just rubbish?
Latest reports now online.
In security testing circles there is a theoretical test used to illustrate how misleading some test reports can be.
The chair test
For this test you need three identical chairs, packaging for three anti-virus products (in the old days products came on discs in a cardboard box) and an open window on a high floor of a building.
The methodology of this test is as follows:
- Tape each of the boxes to a chair. Do so carefully, such that each is fixed in exactly the same way.
- Throw each of the chairs out of the window, using an identical technique.
- Examine the chairs for damage and write a comparative report, explaining the differences found.
- Conclude that the best product was the one attached to the least damaged chair.
The problem with this test is obvious: the conclusions are not based on any useful reality.
The good part about this test is that the tester created a methodology and tested each product in exactly the same way.* At least it was an ‘apples to apples’ test, in which similar products were tested in the same manner. Hopefully any tester running the chair test publishes the methodology, so that readers can recognise that the test was stupidly meaningless. But that is not a given.
How to tell if a security test is useful
Sometimes test reports make very vague statements about “how we tested”.
When evaluating a test report of anything, not only security products, we advise that you check how the testing was performed and whether it complies with a recognised testing Standard. The Anti-Malware Testing Standards Organization (AMTSO) Standard (see below) is a good one.
Headline-grabbing results (e.g. “Anti-virus is Dead!”) catch the eye, but we need to focus on practical realities when trying to work out how best to protect our systems from cyber threats. That means having enough information to judge a test report’s value. Don’t blindly trust that the test was conducted correctly.
*Although some pedants might require that the tester release each chair from the window at exactly the same time. Possibly from windows far enough apart that the chairs would not entangle mid-air and skew the results in some way.
Find out more
If you spot a detail in these reports that you don’t understand, or would like to discuss, please contact us via our Twitter or LinkedIn accounts.
SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests, please visit our website and follow us on Twitter.
These test reports were funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in these reports were able to request early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Testing Protocol Standard v1.0. To verify its compliance, please check the AMTSO reference link at the bottom of page three of each report, or here.
UPDATE (10th June 2019): AMTSO found that these tests complied with the AMTSO Standard.
Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or LinkedIn to receive updates and future reports.
Posted on June 5th, 2019 by SE Labs Team