Special Edition is the blog for security testing business SE Labs. It explains how we test security products, reports on the internet threats we find and provides security tips for businesses, other organisations and home users.
Wednesday, 11 April 2018
Latest report now online.
Predicting brand-new threats is hard, which is why anti-virus has been declared dead on more than one occasion.
Security companies have, for some years, developed advanced detection systems, often labelled as using 'AI', 'machine learning' or some other technical-sounding term. The basic idea is that past threats are analysed in depth to identify what future threats might look like. Ideally the result is a product that can detect potentially malicious files or behaviour before an attack succeeds.
(We wrote a basic primer to understanding machine learning a couple of years ago.)
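To make the idea concrete, here is a toy sketch of learning from past threats: extract simple numeric features from labelled samples, fit a minimal model, then score a file the model has never seen. The features, sample values and nearest-centroid approach are all invented for illustration; real products use far richer data and models.

```python
# Toy illustration of ML-based malware detection: learn from labelled past
# samples, then score a file never seen in training. Features and samples
# are hypothetical; real detection models are vastly more sophisticated.
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class NearestCentroid:
    """Minimal 'machine learning' model: one centroid per class label."""
    def fit(self, samples, labels):
        self.centroids = {
            label: centroid([s for s, l in zip(samples, labels) if l == label])
            for label in set(labels)
        }
        return self

    def predict(self, sample):
        # Assign the label whose class centroid is closest to the sample.
        return min(self.centroids, key=lambda l: distance(self.centroids[l], sample))

# Hypothetical features per file: [entropy, imported-API count, packer flag]
past_samples = [
    [7.9, 120, 1],   # known malware
    [7.5, 150, 1],   # known malware
    [4.2, 30, 0],    # known clean
    [5.0, 45, 0],    # known clean
]
past_labels = ["malware", "malware", "clean", "clean"]

model = NearestCentroid().fit(past_samples, past_labels)

# A 'future' file the model never saw during training:
print(model.predict([7.7, 135, 1]))  # high entropy, many imports, packed
```

The training data is fixed at fit time, which is exactly why the quality of a model's predictions on genuinely new threats is the interesting question.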
So does this AI stuff really work? Is it possible to predict new types of evil software? Certainly investors in tech companies believe so, piling hundreds of millions of funding dollars into new start-ups in the cyber defence field.
We prefer lab work to Silicon Valley speculation, though, and built a test designed to challenge the often magical claims made by 'next-gen' anti-malware companies.
With support from Cylance, we took four of its AI models and exposed them to threats seen in well-publicised attacks (e.g. WannaCry, Petya) months and even years after the training that created those models.
It’s the equivalent of sending an old product forward in time and seeing how well it works with future threats. To find out how the Cylance AI models fared, and to discover more about how we tested, please download our report for free from our website.
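The "sending a product forward in time" approach amounts to a temporal holdout test: the model may only learn from samples dated before a cutoff, and is then judged on threats that appeared afterwards. The sketch below shows the shape of such a test; the dates, samples and entropy-threshold detection rule are all hypothetical, not details of our methodology or of Cylance's models.

```python
# Sketch of a temporal holdout test: train only on data available before a
# cutoff date, then evaluate on threats that appeared later. All dates,
# samples and the detection rule are invented for illustration.
from datetime import date

samples = [
    {"name": "old_threat_a",  "seen": date(2015, 6, 1),  "entropy": 7.8, "malicious": True},
    {"name": "old_clean_b",   "seen": date(2015, 7, 1),  "entropy": 4.1, "malicious": False},
    {"name": "wannacry_like", "seen": date(2017, 5, 12), "entropy": 7.6, "malicious": True},
    {"name": "new_clean_c",   "seen": date(2017, 6, 1),  "entropy": 4.5, "malicious": False},
]

CUTOFF = date(2016, 1, 1)
train = [s for s in samples if s["seen"] < CUTOFF]
test = [s for s in samples if s["seen"] >= CUTOFF]

# 'Training': pick an entropy threshold halfway between the classes seen so far.
bad = [s["entropy"] for s in train if s["malicious"]]
good = [s["entropy"] for s in train if not s["malicious"]]
threshold = (min(bad) + max(good)) / 2

# 'Evaluation': score only the threats that appeared after training ended.
correct = sum((s["entropy"] > threshold) == s["malicious"] for s in test)
print(f"{correct}/{len(test)} post-cutoff samples classified correctly")
```

The key discipline is the cutoff: nothing dated after it may influence training, so a good score genuinely reflects prediction rather than recall.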
Follow us on Twitter and/or Facebook to receive updates and future reports.