A common criticism of computer security products is that they can only protect against known threats. When new attacks are detected and analysed, security companies produce updates based on this new knowledge. It's a reactive approach that can give attackers a significant window of opportunity.
Security companies have, for some years, developed advanced detection systems, often labelled as using 'AI', 'machine learning' or some other technical-sounding term. The basic idea is that past threats are analysed in depth to identify what future threats might look like. Ideally the result is a product that can detect potentially malicious files or behaviour before an attack succeeds.
(We wrote a basic primer to understanding machine learning a couple of years ago.)
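As a very rough illustration of the idea, the sketch below trains a classifier on a handful of labelled samples and then scores a file it has never seen before. Everything here is hypothetical: the features (file size, section entropy, count of suspicious imports) and the sample data are invented for illustration, and real products use far richer inputs, but the train-then-score pattern is the essence of the approach.

```python
# Minimal sketch of the idea behind ML-based malware detection.
# Feature names and data are entirely hypothetical; real products
# extract far richer features from files and observed behaviour.
from sklearn.ensemble import RandomForestClassifier

# Each sample is a vector of numeric features extracted from a file,
# e.g. size in bytes, entropy of packed sections, suspicious API imports.
X_train = [
    [120_000, 7.9, 42],   # known-bad sample
    [450_000, 7.6, 35],   # known-bad sample
    [80_000,  4.1, 3],    # known-good sample
    [300_000, 5.0, 5],    # known-good sample
]
y_train = [1, 1, 0, 0]    # 1 = malicious, 0 = benign

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# A file never seen during training is scored by the same model.
new_file_features = [[200_000, 7.8, 40]]
print(model.predict_proba(new_file_features))  # per-class probabilities
```

The point of the exercise is that the model never needs a signature for the new file: it generalises from patterns in the samples it was trained on, which is exactly the claim that 'next-gen' products make and that a test needs to probe.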
So does this AI stuff really work? Is it possible to predict new types of evil software? Certainly investors in tech companies believe so, piling hundreds of millions of funding dollars into new start-ups in the cyber defence field.
We prefer lab work to Silicon Valley speculation, though, and built a test designed to challenge the often magical claims made by ‘next-gen’ anti-malware companies.
With support from Cylance, we took four of its AI models and exposed them to threats that appeared in well-publicised attacks (e.g. WannaCry, Petya) months and even years after the models were trained.
It’s the equivalent of sending an old product forward in time and seeing how well it works with future threats. To find out how the Cylance AI models fared, and to discover more about how we tested, please download our report for free from our website.
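For readers who want a concrete picture of what a time-shifted test looks like in principle, here is a minimal sketch of a 'holdout in time' evaluation: a model is trained only on samples first seen before a cutoff date, then measured against samples that appeared afterwards. The dataset, feature columns and cutoff date are all hypothetical, and this is not the methodology from our report, which is described in the report itself.

```python
# Minimal sketch of a time-shifted ("holdout in time") evaluation:
# train only on samples first seen before a cutoff date, then measure
# detection on samples that appeared afterwards. The dataset, feature
# columns and cutoff date below are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# Hypothetical dataset: one row per file, with numeric features,
# a label (1 = malicious, 0 = benign) and the date it was first seen.
samples = pd.read_csv("samples.csv", parse_dates=["first_seen"])

cutoff = pd.Timestamp("2015-05-01")             # "freeze" the model here
train = samples[samples.first_seen < cutoff]
future = samples[samples.first_seen >= cutoff]  # threats from the model's "future"

feature_cols = ["size", "entropy", "suspicious_imports"]
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(train[feature_cols], train["label"])

# How well does the frozen model handle threats it has never seen?
predictions = model.predict(future[feature_cols])
print(classification_report(future["label"], predictions))
```

Keeping the training data strictly older than the test data is what makes the result meaningful: the model cannot have 'seen' the future threats in any form, so any detections genuinely reflect predictive ability rather than updated signatures.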