Special Edition is the blog for security testing business SE Labs. It explains how we test security products, reports on the internet threats we find and provides security tips for businesses, other organisations and home users.

Wednesday, 10 April 2019

Enemy Unknown: Handling Customised Targeted Attacks

Detecting and preventing threats in real-time

Computer security products are designed to detect and protect against threats such as computer viruses, other malware and the actions of hackers.

A common approach is to identify existing threats and to create patterns of recognition, in much the same way as the pharmaceutical industry creates vaccinations against known biological viruses or police issue wanted notices with photographs of known offenders.

The downside to this approach is that the virus or criminal must already be known to be harmful, which usually means someone has become sick or a crime has been committed. It would be better to detect new infections and crimes in real-time and to stop them in action before any damage is caused.
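The pattern-of-recognition approach described above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration of hash-based signature matching, not how any particular product works; the database and sample data are invented:

```python
# Minimal sketch of signature-based detection: compare a file's hash
# against a database of hashes of known-bad files. All names and
# sample data here are hypothetical illustrations.
import hashlib

KNOWN_BAD_HASHES = {
    # A real signature database holds millions of entries and is
    # updated continuously; this single made-up entry stands in for it.
    hashlib.sha256(b"malicious payload").hexdigest(),
}

def is_known_threat(file_bytes: bytes) -> bool:
    """Return True if the file's hash matches a known-bad signature."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

print(is_known_threat(b"malicious payload"))  # matches a signature
print(is_known_threat(b"brand new threat"))   # unknown threats slip through
```

The second call shows the limitation discussed above: a threat that has never been catalogued produces no match, which is why real-time behavioural detection is an attractive complement.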

This approach is becoming increasingly popular in the cyber security world.

Deep Instinct claims that its D-Client software is capable of detecting not only known threats but those that have not yet hit computer systems in the real world. Determining the accuracy of these claims requires a realistic test that pits the product against known threats and those typically crafted by attackers who work in a more targeted way, identifying specific potential victims and moving against them with speed and accuracy.

This test report used a range of sophisticated, high-profile threat campaigns such as those believed to have been directed against the US Presidential election in 2016, in addition to directing more targeted attacks against the victim systems using techniques seen in well-known security breaches in recent months and years.

The results show that Deep Instinct D-Client provided a wide range of detection and threat blocking capability against well-known and customised targeted attacks, without interfering with regular use of the systems upon which it was deployed. The deep learning system was trained in August 2018, six months before the customised targeted threats were created.

Latest report now online.

Wednesday, 20 March 2019

Assessing next-generation protection

Malware scanning is not enough. You have to hack, too.

Latest report now online.

The amount of choice when trialling or buying endpoint security is at an all-time high. It has been 36 years since 'anti-virus' first appeared and, in the last five years, the number of companies innovating and selling products designed to keep Windows systems secure has exploded.

And whereas once vendors of these products generally used non-technical terms to market their wares, now computer science has come to the fore. No longer are we offered 'anti-virus' or 'hacker protection' but artificial intelligence-based detection and response solutions. The choice has never been greater, nor has the confusion among potential customers.

While marketing departments appear to have no doubt about the effectiveness of their products, the fact is that without in-depth testing no-one really knows whether or not an Endpoint Detection and Response (EDR) agent can do what it is intended to do.

Internal testing is necessary but inherently biased: 'we test against what we know'. Thorough testing, including the full attack chains presented by threats, is needed to show not only detection and protection rates, but response capabilities.

EventTracker asked SE Labs to conduct an independent test of its EDR agent, running the same tests as are used against some of the world's most established endpoint security solutions, as well as some of the newer ones.

This report shows EventTracker's performance in this test. The results are directly comparable with the public SE Labs Enterprise Endpoint Protection (Oct – Dec 2018) report, available here.

Wednesday, 20 February 2019

Can you trust security tests?

Clear, open testing is needed and now available

Latest reports now online.

A year ago we decided to put our support behind a new testing Standard proposed by the Anti-Malware Testing Standards Organization (AMTSO). The goal behind the Standard is good for everyone: if testing is conducted openly then testers such as us can receive due credit for doing a thorough job; you the reader can gain confidence in the results; and the vendors under test can understand their failings and make improvements, which then creates stronger products that we can all enjoy.

The Standard does not dictate how testers should test. There are pages of detail, but I can best summarise it like this:
Say what you are going to do, then do it. And be prepared to prove it.
(Indeed, a poor test could still comply with the AMTSO Standard, but at least you would be able to understand how the test was conducted and could then judge its worth with clear information and not marketing hype!)

We don't think that it's unreasonable to ask testers to make some effort to prove their results. Whether you are spending £30 on a copy of a home anti-virus product or several million on a new endpoint upgrade project, if you are using a report to help with your buying decision you deserve to know how the test was run, whether or not some vendors were at a disadvantage and if anyone was willing and able to double-check the results.

Since the start of the year we have put our endpoint reports through the public pilot and then, once the Standard was officially adopted, through the full public process. Our last reports were judged to comply with the AMTSO Standard and we've submitted these latest reports for similar assessment.

At the time of writing we didn't yet know whether the reports from this round of testing complied; we're now pleased to report that they did. You can confirm this by checking the AMTSO reference link at the bottom of page three of this report or here.

If you spot a detail in this report that you don't understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define 'threat intelligence' and how we use it to improve our tests please visit our website and follow us on Twitter.

This test report was funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in this report were provided with early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Standard v1.0.

Our latest reports, for enterprise, small business and home users are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.