SE Labs

Special Edition
Computer security testing comment and analysis from SE Labs

Testing deeper, wider and better

Bad guys evolve; defenders evolve; testing (should) evolve

Latest endpoint protection reports now online for enterprise, small business and home users.

These reports represent the state-of-the-art in computer security endpoint testing. If you want to see how the very best security products handle a range of threats, from everyday (but nevertheless very harmful) malware to targeted attacks, this is a great place to start.

Read more >

Securing a business from scratch

Building and launching a start-up company is a challenge in itself. Securing it when it is new, young and vulnerable is something else. It’s very necessary but also hard if you don’t know what you’re doing. And can you afford a consultant in the early days?

If your new business is IT-based and focused on security then you’re in a stronger position than, say, an organic make-up business or an ethical coffee brand.

Read more >

Breach Response Test: Kaspersky Anti Targeted Attack Platform

Testing anti-breach products requires the full chain of attack.

Kaspersky Lab should be congratulated, not only for engaging with this new and challenging test, but for submitting a product that performed so strongly against attacks that closely replicate advanced, nation-state level threats.

Its endpoint detection and response offering, Kaspersky Anti Targeted Attack Platform, is one of the very first to face our brand new Breach Response Test and it detected all of the attacks, while protecting against the vast majority of them.

Read more >

Anti-malware is just one part of the picture

Beefing up security advice with facts

Latest reports now online for enterprise, small business and home users.

At SE Labs we spend our time testing things that are supposed to protect you but we also understand that securing your business, or your home network, is never as simple as installing one or more security products.

The risks are many and varied, but the ways to mitigate them are often most successful with a good dose of common sense as well as the appropriate technology. You just need to think things through carefully and make sensible decisions.

Read more >

Breach Response Test: Symantec Endpoint Security Complete

Testing anti-breach products requires the full chain of attack.

Symantec’s endpoint detection and response offering, Symantec Endpoint Security Complete, is the first to face our brand new Breach Response Test.

Report now online.

Read more >

SE Labs Annual Report 2019

SE Labs has been working at the core of the cyber security industry since its launch in 2016. We work with all of the major developers of IT security products as well as their main customers and even investors looking to increase their chances when betting on emerging technologies.

Read more >

Targeted attacks with public tools

Over the last few years we have tested more than 50 different products using over 5,000 targeted attacks. And there’s news, both good and bad.

In this article we will look at the different tools available, how effective they are at helping attackers bypass anti-malware products and how security vendors have been handling this type of threat for over a year.

Read more >

The best security tests keep it real

Why it’s important not to try to be too clever

Latest reports now online for enterprise, small business and home users.

Realism is important in testing; otherwise you end up with theoretical results rather than a useful report that closely represents what is going on in the real world. One issue facing security testing that involves malware is whether or not to connect the test network to the internet.

The argument against connecting is that computer viruses can spread automatically, so a test could potentially infect the real world, making life worse for computer users globally. One counter-argument is that if the tester is helping improve products then a few dozen extra infected systems on the internet are, on balance, worth it considering there are already millions out there. The benefits outweigh the downside.

Another counter-argument is that viruses as we understood them in the '90s are not the same as today's threats. There are far fewer self-replicating worms and more targeted attacks that do not generally spread automatically, so the risk is lower.

Connecting to the internet brings more than a few advantages to a test, too. Firstly, the internet is where most threats reside. It would be hard to test realistically with a synthetic internet.

Secondly, for at least 10 years most endpoint security products have made connections back to management or update servers to get the latest information about current threats. So-called ‘cloud protection’ or ‘cloud updates’ would be disabled without an internet connection, significantly reducing the products’ protection abilities. This makes the test results far less representative of how the products perform in the real world.
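As a rough illustration of why connectivity matters, here is a hypothetical sketch in Python of a ‘cloud lookup’: the endpoint sends a file fingerprint to a vendor server for an up-to-date verdict and, when offline, can fall back only on whatever it already holds locally. The URL and protocol are invented for illustration and do not represent any particular product.

    import hashlib
    import urllib.request

    # Invented vendor endpoint, for illustration only; real products use
    # their own proprietary protocols and infrastructure.
    REPUTATION_URL = "https://cloud.vendor.example/lookup/"

    def cloud_verdict(file_bytes: bytes) -> str:
        """Ask the (hypothetical) cloud service about a file fingerprint.

        Returns the server's verdict, or 'unknown' when the lookup fails,
        for example on a test network with no internet connection.
        """
        fingerprint = hashlib.sha256(file_bytes).hexdigest()
        try:
            with urllib.request.urlopen(REPUTATION_URL + fingerprint,
                                        timeout=5) as response:
                return response.read().decode("utf-8").strip()
        except OSError:
            # Offline: the product can rely only on the local signatures
            # it shipped with, which may be out of date.
            return "unknown"

A tester who disconnects the network forces every product down that fallback path, which is not how those products run on customers’ machines.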

There are cases in which turning off the internet is useful, though. Last year we ran a test to check whether or not artificial intelligence could predict future threats. We ran our Predictive Malware Response Test without an internet connection to see if a Cylance AI brain, which had been built and trained three years previously, could detect well-known threats that had come into existence since then. You can see the full report here.

But that was a special case. When assessing any security product or service for real-world, practical purposes, a live and unfiltered internet connection is probably a useful and even necessary part of the setup.

Naturally we have always used one in our testing, at one point even going as far as using consumer ADSL lines when testing home anti-malware products for extra realism. When reading security tests, check that the tester has a live internet connection and allows the products to update themselves.

If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests please visit our website and follow us on Twitter.

This test report was funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in this report were able to request early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Testing Protocol Standard v1.0. To verify its compliance please check the AMTSO reference link at the bottom of page three of this report or here.

UPDATE (24th July 2019): The tests were found to be compliant with AMTSO’s Standard.

Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

How can you tell if a security test is useful or not?

How to tell if security test results are useful, misleading or just rubbish?

Latest reports now online.

In security testing circles there is a theoretical test used to illustrate how misleading some test reports can be.

For this test you need three identical chairs, packaging for three anti-virus products (in the old days products came on discs in a cardboard box) and an open window on a high floor of a building.

The methodology of this test is as follows:

  1. Tape each of the boxes to a chair. Do so carefully, such that each is fixed in exactly the same way.
  2. Throw each of the chairs out of the window, using an identical technique.
  3. Examine the chairs for damage and write a comparative report, explaining the differences found.
  4. Conclude that the best product was the one attached to the least damaged chair.

The problem with this test is obvious: the conclusions are not based on any useful reality.

The good part about this test is that the tester created a methodology and tested each product in exactly the same way.* And at least this was an ‘apples to apples’ test, in which similar products were tested in the same manner. Hopefully any tester running the chair test publishes the methodology so that readers realise what a stupidly meaningless test has been performed, but that is not a given.

Sometimes test reports come with only very vague statements about “how we tested”.

When evaluating a test report of anything, not only security products, we advise that you check how the testing was performed and whether or not it has been found compliant with a testing Standard, such as the Anti-Malware Testing Standards Organization’s Standard (see below).

Headline-grabbing results (e.g. “Anti-virus is Dead!”) catch the eye, but we need to focus on the practical realities when trying to find out how best to protect our systems from cyber threats. And that means having enough information to be able to judge a test report’s value rather than simply trusting blindly that the test was conducted correctly.

*Although some pedants might require that each chair be released from the window at exactly the same time – possible only from windows far enough apart that the chairs would not entangle in mid-air and skew the results in some way.

If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests please visit our website and follow us on Twitter.

These test reports were funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in these reports were able to request early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Testing Protocol Standard v1.0. To verify its compliance please check the AMTSO reference link at the bottom of page three of each report or here.

UPDATE (10th June 2019): The tests were found to be compliant with AMTSO’s Standard.

Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Enemy Unknown: Handling Customised Targeted Attacks

Detecting and preventing threats in real-time

Computer security products are designed to detect and protect against threats such as computer viruses, other malware and the actions of hackers.

A common approach is to identify existing threats and to create patterns of recognition, in much the same way as the pharmaceutical industry creates vaccinations against known biological viruses or police issue wanted notices with photographs of known offenders.
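To make the idea of ‘patterns of recognition’ concrete, here is a minimal, hypothetical sketch in Python of its simplest form: comparing a fingerprint of a file against a set of fingerprints recorded from previously identified threats. The digest list is an invented placeholder; real products use far richer patterns, heuristics and behavioural rules than plain hashes.

    import hashlib
    from pathlib import Path

    # Placeholder signature database: SHA-256 digests of files already
    # known to be malicious (the value below is invented).
    KNOWN_BAD_DIGESTS = {
        "0123456789abcdef" * 4,
    }

    def sha256_of(path: Path) -> str:
        """Return the SHA-256 digest of a file's contents."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_threat(path: Path) -> bool:
        """Flag a file only if it matches a previously recorded signature."""
        return sha256_of(path) in KNOWN_BAD_DIGESTS

Note the built-in limitation: a file whose fingerprint is not already in the set passes unflagged, however harmful it is.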

The downside to this approach is that the virus or criminal has to be known to be harmful, most likely after someone has become sick or a crime has already been committed. It would be better to detect new infections and crimes in real-time and to stop them in action before any damage is caused.

This approach is becoming increasingly popular in the cyber security world.

Deep Instinct claims that its D-Client software is capable of detecting not only known threats but those that have not yet hit computer systems in the real world. Determining the accuracy of these claims requires a realistic test that pits the product against known threats and those typically crafted by attackers who work in a more targeted way, identifying specific potential victims and moving against them with speed and accuracy.

This test used a range of sophisticated, high-profile threat campaigns, such as those believed to have been directed against the 2016 US Presidential election, in addition to more targeted attacks against the victim systems using techniques seen in well-known security breaches in recent months and years.

The results show that Deep Instinct D-Client provided a wide range of detection and threat-blocking capability against well-known and customised targeted attacks, without interfering with regular use of the systems upon which it was deployed. The deep learning system was trained in August 2018, six months before the customised targeted threats were created.

Latest report now online.

About

SE Labs Ltd is a private, independently owned and run testing company that assesses security products and services. The main laboratory is located in Wimbledon, South London. It has excellent local and international travel connections. The lab is open for prearranged client visits.

Contact

SE Labs Ltd
Hill Place House
55A High Street
Wimbledon
SW19 5BA

020 3875 5000

info@selabs.uk
