Special Edition is the blog for security testing business SE Labs. It explains how we test security products, reports on the internet threats we find and provides security tips for businesses, other organisations and home users.

Friday, 4 October 2019

Anti-malware is just one part of the picture

Beefing up security advice with facts

Latest reports now online for enterprise, small business and home users.

At SE Labs we spend our time testing things that are supposed to protect you but we also understand that securing your business, or your home network, is never as simple as installing one or more security products.

The risks are many and varied, but the ways to mitigate them are often most successful with a good dose of common sense as well as the appropriate technology. You just need to think things through carefully and make sensible decisions.

Fortunately, there are some schemes out there to help you through the process. In the UK small businesses might consider the Cyber Essentials certification, which helps you address the most common computer security threats.

The five technical controls involve securing internet connections; using security devices and software; controlling access to data and services; using protection from viruses and other malware; and keeping devices and software updated. All good advice and worth following, whether or not you want to achieve certification in the UK.

However, while the advice is good, it is not very specific. For example, you should install anti-virus software but neither the documentation nor the consultants you talk to will tell you to choose a good product. Any anti-virus will do, it seems!

A more international option is ISO 27001, which is a Standard covering information security management systems. Completely over-the-top for home users and small businesses, but ideal for enterprises and smaller companies that work with sensitive data, this certification puts IT security into a central role in the way an organisation operates. It doesn’t specify what sort of anti-virus, firewalls and other systems should be used, but it leads you to research further and consider the risks when choosing security solutions.

So, while testing is not the be-all and end-all of choosing a good security system, it can definitely help. The testing behind this report is conducted in the most thorough and transparent way and the results are used by consultancies and large businesses around the world to help with purchasing decisions. This free report gives you an insight into the sort of advice that these large organisations follow when building a good security system.


If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

Tuesday, 1 October 2019

Breach Response Test: Symantec Endpoint Security Complete

Testing anti-breach products needs the full chain of attack.

Symantec's endpoint detection and response offering, Symantec Endpoint Security Complete, is the first to face our brand new Breach Response Test.

Report now online.

This Breach Response Test is a new kind of test. We believe that the testing behind this report used the largest range of relevant threats in any publicly available test and that the analysis of how the products tested work is the most in-depth.

We go into some detail in the report (on page 9) about how threats work in a chain of stages because this is a really important and possibly unique feature of the Breach Response Test. It’s crucial to copy attackers' techniques in full when assessing security products.

A computer breach causes some kind of damage, whether that involves deleting or encrypting files on a computer system; stealing data that damages a company’s ability to compete; or stealing personal data for use in fraud. The possibilities and combinations are endless, but ultimately damage has to be done. Cyber criminals don’t usually hack systems out of simple idle curiosity.

This is an important detail frequently overlooked in security testing, which often examines a product or service’s ability to stop certain stages of attack, but not the full chain of events that runs from the initiation of an attack through to the successful completion of the attacker’s prime goal.

Testers should not assume that certain approaches to protection are better than others. If a security company makes the world’s best behavioural detection system but a test pays attention only to URL blocking technologies then the product will fail the test, while in reality customers who use it would be protected.

It is common for us to see a product appear to fail, and allow malware to run, even to the point where we obtain a remote connection to the target. However, when we try to take control of that system we may be blocked from doing so. A tester that sees the connection open might wrongly conclude that the product has failed. It is only by running through the entire attack process that it is possible to assess a product’s full abilities.
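The full-chain approach described above can be sketched in a few lines of Python. This is an illustrative model only, with invented stage names and data structures (not SE Labs' actual scoring code): an attack is treated as a chain of stages, and the target only counts as compromised when the attacker completes every stage.

```python
# Minimal sketch of full-chain assessment (hypothetical stage names,
# not SE Labs' real scoring code). A block at ANY stage counts as
# protection, even a late block after a remote connection has opened.

ATTACK_CHAIN = ["delivery", "execution", "connection", "escalation", "action"]

def assess(completed_stages):
    """completed_stages maps each stage name to True if the attacker
    completed it. Returns 'compromised' only when the whole chain
    succeeded; otherwise reports where the product stopped it."""
    for stage in ATTACK_CHAIN:
        if not completed_stages.get(stage, False):
            return f"protected (stopped at {stage})"
    return "compromised"

# The product allowed a remote connection but blocked escalation:
# a detection-only test would wrongly score this as a failure.
late_block = {"delivery": True, "execution": True,
              "connection": True, "escalation": False}
print(assess(late_block))  # protected (stopped at escalation)
```

The point of the sketch is that a tester watching only the early stages would record the open connection and stop, while the verdict that matters comes from playing the chain through to the end.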

This report is available for free from our website.

If you have any questions, we're here to help on Twitter and Facebook.

Wednesday, 4 September 2019

SE Labs Annual Report 2019

SE Labs has been working at the core of the cyber security industry since its launch in 2016. We work with all of the major developers of IT security products as well as their main customers and even investors looking to increase their chances when betting on emerging technologies.

Report now online.

Over the last few years the team has doubled in size and we have just moved to our new, custom-built offices in Wimbledon, London.

In the last year SE Labs has been nominated as Best Ethical Brand and Best British International Brand by The Telegraph, and was selected as one of the 20 most promising cyber security companies in the UK by Tech Nation, a UK government-backed business growth scheme.

To give more insight into how things have been going, including what the bad guys have been up to (and how the good guys are responding), we have published our first annual report. Please download, read and enjoy! And if you have any questions, we're here to help on Twitter and Facebook.

Thursday, 29 August 2019

Targeted attacks with public tools

Over the last few years we have tested more than 50 different products using over 5,000 targeted attacks. And there's news, both good and bad.

In this article we will look at the different tools available, how effective they are at helping attackers bypass anti-malware products and how security vendors have been handling this type of threat for over a year.

By Stefan Dumitrascu, Chief Technical Officer, SE Labs

Do expensive security products block free attack tools?

If the headline is a question, the answer is usually, "no". These attacks were run in a realistic way using publicly available hacking tools and the results were surprising. As attackers, our success levels were far greater than we’d predicted. Using free tools that are widely distributed on the internet we were able to compromise large numbers of systems, often without detection.

The good news is that, as we work closely with the security vendors, their products have improved over time. We have shared over half a terabyte of data with security partners in an effort to help strengthen the protection provided by their products.

However, it is interesting to see which hacking approaches were most effective. We used a combination of techniques including:

  • Process and memory injection
  • Anti-malware evasion
  • File-less attacks (including Microsoft PowerShell)

The results show that many endpoint products detect most of the attacks. However, while anti-malware evasion tools were mostly detected and prevented, injection techniques resulted in higher levels of compromise, while using PowerShell is currently an excellent way to break into systems, with far higher levels of compromise compared to the other methods.

Products were generally poor at cleaning up after a detected attack.

We have also tested email security services with many of these attacks and our public reports show that email remains an effective route for attackers. Combined with a good endpoint product, things don’t look too hopeless, but you wouldn't want to rely solely on an email gateway right now.

Why public attack tools?

When we started working on testing with targeted attacks we considered options that the security industry would most easily understand and accept. The obvious answer was Metasploit, a framework for building and executing attacks against a variety of targets. We were worried that every attack we generated would be detected, as Metasploit was such a well-known platform.

The results showed a different story. Initially we found that even using the standard Metasploit modules was enough to differentiate between the efficacy of products. However, as we progressed, we noticed an improvement of security products when dealing with standard threats built using Metasploit. Time for a change! Just like a normal attacker, we moved to new tools and techniques. We have now tested using five publicly available tools.

Using publicly available tools allows both our enterprise customers and partner vendors to replicate the results to check that they are accurate. This transparency allows us to help vendors improve their products rather than just giving them a shiny badge for their marketing departments.

Aren't these attacks just in the domain of script kiddies? While these attacks won’t reflect the capability of an attacker with the resources of a nation state actor, they are still harmful and sufficiently advanced to show a difference in the efficacy of the products in a test. On top of that, there are a lot of reports of public tools being used in malicious campaigns. At the end of the day criminals will use the easiest way to get to their goal, and using public tools is easier than writing malware.

The following table shows the success rate we have enjoyed using different tools:

[Chart: Toolkit Compromise Rates (with no detection), 2018 to 2019, by tool, including PowerShell Empire]

For more details about these tools read on...

Scoring and other terms

Before we get stuck into describing the tools that we've used and the relative success rates, it's worth having a quick look at how we score products and the terms we use, because these appear later in the article.

When we say a product 'blocked' a threat we mean that it stopped any malicious activity on the target system, including the execution of malware. If a product stops malicious activity after it starts to occur then the result is a 'neutralisation'. A good example of this happening is letting an executable file run and then stopping the threat before it can achieve its evil goal.

Both blocked and neutralised results mean that the product successfully stopped the threat, but at different stages. However, alongside these possible results we also have 'complete remediation'. To achieve this happy state the product must delete all significant traces of the attack from the system. These are the good results:

Blocked or Neutralised = Protection
Blocked or Neutralised + Complete Remediation = Complete Protection

A 'compromised' result means that the attacker was successful in gaining access to the target system. This is split further into different stages of the attack, which we label as 'access', 'action', 'escalation' and 'post escalation action'. To learn more about these, and how we test using the full attack chain, please read our 2019 Annual Report or any of our enterprise endpoint protection test reports.
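The scoring rules above can be expressed as a tiny function. This is an illustrative model with invented names, not our actual scoring code: blocked and neutralised both count as protection, and complete remediation upgrades either to complete protection.

```python
# Illustrative model of the scoring terms above (invented names,
# not SE Labs' real code).

def score(result, completely_remediated=False):
    """Map a per-threat result to a protection rating."""
    if result == "compromised":
        return "compromised"
    if result in ("blocked", "neutralised"):
        return ("complete protection" if completely_remediated
                else "protection")
    raise ValueError(f"unknown result: {result!r}")

print(score("blocked"))                                  # protection
print(score("neutralised", completely_remediated=True))  # complete protection
```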

Shellter, a tool for shellcode injection

One of the first tools we started testing with was Shellter, an injection tool that can be used to inject shellcode into Windows applications. You can consider it a tool for creating Trojans. Simple to use, we enjoyed Shellter for its ability to produce great results (for the attacker). In the real world of criminal behaviour we observed its use by the Dragonfly threat group against the energy industry. https://www.infosecurity-magazine.com/news/dragonfly-20-attackers-probe/

We have now run over 1,300 tests using malicious files generated with this tool. These executables were delivered via download using a web browser. We found a 96 per cent detection rate for attacks using Shellter. While this seems like a good ratio for the defender, the more alarming statistic is a compromise rate of 20 per cent, especially when you consider the tool's relatively low barrier to entry and ease of use.

How is it possible to have a detection rate of 96 per cent and an infection rate of 20 per cent? A security product can detect a threat but fail to protect against it! The table below shows how the products we have tested for over a year have handled Shellter-based threats:

[Chart: Shellter Detection and Protection Rates, 2018 Q1 to 2019 Q1: detection rate, protection rate and complete protection]

We can see that the detection rate is fairly stable, at around an average of 96 per cent. More impressively, the rate at which products can completely protect against the threat rises from 40 per cent steadily to over 75 per cent 15 months later. This shows that the security vendors we work with have made improvements to their products that, in turn, have a positive impact on their customers – you and me.
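The gap between a 96 per cent detection rate and a 20 per cent compromise rate is easy to reproduce with synthetic numbers. The records below are invented to mirror the percentages discussed above (they are not our raw data): detection and protection are separate questions, and a product can log a detection yet still fail to stop the attack.

```python
# Synthetic records tuned to mirror the Shellter figures above
# (invented data, not SE Labs' raw results). Each case notes whether
# the product detected the threat and whether the target was
# ultimately compromised.

cases = (
    [{"detected": True,  "compromised": False}] * 20  # detected and stopped
  + [{"detected": True,  "compromised": True}]  * 4   # detected, not stopped
  + [{"detected": False, "compromised": True}]  * 1   # missed entirely
)

detection_rate  = sum(c["detected"]    for c in cases) / len(cases)
compromise_rate = sum(c["compromised"] for c in cases) / len(cases)

print(f"detection:  {detection_rate:.0%}")   # detection:  96%
print(f"compromise: {compromise_rate:.0%}")  # compromise: 20%
```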

Evasion frameworks

The goal of an evasion framework is to help an attacker create threats that can evade detection by anti-malware products. We use these in our tests with varying success.

Hiding threats with Veil

One of the most popular evasion frameworks is Veil, and we used it for a full year before swapping it out for Phantom Evasion.

During the year that we used it we ran 1,665 Veil-powered attacks. While the detection rate was similar to Shellter, at around 96 per cent, it only allowed us to compromise the target systems 13 per cent of the time. The overall Complete Protection rating against Veil averaged 69 per cent for the year.

[Chart: Veil Detection and Protection Rates, 2018 Q1 to Q4: Complete Protection]

Hacking with PowerShell Empire

We have found PowerShell Empire to be the most exciting weapon in our tool set. On top of a wide range of capabilities it seems that, unless users outright block all PowerShell scripts, an attacker is likely to be successful. So far in our Endpoint Protection tests we have only scratched the surface of Empire, having just launched 470 attacks, but it has already generated some of the best results when it comes to compromising systems.

While the detection rate by security products hovers at around 80 per cent across all of the threats tested, this tool is by far the best at evading detection. We have found a success rate of 20 per cent with it and, when it succeeds, victims will not be notified by their security products. With any other tool there is usually at least some sort of notification.

[Chart: PowerShell Empire Detection and Protection Rates, 2018 Q3 to 2019 Q1: Complete Protection]

Evasive options – Phantom Evasion and Metasploit

We have also used two other frameworks as replacements for Veil: Phantom Evasion and Metasploit's newly introduced evasion modules. It's early days, so we don't have enough data yet to compare their success rates directly with the other tools. However, here are some early figures:

[Chart: Phantom Evasion Detection and Protection Rates, 2019 Q1: Complete Protection]

[Chart: Metasploit Evasion Detection and Protection Rates, 2018 Q4 to 2019 Q1: Complete Protection]


Looking at the data from the past 15 months, it's great to see the improvement made by the security vendors when dealing with increasingly advanced attacks. There is always something new around the corner and ways we can improve our tests. However, transparency in what we do helps everyone understand our work better and hopefully improves the efficacy of the tested products.

Every table above is pulled from our raw test data, and each shows constant improvement in protection rates. So rather than taking the 'doom and gloom' approach you see reported so often when a new breach is uncovered, at the end of the day security vendors and testers should work together to improve protection for those who really matter: end users.

Wednesday, 17 July 2019

The best security tests keep it real

Why it's important not to try to be too clever

Latest reports now online for enterprise, small business and home users.

Realism is important in testing, otherwise you end up with results that are theoretical and not a useful report that closely represents what is going on in the real world. One issue facing security testing that involves malware is whether or not you connect the test network to the internet.

The argument against this approach is that computer viruses can spread automatically and a test could potentially infect the real world, making life worse for computer users globally. One counter argument goes that if the tester is helping improve products then a few dozen extra infected systems on the internet is, on balance, worth it considering there are already millions out there. The benefits outweigh the downside.

Another counter argument is that viruses such as we understand them from the 90s are not the same as they are today. There are far fewer self-replicating worms and more targeted attacks that do not generally spread automatically, so the risk is lower.

Connecting to the internet brings more than a few advantages to a test, too. Firstly, the internet is where most threats reside. It would be hard to test realistically with a synthetic internet.

Secondly, for at least 10 years most endpoint security products have made connections back to management or update servers to get the latest information about current threats. So-called 'cloud protection' or 'cloud updates' would be disabled without an internet connection, effectively reducing the products' protection abilities significantly. This then makes the test results much less accurate when running assessments.

There are cases in which turning off the internet is useful, though. Last year we ran a test to check whether or not artificial intelligence could predict future threats. We ran our Predictive Malware Response Test without an internet connection to see if a Cylance AI brain, which had been built and trained three years previously, could detect well-known threats that had come into existence since then. You can see the full report here.

But that was a special case. When assessing any security product or service for real-world, practical purposes, a live and unfiltered internet connection is probably a useful and even necessary part of the setup.

Naturally we have always used one in our testing, at one point even going as far as using consumer ADSL lines when testing home anti-malware products for extra realism. When reading security tests check that the tester has a live internet connection and allows the products to update themselves.


If you spot a detail in this report that you don't understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define 'threat intelligence' and how we use it to improve our tests please visit our website and follow us on Twitter.

This test report was funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in this report were able to request early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Testing Protocol Standard v1.0. To verify its compliance please check the AMTSO reference link at the bottom of page three of this report or here.

UPDATE (24th July 2019): The tests were found to be compliant with AMTSO's Standard.

Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Wednesday, 5 June 2019

How can you tell if a security test is useful or not?

How to tell if security test results are useful, misleading or just rubbish?

Latest reports now online.

In security testing circles there is a theoretical test used to illustrate how misleading some test reports can be.

For this test you need three identical chairs, packaging for three anti-virus products (in the old days products came on discs in a cardboard box) and an open window on a high floor of a building.

The methodology of this test is as follows:
  1. Tape each of the boxes to a chair. Do so carefully, such that each is fixed in exactly the same way.
  2. Throw each of the chairs out of the window, using an identical technique.
  3. Examine the chairs for damage and write a comparative report, explaining the differences found.
  4. Conclude that the best product was the one attached to the least damaged chair.

The problem with this test is obvious: the conclusions are not based on any useful reality.

The good part about this test is that the tester created a methodology and tested each product in exactly the same way.* And at least this was an 'apples to apples' test, in which similar products were tested in the same manner. Hopefully any tester running the chair test publishes the methodology so that readers realise what a stupidly meaningless test has been performed, but that is not a given.

Sometimes test reports come with very vague statements about, "how we tested".

When evaluating a test report of anything, not only security products, we advise that you check how the testing was performed and to check whether or not it has been found compliant with a testing Standard, such as the Anti-Malware Testing Standards Organization's Standard (see below).

Headline-grabbing results (e.g. Anti-virus is Dead!) catch the eye, but we need to focus on the practical realities when trying to find out how best to protect our systems from cyber threats. And that means having enough information to be able to judge a test report's value rather than simply trusting blindly that the test was conducted correctly.

*Although some pedants might require that each chair be released from the window at exactly the same time – possible from windows far enough apart that the chairs would not entangle mid-air and skew the results in some way.


If you spot a detail in this report that you don't understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define 'threat intelligence' and how we use it to improve our tests please visit our website and follow us on Twitter.

These test reports were funded by post-test consultation services provided by SE Labs to security vendors. Vendors of all products included in these reports were able to request early access to results and the ability to dispute details for free. SE Labs has submitted the testing process behind this report for compliance with the AMTSO Testing Protocol Standard v1.0. To verify its compliance please check the AMTSO reference link at the bottom of page three of each report or here.

UPDATE (10th June 2019): The tests were found to be compliant with AMTSO's Standard.

Our latest reports, for enterprise, small business and home users are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Wednesday, 10 April 2019

Enemy Unknown: Handling Customised Targeted Attacks

Detecting and preventing threats in real-time

Computer security products are designed to detect and protect against threats such as computer viruses, other malware and the actions of hackers.

A common approach is to identify existing threats and to create patterns of recognition, in much the same way as the pharmaceutical industry creates vaccinations against known biological viruses or police issue wanted notices with photographs of known offenders.

The downside to this approach is that the virus or criminal has to be known to be harmful, most likely after someone has become sick or a crime has already been committed. It would be better to detect new infections and crimes in real-time and to stop them in action before any damage is caused.

This approach is becoming increasingly popular in the cyber security world.

Deep Instinct claims that its D-Client software is capable of detecting not only known threats but those that have not yet hit computer systems in the real world. Determining the accuracy of these claims requires a realistic test that pits the product against known threats and those typically crafted by attackers who work in a more targeted way, identifying specific potential victims and moving against them with speed and accuracy.

This test report used a range of sophisticated, high-profile threat campaigns such as those believed to have been directed against the US Presidential election in 2016, in addition to directing more targeted attacks against the victim systems using techniques seen in well-known security breaches in recent months and years.

The results show that Deep Instinct D-Client provided a wide range of detection and threat blocking capability against well-known and customised targeted attacks, without interfering with regular use of the systems upon which it was deployed. The deep learning system was trained in August 2018, six months before the customised targeted threats were created.

Latest report now online.