Detecting and preventing customised targeted attacks in real-time
Experts design computer security products to detect and protect against threats such as computer viruses, other malware and the actions of hackers.
A common approach is to identify existing threats and create patterns for recognising them. This is similar to the way the pharmaceutical industry creates vaccines against known biological viruses, or the way police issue wanted notices with photographs of known offenders.
Detecting the unknown
The downside to this approach is that you have to know in advance that the virus or criminal is harmful. The most likely time to discover this is after someone has become sick or a crime has already been committed. It would be better to detect new infections and crimes in real-time and to stop them in action before any damage is caused.
The cyber security world is adopting this approach more frequently than before.
Deep Instinct claims that its D-Client software can detect not only known threats but also those that have not yet hit computer systems in the real world. These claims require a realistic test that pits the product against known threats and against the kind typically crafted by attackers who work in a more targeted way: attackers who identify specific potential victims and move against them with speed and accuracy.
This test report used a range of sophisticated, high-profile threat campaigns such as those directed against the US Presidential election in 2016. It also directed targeted attacks against victim systems using techniques seen in well-known security breaches in recent months and years.
The results show that Deep Instinct D-Client provided a wide range of detection and threat blocking capability against well-known and customised targeted attacks. It didn’t interfere with regular use of the systems upon which it was deployed.
The deep learning system was trained in August 2018, six months before the customised targeted threats were created.
A common criticism of computer security products is that they can only protect against known threats. When new attacks are detected and analysed, security companies produce updates based on this new knowledge. It’s a reactive approach that can give attackers a significant window of opportunity. Some companies claim their technology can predict future threats, but does AI really work?
Security companies have, for some years, developed advanced detection systems, often labelled as using ‘AI’, ‘machine learning’ or some other technical-sounding term. The basic idea is that past threats are analysed in depth to identify what future threats might look like. Ideally the result is a product that can detect potentially malicious files or behaviour before an attack succeeds.
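The principle is easier to see in miniature. The sketch below is purely illustrative and is not how any vendor’s product works: it ‘trains’ on features of hypothetical past samples and then scores a sample it has never seen, favouring features common in the malicious set. All sample data and feature names are invented for the example.

```python
# Illustrative sketch only: learn which features are common in past
# malicious files, then score files never seen during training.
# All samples below are hypothetical tokens standing in for real
# file features (API calls, behaviours, byte n-grams, etc.).

from collections import Counter

def features(sample):
    """Split a sample into simple 'features' (here: whitespace tokens)."""
    return set(sample.split())

def train(malicious, benign):
    """Count how often each feature appears in each class."""
    mal_counts = Counter(f for s in malicious for f in features(s))
    ben_counts = Counter(f for s in benign for f in features(s))
    return mal_counts, ben_counts

def score(model, sample):
    """Positive score: the sample shares more features with past threats."""
    mal_counts, ben_counts = model
    return sum(mal_counts[f] - ben_counts[f] for f in features(sample))

# Hypothetical training data: behaviours seen in past threats.
malicious = ["encrypt-files demand-ransom delete-backups",
             "encrypt-files contact-c2 demand-ransom"]
benign    = ["open-document save-document print-document"]

model = train(malicious, benign)

# An unseen sample that shares features with past threats scores high,
# even though this exact sample was never in the training data.
print(score(model, "encrypt-files demand-ransom"))   # positive
print(score(model, "open-document print-document"))  # negative
```

Real products use far richer features and far more sophisticated models, but the underlying bet is the same: threats that have never been seen still resemble, in some measurable way, threats that have.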
(We wrote a basic primer to understanding machine learning a couple of years ago.)
Does AI really work?
So does this AI stuff really work? Is it possible to predict new types of evil software? Certainly investors in tech companies believe so, piling hundreds of millions of funding dollars into new start-ups in the cyber defence field.
We prefer lab work to Silicon Valley speculation, though, and built a test designed to challenge the often magical claims made by ‘next-gen’ anti-malware companies.
With support from Cylance, we took four of its AI models and exposed them to threats seen in well-publicised attacks (e.g. WannaCry, Petya) months and even years after the training that created the models.
It’s the equivalent of sending an old product forward in time and seeing how well it works with future threats. To find out how the Cylance AI models fared, and to discover more about how we tested, please download our report for free from our website.
Follow us on Twitter and/or Facebook to receive updates and future reports.
Still dazed from the year that was, Jon Thompson dons his Nostradamus hat, dusts off his crystal ball and stares, horrified, into 2017.
Prediction is difficult. Who would have thought a year ago that ransomware would now come with customer care, or that Russia would be openly accused of hacking a bombastic businessman into the White House? Who even dreamed Yahoo would admit to a billion-account compromise?
So, with that in mind, it’s time to gaze into the abyss and despair…
Let’s get the obvious stuff out of the way first. Mega credential breaches won’t go away. With so many acres of forgotten code handling access to back-end databases, it’s inevitable that the record currently held by Yahoo for the largest account breach will be beaten.
Similarly, ransomware is only just beginning. Already a billion-dollar industry, it’s cheap to buy into and easy to profit from. New techniques are already emerging as gangs become more sophisticated. First came the audacious concept of customer service desks to help victims through the process of forking over the ransom. By the end of 2016, the Popcorn Time ransomware gang was offering decryption for your data if you infect two of your friends who subsequently pay up. With this depth of innovation already in place, 2017 will hold even greater horrors for those who naively click attachments.
Targeted social engineering and phishing attacks will also continue to thrive, with innovative campaigns succeeding in relieving companies of their revenues. Though most untargeted bulk phishing attempts will continue to show a low return, phishers will inevitably get wise and start to make their attacks more believable. At SE Labs, we’ve already seen evidence of this.
It’s also obvious that the Internet of Things will continue to be outrageously insecure, leading to DDoS attacks that will make the 1.1Tbps attack on hosting company OVH look trivial. The IoT will also make ransomware delivery even more efficient, as increasing armies of compromised devices pump out the pink stuff. By the end of 2017, I predict hacking groups (government-backed or otherwise) will have amassed enough IoT firepower to knock small nations offline. November’s test of a Mirai botnet against Liberia was a prelude to the carnage to come.
Bitcoin recently passed the $1,000 mark for the first time in three years, which means criminals will want more than ever to steal the pseudonymous cryptocurrency. However, a flash crash in value is also likely as investors take profits and the market panics in response to a sudden fall. It’s happened before, most notably at the end of 2013. There’s also the distinct possibility that the growth in value is driven by ransomware payments, in which case the underlying rally will continue regardless of profit-takers.
The state-sponsored use of third party hacking groups brings with it plausible deniability, but proof cannot stay hidden forever. One infiltration, one defection, one prick of conscience, and someone will spill the beans regardless of the personal cost. It’s highly likely that 2017 will include major revelations of widespread state-sponsored hacking.
This leads me neatly on to Donald Trump and his mercurial grasp of “the cyber”. We’ve already delved into what he may do as president, and much of what we know comes straight from the man himself. For example, we already know he skips his daily security briefings because they are “repetitive”, and prefers to ask people around him what’s going on because “You know, I’m, like, a smart person.”
Trump’s insistence on cracking down on foreign workers will have a direct impact on the ability of the US to defend itself in cyberspace. The shift from filling jobs with overseas expertise to training homegrown talent has no discernible transition plan. This will leave a growing skills gap for several years as new college graduates find their way to the workplace. This shortfall will be exploited by foreign threat actors.
Then there’s Trump’s pompous and wildly indiscreet Twitter feed. Does the world really need to know when secret security briefings are postponed, or what he thinks of the intelligence presented in those meetings? In espionage circles, everything is information, and Trump needs to understand that. I predict that his continued use of social media will lead to internal conflict and resignations this year, as those charged with national cybersecurity finally run out of patience.
It’s not all doom and gloom, however. The steady development of intelligent anti-spam and anti-malware technologies will see a trickledown from advanced corporate products into the hotly contested consumer market. The first AV vendor to produce an overtly ‘next-gen’ consumer product will change the game – especially if a free version is made available.
There’s also a huge hole in “fake news” just begging to be filled. I predict that 2017 will see the establishment of an infosec satire site. Just as The Onion has unwittingly duped lazy journalists in the past, there’s scope for the same level of hilarity in the cybersecurity community.
However, by far the biggest threat to life online in 2017 will continue to be the end user. Without serious primetime TV and radio campaigns explicitly showing exactly what to look for, users will continue to casually infect themselves and the companies they work for with ransomware, and to give up their credentials to phishing sites. When challenged, I also predict that governments will insist the problem is being addressed.
What’s the difference between artificial intelligence and machine learning? Put simply, artificial intelligence is the area of study dedicated to making machines solve problems that humans find easy but digital computers find hard. Examples include driving cars, playing chess or recognising sarcasm. Machine learning is a subset of AI in which a system learns how to perform a task from example data, rather than following rules programmed explicitly by hand.
SE Labs Ltd is a private, independently-owned and run testing company that assesses security products and services. The main laboratory is located in Wimbledon, South London. It has excellent local and international travel connections. The lab is open for prearranged client visits.