SE Labs

Posts filed under 'vulnerabilities'

What is Machine Learning?

… and how do we know it works?

What’s the difference between artificial intelligence and machine learning? Put simply, artificial intelligence is the area of study dedicated to making machines solve problems that humans find easy but digital computers find hard, such as driving cars, playing chess or recognising sarcasm. Machine learning is a subset of AI dedicated to developing techniques for making machines learn to solve these and other “human” problems without the insanely complex task of explicitly programming them.

A machine is said to learn if, with increasing experience, it gets better at solving a problem. Let’s take identifying malware as an example. This is known as a classification problem. Let’s also call into existence a theoretical machine learning program called Mavis. Consistent malware classification is difficult for Mavis because malware is deliberately evasive and subtle.

For Mavis to classify malware successfully, we need to show it a huge number of files that are known to be malicious. Once Mavis has digested several million examples, it should be an expert in what makes a file “smell” like malware.
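
The idea of “learning a smell” from examples can be sketched in a few lines. This is a deliberately naive, hypothetical illustration, not how a real Mavis would work: tally the byte bigrams seen in known-bad training files, then score a new file by how much it overlaps with that tally.

```python
from collections import Counter

def ngrams(data: bytes, n: int = 2):
    """Yield overlapping byte n-grams from a file."""
    return (data[i:i + n] for i in range(len(data) - n + 1))

def train(malicious_samples):
    """'Experience': tally byte bigrams across known-bad files."""
    smell = Counter()
    for sample in malicious_samples:
        smell.update(ngrams(sample))
    return smell

def score(smell, data: bytes) -> float:
    """Fraction of this file's bigrams seen during training."""
    grams = list(ngrams(data))
    if not grams:
        return 0.0
    return sum(1 for g in grams if g in smell) / len(grams)

# Two made-up "malicious" byte strings stand in for training files.
smell = train([b"\x90\x90\xeb\xfe", b"\x90\x90\xcc\xcc"])
print(score(smell, b"\x90\x90\xeb"))  # every bigram was seen in training: 1.0
print(score(smell, b"abcd"))          # no overlap at all: 0.0
```

The more malicious samples the tally digests, the better the score reflects what bad files have in common, which is the “gets better with experience” property in miniature.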

The spectrum of ways in which Mavis might be programmed to learn this task is very wide indeed, and filled with head-spinning concepts and algorithms. Suitable approaches all have advantages and disadvantages. All that counts, however, is whether Mavis can spot and stop previously unknown malware even when the “smell” is very faint or deliberately disguised to confuse it into an unfortunate misclassification.

A major problem for developers lies in proving that their implementation of Mavis intelligently detects unknown malware. How much training is enough? What happens when their Mavis encounters a completely new threat that smells clean? Do we need a second, signature-based system until we’re 100% certain it’s getting it right every time? Some vendors prefer a layered approach, while others go all in with their version of Mavis.

Every next generation security product vendor using machine learning says their approach is the best, which is entirely understandable. As with traditional AV products, however, the proof is in the testing. To gain trust in their AI-based products, vendors need to hand them over to independent labs for a thorough, painstaking workout. It’s the best way for the public, private enterprises and governments to be sure that Mavis, in her many guises, will protect them without faltering.

A Modest Proposal

IoT security is a mess, but who’s to blame?
 

The internet of things is quickly becoming every cybercriminal’s wet dream, especially given the release of the Mirai botnet source code. The cause is shockingly insecure devices, but can shaming manufacturers avert the coming chaos?
 

Last year, Symantec released a damning report revealing security flaws in common IoT devices. Some of the flaws, such as failing to use SSL for communications and failing to sign updates, are shot through with incompetence and hubris. The report also described basic flaws in some IoT web portals. It makes for uneasy reading unless you’re building a botnet, in which case it’s pure gold.
 

Many IoT devices call home for instructions and updates but don’t bother with chains of trust. Using ARP cache poisoning, an attacker can push new firmware to an army of devices, and then command them.
 

So, how big is the coming IoT cyber-storm? According to Gartner, by 2020 there will be a staggering 13 billion IoT consumer items online. Driving this growth is a gold rush that will be worth $263bn to manufacturers by the end of the decade.
 

To put this into context, the recent 1Tb/s DDoS against French hosting provider OVH involved just 152,000 hacked devices. To borrow from Al Jolson, we ain’t seen nothin’ yet. 

We could simply build stronger defences, such as Google’s Project Shield, but this does nothing to address the underlying problem: insecure products.
 

Cybersecurity professionals increasingly spend excessive time and energy defending against attacks launched from those products. And apart from bad publicity, there seems to be little consequence for manufacturers.

Ah, but surely responsible IoT companies provide updates as they become available? Well, yes. Up to a point.
 

Do your parents have any idea how to locate and install a firmware update from a support site? Mine neither. Why should they? They bought white goods, not a system administration course. By now, all IoT updates should just happen automatically, using a chain of trust that begins with code locked securely into the CPU and ends, via client and server identity verification, with cryptographically signed firmware images.
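
The device-side end of that chain is simple in principle: refuse to flash any image whose signature doesn’t check out. Real devices use asymmetric signatures (so the key stored on the device can verify but not forge), but as a minimal stdlib sketch, an HMAC tag stands in for the signature step. The key and image bytes here are illustrative only.

```python
import hashlib
import hmac

def sign_firmware(key: bytes, image: bytes) -> bytes:
    """Vendor side: compute an authentication tag over the image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_firmware(key: bytes, image: bytes, tag: bytes) -> bool:
    """Device side: recompute the tag and compare in constant time."""
    expected = hmac.new(key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"vendor-signing-key"              # hypothetical shared secret
image = b"\x7fELF firmware blob"         # stand-in firmware bytes
tag = sign_firmware(key, image)

print(verify_firmware(key, image, tag))              # genuine image: True
print(verify_firmware(key, image + b"\x00", tag))    # tampered image: False
```

A device that runs this check before every flash, with the verifying key locked into the CPU at manufacture, can’t be silently re-flashed by an ARP-spoofing attacker sitting on its network path.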
 

Online safety is at the heart of the problem. Consumers have a right to safe goods. IoT manufacturers have a responsibility to prevent their products harming others online. Do baby monitors that can be accessed by anyone sound safe to you?
 

The lamps in your lounge won’t randomly explode and set the curtains on fire. They meet legally enforceable standards. But a smart lightbulb can be hacked. We live in a changed world, and mere lightbulbs serving ransomware is becoming possible.
 

It’s not as if good IoT security is difficult to implement. Because of this, there’s an obvious and urgent need to enforce legal cyber-safety standards against manufacturers. One potential and very detailed testing methodology comes from the OWASP Internet of Things Project.
 

My modest proposal is that IoT manufacturers be made to implement strong security in their products in order to offer them for sale. For this, we need independent testing bodies. Those products that fail would be denied a safety certificate, just like any other consumer item. Foreign imports would be subject to trading standards examination, with sellers facing prosecution for selling insecure goods just as they do for selling fakes.
 

Maybe then, as older devices fail and are replaced, the IoT will slowly revert to the consumer paradise it was meant to be.

Let’s get fuzzical


Why does software seem so insecure? Massive software companies seem incapable of fixing their products for any length of time. Is it their fault, or are they fighting a battle they can’t win?

At its core, Windows is the result of several decades of constant development. Despite this, Microsoft is still obliged to observe Patch Tuesday each month, when users receive the latest fixes to installed products. A large number of these updates fix security vulnerabilities.

This month, for example, Patch Tuesday includes 16 security update bundles covering in excess of 40 new security holes found in products as diverse as IIS, Microsoft’s web server, and the supposedly secure Edge browser. How can it be that this monthly ritual is still required? After all, it’s not like Microsoft is a small company caught out by sudden success while trying to manage a huge ball of undocumented code. On the contrary, it is literally one of the biggest, best-funded tech companies in the history of the planet.

Let’s take another case. Adobe’s Flash and Reader products are also mature, stable software. Yet black hat hackers love them as the gifts that keep on giving. This week brought news of yet another critical Flash vulnerability, which is already being exploited in the wild.

The complexity of some software, despite its maturity, makes it vulnerable. It needs to be all things to all (wo)men at all times. In the case of Adobe Reader, every PDF document it loads must display perfectly, regardless of its complexity or the limitations of the software used to create it. Anything not explicitly forbidden is, therefore, permissible. Reader will always try to render the file you give it.

Such complexity leads neatly to a fundamental question: If companies such as Adobe and Microsoft can’t find all the exploitable bugs in their code, how come private researchers and black hats can?

The answer lies in a technique called fuzz testing, or fuzzing.

In his presentation to CodenomiCON 2010, Charlie Miller showed that with a little thought, a few lines of Python code and some time, it’s possible to use fuzzing, in the form of completely random mutations to a file, to find a number of hitherto unreported and potentially exploitable crashes in Adobe Reader.

He took 80,000 PDFs from the internet and reduced that total to just over 1,500 based on their uniqueness from each other. From these files, he generated 3 million variations containing random mutations.

When loaded into Reader, these corrupted files caused crashes in over 2,500 cases. Miller showed that several crashes revealed exploitable situations, some of which were subsequently found, reported and patched by Adobe, but others were new.
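
The heart of this kind of “dumb” fuzzing really is only a few lines of Python. The sketch below is in the spirit of Miller’s approach rather than his actual code; the `render_pdf` harness it mentions is a hypothetical wrapper around the reader under test.

```python
import random

def mutate(data: bytes, ratio: float = 0.005) -> bytes:
    """Dumb fuzzing: overwrite a small random fraction of the bytes."""
    buf = bytearray(data)
    for _ in range(max(1, int(len(buf) * ratio))):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seed_file: bytes, trials: int):
    """Generate mutants; a real harness would feed each to the target."""
    for _ in range(trials):
        mutant = mutate(seed_file)
        # if render_pdf(mutant) == CRASHED:  (hypothetical harness call)
        #     save mutant and the crash details for triage
        yield mutant

# Each seed PDF spawns thousands of corrupted variants to try.
seed = b"%PDF-1.4 ..."  # stand-in for one of the 1,500 unique seed files
mutants = list(fuzz(seed, 10))
print(len(mutants))  # 10
```

Scaled up to 1,500 seeds and 3 million mutants, the only hard parts left are detecting the crashes and triaging which ones are exploitable.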

Given that an n-bit file admits 2^n theoretical mutations, and given the ease with which each mutation can be automatically evaluated, PDF readers alone should remain a goldmine for new exploits for some time. Meanwhile, there are many other programs and file types that can also be attacked with various fuzzing methods.
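
To put a number on that search space, here is a back-of-the-envelope calculation for a 1 MiB PDF, counting the digits of 2^n without ever computing the full power:

```python
import math

size_bytes = 1 << 20          # a typical ~1 MiB PDF
bits = size_bytes * 8         # 8,388,608 individually flippable bits
print(bits)

# Decimal digits in 2**bits, via digits = floor(bits * log10(2)) + 1
digits = int(bits * math.log10(2)) + 1
print(digits)                 # roughly 2.5 million digits
```

So even restricting mutations to single bit flips gives over eight million candidates per seed file, and the count of arbitrary mutations is a number millions of digits long. No vendor test plan, however well funded, exhausts that space.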

Bugtraq has been highlighting software vulnerabilities for years

Take a look at the Bugtraq mailing list archive and you’ll see what I mean. Every day brings a new crop of reports and proofs of concept for all kinds of software. In fact, another six were added while I wrote this blog post. Buried amongst the plethora of obscure libraries and applications are often complete howlers in major products. How are these bugs being found? In the case of closed source software, fuzzing techniques can be the primary tools.

Fuzzing comes in many forms, with some methods and frameworks being more intelligent and guided than others, but the aim is always to automate the discovery of exploitable bugs by finding situations for which complex software either hasn’t been tested or cannot be tested.

You may be wondering why, with their wealth and resources, major software manufacturers don’t fuzz their products to death, as well as performing more traditional testing. The short answer is that they do, but due to the sheer number of possibilities and the time required, all they can do is fuzz as much as possible before the release deadline. The overwhelming majority of possible tests may still remain to be run by other, potentially malicious individuals and groups.

Security holes in software are not going away any time soon, so ensure that the security software you run is capable of protecting you. How? Checking out good anti-malware reviews that include exploit attacks such as ours would be a good start.

Author: Jon Thompson (Email: jon@selabs.uk; Twitter: @jon_thompson_uk)

About

SE Labs Ltd is a private, independently-owned and run testing company that assesses security products and services. The main laboratory is located in Wimbledon, South London. It has excellent local and international travel connections. The lab is open for prearranged client visits.

Contact

SE Labs Ltd
Hill Place House
55A High Street
Wimbledon
SW19 5BA

020 3875 5000

info@selabs.uk

Press