SPECIAL EDITION

Special Edition is the blog for security testing business SE Labs. It explains how we test security products, reports on the internet threats we find and provides security tips for businesses, other organisations and home users.

Friday, 4 August 2017

Quantum Inside?

Is this the dawn of the quantum computer age? Jon Thompson investigates.

Scientists are creating quantum computers capable of cracking the most fiendish encryption in the blink of an eye. Potentially hostile foreign powers are building a secure quantum internet that automatically defeats all eavesdropping attempts.

Single computers far exceeding the power of a hundred supercomputers are within humanity's grasp. 

Are these stories true, as headlines regularly claim? The answer is increasingly yes, and it's to China we must look for much current progress.

The Quantum Internet
Let's begin with the uncrackable "quantum internet". Sending messages using the properties of the subatomic world has been possible for years; it's considered the "gold standard" of secure communications. Chinese scientists recently set a new distance record for sending information using quantum techniques when they transmitted data 1,200km to a special satellite. What's more, China is implementing a quantum networking infrastructure.

QuantumCTek recently announced it is to deploy a network for government and military employees in the Chinese city of Jinan, secured using quantum key distribution. Users will send messages encrypted by traditional means, with a second "quantum" channel distributing the associated decryption keys. Reading the keys destroys the delicate state of the photons that carry them, so they can be read only once, by the intended recipient; any interception leaves the message undecryptable and makes the eavesdropper's presence instantly apparent.
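
To make the idea concrete, here is a minimal, purely illustrative Python sketch of BB84-style quantum key distribution. It only simulates the statistics (no real quantum hardware is involved, and the function and variable names are our own invention), but it shows why an eavesdropper who measures the photons in transit betrays her presence through errors in the sifted key.

    import random

    def bb84_sketch(n_photons=2000, eavesdropper=False):
        """Toy simulation of BB84 key-distribution statistics (illustrative only)."""
        alice_bits  = [random.randint(0, 1) for _ in range(n_photons)]
        alice_bases = [random.choice("+x") for _ in range(n_photons)]  # encoding bases
        bob_bases   = [random.choice("+x") for _ in range(n_photons)]  # measurement bases

        bob_bits = []
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
            if eavesdropper:
                # Eve measures in a random basis, destroying the photon's state,
                # then re-sends her guess; half the time her basis is wrong.
                eve_basis = random.choice("+x")
                bit = bit if eve_basis == a_basis else random.randint(0, 1)
                a_basis = eve_basis
            # Bob recovers the bit only when his basis matches; otherwise it's random.
            bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

        # Keep the positions where Alice and Bob used the same basis (the sifted key),
        # then compare a sample publicly to estimate the error rate.
        sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
                  if ab == bb]
        errors = sum(1 for a, b in sifted if a != b)
        return errors / len(sifted)

    print("Error rate with no eavesdropper: %.2f" % bb84_sketch())                   # ~0.00
    print("Error rate with an eavesdropper: %.2f" % bb84_sketch(eavesdropper=True))  # ~0.25

An error rate of around 25 per cent in the sampled key is the tell-tale sign that someone has been listening; the parties simply discard that key and start again.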

The geopolitical implications of networks no foreign power can secretly tap are potentially immense. Scarier still is the prospect of quantum computers cracking current encryption in seconds. What's the truth here?

Encryption Under Threat
Popular asymmetric encryption schemes, such as RSA and elliptic curve cryptography, which underpin protocols like SSL/TLS, are under threat from quantum computing. In fact, after mandating elliptic curve encryption for many years, the NSA recently declared it potentially obsolete due to the coming quantum computing revolution.

Asymmetric encryption algorithms such as RSA use the prime factors of massive numbers as the basis for their security (elliptic curve schemes rely on a closely related hard problem). It would take a supercomputer far too long to find the right factors to be useful, but the task is thought to be easy for a quantum computer running Shor's algorithm.
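
As a rough illustration (a toy of our own, not a real cryptosystem), the snippet below factors a small RSA-style modulus by trial division. Against a genuine 2048-bit modulus the same approach would take longer than the age of the universe on classical hardware, whereas Shor's algorithm on a large universal quantum computer is expected to solve it efficiently.

    import math

    def trial_division_factor(n):
        """Classically factor n = p * q by brute force; hopeless for 2048-bit moduli."""
        for candidate in range(2, math.isqrt(n) + 1):
            if n % candidate == 0:
                return candidate, n // candidate
        return None

    # A toy 'RSA modulus': the product of two small primes.
    p, q = 10007, 10009
    print(trial_division_factor(p * q))   # (10007, 10009), found almost instantly at this size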

For today's strong symmetric encryption, such as AES and Blowfish, which use the same key to encrypt and decrypt, the news is currently a little better. It's thought that quantum computers will initially have a harder time cracking these, effectively halving the strength of the key rather than breaking the cipher outright. So, if you're using AES with a 256-bit key, in future it will be about as secure as a 128-bit key is today.
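
A back-of-the-envelope way to see this (a simplified sketch, not a precise cost model): Grover's algorithm reduces a brute-force search over 2^n keys to roughly the square root of that, about 2^(n/2) quantum operations, which is why a 256-bit key is expected to retain around 128 bits of effective security.

    def grover_effective_bits(key_bits):
        """Rough effective strength of an n-bit symmetric key against Grover-accelerated search."""
        # Grover's algorithm needs on the order of sqrt(2^n) = 2^(n/2) cipher evaluations.
        return key_bits // 2

    for bits in (128, 192, 256):
        print(f"AES-{bits}: roughly {grover_effective_bits(bits)}-bit security against a quantum adversary")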

A Quantum Leap
How far are we from quantum computers making the leap from flaky lab experiments to full production? The answer depends on the problem you want to solve, because not all quantum computers are the same. In fact, according to IBM, they fall into three classes.

The least powerful are quantum annealers. These are available now in the form of machines from Canada's D-Wave. They have roughly the same power as a traditional computer but are especially good at solving optimisation problems in exquisite detail. Airbus is already using this ability to increase the efficiency of wing aerodynamics.

More powerful are analogue quantum computers. These are much more difficult to build, and IBM thinks they're about five years away. They will be the first class of quantum computers to exceed the power of conventional machines. Again, they won’t run programs as we think of them, but instead will simulate incredibly complex interactions, such as those found in life sciences, chemistry and materials science.

The most powerful machines to come are universal quantum computers, which is what most people think of when discussing quantum computers. These could be a decade or more away, but they're coming, and will be exponentially more powerful than today's fastest supercomputers. They will run programs as we understand them, including Shor's Algorithm, and will be capable of cracking encryption with ease. While they're being developed, so are the programs they'll run. The current list stands at about 50 specialised but immensely powerful algorithms. Luckily, there are extremely complex engineering problems to overcome before this class of hardware becomes a reality.

Meanwhile, quantum computer announcements are coming thick and fast.

IBM has announced the existence of a very simple device it claims is the first step on the path to a universal quantum computer. Called IBM Q, there's a web portal for anyone to access and program it, though learning how and what you can do with such a device could take years.

Google is pursuing its own quantum hardware. The company says it plans to demonstrate a reliable quantum chip before the end of 2017, and in doing so will assert something called "quantum supremacy", meaning that it can reliably complete specialised tasks faster than a conventional computer. Microsoft is also in on the action. Its approach is called StationQ, and the company has been quietly researching quantum technologies for over a decade.

Our Universal Future
While there's still a long way to go, the involvement of industry giants means there's no doubt that quantum computers are entering the mainstream. It will probably be the fruits of their computational power that we see first in everyday life, rather than the hardware itself: solutions to currently difficult problems, and improvements in the efficiency of everything from data transmission to batteries for electric cars.

Life will really change when universal quantum computers finally become a reality. Be in no doubt that conventional encryption will one day be a thing of the past. Luckily, researchers are already working on so-called post-quantum encryption algorithms that these machines will find difficult to crack.

As well as understandable fears over privacy, and even the rise of quantum artificial intelligence, the future also holds miracles in medicine and other areas that are currently far from humanity's grasp. The tasks to which we put these strange machines remain entirely our own choice. Let's hope we choose wisely.

Monday, 17 July 2017

Next-generation firewalls: latest report

Using layers of security is a well-known concept designed to reduce the chances of an attacker succeeding in breaching a network. If one layer fails, others exist to mitigate the threat.

Latest reports now online.

In this report (PDF) we explore the effectiveness of network appliances designed to detect and block attacks against endpoint systems.

The systems we have tested here are popular appliances designed to sit between your endpoints and the internet router. They are designed to detect, and often protect against, threats coming in from the internet or passing through the local network.

Their role is to stop threats before they reach the endpoints. If they fail to stop a threat, they might learn that an attack has happened and generate an alert, while subsequently blocking future, similar attacks.

In some cases an appliance will take information it considers suspicious and send it to a cloud-based service for further analysis. In this way it might allow a threat through the first time, explore it more deeply using the cloud service and send information back to the appliance so that it will block the same (or a similar) attack in future.
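
As a hypothetical sketch of that feedback loop (the class, method and verdict names below are ours, not any vendor's API), the appliance lets an unknown sample through, submits it to a cloud analysis service, and records the verdict locally so the next matching attack is stopped at the edge. Real products use far fuzzier matching than an exact hash, precisely so that similar, not just identical, attacks are caught.

    import hashlib

    class CloudAssistedAppliance:
        """Illustrative model of a network appliance that learns from a cloud service."""

        def __init__(self, cloud_analyse):
            self.cloud_analyse = cloud_analyse   # callable: sample bytes -> "malicious" or "clean"
            self.blocklist = set()               # fingerprints of samples already judged malicious

        def inspect(self, sample: bytes) -> str:
            fingerprint = hashlib.sha256(sample).hexdigest()
            if fingerprint in self.blocklist:
                return "blocked"                 # seen before: stopped at the appliance itself
            verdict = self.cloud_analyse(sample) # unknown: deeper analysis in the cloud
            if verdict == "malicious":
                self.blocklist.add(fingerprint)  # future matching samples will be blocked
                return "allowed, but flagged and learned"
            return "allowed"

    # First sighting slips through while the cloud works; the repeat attack does not.
    appliance = CloudAssistedAppliance(
        cloud_analyse=lambda s: "malicious" if b"evil" in s else "clean")
    print(appliance.inspect(b"evil payload"))   # allowed, but flagged and learned
    print(appliance.inspect(b"evil payload"))   # blocked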

It’s a little like an immune system.

As immune systems adapt to protect against known threats, so threats adapt in an arms race to defeat protection mechanisms. This report includes our first public set of network security appliance results.

Future reports will keep you updated as to how well the industry competes with the bad guys in the real world.

Monday, 10 July 2017

Can anti-malware be 100 per cent effective?

You can probably guess the answer, but we'll explore how products can score very well in tough tests, and which are the best.

Latest reports now online

There are a lot of threats on the web, and going online without protection is very risky. We need good, consistently effective anti-malware products to reduce our risk of infection.

And the ones included in these reports look great – in fact, some score 100 per cent. That means they stopped all the threats that we exposed them to and didn’t block anything legitimate.

But wait a minute! Those in the security industry know full well that there is no such thing as 100 per cent security. There is always a way past every security measure, and this is as true in the anti-malware world as with any other measures for threat protection.

This test includes some of the very best anti-malware products in the world, and pits them against prevalent threats, be they ones that affect hundreds of thousands of users worldwide, or those that could be used to target individuals and organisations. It’s a tough test, but a fair one.

You could argue that any anti-malware product worth its salt would score 100 per cent or thereabouts.

Products can score 100 per cent in our tests because we’re not choosing thousands of weird and wonderful rare pieces of malware to test. Regular users are extremely unlikely to encounter those in the real world.

We’re looking at the threats that could affect you.

Our mission is to help improve computer security through testing, both publicly and privately. We also want to help customers choose the best products by publishing some of those test results.

But don’t forget that success today is not a guarantee of success tomorrow. It’s important to keep monitoring test results.

Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Thursday, 8 June 2017

Brexit and Cybersecurity

Is the UK headed for a cybersecurity disaster?

With Brexit looming and cybercrime booming, the UK can't afford major IT disasters, but history says they're inevitable.

The recent WannaCry ransomware tsunami was big news in the UK. However, it was incorrectly reported that the government had scrapped a deal with Microsoft to provide extended support for Windows XP that would have protected ageing NHS computers. The truth is far more mundane.

In 2014, the government signed a one-year deal with Microsoft to provide security updates to NHS Windows XP machines. This was supposed to force users to move to the latest version of Windows within 12 months, but with a "complete aversion to central command and control" within the NHS, and no spare cash for such an upgrade, the move was never completed.

This isn't the first Whitehall IT disaster, by a very long way.

During the 1990s, for example, it was realised that the IT systems underpinning the UK's Magistrates' Courts were inadequate. It was proposed that a new, unified system should replace them. In 1998, the Labour government signed a deal with ICL to develop Project Libra. Costing £146m, this would manage the courts and link to other official systems, such as the DVLA and prisons systems.

Described in 2003 as "One of the worst IT projects ever seen", Project Libra's costs nearly tripled to £390m, with ICL's parent company, Fujitsu, twice threatening to pull out of the project.

This wasn't Labour's only IT project failure. In total, it's reckoned that by the time the government fell in 2010, it had consumed around £26bn of taxpayers' money on failed, late and cancelled IT projects.

The coalition government that followed fared no better. £150m paid to Raytheon in compensation for cancelling the e-Borders project, £100m spent on a failed archiving system at the BBC, £56m spent on a Ministry of Justice system that was cancelled after someone realised there was already a system doing the same thing: these are just a few of the failed IT projects since Labour left office seven years ago.

The Gartner group has analysed why government IT projects fail, and discovered several main factors. Prominent amongst these is that politicians like to stamp their authority on the nation with grandiose schemes. Gartner says such large projects fail because of their scope. It also says failure lies in trying to re-implement complex, existing processes rather than seeking to simplify and improve on them by design. The problem is, with Brexit looming, large, complex systems designed to quickly replace existing systems are exactly what's required.


A good example is the ageing HM Customs & Excise CHIEF system. Because goods currently enjoy freedom of movement within the EU, only around 60 million packages need to be processed through CHIEF each year. The current system is about 25 years old and just about copes. Leaving the EU will mean processing an estimated 390 million packages per year. However, the replacement system is already rated as "Amber/Red" by the government's own Infrastructure and Projects Authority, meaning it is at risk of failure before it's even delivered.

Another key system for the UK is the EU's Schengen Information System (SIS-II). This provides real time information about individuals of interest, such as those with European Arrest Warrants against them, terrorist suspects, returning foreign fighters, missing persons, drug traffickers, etc.

Access to SIS-II is limited to countries that abide by European Court of Justice rulings, so after Brexit the UK's access may be withdrawn. Ex-Liberal Democrat leader Nick Clegg has described the system as a "fantastically useful weapon" against terrorism.

Late last year, a Commons Select Committee published a report identifying the risks to policing if the UK loses access to SIS-II and related EU systems. The report claimed that then-Home Secretary Theresa May had said such systems were vital to "stop foreign criminals from coming to Britain, deal with European fighters coming back from Syria, stop British criminals evading justice abroad, prevent foreign criminals evading justice by hiding here, and get foreign criminals out of our prisons".

The UK will either somehow have to re-negotiate access to these systems, or somehow quickly and securely duplicate them and their content on UK soil. To do so, we will have to navigate the EU's labyrinthine data protection laws and sharing agreements to access relevant data.


If the UK government can find a way to prevent these and other IT projects running into problems during development, there's still the problem of cybercrime and cyberwarfare. Luckily, there's a strategy covering this.

In November 2016, the government launched its National Cyber Security Strategy. Tucked in amongst areas covering online business and national defence, section 5.3 covers protecting government systems. This acknowledges that government networks are complex, and contain systems that are badly in need of modernisation. It asserts that in future there will be, "no unmanaged risks from legacy systems and unsupported software".

The recent NHS WannaCry crisis was probably caused by someone unknowingly detonating an infected email attachment. The Strategy recognises that most attacks have a human element. It says the government will "ensure that everyone who works in government has a sound awareness of cyber risk". Specifically, the Strategy says that health and care systems pose unique threats to national security due to the sector employing 1.6 million people in 40,000 organisations.

The problem is, the current Prime Minister called a snap General Election in May, potentially throwing the future of the Strategy into doubt. If the Conservatives maintain power, there's likely to be a cabinet reshuffle, with an attendant shift in priorities and funding.

If Labour gains power, things are even less clear. Its manifesto makes little mention of cyber security, but says it will order a complete strategic defence and security review "including cyber warfare", which will take time to formulate and agree with stakeholders. It also says Labour will introduce a cyber charter for companies working with the Ministry of Defence.

Regardless of who takes power in the UK this month, time is running out. The pressure to deliver large and complex systems to cover the shortfall left by Brexit will be immense. Such systems need to be delivered on time, within budget and above all they must be secure – both from internal and external threats.

Friday, 19 May 2017

Staying Neutral



Is a fox running the FCC's henhouse?


Net neutrality is a boring but noble cause. It ensures the internet favours no one. So, why is the new chairman of the Federal Communications Commission, Ajit Pai, determined to scrap it?

"For decades before 2015," said Pai in a recent speech broadcast on C-SPAN2, "we had a free and open internet. Indeed, the free and open internet developed and flourished under light-touch regulation. We weren't living in some digital dystopia before the partisan imposition of a massive plan hatched in Washington saved all of us…" Pai also says he wants to "take a weed whacker" to net neutrality, and that its "days are numbered".

These are strong words. A possible reason for them is that Pai was previously Associate General Counsel at Verizon. To understand why this is significant, we must delve into recent history.

In 2007, Comcast was caught blocking BitTorrent traffic. The FCC ordered the company to stop, and in 2009 Comcast settled a related class action for $16 million.

In 2011, Verizon blocked Google Wallet from being downloaded and installed on its phones in favour of its own, now unfortunately named ISIS service, which it founded with T-Mobile and AT&T.

In response, the FCC imposed its Open Internet Order, which forced ISPs to stop blocking content and throttling bandwidth.

At the time, ISPs in the US were regulated under Title I of the 1934 Communications Act. This classed them as "information services" and provided for Pai's so-called "light touch" regulation. Title II companies are "common carriers" on a par with the phone companies themselves, and are considered part of the national infrastructure.


Verizon went to court, and in 2014 successfully argued that the FCC had no authority to impose its will on mere Title I companies. This backfired. In 2015, the FCC decided that ISPs were now part of the national infrastructure, and made them Title II companies. Problem solved, dystopia averted.

Times have changed, and the new Washington administration is keen to roll back what it sees as anti-business Obama-era regulation. Pai was appointed chairman of the FCC in January this year. Given his past at Verizon, his attitude to abolishing net neutrality raises real concerns about the internet's future.

In an April 2017 interview on PBS News Hour, Pai was asked a direct question: supposing a cable broadcaster, like Comcast, created a TV show that competed with an equally popular Netflix show. Without net neutrality, what's to stop Comcast retarding Netflix traffic over its own network, while prioritising that of its own show?

"One thing that's important to remember," came Pai's reply, "is that it is hypothetical. We don't see evidence of that happening…" In fact, net neutrality ensures this situation cannot currently happen, which is why there's no evidence of it.

Taking a wider view, Pai's attitude is also curiously uninformed given that his own web page at the FCC shows that from 2007 to 2011, when neutrality violations were big news and the FCC had to impose its Open Internet Order, he was the FCC's Deputy General Counsel, Associate General Counsel, and Special Advisor to the General Counsel. After a stint in the private sector, he returned to become an FCC Commissioner in 2012.

In his speech on C-SPAN2, Pai also asked, "What happened after the FCC imposed Title II? Sure enough, infrastructure investment declined."

However, the opposite of this assertion is a matter of public record. As Ars Technica discovered, after Title II was imposed, ISP investment continued to rise. Indeed, Verizon's own earnings release shows that in the first nine months of 2015, now labouring under the apparently repressive Title II, the company invested "approximately $22 billion in spectrum licenses and capital for future network capacity".

Interestingly, Pai's page at the FCC also states his regulatory position in a series of bullet points. These include:
  • Consumers benefit most from competition, not preemptive regulation. Free markets have delivered more value to American consumers than highly regulated ones.

  • The FCC is at its best when it proceeds on the basis of consensus; good communications policy knows no partisan affiliation.

History shows that Title II regulation wasn't pre-emptive. It was a response to increasingly bold and shady practices by ISPs that forced the Commission's hand.

Net neutrality currently maintains the kind of free market that big corporations usually crave. It's a rare example of regulation removing barriers to trade. Companies of all types and sizes are currently free to compete on the internet, but cannot deny others from competing.

Scrapping US net neutrality will also affect non-US internet users who access US-based content via a VPN. At the US end of the VPN, traffic is handed off to a US ISP. If that company favours some sites over others (or even blocks them), access to content will be steered towards the choices set out by commercial interests, just as if the user were in the US.

Given all this, it is perhaps rather cynical that the Bill to remove Title II status from ISPs, sponsored by Republican senator Mike Lee of Utah, is called the "Restoring Internet Freedom Act". With Pai also a declared Republican, and dead set on rolling back Title II, the meeting on May 18th to decide whether to proceed could have been a short one with a distinctly partisan flavour.

Tuesday, 11 April 2017

Testing anti-malware's protection layers

Our first set of anti-malware test results for 2017 is now available.

Endpoint security is an important component of computer security, whether you are a home user, a small business or running a massive company. But it's just one layer.

Latest reports now online

Using multiple layers of security is very important too, including a firewall, the anti-exploit technologies built into the operating system, and virtual private networks (VPNs) when using third-party WiFi.

What many people don't realise is that anti-malware software often contains several layers of protection of its own. Threats can come at you from many different angles, which is why security vendors try to block and stop them using a whole chain of approaches.

A fun video we created to show how anti-malware tries to stop threats in different ways


How layered protection works

For example, let's consider a malicious website that will infect victims automatically when they visit the site. Such 'drive-by' threats are common and make up about one third of this test's set of attacks. You visit the site with your web browser and it exploits some vulnerable software on your computer, before installing malware – possibly ransomware, a type of malware that also features prominently in this test.


Here's how the layers of endpoint security can work. The URL (web link) filter might block you from visiting the dangerous website. If that works you are safe and nothing else need be done.

But let's say this layer of security crumbles, and the system is exposed to the exploit.

Perhaps the product's anti-exploit technology prevents the exploit from running, or at least from running fully. If so, great. If not, the threat will likely download the ransomware and try to run it.

At this stage file signatures may come into play. Additionally, the malware's behaviour can be analysed. Maybe it is tested in a virtual sandbox first. Different vendors use different approaches.

Ultimately the threat has to move down through a series of layers of protection in all but the most basic of 'anti-virus' products.
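
Put as code (a deliberately simplified sketch of the layering described above, with layer names of our own choosing rather than any particular product's), each layer gets a chance to stop the attack, and the threat only succeeds if every one of them misses:

    def url_filter(event):         return event.get("url_reputation") == "bad"
    def exploit_prevention(event): return event.get("exploit_blocked", False)
    def signature_scan(event):     return event.get("known_signature", False)
    def behaviour_analysis(event): return event.get("acts_like_ransomware", False)

    PROTECTION_LAYERS = [url_filter, exploit_prevention, signature_scan, behaviour_analysis]

    def endpoint_outcome(event):
        """Walk a drive-by attack through each protection layer in turn."""
        for layer in PROTECTION_LAYERS:
            if layer(event):
                return f"blocked by {layer.__name__}"
        return "compromised"   # every layer missed: the ransomware runs

    # Example: the URL filter misses the new site, but behavioural analysis catches the payload.
    print(endpoint_outcome({"url_reputation": "unknown", "acts_like_ransomware": True}))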

The way we test endpoint security is realistic and allows all of a product's protection layers to be tested.

Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Thursday, 6 April 2017

Back from the Dead

Forgotten web sites can haunt users with malware.

Last night, I received a malicious email. The problem is, it was sent to an account I use to register for web sites and nothing else.

Over the years, I've signed up for hundreds of sites using this account, from news to garden centres. One of them has been compromised. The mere act of receiving the email immediately marked it out as dodgy.

The friendly, well written message was a refreshing change from the usual approach, which most often demands immediate, unthinking action. The sender, however, could only call me "J" as he didn't have my forename. There was a protected file attached, but the sender had supplied the password. It was a contract, he said, and he looked forward to hearing back from me.

The headers said the email came from a French telecoms company. Was someone on a spending spree with my money? My PayPal and bank accounts showed no withdrawals.

Curious about the payload, I spun up a suitably isolated Windows 10 victim system, and detonated the attachment. It had the cheek to complain about having no route to the outside world. I tried again, this time with an open internet connection. A randomly-named process quickly opened and closed, while the file reported a corruption. Maybe the victim system had the wrong version of Windows installed, or the wrong vulnerabilities exposed. Maybe my IP address was in the wrong territory. Maybe (and this is more likely) the file spotted the monitoring software watching its every move, and aborted its run with a suitably misleading message.

Disappointed, I deleted the victim system and wondered which site, out of hundreds, could have been compromised. I'll probably never know, but it does reveal a deeper worry about life online.

Over the years, we all sign up for plenty of sites about which we subsequently forget, and usually with whichever email address is most convenient. It's surely only a matter of time before old, forgotten sites get hacked and return to haunt us with something more focused than malicious commodity spam – especially if we've been silly enough to provide a full or real name and address. Because of this, it pays to set up dedicated accounts for registrations, or use temporary addresses from places such as Guerrilla Mail.