SPECIAL EDITION

Special Edition is the blog for security testing business SE Labs. It explains how we test security products, reports on the internet threats we find and provides security tips for businesses, other organisations and home users.

Monday, 17 July 2017

Next-generation firewalls: latest report

Using layers of security is a well-known concept designed to reduce the chances of an attacker succeeding in breaching a network. If one layer fails, others exist to mitigate the threat.

Latest reports now online.

In this report (PDF) we explore the effectiveness of network appliances designed to detect and block attacks against endpoint systems.

The systems we have tested here are popular appliances designed to sit between your endpoints and the internet router. They are designed to detect, and often protect against, threats coming in from the internet or passing through the local network.

Their role is to stop threats before they reach the endpoints. If they fail to stop a threat, they might learn that an attack has happened and generate an alert, while subsequently blocking future, similar attacks.

In some cases an appliance will take information it considers suspicious and send it to a cloud-based service for further analysis. In this way it might allow a threat through the first time, explore it more deeply using the cloud service and send back information to the appliance so that it will block that same (or similar) attack in future.
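As a very rough sketch of that feedback loop (all class and method names here are invented for illustration, not any vendor's actual API), the appliance lets unknown traffic through once, submits it to the cloud service, and blocks repeat sightings:

```python
import hashlib

class Cloud:
    """Stand-in for a vendor's cloud analysis service (hypothetical)."""
    def analyse(self, payload: bytes) -> bool:
        # Toy heuristic in place of a real sandbox verdict.
        return b"EICAR" in payload

class Appliance:
    """Toy model of a network appliance with a cloud feedback loop."""
    def __init__(self, cloud: Cloud):
        self.cloud = cloud
        self.blocklist = set()  # hashes of known-bad payloads

    def inspect(self, payload: bytes) -> str:
        digest = hashlib.sha256(payload).hexdigest()
        if digest in self.blocklist:
            return "blocked"
        # Unknown traffic is allowed through, but submitted for analysis.
        if self.cloud.analyse(payload):
            self.blocklist.add(digest)  # future copies will be blocked
            return "allowed-then-flagged"
        return "allowed"

appliance = Appliance(Cloud())
print(appliance.inspect(b"EICAR test payload"))  # first sighting slips through
print(appliance.inspect(b"EICAR test payload"))  # repeat attempt is blocked
```

The point of the sketch is the asymmetry: the first victim may be infected, but the appliance "learns" and everyone behind it is protected from then on.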

It’s a little like an immune system.

As immune systems adapt to protect against known threats, so threats adapt in an arms race to defeat protection mechanisms. This report includes our first public set of network security appliance results.

Future reports will keep you updated as to how well the industry competes with the bad guys in the real world.

Monday, 10 July 2017

Can anti-malware be 100 per cent effective?

You can probably guess the answer, but we'll explore how products can score very well in tough tests, and which are the best.

Latest reports now online

There are a lot of threats on the web, and going online without protection is very risky. We need good, consistently effective anti-malware products to reduce our risk of infection.

And the ones included in these reports look great – in fact, some score 100 per cent. That means they stopped all the threats that we exposed them to and didn’t block anything legitimate.

But wait a minute! Those in the security industry know full well that there is no such thing as 100 per cent security. There is always a way past every security measure, and this is as true in the anti-malware world as with any other measures for threat protection.

This test includes some of the very best anti-malware products in the world, and pits them against prevalent threats, be they ones that affect hundreds of thousands of users worldwide, or those that could be used to target individuals and organisations. It’s a tough test, but a fair one.

You could argue that any anti-malware product worth its salt would score 100 per cent or thereabouts.

Products can score 100 per cent in our tests because we’re not choosing thousands of weird and wonderful rare pieces of malware to test. Regular users are extremely unlikely to encounter those in the real world.

We’re looking at the threats that could affect you.

Our mission is to help improve computer security through testing, both publicly and privately. We also want to help customers choose the best products by publishing some of those test results.

But don’t forget that success today is not a guarantee of success tomorrow. It’s important to keep monitoring test results.

Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Thursday, 8 June 2017

Brexit and Cybersecurity

Is the UK headed for a cybersecurity disaster?

With Brexit looming and cybercrime booming, the UK can't afford major IT disasters, but history says they're inevitable.

The recent WannaCry ransomware tsunami was big news in the UK. However, it was incorrectly reported that the government had scrapped a deal with Microsoft to provide extended support for Windows XP that would have protected ageing NHS computers. The truth is far more mundane.

In 2014, the government signed a one-year deal with Microsoft to provide security updates to NHS Windows XP machines. This was supposed to force users to move to the latest version of Windows within 12 months, but with a "complete aversion to central command and control" within the NHS, and no spare cash for such an upgrade, the move was never completed.

This isn't the first Whitehall IT disaster, by a very long way.

During the 1990s, for example, it was realised that the IT systems underpinning the UK's Magistrates' Courts were inadequate. It was proposed that a new, unified system should replace them. In 1998, the Labour government signed a deal with ICL to develop Project Libra. Costing £146m, this would manage the courts and link to other official systems, such as the DVLA and prisons systems.

Described in 2003 as "One of the worst IT projects ever seen", Project Libra's costs nearly tripled to £390m, with ICL's parent company, Fujitsu, twice threatening to pull out of the project.

This wasn't Labour's only IT project failure. In total, it's reckoned that by the time the government fell in 2010, it had consumed around £26bn of taxpayers' money on failed, late and cancelled IT projects.

The coalition government that followed fared no better. £150m paid to Raytheon in compensation for cancelling the e-Borders project, £100m spent on a failed archiving system at the BBC, £56m spent on a Ministry of Justice system that was cancelled after someone realised there was already a system doing the same thing: these are just a few of the failed IT projects since Labour left office seven years ago.

The Gartner group has analysed why government IT projects fail, and discovered several main factors. Prominent amongst these is that politicians like to stamp their authority on the nation with grandiose schemes. Gartner says such large projects fail because of their scope. It also says failure lies in trying to re-implement complex, existing processes rather than seeking to simplify and improve on them by design. The problem is, with Brexit looming, large, complex systems designed to quickly replace existing systems are exactly what's required.


A good example is the ageing HM Customs & Excise CHIEF system. Because goods currently enjoy freedom of movement within the EU, there are only around 60 million packages that need checking in through CHIEF each year. The current system is about 25 years old and just about copes. Leaving the EU will mean processing an estimated 390 million packages per year. However, the replacement system is already rated as "Amber/Red" by the government's own Infrastructure and Projects Authority, meaning it is already at risk of failure before it's even delivered.

Another key system for the UK is the EU's Schengen Information System (SIS-II). This provides real time information about individuals of interest, such as those with European Arrest Warrants against them, terrorist suspects, returning foreign fighters, missing persons, drug traffickers, etc.

Access to SIS-II is limited to countries that abide by European Court of Justice rulings. Ex-Liberal Democrat leader Nick Clegg described it as a "fantastically useful weapon" against terrorism, but after Brexit, access to SIS-II may be withdrawn.

Late last year, a Commons Select Committee published a report identifying the risks to policing if the UK loses access to SIS-II and related EU systems. The report claimed that then-Home Secretary Theresa May had said that such systems were vital to "stop foreign criminals from coming to Britain, deal with European fighters coming back from Syria, stop British criminals evading justice abroad, prevent foreign criminals evading justice by hiding here, and get foreign criminals out of our prisons".

The UK will either somehow have to re-negotiate access to these systems, or somehow quickly and securely duplicate them and their content on UK soil. To do so, we will have to navigate the EU's labyrinthine data protection laws and sharing agreements to access relevant data.


If the UK government can find a way to prevent these and other IT projects running into problems during development, there's still the problem of cybercrime and cyberwarfare. Luckily, there's a strategy covering this.

In November 2016, the government launched its National Cyber Security Strategy. Tucked in amongst areas covering online business and national defence, section 5.3 covers protecting government systems. This acknowledges that government networks are complex, and contain systems that are badly in need of modernisation. It asserts that in future there will be, "no unmanaged risks from legacy systems and unsupported software".

The recent NHS WannaCry crisis was probably caused by someone unknowingly detonating an infected email attachment. The Strategy recognises that most attacks have a human element. It says the government will "ensure that everyone who works in government has a sound awareness of cyber risk". Specifically, the Strategy says that health and care systems pose unique risks to national security, as the sector employs 1.6 million people across 40,000 organisations.

The problem is, the current Prime Minister called a snap General Election in May, potentially throwing the future of the Strategy into doubt. If the Conservatives maintain power, there's likely to be a cabinet reshuffle, with an attendant shift in priorities and funding.

If Labour gains power, things are even less clear. Its manifesto makes little mention of cyber security, but says it will order a complete strategic defence and security review "including cyber warfare", which will take time to formulate and agree with stakeholders. It also says Labour will introduce a cyber charter for companies working with the Ministry of Defence.

Regardless of who takes power in the UK this month, time is running out. The pressure to deliver large and complex systems to cover the shortfall left by Brexit will be immense. Such systems need to be delivered on time, within budget and above all they must be secure – both from internal and external threats.

Friday, 19 May 2017

Staying Neutral

Is a fox running the FCC's henhouse?


Net neutrality is a boring but noble cause. It ensures the internet favours no one. So, why is the new chairman of the Federal Communications Commission, Ajit Pai, determined to scrap it?

"For decades before 2015," said Pai in a recent speech broadcast on C-SPAN2, "we had a free and open internet. Indeed, the free and open internet developed and flourished under light-touch regulation. We weren't living in some digital dystopia before the partisan imposition of a massive plan hatched in Washington saved all of us…" Pai also says he wants to "take a weed whacker" to net neutrality, and that its "days are numbered".

These are strong words. A possible reason for them is that Pai was previously Associate General Counsel at Verizon. To understand why this is significant, we must delve into recent history.

In 2007, Comcast was caught blocking BitTorrent traffic. This was ruled illegal by the FCC, and in 2009 Comcast settled a class action for $16 million.

In 2011, Verizon also blocked Google Wallet from being downloaded and installed on its phones in favour of its own, now-unfortunately titled ISIS service, which it founded with T-Mobile and AT&T.

In response, the FCC imposed its Open Internet Order, which forced ISPs to stop blocking content and throttling bandwidth.

At the time, ISPs in the US were regulated under Title I of the 1934 Communications Act. This classed them as "information services" and provided for Pai's so-called "light touch" regulation. Title II companies are "common carriers" on a par with the phone companies themselves, and are considered part of the national infrastructure.


Verizon went to court, and in 2014 successfully argued that the FCC had no authority to impose its will on mere Title I companies. This backfired. In 2015, the FCC decided that ISPs were now part of the national infrastructure, and made them Title II companies. Problem solved, dystopia averted.

Times have changed, and the new Washington administration is keen to roll back what it sees as anti-business Obama-era regulation. Pai was appointed chairman of the FCC in January this year. Given his past at Verizon, his attitude to abolishing net neutrality raises real concerns about the internet's future.

In an April 2017 interview on PBS News Hour, Pai was asked a direct question: supposing a cable broadcaster, like Comcast, created a TV show that competed with an equally popular Netflix show. Without net neutrality, what's to stop Comcast retarding Netflix traffic over its own network, while prioritising that of its own show?

"One thing that's important to remember," came Pai's reply, "is that it is hypothetical. We don't see evidence of that happening…" In fact, net neutrality ensures this situation cannot currently happen, which is why there's no evidence of it.

Taking a wider view, Pai's attitude is also curiously uninformed given that his own web page at the FCC shows that from 2007 to 2011, when neutrality violations were big news and the FCC had to impose its Open Internet Order, he was the FCC's Deputy General Counsel, Associate General Counsel, and Special Advisor to the General Counsel. After a stint in the private sector, he returned to become an FCC Commissioner in 2012.

In his speech on C-SPAN2, Pai also asked, "What happened after the FCC imposed Title II? Sure enough, infrastructure investment declined."

However, the opposite of this assertion is a matter of public record. As Ars Technica discovered, after Title II was imposed, ISP investment continued to rise. Indeed, Verizon's own earnings release shows that in the first nine months of 2015, now labouring under the apparently repressive Title II, the company invested "approximately $22 billion in spectrum licenses and capital for future network capacity".

Interestingly, Pai's page at the FCC also states his regulatory position in a series of bullet points. These include:
  • Consumers benefit most from competition, not preemptive regulation. Free markets have delivered more value to American consumers than highly regulated ones.

  • The FCC is at its best when it proceeds on the basis of consensus; good communications policy knows no partisan affiliation.

History shows that Title II regulation wasn't pre-emptive. It was a response to increasingly bold and shady practices by ISPs that forced the Commission's hand.

Net neutrality currently maintains the kind of free market that big corporations usually crave. It's a rare example of regulation removing barriers to trade. Companies of all types and sizes are currently free to compete on the internet, but cannot prevent others from competing.

Scrapping US net neutrality will also affect non-US internet users who access US-based content via a VPN. At the US end of the VPN, traffic is handed off to a US ISP. If that company favours some sites over others (or even blocks them), access to content will be steered by those commercial choices, just as if the user were in the US.

Given all this, it is perhaps rather cynical that the Bill to remove Title II status from ISPs, sponsored by Republican senator Mike Lee of Utah, is called the "Restoring Internet Freedom Act". With Pai also a declared Republican, and dead set on rolling back Title II, the meeting on May 18th to decide whether to proceed could have been a short one with a distinctly partisan flavour.

Tuesday, 11 April 2017

Testing anti-malware's protection layers

Our first set of anti-malware test results for 2017 is now available.

Endpoint security is an important component of computer security, whether you are a home user, a small business or running a massive company. But it's just one layer.

Latest reports now online

Using multiple layers of security – including a firewall, anti-exploit technologies built into the operating system and a virtual private network (VPN) when using third-party WiFi – is very important too.

What many people don't realise is that anti-malware software often contains several layers of protection of its own. Threats can come at you from many different angles, which is why security vendors try to block and stop them using a whole chain of approaches.

A fun video we created to show how anti-malware tries to stop threats in different ways


How layered protection works

For example, let's consider a malicious website that will infect victims automatically when they visit the site. Such 'drive-by' threats are common and make up about one third of this test's set of attacks. You visit the site with your web browser and it exploits some vulnerable software on your computer, before installing malware – possibly ransomware, a type of malware that also features prominently in this test.


Here's how the layers of endpoint security can work. The URL (web link) filter might block you from visiting the dangerous website. If that works you are safe and nothing else need be done.

But let's say this layer of security crumbles, and the system is exposed to the exploit.

Maybe the product's anti-exploit technology prevents the exploit from running, or at least from running fully. If so, great. If not, the threat will likely download the ransomware and try to run it.

At this stage file signatures may come into play. Additionally, the malware's behaviour can be analysed. Maybe it is tested in a virtual sandbox first. Different vendors use different approaches.

Ultimately the threat has to move down through a series of layers of protection in all but the most basic of 'anti-virus' products.
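The chain described above – URL filtering, then anti-exploit, then signatures and behavioural analysis – can be sketched as a simple pipeline. This is purely illustrative (the layer names and threat attributes are our own inventions, not any product's internals); each layer gets a chance to block before the threat reaches the next one:

```python
# Each layer is a function that returns True if it blocks the threat.
# A threat is modelled as a dict of attributes a real product might detect.

def url_filter(threat):        return threat.get("url_known_bad", False)
def anti_exploit(threat):      return threat.get("exploit_recognised", False)
def file_signature(threat):    return threat.get("signature_match", False)
def behaviour_monitor(threat): return threat.get("acts_like_ransomware", False)

LAYERS = [url_filter, anti_exploit, file_signature, behaviour_monitor]

def defend(threat: dict) -> str:
    """Pass the threat through each layer in turn; any layer can stop it."""
    for layer in LAYERS:
        if layer(threat):
            return f"blocked by {layer.__name__}"
    return "infected"

# A drive-by attack from a brand-new URL, caught by its behaviour:
print(defend({"acts_like_ransomware": True}))  # blocked by behaviour_monitor
# A known-bad link, stopped at the first layer:
print(defend({"url_known_bad": True}))         # blocked by url_filter
# A threat that evades every layer:
print(defend({}))                              # infected
```

The sketch makes the key point visible: a product only fails when a threat gets past every layer, which is why testing a single layer in isolation gives a misleading picture.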

The way we test endpoint security is realistic and allows all layers of its protection to be tested.

Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Thursday, 6 April 2017

Back from the Dead

Forgotten web sites can haunt users with malware.

Last night, I received a malicious email. The problem is, it was sent to an account I use to register for web sites and nothing else.

Over the years, I've signed up for hundreds of sites using this account, from news to garden centres. One of them has been compromised. Since nothing but registration messages should ever arrive there, the mere act of receiving the email marked it out as dodgy.

The friendly, well written message was a refreshing change from the usual approach, which most often demands immediate, unthinking action. The sender, however, could only call me "J" as he didn't have my forename. There was a protected file attached, but the sender had supplied the password. It was a contract, he said, and he looked forward to hearing back from me.

The headers said the email came from a French telecoms company. Was someone on a spending spree with my money? My PayPal and bank accounts showed no withdrawals.

Curious about the payload, I spun up a suitably isolated Windows 10 victim system, and detonated the attachment. It had the cheek to complain about having no route to the outside world. I tried again, this time with an open internet connection. A randomly-named process quickly opened and closed, while the file reported a corruption. Maybe the victim system had the wrong version of Windows installed, or the wrong vulnerabilities exposed. Maybe my IP address was in the wrong territory. Maybe (and this is more likely) the file spotted the monitoring software watching its every move, and aborted its run with a suitably misleading message.

Disappointed, after deleting the victim system I wondered which site out of hundreds could have been compromised. I'll probably never know, but it does reveal a deeper worry about life online.

Over the years, we all sign up for plenty of sites about which we subsequently forget, and usually with whichever email address is most convenient. It's surely only a matter of time before old, forgotten sites get hacked and return to haunt us with something more focused than malicious commodity spam – especially if we've been silly enough to provide a full or real name and address. Because of this, it pays to set up dedicated accounts for registrations, or use temporary addresses from places such as Guerrilla Mail.
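One simple way to do this, assuming your mail provider supports plus-addressing (Gmail and several others deliver user+tag@domain to user@domain), is to derive a unique registration address per site. The mailbox name below is a made-up example; the bonus is that if spam ever arrives on one of these addresses, the tag tells you exactly which site leaked it:

```python
def registration_address(site: str,
                         mailbox: str = "registrations@example.com") -> str:
    """Derive a unique plus-address for a site, so a leak identifies the leaker.

    Assumes the provider treats user+tag@domain as an alias of user@domain.
    """
    user, domain = mailbox.split("@")
    # Keep only letters and digits so the tag stays a valid address fragment.
    tag = "".join(c for c in site.lower() if c.isalnum())
    return f"{user}+{tag}@{domain}"

print(registration_address("Garden-Centre.co.uk"))
# registrations+gardencentrecouk@example.com
```

Note that some sites reject addresses containing "+", and a determined spammer can strip the tag, so a dedicated mailbox or a disposable-address service remains the more robust option.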

Friday, 24 March 2017

Inside the CIA...


Who is behind the CIA's hacking tools? Surprisingly ordinary geeks, it seems.

At the start of March came the first part of yet another Wikileaks document dump, this time detailing the CIA's hacking capabilities. The world suddenly feared spooks watching them through their TVs and smartphones. It all made for great headlines.

The Agency has developed scores of interesting projects, not to mention a stash of hitherto unknown zero day vulnerabilities. The dump also gives notes on how to create well-behaved, professional malware that stands the least chance of detection, analysis and attribution to Langley. We've also learned some useful techniques for defeating antivirus software, which the Agency calls Personal Security Products (PSPs).

There's also a deeper tale to tell. It's about the personalities behind the redacted names working on these tools and techniques. They don’t seem so different from anyone else working in infosec.

User #524297 says he is a "Coffee addict, Connoisseur of International Barbecues, and Varied Malt Beverage Enthusiast." Thanks to his comments, we know an ex-boss (nicknamed "Panty-Raider") was considered "really odd". Another had a large, carved wooden desk that went with him from job to job.

User #524297 also maintains a page dedicated to some interesting ideas. One is to use the OpenDNS DNSCrypt service to hide DNS requests emanating from a compromised host.

Another fun-loving User is #71473. He has a page called "List of ideas for fun and interesting ways to kill/crash a process", which enumerates a dozen homebrew techniques and variations. Most are still at the concept stage, but under the list of uses to which they may be put, he includes "Knockover (sic) PSPs" and "Troll people".

He also describes several proof-of-concept tools for his process crashing techniques. One is called DisorderlyShutdown, which waits a programmable amount of time (plus a random offset to make things seem natural) to select a random process to crash in the hope of leading to "data loss and gnashing of teeth". Another is WarheadsToForeheads, which attempts to crash processes. About this tool, he says: "Considering making this an infinite enumeration to squash all user processes and make the user experience especially horrific."

Revealingly, User #71473 also likes to hack the home pages of other Users: "Its 11:30... time to deface people's unprotected user pages..."

User #11628962 was deeply impressed by Subramaniam and Hunt's "Practices of an Agile Developer", and went to great lengths to enumerate the principles behind the work for others in his group. 

Meanwhile, we learn that User #71475 loves to listen to music online and lists several streaming services and YouTube channels. He's also an avid collector of ASCII-based emoticons. Everyone needs a hobby, right? ¯\_(ツ)_/¯

Amusingly, User #20873595 is keen for people to understand that his last name does not begin with C, implying that it is in fact Hunt. There was also some debate about what User #72907's office nickname should be. "Monster Lite" was the apparent front runner.

We also learned from the dump that some of the Users are heavily into the online card game Hearthstone, which unfriendly foreign state actors are likely now feverishly trying to hack.

The public at large has moved on, and the first of the vulnerabilities highlighted in the dump has been patched, but the industrious CIA hackers who originally found them are still beavering away, creating new tools to replace the old ones, finding new zero-days, thinking up new nicknames, trolling each other, and of course playing Hearthstone.