SE Labs


How well do email security gateways protect against targeted attacks?

Email security test explores how and when services detect and stop threats.

Latest report now online.

This new email protection test shows a wide variation in the abilities of the services that we have assessed.

You might see the figures as being disappointing. Surely Microsoft Office 365 can’t be that bad? An eight per cent accuracy rating seems incredible.

Literally not credible. If it misses most threats then organisations relying on it for email security would be hacked to death (not literally).

But our results are subtler than just reflecting detection rates and it’s worth understanding exactly what we’re testing here to get the most value from the data. We’re not testing these services with live streams of real emails, in which massive percentages of messages are legitimate or basic spam. Depending on who you talk to, around 50 per cent of all email is spam. We don’t test anti-spam at all, in fact, but just the small percentage of email that comprises targeted attacks.

In other words, these results show what can happen when attackers apply themselves to specific targets. They do not reflect a “day in the life” of an average user’s email inbox.

We have also included some ‘commodity’ email threats, though – the kind of generic phishing and social engineering attacks that affect everyone. All services ought to stop every one of these. Similarly, we included some clean emails to ensure that the services were not too aggressively configured. All services ought to allow all these through to the inbox.

So when you see results that appear to be surprising, remember that we’re testing some very specific types of attacks that happen in real life, but not in vast numbers comparable to spam or more general threats.

The ways that services handle threats vary, and are effective to greater or lesser degrees. To best reflect how useful their responses are, we have a rating system that accounts for their different approaches. Essentially, services that keep threats as far as possible from users win more points than those that let the message appear in or near the inbox. Conversely, those that allow the most legitimate messages through to the inbox rate higher than those that block them without the possibility of recovery from a junk folder or quarantine.
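To make this concrete, here is a minimal sketch of how a weighted scheme along these lines could be scored. The outcome categories and point values are purely illustrative assumptions, not SE Labs’ actual ratings formula.

```python
# Illustrative only: hypothetical outcomes and weights, not SE Labs' real scoring.
THREAT_POINTS = {
    "blocked_before_delivery": 3,   # threat never reaches the user
    "quarantined":             2,   # held well away from the inbox
    "junk_folder":             1,   # near the inbox, but flagged
    "inbox":                  -2,   # threat delivered to the user
}
LEGITIMATE_POINTS = {
    "inbox":                   1,   # legitimate mail delivered normally
    "junk_folder":             0,   # recoverable, but inconvenient
    "blocked_before_delivery": -2,  # lost with no chance of recovery
}

def score(results):
    """results: a list of (message_type, outcome) pairs for one service."""
    total = 0
    for message_type, outcome in results:
        table = THREAT_POINTS if message_type == "threat" else LEGITIMATE_POINTS
        total += table[outcome]
    return total

print(score([("threat", "blocked_before_delivery"),
             ("threat", "inbox"),
             ("legitimate", "inbox")]))   # -> 2
```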

If you spot a detail in this report that you don’t understand, or would like to discuss, please contact us via our Twitter or Facebook accounts.

SE Labs uses current threat intelligence to make our tests as realistic as possible. To learn more about how we test, how we define ‘threat intelligence’ and how we use it to improve our tests please visit our website and follow us on Twitter.
Our latest reports, for enterprise, small business and home users, are now available for free from our website. Please download them and follow us on Twitter and/or Facebook to receive updates and future reports.

Join the most secure one per cent of internet users – in minutes

Hackers have spent well over 20 years stealing users’ passwords from internet companies.

They’ve almost certainly got yours.

The good news is it’s very easy to make your passwords useless to hackers. All you do is switch on Two-Factor Authentication (2FA).

2FA is a second login layer

It works much like the second lock on your front door. If someone’s stolen or copied your Yale key, that double-lock will keep them out.

A digital double-lock is now vital for protecting your online accounts – email, banking, cloud storage, business collaboration and the rest. It’s up there with anti-malware in the league of essential security measures. And it’s much easier to pick a 2FA method than choose the right anti-malware (our Anti-Malware Protection Reports can help you there).

So 2FA is essential, easy, and doesn’t have to cost a thing. It’s a security no-brainer. So how come hardly anyone uses it?

Join the one per cent elite!

Earlier this year, Google revealed that only 10 per cent of their users have ever bothered setting up 2FA. Just a fraction of those – we estimate around one per cent of all internet users – use the most secure type of 2FA, a USB security key.

In this article we’ll show you how to join that elite one per cent for less than £20. If you’d rather watch a step-by-step demo, here’s our YouTube video.


(This blog reflects the views and research of SE Labs, an independent security testing company. We never use affiliate links.)

Why everyone in your business should use 2FA

You’re not the only person who knows your usernames and passwords. Head over to Have I Been Pwned? and type in your email address to find out how many of your accounts have been hit by hacking attacks.

A quick (and scary) web search reveals how many times your passwords have fallen prey to hackers

While you’re digesting those results, here’s a sobering statistic. More than 90 per cent of all login attempts on retail websites aren’t by actual customers, but by hackers using stolen credentials (Shape Security, July 2018).
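The same site’s companion Pwned Passwords service lets you check whether a specific password has appeared in a known breach without ever sending the password itself: you submit only the first five characters of its SHA-1 hash and match the rest locally. Here’s a minimal Python sketch of that range query (the function name and example password are our own):

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times a password appears in the Pwned Passwords corpus.
    Only the first five characters of its SHA-1 hash ever leave your machine."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-password-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))   # a well-known breached password
```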

Nearly everyone has had their passwords stolen. But hardly anyone protects their accounts using 2FA. We’re all leaving our front doors unlocked.

And as hackers plunder more and more big-name services (as well as all those services you’d forgotten you had accounts with), they get more and more chances to steal the passwords you use everywhere.

This is why you must never use the same password twice. Don’t be tempted to use a pattern to help you remember them, either (‘123amazon’, ‘123google’ and so on). Hackers decode that stuff for breakfast. We’re also not keen on password managers. They’re Target Number One for hackers.

Instead, store your passwords where no-one can find them (not online!) and deadlock your accounts using 2FA. It’s the only way to make them hack-proof.

Why a USB key is the best way to lock your accounts

The ‘memorable information’ you have to enter when logging into your online bank account is a watered-down version of 2FA. Hackers can easily create spoof login pages that fool you into handing over all your info, as demonstrated in our NatWest phishing attack video.

Proper 2FA methods are much tougher to crack. They involve more than one device, so a hacker can’t simply ransack your computer and steal all pertinent data. Without the separate device, your passwords are useless to them.

Use more than one 2FA method if offered. This double-locks your double-locks – and also gives you another way into your account if one method fails. See our 2FA YouTube video for a step-by-step guide to doing this for your Google account.

Here’s a quick run-through of your options, starting with the most basic.

Google prompt
How it works: Tap your Android screen to confirm your identity.
Pros and cons: Very quick and easy, but only works with Google accounts and Android devices. Useful as a backup option.

SMS code
How it works: You’re texted (and/or voice-messaged) a PIN code to enter after your usual login.
Pros and cons: Authentication is split between two devices. It works on any mobile phone at no additional cost. But it can be slow, and the code may appear on your lock screen.

Authentication app
How it works: A free app, such as Google Authenticator, generates a unique numerical security code that you then enter on your PC.
Pros and cons: Faster and more reliable than SMS, and arguably more secure, but you’ll need a smartphone (Android or iOS).
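Under the hood, apps like Google Authenticator generate these codes using the open TOTP standard (RFC 6238): an HMAC of the current 30-second time step, keyed with the secret you scanned from the QR code. Here’s a rough, self-contained sketch of that calculation (the example secret is a made-up placeholder):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int(time.time()) // period                 # 30-second time step
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder base32 secret for illustration
```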

Authenticate your logins with a code that’s sent to your phone (and only your phone)

Backup codes
How it works: A set of numerical codes that you download and then print or write down – then keep in a safe place. Each code only works once.
Pros and cons: The perfect backup method. No need for a mobile phone. A piece of paper or locally-stored computer file (with disguised filename) is easier to hide from thieves than anything online.
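If you’re curious what’s involved, generating codes like these is trivial with a cryptographically secure random source; the real work is the service storing them safely. A minimal, purely illustrative sketch:

```python
import secrets

# Illustrative only: real services generate backup codes server-side and,
# like passwords, store only hashes of them.
def backup_codes(count: int = 10, length: int = 8) -> list:
    return ["".join(secrets.choice("0123456789") for _ in range(length))
            for _ in range(count)]

for code in backup_codes():
    print(code)
```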

And the most secure 2FA method of all…

USB security key
How it works: You ‘unlock’ your accounts by plugging a unique USB stick (such as this YubiKey) into your computer.
Pros and cons: A whole list of pros. USB keys are great for business security, because your accounts remain locked even if a hacker breaches your phone. They’re convenient: no need to wait for codes then type them in. And they cost very little considering how useful they are. One key costs from £18, and is all you need to deadlock all your accounts. Buy one for all your employees – and clients!

Give a USB security key to all your employees and clients – their security (and yours) will benefit

Deadlock your Google account: a 2FA walk-through
Google lets you lock down your entire account, including Gmail and Google Drive, using multiple layers of 2FA (which it calls 2-Step Verification). It’s one of the most secure 2FA configurations you’ll find, and it’s easy to set up.

Here are the basic steps. For a more detailed step-by-step guide, see our YouTube video.

  1. Order a USB security key. Look for devices described as FIDO (‘Fast IDentity Online’) – here’s a FIDO selection on Amazon – or head straight for the Yubico YubiKey page. Expect to pay from £18 to around £40.
  2. Go to Google’s 2-Step Verification page, click Get Started then sign into your account. Choose a backup 2FA method, click Security Key, then plug in your unique USB stick. Google automatically registers it to you.
  3. Choose a second 2FA method such as SMS code, plus a backup method such as a printable code, Google prompt or authenticator app.
  4. That’s it – welcome to the top one per cent!
Double-lock your double-locks by choosing more than one 2FA method – and a backup

Deadlock all your online accounts in minutes

All reputable online services now offer 2FA options. But, as you’ll discover from the searchable database Two Factor Auth, not all services offer the best 2FA options.

For example LinkedIn only offers 2FA via SMS, and doesn’t support authenticator apps or USB security keys – the most secure types of 2FA. Even Microsoft Office 365 doesn’t yet support security keys. We expect better from services aimed at business users.

What’s more, 2FA settings tend to be well buried in account settings. No wonder hardly anyone uses them. Here’s where to click:

  • Amazon: Go to Your Account, ‘Login & security’, enter your password again, and then click Edit next to Advanced Security settings.
  • Apple: Go to the My Apple ID page then click Security, Two-Factor Authentication.
  • Dropbox: Click the Security tab to set up SMS or app authentication. To configure a USB security key, follow Dropbox’s instructions.
  • Facebook: Go to ‘Security and login’ in Settings and scroll down to ‘Use two-factor authentication’. Click Edit to get set up.
  • LinkedIn: Go to Account Settings then click Turn On to activate SMS authentication.
  • Microsoft: Log in, click Security, click the ridiculously small ‘more security options’ link, verify your identity, and then click ‘Set up two-step verification’. Doesn’t yet support USB security keys. Some Microsoft services, such as Xbox 360, still don’t support 2FA at all.
  • PayPal: Go to My Profile then click My Settings, Security Key and then Get Security Key. Don’t accept the offer to get a new code texted to you every time you log in, because then a hacker can do it too!
  • TeamViewer: Go to the login page, open the menu under your name, click Edit Profile then click Start Activation under the 2FA option. Supports authenticator apps only, not SMS.
  • Twitter: Go to ‘Settings and privacy’, Security, then tick ‘Login verification’.
  • WhatsApp: In the mobile app tap Settings, Account, ‘Two-step verification’.

SE Labs introducing cyber security to schools

It’s widely acknowledged that the cyber security workforce needs more talented young people to engage. Just as we, at SE Labs, want to help fix information technology security by testing products and services, we also want to encourage an interest among young people, hopefully igniting a passion for understanding and defending against hacking attacks.

We test next-gen security products AND encourage the gen-next!

Our initiative to help young people progress from complete novice, through getting their first job, to reaching the top of the industry is our attempt to bring about that change and fill the gaps.

As part of our new corporate social responsibility programme we set up an event at Carshalton Boys Sports College to introduce the concept of cyber security and its career prospects to the students.

Around 15 participants, ranging from year 10s to sixth formers (aged 16-18), attended the main presentation, and students from all year groups approached the stand we set up.

The presentation covered various topics, including the different types of cybercrime and attack, and the institutions offering free and paid cyber security courses aimed at students in particular age groups.

We also addressed how to break into the cyber security sector; what positions are available in the industry; and how skilled staff are in high demand in both public and private sectors, part-time and full-time, in virtually every industry in countries around the world.

Then we went through a test run of a targeted attack to demonstrate what it looks like and what it means.

“Why do we use Kali Linux?”, “What should I do to get into cyber security?”, “What are the skills required?”, were a few curious questions asked by the students at the end of the presentation.

Those who came over to the stand wanted to know who we were, what we do and simply, “what is cyber security?”

They were interested in who our clients are (we gave limited answers due to NDAs), why they need us, and how we managed to get this far. A lot of these questions came from the younger years, who were keen to learn more about the subject. Positive!

Feedback from the college:

On behalf of the Governors, Principal, students and parents of Carshalton Boys Sports College, I would like to thank you for your valued input, helping to make our Directions and Destinations Day a great success.

Our staff work tirelessly to open our students’ minds to the possibilities available to them, but without the support of partners like you, that job would be impossible. Together we had the school filled with a sense of purpose all day and responses we have had from students and parents have shown us that the day has inspired our students. 

We have already started thinking about the future and would be grateful if you have any suggestions about how we might make things even better next year. 

Thank you once again for giving your time, energy and expertise last week.

Well, yes! A career in cyber security is a journey for sure, but a worthwhile one. And in the end, it’s more about people than machines, as a mind’s software can be more powerful than any hardware.

Pooja Jain, March 2018

Big Time Crooks

When an online scam becomes too successful, the results can be farcical.

In the movie Small Time Crooks, Woody Allen leads an inept gang of would-be robbers who rent a store next to a bank. They plan to tunnel into the vault. As a cover, Allen’s girlfriend (played by Tracey Ullman) sets up a cookie business in the store. Ullman’s business takes off, and to maintain the cover the gang must set up production facilities, hire staff, find distributors, and so on.

Why is this relevant? Well, rewind to 2002. The internet had already taken off in a big way and people were pouring online as new opportunities exploded into the public consciousness. Also exploding was cybercrime, as the internet presented a new breed of tech savvy crooks with their own set of opportunities. For one gang, an Allenesque adventure was about to begin.

Humble Beginnings
How many times have you browsed a web page that suddenly throws up an alarming warning that your computer is infected and the only thing that can save you is to immediately buy a special program or call a special number? If you’re up to date with system patches and use a reputable anti-virus solution, you’re rarely in danger from such sites these days.

It was not always so.

For millions of internet users back in the day, who were running without protection, the apparent authority of such “scareware” sites made them act. They downloaded free “anti-virus” software that infected them with real malware, they parted with real cash, and many also paid again to have their computers cleaned by professionals.

Look through the history of scareware, and one company repeatedly appears: Innovative Marketing Inc (to give it the name used in US Federal Trade Commission paperwork, though it was also known by a wide range of other names). Innovative was registered in Belize in 2002. Despite the appearance of being a legitimate business, its initial products were dodgy: pirated music, porn and illicit Viagra, along with sales of “grey” versions of real anti-virus products.

After Symantec and McAfee both put pressure on the company to stop those software sales in 2003, Innovative tried to write its own. The resulting Computershield wasn’t effective as anti-virus protection, but the company sold it anyway as a defence against the MyDoom worm. Innovative aggressively marketed its new product, and according to press reports, it was soon raking in $1 million per month. As the threat from MyDoom receded, so too did profits.

The company initially turned to adware as a new revenue source. This enabled so-called “affiliates” to use malicious web sites to silently install the adware on vulnerable Windows computers. Getting victims to visit those sites was achieved by placing what looked like legitimate adverts on real sites. Click them, and you became infected. The affiliates then pocketed a fee of 10 cents per infection, but it’s thought that Innovative made between $2 and $5 from sales of the advertised products.

Meanwhile, development of completely fake anti-virus software snowballed at the company’s Kiev office. A classic example is “XP Antivirus 2008”, though it also went by a large number of pseudonyms and evolved through many versions. A video of it trashing an XP machine can be found here. Its other major names include Winfixer, WinAntivirus, Drivecleaner, and SystemDoctor.

In many ways, Innovative’s scareware was, well, innovative. It disabled any legitimate protection and told you the machine was heavily infected, even going to the trouble of creating fake blue screens of death. At the time, some antivirus companies had trouble keeping up with the rate of development.


Attempts to access Windows internet or security settings were blocked. The only way of “cleaning” the machine was to register the software and pay the fee. Millions of people did just that. The FTC estimates that between 2004 and 2008, the company and its subsidiaries raked in $163 million.

In 2008, a hacker with the handle NeoN found a database belonging to one of the developers, revealing that in a single week one affiliate made over $158,000 from infections.

The Problem of Success
Initially, Innovative used banks in Canada to process the credit card transactions of its victims, but problems quickly mounted as disgruntled cardholders began raising chargebacks. These are claims made to credit card companies about shoddy goods or services.

With Canadian banks beginning to refuse Innovative’s business, it created subsidiary companies to hide its true identity, and approached the Bank of Kuwait and Bahrain. Trouble followed, and in 2005 this bank also stopped handling Innovative’s business due to the high number of chargebacks. Eventually, the company found a Singaporean bank called DBS Bank to handle the mounting backlog of credit card transactions.

The only solution to the chargeback problem was to keep customers happy. So, in true Allenesque style, Innovative began to invest in call centres to help customers through their difficulties. It quickly opened facilities in Ukraine, India and the USA. Operatives would talk the customers through the steps needed for the software to miraculously declare their systems free of malware. It seems that enough customers were satisfied to allow the company to keep on raking in the cash.

But people did complain, not to the company but to the authorities. The FTC received over 3,000 complaints in all and launched an investigation. Marc D’Souza has been convicted for his role in the company and ordered to pay $8.2 million, along with his father, who received some of the money. The case of Kristy Ross for her part in the scam is still going through the US courts, with lawyers arguing that she was merely an employee.

Several others, including Shaileshkumar “Sam” Jain and Bjorn Daniel Sundin, are still at large, and have had a $163 million judgement entered against them in their absence. Jain and Sundin remain on the FBI’s Most Wanted Cyber Criminal list with rewards for their arrests totalling $40,000.


An Evergreen Scam
Scareware is a business model that rewards creativity while skirting the bounds of legality. Unlike ransomware, where criminal gangs must cover their tracks with a web of bank accounts and Bitcoin wallets, scareware can operate quite openly from countries with under-developed law enforcement and rife corruption. However, the gap between scareware and ransomware is rapidly closing.

Take the case of Latvian hacker Peteris Sahurovs, AKA “Piotrek”, AKA “Sagade”. He was arrested on an international arrest warrant in Latvia in 2011 for his part in a scareware scam, but he fled to Poland, where he was subsequently detained in 2016.

He was extradited to the US and pled guilty in February this year to making $150,000 – $200,000.  US authorities claim the total made by Sahurovs’ gang was closer to $2 million. He’s due to be sentenced in June.

According to the Department of Justice, the Sahurovs gang set up a fake advertising agency that claimed to represent a US hotel chain. Once adverts were purchased on the Minneapolis Star Tribune’s website, they were quickly swapped out for ones that infected vulnerable visitors with the gang’s malware. This made computers freeze and produce pop-ups explaining that victims needed to purchase special antivirus software to restore proper functionality; all data on the machines was scrambled until the software was purchased. The case is interesting because it shows a clear crossover from scareware to ransomware.

The level of sophistication and ingenuity displayed by scareware gangs is increasing, as is their boldness. You have probably been called by someone from India claiming to be from Microsoft, expressing concern that your computer is badly infected and offering to fix it. Or they may have posed as someone from your phone company telling you that they need to take certain steps to restore your internet connection to full health. There are many variations on the theme. Generally, they want you to download software that confirms their diagnosis. Once done, you must pay them to fix the problem. This has led to a plethora of amusing examples of playing the attackers at their own game.

It’s easy to see the people who call you as victims of poverty with no choice but to scam, but string them along for a while and the insults soon fly. They know exactly what they’re doing, and from the background chatter on such calls, so do hundreds of others. Scareware in all its forms is a crime that continues to bring in a lot of money for its perpetrators and will remain a threat for years to come.

Anatomy of a Phishing Attack

Who attacked a couple of Internet pressure groups earlier this year? Jon Thompson examines the evidence.

For those of us engaged in constructing carefully-crafted tests against client email filtering services, the public details of an unusually high-quality spear-phishing attack against a low-value target make for interesting reading.

In this case, there were two targets: Free Press, and Fight for the Future. The attack, dubbed “Phish for the Future” in a brief analysis by the Electronic Frontier Foundation, is curious for several reasons.

Free Press is a pressure group campaigning for an open internet, fighting media consolidation by large corporations, and defending press freedom. Fight for the Future works to protect people’s basic online freedoms. Objectively, they’re working for a better online future, which makes the whole affair stand out like a pork buffet at a bar mitzvah.

The first thing that struck me was that the emails were apparently all sent during office hours. The time zones place the senders anywhere between Finland and India, but apparently resolve to office hours when normalised to a single zone.

Another interesting aspect is that even though the emails were sent on 23 active days, the attackers didn’t work weekends. This immediately marks them out as unusual. Anyone who’s run an email honeypot knows that commodity spam flows 24 hours a day.
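This kind of pattern is easy to check for yourself if you have a batch of suspect messages: normalise every Date header to one candidate time zone and count the hours and weekdays. A rough sketch, using made-up header values rather than the real ones from this campaign:

```python
from collections import Counter
from datetime import timedelta, timezone
from email.utils import parsedate_to_datetime

# Hypothetical Date headers from a batch of suspect emails (not the real ones).
date_headers = [
    "Mon, 10 Jul 2017 09:14:02 +0300",
    "Tue, 11 Jul 2017 14:47:55 +0530",
    "Wed, 12 Jul 2017 11:02:13 +0200",
]

candidate_tz = timezone(timedelta(hours=3))   # try each plausible "home" time zone
hours, weekdays = Counter(), Counter()
for raw in date_headers:
    sent = parsedate_to_datetime(raw).astimezone(candidate_tz)
    hours[sent.hour] += 1
    weekdays[sent.strftime("%A")] += 1

print("Send hours:", sorted(hours.items()))
print("Send days: ", weekdays.most_common())
```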

The attackers first tried generic phishing expeditions, but quickly cranked up their targeting and psychological manipulation. This raises an interesting question: if you’re an experienced, professional, disciplined crew, why jeopardise the operation by beginning with less convincing samples that may alert the target to be on the lookout? Why not simply start with the good stuff, get the job done, and move on?

One possible explanation is that the attackers were trainees on a course, authorised to undertake a carefully controlled “live fire” exercise. Psychologically manipulative techniques such as pretending to be a target’s husband sending family photos, or a fan checking a URL to someone’s music, imply a level of confident duplicity normally associated with spying scandals.

The level of sophistication and persistence on display forms a shibboleth. It looks and smells somehow “wrong”. The published report reveals an attention to detail and target reconnaissance usually reserved for high-value commercial targets. Either the attackers learn at a tremendous rate through sheer interest alone, or they’re methodically being taught increasingly sophisticated techniques to a timetable. If it was part of a course, then maybe the times the emails were sent show a break for morning coffee, lunch and afternoon tea, or fall into patterns of tuition followed by practical exercises.

The timing of the complete attack also stands out. It began on 7th July, ended on 8th August, and straddled the Net Neutrality Day of Action (12th July). With a lot happening at both targets during that time, and one assumes a lot of email flying about, perhaps the attackers believed they stood a better chance when the staff were busiest.

So, to recap, it looks like highly motivated yet disciplined attackers were operating with uncommonly sophisticated confidence against two small online freedom groups. Neither target has the financial clout of a large corporation, which rules out criminal gain, and yet an awful lot of effort was ranged against them.

The product of phishing is access, either to abuse directly or to be sold to others. Who would want secret access to organisations campaigning for online freedom? Both targets exist to change minds and therefore policy, which makes them political. They’re interesting not only to governments, but also to media companies seeking to control the internet.

I’m speculating wildly, of course. The whole thing could very easily have been perpetrated by an under-worked individual at a large company, using their office computer and keeping regular hours to avoid suspicion. The rest is down to ingenuity and personal motivation.

We’ll never know the truth, but the supporting infrastructure detailed in the EFF report certainly points to some considerable effort over a long period of time. If it was an individual, he’s out there, he’ll strike again, and he learns fast. In many ways, I’d prefer it to have been a security service training new recruits.

The Government Encryption Enigma

Is Amber Rudd right about encryption? Jon Thompson isn’t so sure.

UK Home Secretary Amber Rudd recently claimed in an article that “real people” prefer ease of use to unbreakable security when online. She was met immediately by outrage from industry pundits, but does she have a point?

Though paywalled, as reported elsewhere, Rudd asks in her article, “Who uses WhatsApp because it is end-to-end encrypted, rather than because it is an incredibly user-friendly and cheap way of staying in touch with friends and family?”

Rudd name-checked Khalid Masood, who used WhatsApp minutes before he drove a car into pedestrians on Westminster Bridge, killing four, and then fatally stabbed a police officer outside Parliament before being shot dead. However, Masood was not part of any MI5 investigation. In fact, a week after the attack, police had to appeal for information about him. His final WhatsApp message seems to have been the first sign that he was about to strike. The recipient was entirely innocent, and knew nothing of his murderous intentions.

There are plenty of other atrocities that were planned, in part, via social media apps – the attacks on Paris in November 2015 and the Stockholm lorry attack, to name but two. In the UK, the new Investigatory Powers Act 2016 (IPA), which caused so much fuss last year, can compel vendors to decrypt. So why not just use that? The answer is somewhat complicated.

The IPA makes provision for Communications Service Providers to be served with a notice that they must remove encryption from messages to assist in the execution of an interception warrant. Apart from Providers needing access to private decryption keys, reports suggest that any move to enforce this measure would meet stiff opposition, and may not even be enforceable.

Many of the most popular secure messaging apps use the Signal Protocol, developed by Open Whisper Systems. This is a non-profit organisation and lies outside the UK’s jurisdiction, so its compliance would be difficult to obtain, even if the companies using the protocol agreed to re-engineer their platforms to include backdoors, or to lower encryption standards. There are also plenty of other issues to be resolved if Rudd is to get her way.

If the government mandates weaker encryption for messaging apps in the UK, then companies will face difficult business choices and technological challenges. It boils down to a choice: they could weaken their encryption globally, or they could weaken encryption in the UK alone. But what happens if you send a secure message from outside the UK to someone inside the country? Can the UK authorities read it? Can the recipient, using a lower encryption standard, decrypt it? How would international business communications work if the UK office doesn’t use the same encryption standard as a foreign parent company?

This isn’t the first time the UK government has attempted to find an answer to the problem of encryption. Back in January 2015, the then-Prime Minister David Cameron gave a speech in which he said there should be no means of communication “which we cannot read”. He was roundly criticised as “technologically illiterate” by opposition parties, and later clarified his views, saying he didn’t want to ban encryption, just have the ability to read anyone’s encrypted communications.

Authoritative voices have since waded into the argument. Lord Evans, the former head of MI5, has recently spoken about the problems posed by strong encryption: “It’s very important that we should be seen and be a country in which people can operate securely – that’s important for our commercial interests as well as our security interests, so encryption in that context is very positive.”

Besides, if the government can decrypt all messages in the UK, won’t genuine terrorists simply set up their own “dark” services? Ten seconds on Google Search shows plenty of open source, secure chat packages they could use. If such groups are as technologically advanced as we’re led to believe, then it should be simple for them, and terrifying for the rest of us. Wouldn’t it be better to keep such groups using mainstream apps and quietly develop better tools for tracking them via their metadata?

Rudd’s argument that “real people” want ease of use over strong encryption implies that secure apps are in some way difficult to set up and require effort to maintain. The opposite is plainly true, as anyone who’s ever ‘butt dialled’ with their mobile phone can tell you.

Rudd’s argument also plays into the idea that if you have nothing to hide you have nothing to fear. While writing this piece, I accessed several dozen online information sources, from mainstream news reports of terrorist outrages to super paranoid guides for setting up secure chat services. I accessed many of these sources multiple times. I didn’t access any extremist material, but my browsing history shows a clear and persistent interest in recent atrocities perpetrated on UK soil, secure chat methods, MI5 and GCHQ surveillance methods, encryption algorithms, and so on. Joining the dots to arrive at the wrong conclusion would be a grave mistake, and yet without the wider context of this blog piece to explain myself, how would authorities know I’m not planning to be the next Khalid Masood or Darren Osborne? The answer lies in developing better tools that gather more context than just what apps you use.

Quantum Inside?


Is this the dawn of the quantum computer age? Jon Thompson investigates.

Scientists are creating quantum computers capable of cracking the most fiendish encryption in the blink of an eye. Potentially hostile foreign powers are building a secure quantum internet that automatically defeats all eavesdropping attempts.

Single computers far exceeding the power of a hundred supercomputers are within humanity’s grasp. 

Are these stories true, as headlines regularly claim? The answer is increasingly yes, and it’s to China we must look for much current progress.

The Quantum Internet
Let’s begin with the uncrackable “quantum internet”. Sending messages using the properties of the subatomic world has been possible for years; it’s considered the “gold standard” of secure communications. Chinese scientists recently set a new distance record for sending information using quantum techniques when they transmitted data 1,200km to a special satellite. What’s more, China is implementing a quantum networking infrastructure.

QuantumCTek recently announced it is to deploy a network for government and military employees in the Chinese city of Jinan, secured using quantum key distribution. Users will send messages encrypted by traditional means, with a second “quantum” channel distributing the associated decryption keys. Reading the keys destroys the delicate state of the photons that carry them, so it can only be done once by the recipient, otherwise the message cannot be decrypted and the presence of an eavesdropper is instantly apparent.
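For the curious, here is a toy software model of the “sifting” step at the heart of BB84, the best-known quantum key distribution protocol. It ignores the actual physics and eavesdropper detection entirely; it only shows why Alice and Bob end up with matching key bits wherever their randomly chosen bases agree:

```python
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

n = 32
alice_bits  = random_bits(n)   # key material Alice encodes on photons
alice_bases = random_bits(n)   # 0 = rectilinear, 1 = diagonal polarisation basis
bob_bases   = random_bits(n)   # Bob chooses his measurement bases independently

# If Bob measures in the same basis, he reads Alice's bit; if not, the result
# is random -- and either way the photon's original state is destroyed.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: the bases (not the bits) are compared publicly, and only positions
# where they matched are kept as shared key material.
key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print("Sifted key bits:", key)
```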

The geopolitical implications of networks no foreign power can secretly tap are potentially immense. What’s scarier is quantum computers cracking current encryption in seconds. What’s the truth here?

Encryption Under Threat
Popular asymmetric encryption schemes, such as RSA and elliptic curve cryptography (and protocols like SSL/TLS that depend on them), are under threat from quantum computing. In fact, after mandating elliptic curve encryption for many years, the NSA recently declared it potentially obsolete due to the coming quantum computing revolution.

Asymmetric encryption algorithms use prime factors of massive numbers as the basis for their security. It takes a supercomputer far too long to find the right factors to be useful, but it’s thought to be easy for a quantum algorithm called Shor’s Algorithm.
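A deliberately tiny RSA example makes the point. The primes below are laughably small (real keys use primes hundreds of digits long), but the structure is the same: as soon as an attacker can factor n, the private key falls out – and that factoring step is precisely what Shor’s Algorithm is expected to do quickly:

```python
# Toy RSA with tiny primes -- real keys use primes hundreds of digits long.
p, q = 61, 53
n, e = p * q, 17                    # public key (n, e); here n = 3233
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                 # private exponent (Python 3.8+ modular inverse)

m = 42                              # the "message"
c = pow(m, e, n)                    # encrypt with the public key
assert pow(c, d, n) == m            # decrypt with the private key

# An attacker who can factor n recovers the private key immediately.
f = next(i for i in range(2, n) if n % i == 0)   # trivial here, infeasible classically at real sizes
phi_recovered = (f - 1) * (n // f - 1)
d_recovered = pow(e, -1, phi_recovered)
assert pow(c, d_recovered, n) == m               # the "secret" message falls out
```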

For today’s strong symmetric encryption, such as AES and Blowfish, which use the same key to encrypt and decrypt, the news is currently a little better. It’s thought that quantum computers will initially have a harder time cracking these, effectively halving the key length rather than the cracking time. So, if you’re using AES with a 256-bit key, in future it’ll be roughly as secure as a 128-bit key is today.
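The reasoning, roughly, is Grover’s algorithm: searching an unsorted space of N possible keys takes on the order of √N quantum steps, so it is the exponent – the effective key length in bits – that gets halved:

```latex
\[
\underbrace{2^{256}}_{\text{classical search}}
\;\xrightarrow{\ \text{Grover}\ }\;
\sqrt{2^{256}} = 2^{128}
\qquad\Rightarrow\qquad
\text{AES-256} \approx \text{128-bit security against a quantum attacker}
\]
```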

A Quantum Leap


How far are we from quantum computers making the leap from flaky lab experiments to full production? The answer depends on the problem you want to solve, because not all quantum computers are the same. In fact, according to IBM, they fall into three classes.

The least powerful are quantum annealers. These are available now in the form of machines from Canada’s D-Wave. They have roughly the same power as a traditional computer but are especially good at solving optimisation problems in exquisite detail.  Airbus is already using this ability to increase the efficiency of wing aerodynamics.

More powerful are analogue quantum computers. These are much more difficult to build, and IBM thinks they’re about five years away. They will be the first class of quantum computers to exceed the power of conventional machines. Again, they won’t run programs as we think of them, but instead will simulate incredibly complex interactions, such as those found in life sciences, chemistry and materials science.

The most powerful machines to come are universal quantum computers, which is what most people think of when discussing quantum computers. These could be a decade or more away, but they’re coming, and will be exponentially more powerful than today’s fastest supercomputers. They will run programs as we understand them, including Shor’s Algorithm, and will be capable of cracking encryption with ease. While they’re being developed, so are the programs they’ll run. The current list stands at about 50 specialised but immensely powerful algorithms. Luckily, there are extremely complex engineering problems to overcome before this class of hardware becomes a reality.

Meanwhile, quantum computer announcements are coming thick and fast.

IBM has announced the existence of a very simple device it claims is the first step on the path to a universal quantum computer. Called IBM Q, it comes with a web portal that anyone can use to access and program it, though learning how and what you can do with such a device could take years.

Google is pursuing the quantum annealing approach. The company says it plans to demonstrate a reliable quantum chip before the end of 2017, and in doing so will assert something called “quantum supremacy”, meaning that it can reliably complete specialised tasks faster than a conventional computer. Microsoft is also in on the action. Its approach is called StationQ, and the company has been quietly researching quantum technologies for over a decade.

Our Universal Future


While there’s still a long way to go, the presence of industry giants means there’s no doubt that quantum computers are entering the mainstream. It’ll probably be the fruits of their computational power that we see first in everyday life, rather than the hardware itself: solutions to currently intractable problems, and improvements in the efficiency of everything from data transmission to batteries for electric cars, could start appearing.

Life will really change when universal quantum computers finally become a reality. Be in no doubt that conventional encryption will one day be a thing of the past. Luckily, researchers are already working on so-called post-quantum encryption algorithms that these machines will find difficult to crack.

As well as understandable fears over privacy, and even the rise of quantum artificial intelligence, the future also holds miracles in medicine and other areas that are currently far from humanity’s grasp. The tasks to which we put these strange machines remains entirely our own choice. Let’s hope we choose wisely.

Brexit and Cybersecurity

Is the UK headed for a cybersecurity disaster?


With Brexit looming and cybercrime booming, the UK can’t afford major IT disasters, but history says they’re inevitable.

The recent WannaCry ransomware tsunami was big news in the UK. However, it was incorrectly reported that the government had scrapped a deal with Microsoft to provide extended support for Windows XP that would have protected ageing NHS computers. The truth is far more mundane.

In 2014, the government signed a one-year deal with Microsoft to provide security updates to NHS Windows XP machines. This was supposed to force users to move to the latest version of Windows within 12 months, but with a “complete aversion to central command and control” within the NHS, and no spare cash for such an upgrade, the move was never completed.

This isn’t the first Whitehall IT disaster, by a very long way.

During the 1990s, for example, it was realised that the IT systems underpinning the UK’s Magistrates’ Courts were inadequate. It was proposed that a new, unified system should replace them. In 1998, the Labour government signed a deal with ICL to develop Project Libra. Costing £146m, this would manage the courts and link to other official systems, such as the DVLA and prisons systems.

Described in 2003 as “One of the worst IT projects ever seen“, Project Libra’s costs nearly tripled to £390m, with ICL’s parent company, Fujitsu, twice threatening to pull out of the project.

This wasn’t Labour’s only IT project failure. In total, it’s reckoned that by the time the government fell in 2010, it had consumed around £26bn of taxpayers’ money on failed, late and cancelled IT projects.

The coalition government that followed fared no better. £150m paid to Raytheon in compensation for cancelling the e-Borders project, £100m spent on a failed archiving system at the BBC, £56m spent on a Ministry of Justice system that was cancelled after someone realised there was already a system doing the same thing: these are just a few of the failed IT projects since Labour left office seven years ago.

The Gartner group has analysed why government IT projects fail, and discovered several main factors. Prominent amongst these is that politicians like to stamp their authority on the nation with grandiose schemes. Gartner says such large projects fail because of their scope. It also says failure lies in trying to re-implement complex, existing processes rather than seeking to simplify and improve on them by design. The problem is, with Brexit looming, large, complex systems designed to quickly replace existing systems are exactly what’s required.

ukba_and_police-7387838

A good example is the ageing HM Customs & Excise CHIEF system. Because goods currently enjoy freedom of movement within the EU, there are only around 60 million packages that need checking in through CHIEF each year. The current system is about 25 years old and just about copes. Leaving the EU will mean processing an estimated 390 million packages per year. However, the replacement system is already rated as “Amber/Red” by the government’s own Infrastructure and Projects Authority, meaning it is already at risk of failure before it’s even delivered.

Another key system for the UK is the EU’s Schengen Information System (SIS-II). This provides real time information about individuals of interest, such as those with European Arrest Warrants against them, terrorist suspects, returning foreign fighters, missing persons, drug traffickers, etc.

Access to SIS-II is limited to countries that abide by EU European Court of Justice rulings. Described by ex-Liberal Democrat leader Nick Clegg as a “fantastically useful weapon” against terrorism, after Brexit, access to SIS-II may be withdrawn.

Late last year, a Commons Select Committee published a report identifying the risks to policing if the UK loses access to SIS-II and related EU systems. The report claimed that then-Home Secretary Theresa May had said that such systems were vital to “stop foreign criminals from coming to Britain, deal with European fighters coming back from Syria, stop British criminals evading justice abroad, prevent foreign criminals evading justice by hiding here, and get foreign criminals out of our prisons.”

The UK will either somehow have to re-negotiate access to these systems, or somehow quickly and securely duplicate them and their content on UK soil. To do so, we will have to navigate the EU’s labyrinthine data protection laws and sharing agreements to access relevant data.

If the UK government can find a way to prevent these and other IT projects running into problems during development, there’s still the problem of cybercrime and cyberwarfare. Luckily, there’s a strategy covering this.

In November 2016, the government launched its National Cyber Security Strategy. Tucked in amongst areas covering online business and national defence, section 5.3 covers protecting government systems. This acknowledges that government networks are complex, and contain systems that are badly in need of modernisation. It asserts that in future there will be, “no unmanaged risks from legacy systems and unsupported software”.

The recent NHS WannaCry crisis was probably caused by someone unknowingly detonating an infected email attachment. The Strategy recognises that most attacks have a human element. It says the government will “ensure that everyone who works in government has a sound awareness of cyber risk”. Specifically, the Strategy notes that health and care systems present unique risks to national security, because the sector employs 1.6 million people across 40,000 organisations.

The problem is, the current Prime Minister called a snap General Election in May, potentially throwing the future of the Strategy into doubt. If the Conservatives maintain power, there’s likely to be a cabinet reshuffle, with an attendant shift in priorities and funding.


If Labour gains power, things are even less clear. Its manifesto makes little mention of cyber security, but says it will order a complete strategic defence and security review “including cyber warfare”, which will take time to formulate and agree with stakeholders. It also says Labour will introduce a cyber charter for companies working with the Ministry of Defence.

Regardless of who takes power in the UK this month, time is running out. The pressure to deliver large and complex systems to cover the shortfall left by Brexit will be immense. Such systems need to be delivered on time, within budget and above all they must be secure – both from internal and external threats.

Staying Neutral

Is a fox running the FCC’s henhouse?

Net neutrality is a boring but noble cause. It ensures the internet favours no one. So, why is the new chairman of the Federal Communications Commission, Ajit Pai, determined to scrap it?

“For decades before 2015,” said Pai in a recent speech broadcast on C-SPAN2, “we had a free and open internet. Indeed, the free and open internet developed and flourished under light-touch regulation. We weren’t living in some digital dystopia before the partisan imposition of a massive plan hatched in Washington saved all of us…” Pai also says he wants to “take a weed whacker” to net neutrality, and that its “days are numbered”.

These are strong words. A possible reason for them is that Pai was previously Associate General Counsel at Verizon. To understand why this is significant, we must delve into recent history.

In 2007, Comcast was caught blocking BitTorrent traffic. This was ruled illegal by the FCC, and in 2009 Comcast settled a class action over the practice for $16 million.

In 2011, Verizon also blocked Google Wallet from being downloaded and installed on its phones in favour of its own, now unfortunately-named ISIS service, which it founded with T-Mobile and AT&T.

In response, the FCC imposed its Open Internet Order, which forced ISPs to stop blocking content and throttling bandwidth.

At the time, ISPs in the US were regulated under Title I of the 1934 Communications Act. This classed them as “information services” and provided for Pai’s so-called “light touch” regulation. Title II companies are “common carriers” on a par with the phone companies themselves, and are considered part of the national infrastructure.


Verizon went to court, and in 2014 successfully argued that the FCC had no authority to impose its will on mere Title I companies. This backfired. In 2015, the FCC decided that ISPs were now part of the national infrastructure, and made them Title II companies. Problem solved, dystopia averted.

Times have changed, and the new Washington administration is keen to roll back what it sees as anti-business Obama-era regulation. Pai was appointed chairman of the FCC in January this year. Given his past at Verizon, his attitude to abolishing net neutrality raises real concerns about the internet’s future.

In an April 2017 interview on PBS News Hour, Pai was asked a direct question: supposing a cable broadcaster, like Comcast, created a TV show that competed with an equally popular Netflix show. Without net neutrality, what’s to stop Comcast retarding Netflix traffic over its own network, while prioritising that of its own show?

“One thing that’s important to remember,” came Pai’s reply, “is that it is hypothetical. We don’t see evidence of that happening…” In fact, net neutrality ensures this situation cannot currently happen, which is why there’s no evidence of it.

Taking a wider view, Pai’s attitude is also curiously uninformed given that his own web page at the FCC shows that from 2007 to 2011, when neutrality violations were big news and the FCC had to impose its Open Internet Order, he was the FCC’s Deputy General Counsel, Associate General Counsel, and Special Advisor to the General Counsel. After a stint in the private sector, he returned to become an FCC Commissioner in 2012.

In his speech on C-SPAN2, Pai also asked, “What happened after the FCC imposed Title II? Sure enough, infrastructure investment declined.”

However, the opposite of this assertion is a matter of public record. As Ars Technica discovered, after Title II was imposed, ISP investment continued to rise. Indeed, Verizon’s own earnings release shows that in the first nine months of 2015, now labouring under the apparently repressive Title II, the  company invested “approximately $22 billion in spectrum licenses and capital for future network capacity”. 

Interestingly, Pai’s page at the FCC also states his regulatory position in a series of bullet points. These include:

  • Consumers benefit most from competition, not preemptive regulation. Free markets have delivered more value to American consumers than highly regulated ones.
  • The FCC is at its best when it proceeds on the basis of consensus; good communications policy knows no partisan affiliation.

History shows that Title II regulation wasn’t pre-emptive. It was a response to increasingly bold and shady practices by ISPs that forced the Commission’s hand.

Net neutrality currently maintains the kind of free market that big corporations usually crave. It’s a rare example of regulation removing barriers to trade. Companies of all types and sizes are currently free to compete on the internet, but cannot deny others from competing.

Scrapping US net neutrality will also affect non-US internet users who access US-based content via a VPN. At the US end of the VPN, traffic is handed off to a US ISP. If that company favours some sites over others (or even blocks them), the ability to access content will be guided towards the choices set out by commercial interests, just as if the user was in the US.

Given all this, it is perhaps rather cynical that the Bill to remove Title II status from ISPs, sponsored by Republican senator Mike Lee of Utah, is called the “Restoring Internet Freedom Act“. With Pai also a declared Republican, and dead set on rolling back Title II, the meeting on May 18th to decide whether to proceed could have been a short one with a distinctly partisan flavour.

Back from the Dead

Forgotten web sites can haunt users with malware.

Last night, I received a malicious email. The problem is, it was sent to an account I use to register for web sites and nothing else.

Over the years, I’ve signed up for hundreds of sites using this account, from news to garden centres. One of them has been compromised. The mere act of receiving the email immediately marked it out as dodgy.

The friendly, well written message was a refreshing change from the usual approach, which most often demands immediate, unthinking action. The sender, however, could only call me “J” as he didn’t have my forename. There was a protected file attached, but the sender had supplied the password. It was a contract, he said, and he looked forward to hearing back from me.

The headers said the email came from a French telecoms company. Was someone on a spending spree with my money? My PayPal and bank accounts showed no withdrawals.
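Header triage like this is easy to repeat on any saved message using Python’s standard library; the filename below is just a placeholder for wherever you saved the suspect email:

```python
from email import policy
from email.parser import BytesParser

# "suspicious.eml" is a placeholder for wherever you saved the message.
with open("suspicious.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print("From:   ", msg["From"])
print("Subject:", msg["Subject"])

# Each relay prepends its own Received header, so the last one listed
# is usually the closest to the original sender.
for i, hop in enumerate(msg.get_all("Received", []), 1):
    print(f"Hop {i}: {' '.join(hop.split())}")
```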

Curious about the payload, I spun up a suitably isolated Windows 10 victim system, and detonated the attachment. It had the cheek to complain about having no route to the outside world. I tried again, this time with an open internet connection. A randomly-named process quickly opened and closed, while the file reported a corruption. Maybe the victim system had the wrong version of Windows installed, or the wrong vulnerabilities exposed. Maybe my IP address was in the wrong territory. Maybe (and this is more likely) the file spotted the monitoring software watching its every move, and aborted its run with a suitably misleading message.

Disappointed, after deleting the victim system I wondered which site out of hundreds could have been compromised. I’ll probably never know, but it does reveal a deeper worry about life online.

Over the years, we all sign up for plenty of sites about which we subsequently forget, and usually with whichever email address is most convenient. It’s surely only a matter of time before old, forgotten sites get hacked and return to haunt us with something more focused than malicious commodity spam – especially if we’ve been silly enough to provide a full or real name and address. Because of this, it pays to set up dedicated accounts for registrations, or use temporary addresses from places such as Guerrilla Mail.

About

SE Labs Ltd is a private, independently-owned and run testing company that assesses security products and services. The main laboratory is located in Wimbledon, South London. It has excellent local and international travel connections. The lab is open for prearranged client visits.

Contact

SE Labs Ltd
Hill Place House
55A High Street
Wimbledon
SW19 5BA

020 3875 5000

info@selabs.uk
