When security policies and security testing meet…
Security solutions can stop you getting things done. They can make mistakes, interpreting your actions as malicious and blocking your work. They can also blindly follow security policies set by the IT department. Sometimes they do both! How can you predict which products will be most accurate after you buy them?
Custom security policies
Your business most likely doesn’t rely entirely on the detections and protections offered by security solutions. IT usually needs to make at least some configuration changes. Default settings should be good, but businesses commonly make their own adjustments. Every company has its own characteristics and one size definitely does not fit all.
What is a security policy?
In this article a security policy is a setting that directs how the security product should behave. For example, a common policy for an email service is to block executable files.
It’s rare to receive programs as email attachments these days. Bad guys have used that vector to send malware for years so popular email services block programs by default. Also, program files are large and email isn’t the best service to use. Dropbox, for example, would be more appropriate.
Another common email security policy would block Dropbox or Google Drive links, because these can also be abused. If you struggle to send large documents to your bank it’s probably because they have policies blocking shared folder services.
A company that uses policies to block PDFs, shared folder links and Zip files has made great strides in keeping third-party files out of its network. Unfortunately it has also successfully stopped customers and partners from communicating using modern technology. If you need to fax information over, there might be a too-stringent security policy in place!
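The policies described above can be pictured as a simple rules engine sitting in front of the inbox. The sketch below is purely illustrative: the policy names, blocked extensions and message fields are invented for this example and don’t reflect any real vendor’s configuration or API.

```python
# Toy email policy engine, assuming invented policy rules for illustration.
BLOCKED_EXTENSIONS = {".exe", ".zip", ".pdf"}               # block executables, archives, PDFs
BLOCKED_LINK_DOMAINS = {"dropbox.com", "drive.google.com"}  # block shared-folder links

def evaluate_email(attachment_names, link_domains):
    """Return a (verdict, reason) pair for an incoming message."""
    for name in attachment_names:
        for ext in BLOCKED_EXTENSIONS:
            if name.lower().endswith(ext):
                return ("blocked", f"Block {ext} policy")
    for domain in link_domains:
        if domain.lower() in BLOCKED_LINK_DOMAINS:
            return ("blocked", f"Block {domain} link policy")
    return ("delivered", "No policy matched")
```

For example, `evaluate_email(["report.pdf"], [])` returns `("blocked", "Block .pdf policy")`, while a plain text attachment with no suspect links is delivered. Note that the reason string matters as much as the verdict, a point that becomes important later in the article.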
So policies need to balance convenience with security, which is the perennial challenge of all security – cyber and physical.
Security policies in security testing
Security tests often look at how well a product detects threats and allows legitimate behaviour. When it gets things wrong it can generate a false positive or a false negative. A false positive is when it mistakes a good thing for a bad one, while a false negative is when it classifies a bad thing as harmless. Both outcomes can be disastrous.
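The four possible outcomes can be written out explicitly. This is a minimal sketch of how a tester might label each verdict against known ground truth; the function and label names are our own, following the definitions above.

```python
# Label a product's verdict against ground truth known to the tester.
def label_outcome(is_actually_malicious, product_said_malicious):
    if is_actually_malicious and product_said_malicious:
        return "true positive"    # bad thing correctly flagged
    if is_actually_malicious and not product_said_malicious:
        return "false negative"   # bad thing classified as harmless
    if not is_actually_malicious and product_said_malicious:
        return "false positive"   # good thing mistaken for bad
    return "true negative"        # good thing correctly allowed
```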
We covered the issue of false positives in testing in our blog post, How to test for ‘false positives’.
In addition to that complex subject, there is another thing testers need to take account of – security policies. When a security policy becomes involved in a test it can be tricky to know how to score the product.
Why? Because a tester might expect one thing to happen but the product’s default policy doesn’t match that expectation.
Testers can be (unconsciously) biased
A basic example might involve entertainment software. The tester likes iTunes and includes it in the set of legitimate applications. The enterprise endpoint solution under test then blocks the application. The tester scores the product negatively because it has failed, but why did it block iTunes?
If the product wrongly classified iTunes as malware then that is indeed a false positive and the product should be scored down. But if it blocked iTunes because a default policy rejects entertainment software then it’s not a false positive and the product shouldn’t be penalised.
The tester, if particularly diligent (and belligerent), might then survey the world’s top 1,000 security customers to learn their opinions about iTunes. They could then confront the security vendor and say, “see, everyone wants iTunes. Your default policy is stupid!” But technically there’s nothing wrong with the policy, particularly as customers can change it. If the test is about how accurate the product is, rather than how astute the default policy makers are, then nothing changes and the product should score well.
Of course, this is a straightforward example, but as with anything in security testing things get more complicated and interesting!
Why is this email blocked?
Imagine that you are assessing an email security product that you intend to roll out across the business.
As part of this process your team configures the service to deny access to PDF files. This is appropriate because your company does not wish to receive PDFs over email. Maybe your business partners already have some other file-sharing scheme that works for you. Blocking PDFs over email is, therefore, a very sensible way to reduce the risk of malware.
While testing the service you make the reasonable decision to send a PDF to a test account. The service detects the PDF and prevents delivery. When you look at the event logs to confirm what happened, imagine seeing the following:
Email blocked: Block PDF policy
This seems accurate. The service is configured to block PDFs and it does so, reporting that the reason for its behaviour was a Block PDF policy.
But things don’t always go according to plan. In a real example that we saw earlier this year the event log looked like this:
Email blocked: GenericTrojanABC malware!
Correct action, wrong classification
In this case the product blocked a legitimate file in line with the policy, which is good, but it categorised the file as malware, even specifying the type of malware.
The email should have been blocked, because the policy commanded that it should be. You can’t blame a security product for following the policies it’s set to use. But you can blame it for incorrectly categorising the file.
For a business, the outcome is the same. The policy was upheld and PDFs were kept at arm’s length. But large businesses have security teams. Their job includes monitoring incoming threats and working out what happened and what they should do about it.
Incorrect alerts about malware are, at best, a distraction. They might even lead a security team to analyse threats that aren’t actually threats. Security products are supposed to save time and resources, not add to the client’s workload.
Scoring a product
Testers should score products according to how accurately they behave and how accurately they record their behaviour. In the incorrect example above the product took the right action but generated a classic false positive (FP) result. A result that says, “this legitimate file is actually malware,” is the very definition of an FP!
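One way to capture this two-part judgement is to score the action and the reported reason separately. The sketch below is a hypothetical scoring scheme we’ve invented for illustration, not a real test methodology; the field names and weighting are assumptions.

```python
# Hypothetical two-part score: did the product act correctly,
# and did it report its behaviour correctly?
def score_event(should_block, was_blocked, reported_reason, true_reason):
    score = 0
    if was_blocked == should_block:
        score += 1   # behaved accurately
    if reported_reason == true_reason:
        score += 1   # recorded its behaviour accurately
    return score

# The PDF example from the text: blocking was correct (the policy demanded it),
# but labelling a legitimate file as malware loses the reporting point.
score_event(True, True, "GenericTrojanABC malware", "Block PDF policy")  # scores 1 of 2
```

Under a scheme like this, a product that enforces the policy and logs “Block PDF policy” scores full marks, while the real-world example above scores only half.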
There is a question about which policies a tester should use. It’s not practical for testers to run repeatable tests using every conceivable combination of policies. So what should they do?
If the testing team is internal to an organisation then using the existing company policy makes a lot of sense.
Third-party testers working for such organisations can do the same.
What do reports mean to you?
General test reports, which seek to give an opinion on a range of security products, face a challenge. The reports can only give a strong impression of a product’s abilities, not the full picture. But that has always been the case, even with basic anti-malware testing. One important question consumers of security reports need to ask themselves is this…
Why am I reading this report? How closely do I expect these security products’ test performance to reflect how they would work in my own organisation?

1. Very closely
2. Not much. I’m carrying out basic due diligence
3. Not even a bit. I don’t know why I bother!

We respectfully suggest that somewhere between items 1 and 2 would be a sensible approach. Read more than one report from more than one tester. Understand that things like policy changes alter results in tests and the real world. But don’t allow this lack of certainty to drive you to item 3. We cover this in more depth in Can general security tests be useful?
Impact on business
Incorrect classifications can impact the business. Security specialists have enough to do without chasing threats that don’t exist. Desktop users can be annoyed too.
Keeping end users in the dark can cause problems ranging from decreased security to lower productivity.
Security products can alert both the user and the network administrators. Which is the best option? It’s hard to say, because a firewall that detects malware typically won’t alert the user, but it should alert the security team. But then will the user persist in trying to download something that appears to be failing for no good reason?
For example, a social engineering attack has convinced Colin that he needs to download a certain PDF. When he clicks the link nothing happens. The security team’s dashboard registers an attempted malware download but Colin simply sees his connection time out. As a dedicated member of the workforce he wants that PDF. Remember, he’s already convinced that he needs it. So he uses various techniques, maybe including a different computer, a handheld device or a VPN, to access the important file. The network didn’t alert him to the threat so he persisted.
In another example, Gia tries to download iTunes but is prevented from doing so by a policy. Fortunately the security solution is friendly enough to give an alert and to explain why the software is not available. It offers links to functional alternatives that might be useful, and provides a means to contact the helpdesk to request special access. As Gia runs the company podcast they might make an exception for her.
The truth of policies
It’s a good idea to test a security product to ensure it enforces policies, just as it is to check that its detections and protections work. Testers should not assume that default policies tell the whole story. There is too much room for customisation to make any serious judgement about a product’s general suitability based on default policies alone. It makes sense to use in-house policies or to create a test based on the policies of others.
A completely independent tester might even create a replica organisation that uses similar policies to well-respected companies and use that configuration in a test that could be useful to a wider audience. Discussing policies with large businesses is a challenging but necessary step in making general tests more useful to the wider community.
Find out more
See all blog posts relating to test results.