We have been asked to talk about security products and how they might not do what you assume they will.
Reports like this one (PDF) provide an interesting insight into how security products actually work. Marketing messages inevitably claim world-beating levels of effectiveness, and basic tests might well support those selling points. But when you actually hack target systems through security appliances, you sometimes get a very different picture.
Some vendors support the view that testing with a full attack chain (a malicious URL pushes an exploit, which in turn delivers a payload that finally gives us remote access to the system) is the right approach. Others may point out that the threats we use don't exist in the real world of criminality, because we created them in the lab and are not using them to break into systems worldwide.
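The difference between full-chain testing and simple file scanning can be sketched schematically. This is our own illustrative model (the stage names and scoring function are hypothetical, not taken from any real test framework): a product may intervene at any stage of the chain, and a full-chain test records where, if anywhere, it did.

```python
# Hypothetical model of full attack-chain scoring. The stage names and
# the run_chain helper are illustrative only, not any vendor's or lab's
# actual methodology.

STAGES = ["malicious_url", "exploit", "payload", "remote_access"]

def run_chain(product_blocks_at):
    """Walk the attack chain in order; return the first stage at which
    the product intervened, or None if the attacker reached the end."""
    for stage in STAGES:
        if stage in product_blocks_at:
            return stage          # threat neutralised at this stage
    return None                   # full compromise: remote access gained

# A product that only scans files might miss the URL and exploit stages
# but still catch the payload on disk:
print(run_chain({"payload"}))     # blocked late, at the payload stage
print(run_chain(set()))           # None: the attacker got remote access
```

The point of scoring the whole chain rather than a single stage is that two products with identical "detection rates" in a basic scan can behave very differently once a real exploit is in flight.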
We think that is a weak argument. If we can obtain certain popular, inexpensive tools online and use them to create threats, then those threats (or variants extremely close to them) are just as likely to exist in the 'real world' of the bad guys as in a legitimate, independent test lab. Nor do we keep creating new threats until we break in, which is what criminals (and penetration testers) do. We create a fixed set and, without bias, expose every tested product to the same threats.
In some ways we have evolved from being anti-malware testers into penetration testers, because we don't just scan malware, execute scripts or visit URLs. Once we gain access to a target we perform the same tasks a criminal would: escalating privileges, stealing password hashes and installing keyloggers. The only difference between us and the bad guys is that we hack our own systems and help the security vendors plug the gaps.
Latest report (PDF) now online.