Tuesday, 2 November 2010

CYA (Consider Your Audience)

Just picking up on one of the points from the discussion Rory Alsop and Chris Riley were having on the penetration testing industry. I've been giving some thought to the issue of communication.

With any presentation or document, probably the most important thing to think about is "who will be receiving this information?" Certainly whenever I'm preparing a presentation, that's the first thing I consider, as it drives the rest of what I create.

Now in the penetration testing industry the main means of communication is the post-test report. Whether that's purely a written document that's handed over at the end of testing, or handled as part of a wash-up meeting, it's the tester's main opportunity to communicate what they did and what the real value of the test was.

Unfortunately, in a lot of cases the opportunity to customise the report heavily for a specific audience isn't available, as it will go to multiple groups, but a wash-up meeting can offer some good opportunities to focus on the test in different ways.

To illustrate the point, let's take a pretend test for Hypothetical Corp, which looked at their perimeter network and their main transactional web application.

Let's say there were findings for unencrypted management protocols on external devices, out-of-date web server software, and several web application findings in session management, authorisation and input validation (SQL injection and XSS): the standard sort of things you may see in many areas.

Now we've been asked to do a wash-up meeting to explain our findings. Depending on who the main people at the meeting are, I'd say there are at least three completely separate ways to present this information.

If we're presenting to developers and admins, you could focus on:
  • Exactly which areas of the application had input validation issues, the best ways of mitigating them, and possible ways that their application framework could be used to help.
  • For the unencrypted management protocols, you might talk about sensible alternatives that still allow the admins to get access (VPN, move to encrypted protocols, partial mitigation like source IP address restrictions etc.)
  • Demonstrate authorisation problems in the application and explain how an attacker might be able to get additional access to the system. Again suggesting how you'd recommend that they approach fixing it can be very useful.
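When walking developers through the input validation findings, a small before-and-after snippet often lands better than an abstract description. Here's a minimal sketch of the kind of fix you might show, using Python's `sqlite3` purely as a stand-in for whatever database layer the team actually uses:

```python
import sqlite3

# Toy database standing in for the real application's data layer
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user_unsafe(name):
    # Vulnerable: attacker-controlled input concatenated straight into SQL
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Fixed: parameterised query, so the driver handles the input safely
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection string dumps every row from the unsafe version,
# while the parameterised version treats it as a literal (non-matching) name
payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [(1, )] - leaks all users
print(find_user_safe(payload))    # [] - no match
```

The same point can usually be made in the team's own framework, where an ORM or query builder often gives them parameterisation for free.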
If we're presenting to the IT Security team you could take a slightly different tack:

  • Talk about the potential for an attacker to use the issues discovered (e.g. SQL injection) to expand their attack and compromise additional areas of the environment.
  • Talk about current trends in attacks, providing some information on the attacks that are most likely.
  • Look at what data is potentially compromised by any of the attacks, particularly in terms of any credit card or personal information that may have a regulatory impact if it's lost.
Lastly, you may be "lucky" enough to present your findings to senior management, and again there's probably a different set of things that they're interested in:

  • Potential business impacts of one of the findings being exploited. Loss of customer data, regulatory fines etc.
  • How the organisation compares to other companies in their industry or area is also likely to be of interest.
  • Demos (if possible). My experience has been that actually demonstrating how easy a compromise is can be quite convincing to senior management.
So here we see three different presentations, all from the same test, and there are numerous other ways that this information could be tailored, depending on the audience.

The main point of all this is that considering the recipient of any information is key to getting your message across.

Building in time to understand the potential audience at the start of the engagement can be invaluable. For more mature buyers of security testing, agreeing a standard format with sections that can be pulled out as needed can help get the message across effectively: for example, a report with a targeted executive summary, plus an XML section for the techies so they can pull the data into their own reporting tool.
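As a sketch of what that machine-readable section might look like (the element names here are purely illustrative, not any standard schema):

```xml
<findings>
  <finding id="HC-2010-003" severity="high">
    <title>SQL injection in order search</title>
    <location>/orders/search?q=</location>
    <recommendation>Use parameterised queries throughout the data layer.</recommendation>
  </finding>
</findings>
```

The exact fields matter less than agreeing them up front, so the client's tooling can consume them without manual re-keying.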

One further point, which we have seen on occasion, is where a test produces findings which are the responsibility of different third parties. Delivering a single report back may not be allowed, as confidentiality could be breached. What we have had to do in these situations is write multiple reports for the same level of audience, but with sections redacted. Planning for this at the start of a test will save you time.


  1. Hi Rory

    You say that you may be "lucky" enough to present your findings to senior management and that you can explain the "potential business impact". I wonder how you can explain the overall risk to senior management? For example, you discover a SQL injection vulnerability which allows you to do whatever you want with the database. So that has a huge potential business impact. Let's say it's difficult to exploit. The pen tester took a day working out how to actually exploit the vulnerability. How do you estimate the probability that it will be exploited in the wild? Is the actual risk so low that it is not worth fixing? (The developers are long since gone.) That is for me the interesting question - which is difficult to answer.


  2. Hi Alexis,

    You're right, business impact can be a pretty tricky question.

    For me a large part of sorting out potential business impact comes in having some kind of threat model for the company and potentially the application under review.

    Knowing who the likely threat groups are and what their goals would be can help articulate why it's worth fixing things.

    To take SQL injection as an example: if the issue is findable by an automated scanner, then there's a body of evidence (from SQL injection worms) that it's likely to be exploited, regardless of whether there's any data of value on the site.

    So in that case it's an easier equation: is the site valuable enough to fix, and if not, is it needed at all?

    What becomes trickier, in my experience, is situations where the exploit is only achievable by a decently knowledgeable attacker and/or by an authenticated user.

    So at that point you get the question of "will my site get targeted?" If there are things like financial data present (credit card numbers etc.), then again there's a decent body of evidence from breach reports that sites get attacked for that kind of information.

    In some cases there's also the reputational angle. Some companies may be a target for protesters (e.g. the RIAA), so vulnerabilities there are very likely to be exploited, which presents a case for fixing issues.

    It does become difficult, and to be honest it's one of the reasons why I think compliance and regulations are useful. Having a clear policy or industry requirement which says "you must do X" to be secure can simplify the argument about fixing things quite a bit.

    The only regulation I've seen which actually goes down to a decent level of technical specification (i.e. actually mentions something like SQL injection by name) is PCI.

    In the UK there's the Data Protection Act, which can be a useful point of reference for vulns that expose personal information.