Tuesday 28 September 2010
Penetration Testing? A Taxonomy
One of the key issues we see is that there are different reasons to go broad or deep. A wide review could aim to identify a range of areas that should be improved, whereas a targeted attack simulation could give good information on what an attacker could do with, for example, an opening in the perimeter combined with weak access controls, but may not find many vulnerabilities.
The second issue is with vendors that sell you a "penetration test" but only deliver a lower level of assessment, which can lead to a false sense of security.
So the problem with the "penetration test" term is that most people associate it with broad coverage of security issues, rather than a focus on specific weaknesses and how they can be exploited.
At the end of the day, an attacker only needs to find one exploitable vulnerability. So while there are certain situations where allowing security testers free rein to go for the crown jewels may be the best option, given the prevalence of the perimeterised "hard on the outside, soft on the inside" security model, organisations may find that a broader approach provides greater assurance for the same budget.
So there is almost a forked model of testing. Typically you would begin with discovery, scanning for common vulnerabilities, and then assessment of those vulnerabilities. After this, the split could be towards Security Assessment (the broad review to find as many vulnerabilities as possible and assess the risk to the business) or towards Penetration Testing (the attempt to exploit and penetrate the organisation to gain access to a particular target).
There will be occasions where these two forks join up again: where you want a broad review with added information on the extent to which a real-world attacker could penetrate.
In order to make it easier to discuss the various stages, our taxonomy is as follows. Please leave comments if you feel improvements are required, and we will develop the taxonomy accordingly:
Discovery
The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities.
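As a rough illustration (not part of the taxonomy itself), the sketch below shows what a minimal discovery probe could look like in Python. It assumes a socket-level TCP connect scan with a simple banner grab; in practice a dedicated tool such as Nmap would be used, and the target address and port list here are placeholders.

```python
# Minimal discovery sketch: TCP connect scan plus banner grab.
# The target and port list are illustrative placeholders only.
import socket

TARGETS = ["192.0.2.10"]           # hypothetical in-scope host (TEST-NET-1)
COMMON_PORTS = [21, 22, 25, 80, 443]

def probe(host, port, timeout=2.0):
    """Attempt a TCP connection; return (open?, any banner volunteered)."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                banner = s.recv(128).decode(errors="replace").strip()
            except socket.timeout:
                banner = ""        # port is open but the service is silent
            return True, banner
    except OSError:
        return False, ""

for host in TARGETS:
    for port in COMMON_PORTS:
        is_open, banner = probe(host, port)
        if is_open:
            # A banner such as "SSH-2.0-OpenSSH_4.3" hints at the version
            # in use, which feeds the later stages of the process.
            print(f"{host}:{port} open {banner!r}")
```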
Vulnerability Scan
Following the discovery stage, this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool, with no manual verification or interpretation by the test vendor. This can be supplemented with credential-based scanning, which looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local Windows accounts).
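To make the "automated matching" point concrete, here is a deliberately tiny sketch of what a scanner does at its core. The signature table and risk levels below are illustrative assumptions; real scanners rely on large, maintained vulnerability databases.

```python
# Tiny sketch of automated vulnerability matching. The signature table
# below is made up for illustration; real tools ship large databases.
KNOWN_VULNERABLE = {
    ("OpenSSH", "4.3"): ("CVE-2006-5051", "High"),
    ("Apache", "2.2.8"): ("CVE-2008-2364", "Medium"),
}

# Output of the discovery stage: (host, port, product, version).
discovered = [
    ("192.0.2.10", 22, "OpenSSH", "4.3"),
    ("192.0.2.10", 80, "Apache", "2.2.8"),
]

for host, port, product, version in discovered:
    match = KNOWN_VULNERABLE.get((product, version))
    if match:
        cve, risk = match
        # The risk level comes straight from the signature table:
        # no manual verification or interpretation, as described above.
        print(f"{host}:{port} {product} {version} -> {cve} [{risk}]")
```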
Vulnerability Assessment
This uses discovery and vulnerability scanning to identify security vulnerabilities, and places the findings into the context of the environment under test. Examples would include removing common false positives from the report and deciding the risk level that should apply to each finding, to improve business understanding and context.
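Continuing the sketch from the previous stage, contextualisation could look something like the following: known false positives are dropped and risk levels are adjusted for the environment. The false-positive list and the single "internal-only" downgrade rule are made-up examples of the sort of judgement a tester applies.

```python
# Sketch of putting raw scan findings into context. The false-positive
# set and the downgrade rule are illustrative examples of tester judgement.
FALSE_POSITIVES = {"CVE-2008-2364"}           # e.g. confirmed patched build
DOWNGRADE = {"High": "Medium", "Medium": "Low", "Low": "Low"}

findings = [
    {"cve": "CVE-2006-5051", "risk": "High", "internet_facing": False},
    {"cve": "CVE-2008-2364", "risk": "Medium", "internet_facing": True},
]

assessed = []
for finding in findings:
    if finding["cve"] in FALSE_POSITIVES:
        continue                              # remove known false positive
    if not finding["internet_facing"]:
        # Internal-only exposure: re-rate the risk for this environment.
        finding["risk"] = DOWNGRADE[finding["risk"]]
    assessed.append(finding)

print(assessed)   # one finding left, downgraded from High to Medium
```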
Security Assessment
Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could take the form of authorised access to a system to confirm system settings, and involve examining logs, system responses, error messages, codes, etc. A Security Assessment looks for broad coverage of the systems under test, rather than the depth of exposure that a specific vulnerability could lead to.
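As a small example of what such verification might look like, the sketch below confirms a reported SSH configuration finding by reading the actual setting with authorised access on the host, rather than trusting the scanner's inference. The file path, directive and verdict wording are assumptions for illustration.

```python
# Sketch of manual verification with authorised access: confirm a finding
# by inspecting the real system setting. Path and directive are assumptions.
from pathlib import Path

def root_login_permitted(config_path="/etc/ssh/sshd_config"):
    """Return True/False if the directive is set, None if it is absent."""
    for line in Path(config_path).read_text().splitlines():
        parts = line.strip().split()
        if len(parts) >= 2 and parts[0] == "PermitRootLogin":
            return parts[1].lower() == "yes"
    return None   # directive absent: the daemon's default applies

status = root_login_permitted()
verdict = {True: "confirmed", False: "false positive", None: "inconclusive"}
print(verdict[status])
```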
Penetration Test
Penetration testing simulates an attack by a malicious party. Building on the previous stages, it involves the exploitation of found vulnerabilities to gain further access. This approach results in an understanding of an attacker's ability to gain access to confidential information, affect data integrity or the availability of a service, and the respective impact. Each test is approached using a consistent and complete methodology, in a way that allows the tester to use their problem-solving abilities, the output from a range of tools, and their own knowledge of networking and systems to find vulnerabilities that automated tools would not or could not identify. This approach looks at the depth of attack, as compared to the Security Assessment approach, which looks at broader coverage.
Thursday 23 September 2010
What's in a name
While we've been working on 7 Elements, we've been putting some thought into how penetration testing is currently sold and delivered and how we can improve the process for customers and suppliers.
The first step for us was to understand some of the problems, as we see them, and the first of these is the name itself.
Penetration testing has come to mean a wide variety of things and it tends to get used interchangeably. It was originally understood to refer to a specific type of testing where the tester would emulate an attacker (generally in a black-box style of test) and try to get access to a specific set of services. A penetration test wasn't concerned necessarily with finding as many security issues as possible, but with proving whether an attacker could get unauthorised access to a system.
Now it seems to be used to refer to anything vaguely related to security testing, from vulnerability scanning, through web application testing and code review, to actual penetration testing.
The major problem this causes is that people refer to "penetration testing" while having completely different ideas of what that testing will deliver.
This can cause problems in several areas, such as buying of testing services. How does a customer compare two companies selling penetration testing if one charges £400 a day and another charges £1200 a day?
Another problem comes when regulators or customers specify that an organisation must have a "penetration test", when what they really want is assurance that the organisation has addressed commonly occurring security issues across all parts of a given system.
So what's the answer to all this? Well, we think the best way forward is to move away from the "penetration test" terminology and begin to categorise types of security testing/assurance/review. We have been working with individuals across a range of organisations, including CREST, OWASP, buyers and vendors, and have created a draft outline.
In our next post we plan to further develop this straw man into an industry ready draft.
Monday 20 September 2010
OWASP Ireland 2010
Set in sunny Dublin, the day hosted a wide range of interesting talks on web application security topics. The conference was very well attended and drew people from a wide variety of backgrounds.
John Viega's keynote kicked off the day with a theme that persisted through many of the talks: the need for a realistic and pragmatic approach to security. John has had a lot of experience in managing software security teams, and one of the key messages we took from the talk is that perfectly secure software is unattainable, so it's important to focus limited resources where they will make the most difference.
After the keynote there was a brief mini-presentation from Eoin Keary and Dinis Cruz on what the OWASP board has been focusing on over the last year and what's in store over the next 12 months. We also got a mini version of Samy Kamkar's Black Hat presentation "How I Met Your Girlfriend", which combined XSS issues in home routers with the geo-location facilities provided by Google to precisely locate someone based on them visiting a site you control.
The conference split into two tracks at this point, the following covers highlights from each.
Dr Marian Ventuneac gave an interesting presentation looking at how web application vulnerabilities can affect a wide range of e-mail security appliances, including virtual appliances and SaaS offerings (e.g. Google Postini). It was a good reminder of how widespread web application issues can be, and of why it's important to review all web application interfaces in use by a company, even if they're provided by a "trusted" vendor.
After that, 7 Elements' David Stubley was up to talk about "Testing the resilience of security". This is something that we'll be covering on our main site, so we won't talk about it too much here, other than to congratulate Dave on a well-received presentation.
In the other room at the same time, Ryan Berg from IBM gave a very enthusiastic presentation on the process of secure development and the reality of software security. It was interesting to hear the theme of assuming that your internal network is compromised come up again from Ryan. A growing chorus of voices in the security industry points out that the complexity of modern IT environments, and the flexibility demanded by business management, make it almost impossible to rely on a "secure perimeter" as a defence; instead, defenders should design their security controls on the assumption that attackers already have some level of access to the internal network.
Dan Cornell from Denim gave a fun canter through the subject of iPhone and Android applications under the title "Smart Phones with Dumb Apps". The key takeaway was the need to educate developers that the bad guys can and will decompile your application, so be aware of the sensitive data it contains. Another point made was that even though iPhones dominate the market at the moment, Android sales have the greatest momentum. Given this, we feel that development of Android applications for financial organisations will become more prevalent as they become the next "must have" for business marketing and sales teams.
After lunch in the scenic Trinity College Dining Hall, Professor Fred Piper gave a talk on the changing face of cryptography. His talk covered quite a bit of the history of cryptography and how its uses have changed over time. Fred also touched on some areas where cryptography goes wrong, making the point that it's usually the implementation of an algorithm that is successfully attacked, rather than the algorithm itself.
The next presentation that we sat in on was from Dinis Cruz on his O2 platform. As usual with Dinis there was an awful lot of information to take in, but it's obvious that he's doing some really interesting things with the O2 platform, and it'll be very interesting to see how it matures over time.
After Dinis, the remaining two members of the 7 Elements team (Rory Alsop and Rory McCune) were up, to talk about the realities of penetration/security testing. We've put our slides up here but this topic is one that we want to cover off in more detail in this blog over the next couple of weeks.
Unfortunately, after that our time was up and we needed to head off to the airport to get back to Scotland.
Thanks to Eoin and the team for inviting us over to present.