We spent the last couple of posts looking at getting the name right, which is important to ensure that everyone's on the same page about what the test should deliver.
The next piece to look at is "what to test?". Getting the scope of a test right is critical to ensuring that the client gets best value for money and the tester gets enough time to actually do the work. Remember that this should all be driven by a clear business need and an understanding of the risk environment for the test to add real value to the organisation.
The fact of the matter is that a tester could spend an almost infinite amount of time testing most environments; you can just keep going deeper into the system under review. Once you've gone beyond basic version and vulnerability checking, you can look at password guessing, then fuzzing all the running services, then reverse engineering all the software, and so on. Of course, it'd be a rare test that provided that kind of time.
So with the assumption that there's never enough time, how can you decide what a reasonable time to spend on a test is? Well, there are a number of techniques that can help the estimation process, and they vary depending on what type of system is under review. We'll cover a few of the more common ones here.
External Infrastructure Reviews - This is the "classic" external security review that most medium/large companies should be doing against their identified external perimeter, to ensure there are no major holes that an attacker can waltz through. Assuming that we're going for a security review (as defined earlier), the major variable to consider in the scoping exercise is the number of live services on the hosts under review.
We think this is a better approach than the commonly used "number of IP addresses", as ultimately what the tester is assessing is live services. If you've got a Class C to review but there are only two servers with a couple of open ports, that's likely to be a quick job; on the same Class C with 80 live hosts running tens of services each, you've got a really long haul to get a decent level of coverage.
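To make that concrete, here's a minimal sketch of turning scan results into a "live services" count rather than an IP count. It assumes nmap's grepable (-oG) output format; the exact field layout is an assumption here, so adapt the parsing to whatever scanner output you actually have.

```python
# Hypothetical sketch: derive a scoping metric (live services, not IPs)
# from nmap grepable (-oG) style output. The line format assumed below
# is an approximation; adjust the parsing for your scanner of choice.

def count_live_services(grepable_output: str) -> dict:
    """Return {host: number_of_open_ports} for each live host."""
    services = {}
    for line in grepable_output.splitlines():
        if "Ports:" not in line:
            continue
        host = line.split()[1]                    # "Host: 10.0.0.5 (...)"
        ports_field = line.split("Ports:")[1]
        open_ports = [p for p in ports_field.split(",") if "/open/" in p]
        if open_ports:
            services[host] = len(open_ports)
    return services

# Toy scan of a /24 where only two hosts answered:
sample = """\
Host: 10.0.0.5 ()  Ports: 22/open/tcp//ssh///, 80/open/tcp//http///
Host: 10.0.0.9 ()  Ports: 443/open/tcp//https///, 25/filtered/tcp//smtp///
"""
live = count_live_services(sample)
print(live)                                  # {'10.0.0.5': 2, '10.0.0.9': 1}
print("total services:", sum(live.values()))  # total services: 3
```

A whole Class C scoped this way comes out as "3 services to assess", not "256 IPs", which is a far more honest basis for a day-rate estimate.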
Web Application Reviews - With reviews focusing on web applications there tend to be a lot more moving parts which can affect how long is needed to complete them. The basic metric that gets used is the number of pages on the site, but this isn't usually a good indication: static HTML pages with no input are really easy to cover, and some sites use CMS systems which hide everything behind a single URL. Some of the factors that need to be considered when working out a scope for a web application test are:
- Number of business functions that the site has. Business logic testing tends to be done per-function (eg, for a banking site, a move money request would be one function, for a shopping site it could be the checkout process)
- Number of parameters on the site. Like the number of pages, this one can be deceptive, as the same parameter may be used on many different pages, but it's probably closer to being useful.
- Number of user types/roles. Assuming that the test is covering all the authenticated areas of the site, horizontal and vertical authorisation testing may need to cover all user types, so the number of roles to be tested is pretty important.
- Environment that the test is occurring in. This one is often overlooked, but can have a large effect on the timing of the test. If you're testing against a live application, particularly if authenticated testing is needed, then it can limit the amount of automation/scripting that can be used. Automated scanners will fill forms in many times to assess different vulnerability classes, and this can be a pretty undesirable result for a live site, so manual testing becomes the order of the day. Conversely, in a test environment where the tester has exclusive access, automated tools can be used more heavily without risking impacting the site.
- Are linked sites in-scope? With modern sites there can be a lot of cases where functionality used on the site is actually sourced from other hosts or even from other companies. If the site owner wants assurance over the whole solution this can expand the scope quite a bit and if third parties are involved can make the whole test quite a lot more complicated. Ultimately those bits might get excluded but it's worth asking the questions up front to avoid a disappointed customer.
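The factors above can be pulled together into a rough back-of-the-envelope estimate. This is a hypothetical sketch only: the day-weights below are invented for illustration, and any real figures should come from your own team's historical scoping data.

```python
# Hypothetical sketch: combine the web app scoping factors above into a
# rough day estimate. All weights here are invented for illustration --
# calibrate against your own past engagements before relying on them.

def estimate_days(functions: int, parameters: int, roles: int,
                  live_environment: bool) -> float:
    days = 1.0                       # baseline: setup, recon, reporting prep
    days += functions * 0.5          # business-logic testing per function
    days += parameters * 0.05        # input testing per distinct parameter
    if roles > 1:
        days += (roles - 1) * 0.5    # horizontal/vertical authz per extra role
    if live_environment:
        days *= 1.5                  # manual testing replaces automation
    return round(days, 1)

# A mid-size transactional site, tested against production:
print(estimate_days(functions=8, parameters=40, roles=3,
                    live_environment=True))   # 12.0
```

Note how the live-environment multiplier dominates: losing automation is often the single biggest driver of extra time, which is why it's worth pinning the environment down early in the scoping discussion.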
Had this useful comment from Carl over at the Chain:
One obvious consideration to add in the 'what?' question is the criticality of the system or the data it holds. Some static sites could have higher reputational damage from defacement; some transactional sites, especially e-tailers, might have sufficient backend process controls to prevent the transactions actually being shipped.
From my time as a tester I found it was all too easy to get sucked into testing something 'interesting' without realising a successful attack would have limited impact.
So I think a risk-based discussion around what are you trying to protect is an important tool for setting the scope.
Absolutely agree with Carl on this - we will be looking at the whole concept of business risk focused decisions on scope in a future post.
Please keep the comments coming
Agreed - you need to make sure any scoping discussion clearly identifies the risks that need to be covered by the testing. There's no point testing something they don't care about.
Matthew (still at EY)