Chris over at Catch22 just posted up this excellent blog article.
There is a huge amount of commonality in our thinking. Some extra thoughts on this:
Communication - over the last 12 or so years I have tried various types of business-focused training for testers, and there are very few who I would say are at the top of their game in both testing and reporting in business language. The few around are worth their weight in gold but very rare, so my fallback solution was always to have a member of the team responsible for business QA and reporting. They still need a high level of technical expertise, but the focus is different. (I do like Chris's idea of a tech reporting course, though!)
Relevance - understanding the customer's needs is definitely key. As we've discussed, working with the customer so they understand what their options are, the value in different services etc., should be a part of every engagement.
Accountability - two thoughts on this. One is the name-and-shame approach Chris mentions, but there are bound to be legal challenges, so the alternative is to use certifications (eg CREST, SANS etc) to be able to demonstrate to board level that you chose the right testers for the job, as the certification is effectively the entry qualification to the industry. In addition, you could go down the route of extensive logging (which would also help with the repeatability section below) so you can prove every step.
Standards - absolutely! See our earlier posts on taxonomy and nomenclature to understand an element of where we see standards going, and we are planning to continue to work with a good range of experienced security individuals to define a set of industry standards.
Repeatability - I think where possible a number of organisations already do this. On a recent project, my customer wanted at least a minimum level of evidence for each finding (including the parameters used and screenshots) to allow them to replicate the issue. That is only applicable for certain types of test, but it goes a long way to help, and it is relatively light on resource so shouldn't price you out of the market.
The great thing is that more and more people are aiming in the same direction. This has been a long time coming, but with passionate individuals, organisations and bodies, I think moving from the end of 2010 into 2011 will see a step change in the professionalisation of the industry.
Friday, 29 October 2010
Scottish Financial Crime Group Conference Highlights
This year's SFCG Conference was held at the Corn Exchange in Edinburgh yesterday and was a great success, with a wide range of delegates from financial services, consultancies, vendors, academia, the public sector and law enforcement (Scottish and Welsh police, and the FBI).
For me the key highlights included:
A presentation by Robert Hartman of KPMG on Bribery and Corruption in the Financial Sector. Some very worrying statistics, but also a down-to-earth approach to the problem. Robert also highlighted two useful sources of information: Transparency International and Trace Compendium.
A presentation on the risks around Social Media by DI Keith McDevitt of the SCDEA, a topic which is close to my heart and one which 7 Elements plan to present at one of the winter New Media Breakfast Briefings. Lots of interest in this area, and I had a good discussion with a number of delegates afterwards.
The launch of the e-crime Scotland website - with a huge amount of support from the Welsh Assembly Government, who launched theirs some time ago, Scotland now has its own portal for information on e-crime, a reporting mechanism, and a gateway into the topic.
There was also a surprise talk by Professor Martin Gill of the University of Leicester, who stepped in when one speaker was held up in transit. He spends a lot of his time interviewing criminals in prison and taking them to the crime scene to demonstrate how and why they commit their crimes. Some of his findings seem very counter-intuitive: for example, when confronted with the automatic lights homeowners may have fitted to the outside of the house, most burglars use them to scope out the property, identifying tools, escape routes, entry points and so on. Not one stated it would put them off, as no-one ever checks when an automatic light comes on! Similarly, CCTV was not seen as an issue.
Another useful point which came up: when asked what they thought the likelihood of getting caught was (given the options high, medium, low, none), they laughed at the question and said "zero likelihood", otherwise they wouldn't commit the crime. The corollary is that if we can persuade offenders, at the moment they are about to commit the crime, that they will get caught, then they are very unlikely to do it.
Although his talk was mostly about burglars, shoplifters and murderers, the same concepts hold true for white collar crime, so can we find ways to make criminals less certain they will get away with it at the time?
A member of the local fraud squad did tell me his solution was to push for removal of property under the Proceeds of Crime Act: going to prison without the reward of a couple of million pounds at the end of the term suddenly becomes a less enjoyable prospect, and letting criminals know that 'getting away' with a short stretch is no longer profitable can be a valuable deterrent.
Caught up with Lindsay Hamilton of Cervello - his company carries out database auditing (in fact he has joined forces with The Pete Finnegan to offer an awesome tool for Oracle auditing).
Some interesting exhibitors this year - M86 Security (the guys who incorporated Finjan into their product line) had some good chat around secure web gateways.
It was as ever a great networking opportunity - I always meet a lot of old friends and colleagues, as well as clients old and new, and these events give a good chance to catch up. One individual surprised me, as out of context I did not recognise her - a detective constable with the Specialist Fraud Unit. Turns out she sings with the Lothian and Borders Police Choir (who I play session guitar for on an occasional basis).
Rory Alsop
Friday, 22 October 2010
Incident Response
Today's blog post is based on a presentation I gave to the Scottish OWASP chapter around using security testers as part of your incident response approach.
The talk focused on the different skills and opportunities that using non-forensic based teams can bring to the party in dealing with a security incident and where you really need to use forensic based teams.
The highlights being:
Incident Response -
1. Key question to ask at the beginning of any incident is
"Is this likely to go to court or involve law enforcement?"
If there is any possible outcome that turns this question into a yes, then you have no choice but to use an approach that meets the evidential handling requirements of the local legal jurisdiction of the incident. Within the UK the foundations for this approach have been documented by the Association of Chief Police Officers (ACPO) and serve to ensure that evidence handling, investigation practices and supporting activity are carried out legally.
The following are the four high-level principles set out by ACPO:
Principle 1: No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.
Principle 2: In circumstances where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.
Principle 3: An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.
Principle 4: The person in charge of the investigation (the case officer) has overall responsibility for ensuring that the law and these principles are adhered to.
Basically, if you get involved in such an incident then make sure you work with forensically trained teams, be it internal resources or partnering with a specialist service provider.
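As a simple illustration of the audit-trail idea behind Principle 3, here is a minimal sketch (a hypothetical example only, not a substitute for proper forensic tooling or procedure) that hashes an evidence file and appends a timestamped record of each action taken, so that an independent party could later check that nothing has changed:

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_file(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_action(logfile, evidence_path, action, examiner):
    """Append a timestamped, hash-stamped record of an action to an append-only audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "examiner": examiner,
        "action": action,
        "evidence": evidence_path,
        "sha256": sha256_file(evidence_path),
    }
    with open(logfile, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Example (hypothetical file names): record that a disk image was copied for analysis
# log_action("audit.log", "server01-disk.img", "copied image to analysis workstation", "A. Examiner")
```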
2. In terms of basics for incident response, well PPPPPP! That's Prior Planning Prevents P$£& Poor Performance!
Make sure you have a documented approach to incident management, defined roles and responsibilities and that you have tested it all out prior to dealing with your first incident! Otherwise it is unlikely to go well. A few pointers that I have picked up over the years to aid this are -
3. Use IS specialists to provide advice and guidance to the incident group rather than having them run the incident. This approach enables your specialist to be just that: the specialist.
4. If the incident is in response to an event that could impact the business's ability to meet its objectives (say, make money?) then it should be the business representative who makes the final decision (based upon sound advice and guidance from the specialists). Too many times I have been involved in incident calls where the business rep has looked to offload the decision making onto the techies.
5. If you are dealing with a complex issue that requires both technical teams and business-focused teams to be working, think about splitting the two into focused incident groups, with a link person who carries messages between them. This compartmentalises the noise and chatter that is generated: the business team do not need to know how specific lines of code are going to be updated to stop that SQL injection attack, and the technical teams don't need to listen to the business managers talking about media statements and legal advice. This approach lets each team focus on the key issues they will have to deal with.
6. Have a dedicated (for the period of the incident) resource to manage incident calls, take notes and track actions. There is nothing worse than sitting on a call for three hours, reconvening later on and finding out that no one has actually done anything! This approach also helps to establish a timeline for the incident and will enable a more effective post-incident review to be conducted.
7. If you are not having to go down the forensic route, it could be useful to engage the services of your friendly hackers (we call them security testers) to provide their expertise.
Security testers enjoy problem solving, they can generally code (which is very useful when managing large amounts of data) and they have an innate understanding of exploits and the reality of what attackers can achieve. This insight can go a long way towards understanding what has happened and the risk exposure to the business, and towards highlighting potential options for recovery.
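As an example of the kind of quick scripting a tester might put together during an incident (a rough sketch only - the log format and the injection patterns are assumptions, not a definitive detection method), here are a few lines of Python that trawl an Apache-style access log for common SQL injection indicators and summarise which source addresses and URLs were involved:

```python
import re
from collections import Counter

# Crude indicators only - real triage would use far more context than pattern matching
SQLI_PATTERNS = re.compile(
    r"(union\s+select|'\s*or\s*'1'\s*=\s*'1|information_schema|sleep\(\d+\))",
    re.IGNORECASE,
)

def triage_access_log(path):
    """Count requests matching crude SQLi indicators, by source IP and requested path."""
    sources, targets = Counter(), Counter()
    with open(path, errors="replace") as f:
        for line in f:
            if SQLI_PATTERNS.search(line):
                parts = line.split()
                if len(parts) > 6:           # assumes common/combined log format
                    sources[parts[0]] += 1   # client IP is the first field
                    targets[parts[6]] += 1   # request path is the seventh field
    return sources.most_common(10), targets.most_common(10)

# Example (hypothetical file name):
# top_sources, top_urls = triage_access_log("access.log")
```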
Your friendly hacker is also very good at testing any fix to see if it successfully mitigates the exposure, or at conducting targeted assurance tests to understand whether you are vulnerable to the same issue in other areas of your organisation.
In summary, you can gain a huge amount of advantage through the use of security testers as part of your incident response approach.
However, remember that sound forensic practices need to be used in cases that will involve local law enforcement or the courts! Given the choice of putting a forensic engineer in front of the courts or a pale, caffeine-addicted hacker, I will choose the forensic engineer every time. :-)
Thursday, 14 October 2010
Making the test go smoothly
Carrying on our series of posts about the various stages of the security testing process, we're moving on to another frequently overlooked piece of the puzzle, which is test logistics.
Most testing that's carried out has quite tight timescales. On a big project, security testing will usually get a specific window to be completed in, so it's important that everything goes smoothly. Also, as testing is usually charged by days of effort, time not spent testing due to logistics problems is essentially money down the drain.
So what are the main causes of logistics problems and some possible solutions to them :-
- Credentials - For a lot of testing, authentication to systems is a requirement. Without credentials the tester can't do the work, and depending on the company and the application, getting new users onto the system can take a while. So it's always worth ensuring that, as a tester, you've clearly laid out what accounts you need and, as a client, you kick off the processes to get them sorted well before the test.
- Letters of Authorisation (LoA) - Making sure that no-one's going to accuse you of illegal hacking is kind of important for a smooth running test :o) Especially where there's a 3rd party involved (eg, hosting companies, outsourcing companies) the LoA is a very useful way to ensure that all relevant parties are aware that the testing is happening, what the dates are, and that they're happy for it to go ahead. A useful requirement of the LoA is to have a test term which is quite a bit longer than the expected test window, as delays often happen for various reasons, and it is generally easier to have one LoA covering a month for a week-long test than to raise separate one-week LoAs if things get delayed.
- Technical contact - While sometimes your customer will be the technical contact, in large companies it's likely that there are other departments or even other companies involved, and it's always a good idea to have the names and phone numbers of the right people, so that if a system crashes during testing, you can get in touch and minimise any problems.
- Escalation contact - Especially when testing environments which support large customer groups, financial transactions or critical data flow, being able to provide timely information on hold-ups, the business impact of technical issues, critical findings which just can't wait until the end of the test etc. to the right people can be a lifesaver - both for the customer, and for the tester. Without an escalation contact, tests are often halted for all manner of glitches, including those unconnected with the test. The contact is often all that is needed to provide context to business so they can make informed go/no-go decisions.
- Network Access - Not a problem on every test, but internal tests, especially for large companies, can run into problems when it's not possible to get a clear network connection to the systems to be reviewed. It's always worth connecting a machine before the testers get to site and making sure you can get an IP address and reach the in-scope systems (there's a quick sketch of such a check after this list).
- Availability of system to be tested - Might seem like a no-brainer, but applications, systems or websites are sometimes down for maintenance, operational testing etc., and the teams working on them may be separate from the individuals liaising with the security tester. All this does is incur cost, and eat into test windows, so we would recommend ensuring all relevant teams have visibility of the security test, and its requirements.
- Jurisdiction - Especially for testing companies operating in different countries, you need agreement (often in the form of an LoA or specific contract terms) for data potentially being accessed in another jurisdiction, for example when testing a European organisation from the Far East. Looking into the legal requirements up front at the initial scoping stage can save a whole lot of pain further down the line.
- Desk and chair! - Shouldn't need a mention, but testers do need somewhere to sit :) Many of us have carried out tests huddled on the floor of a cooled data centre, but it is good practice to follow basic Health and Safety policy.
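On the network access point above, the pre-test check can be as simple as the sketch below (the hostnames and ports are placeholders, and it assumes plain TCP connectivity is what needs verifying):

```python
import socket

# Placeholder scope - replace with the in-scope hosts and ports agreed for the test
IN_SCOPE = [("app1.internal.example", 443), ("db1.internal.example", 1433), ("10.0.5.20", 22)]

def check_reachability(targets, timeout=3):
    """Attempt a TCP connection to each (host, port) pair and report the result."""
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"OK      {host}:{port}")
        except OSError as err:
            print(f"FAILED  {host}:{port} ({err})")

if __name__ == "__main__":
    check_reachability(IN_SCOPE)
```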
Friday, 8 October 2010
The Web Hacking Incident Database 2010
The Web Hacking Incident Database 2010 part-year results have just been published (WHID 2010). While the statistics are based on a limited population, they represent the tip of the iceberg of a much larger issue. Many incidents will go unreported to the wider public for many reasons, some due to commercial decisions and others because the organisation is not even aware it has been compromised.
So what are the key points from this report and our take on what this means?
- "A steep rise in attacks against the financial vertical market is occurring in 2010."
The report highlights that these attacks are in fact against users rather than the financial institutions directly: attackers compromise end points to obtain customer account details and then use the credentials to move money away.
- "Banking Trojans (which result in stolen authentication credentials) made the largest jump for attack methods."
This finding goes hand in hand with the first point raised about the targeting of the financial sector. The use of Trojans has been a well-established route for cyber criminals to gain access to sensitive information; the change here is in the organised element of online crime. With global networks and the use of mules (individuals solicited to aid the extraction and movement of money from accounts), criminals are now able to use the stolen credentials to cash out large amounts of money. The Zeus Trojan is a recent example of this escalation, with 37 people arrested and charged with being members of an international crime ring that stole $3 million. Attacks against financial institutions do happen and some are successful; the parallel between client-side attacks and direct attacks is the globally coordinated use of mules to cash out.
- "Application downtime, often due to denial of service attacks, is a rising outcome."
This is an interesting finding and sits well with our view that you need a resilient approach to security. A good statistic here would have been a comparison between actual downtime and any 'Recovery Time Objectives' that an organisation should have as part of its business continuity plans. Are organisations resilient enough to meet their business objectives or not?
- "Organizations have not implemented proper Web application logging mechanisms and thus are unable to conduct proper incident response to identify and correct vulnerabilities."
This echoes one of the core principles around a resilient approach to security that we outlined at OWASP Dublin 2010: the need to be able to detect and react to an attack effectively. We will continue to look at the world of security resilience in further posts.
All in all, concise points and clear graphics make this a good read and it is well worth a view. If you liked the WHID then head over to the Verizon 2010 report, which draws upon a wider population for its statistical analysis.
Thursday, 7 October 2010
What to test?
We're going to use this post to continue the theme of talking about how the process of setting up and delivering a security test (or penetration testing) can be looked at and improved.
We spent the last couple of posts looking at getting the name right, which is important to ensure that everyone's on the same page with what the test should deliver.
The next piece to look at is "what to test?". Getting the scope of a test right is critical to ensuring that the client gets best value for money and the tester gets enough time to actually do the work. Remember that this should all be driven by a clear business need and an understanding of the risk environment for the test to add real value to the organisation.
The fact of the matter is that a tester could spend an almost infinite amount of time testing most environments; you can just keep going deeper into the system you're testing. Once you've gone beyond basic version and vulnerability checking, you can look at password guessing, then fuzzing all the running services, then reverse engineering all the software, and so on. Of course it'd be a rare test that provided that kind of time.
So with the assumption that there's never enough time, how can you decide what a reasonable time to spend on a test is? Well, there are a number of techniques that can help the estimation process, and they vary depending on what type of system is under review. We'll cover a few of the more common ones here.
External Infrastructure Reviews - This is the "classic" external security review that most medium/large companies should be doing on their identified external perimeter to ensure that there are no major holes an attacker can waltz through. Assuming that we're going for a security review (as defined earlier), the major variable to consider in the scoping exercise is the number of live services on the hosts under review.
We think that this is a better approach than the commonly used "number of IP addresses", as ultimately what the tester is assessing is live services. If you've got a Class C to review but there are only 2 servers with a couple of open ports, that's likely to be a quick job; but on the same Class C, if you've got 80 live hosts with tens of services each, you've got a really long haul to get a decent level of coverage.
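To make the live-services idea concrete, here's a rough illustration of how that count might drive an estimate. The per-service effort figures are invented purely for illustration - every tester and firm will calibrate their own:

```python
# Illustrative only: effort figures are invented placeholders, not a recommended pricing model
HOURS_PER_SERVICE = 1.5      # assumed average time to review one live service
MIN_DAYS = 2                 # floor covering setup, scanning and reporting

def estimate_days(hosts):
    """hosts: dict mapping each live host to its list of open/live services."""
    total_services = sum(len(services) for services in hosts.values())
    testing_days = total_services * HOURS_PER_SERVICE / 8
    return max(MIN_DAYS, round(testing_days + MIN_DAYS, 1))

quiet_range = {"192.0.2.10": ["ssh", "https"], "192.0.2.11": ["smtp"]}
busy_range = {f"192.0.2.{i}": ["ssh", "http", "https", "smtp", "dns"] for i in range(1, 81)}

print(estimate_days(quiet_range))   # a couple of days for the quiet Class C
print(estimate_days(busy_range))    # 80 hosts x 5 services: many weeks of effort
```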
Web Application Reviews - With reviews focusing on web applications there tends to be a lot more moving parts which can affect how long is needed to complete them. The basic metric that gets used is the number of pages on the site, but this isn't usually a good indication as static HTML pages with no input are really easy to cover and some sites use CMS systems which hide everything behind a single URL. Some of the factors that need to be considered when working out a scope for a web application test are :-
- Number of business functions that the site has. Business logic testing tends to be done per-function (eg, for a banking site, a move money request would be one function, for a shopping site it could be the checkout process)
- Number of parameters on the site. Like the number of pages, this one can be deceptive, as the same parameter may be used on many different pages, but it's probably closer to being useful.
- Number of user types/roles. Assuming that the test covers all the authenticated areas of the site, horizontal and vertical authorisation testing may need to cover all user types, so the number of roles to be tested is pretty important.
- Environment that the test is occurring in. This one is often overlooked, but can have a large effect on the timing of the test. If you're testing against a live application, particularly if authenticated testing is needed, then it can limit the amount of automation/scripting that can be used. Automated scanners will fill forms in many times to assess different vulnerability classes, which can be a pretty undesirable result for a live site, so manual testing becomes the order of the day. Conversely, in a test environment where the tester has exclusive access, automated tools can be used more heavily without risking impact on the site.
- Are linked sites in-scope? With modern sites there can be a lot of cases where functionality used on the site is actually sourced from other hosts or even from other companies. If the site owner wants assurance over the whole solution this can expand the scope quite a bit and if third parties are involved can make the whole test quite a lot more complicated. Ultimately those bits might get excluded but it's worth asking the questions up front to avoid a disappointed customer.
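One way to keep these factors front of mind during scoping is to capture them in a simple structure and size the work from there. The sketch below is purely illustrative - the field names and weights are assumptions, not a recommended methodology:

```python
from dataclasses import dataclass

@dataclass
class WebAppScope:
    business_functions: int      # e.g. move money, checkout
    parameters: int              # distinct input parameters across the site
    user_roles: int              # roles needing horizontal/vertical authorisation checks
    live_environment: bool       # live sites limit automation, so allow more manual time
    linked_sites_in_scope: bool  # third-party hosted functionality included?

    def rough_days(self):
        """Purely illustrative sizing - the weights are placeholders, not a methodology."""
        days = 1.0                             # setup, walkthrough and reporting baseline
        days += self.business_functions * 0.5  # business-logic testing per function
        days += self.parameters / 40           # input testing, batched across pages
        days += self.user_roles * 0.5          # authorisation matrix per role
        if self.live_environment:
            days *= 1.3                        # manual-only testing is slower
        if self.linked_sites_in_scope:
            days += 1.0                        # extra liaison and coverage
        return round(days, 1)

print(WebAppScope(business_functions=6, parameters=120, user_roles=3,
                  live_environment=True, linked_sites_in_scope=False).rough_days())
```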
Friday, 1 October 2010
Taxonomy #2
Building on the last taxonomy post and on what Pieter Danhieux had to say, we thought we would add some further commentary and, at the end of the blog, two more definitions: one for Security Audit and one for Security Review.
One of the key points that Pieter made was that,
"(IT) Security Assessment is in my opinion a holistic name for different types".
This is in fact the crux of the issue and why we wanted to put a taxonomy together.
As security professionals we all interchange terms such as penetration test and security assessment to fit the situation we find ourselves in, and in some cases we will even use the same term differently within the same conversation!
While this approach can work within a skilled population of practitioners, it can lead to confusion for consumers of the service or those on the outside looking at their security team for advice and guidance. At worst it could be used to misguide a consumer into believing that they have received an adequate security test, while delivering nothing further than a discovery exercise.
The aim of the taxonomy is to have a publicly available resource that can be referenced.
When looking to establish the scope of a security test it is key to understand what has been requested by the consumer, and for the consumer it is key that they understand what it is that they have asked for and what will be delivered as part of the security test.
If both parties are aware of this then the relationship will be strong. This will enable open and constructive engagement, lead to a better understanding of the different levels of risk that testing can bring, and empower the consumer: for example, to ask for suitably skilled individuals if they do in fact wish to conduct real 'penetration testing' on live environments, or, knowing what such a test would attempt to do, to decide that a 'security assessment' would be a more suitable solution.
What the taxonomy is not trying to do is confine the approach that individuals want to take. Many security professionals will take a blended approach to deliver against the consumer's requirements, or have their own approach that adds further value. The taxonomy should be used as a base point from which the security team can show how they differ, leaving the consumer clear as to the minimum level of activity that will be completed.
The previous post around the taxonomy focused on actual testing of security, whereas Pieter's comment,
"Information Security Assessment also exists … but then it is usually - benchmarking against a standard- assessing policies and standards".
shows that there is more to it than just the testing angle, so in this blog we would like to share two more definitions with you:
Security Audit
Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).
Security Review
Verification that industry or internal security standards have been applied to system components or a product. This is typically completed through gap analysis, utilising build / code reviews or reviewing design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit).