Friday 17 December 2010

Being Prepared



With this winter in Scotland already a repeat of last year's freezing conditions, we are still astonished at how many people leave themselves at risk by being entirely unprepared. Not only does this cause them problems, it also affects those who are prepared. As a follow-up to the brief post on Rory Alsop's personal blog, we thought we'd provide a few notes on how to minimise the impact of adverse weather and of foolhardy, unprepared individuals on the roads.

Obviously the simplest solution is not to go outdoors - get stocks of food and drink in and batten down the hatches. Cosy, but not always workable, so let's have a look at what you can do if you do need to go somewhere.

Practice:
Okay, so Rory A is a petrolhead and takes any opportunity to go out on a racetrack, but knowing how to handle ice is within anyone's grasp. While the Andros Trophy could be a little excessive, having at least one skid pan session under your belt will get you through a lot of ice. You'll learn how to use the right amount of torque - unlike the many people we have seen over the last couple of weeks trying to drive under full power, wheels spinning and sliding, resulting in some interestingly stuck vehicles! The driving test in Finland includes a session on a slippery course - is it any wonder they do so well in the World Rally Championship?



Planning the route:
Look at an OS map to understand the hills. Last winter Rory A had a very tense hour driving the last couple of miles to Drumoak in Aberdeenshire because he didn't prepare his route (he trusted a TomTom... mistake!) - he ended up descending a very steep slope using the ditch on the right-hand side of the road as a runner to stop the car sliding off the left-hand side, which had no barrier other than some trees further down the slope. Lesson learnt, but he wouldn't ever want to go through it again.

Avoid motorways - you would think they would be fine as the inclines are minimal and they are wide, but unfortunately they are not sheltered, and when conditions deteriorate it is all too easy to be caught out, or to get stuck behind someone else who has been. When the inevitable crashes happen you can't get off a motorway easily, and being stationary in heavy snow can leave you stuck there for many hours.


Mechanical:

Defrost/de-ice your car every day. Not only will this help you avoid having to call out the AA/RAC/equivalent for your country, you will also avoid the doors freezing solid and ice building up inside (which can easily damage wiring). In addition you'll find it much easier to keep all your windows and lights clear of snow and ice - something many road users don't seem to understand. Personally we like to be able to see everything around us, and to ensure others can see us - you don't want to be anywhere near another car with the windows all frosted up and just a small patch on the windscreen for the driver to peer out of! Minimising risk here is a good thing (tm)

At the start of winter you really want to ensure the car is properly serviced: fresh tyres, new wiper blades, engine oil and antifreeze levels correct. Then take every opportunity to fill up the petrol tank - just in case you need to run the engine for warmth while stuck for days! In the more remote areas you should consider snow tyres, snow socks or even chains - they can make all the difference.



Supplies:
Everyone should have a blanket, sleeping bag or slanket in their car anyway. They are so cheap - or even free at garages - that you might as well. Not only are they essential to keep you warm if you do have to overnight in the car, they are also really useful if you are stuck - tucking a blanket or rug under the tyres can give a lot of traction.
Gloves and hat - yep, simple, but if you are trying to dig yourself out and the temperature is below minus 15 you want to conserve heat! Possibly a Cthulhu balaclava is the best solution.
YakTrax ice grips - get yourself a set of these essential accessories.

Snow shovel - if you can find one! The telescopic ones can easily be stored in the boot.
Drinks - it would be really nice to have a flask of hot coffee or soup, but realistically you can keep juice or cans in the car really easily. You can dehydrate very quickly when stationary and running the engine to keep the car warm, so keep some bottled water as well, and ideally some coffee powder (see below)
Food - cereal bars or chocolate are easy to store in a car for long periods of time.



The important bit - Geek essentials:

An inverter - ideally reasonably high wattage, so you can charge your laptop.

Torch - ultrabright LED torch, or for extra bling, one of these 10 Million Candlepower torches.

High gain antenna (at least 9dB) and 802.11 card if necessary. How are you going to update your blog, check out your Stack Exchange posts and twitter feed, follow the Met Office updates detailing the cold and ice coming your way, or keep yourself entertained with iPlayer if you can't connect?

Immersion heater - either a 12v car version, or a 240v one to run off the inverter - so you can make coffee.

USB Handwarmers - keep your typing speed up. Or your strafe speed in Brink!

eBook Reader - whichever flavour floats your boat.

In-car mp3 player - you don't want to run out of tunes before help arrives! At least half a terabyte of music will avoid any risk of boredom. Especially this kind of music!


Best wishes for the festive season - see you in 2011

7 Elements

Thursday 16 December 2010

Apple iOS Devices and Encryption

As I've had cause recently to spend some time looking at Apple iOS encryption, and picked up some information that was new to me, I thought it'd be worth putting hand to keyboard about it.

Recent iterations of Apple's iOS-based devices (iPad, iPhone, iPod touch) have a number of encryption features which can protect data on them. However, some descriptions of these features can leave people with a false sense of security, so it's important to realise what they can and cannot do for you.

First up is Apple's "Hardware Encryption". By default all data on the user partition of an iOS device is encrypted with keys stored in hardware on the device. Apple describe this as protecting "data at rest" and also enabling their fast remote wipe capability.

One interesting thing to realise about this capability is that it is not designed to protect user data in a "lost or stolen device" scenario. Looking at my own iPad, which has the latest version of iOS installed, I found it was initially possible to get access to all the user information stored on it without knowing the passcode.

Accessing this information can be done by booting an alternate operating system and then using SSH to view and copy data from the device over the Apple connector cable (a description of the requirements and process for setting this up is available here). So whilst this attack is relatively technical, there's really no major barrier for a technically savvy attacker, as all the information required is in the public domain.

Apple also have an additional layer of protection available: the Data Protection feature. This encrypts specific information on the device with a key derived from the user's passcode.

There are two interesting things to note about this feature. Firstly, it requires applications to specifically support its use, and at the moment there don't appear to be many that do. From Apple's side, only their mail client supports it in the current iOS release.

Secondly, if a device has had iOS 3.X on it and has been upgraded to iOS 4.X, Data Protection is not enabled until the device has had a complete operating system restore carried out on it, as described in this Apple Support Document.

Once Data Protection is enabled, e-mail data seems to be quite well protected, although it's worth pointing out that as the key is derived from the user's passcode, it becomes very important to ensure that the user has a strong passcode set (i.e. not just the 4-digit simple passcode option) to prevent a brute-force attack.
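
To put the passcode difference in scale, here is a quick back-of-the-envelope comparison as a minimal Python sketch; the guess rate is an illustrative assumption, not a measured figure for any iOS device.

GUESSES_PER_SECOND = 10  # assumed rate; real rates depend on the hardware

def worst_case_seconds(alphabet_size, length):
    # time to exhaust the full keyspace at the assumed guess rate
    return alphabet_size ** length / GUESSES_PER_SECOND

pin = worst_case_seconds(10, 4)     # simple 4-digit passcode
strong = worst_case_seconds(62, 8)  # 8 characters, upper/lower case and digits

print("4-digit PIN: %.1f hours" % (pin / 3600))
print("8-char alphanumeric: %.0f years" % (strong / (3600 * 24 * 365)))

Even if the real guess rate were a thousand times higher, the strong passcode keeps exhaustive guessing impractical, while the simple passcode falls in well under an hour.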

From having seen some of Apple's technical information around Data Protection, it actually seems like a good concept for mobile device protection, so once it's more widely in use I think it'll greatly enhance iOS devices' resistance to attack in a "lost device" scenario, but at the moment it's a bit limited.

Thursday 25 November 2010

When was the last time you read an EULA?

Most if not all of us will approve the end user licence agreement by default and never give it another thought. I am the same; however, the other day I happened to download a dictation app for the iPhone and decided to read through the EULA. Here are my findings:


As part of the service the provider can:



  • "collect and use the contact names that appear in your address book"


  • "You allow (company name) to do so by enabling the Service."

It is then your responsibility to know they have done this and to update the software settings to prohibit this access. Once done, the provider will:



  • "delete all contact names that it has collected from your address book."

The first question has to be: why do they need my contacts? Secondly, why is this an opt-out process that attempts to close the barn door after the horse has bolted?

On the positive side:



  • "will not use the data you provide to contact any of the contact names that appear in your address book for any reason, nor will (company name) share contact names you provide with any third party."

Oh, that's OK then, nothing to worry about. But wait! Within the EULA is a URL link to a further page that states:



  • "(company name) would like its software to send speech data to (company name) to improve the accuracy of this and future products or services. We do this because our software and other offerings can learn from experience about the language you use. “Speech Data” means the audio files, associated transcriptions and log files provided by you hereunder or generated in connection with the product or service. By clicking the “ACCEPT” button when installing the software, you agree to the collection and processing of such Speech Data as set out in this privacy policy."

OK, so now they have my contacts and take copies of my audio files to keep? I hope I never say anything bad about one of those people in my phone book! So who has access to this information?



  • "The only people with access to this data will be our employees, research partners, permitted agents, sub-contractors etc. on a need to know basis, all of whom are bound by obligations of confidentiality to keep the data strictly confidential."


Oh OK, everyone then. And where will this data reside?



  • "will transfer the personal data to its data collection sites. These may be located outside of the European Economic Area (EEA). However, (company name) shall ensure that any such transfer is compliant with the European Union Data Protection Directive."

So the transfer of data will be compliant with the EUDPD - but how about the processing and storage of this data? Not to mention the lack of any discussion around information security at the data collection sites.


In summary, I have no reason to presume that this organisation has any intention of treating my personal data in a malicious or evil manner, or that they are doing anything wrong or unethical. The reason for this blog is to highlight that we shouldn't blindly trust organisations, and that we should be more aware of the contractual rights we grant when accepting EULAs.


To that end I made a personal decision to not click 'accept' and I removed the app from the iPhone.


Back to old school note taking.

Wednesday 17 November 2010

Security in Scotland

A topic very dear to the 7 Elements team is the development of the Information Security profession, specifically in Scotland, and we thought it would be worthwhile posting some information on initiatives in Scotland that help with this aim, as well as discussing areas where stronger involvement from the wider industry would be welcomed. We have selected a few of the key organisations and events, but if you feel we have missed one, please let us know and we'll update this post.

The Institute of Information Security Professionals, of which Rory Alsop is the Scottish chair, is providing support and guidance to universities and companies across the UK through the Graduate Development Scheme, Academic Partnerships, the Accredited Training Scheme and the IISP Skills Framework. The IISP's mission is to be the authoritative body for information security professionals, with the principal objective of advancing the professionalism of the industry as a whole. Whilst the existing IISP membership in Scotland is strong, I would encourage individuals and companies to visit the website or speak to representatives to understand what they can get out of membership (at all levels, from student through to full membership) and, more importantly for the industry, what they can offer in return from their own experience or skills. The IISP always welcomes speakers who have a story to tell in the information security space, so please get in touch if you would like to present at one of our quarterly events.

Similarly, ISACA aims to define the roles of information systems governance, security, audit and assurance professionals. Through close links with local industry, ISACA Scotland provides guidance, benchmarks and effective tools for organisations in Scotland. The majority of members in Scotland have the CISA certification so here there is a very strong focus on audit and control, but we are seeing increasing numbers in security management, governance of enterprise IT and risk and information systems control. Like the IISP, ISACA Scotland would welcome guest presenters or new members - the global knowledge base and information flow are extensive and the opportunities for networking are invaluable.

The Scottish Universities, under the leadership of Professor Buchanan, have created the framework for a Centre of Excellence in Security and Cybercrime in Scotland - with strong links already forming between academia, law enforcement, industry and professional bodies such as the IISP. One goal is to give academia a greater awareness of real-world security issues and activities through a number of avenues, including volunteer work, summer placements and guest lectures. From the perspective of your organisation, if you find that when hiring software developers, for example, you need to give them additional training in secure development or spend resource remediating vulnerable code, the argument for providing a small amount of resource to help develop coursework in these subjects, or to give the odd guest lecture, is a very strong one. 7 Elements strongly buy into the concept that, as an industry, we can make great improvements simply by giving new entrants the benefit of at least some of our years of learning the hard way.

The e-Crime Scotland website was officially launched at the Scottish Financial Crime Group Conference on the 28th of October. It has been set up with support from, and using the framework developed by, the Welsh Assembly, demonstrating an excellent level of sharing of expertise and resource. The website provides a portal of information on e-crime and a reporting mechanism, and is planned to develop as Scotland takes greater ownership of content.

The Scottish Financial Crime Group, under the ownership of the Scottish Business Crime Centre, has been working with law enforcement and the clearing banks for the last 35 years. More recently, through the annual conferences and an active presence in many forums, it has been in a good position to draw on expertise from a wide range of specialist individuals and organisations to develop opportunities to disrupt the criminal element in our society. Membership of the SFCG, or at the very least attendance at the annual conference, is invaluable both from a learning perspective and as an opportunity to influence discussion relating to financial crime.

The National Information Security Conference is held in St. Andrews each summer and provides speakers renowned in their fields, education, and an excellent opportunity to network with like-minded individuals from industry and security experts. This three-day residential event attracts many security professionals who are trying to drive the industry forwards and should not be missed!

On the more technical front, the Scottish OWASP chapter, headed up by Rory McCune, is a growing group of individuals from across various industries focused on improving web application security. Join the mailing list to find out about meetings, initiatives and so on. The scope of interest includes everything from SCADA to online banking and from smart meters to social networking.

Monday 8 November 2010

Key Security Risks and Practical Remediation - ISACA Event notes - October 26 2010

Rory Alsop, Vice-President of ISACA Scotland and chairman of the Scottish branch of the IISP, chaired a session titled "Key Security Risks and Practical Remediation". Audit Scotland hosted the session, and we had a good turnout representing the financial and government sectors as well as law firms and retail.

A quick introduction from round the table confirmed that the problems faced were common - low resource or budget, escalating security and risk requirements, ever-increasing threats, and targets spreading beyond the large financial organisations - so the opportunity to outline some simple, effective activities which any organisation could carry out was highly appropriate.

For our regular readers some or all of the following should be old news; however, we still see so few organisations carrying out basic remediation activities that we recommend reading on to see where you can improve the security in your environment through these simple steps. The risk areas were taken from OWASP, Verizon and WHID work to identify the most common issues.

We would stress that nothing here is a magic bullet to cure all ills, but if you can take some of the actions listed you will be improving your security baseline without incurring too high a cost:

Input Validation

Very old news, but:
  • The top two web application security risks (OWASP Top 10 list) are Injection and Cross Site Scripting, both of which can be successfully mitigated by strong input validation
  • The 2010 Data Breach Report by Verizon lists the top two causes of breaches as use of stolen credentials and SQL Injection
  • Examples include WorldPay from 2008 (over $9.4 million stolen) and the Royal Navy this week - this is still an issue
This is a relatively easy area to improve on:
  • Popular frameworks have input validation modules – why not use them?
  • With modern applications, a call to an input validation module is often straightforward
  • Never trust the client – validate all input at server side
  • White listing or black listing - both are acceptable and have their own pros and cons
Also think about output encoding – correctly encoding data before it is passed to an interpreter or rendered in a page helps prevent SQL Injection and Cross Site Scripting attacks, although it typically requires more effort to accomplish.
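
To make this concrete, here is a minimal Python sketch of server-side whitelist validation combined with a parameterised query; the table and field names are invented for illustration.

import re
import sqlite3

# Whitelist validation: accept only the exact format we expect.
ACCOUNT_ID = re.compile(r"^[0-9]{1,10}$")

def get_account(conn, account_id):
    if not ACCOUNT_ID.match(account_id):
        raise ValueError("invalid account id")
    # Parameterised query: the driver handles quoting, so even input
    # that slipped past validation could not alter the query structure.
    cur = conn.execute("SELECT * FROM accounts WHERE id = ?", (account_id,))
    return cur.fetchone()

The two controls are complementary: validation rejects obviously bad input at the edge, while parameterisation ensures the database never treats data as code.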

Brute Force and Dictionary attacks

More old news, but:
  • The 2010 WHID Report by the Web Application Security Consortium lists Brute Force attacks in the top 5
  • Tools to carry out brute force or dictionary attacks are simple to use, prevalent and free
  • Humans are still pretty bad at choosing strong passwords

Remediation should be in a number of areas:
  • Brute forcing shows up in logs – typically it generates a high network load and can usually be spotted by simple statistical analysis tools
  • Utilise exponential delays - e.g. 5 seconds after the first failed attempt, 10 after the second, 30 after the third and so on (see the sketch after this list). This rapidly makes brute forcing unusable without requiring account lockouts (which often consume helpdesk resource)
  • Awareness training works – for a few months at a time. Combined with regular password strength audits it can have a lasting effect
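
A minimal sketch of the escalating-delay idea in Python; the schedule mirrors the example above and should be tuned to your own environment.

import time

# Delay, in seconds, indexed by the number of failed attempts so far.
DELAYS = [0, 5, 10, 30, 60, 120]

def throttle(failed_attempts):
    # Sleep before processing another login attempt for this account;
    # cap at the largest delay rather than locking the account out.
    delay = DELAYS[min(failed_attempts, len(DELAYS) - 1)]
    time.sleep(delay)

At five failed attempts an attacker is already down to one guess every two minutes, which turns a dictionary run from minutes into months without generating a single helpdesk call.
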
Prevalence of 0-day exploits

For organisations with significant assets that are targeted by organised crime (FS, Government, Pharmaceuticals etc.) there's an increasing likelihood that 0-days will be part of the attack. This throws an interesting light on defensive controls other than patching and configuration, as you can only patch for weaknesses you know about.

Use of IDS/log monitoring becomes more important - you won’t necessarily catch the initial attack (no signature available), but you may be able to catch the attacker doing things afterwards. At the very least, detective controls can help the incident response and clean-up.
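
As a trivial illustration of a signature-less detective control, the Python sketch below flags outbound connections never seen during a baseline period; the file names and record format are invented for the example.

# Compare today's outbound destinations against a learned baseline
# and report anything new - no attack signature required.

def load_destinations(path):
    # each line: "src_ip dst_ip dst_port", exported from flow logs
    with open(path) as f:
        return {tuple(line.split()[1:3]) for line in f if line.strip()}

baseline = load_destinations("flows_last_30_days.txt")
today = load_destinations("flows_today.txt")

for dst_ip, dst_port in sorted(today - baseline):
    print("new outbound connection: %s:%s" % (dst_ip, dst_port))

Real deployments need whitelisting and noise handling, but even something this simple can surface the command-and-control traffic that follows a successful 0-day.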

Defence in depth – another old mantra, but it helps. While a 0-day can get an attacker through a security device, or an application control, multiple layers require more work, or a longer time frame – during which time the issues may be patched.

Client-side Attacks

Krebs reported on the increasing wave of attacks targeting Java (not JavaScript) on client PCs. It's a common mistake for client patching not to touch Java (especially as some applications require specific older versions).

Microsoft and Qualys have both confirmed the scale of the issue, with over 40% of all PCs being vulnerable, over 90% of all successful exploits in the Blackhole toolkit, and over 50% of those in the SEO Sploit Pack, being through Java. The Crimepack and Eleonore exploit packs also show Java flaws to be the leading exploit vectors.

The simple answer is to remove Java from machines. Most do not need it!

For those that do need it, keep it up to date. Very few developers update their code to work with the latest revisions, which can hinder user uptake of the latest Java update, so ensure your developers are kept up to date.
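
If you want to know where you stand, a quick inventory check helps; this Python sketch simply asks the local java binary for its version (java historically writes its version banner to stderr).

import subprocess

try:
    result = subprocess.run(["java", "-version"],
                            capture_output=True, text=True)
    print(result.stderr.strip() or result.stdout.strip())
except FileNotFoundError:
    print("Java not installed on this machine")

Run across the estate via your systems management tooling, this gives a rough picture of how many machines still have Java at all, and how old it is.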

As part of any audit, look at the budget assigned for product maintenance and ongoing development.

The Cloud

Moving to ‘The Cloud’ is popular – it can save money on hardware costs, it is flexible, it can save power and is generally considered a good thing™ for business.

  • Unfortunately it tends to break security structures, as layers which used to be in different environments, such as DMZs, may now be on the same physical platform, and may no longer have firewalls or other access control devices present
  • The volatile and dynamic nature of virtual environments can mean asset registers and licensing are difficult to manage
  • The tasks which used to be separated out to network, system, database and platform administrators may now be carried out by one team
Good practice includes the following steps:

  • Model the new architecture on existing good practice
  • Be aware of the requirements of a highly volatile asset register, and licensing requirements for dynamic assets
  • Understand segregation of duties needs between administrators

Widespread DDoS

WHID and Verizon indicate a dramatic increase in Distributed Denial of Service attacks:

  • Blackmail, especially of internet gambling sites, is on the increase
  • Punishment DDoS (for example ACS:Law), removing websites from the internet in response to an action
  • Botnet slots available for hire at cheap rates
(Update - the DDoS against Burma last week shows the traffic levels which can be generated: at 10-15 Gbps it was significantly larger than the 2008 Georgia attack, which peaked at 814 Mbps.)

It is very difficult to resist a Distributed Denial of Service attack – even a small botnet can overwhelm a company’s Internet connection.
Concentrate instead on resilience – do you have a fully tested business continuity plan or IT disaster recovery plan which can cope?
Does your ISP have mechanisms to mitigate such an attack?

IPv4 Address Space Exhaustion

A little bit more off the wall –

Whilst some of the stories around at the moment are probably more scaremongering than anything else, it seems likely that 2011 is going to see a greater restriction in IPv4 address availability and subsequently a big push to IPv6.

The interesting part is that a lot of security controls depend on IPv4 ways of thinking, and there's also a big risk that new IPv6 implementations will require different ways of implementing network security and will be buggy early on.
  • Review your networks to understand the security structures in the infrastructure and protocol stacks
  • Work with your telecommunications and network service providers to ensure you are prepared
More Generally

I would remind auditors that they need not only to ensure that each security management process is in place, but that it actually works.
A modicum of technical assurance work (vulnerability analysis by an experienced person) will go a long way.

Work in partnership with IS specialists to:
  • Add value to audits and gain a more holistic picture of the current state of security
  • Understand new threats and risks
  • Always take a holistic look – what are the threats to the business, not just to IT
  • Improve your security testing process – we have demonstrated over 30% savings through managing security testing and assessment efficiently
Threats will continue to develop – aim for resilience!

References

OWASP Top Ten
http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

Verizon Data Breach Report
http://www.verizonbusiness.com/resources/reports/rp_2010-data-breach-report_en_xg.pdf

Krebs Java Security Report
http://krebsonsecurity.com/2010/10/java-a-gift-to-exploit-pack-makers/

WHID Security Report
https://files.pbworks.com/download/loBVUfSYDp/webappsec/29750234/WHIDWhitePaper_WASC.pdf

Potaroo IPv4 Address Report
http://www.potaroo.net/tools/ipv4/index.html

Tuesday 2 November 2010

IISP event on 4th November 2010

The next Scottish branch of the Institute of Information Security Professionals event is on the 4th of November, and will be kindly hosted by Napier University in Edinburgh (room F.29 at the Merchiston campus). Come along if you are in Scotland and:
  • a member of the IISP
  • in the Information Security industry and thinking of joining the IISP but wondering what membership can give you
  • studying computer science, software development or associated undergraduate courses and want to know more about the industry
  • or just want to network with like minded security professionals
Rory Alsop, the chair of the Scottish branch, is delighted to welcome two speakers:
  • IISP Member - Matthew Pemble: "Preparing for the End - Data Destruction". Matthew is a popular speaker at many conferences and events and from two aspects of his day job has a pretty interesting take on this topic. Find out more at Idrach's website.
  • IISP Programmes Manager and Chief Operations Officer - Triona Tierney: "The IISP Graduate Development and University Outreach Programmes" - if you are seriously considering information security as a career, this talk could be invaluable.

Kickoff is at 6 for 6.30. Please do come along and support your local branch, join in the lively discussion and meet fellow IISP members in your area. For more information and to register for this meeting, please email events@instisp.com

The best source for joining instructions/maps etc is the Napier Merchiston page - it includes a link to Google Maps.

CYA (Consider Your Audience)

Just picking up on one of the points from the discussion Rory Alsop and Chris Riley were having on the penetration testing industry, I've been giving some thought to the issue of communication.

With any presentation or document, probably the most important thing to think about is "who will be receiving this information?" Certainly whenever I'm preparing a presentation that's the first thing I think about, as it drives the rest of what I create.

Now in the penetration testing industry the main means of communication is the post-test report. Whether that's purely a written document that's handed over at the end of testing, or handled as part of a wash-up meeting, it's the tester's main opportunity to communicate what they did and what the real value of the test was.

Unfortunately, in a lot of cases the opportunity to customise the report heavily for a specific audience isn't available, as it will go to multiple groups, but a wash-up meeting can offer some good opportunities to focus on the test in different ways.

To illustrate the point, let's take a pretend test for Hypothetical Corp, which looked at their perimeter network and their main transactional web application.

Let's say there were findings for unencrypted management protocols on external devices, out-of-date web server software, and several web application findings in session management, authorisation and input validation (SQL Injection and XSS) - the standard types of thing you may see in many areas.

Now we've been asked to do a wash-up meeting to explain our findings. Depending on who the main people at the meeting are, I'd say there are at least three completely separate ways to present this information.

If we're presenting to developers and admins, you could focus on:
  • Exactly which areas of the application had input validation issues, the best ways of mitigating those, and possible ways that their application framework could be used to help them.
  • For the unencrypted management protocols, you might talk about sensible alternatives that still allow the admins to get access (VPN, move to encrypted protocols, partial mitigation like source IP address restrictions etc.)
  • Demonstrate authorisation problems in the application and explain how an attacker might be able to get additional access to the system. Again suggesting how you'd recommend that they approach fixing it can be very useful.
If we're presenting to the IT Security team you could take a slightly different tack:

  • Talk about the potential for an attacker to use the issues discovered (e.g. SQL injection) to expand their attack and compromise additional areas of the environment.
  • Talk about current trends in attacks, providing some information on attacks that are likely.
  • Look at what data is potentially compromised by any of the attacks, particularly in terms of any credit card or personal information that may have a regulatory impact if it's lost.
Lastly, you may be "lucky" enough to present your findings to senior management, and again there's probably a different set of things that they're interested in:

  • Potential business impacts of one of the findings being exploited. Loss of customer data, regulatory fines etc.
  • Where the organisation is, in comparison to other companies in their industry or area, is likely to be of interest.
  • Demos (if possible). My experience has been that actually demonstrating how easy a compromise is can be quite convincing to senior management.
So here we see three different presentations, all from the same test, and there are numerous other ways that this information could be tailored, depending on the audience.

The main point of all this is that considering the recipient of any information is key to getting your message across.

Building in time to understand the potential audience at the start of the engagement can be invaluable. For more mature buyers of security testing, agreeing standard formats, or documents with standard sections that can be pulled out as needed, can help to get the message across effectively - e.g. a report with a targeted executive summary, plus an XML section for the techies so they can pull the data into their own reporting tool.
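
As a sketch of what that machine-readable section might look like, the snippet below emits a findings fragment using Python's standard library; the schema and the finding itself are invented for illustration, not an industry standard.

import xml.etree.ElementTree as ET

findings = [
    {"id": "F-001", "title": "SQL injection in search form",
     "risk": "high", "affected": "https://example.com/search"},
]

root = ET.Element("findings")
for f in findings:
    node = ET.SubElement(root, "finding", id=f["id"], risk=f["risk"])
    ET.SubElement(node, "title").text = f["title"]
    ET.SubElement(node, "affected").text = f["affected"]

print(ET.tostring(root, encoding="unicode"))

Agreeing a structure like this up front means the same test output can feed both the written report and the client's own tracking tools.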

One further point, which we have seen on occasion, is where a test produces findings which are the responsibility of different 3rd parties. Delivering a single report back may not be allowed, as confidentiality could be breached. What we have had to do in these situations is write up multiple reports for the same level of audience, but with sections redacted. Planning for this at the start of a test will save you time.

Sunday 31 October 2010

Discussion re the Penetration Testing Industry

Chris over at Catch22 just posted up this excellent blog article.

There is a huge amount of commonality in our thinking - some extra thoughts on this:

Communication - over the last 12 or so years I have tried various types of business-focused training for testers, and there are very few people who I would say are at the top of their game in both testing and reporting in business language. The few around are worth their weight in gold, but very rare, so my fallback solution was always to have a member of the team responsible for business QA and reporting. They'd still need a high level of technical expertise, but the focus is different. (I do like Chris's idea of a tech reporting course though!)

Relevance - understanding the customer's needs is definitely key. As we've discussed, working with the customer so they understand what their options are, the value in different services etc., should be a part of every engagement.

Accountability - two thoughts on this. One is the name-and-shame approach Chris mentions, but there are bound to be legal challenges, so the alternative is to use certifications (e.g. CREST, SANS etc.) to be able to demonstrate at board level that you chose the right testers for the job, as certification is effectively the entry qualification to the industry. In addition, you could go down the route of extensive logging (which would also help with the repeatability point below) so you can prove every step.

Standards - absolutely! See our earlier posts on taxonomy and nomenclature to understand an element of where we see standards going, and we are planning to continue to work with a good range of experienced security individuals to define a set of industry standards.

Repeatability - I think a number of organisations already do this where possible. On a recent project, my customer wanted at least a minimum level of evidence (including the parameters used and screenshots) to allow them to replicate the issue. That is only applicable for certain types of test, but it goes a long way to help, and it is relatively light on resource so shouldn't price you out of the market.

The great thing is that more and more people are aiming in the same direction. This has been a long time coming, but with passionate individuals, organisations and bodies, I think the move from the end of 2010 into 2011 will see a step change in the professionalisation of the industry.

Friday 29 October 2010

Scottish Financial Crime Group Conference Highlights

This year's SFCG Conference was held at the Corn Exchange in Edinburgh yesterday and was a great success, with a wide range of delegates from financial services, consultancies, vendors, academia, the public sector and law enforcement (Scottish and Welsh police, and the FBI).

For me the key highlights included:

A presentation by Robert Hartman of KPMG on bribery and corruption in the financial sector. Some very worrying statistics, but also a down-to-earth approach to the problem. Robert also highlighted two useful sources of information: Transparency International and the Trace Compendium.

A presentation on the risks around social media by DI Keith McDevitt of the SCDEA - a topic which is close to my heart and one which 7 Elements plan to present at one of the winter New Media Breakfast Briefings. There was lots of interest in this area, and I had a good discussion with a number of delegates afterwards.

The launch of the e-Crime Scotland website - with a huge amount of support from the Welsh Assembly Government, who launched theirs some time ago, Scotland now has its own portal for information on e-crime, a reporting mechanism, and a gateway into the topic.

There was also a surprise talk by Professor Martin Gill of the University of Leicester, who stepped in when one speaker was held up in transit. He spends a lot of his time interviewing criminals in prison and taking them back to the crime scene to demonstrate how and why they commit their crimes. Some of his findings are very counter-intuitive: for example, when confronted with the automatic lights homeowners may have fitted to the outside of the house, most burglars use them to scope out the property, identifying tools, escape routes, entry points and so on. Not one stated that the lights would put them off, as no-one ever checks when an automatic light comes on! Similarly, CCTV was not seen as an issue.

Another useful point which came up: when asked what they thought the likelihood of getting caught was (given the options high, medium, low, none), they laughed at the question and said "zero likelihood" - otherwise they wouldn't commit the crime. The corollary is that if we can persuade offenders, at the moment they are about to commit a crime, that they will get caught, then they are very unlikely to go through with it.

Although his talk was mostly about burglars, shoplifters and murderers, the same concepts hold true for white collar crime, so can we find ways to make criminals less certain they will get away with it at the time?

A member of the local fraud squad did tell me his solution was to push for removal of property under the Proceeds of Crime Act: going to prison without the reward of a couple of million pounds at the end of the term can suddenly be a less enjoyable prospect, and letting criminals know that 'getting away' with a small stretch is no longer profitable can be a valuable deterrent.

I caught up with Lindsay Hamilton of Cervello - his company carries out database auditing (in fact he has joined forces with The Pete Finnegan to offer an awesome tool for Oracle auditing).

There were some interesting exhibitors this year - M86 Security (the guys who incorporated Finjan into their product line) had some good chat around secure web gateways.

It was, as ever, a great networking opportunity - I always meet a lot of old friends and colleagues, as well as clients old and new, and these events give a good chance to catch up. One individual surprised me, as out of context I did not recognise her - a detective constable with the Specialist Fraud Unit. It turns out she sings with the Lothian and Borders Police Choir (who I play session guitar for on an occasional basis).


Rory Alsop

Friday 22 October 2010

Incident Response

Today's blog post is based on a presentation I gave to the Scottish OWASP chapter around using security testers as part of your incident response approach.

The talk focused on the different skills and opportunities that non-forensic teams can bring to the party in dealing with a security incident, and on where you really need to use forensically trained teams.

The highlights being:

Incident Response -

1. The key question to ask at the beginning of any incident is:

"Is this likely to go to court or involve law enforcement?"

If there is any possible outcome that turns this question into a yes, then you have no choice but to use an approach that meets the evidential handling requirements of the legal jurisdiction of the incident. Within the UK, the foundations for this approach have been documented by the Association of Chief Police Officers (ACPO) and serve to ensure that evidence handling, investigation practices and supporting activity are carried out legally.

The four high-level principles set out by ACPO are:

Principle 1: No action taken by law enforcement agencies or their agents should change data held on a computer or storage media which may subsequently be relied upon in court.

Principle 2: In circumstances where a person finds it necessary to access original data held on a computer or on storage media, that person must be competent to do so and be able to give evidence explaining the relevance and the implications of their actions.

Principle 3: An audit trail or other record of all processes applied to computer-based electronic evidence should be created and preserved. An independent third party should be able to examine those processes and achieve the same result.

Principle 4: The person in charge of the investigation (the case officer) has overall responsibility for ensuring that the law and these principles are adhered to.

Basically, if you get involved in such an incident then make sure you work with forensically trained teams, be it internal resources or partnering with a specialist service provider.
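
As one small, concrete illustration of the audit-trail idea in Principle 3, recording a cryptographic hash of an evidence image at acquisition time lets an independent examiner later confirm the copy is unchanged. This Python sketch is illustrative only, and no substitute for proper forensic tooling; the image file name is a placeholder.

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # hash the file in chunks so large disk images fit in memory
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("disk_image.dd"))  # record this in the case notes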


2. In terms of basics for incident response - well, PPPPPP! That's Prior Planning Prevents P$£& Poor Performance!
Make sure you have a documented approach to incident management, with defined roles and responsibilities, and that you have tested it all out prior to dealing with your first incident! Otherwise it is unlikely to go well. A few pointers that I have picked up over the years to aid this:

3. Use IS specialists to provide advice and guidance to the incident group, rather than having them run the incident. This approach enables your specialist to be just that: the specialist.

4. If the incident is in response to an event that could impact the business's ability to meet its objectives (say, make money?) then it should be the business representative that makes the final decision (based upon sound advice and guidance from the specialists). Too many times I have been involved in incident calls where the business rep has looked to offload the decision-making onto the techies.

5. If you are dealing with a complex issue that requires both technical and business-focused teams to be working, think about splitting the two into focused incident groups, with a link person who delivers messages between them. This compartmentalises the noise and chatter that is generated: the business team do not need to know how specific lines of code are going to be updated to stop that SQL injection attack, and the technical teams don't need to be listening to the business managers talking about media statements and legal advice. This approach lets each team focus on the key issues that they have to deal with.

6. Have a dedicated (for the period of the incident) resource to manage incident calls, take notes and track actions. There is nothing worse than sitting on a call for three hours, then reconvening later to find out that no one has actually done anything! This approach also helps to establish a timeline for the incident, enabling a more effective post-incident review.

7. If you are not having to go down the forensic route, it could be useful to engage the services of your friendly hackers (we call them security testers) to provide their expertise.

Security testers enjoy problem solving, they can generally code (which is very useful when managing large amounts of data) and they have an innate understanding of exploits and the reality of what can be achieved by hackers. This insight can go a long way to gaining an understanding of what has happened and the risk exposure to the business, and to highlighting potential options for recovery.

Your friendly hacker is also very good at testing any fix to see if it successfully mitigates the exposure, or at conducting targeted assurance tests to understand whether you are vulnerable to the same issue in other areas of your organisation.


In summary, you can gain a huge amount of advantage through the use of security testers as part of your incident response approach.

However, remember that sound forensic practices need to be used in cases that will involve law enforcement or the courts! Given the choice of putting a forensic engineer in front of the courts or a pale, caffeine-addicted hacker, I will choose the forensics engineer every time. :-)

Thursday 14 October 2010

Making the test go smoothly

Carrying on our series of posts about the various stages of the security testing process, we're moving on to another frequently overlooked piece of the puzzle, which is test logistics.

Most testing that's carried out has quite tight timescales. On a big project, security testing will usually get a specific window to be completed in, so it's important that everything goes smoothly. Also, as testing is usually charged by days of effort, time not spent testing due to logistics problems is essentially money down the drain.

So what are the main causes of logistics problems, and some possible solutions to them?
  • Credentials - For a lot of testing, authentication to systems is a requirement. Without credentials the tester can't do the work, and depending on the company and the application, getting new users onto the system can take a while. So it's always worth ensuring that, as a tester, you've clearly laid out what accounts you need and, as a client, that you kick off the processes to get them sorted well before the test.
  • Letters of Authorisation (LoA) - Making sure that no-one's going to accuse you of illegal hacking is kind of important for a smooth-running test :o) Especially where there's a 3rd party involved (e.g. hosting companies, outsourcing companies), the LoA is a very useful way to ensure that all relevant parties are aware that the testing is happening, know the dates, and confirm that they're happy for it to go ahead. A useful practice is to give the LoA a term quite a bit longer than the expected test window, as delays often happen for various reasons, and it is generally easier to have one LoA covering a month for a week-long test than to raise separate one-week LoAs if things get delayed.
  • Technical contact - While sometimes your customer will be the technical contact, in large companies it's likely that there are other departments or even other companies involved, and it's always a good idea to have the names and phone numbers of the right people, so that if a system crashes during testing you can get in touch and minimise any problems.
  • Escalation contact - Especially when testing environments which support large customer groups, financial transactions or critical data flow, being able to provide timely information on hold-ups, the business impact of technical issues, critical findings which just can't wait until the end of the test etc. to the right people can be a lifesaver - both for the customer, and for the tester. Without an escalation contact, tests are often halted for all manner of glitches, including those unconnected with the test. The contact is often all that is needed to provide context to business so they can make informed go/no-go decisions.
  • Network Access - Not a problem on every test, but internal tests, especially for large companies, can run into problems when it's not possible to get a clear network connection to the systems to be reviewed. It is always worth connecting a machine before the testers get to site to make sure you can get an IP address and reach the in-scope systems (a quick check like the sketch after this list is enough).
  • Availability of system to be tested - Might seem like a no-brainer, but applications, systems or websites are sometimes down for maintenance, operational testing etc., and the teams working on them may be separate from the individuals liaising with the security tester. All this does is incur cost, and eat into test windows, so we would recommend ensuring all relevant teams have visibility of the security test, and its requirements.
  • Jurisdiction - Especially for testing companies operating in different countries, getting agreement (often in the form of an LoA or specific contract terms) for data potentially being accessed in another jurisdiction, for example when testing a European organisation from the Far East. Looking into the legal requirements up front at the initial scoping stage can save a whole lot of pain further down the line.
  • Desk and chair! - Shouldn't need a mention, but testers do need somewhere to sit :) Many of us have carried out tests huddled on the floor of a cooled data centre, but it is good practice to follow basic health and safety policy.
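
The network access point above is easy to verify in advance. A few lines of Python confirm that the agreed scope is reachable from the desk the tester will use; the hosts and ports here are placeholders for your own scope list.

import socket

SCOPE = [("10.0.1.10", 443), ("10.0.1.20", 22)]  # example in-scope services

for host, port in SCOPE:
    try:
        with socket.create_connection((host, port), timeout=5):
            print("%s:%d reachable" % (host, port))
    except OSError as e:
        print("%s:%d NOT reachable (%s)" % (host, port, e))

Running this the day before testing starts turns a wasted morning of firewall change requests into a five-minute check.
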
With a bit of effort up-front logistics shouldn't get in the way of a good test, but they can trip you up, so definitely worth considering.

Friday 8 October 2010

The Web Hacking Incident Database 2010

The Web Hacking Incident Database 2010 part-year results have just been published (WHID 2010). While the statistics are based on a limited population, they represent the tip of the iceberg of a much larger issue: many incidents will go unreported to the wider public for many reasons, some due to commercial decisions and others because the organisation is not even aware it has been compromised.

So what are the key points from this report and our take on what this means?


  • "A steep rise in attacks against the financial vertical market is occurring in 2010."

The report highlights that this is in fact against users and not the financial institutions directly - attacking end points to obtain customer account details and then using the credentials to move money away.


  • "Banking Trojans (which result in stolen authentication credentials) made the largest jump for attack methods."

This finding goes hand in hand with the first point about the targeting of the financial sector. The use of Trojans has been a well-established route for cyber criminals to gain access to sensitive information; the change here is in the organised element of online crime. With global networks and the use of mules (individuals solicited to aid the extraction and movement of money from accounts), criminals are now able to use the stolen credentials to cash out with large amounts of money. The Zeus Trojan is a recent example of this escalation, with 37 people arrested and charged with being members of an international crime ring that stole $3 million.
Direct attacks against financial institutions do happen, and some are successful; the parallel between client-side attacks and direct attacks is the globally coordinated use of mules to cash out.


  • "Application downtime, often due to denial of service attacks, is a rising outcome."

An interesting finding, and one that sits with our view that you need a resilient approach to security. A good statistic here would have been a comparison between actual downtime and any 'Recovery Time Objectives' that an organisation should have as part of its business continuity plans. Are organisations resilient enough to meet their business objectives or not?


  • "Organizations have not implemented proper Web application logging mechanisms and thus are unable to conduct proper incident response to identify and correct vulnerabilities."

This echoes one of the core principles of a resilient approach to security that we outlined at OWASP Dublin 2010: the need to be able to effectively detect and react to an attack.
We will continue to look at the world of security resilience in further posts.

All in all, concise points and clear graphics make this a good read and well worth a view. If you liked the WHID then head over to the Verizon 2010 report, which draws upon a wider population for its statistical analysis.



Thursday 7 October 2010

What to test?

We're going to use this post to continue the theme of talking about how the process of setting up and delivering a security test (or penetration testing) can be looked at and improved.

We spent the last couple of posts looking at getting the name right, which is important to ensure that everyone's on the same page with what the test should deliver.

The next piece to look at is "what to test?". Getting the scope of a test right is critical to ensuring that the client gets best value for money and the tester gets enough time to actually do the work. Remember that this should all be driven by a clear business need and an understanding of the risk environment for the test to add real value to the organisation.

The fact of the matter is that a tester could spend an almost infinite amount of time testing most environments - you just keep going deeper into the system you're testing. Once you've gone beyond basic version and vulnerability checking, you can look at password guessing, then fuzzing all the services running, then reverse engineering all the software, and so on. Of course, it'd be a rare test that provided that kind of time.

So, with the assumption that there's never enough time, how can you decide what a reasonable time to spend on a test is? There are a number of techniques that can help the estimation process, and they vary depending on what type of system is under review. We'll cover a few of the more common ones here.

External Infrastructure Reviews - This is the "classic" external security review that most medium/large companies should be doing on their identified external perimeter to ensure that there are no major holes that an attacker can waltz through. Assuming that we're going for a security review (as defined earlier), the major variable to consider in the scoping exercise is the number of live services on the hosts under review.

We think that this is a better approach than the commonly used "number of IP addresses", as ultimately what the tester is assessing is live services. If you've got a Class C to review but there are only 2 servers with a couple of open ports, that's likely to be a quick job; but on the same Class C, if you've got 80 live hosts with tens of services each, you've got a really long haul to get a decent level of coverage.
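
As a back-of-the-envelope illustration of scoping by live services, the Python sketch below turns a service count into an effort estimate; the per-service figure is an assumption to be tuned against your own delivery data, not an industry benchmark.

HOURS_PER_SERVICE = 1.5   # assumed average, including write-up
HOURS_PER_DAY = 7.5

def estimate_days(live_services):
    return live_services * HOURS_PER_SERVICE / HOURS_PER_DAY

print(estimate_days(2 * 3))    # 2 servers with a couple of ports each
print(estimate_days(80 * 10))  # 80 live hosts with tens of services

The two examples are the quick job and the long haul from the previous paragraph: around a day for the first, and months of effort for the second.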

Web Application Reviews - With reviews focusing on web applications, there tend to be a lot more moving parts which can affect how long is needed to complete them. The basic metric that gets used is the number of pages on the site, but this isn't usually a good indication, as static HTML pages with no input are really easy to cover and some sites use CMS systems which hide everything behind a single URL. Some of the factors that need to be considered when working out a scope for a web application test (a rough estimator follows the list) are:
  • Number of business functions that the site has. Business logic testing tends to be done per-function (eg, for a banking site, a move money request would be one function, for a shopping site it could be the checkout process)
  • Number of parameters on the site. Like number of pages this one can be deceptive, as the same parameter may be used on many different pages, but it's probably closer to being useful
  • Number of user types/roles. Assuming that the test is covering all the authenticated areas of the site, horizontal and vertical authorisation testing may need to cover all user types, so the number of roles to be tested is pretty important.
  • Environment that the test is occurring in. This one is often overlooked, but can have a large effect on the timing of the test. If you're testing against a live application, particularly if authenticated testing is needed, it can limit the amount of automation/scripting that can be used. Automated scanners will fill forms in many times to assess different vulnerability classes, which can be a pretty undesirable result for a live site, so manual testing becomes the order of the day. Conversely, in a test environment where the tester has exclusive access, automated tools can be used more heavily without risking impacting the site.
  • Are linked sites in-scope? With modern sites there can be a lot of cases where functionality used on the site is actually sourced from other hosts or even from other companies. If the site owner wants assurance over the whole solution this can expand the scope quite a bit and if third parties are involved can make the whole test quite a lot more complicated. Ultimately those bits might get excluded but it's worth asking the questions up front to avoid a disappointed customer.
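
Pulling those factors together, here is an equally rough sizing sketch for a web application review; every coefficient is an assumption for illustration only.

def estimate_webapp_days(functions, parameters, roles, live_environment):
    # assumed weights: half a day per business function and per role,
    # plus a small increment per distinct parameter
    days = functions * 0.5 + parameters * 0.05 + roles * 0.5
    if live_environment:
        days *= 1.5   # manual-only testing against live systems is slower
    return days

print(estimate_webapp_days(functions=6, parameters=40, roles=3,
                           live_environment=True))

Nobody should treat numbers like these as authoritative; the point is that making the drivers explicit gives client and tester something concrete to discuss during scoping.
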
So these are some of the factors that we think need to be considered when scoping out tests, anyone got any others?

Friday 1 October 2010

Taxonomy #2

Building on the last taxonomy post and on what Pieter Danhieux had to say, we thought we would add some further commentary, and at the end of the blog add two more definitions: one for Security Audit and one for Security Review.

One of the key points that Pieter made was that:

"(IT) Security Assessment is in my opinion a holistic name for different types".

This is in fact the crux of the issue, and why we wanted to put a taxonomy together.

As security professionals we all interchange words such as penetration test and security assessment to fit the situation we find ourselves in, and in some cases we will even use the same term differently within the same conversation!

While this approach can work within a skilled population of practitioners, it can lead to confusion for consumers of the service, or for those outside looking to their security team for advice and guidance. At worst it could be used to misguide a consumer into believing that they have received an adequate security test, while delivering nothing more than a discovery exercise.

The aim of the taxonomy is to have a publicly available resource that can be referenced.

When establishing the scope of a security test it is key to understand what the consumer has requested, and for the consumer it is key that they understand what they have asked for and what will be delivered as part of the security test.

If both parties are aware of this then the relationship will be strong. This enables open and constructive engagement, leads to a better understanding of the different levels of risk that testing can bring, and empowers the consumer - for example, to ask for suitably skilled individuals if they do wish to conduct real 'penetration testing' on live environments, or, understanding what that would attempt to do, to decide that a 'security assessment' would be a more suitable solution.

What the taxonomy is not trying to do is confine the approach that individuals want to take. Many security professionals will take a blended approach to deliver against the consumer's requirements, or have their own approach that adds further value. The taxonomy should be used as a baseline from which the security team can show how they differ, leaving the consumer clear as to the minimum level of activity that will be completed.

The previous post on the taxonomy focused on the actual testing of security, whereas Pieter's comment,

"Information Security Assessment also exists … but then it is usually - benchmarking against a standard- assessing policies and standards".

shows that there is more than just the testing angle. So in this post we would like to share two more definitions:

Security Audit

Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).

Security Review

Verification that industry or internal security standards have been applied to system components or a product. This is typically completed through gap analysis, utilising build/code reviews or reviews of design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit).

Tuesday 28 September 2010

Penetration Testing? A Taxonomy

As promised, we at 7 Elements have been putting some thought into how penetration testing is currently sold and delivered and how we can improve the process for customers and suppliers.

One of the key issues that we see is that there are different reasons to go broad or deep. A wide review could aim to identify a range of areas which should be improved, whereas a targeted attack simulation could give good information on what an attacker could do with, for example, an opening in the perimeter combined with weak access controls, but may not find many vulnerabilities.

The second issue is with vendors that sell you a "penetration test" but only deliver a lower level of assessment, which can lead to a false sense of security.

So the problem with the "penetration test" term is that most people associate it with the idea that you'll also get broad coverage of security issues, rather than a focus on specific weaknesses and how they're exploitable.

At the end of the day, an attacker only needs to find one exploitable vulnerability. So while there are certain situations where allowing security testers free rein to go for the crown jewels may be the best option, the prevalence of the perimeterised "hard on the outside, soft on the inside" security model means organisations may find a broader approach provides greater assurance for the same budget.

So there is almost a forked model of testing. Typically you would begin with discovery, scanning for common vulnerabilities, and then assessment of those vulnerabilities. After this, the split could be towards Security Assessment (the broad review to find as many vulnerabilities as possible and assess the risk to the business) or towards Penetration Testing (the attempt to exploit and penetrate the organisation to gain access to a particular target).

There will be occasions where these two forks join up again - where you want a broad review with added information on the extent to which a real-world attacker could penetrate.

In order to make it easier to discuss the various stages, our taxonomy is as follows. Please leave comments if you feel improvements are required, and we will develop the taxonomy accordingly:

Discovery

The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities.
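As a flavour of what this stage involves, here is a minimal discovery sketch: a simple TCP-connect sweep with a banner grab. The target address and port list are placeholder assumptions, and in practice a dedicated scanner such as Nmap would do this far more thoroughly.

```python
# A minimal discovery sketch - not a replacement for a proper scanner.
# The target host and port list are hypothetical examples.
import socket

TARGET = "192.0.2.10"              # example/documentation address
COMMON_PORTS = [21, 22, 25, 80, 443, 3389]

for port in COMMON_PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=2) as sock:
            sock.settimeout(2)
            try:
                banner = sock.recv(128).decode(errors="replace").strip()
            except socket.timeout:
                banner = ""        # some services only speak when spoken to
            print(f"{TARGET}:{port} open  {banner}")
    except (socket.timeout, OSError):
        pass                       # closed or filtered - nothing listening
```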


Vulnerability Scan

Following the discovery stage, this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool, with no manual verification or interpretation by the test vendor. This can be supplemented with credential-based scanning, which looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local Windows accounts).
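The matching itself is conceptually simple, as this toy sketch shows: a detected service version is looked up against a table of known issues, and the risk rating comes straight from the signature. The hard-coded table below is purely illustrative - real tools rely on large, maintained vulnerability feeds.

```python
# A toy illustration of signature matching in a vulnerability scan.
# The table of known issues is a hypothetical, hard-coded example.
KNOWN_ISSUES = {
    # (service, version): (finding, tool-assigned risk) - illustrative only
    ("OpenSSH", "4.3"):   ("Deprecated OpenSSH release", "Medium"),
    ("Apache",  "2.2.8"): ("Multiple known issues in 2.2.8", "High"),
}

detected = [("OpenSSH", "4.3"), ("Apache", "2.4.41")]  # from discovery

for service, version in detected:
    issue = KNOWN_ISSUES.get((service, version))
    if issue:
        finding, risk = issue
        # The risk level is whatever the signature says - no manual
        # verification or interpretation happens at this stage.
        print(f"{service} {version}: {finding} [{risk}]")
```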


Vulnerability Assessment

This uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test. An example would be removing common false positives from the report and deciding the risk levels that should be applied to each finding, to improve business understanding and context.
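In code terms, the assessment step looks something like the following sketch: raw scanner findings are filtered against verified false positives and re-rated for the environment. The field names, hosts and ratings are illustrative assumptions rather than any standard schema.

```python
# A minimal sketch of contextualising scanner output.
# Hosts, findings and ratings are illustrative assumptions.
raw_findings = [
    {"host": "10.0.0.5", "issue": "Self-signed certificate", "risk": "Medium"},
    {"host": "10.0.0.5", "issue": "Outdated Apache 2.2.8",   "risk": "High"},
]

confirmed_false_positives = {("10.0.0.5", "Self-signed certificate")}
internet_facing = {"10.0.0.5"}

assessed = []
for finding in raw_findings:
    if (finding["host"], finding["issue"]) in confirmed_false_positives:
        continue                      # verified not to apply - drop it
    if finding["host"] in internet_facing and finding["risk"] == "High":
        finding["risk"] = "Critical"  # exposure raises the business risk
    assessed.append(finding)

for finding in assessed:
    print(f"{finding['host']}: {finding['issue']} [{finding['risk']}]")
```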


Security Assessment

Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorised access to a system to confirm system settings, and involve examining logs, system responses, error messages, codes, etc. A Security Assessment looks to gain broad coverage of the systems under test, but not the depth of exposure that a specific vulnerability could lead to.
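A small example of verification without exploitation: confirming a scanner's "missing security headers" finding by inspecting the live response, rather than attempting to abuse it. The URL and the headers checked are placeholders for whatever the report flagged.

```python
# A sketch of manual verification without exploitation.
# The URL and the headers checked are hypothetical placeholders.
import requests

resp = requests.get("https://app.example.com/", timeout=10)

# Headers whose absence the (hypothetical) scanner reported.
for header in ("Strict-Transport-Security", "X-Frame-Options"):
    if header in resp.headers:
        print(f"{header}: present - scanner finding is a false positive")
    else:
        print(f"{header}: confirmed missing - exposure verified")
```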


Penetration Test

Penetration testing simulates an attack by a malicious party. It builds on the previous stages and involves exploitation of the vulnerabilities found to gain further access. This approach results in an understanding of an attacker's ability to gain access to confidential information, or to affect data integrity or the availability of a service, and of the respective impact. Each test is approached using a consistent and complete methodology, in a way that allows the tester to use their problem-solving abilities, the output from a range of tools, and their own knowledge of networking and systems to find vulnerabilities that could not be identified by automated tools. This approach looks at the depth of attack, as compared to the Security Assessment approach, which looks at broader coverage.
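To illustrate where a penetration test goes beyond a security assessment, here is a deliberately tame sketch of an exploitation step: actually authenticating to an exposed admin interface with vendor default credentials, rather than just reporting that the interface is reachable. Everything here (the URL, the credential list, the assumption that success returns a redirect) is hypothetical, and this kind of step should only ever be run with written authorisation.

```python
# An illustrative, deliberately tame exploitation step: try a short list of
# vendor default credentials against an exposed admin login.
# All values are hypothetical; only run with written authorisation.
import requests

LOGIN_URL = "https://app.example.com/admin/login"      # placeholder target
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "toor")]

for username, password in DEFAULT_CREDS:
    resp = requests.post(LOGIN_URL,
                         data={"username": username, "password": password},
                         timeout=10, allow_redirects=False)
    # Assumption for this sketch: a successful login redirects (HTTP 302)
    # while a failure re-renders the login page (HTTP 200).
    if resp.status_code == 302:
        print(f"Access gained with default credentials {username}:{password}")
        break
else:
    print("No default credentials accepted")
```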

Thursday 23 September 2010

What's in a name

As we mentioned in the last post, we'd like to expand a bit on the presentation that Rory Alsop and Rory McCune made to OWASP Ireland.

While we've been working on 7 Elements, we've been putting some thought into how penetration testing is currently sold and delivered and how we can improve the process for customers and suppliers.

The first step for us was to understand some of the problems, as we see them, and the first of these is the name itself.

Penetration testing has come to mean a wide variety of things and it tends to get used interchangeably. It was originally understood to refer to a specific type of testing where the tester would emulate an attacker (generally in a black-box style of test) and try to get access to a specific set of services. A penetration test wasn't concerned necessarily with finding as many security issues as possible, but with proving whether an attacker could get unauthorised access to a system.

Now it seems to be used to refer to anything vaguely related to security testing, from vulnerability scanning, through web application testing and code review, to actual penetration testing.

The major problem this causes is that people refer to "penetration testing" while having completely different ideas of what that testing will deliver.

This can cause problems in several areas, such as the buying of testing services. How does a customer compare two companies selling penetration testing if one charges £400 a day and the other charges £1200 a day?

Another problem comes when regulators or customers specify that an organisation must have a "penetration test", when what they really want is assurance that the organisation has addressed commonly occurring security issues across all parts of a given system.

So what's the answer to all this? Well, we think that the best way forward is to move away from the "penetration test" terminology and begin to categorise types of security testing/assurance/review. We have been working with individuals across a range of organisations - including CREST, OWASP, buyers and vendors - and have created a draft outline.

In our next post we plan to further develop this straw man into an industry ready draft.

Monday 20 September 2010

OWASP Ireland 2010

The 7 Elements team had their first team outing last week, to the OWASP Ireland 2010 conference.

Set in sunny Dublin, the day hosted a wide range of interesting talks on web application security topics. The conference was very well attended, with people from a wide variety of backgrounds.

John Viega's keynote kicked off the day with a theme that persisted through many of the talks: the need for a realistic and pragmatic approach to security. John has a lot of experience managing software security teams, and one of the key messages we took from the talk is that perfectly secure software is unattainable, so it's important to focus limited resources where they will make the most difference.

After the keynote there was a brief mini-presentation from Eoin Keary and Dinis Cruz on what the OWASP board has been focusing on over the last year and what's in store for the next 12 months. We also got a mini version of Samy Kamkar's Black Hat presentation "How I met your Girlfriend", which neatly combined XSS issues in home routers with the geo-location facilities provided by Google to precisely locate someone based on their visiting a site you control.

The conference split into two tracks at this point, the following covers highlights from each.

Dr Marian Ventuneac had an interesting presentation looking at how web application vulnerabilities can affect a wide range of e-mail security appliances, including virtual appliances and SaaS offerings (eg, Google Postini). It was a good reminder of how widespread web application issues can be, and also why it's important to review all web application interfaces in use by a company, even if they're provided by a "trusted" vendor.

After that, 7 Elements' David Stubley was up to talk about "Testing the resilience of security". This is something that we'll be covering on our main site, so we won't talk about it too much here other than to congratulate Dave on a well-received presentation.

In the other room at the same time, Ryan Berg from IBM gave a very enthusiastic presentation on the process of secure development and the reality of software security. It was interesting to hear the theme of assuming that your internal network is compromised come up again from Ryan. There's been a growing chorus of voices in the security industry pointing to the fact that the complexity of modern IT environments and the flexibility demanded by business management mean that it's almost impossible to rely on a "secure perimeter" as a defence, and instead defenders should assume that attackers have some level of access to the internal network when designing their security controls.

Dan Cornell from Denim gave a fun canter through the subject of iPhone and Android applications under the title "Smart Phones with Dumb Apps". The key takeaway was the need to educate developers that the bad guys can and will decompile your application, so be aware of the sensitive data it contains. Another point was that even though iPhones swamp the market at the moment, Android sales have the greatest momentum; given this, we feel that development of Android applications for financial organisations will become more prevalent as they become the next "must have" for business marketing and sales teams.

After lunch in the scenic Trinity College Dining Hall, Professor Fred Piper gave a talk on the changing face of cryptography. His talk covered quite a bit of the history of cryptography and how its uses have changed over time. Fred also touched on some areas where cryptography goes wrong, making the point that it's usually the implementation of an algorithm that is successfully attacked rather than the algorithm itself.

The next presentation that we sat in on was from Dinis Cruz on his O2 platform. As usual with Dinis there was an awful lot of information to take in, but it's obvious that he's doing some really interesting things with the O2 platform, and it'll be very interesting to see how it matures over time.

After Dinis, the remaining two members of the 7 Elements team (Rory Alsop and Rory McCune) were up to talk about the realities of penetration/security testing. We've put our slides up here, but this is a topic we want to cover in more detail on this blog over the next couple of weeks.

Unfortunately, after that our time was up and we needed to head off to the airport to get back to Scotland.

Thanks to Eoin and the team for inviting us over to present.