Wednesday 16 November 2011

How Card Payments work and PCI DSS

In support of some blog posts I'm planning regarding the Payment Card Industry Data Security Standard (PCI DSS), I thought it would be useful to set out some basic information about how card payments actually work and what the PCI DSS is. This is very high level and is meant as a primer for those who are not familiar with the underlying processes and terminology. This may be particularly useful for small businesses who are just starting out with card payments and the PCI DSS.

How do card payments actually work?

At the very top we have a collection of organisations known as the card schemes. VISA and MasterCard are examples of card schemes but there are plenty of others. Essentially they each operate a network over which card payment transactions can occur and they make money from their members through connection fees and something called interchange. Interchange is a complex set of commission fees paid by members on each card transaction.

Card scheme members go through a certification process to prove that their IT and business systems are able to correctly communicate with the scheme's network and, once approved, they are issued with a range of Bank Identification Numbers (BIN, more on that later), and are permitted to connect and send transaction messages to other members on the network.

The majority of members are also 'issuers'. Issuers are the people who actually produce and issue cards, be they credit, debit, prepaid or other. For the purposes of illustration, I will use HSBC and debit cards as an example. An individual has a personal bank account with HSBC. HSBC will issue that individual with a debit card that they can use with that bank account. The debit card itself has an account number, something called the Primary Account Number or PAN. You and I know this as the card number, typically 16 digits and printed across the middle of the card.

HSBC will issue the card under one of the card schemes (a business decision based on the rates offered to them by the scheme, acceptance rates and other factors). For this illustration I have been issued with a VISA card. The PAN assigned to my card is not a random string; it has meaning. The first part of the PAN is allocated from the issuer's BIN range, typically the first six digits, which denote who issued the card and what it's for. The BIN for HSBC VISA debit cards is 465942. The rest of the number is up to the issuer, but in order to be considered valid the full PAN must pass something called the Luhn check, a simple checksum algorithm. The Luhn check is not designed to offer any kind of security, merely to catch accidental errors in the PAN.
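The Luhn check is short enough to sketch in a few lines of Python. The PAN used below is the well-known "4111..." Visa test number, not a real card:

```python
def luhn_valid(pan: str) -> bool:
    """Return True if the PAN passes the Luhn checksum."""
    total = 0
    # Walk the digits right to left; double every second digit and
    # subtract 9 if the result exceeds 9 (same as summing its digits).
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The classic test PAN passes; flipping one digit breaks the checksum.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```

Note how weak this is as a defence: any attacker can generate valid-looking PANs, which is exactly why it is an error-detection code and not a security measure.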

So, this is all great, but what does that machine do when I go into a shop and put my card in it (or near it), or that website when I put my details into the form? Enter 'Merchants' and 'Acquirers'. A Merchant is the shop, website or other organisation with which you interact when you make a card payment. They are the ones you are paying for the goods or services you are purchasing. How they get the money is down to something called a Merchant Account, which is a credit account with an Acquirer. The Acquirer supplies the Merchant with a Merchant Account and a means to take card payments, be this over the Internet in the case of e-commerce or physical terminal equipment in the case of bricks-and-mortar retail. They also take care of communications with card issuers via the card scheme networks. The Acquirer makes money by charging a regular fee for the account as well as commission fees on each transaction.

In the case of many large financial institutions, such as HSBC, they are both an issuer and an acquirer. This has obvious benefits in terms of an overall proposition to customers and in reducing costs.

So, I've just walked into my local shoe shop, picked up a new pair of Converse All Stars and pulled out my HSBC debit card. What actually happens? I insert my card into the slot, enter my PIN and the card machine makes a telephone call, yes, literally a telephone call, to the acquiring bank. To make things interesting let's say that my local shoe shop is "acquired" by Barclays, not HSBC. The Point-of-Sale (POS) terminal dials Barclaycard Merchant Services' (BMS) number, makes a network connection with its authorisation systems and begins transmitting authorisation messages.

BMS' systems will derive the BIN from the incoming PAN, work out which issuer and scheme it relates to and send an authorisation request over the relevant scheme network to the issuer. In this example, therefore, BMS will send an authorisation request over VISA's VISANet to HSBC. HSBC's systems will check the bank account associated with my debit card and decide whether or not to allow payment. In my case I'm in luck, the boss has decided to pay me this month so I can have those Converse. HSBC respond with a success and include something called an Authorisation Code.
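The acquirer's BIN lookup boils down to a prefix match on the first six digits of the PAN. A toy Python sketch; only the HSBC VISA debit BIN 465942 comes from this post, while the second table entry and the function name are invented for illustration (real BIN tables use variable-length ranges, not a fixed six digits):

```python
# Toy BIN table: 6-digit prefix -> (scheme, issuer).
# Only 465942 is from the post; the other entry is hypothetical.
BIN_TABLE = {
    "465942": ("VISA", "HSBC"),
    "542523": ("MasterCard", "ExampleBank"),  # made up
}

def route_authorisation(pan: str) -> tuple:
    """Derive the BIN from the PAN and decide which scheme network
    and issuer the authorisation request should be routed to."""
    bin6 = pan[:6]
    try:
        return BIN_TABLE[bin6]
    except KeyError:
        raise ValueError(f"Unknown BIN {bin6}: cannot route")

print(route_authorisation("4659420000000000"))  # ('VISA', 'HSBC')
```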

BMS tell the terminal that the payment was successful, I breathe a sigh of relief, remove my card from the machine, collect my little slip of paper (which has the authorisation code on it along with the POS terminal ID and the amount debited) and head out of the shop.

All done, right? Nope. The authorisation is only part of the process; no funds have actually been debited at this point. At the end of the day, the merchant will cash up. Through interaction with the POS terminal they perform an upload of the day's transactions to the acquirer, via the same phone call mechanism. The acquirer receives the transaction list and proceeds to upload settlement batch files to each of the card schemes requesting payment. The card scheme ensures the delivery of this information to the correct issuer, who is then responsible for remitting the funds to the card scheme. The card scheme deducts its interchange and sends the remaining funds to the acquirer. The acquirer deducts their fees and the merchant is then paid the remainder. This generally happens overnight.
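The fee waterfall in settlement can be sketched with toy numbers. The rates below (expressed in basis points, and working in integer pence to avoid floating-point money) are my own illustration; real interchange and acquirer fees vary by scheme, card type and contract:

```python
def settle(amount_pence: int, interchange_bp: int = 20,
           acquirer_fee_bp: int = 50) -> tuple:
    """Toy settlement waterfall: the scheme deducts interchange,
    the acquirer deducts its fee, the merchant gets the remainder.
    Fee rates are illustrative, given in basis points (1 bp = 0.01%)."""
    interchange = amount_pence * interchange_bp // 10_000
    after_scheme = amount_pence - interchange        # remitted to acquirer
    acquirer_fee = after_scheme * acquirer_fee_bp // 10_000
    merchant_net = after_scheme - acquirer_fee       # paid to merchant
    return interchange, acquirer_fee, merchant_net

# A £50.00 pair of Converse, in pence:
print(settle(5000))  # (10, 24, 4966)
```

In other words, of the £50.00 paid at the till, the scheme and acquirer take their slices first and the shop sees the rest the next day.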

The overall process is the same for an Internet merchant except that the payment service provider takes care of all the interaction with the acquirer (and in many cases actually is the acquirer), including the transaction upload at the end of the day. Many Internet payment service providers maintain what is called a Master Merchant Agreement with an acquirer, which allows them to use their own merchant account to process transactions on behalf of their sub-merchants.

A brief history of crime

As with just about any process, criminals look for ways to abuse it. Right from the start fraudsters realised that copying cards or acquiring card details provided them with a rich revenue stream. The boom in Internet e-commerce in the early 2000s provided criminals with two significant new attack vectors: end user PCs, through malware or viruses, and web site databases. It's fair to say that few e-commerce companies were developing secure sites in those early years, in terms of both code and storage, and it was common for websites taking payments to have databases full of unencrypted card payment details. Criminals quickly worked out ways to infiltrate these databases and walk away with thousands, or even millions, of card numbers.

Due to chargeback rules, if a cardholder detects misuse of their account they can contact their card issuer, declare transactions as fraud and the issuer will return the funds or reinstate the credit. The card issuer is then left with the problem of tracking down the fraudster to try and recover the monies. Not an easy task. The card schemes needed to find an approach to deal with this.

The PCI DSS is born

Five of the card schemes, VISA, MasterCard, American Express, JCB and Discover, combined their independent card security programmes in 2004 to create the PCI DSS. Its aim is to provide a baseline level of security for card payment processing. The PCI DSS is a set of six "control objectives" made up of twelve high-level requirements. These high-level requirements are then broken down into nearly three hundred individual low-level requirements.

The control objectives are as follows:

Build and Maintain a Secure Network
Protect Cardholder Data
Maintain a Vulnerability Management Program
Implement Strong Access Control Measures
Regularly Monitor and Test Networks
Maintain an Information Security Policy

You get a feel for the sort of things they are looking for here. I won't take up any more space on this blog post regurgitating the long list of requirements; they can be found on the PCI DSS website.

The PCI DSS is overseen by the PCI Security Standards Council (PCI SSC), comprised of representatives of each of those card schemes. Other businesses can join the SSC as a Participating Organisation (for a fee of course) and review proposed changes to the DSS.

The PCI DSS applies to all merchants that store, process or transmit cardholder data. The SSC breaks merchants down into levels based on transaction volume with the validation requirements for each level varying as appropriate for their size. The standard itself doesn't change however, just the process of validation.

Level 1
Criteria: Processing over 6 million transactions per year, or has previously suffered a breach
Validation: Annual onsite security assessment carried out by a Qualified Security Assessor (QSA) and quarterly network scan by an Approved Scanning Vendor (ASV)

Level 2
Criteria: Processing between 1 and 6 million transactions per year
Validation: Annual Self-Assessment Questionnaire and quarterly network scan by an ASV

Level 3
Criteria: Processing between 20,000 and 1 million transactions per year
Validation: Annual Self-Assessment Questionnaire and quarterly network scan by an ASV

Level 4
Criteria: Processing fewer than 20,000 transactions per year
Validation: Annual Self-Assessment Questionnaire and, depending on the acquirer, quarterly network scan by an ASV
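Expressed as code, the boundaries in that table read as follows. This is a sketch based only on the table above; in practice it is your acquirer who decides your level, and a prior breach forces Level 1 regardless of volume:

```python
def merchant_level(annual_txns: int, breached: bool = False) -> int:
    """Return the PCI DSS merchant level implied by the volume table.
    A previously breached merchant is always Level 1."""
    if breached or annual_txns > 6_000_000:
        return 1
    if annual_txns > 1_000_000:
        return 2
    if annual_txns >= 20_000:
        return 3
    return 4

print(merchant_level(7_000_000))             # 1
print(merchant_level(50_000))                # 3
print(merchant_level(5_000, breached=True))  # 1
```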

Merchants who comply with the PCI DSS are given "safe harbour" from the fines and penalties associated with any card data theft, provided they are proven to have been PCI DSS compliant at the time of the breach. The scope of an assessment can be one of the hardest parts to nail down, before you even start to think about whether you comply or not. If you took the scoping statement literally you could argue that any device with an Internet connection is in scope, and herein lies one of the core problems that we shall discuss in a further post: there is an awful lot of interpretation to be done with the PCI DSS, and much of a successful compliance assessment lies in correctly understanding the standard and applying it appropriately.

In upcoming posts I will get into some of the positives of the PCI DSS: how you can use it to make a genuine difference to your organisation's security, not just card data security, and ways to invest time and money smartly. I will also examine some of the reasons I feel the PCI DSS is deeply flawed and is doing a disservice to information security professionals everywhere by distracting businesses into solving someone else's problem for them. You have to consider who the real beneficiary is in all of this; it's not the merchants, that is for sure.

Tuesday 15 November 2011

Why I love the PCI DSS

I've been involved with PCI DSS since virtually the beginning; through work with a number of large Internet Payment Service Providers I was responsible for designing and implementing solutions to accommodate some of its more specific requirements.

It's fair to say I have a love/hate relationship with PCI DSS and in this, the first of two posts, I'm going to explore my love of it. I think you can probably guess what the other post is going to be about. I'm not going to put forward any of the counter-arguments in this post so you'll need to read both to get the balanced view.

If you're unsure what PCI DSS is or how it may apply to you, we have published a high level overview of card payments and the PCI DSS here.

Raising Awareness

So what's to love about PCI DSS? Well, as with any compliance requirement it's going to get the attention of a company's board. "Attention" being that thing that we security professionals are trying hard to achieve all the time. So raising awareness of security issues; PCI DSS does that. Executives realise that they are going to have to evaluate their security practice and even spend some money to make improvements in a lot of cases. With PCI DSS as a compliance requirement, organisations which may have struggled to secure funding and focus are suddenly able to make a host of overall security and operational updates. This has led to improvements in policies, processes, infrastructure, physical controls and overall awareness of card data security. It all sounds great right? We'll come back to that in a later blog post.

Something is better than nothing

Well, in information security terms this is probably true in many cases. PCI DSS is a very prescriptive standard in a lot of respects, forcing even small merchants to consider security techniques that otherwise they may not. As independent consultants we work with many businesses who just aren't sure where to start. Many produce in-house web applications but their developers have never heard of Cross-Site Scripting, Injection flaws or any of the other common security risks on the OWASP Top Ten, let alone OWASP itself.

PCI DSS explicitly requires that companies develop in-house applications in accordance with widely available secure coding practice and cites OWASP and SANS specifically. So PCI DSS, if you do it right, gives you the opportunity and buy-in to create a secure development training programme, school your developers in doing things the "right" way and introduce them to the consequences insecure code brings.

It's certainly possible to write insecure code and still be PCI DSS compliant, but surely it's better to have a development team who are aware of the core principles of secure development? Realising that you can't trust any input to your application is really only the first step on a long journey, but it's an important one.
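As a tiny illustration of that "trust no input" principle, here is a sketch of whitelist validation in Python. The function and pattern are my own example, not anything mandated by the PCI DSS:

```python
import re

# Whitelist validation: accept only input matching an explicit pattern
# (here, a 16-digit PAN) and reject everything else, rather than trying
# to blacklist "bad" characters.
PAN_PATTERN = re.compile(r"\d{16}")

def parse_pan(raw: str) -> str:
    """Return a cleaned PAN, or raise ValueError for anything else."""
    candidate = raw.strip()
    if not PAN_PATTERN.fullmatch(candidate):
        raise ValueError("input does not look like a 16-digit PAN")
    return candidate

print(parse_pan(" 4111111111111111 "))  # accepted and trimmed
# parse_pan("4111' OR '1'='1") would raise ValueError
```

The point is the shape of the code, not the specific pattern: validate at the boundary, fail closed, and only then hand the value on to the rest of the application.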

Ooh, shiny

Like most information security professionals who have to deal with compliance, I'm dismayed at the number of "compliance-in-a-box" products being churned out. Hey, I understand basic economics, there's no supply without demand, but seriously, there's no box for this. If you can see past all this nonsense and look at the standard for what it is you've actually got a huge number of process and documentation requirements and very few actual software/hardware ones.

This is good. What is going to save your company's data? That shiny IPS box with flashing blue lights or your staff following good security processes, reviewing logs and making educated decisions based on solid incident response procedures? That's just one example and PCI DSS requires you to have these types of procedures in place. It's surely difficult to argue with that isn't it? It has to make a positive difference.

Real world examples

Let's take a theoretical situation, ignore PCI DSS for a second and consider the following requirement from a security architecture perspective. Your IT Manager comes to you and says he needs to allow his Unix guys remote access to one of their Unix servers. The server is hosted in some remote datacentre.

After you've ascertained what's on the server and are comfortable that it can be Internet facing, what's on your checklist for allowing this access securely?

* Unique usernames for each engineer?
* Encrypted protocols only, probably SSH or SSH over a VPN?
* If it's SSH, public key authentication only - combined with passphrases on the engineers' keys this effectively gives you two-factor authentication?
* No direct root logins?
* Firewall off any other services not required?
* Remove any software you don't need?
* Make sure it's patched?
* Make sure this host is in the DMZ, not the internal network?
* Ship logs to a central syslog server and monitor logins?
* Perform a remote scan to ensure you've left nothing silly available?
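Several of the items above map directly onto OpenSSH server configuration. An illustrative sshd_config excerpt; the directives are real OpenSSH options, while the usernames are hypothetical:

```
# /etc/ssh/sshd_config (excerpt) - illustrative hardening
PermitRootLogin no            # no direct root logins
PasswordAuthentication no     # public key authentication only
PubkeyAuthentication yes
AllowUsers alice bob          # unique, named accounts (example users)
LogLevel VERBOSE              # richer auth detail for the central syslog server
```

The remaining items (firewalling, patching, DMZ placement, log shipping, remote scanning) live outside sshd itself, which is rather the point: the checklist is mostly process and architecture, not a single product setting.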

I could go on further but you get the idea. These are the sorts of things most security guys are going to say and they make sense. You'd be in pretty reasonable shape if you did all those things.

Well, PCI DSS is going to make you do all of those things. It's hard to criticise that.

What about designing a network to host a web application with a backend database? How would you lay it out?

* Put the webservers in a DMZ?
* Place the database server on an internal network behind a further firewall?
* Only allow connections to the database server from the DMZ?
* Not permit direct Internet connections from the database network to the Internet, only the DMZ?
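On the inner firewall, that layout boils down to a very small policy. An illustrative iptables sketch; the interface names and the database port (5432) are assumptions for the example:

```
# Inner firewall between the DMZ and the database network (illustrative).
# eth0 = DMZ side, eth1 = database network side.
iptables -P FORWARD DROP                                   # default deny
# Only the DMZ web servers may reach the database service port:
iptables -A FORWARD -i eth0 -o eth1 -p tcp --dport 5432 -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# Note there is deliberately no rule allowing the database network
# out to the Internet; replies to the DMZ ride the state rule above.
```

Default deny plus a single explicit allow is the whole trick; everything not on the list, including database-to-Internet traffic, simply never matches a rule.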

Again, these are just basics but they're likely on most security architects' layered-defence checklists. I probably don't need to say it but, PCI DSS will make you do this stuff.

Summing up

So, I've touched on a few of the positives of PCI DSS so far. It gives companies the impetus to implement a good security approach where perhaps they otherwise wouldn't. It opens the door for the information security team to get some valuable improvement done but it still takes a lot of work to implement solutions which actually improve security. For this reason I love it. I wish it wasn't the case but I'm a realist. If this is what it takes for companies to think about security then so be it.

PCI DSS is criticised for providing only a minimum baseline but, in its defence, that's exactly its purpose, the minimum. PCI DSS is guilty of far worse failings than that and in my next blog post I will explore my views on some of these.

On that note, the point I want to leave you with is this:

For all the apparent positives above, what is it focused on? Protecting card data. Is card data the only data that needs protecting, above all else? For almost all businesses the answer is no.

So is any of this actually improving your overall security? Well, maybe but maybe not. It's up to you to take the things which PCI DSS is asking you to do and squeeze every last drop of security value out of it while the budget and business focus is there.

Implement policies and procedures which cover PCI DSS but also cover the things that matter to your business. Design a network architecture that supports PCI DSS but which can also host the servers and data which really matter to your business.

Check out my next blog post as I dive deeper into the murky waters of PCI DSS compliance and expose it for what it really is.

Monday 31 October 2011

hashdays 2011 Diary

I've just got back from hashdays 2011 in Switzerland where I had a great time meeting old and new friends and listening to some great talks. For those who didn't attend or those interested in what I thought, here is my hashdays 2011 diary.


We made the short walk around the corner to the Radisson Blu, grabbed some breakfast croissants and coffee and settled down to hear the opening speech from Pascal. I hadn't met Pascal at this point but he came over as a passionate, fun-loving guy who clearly wanted everyone to have a great time at hashdays. After running through some of the details about the con itself, the badges and the wireless network he introduced Mikko Hypponen for the keynote.

I've seen videos of a few of Mikko's talks and I've always enjoyed them. Seeing a talk IRL was great. Mikko is a polished speaker, confident, fluent and engaging. His talk took us through some of the real threats facing computer users today, the types of malware we are seeing, the types of people behind them.

Malware has changed significantly over the twenty or so years in which Mikko has been analysing it. It used to be that people wrote viruses for fun, or to cause destruction, but the most important thing was that somehow you knew you had been infected. Most viruses did not employ much stealth; that wasn't their goal. Nowadays, malware infection is big business and the ability to stay hidden and keep the host infected is important as it increases the possible revenue.

But are the people behind modern malware actually making any money? The resounding answer is YES! Organised gangs are sitting on literally hundreds of millions of US dollars. It's a phenomenal amount of money, with the total turnover worth more than the international drug trade. That is some achievement for an industry so young.

Why is it working though? How are so many people fooled? Some of the examples Mikko showed us make it pretty clear. My favourite example was a banking trojan which inserted a single page after login and before displaying your accounts page. The page informed the bank's customer that the bank had set up a new investment account which would return x percent over some period of time. The account had been automatically created and here was the account number. All you need to do to take advantage of this new investment opportunity is transfer some money.

Of course, the account number belonged to the criminal, but the beauty was that the customer, believing only the bank could have placed this page on an authenticated part of the site and that it was a genuine product, actually wanted to transfer the money. This meant they willingly performed all the multi-factor, out-of-band authentication required to complete the transaction before continuing to their accounts as normal.

As Mikko suggests, it's not obvious how to stop that. Mikko ended his talk with the statement there is "job security in security". As someone who works in the industry I suppose this is good news and bad news.

Next up I chose Track 2 and "Pentesting iPhone and iPad Applications" by Annika Meyer and Sebastien Andrivet. After a slightly nervous introduction from Annika, Sebastien took us through a live demo of some techniques you can use to assess the security of iOS applications. The demo was impacted slightly by the "curse of live demonstrations" but in the end it wasn't enough to defuse the point. Essentially though I was a little disappointed; I was hoping for less reliance on jailbreaking.

My main takeaway was that, despite what we may be led to believe by Apple, apps are going into the App Store which contain all the usual software security bugs. Go figure. Most seem to rely on the "closed" nature of iOS to protect credentials; Sebastien demonstrated PIN codes and passwords stored in plain text in config files. This is of course trivial to get around, either by jailbreaking or by accessing the unencrypted (the default) backup images created by iTunes.

Maybe I shouldn't be, but I was slightly surprised to discover that the official LinkedIn application uses plain HTTP for all communications aside from login. Though, as I just checked, so does the main website, so at least it's consistent. But seriously, must do better.

I had high expectations for the next talk, "IPv6, the new network hackers playground". IPv6 is one of those areas where I feel we are not paying enough attention. Most people I speak to still have only a basic understanding of the next generation of IP, and I'm convinced this is going to be a rich avenue of attack for some time while the stack matures.

I thought they would cover this and more during this talk but unfortunately most of the talk seemed to cover attacks which are just as valid in IPv4. Pinging the "broadcast" address to discover live nodes on the network for example. Many of you may be aware there is no broadcast traffic in IPv6, instead there is much more use of multicast. So to ping the local "broadcast" address the equivalent (Linux) command is now:
ping6 ff02::1%eth0
assuming the interface you wish to use is eth0. If you want to take this a step further, check out the Multicast address Wikipedia article and look for "Well-known IPv6 multicast addresses".
What was interesting was the discussion of how to abuse Microsoft's Teredo service, a transition technology with built-in Windows support that tunnels IPv6 over UDP. It's the same family of transition technology behind all those ISATAP NetBIOS queries you see on networks. I definitely need to look more into the possibilities with this.

There was a lengthy section on Apple iOS mDNS and the fact that iPhones and iPads like to transmit their hostname to the mDNS address. This is not new to anyone with a wireless network and a copy of Wireshark. By default, when you activate an iOS device, iTunes will name the device after your name. This means that, unless you rename it (and you do rename them, right?) or don't use your real name on your computer, you are transmitting your full name into the ether on any network your iOS device is connected to.

They demonstrated what you can do with this using people's name and LinkedIn/Google. I feel that way too much time was spent on this. It had little to do with IPv6 and was more suited to a Google Hacking 101 talk in my opinion.

The talk finished up with some discussion of routing protocols, and I was interested to learn that OSPFv3 does not have its own authentication; it is expected that "security" will be provided by the IPsec capabilities of IPv6. This seems crazy to me. Forcing such a significant architectural change is not the responsibility of a routing protocol, and it is either going to lead to a lack of adoption of OSPF in IPv6 networks or an increase in route injection attacks. That said, I'd be curious to know how many people are doing authenticated OSPF in IPv4 anyway.

Then it was lunch and the food laid on was excellent. Various hot dishes as well as sandwiches and soft drinks. The hotel bar was open throughout too for those who wanted something a little stronger. As always, lots of good chats to be had with some very smart people.

After lunch I went to Ian Amit's talk "Pushing in, and pulling out slowly without anyone paying attention". I caught the end of this talk at BruCON but was pleased to be able to see it in full. Ian is a great speaker, nice clear slides, plenty of enthusiasm and great, original content.

I covered the basics of the talk in my BruCON post so I won't repeat myself here.

I then hit the corridor track for a short time before going to Chris Gates' talk "From Low to PWNED". This was an excellent talk filled with loads of good, useful examples of why you shouldn't take your vulnerability scanner's word for it. Don't ignore Low and Medium severity vulnerabilities. Some of the ownage included SharePoint, Cold Fusion and WebDAV. I spoke to Chris afterwards and he had plenty more examples he didn't include due to time.

I can vouch for this first hand; I've seen Nessus report a readable SMB share as a Low. When I browsed the anonymously readable share it was full of commercially sensitive documents. No shell required that time. So, if you are going to run a scanner, make sure you review the output thoroughly; don't just look for the Highs with a Metasploit module available.

To finish the day I had to make a hard choice between a cool looking HTML5 talk or watching FX talk about Lessons learned from Stuxnet. I chose to watch FX, he's something of a legend and I wondered what new light he might shed on Stuxnet.

It was a great talk. FX owned the stage and filled the talk with the sort of low level technical detail I know him for. Stuxnet uses a number of Windows vulnerabilities in order to spread. FX walked us through each of these in wonderful detail. Essentially Stuxnet was built on reliable, remote code execution bugs, creatively conceived and well executed.

The talk went on to highlight where we are now versus where we were with industrial systems before wrapping up the day.


I always feel sorry for the people who start day two of a con. I missed the first talk from Marc Ruef and Luca Dal Molin unfortunately but I made it in time for the second, a Lockpicking 101 from Walter Belgers. As someone who has never spent as much time learning lockpicking as I'd have liked, I really enjoyed it. He was a good speaker who engaged the audience well and presented some live demos using his laptop webcam. I learned a lot more about some of the concepts I've come across over the years, including lock bumping, radio locks and how to defeat some of the countermeasures used by lock manufacturers.

After a short break it was time for the PTES Panel. This was one of the reasons I wanted to attend hashdays, as I was keen to engage in discussion about how PTES can really make a difference to the penetration testing industry. It started out as complete car-crash viewing, in a good way, with Nickerson, Ian and Stefan joined by straight-from-the-airport Dave Kennedy engaging in some bar-room banter, but it finally got properly under way. Nickerson first conducted a poll of the audience as to what a penetration test is. There were some expected and unexpected answers before moving on to another important question: once you've defined what a penetration test is to you, how are you validating the company you choose to do it?

The consensus was somewhere between "try them", "we know them" and "leave things for them to find, if they don't find them, they're no good". None of these are good answers which is where the PTES is designed to help. It is trying to clearly define what the minimum expectation will be for the customer and of course, the service provider. It is not designed to be a prescriptive methodology for how to perform the technical elements of a test, though a guide is available, it is more about defining a baseline standard for the steps required for the whole test, from pre-engagement to reporting.

I was impressed with Dave Kennedy's passion and clear articulation of the problem and how he's using (and promoting) PTES as a solution. It sounds like PTES is gaining momentum and I for one believe that if we all get behind it we genuinely will make a big difference to a broken industry.

In my previous blog I was a little critical of some of the BruCON talks which focused on bad pentesting. It wasn't that I didn't enjoy them, that they weren't good talks or enjoyable, just that I want to know how we're going to make a difference. What can I do as a tester to help drive this forward? This is what I was looking for from the PTES panel. To an extent I got this. I've seen and heard enough to know this has a serious chance of working if we can get the adoption rate up. Unfortunately this will mean talking to compliance bodies like the PCI SSC to incorporate PTES as a mandatory requirement if we can but hey, the key thing is to get it out there. If that means feeling a bit dirty (compliance should be a result of good security, not a reason for it) then it's a price we'll just have to pay for businesses to start getting true value from penetration testing.

Thanks to Dave and Nickerson for taking the time afterwards to talk through some of the ideas and challenges I've come across.

After lunch it was the hour of Eurotrash as Chris John Riley and Dale Pearson went head-to-head, albeit in different rooms. The decision was easy for me as I'd seen Dale's talk at BruCON, so I went to see Chris's "Scrub SAP clean with SOAP".

SAP is a huge product set found in most companies of virtually any size. It is also, it turns out, full of frankly scary insecurities. Chris demonstrated some of the research he has done over the previous year or so, including a collection of Metasploit modules he has written.

While nothing demonstrated is a simple case of point-click-shell, the amount of information given away through the management SOAP interface without requiring authentication is simply staggering: usernames, instance IDs, operating system details and patch levels, to name a few. Pulling this information together, Chris talked through a theoretical brute force password attack against SAP, made much simpler given that we can get a list of users and the password policy for free through SOAP requests.

If that doesn't float your boat, you could just try a classic MITM attack, as by default SAP's management interface uses plain, unencrypted HTTP in conjunction with Basic auth. Clear-text passwords FTW!

If you're hungry for shell, once authenticated to SAP there is the ability to call OS commands - cue Meterpreter demonstration.

I know Chris put a lot of effort in to the slides (including some Monty Python-esque animation) and it showed. Despite his last slide saying "Sorry for sucking so much" he needn't have worried. I learned a bunch, the content was sufficiently detailed and presented well. The live demos even worked (but slides included some example output just in case they hadn't) so all in all, job well done sir.

Next up was Dave Kennedy with "Making sense of (in)security". Dave is another great presenter, someone clearly comfortable in front of an audience, and it's easy to see why he can hold down a CISO position at a Fortune 1000 company in the US. I've listened to numerous podcasts featuring Dave and his enthusiasm shines through in everything he does. In this talk he moved quickly through the problems we currently have selling security to our companies, backed by a rapid-fire slide deck, before offering his solution to the problem: technical CSOs. I'm with him on this. It's not for everyone of course but, as he points out, if we asked someone with no legal background to become Head of Legal, or someone who has never worked in Human Resources to head up HR, we'd think it was nuts. Why isn't it the same with Information Security? It's a specialist subject, and surely a CSO must have cut their teeth working up through the ranks before they can be qualified to make the correct decisions.

Throw away those ridiculous risk formulas and just get stuff done. Start talking in language the CEO can understand and focus on what actually matters to the business.

Dave finished with some technical demos, just because he wanted to. :-) He showed an updated SET, which includes a new PowerShell attack vector for the Teensy device, and changes to the Java Attack Payload, which has some clever updates to deploy Metasploit modules without touching disk, bypassing AV in the process.

Last up was Chris Nickerson. If you listen to Exotic Liability you will have some idea that Nickerson is a man with opinions and he's not afraid to share them. In his sights this time was compliance, and it took a beating.

He took it right back to the start, discussing Guidelines, Standards, Best Practices and ultimately Compliance. He put in plain language just how absurd some of the requirements placed on companies are these days, and how obvious it is that large chunks of them have been put together by security vendors to increase sales of off-the-shelf compliance products.

Chris did acknowledge that for some companies, particularly smaller ones (my note, not his), it has given them "something" where perhaps there may have been "nothing", and that has to be good. But on the whole, external compliance is causing huge amounts of time and money to be spent on areas of the business which are not necessarily critical to that business's continuity. Amen brother.

Every business is unique. Find out what makes yours (or your client's) money, then protect it.

And then it was over... almost. There was just time for a surprise guest at the closing ceremony: Pascal! Last year Pascal did not make it to day two, such was the impact on him of the after party. This time his entry to the room was accompanied by fanfare and wild applause. The badge challenge winners were announced (well done Bob and Ben, you certainly earned it) and the obligatory feedback form draw was performed before time was called on hashdays 2011.

My overall impression of hashdays is that it is a great conference with some high quality talks and speakers. It was well organised and I honestly can't think of anything about it I didn't like. I will definitely be heading back next year if I get the opportunity, though I can only hope the Swiss Franc is not as strong, as it made everything ridiculously expensive. I'm thinking of submitting to the CFP for 2012, so if I got accepted that would certainly help with the costs. Fingers crossed for that, but either way I'll be back for hashdays 2012. See you there!

Monday 3 October 2011

7E welcomes on board Marc Wickenden

7E welcomes on board Marc Wickenden who joins the team as our Principal Security Consultant.

Marc brings with him over 13 years of technical experience, the majority of it within FSA- and Payment Card-regulated industries, all with a focus on Internet-based business models.

Marc has been heavily involved in the Internet payment services industry for the last 8 years, where he has specialised in all aspects of information security, from PCI DSS compliance through to technical network security solutions. So expect to see some PCI-focused blogs in the future.

Make sure to say hello if you are at Hashdays as Marc will be attending.

Friday 9 September 2011

SANS European Digital Forensics and Incident Response Summit

David Stubley from 7 Elements will be speaking at this year's Digital Forensics and Incident Response Summit in London on the following topic:
A Hackers Guide to Incident Response

In the middle of an incident it is easy to lose focus on the bigger picture. This fast paced talk will draw on real world security incidents to provide you with practical technical information on how to balance the need for effective incident response with evidentially sound forensic work.

The event takes place over the 21st-22nd September; for more information see the SANS website.

We also have a discount code for the event, if you are interested then email

Friday 12 August 2011

Compliance vs Assurance

Compliance is often the sole assurance activity undertaken. But is this really enough?


Many companies take steps to ensure that they comply with industry standards and regulations, as well as requiring individual business areas to meet the organisation’s own internal policies and standards/procedures. Compliance activity is then undertaken to check that these policies and standards are met.

What Is Compliance?

Compliance activity is generally carried out to confirm that a defined baseline standard of security is reached across the broad scope of an organisation. These baseline standards though do not necessarily ensure that systems, networks and assets meet the level of security required by the organisation or the individual business area, or that the security risk sits within the organisation’s risk appetite. Compliance alone will therefore not provide assurance that the organisation is secure, but rather that the policies and standards have been met.

As such, compliance can become a ceiling rather than the baseline.

Why Do More?

In addition to compliance, assurance should also be sought. The information security threat space is a rapidly evolving environment and as such security controls need to be responsive to prevent or combat the threat. Standards can easily and quickly become out of date. Compliance alone is therefore not enough. Assurance activity will take into account the broader threat environment, and look at the risks to an asset within the context of the external environment and the criticality of the data or function that the asset represents.

What is required?

A blended approach that takes into account both the need to be compliant with policy and the ability to gain assurance is required to manage IT security risk effectively. Assurance testing would look to test a control to confirm not only that it is the right control but that it also provides the level of protection required. This provides a true assessment of the security risks faced by the asset and would expose any false sense of security that misplaced trust in a control had created.

This approach would enable organisations to satisfy themselves that they are within risk appetite by ensuring that systems and assets not only meet the standards laid out in a policy but that the level of security risk is fully understood.

Monday 18 July 2011

Smartphone Apps

At the recent OWASP AppSec EU conference, Dan Cornell provided an update on the technical issues and risks posed by the use of applications on smartphones, building upon his talk last year. The talk focused on the security testing side, provided a great introduction to this area and is well worth a look if you are involved with mobile applications.

From a wider Information Security focus and as an aid to anyone dealing with the implementation of smartphone based apps, I have developed a short questionnaire that can be used to gain an initial understanding of the application and highlight any specific areas of risk (such as sensitive data being stored on the smartphone without encryption!). This can then be used to conduct more targeted assurance activities.

The Questionnaire

Data Related:
  • Could you please describe the type of data involved with this app?
  • Could you please confirm if any information will be stored on the device?
  • If stored, will the data be encrypted within the app itself?
  • Does the app encrypt data during transit?
  • If so, what method of encryption is used?

App focused:
  • What platforms will be supported?
  • Where will the app be available for download?
  • Does the app synchronise data to other end devices (such as via iTunes to user laptop)?
  • Does the app use 3rd party active content?

Supporting Infrastructure:
  • How will the app be updated and its content managed?
  • Will web services be used to manage the app?
  • If so, will the web services infrastructure sit within owned networks?

Technical Assurance:
  • Will the app be subjected to a security code review as part of development process?
  • If so, will the report be shared?
  • Will the app be subjected to security testing as part of the development process?
  • If so, will the report be shared?

Tuesday 12 July 2011

Insider/Outsider – does it matter?

The recent BCS Information Security Specialist Group conference in Edinburgh focussed on the threat from Insiders. The main focus was on how individuals with authenticated access can do real damage to an organisation. A lot of effort within organisations goes into tackling the threat from Insiders and it is often treated as a separate category of threat. But is this right? Should the Insider threat be treated in isolation?

Defining the Insider Threat

The insider threat is often defined as someone acting from within an organisation's internal systems. This is principally treated as an individual with authenticated access, and stands in contrast to an external attacker, who originates outside of the organisation and does not have permission to access the network. These definitions result in the two threats being treated separately and in isolation from each other.

The idea of there being a difference between an insider and an outsider is predicated on the assumption that there is a clear boundary between the internal network of an organisation and the outside world. However, in today's world the reality is very different. There is often a blurred line between what is an internal system and what is the outside world. The boundary is no longer at the firewall; the real edge of your network is now the user's device, be it a smartphone, laptop or home PC (using remote access to the organisation). This is because the end user device has Internet connectivity for email or web browsing, and in many organisations egress filtering is lacking and services such as instant messaging can be used. This enables organisations' internal systems to become visible to an external attacker, even where inbound filtering has been implemented. These recent developments in end user devices therefore provide greater opportunities for an external attacker to compromise a device and use it to access an internal system.

External Attack – Inside Threat

Historically, where an attacker is able to compromise a user's device, the expected next step is to use it to attempt further compromise of the internal network. This approach can be quite noisy and may trigger internal alerting, but if successful it enables the attacker to gain access to internal systems. This method makes it obvious that an external attacker has broken into the network.

However, an external attacker can also choose to masquerade as the user within the internal network and access systems and resources within the context of that user. One approach to achieve this would be for the attacker to use the user's security token that is stored within the operating system. It is possible to 'steal' this token and assume the user's identity within the internal network. At this point the external attacker can be seen to be an 'insider' threat, as they are now conducting attacks against the organisation's data as an internal user would.

Detecting the Insider – External Attack

An external attacker can develop this technique to gain the information required without arousing suspicion. If the compromised user doesn't have the level of access required, the attacker can steal other valid tokens and use this approach to escalate their privileges, even to administrator level. Doing so enables the attacker to target information held within the network without the need to conduct further hacking or compromises. The key point here is that the traffic generated would look like normal network traffic and access to systems would be as valid users.

Insider / Outsider - Does it Matter?

This example demonstrates that an external attacker can act as an 'Insider' Threat would. Organisations continue to define the Insider threat as the authenticated user within their firewall; however, recent developments in end user devices have provided greater opportunities for an external attacker to access a network masquerading as an 'Insider'. The Insider Threat is based on outmoded definitions of what is inside the network and no longer effectively describes the threat. So how do we deal effectively with the 'Insider Threat'?

As an external attacker could choose to access a network and masquerade as an insider, there is little value in focussing too much on the classic insider/outsider threat categories. Instead organisations should focus their security on what an attacker could do, regardless of their starting point, and look to be able to detect, react to and recover from 'any' compromise. By taking this resilient approach to security, organisations are better equipped to identify, respond to and recover from an attack regardless of where it originated.

Tuesday 21 June 2011

APT in a nutshell

Recently I had the chance to present at OWASP AppSec EU on the topic of APT (Advanced Persistent Threat). The talk went down well, so here is a follow-up based on my slides and the questions raised by the audience.


To be fair, I have a love-hate relationship with the term APT. It is used to describe state-sponsored espionage, which I know exists and can be a significant threat to specific entities such as government departments and defence contractors. I'm also aware of APT-style attacks on corporate organisations. However, the term can be bandied around and badly applied to security 'hacks', as well as being seen as a 'China'-only issue. This is especially true when dealing with the media and certain security vendors. This is the side of APT that I hate.

So what is APT anyway?

APT is used to describe a variety of attacks but has its origins in what would be categorised as state sponsored cyber espionage. This traditional cyber espionage was concentrated on government agencies and supporting defence contractors. This has been extended to encompass a wider focus, resulting in what is known today as APT.

The objective of APT is to:

  • gain access to information,
  • maintain access in order to gather large quantities of data, and
  • serve a specific set of goals/objectives.

APT stands for Advanced Persistent Threat. So what do these terms really mean? A number of definitions exist, but from my perspective I see APT as:

Advanced – This relates to the ability of the attacker. It doesn’t however mean that they will only use custom created code to launch what is known as ‘zero day’ attacks on a network. It is important to understand that they will use the path of least resistance when looking to compromise a network. If this can be done through trivially guessable passwords then this is the method they will use, but they have the capability to research and develop new attack code if required.

Persistent – This is a key differentiator from other threat actors. The aim of those conducting APT is to gain access to information, and the information targeted is of greatest value when gathered in volume. APT attacks therefore seek to maintain access to the network for as long as is required to achieve this. Smash-and-grab attacks, such as those that target credit card information, fall within a different class of threat actor and should not be confused with APT. That said, an APT-style attack could be completed in such a fashion if the attackers are able to meet their objectives with one short attack.

Threat – The use of the term threat within the context of APT relates to the fact that this is a targeted attack, which is directed to achieve a defined purpose and has both the intent and capability to gain access to the desired information.

So how does APT differ from cybercrime?

There is a degree of cross-over between cybercrime and APT. Highly capable threat actors in both areas are highly organised, well motivated and well funded, which makes both a real threat. The key difference between an attack being classified as APT or cybercrime is the intention or driver behind the attack.

The Hype

APT has become overhyped, and the hype has been used to sell products and services. As an example, major security vendors now sell anti-APT services and products, with straplines such as:

“Do You Know if Your Network Has Been Breached by Botnets, Advanced Malware or Persistent Threats?”

"threats such as the Advanced Persistent Threat (APT). These are one of the most dangerous types of threats"

"Introduces New Security Solutions to Counter Advanced Persistent Threats"

"Enterprise Computer Protection from Advanced Malware Threats/APT"

"New Security Solutions to Counter Advanced Persistent Threats"

In a recent press release, Tom Reilly (CEO of ArcSight) explained that revenue for the second quarter is expected to be in the range of $55 million to $57 million (21%-25% growth over the same quarter last year), based partly on "growing cybercriminal activity and heightened awareness of the Advanced Persistent Threat".

Even the media carry provocative statements:

“The APT attackers, however, employ undetectable zero-day exploits and social engineering techniques against company employees to breach networks.”

All of this feeds on the fear, uncertainty and doubt that exists around the term APT and implies the big bad guys are going to get you, regardless of who you are! Media focus is clearly on China and how they are behind all APT attacks. However, this paints a very narrow picture of the reality and is predicated on the belief that all APT is state sponsored and, worse, that China is the only player.

The Reality

In reality the threat is wider than that posed by 'China'. In fact, evidence leaked as part of the WikiLeaks stories showed that the US believed that "French espionage is so widespread that the damages [it causes] the German economy are larger as a whole than those caused by China or Russia."

If we look more closely at APT, we can see that it falls into two categories. Firstly, with its roots firmly in cyber-espionage, it was focused on gaining government information, clearly an activity undertaken by nation states. Secondly, as a style of attack broadly aligned to gaining access to intellectual property (IP) and commercially sensitive data. This second category indicates that APT-style attacks may not be the preserve of state-sponsored entities alone.

The aim of targeting commercial IP is to gain access to knowledge that can provide a competitive edge, such as blue prints, merger information and strategy documents. This type of information would give competitors an advantage over their rivals and this is the real driving force behind the wider use of attacks classed as APT. These motivating factors could be attributed to individual organisations as well as governments pursuing economic growth.

Are you at risk?

If we listen to the hype then we are all at risk; even the mighty Google and the IMF have fallen foul of APT! Then again, we are not all in the same market as Google (with all its sensitive customer data) or the IMF. So the first question to ask yourself is: do you hold information that someone is willing to spend time and effort trying to obtain? If the answer is no, then you can sleep soundly at night. Well, from an APT perspective at least.

I believe that we are at the most risk when we are looking in the wrong direction. Before we became aware of APT, organisations assessed that commercial data with no value from a cybercrime or fraud perspective wouldn't be a target for hackers, and as a result breaches went unnoticed. This continues to be the case. Why do these attacks go unnoticed, and how do they breach the network in the first place? In my opinion a lot of this is down to the wrong focus within the organisation and in what they are trying to protect (if anything!).

What do I mean by this? Well, within the UK we have a number of regulatory drivers that help organisations focus their priorities. The Information Commissioner and Data Protection requirements keep an organisation focused on protecting personal sensitive data. Financial regulation and PCI-DSS keep others focused on protecting financial data, however there is no requirement other than an individual organisations risk appetite in terms of protecting intellectual property. This is exactly what those conducting APT are targeting. They are going for something you didn’t think to protect. You made a risk based decision to focus on regulatory drivers, they made the decision to target your corporate network and steal as much data as they could and then see what was useful.


So what is the solution? How do we defend against APT? In reality, in the same way you defend against other cyber threats: through resilient information security. The detail will depend on how a business approaches risk management, the level of assurance required and the organisation's risk appetite.

Any approach taken should be driven by a clear business need and understanding of the risk environment and the organisation's risk management structure. The business needs to be aware of the threat environment that they are in and be able to make informed decisions, and not just be blinkered into making regulatory based decisions only.

We also need to accept that we cannot achieve 100% security, especially through appliance-based solutions, penetration testing alone or regulatory compliance. Instead we should approach the problem from the point of view of business resilience: the ability of an organisation to be robust to attack and to detect, react to and recover from an incident.

Tuesday 19 April 2011

2011 Verizon DBIR Out - It's still the basics that matter

It's Verizon DBIR time again, which means another fascinating look at how the data breach landscape has been developing over the year.

As always the report is packed with interesting information and I'd recommend reading it all the way through. However, there were a couple of themes that emerged that I thought were worth specifically pulling out and commenting on.

It's still the basics that matter - Looking at the hacking methods mentioned in the report, right up near the top is "exploitation of default or guessable credentials", and next in line behind it is "brute force and dictionary attacks". There are two fairly obvious points that come out of this.

First is password/credential management. For large companies perhaps that means using enterprise password management solutions that allow for one-time access passwords, but even small companies can benefit from using things like password safes to ensure that strong passwords are in use for all accounts. The simple fact is that humans are bad at remembering complex strings and computers are pretty good at it, so using a password safe (whilst it brings its own risks) can be a means of moving to a stronger password approach.

Secondly, this ties into a later point in the report about the lack of detection in security. Brute force attacks are noisy and really easy to see in any security log, yet they're a tool of choice for attackers. Some basic intrusion detection/reaction capability could help to spot and defend against this kind of attack. It's well worth looking at projects like OSSEC for some options.
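As a flavour of how little code even a trivial detective control needs, here is a minimal sketch (mine, not OSSEC's) that counts failed logins per source IP across syslog-style lines and flags anything over a threshold. It uses OpenSSH's "Failed password" message as the example log format; any authentication log with a source address would do.

```python
import re
from collections import Counter

# Matches OpenSSH-style failure lines, e.g.
# "sshd[123]: Failed password for invalid user bob from 10.0.0.2 port 22 ssh2"
FAILED = re.compile(
    r"Failed password for (?:invalid user )?\S+ from (\d+\.\d+\.\d+\.\d+)"
)


def flag_brute_force(log_lines, threshold=5):
    """Return the set of source IPs with more than `threshold` failed logins."""
    failures = Counter()
    for line in log_lines:
        match = FAILED.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip for ip, count in failures.items() if count > threshold}
```

Real tooling adds time windows, whitelisting and alerting, but even this level of analysis would surface the noisy brute-force attacks the report describes.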

Carrying on the theme of basic controls that could really help, the number of attackers using backdoors on compromised systems is noticeable. Basically, once attackers get onto your systems they'll look to gain persistent access, and backdoors/rootkits are a good way to go about it.

Defending against these isn't always easy, as they'll commonly evade malware detection (the DBIR reckons that 60% of malware seen in breaches had been customised), but firewall egress filtering could be another (rarely deployed) option to combat this.

It's worth thinking about your server estate and, for DMZs, asking "do my servers really need to initiate any outbound connections to the Internet?". In many cases they won't (beyond things like DNS lookups), so block outbound access. At the least, it'll make the attacker's job harder and make it easier to detect what they're doing.

The last one I'd comment on is the persistent lack of detective controls (IDS/log review). It's a shame to see this, as the systems to gather the relevant information exist and have been around for a long time, but companies still seem to view them as not worth the cost. The fact that data relevant to the breach was available in 69% of cases, yet log analysis and review accounted for a whopping 0% of breach detections, is pretty stark.

All in all it's a really interesting read, and hopefully we'll see many more efforts along the same lines, bringing some hard data to help guide security investment.

Sunday 6 March 2011

Report: Joint IISP and ISACA event in Scotland

The Scottish branch of the IISP and ISACA Scotland hosted a joint talk on the 17th of February at the English Speaking Union, with guest speaker Louise Behan of the Lothian and Borders Police Specialist Fraud Unit.

Louise described the remit of the fraud unit, which includes:

  • investigation of contraventions of the Company, Insolvency and Bankruptcy laws
  • all public sector corruption enquiries
  • major or complex enquiries involving offences against the financial industry
  • major embezzlements, particularly those perpetrated by professional persons such as solicitors, accountants and bank officials
  • enquiries from government departments, the Procurator Fiscal and the Crown Office Fraud Unit
  • multiple account enquiries, e.g. cross-firing of cheques
  • collusive merchant enquiries
  • counterfeit credit card and major credit/debit card enquiries
  • complex enquiries from other Forces and Agencies

A significant amount of casework the fraud squad deals with originates in people misusing systems in place or getting round technical controls. In terms of honesty, Louise pointed out that a recent survey showed that most people (80%) are not 100% honest. When times are hard, as now, crime tends to increase, as people struggle with difficult economic circumstances. Very often, the cases dealt with by the Unit have as their main suspect someone with no criminal record. This also means that profiling fraudsters is hard – and of course the best ones are very good at hiding it.

Louise estimated around a third of the fraud she personally sees is internal – with an employee or manager of a company discovering a weak control that can be subverted, and using their position to hide the evidence of fraud. She provided a quick look at some niche frauds, where a criminal has found an area where they could make money in the short term – such as forging one pound coins. It’s unexpected, and when the fakes were good enough, it remained undiscovered for many years. Even ‘small’ frauds can evidently mount up to significant losses, and so the point is that a long term ‘small’ scheme can have just as much impact as a short term ‘big hit’.

In subverting IT controls for financial gain, the risk can be perceived by individuals as very low, whereas the reward can be very high. For example, mortgage fraud can net large sums of money. For the fraud unit investigating these crimes, the issue is that if the controls are too poor, gaining enough evidence to present a reasonable case can be a challenge – so if you don’t keep solid audit logs and implement strong access controls, this may lead to insufficiency of evidence when your systems are breached, without which the Procurator Fiscal cannot take the case forward to prosecution.

The nature of fraud means that investigations often take some time, and there are evidential requirements which can take time to fulfil, such as obtaining and executing warrants to obtain information, which must be appropriately authenticated, with continuity of evidence ensured during seizure. Recovery of money or losses depends entirely on the criminal: if there are recoverable assets, the police always look at the potential for compensation; however, if the fraud is remote (e.g. perpetrated from outwith the UK) the likelihood of recovery tends to be lower. And if the criminal has no assets then recovery isn't possible.

The aim of the unit is to make the life of the fraudster as unattractive and uncomfortable as possible. It's not likely to be an aim with an end in sight - fraud is only limited by human ingenuity - but we continue nonetheless to try to keep up, or sometimes get a little ahead.

So what can you do to help?

  • Keep an eye out for known individuals – the Fraud Squad and SCDEA do provide information to intelligence departments in banks
  • Audit rigorously and log everything
  • Use mystery shoppers to test in store security procedures
  • Make examples of the ones who get caught – especially for internal fraud
  • Understand the mind of the fraudster – how would YOU subvert your controls?

The next ISACA meeting is on the 17th of March, and the next IISP Scotland meeting will be towards the end of May.

Friday 18 February 2011

Day 3 - RSA, ISACA and IISP

After all the B-Sides fun and games, I managed to get an Expo pass for RSA (thanks to the Damballa folks) so thought I should pop in, chat to a few key folks and grab some swag to take home.


  • I got to take apart the Enigma machine at the NSA booth!
  • Almost won a Kindle at the M86 quiz show
  • Had a good chat with the Australians at the Cryptsoft booth
  • Learned all about Splunk
  • Had to sit through a very content-free Kaspersky talk
  • Gal Shpantzer gave me a good Becrypt run-through
  • Had far too many burgers at the Qualys bar

Made it back to the hotel in time to try and repack my bags with all the swag (see the picture below) before heading off to the airport. Fairly uneventful trip the 8000 miles back home and arrived in Edinburgh just in time to host a talk by Louise Behan of the Specialist Fraud Squad on behalf of ISACA Scotland and the Scottish branch of the Institute of Information Security Professionals. It's been a long week :-)

From B-SidesSF 2011

Day 2 of B-Sides San Francisco

Day 2 of B-Sides SF (pictures all up now on my Picasa page)

After very little sleep, I headed over to Zeum early, and as one of the volunteers was missing, presumed sick, I volunteered to be a Roamer for the day. Red T-shirt (would this mean I wasn't going to return?), earpiece and simple duties (keep an eye out for people going where they shouldn't).

There were so many good speakers on Day 2 that I found myself darting between them to try and pick up content, but I did enjoy Anton Chuvakin's talk on SIEM. The key point he made was that you need to plan resources for it. To quote: "If you only have an hour a month to do SIEM, stick to log management. Dedicate at least 50% of someone's time."

Andrew Hay, Richard Bejtlich and Travis Reese's talk on Cyber Security Marketecture was well received too. There were some arguments about particular points, but in a generally productive spirit. I think they focused a little too hard on APT to the exclusion of all else, but they did cover APT in a rational way, unlike the usual FUD. I liked the comment about Stuxnet - not very advanced or persistent, but definitely a cyber warfare threat.

I also managed to get brief interviews with Jack Daniel of Astaro and Jon Speer of Tripwire to find out what sponsors get out of BSides. They both had remarkably similar viewpoints. They see value from:
  • Connecting with security professionals
  • Learning from and teaching the security community
  • Meeting potential employees
  • Having fun
For those companies not sure, just get involved. Sponsor as little or as much as you can, and be part of the community.

After lunch I managed to win The Manga Guide to Databases in the raffle (Excellent Prize) before the BSidesSF Carousel ride!

Dave Shackleford and Andrew Hay's "A Brief History of Hacking" was also very entertaining, covering the good and bad hacker films along the way.

Robert Zigweid of IOActive then spoke about a topic quite close to our hearts here at 7 Elements - threat modelling taxonomy. He splits threats into the following types:

  • spoofing
  • tampering
  • repudiation
  • information disclosure
  • denial of service
  • privilege escalation
And these impact categories:
  • damage potential
  • reproducibility
  • exploitability
  • affected users
  • discoverability
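The impact categories above lend themselves to a simple risk score. As a minimal sketch (my own illustration, not from the talk), assuming each category is rated 1-10 and weighted equally:

```python
# Illustrative scoring of a threat against the five impact categories above.
# Ratings of 1-10 per category, equally weighted - assumptions for this sketch.

CATEGORIES = (
    "damage_potential",
    "reproducibility",
    "exploitability",
    "affected_users",
    "discoverability",
)

def risk_score(ratings):
    """Average the per-category ratings into a single 1-10 risk score."""
    missing = [c for c in CATEGORIES if c not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    return sum(ratings[c] for c in CATEGORIES) / len(CATEGORIES)

# Example: an easily reproduced flaw affecting many users
example = {
    "damage_potential": 6,
    "reproducibility": 9,
    "exploitability": 7,
    "affected_users": 8,
    "discoverability": 5,
}
print(risk_score(example))  # 7.0
```

Real schemes often weight the categories differently or drop discoverability entirely; the point is just that the taxonomy gives you something comparable across threats.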
Damon Cortesi's talk on Developers also included Threat Modelling - it is becoming pervasive.

The EFF panel was very well received, but I only caught a small piece of it: they stressed end-to-end encryption to avoid compromise by threat sources as well as misuse by governments, and gave their view that subject lines and text messages are definitely content, while email addresses and IP addresses may be in certain circumstances.

Raffael Marty spoke on log analysis and visualisation in the cloud. This is an area likely to become all too important as more and more services are pushed to the cloud. Loggly have the concept of logging as a service, and Raffael's talk included an important piece on the need for visibility of dynamically scaling virtualised environments and the hypervisor, as well as availability.

I then said my goodbyes to the wonderful BSidesSF folks and volunteers - Banasidhe, MikD, djbphaedrus, Duckie, CindyV etc. - and headed east for the OWASP meet, where we had a very worrying discussion around the security of critical national infrastructure...