Best Practices: Live Capture and Analysis of Network Based Evidence

Network captures and traffic analyses can provide further insight into the activity of a compromised machine. This guide is specific to Linux-based tools.

1.  Recommended tools for capturing network-based evidence files include:

    • Netcat (http://nmap.org/ncat) – Creates TCP channels between devices that can be used to pass information.
    • Native Linux commands and tools (explained in further detail throughout the guide)
      • Tcpdump
      • Hd (hexdump)
      • Tcpdstat (http://staff.washington.edu/dittrich/talks/core02/tools/tools.html) – Breaks down traffic patterns and reports average transfer rates for a given libpcap-formatted capture file.
      • Snort (http://www.snort.org/) – An open source intrusion prevention and detection system.
      • Tcptrace (http://www.tcptrace.org/) – Provides data on connections such as elapsed time, bytes and segments sent/received, retransmissions, round trip times, window advertisements and throughput.
      • Tcpflow (https://github.com/simsong/tcpflow) – A tool that captures and stores communications in a convenient way for protocol analysis and debugging.

2.  There are four types of information that can be retrieved with network-based evidence (Jones, Bejtlich and Rose).

    • Full content data – Full content data includes the entire network communications recorded for analysis. It consists of every bit present in the actual packets including headers and application information.
    • Session data – Data that includes the time, parties involved and duration of the communication.
    • Alert data – Data that has been predefined as interesting.
    • Statistical data – Aggregate data that summarizes and reports on traffic patterns.

3.  Since data cannot be written to the victim machine’s disk, it is best to map a network drive or use Netcat to transfer information between the analyzing machine and the victim machine.

Warning: All files created or tools used need to include a checksum for validation against tampering. A checksum is simply the value of a file hash: if even one character in a file is changed, the hash will produce a different checksum, which helps validate content. A specific application version will have a checksum unique from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).

4.  If using Netcat to transfer logs the following commands can be used:

Command to setup a Netcat listener on the host machine: nc -v -l -p <PORT> > <LOG FILE> (for a tcpdump capture file, use the extension .lpc)

The port number is any port desired for the Netcat listener to listen on for communication. The log file is just a file for the data to be stored in.

Command to send data from a victim machine: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>

Basically, the command sends the results of a command performed on the victim machine to the listener. <COMMAND> is the command issued on the victim machine; the IP address and port are those of the host machine with Netcat listening.

5.  Begin by capturing traffic.

Command: tcpdump -n -i <NETWORK INTERFACE> -s <BYTES TAKEN FROM PACKET>

Pipe the output over Netcat or send it to a file on a network drive with the additional parameter: -w <FILE PATH>.lpc

Warning: The capture will show evidence of any unknown communications, but it will not provide detail on the results or items actually obtained in those communications (Jones, Bejtlich and Rose).

6.  The tool tcpdstat can be used from the analyzing machine to gain statistical data on the general flow of traffic in the tcpdump file. Statistics help describe the traffic patterns and communication protocols used over a period of time.

Command: tcpdstat <TCP DUMP FILE> > <RESULTS FILE>

Warning: Telnet and FTP are communication protocols that transfer data in clear text. Smarter intruders, however, will use other protocols that encrypt their communications.

7.  Snort can be used to find alert data (Jones, Bejtlich and Rose).

Command: snort -c <LOCATION OF SNORT.CONF OR ANOTHER RULESET> -r <TCPDUMP FILE> -b -l <RESULT FILE PATH>

Warning: Snort will only raise flags and alerts defined by the rule set provided. Be familiar with the rules used in order to know what type of traffic may pass unnoticed by Snort.


8.  The tool tcptrace is used to gain session information.

Command: tcptrace -n -r <TCPDUMP FILE> > <RESULT FILE PATH>

Warning: Session data can provide evidence of suspicious communications of abnormal length. Numerous attempted sessions over a short period of time could be a sign of a brute force attack on the network (Jones, Bejtlich and Rose).

9.  Tcpflow organizes and prints out full content data on a TCP stream from a given log.

Command: tcpflow -r <TCPDUMP FILE> port <SOURCE PORT NUMBER> and port <DESTINATION PORT NUMBER>

The results will be stored in a file formatted with the following name structure:

[timestampT]sourceip.sourceport-destip.destport[--VLAN][cNNNN]
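As a sketch of how that naming convention decomposes, the shell can split a hypothetical flow filename (made-up addresses and ports, basic form without the optional timestamp and VLAN parts) into its fields:

```shell
# Hypothetical tcpflow filename: 192.168.1.64:2393 -> 10.0.0.1:80 (zero-padded)
f="192.168.001.064.02393-010.000.000.001.00080"
src=${f%-*}           # text before the dash: source ip.port
dst=${f#*-}           # text after the dash: destination ip.port
srcport=${src##*.}    # the last dot-separated field is the port
dstport=${dst##*.}
echo "source port $srcport, destination port $dstport"
```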

To read the results, a hex editor is required; Linux environments include a native tool, hd, to perform the read.

Command: hd <TCPFLOW FILENAME> > <RESULT FILE PATH>
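If hd happens not to be present, the POSIX od tool gives an equivalent hex-plus-ASCII view. A minimal sketch against a hypothetical reconstructed flow file (the contents are invented for illustration):

```shell
# Make a tiny stand-in for a tcpflow output file:
printf 'GET / HTTP/1.1\r\n' > flow.dat
# -A x: hex offsets; -t x1z: one-byte hex with printable characters alongside
od -A x -t x1z flow.dat
```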

Warning: This is a great tool that can visually show the commands and inputs an intruder used in a particular stream; however, an investigator has to be aware of suspicious ports in order to retrieve quality pieces of evidence. The other tools in this guide help identify those communications (Jones, Bejtlich and Rose).

10.  When analyzing network-based evidence, it is important to be paranoid and research anything that is not obviously a familiar service, process, file or activity. The evidence found can help administrators understand weaknesses in the system in order to strengthen security and improve the standing of a case in court.

Jones, Keith J., Richard Bejtlich, and Curtis W. Rose. Real Digital Forensics. Upper Saddle River (N. J.): Addison-Wesley, 2006. Print.

Best Practices: Linux Live Analysis

There are some instances where a computer cannot be powered off for an investigation. For these circumstances a live incident response is performed. Another advantage of live investigations is that both volatile and non-volatile data can be analyzed, providing more evidence than a typical offline analysis of a hard drive. The victim machine is the target of these investigations. These best practices are specific to Linux tools.

1.  In performing a digital forensic live incident response investigation it is important to set up the environment correctly. Linux environments come with a variety of pre-installed tools that are useful in the investigation. However, for additional tools not already on the system, a disk or network drive needs to be created containing executables for the tools that will be used to uncover evidence. Recommended tools include:

    • NetCat (http://nmap.org/ncat) – Create TCP channels between devices that can be used to pass information.
    • Native Linux commands and tools (Explained in further detail throughout the guide)
      • Netstat
      • Lsof
      • Ps
      • Lsmod
      • Crontab
      • Df
      • Top
      • Ifconfig
      • Uname
      • Gdb
      • Strings

Warning: Only standalone executables are used in an investigation because software installers write to the disk, possibly overwriting data. If incriminating data was deleted by a user, there is still a chance of recovering it, because deletion does not remove it from the disk; the only way to completely get rid of data is to overwrite it. Software installers risk overwriting such data and therefore are not to be used. This is also why live investigations avoid any writes to the victim machine’s disk (Jones, Bejtlich and Rose).

2.  Since data cannot be written to disk, it is best to map a network drive or use Netcat to transfer information between the analyzing machine and the victim machine.

Warning: All files created or tools used need to include a checksum for validation against tampering. A checksum is simply the value of a file hash: if even one character in a file is changed, the hash will produce a different checksum, which helps validate content. A specific application version will have a checksum unique from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).

3.  If using Netcat to transfer logs the following commands can be used:

Command to setup a Netcat listener on host machine: nc -v -l -p <PORT> > <LOG FILE>

The port number is any port desired for the Netcat listener to listen on for communication. The log file is just a file for the data to be stored in.

Command to send data from a victim machine: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>

Basically, the command sends the results of a command performed on the victim machine to the listener. <COMMAND> is the command issued on the victim machine; the IP address and port are those of the host machine with Netcat listening.

4.  The first step in the investigation is to retrieve the time and date on the victim machine along with some basic system details. It is helpful to have this information when an investigation spans multiple devices. The time and date should be compared against a trustworthy server. The date command can be used to retrieve this information.

Command: date

Warning: This is very important for time sensitive evidence to be presented in a case (Jones, Bejtlich and Rose).

Command: ifconfig

Ifconfig retrieves network information such as the machine’s IP address and MAC address.

Command: uname -a

This command retrieves information such as the system’s hostname and operating system version.

5.  Current network connections should be analyzed next to see if there are any unusual established connections or listening ports. Netstat is a native command that can be used to see these connections; it also shows all open ports.

Command: netstat -an

Warning: Look for established connections that are not authorized or that are communicating with unfamiliar IP addresses. Listening ports above the well-known range (0–1023) are not ports normally opened by the operating system and should be flagged as suspicious (Jones, Bejtlich and Rose).

6.  Lsof is a native Linux command that stands for “list open files.” It can be used to see the executable processes running on open ports, and it displays the files that processes have opened. Unknown processes accessing a port should be flagged as suspicious and analyzed.

Command: lsof -n

7.  The Linux command ps can be used to view running processes. Suspicious processes flagged earlier can be viewed in more detail. It should be noted that if another process is found to have started around the same time as the suspicious process, it should also be flagged; the two processes might have been started by the same attack or service.

Command: ps aux

8.  It is also important to check the users currently logged into the machine, both remotely and locally. The users command can be used to see logged-in users.

Command: users

Warning: A non-authorized user may be logged in or hijacking an account remotely (Jones, Bejtlich and Rose).

9.  The internal routing table should be examined to ensure a hacker or process has not manipulated the traffic from a device. Netstat can be used to view the routing table. Unfamiliar routes or connections should be flagged as suspicious and used to formulate a hypothesis.

Command: netstat -rn

10.  The lsmod command can be used to view all loaded kernel modules.

Command: lsmod

Warning: There is a chance that a kernel module could have been trojaned; if so, this can be discovered by viewing all loaded kernel modules (Jones, Bejtlich and Rose).

11.  Scheduled Jobs on a Linux machine can be viewed with crontab. An attacker with the right privileges can schedule malicious jobs to run at designated times. A simple for loop command can be used to see the jobs each user has scheduled.

Command: for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done
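To make the loop’s mechanics visible without touching real crontabs, here is the same construction run against a small hypothetical passwd file, with echo standing in for the crontab call:

```shell
# Hypothetical /etc/passwd excerpt; the username is the first colon-delimited field:
cat > sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
alice:x:1000:1000:Alice:/home/alice:/bin/bash
EOF
# Same loop shape as above; echo stands in for `crontab -u $user -l`:
for user in $(cut -f1 -d: sample_passwd); do
    echo "crontab -u $user -l"
done
```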

Warning: A hacker can run a job at odd times, such as 2:00 AM, which would likely go unnoticed by most users (Jones, Bejtlich and Rose).

12.  Mount or df can be used to see all mounted file systems.

Command: df

Warning: Hackers can mount a file system as a way to transfer data to the victim machine.

13.  The top command can be used to show running processes and services, ranked by resource use. Foreign services should be marked as suspicious and researched.

Command: top

Service startup scripts can also be seen in the /etc/init.d directory.

Command: ls /etc/init.d

14.  Process dumps are important in reviewing the actions of a process. A live map of a process’s memory can be found under /proc. The strings command can then be used to pull any words or sentences out of the dump file; this material can be reviewed to gain an understanding of what actions a process or executable performs.

Command to find process and memory block: cat /proc/<PID>/maps

The address ranges for each piece of the process’s memory appear in the first column of the /proc/<PID>/maps output.

Command: gdb --pid=<PID>

dump memory <NETWORK DRIVE LOCATION TO WRITE MEMORY> <START ADDRESS> <END ADDRESS>

Ex: dump memory /mnt/test 0x75d5000 0xb77c8000

Command: strings <NAME OF DUMP FILE>
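As a sketch of what strings recovers, here is a hypothetical dump file with readable text embedded between non-printable bytes (the contents are invented for illustration):

```shell
# Build a fake dump: text runs separated by non-printable bytes
printf 'GET /index.html\000\001\002password=hunter2\000' > dump.bin
# strings prints runs of four or more printable characters
strings dump.bin
```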

15.  After the volatile information is analyzed, non-volatile information can be examined. This includes all logs, incriminating files stored on the system, internet history, stored emails or any other physical file on disk. Manual investigation can be used to review this material.

16.  When performing a live response investigation, it is important to be paranoid and research anything that is not obviously a familiar service, process, file or activity.

Jones, Keith J., Richard Bejtlich, and Curtis W. Rose. Real Digital Forensics. Upper Saddle River (N. J.): Addison-Wesley, 2006. Print.

File Checksums

All files created or tools used in a forensic investigation need to include a checksum for validation against tampering. A checksum is simply the value of a file hash: if even one character in a file is changed, the hash will produce a different checksum, which helps validate content. A specific application version will have a checksum unique from all other versions of the software.

A good Windows tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290). Tool use is very simple.

Command: 
<File Checksum Integrity Verifier EXECUTABLE> <FILE TO CHECKSUM>

A good tool pre-installed in most Linux environments for creating checksums is md5sum.
Command: 
md5sum <FILE TO CHECKSUM>
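As a quick, hypothetical demonstration of why this validates content (file names and contents are made up), a single changed character produces a completely different checksum:

```shell
# Two files differing by a single character produce unrelated checksums:
printf 'evidence\n' > a.txt
printf 'Evidence\n' > b.txt
md5sum a.txt b.txt
h1=$(md5sum a.txt | cut -d' ' -f1)
h2=$(md5sum b.txt | cut -d' ' -f1)
[ "$h1" != "$h2" ] && echo "checksums differ"
```

Re-running md5sum on an unchanged file always yields the same value, which is what lets a checksum prove a file was not altered.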

Ncat for Live Incident Response

When a system is vital to daily operations, it often cannot be taken offline for duplication. Because of its importance, it also cannot risk a state change, so forensic tools cannot be downloaded onto the system. In a court case, the installation of tools could be considered tampering with the evidence, because there is a chance the tools could overwrite important data. The same goes for saving data on the victim machine. A live incident response looks to collect data from a machine without changing its environment. I recommend mapping a network drive or, preferably, using Ncat to transfer information between the analyzing machine and the victim machine during a live investigation.

Ncat comes pre-installed on most Linux distributions and can be called with the ‘nc’ command. For Windows, a portable executable can be downloaded from the Nmap project (http://nmap.org/ncat).

If using Ncat to transfer logs the following commands can be used:

Command to setup an Ncat listener on the host machine: 
Linux: nc -v -l -p <PORT> > <LOG FILE>
Windows: <NCAT EXECUTABLE> -v -l -p <PORT> > <LOG FILE>

The port number is any port desired for the Ncat listener to listen on for communication. The log file is simply where the data will be stored on the analyzing host machine.

Command to send data from a victim machine: 
Linux: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>
Windows: <COMMAND> | <NCAT EXECUTABLE> <IP ADDRESS OF LISTENING MACHINE> <PORT>

Basically, the command sends the results of a command performed on the victim machine to the listening host machine. <COMMAND> is the command issued on the victim machine; the IP address and port are those of the host machine with Ncat listening. The connection can be closed with Ctrl+C or Ctrl+D or by closing the terminal/command prompt. Once closed, the listener will write all received data to the output file.

Not only can Ncat be used to send command output, it can also be used to listen for text or file transfers.


Overall, it is a clean, easy-to-use tool for transferring information between machines.

Recovering Data with Autopsy

A little while back, a friend brought me a USB drive and asked if I could attempt to retrieve any data off the device. After some research, I found the open source tools Autopsy and Sleuthkit, which can be used for recovering data. I found these tools very helpful in retrieving deleted data off a drive. The web interface is not the easiest to use, but it is still an effective recovery tool.

Autopsy is a web user interface tool that utilizes the Sleuthkit forensics toolkit. Backtrack comes with Autopsy and Sleuthkit already installed, but on any other Debian-based Linux system they can be installed with:

sudo apt-get install autopsy
sudo apt-get install sleuthkit

Autopsy will perform forensics on a disk or storage image file, anything from an image of a hard drive partition to a USB device. An image file can be created of any attached drive. In Linux, to view the currently attached drives, use the command:

fdisk -l

The System column provides a description of each partition; in this case, the last item printed was that of the attached USB. Once the desired drive has been located, create an image of it.

dd if=<Device Boot Location> of=<Saving Location>.img bs=2048
Example: dd if=/dev/sdb1 of=usb.img bs=2048

This will take a few minutes.
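The same dd invocation can be sketched safely against a small scratch file standing in for /dev/sdb1 (the file names here are hypothetical); checksumming both sides confirms the image is a faithful copy:

```shell
# Hypothetical stand-in for a real device node:
dd if=/dev/zero of=fake_device bs=1024 count=8 2>/dev/null
# Image it exactly as above:
dd if=fake_device of=usb.img bs=2048 2>/dev/null
# A faithful image hashes identically to its source:
md5sum fake_device usb.img
```

Hashing the original device and the image immediately after acquisition is also good practice for evidentiary purposes.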

On Backtrack R1, when first trying to run Autopsy (by typing ‘autopsy’ in the terminal), I ran into an error saying Autopsy was not installed.

This happens when Autopsy does not have an initialized evidence locker. Find and run Autopsy from Applications->Backtrack->Forensics under Forensic Suites. A new terminal will open, prompting for the full path to an evidence locker location. A valid, existing location has to be given or Autopsy will error again.

When Autopsy is successfully running, it will begin serving a web user interface that can be accessed in a browser by going to:

http://localhost:9999/autopsy

On this webpage, choose to create a new case. Autopsy will ask background questions about the forensic case. Since this is most likely for personal use, these questions do not matter much; fill them out however you like.

Once prompted for an image file, enter the full path to the one created earlier. Depending on the type of storage image, select Disk or Partition. When I performed this I was analyzing a USB image and selected Partition. The third question asks for the desired type of import; choose whichever method you prefer. I like to use Symlink.

Next, the data integrity section can be ignored, but the last section needs to be verified: check to make sure the file system type is correct. Then complete the process and add the case.

Now the image can be searched for lost files. Choose to “Analyze” the image, then select “File Analysis” from the top left of the window. If this button is disabled, the wrong image type (Disk or Partition) was most likely selected; re-create the case and choose the other option. File Analysis provides a view of all files and directory contents found on the image. Now you can search through the files and ideally find the lost file to recover. Once found, the file can be exported to your local system.

Good Luck! This is just one method of data recovery.

I received the most help from this article in performing this type of forensics.

Scanning With Nmap

Nmap is an effective network-scanning tool that can be used for host and open-port service discovery. It can be downloaded from the Nmap project site (http://nmap.org).

In my experience, the scans below can be used to find hidden or special services that are not located on common ports. Different services respond to different packet types. The “-p” flag specifies a port range and is not required; however, when I stated the range, I found more running services than when I did not. My theory is that a basic nmap scan looks at popular ports, not necessarily all ports, when no range is stated.

  • Find UDP services: nmap -sU <ADDRESS> -p1-6000
  • Basic service scan: nmap -v <ADDRESS> -p1-6000
  • Basic all-service scan: nmap -A <ADDRESS> -p1-6000
  • Null port scan (does not set any bits in the TCP flag header): nmap -sN <ADDRESS> -p1-6000
  • FIN port scan (sets just the TCP FIN bit): nmap -sF <ADDRESS> -p1-6000
  • Xmas port scan (sets the FIN, PSH and URG flags): nmap -sX <ADDRESS> -p1-6000

Ping Sweep

Nmap is a great tool for performing a network ping sweep; however, there is an effective way to perform a ping sweep without any additional installation. A FOR loop can be used to perform consecutive pings.

Ping Sweep FOR Loop: FOR /L %i in (<Host Number Start (0-255)>,1,<Ending Host Number (0-255)>) do @ping -n 1 <Network Prefix>.%i | find "Reply"

The FOR loop starts at the stated beginning host number on the given network prefix and sends a single ping. Once a reply has been received (or the ping times out), the loop continues with the next host number. For example, say the network prefix is 192.168.0 and we want to ping host numbers 3 through 43: we enter 3 as the beginning host number and 43 as the ending host number. The 1 between the two parameters is the step, telling the loop to increase the host number by one on each iteration. This pings each host in the specified network range, performing a ping sweep.

Windows Example:

The following command ping sweeps addresses in the range 192.168.100.1 – 192.168.100.255

FOR /L %i in (1,1,255) do @ping -n 1 192.168.100.%i | find "Reply"

The same function can be done in the Linux Terminal.

Linux Ping Sweep:

Linux is slightly different but follows almost the same pattern. Here -c 1 sends a single ping and -W 1 limits the wait for a reply to one second (on Linux, -t sets the TTL rather than a timeout):

for i in {0..255}; do ping -c 1 -W 1 <IP PREFIX>.$i | grep 'from'; done
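The loop’s address generation can be sketched without sending any packets; echo stands in for the ping pipeline, the prefix is hypothetical, and seq replaces brace expansion for portability to plain sh:

```shell
prefix="192.168.100"           # hypothetical network prefix
for i in $(seq 0 3); do        # 0..3 keeps the demo short; a real sweep uses 0..255
    echo "would ping ${prefix}.${i}"
done
```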

Mount a Network Drive Linux

Mount a Linux Drive in Linux Machine:

  1. Create a directory for mount: mkdir /<mount place>
  2. Mount Drive: mount <Linux file system address>:/<share> /<mount place>
  3. View Contents of mounted drive: ls /<mount place>

Mount a Windows Drive in a Linux Machine:

  1. Create a directory for mount: mkdir /<mount place>
  2. Mount Drive: mount -t smbfs //<file system address>/<share> /<mount place> -o username=<username>,password=<password>
  3. View contents of mounted drive: ls /<mount place>

 

In a Linux machine, for a Windows drive, it is required to state that the drive uses a Samba file system; this notifies Linux of how to read the drive. (On newer kernels the smbfs type has been replaced by cifs.)

Terminal User Commands

List Users: cat /etc/passwd | grep "/home" | cut -d: -f1 && cat /etc/passwd | grep "/root" | cut -d: -f1

The command ‘cat /etc/passwd | grep "/home" | cut -d: -f1’ on its own will list all users with home directories under /home. However, root’s home is not under that directory, so I added the additional statement to grep for the /root directory, which matches root.
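The pipeline can be sketched against a small hypothetical passwd excerpt so the output is predictable:

```shell
# Hypothetical /etc/passwd excerpt:
cat > sample_passwd2 <<'EOF'
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
alice:x:1000:1000::/home/alice:/bin/bash
bob:x:1001:1001::/home/bob:/bin/bash
EOF
# Users with homes under /home, then root (whose home is /root):
cat sample_passwd2 | grep "/home" | cut -d: -f1
cat sample_passwd2 | grep "/root" | cut -d: -f1
```

Note that service accounts such as daemon are filtered out because their home directories live elsewhere.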

Add User: adduser <username> OR useradd <username>

Remove User: userdel <username>

Create User Group: groupadd <group name>

Add User to a Group: usermod -a -G <group name> <username>

Remove User from a Group:

  1. vi /etc/group
  2. Find the group and delete the user from its details
  3. Save the file (hit ESC then type :wq ENTER)

Delete Group: groupdel <group name>