Best Practices: Windows Live Analysis

There are some instances where a computer cannot be powered off for an investigation. In these circumstances a live incident response is performed. A further advantage of live investigations is that both volatile and non-volatile data can be analyzed. The victim machine is the target of these investigations. These best practices are tool-specific to Windows machines; however, some of the commands will work in other environments.

1.  When performing a digital forensic live incident response investigation, it is important to set up the environment correctly. A disk or network drive needs to be created containing the executables for the tools that will be used to uncover evidence. Recommended tools include:

    • Sysinternals Suite (http://technet.microsoft.com/en-us/sysinternals) – Contains numerous forensic tools.
    • FPort (http://www.scanwith.com/Fport_download.htm) – Enumerates open ports and the executables bound to them.
    • UserDump (http://www.microsoft.com/en-us/download/details.aspx?id=4060, contained in User Mode Process Dumper) – Creates memory dumps of specific processes.
    • NetCat (http://nmap.org/ncat) – Creates TCP channels between devices that can be used to pass information.

Warning: Only standalone executables are used in an investigation because software installers write to the victim machine's disk. If incriminating data was deleted, there is still a chance of recovering it, because deletion does not remove the data from disk; the only way to completely destroy data is to overwrite it. Installers risk overwriting exactly such data and therefore must not be used. This is also why live investigations avoid any writes to the victim machine's disk (Jones, Bejtlich and Rose).

2.  Since data cannot be written to disk, it is best to map a network drive or use Netcat to transfer information between the analysis machine and the victim machine.

Warning: All files created and tools used need an accompanying checksum for validation against tampering. A checksum is the value produced by hashing a file; if even one character in the code or file changes, the hash produces a different checksum, which makes it possible to validate content. A specific application version will have a unique checksum different from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).
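
For example, a checksum can be generated with FCIV as in the following sketch (the file name evidence.log is hypothetical; -sha1 selects the SHA-1 algorithm instead of the default MD5):

Command: fciv.exe evidence.log -sha1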

3.  The first step in the investigation is to retrieve the time and date on the victim machine. This information is helpful when an investigation spans multiple devices. The time and date should be compared against a trustworthy time server.

Warning: This is very important for time sensitive evidence to be presented in a case (Jones, Bejtlich and Rose).
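
On Windows, the native date and time commands report this information without prompting for a new value when given the /t switch:

Command: date /t
Command: time /t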

4.  Current network connections should be analyzed next to see whether any unusual connections are established to the computer or any unexpected ports are listening. The native command netstat -an shows these connections along with all open ports.

Warning: Look for established connections that are not authorized or that are communicating with unfamiliar IP addresses. Ports higher than 515 are not normally opened by the operating system and should be flagged as suspicious (Jones, Bejtlich and Rose).

5.  FPort is used to see the executables running on open ports. Unknown processes bound to a port should be flagged as suspicious and analyzed.
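
As a sketch, assuming the common FPort build's /p switch, the listing can be sorted by port number so unusual high ports stand out:

Command: fport /p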

6.  On machines older than Windows 2003, a NetBIOS name, rather than an IP address, was used to identify a machine in connection records found in the event log. To validate the machine's identity as unique, the command nbtstat -c can be used.

Warning: Hackers could change a machine's NetBIOS name, perform an attack, and then change the name back. The logs would then show the changed name rather than the current one, leading investigators to a dead end. This is why it is important to check for a unique NetBIOS name (Jones, Bejtlich and Rose).

7.  It is also important to check which users are currently logged into the machine, both remotely and locally. The Sysinternals tool PsLoggedOn can perform this check.

Warning: A non-authorized user may be logged in or hijacking an account remotely (Jones, Bejtlich and Rose).
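
Run with no arguments, PsLoggedOn reports local and remote logons on the current machine; it can also be pointed at another machine (the computer name below is a placeholder):

Command: psloggedon
Command: psloggedon \\<COMPUTER NAME>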

8.  The internal routing table should be examined to ensure a hacker or rogue process has not manipulated traffic from the device. The command netstat -rn can be used to view the routing table. Unfamiliar routes or gateways should be flagged as suspicious and used to formulate a hypothesis.

9.  The Sysinternals tool PsList can be used to view running processes, including any processes flagged earlier. Note that if another process started around the same time as a suspicious process, it should also be flagged; the two processes might have been started by the same attack or service.
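
The -t switch displays processes as a tree, which helps reveal parent-child relationships between a suspicious process and anything it spawned:

Command: pslist -t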

10.  The Sysinternals tool PsService is used to look at running services. Services without descriptions are not obviously maintained by the operating system and should be treated as suspicious.

Warning: Services are used to hide attacking programs and should be analyzed carefully (Jones, Bejtlich and Rose).
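
PsService's query command lists each installed service with its status and description, which makes description-less services easy to spot:

Command: psservice query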

11.  Scheduled jobs on a Windows machine can be viewed with the at command. An attacker with the right privileges can schedule malicious jobs to run at designated times.

Warning: A hacker can run a job at odd times, such as 2:00 AM, which would likely go unnoticed by most users (Jones, Bejtlich and Rose).
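
Run with no arguments, at lists the jobs currently scheduled on the machine:

Command: at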

12.  Examining open files may reveal information relevant to an investigation. The Sysinternals tool PsFile is used to list remotely opened files that cannot be immediately viewed on the victim machine.
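
Run with no arguments, PsFile lists files on the local system that remote users have opened:

Command: psfile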

13.  Process dumps are important for reviewing the actions of a process. The tool UserDump can be used to create a dump file of a suspicious process.
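
A sketch of the usage, writing the dump to a mapped network drive (the drive letter and file name are hypothetical) so that nothing is written to the victim's local disk:

Command: userdump <PID> Z:\evidence\suspect.dmp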


14.  Next, the Sysinternals tool Strings can be used to pull out any words or sentences found in the dump file. This material can be reviewed to gain an understanding of what actions a process or executable performs.
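
For example, redirecting the output to the network drive keeps the results off the victim machine (file names hypothetical):

Command: strings Z:\evidence\suspect.dmp > Z:\evidence\suspect-strings.txt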



15.  After the volatile information is analyzed, non-volatile information can be examined. This includes all logs, incriminating files stored on the system, internet history, stored email, and any other physical files on disk. Manual investigation can be used to review this material.

16.  When performing a live response investigation, it is important to be paranoid and research anything that is not an obviously familiar service, process, file, or activity.

 

Best Practices: Live Capture and Analysis of Network Based Evidence

Network captures and traffic analysis can provide further information about, and understanding of, the activity of a compromised machine. This guide is specific to Linux-based tools.

1.  Recommended tools for capturing network-based evidence files include:

    • Netcat (http://nmap.org/ncat) – Creates TCP channels between devices that can be used to pass information.
    • Native Linux commands and tools (explained in further detail throughout this guide)
      • tcpdump
      • hd (hexdump)
      • tcpdstat (http://staff.washington.edu/dittrich/talks/core02/tools/tools.html) – Breaks down traffic patterns and reports average transfer rates for a given libpcap-formatted capture file.
      • Snort (http://www.snort.org/) – An open source intrusion prevention and detection system.
      • tcptrace (http://www.tcptrace.org/) – Provides data on connections such as elapsed time, bytes and segments sent/received, retransmissions, round trip times, window advertisements, and throughput.
      • tcpflow (https://github.com/simsong/tcpflow) – Captures and stores communications in a form convenient for protocol analysis and debugging.

2.  There are four types of information that can be retrieved from network-based evidence (Jones, Bejtlich and Rose).

    • Full content data – The entire network communication recorded for analysis; every bit present in the actual packets, including headers and application data.
    • Session data – The time, parties involved, and duration of a communication.
    • Alert data – Traffic that has been predefined as interesting.
    • Statistical data – Summary data describing overall traffic patterns.

3.  Since data cannot be written to disk, it is best to map a network drive or use Netcat to transfer information between the analysis machine and the victim machine.

Warning: All files created and tools used need an accompanying checksum for validation against tampering. A checksum is the value produced by hashing a file; if even one character in the code or file changes, the hash produces a different checksum, which makes it possible to validate content. A specific application version will have a unique checksum different from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).

4.  If using Netcat to transfer logs, the following commands can be used:

Command to set up a Netcat listener on the analysis machine: nc -v -l -p <PORT> > <LOG FILE> (for tcpdump files use the extension .lpc)

The port number is any port on which the Netcat listener should listen for communication. The log file is simply the file in which the data will be stored.

Command to send data from the victim machine: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>

This sends the results of a command executed on the victim machine across the network. <COMMAND> is the command issued on the victim machine; the IP address and port belong to the analysis machine where Netcat is listening.
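
As a concrete sketch (the IP address 192.168.1.50, port 2222, interface eth0, and file names are hypothetical):

On the analysis machine: nc -v -l -p 2222 > capture.lpc
On the victim machine: tcpdump -n -i eth0 -s 0 -w - | nc 192.168.1.50 2222

Here tcpdump writes its raw capture to standard output (-w -), which Netcat carries to the listener; -s 0 captures full packets.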

5.  Begin by capturing traffic.

Command: tcpdump -n -i <NETWORK INTERFACE> -s <BYTES TAKEN FROM PACKET>

Pipe the output over Netcat, or write it to a file on a network drive with the additional parameter: -w <FILE PATH>.lpc

Warning: The capture file will show evidence of any unknown communications. It will not by itself provide detail on the results or items actually obtained in those communications (Jones, Bejtlich and Rose).
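
A sketch writing directly to a mounted network drive (the interface and path are hypothetical; -s 0 takes the full packet):

Command: tcpdump -n -i eth0 -s 0 -w /mnt/evidence/capture.lpc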

6.  The tool tcpdstat can be run from the analysis machine against the capture file to gain statistical data on the general flow of traffic found in the tcpdump output. Statistics help describe the traffic patterns and communication protocols used over a period of time.

Command: tcpdstat <TCPDUMP FILE> > <RESULTS FILE>

Warning: Telnet and FTP are communication protocols that transfer data in clear text. Smarter intruders, however, will use protocols that encrypt their communications.

7.  Snort can be used to find alert data (Jones, Bejtlich and Rose).

Command: snort -c <LOCATION OF SNORT.CONF OR ANOTHER RULESET> -r <TCPDUMP FILE> -b -l <LOG DIRECTORY>

Warning: Snort will only raise alerts according to the rule set provided. Be familiar with the rules used in order to know what type of traffic may pass unnoticed by Snort.
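
A sketch using the conventional rule set location and a hypothetical log directory:

Command: snort -c /etc/snort/snort.conf -r capture.lpc -b -l /mnt/evidence/snort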


8.  The tool tcptrace is used to gain session information.

Command: tcptrace -n -r <TCPDUMP FILE> > <RESULT FILE PATH>

Warning: Session data can provide evidence of suspicious communications of abnormal length. Numerous attempted sessions over a short period of time could be a sign of a brute force attack on the network (Jones, Bejtlich and Rose).

9.  Tcpflow organizes and prints out full content data for a TCP stream from a given capture file.

Command: tcpflow -r <TCPDUMP FILE> port <SOURCE PORT NUMBER> and <DESTINATION PORT NUMBER>

The results will be stored in a file formatted with the following name structure:

[timestampT]sourceip.sourceport-destip.destport[–VLAN][cNNNN]

To read the results, a hex dump viewer is required. Linux environments include a native tool, hd, to perform the read.

Command: hd <TCPFLOW FILENAME> > <RESULT FILE PATH>

Warning: This tool can visually show the commands and inputs an intruder used in a particular stream; however, the investigator has to know which ports are suspicious in order to retrieve quality pieces of evidence. The other tools in these best practices help identify those communications (Jones, Bejtlich and Rose).
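
A sketch extracting a stream on a hypothetical suspicious port and dumping it to hex for review (the generated flow file will follow the name structure shown above):

Command: tcpflow -r capture.lpc port 4444
Command: hd <GENERATED FLOW FILE> > flow.hex.txt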

10.  When analyzing network-based evidence, it is important to be paranoid and research anything that is not an obviously familiar service, process, file, or activity. The evidence found can help administrators understand weaknesses in the system in order to strengthen security and improve the standing of a case in court.

Jones, Keith J., Richard Bejtlich, and Curtis W. Rose. Real Digital Forensics. Upper Saddle River, NJ: Addison-Wesley, 2006. Print.

Best Practices: Linux Live Analysis

There are some instances where a computer cannot be powered off for an investigation. In these circumstances a live incident response is performed. A further advantage of live investigations is that both volatile and non-volatile data can be analyzed, providing more evidence than a typical offline analysis of a hard drive. The victim machine is the target of these investigations. These best practices are tool-specific to Linux machines.

1.  When performing a digital forensic live incident response investigation, it is important to set up the environment correctly. Linux environments come with a variety of pre-installed tools that are useful in the investigation. For additional tools not already installed on the system, a disk or network drive needs to be created containing the executables for the tools that will be used to uncover evidence. Recommended tools include:

    • NetCat (http://nmap.org/ncat) – Creates TCP channels between devices that can be used to pass information.
    • Native Linux commands and tools (explained in further detail throughout this guide)
      • netstat
      • lsof
      • ps
      • lsmod
      • crontab
      • df
      • top
      • ifconfig
      • uname
      • gdb
      • strings

Warning: Only standalone executables are used in an investigation because software installers write to disk, possibly overwriting data. If incriminating data was deleted by a user, there is still a chance of recovering it, because deletion does not remove the data from disk; the only way to completely destroy data is to overwrite it. Installers risk overwriting exactly such data and therefore must not be used. This is why live investigations avoid any writes to the victim machine's disk (Jones, Bejtlich and Rose).

2.  Since data cannot be written to disk, it is best to map a network drive or use Netcat to transfer information between the analysis machine and the victim machine.

Warning: All files created and tools used need an accompanying checksum for validation against tampering. A checksum is the value produced by hashing a file; if even one character in the code or file changes, the hash produces a different checksum, which makes it possible to validate content. A specific application version will have a unique checksum different from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).
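
Since the victim machine here is Linux, the native sha1sum (or md5sum) utility can also produce checksums without installing anything:

Command: sha1sum <FILE>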

3.  If using Netcat to transfer logs, the following commands can be used:

Command to set up a Netcat listener on the analysis machine: nc -v -l -p <PORT> > <LOG FILE>

The port number is any port on which the Netcat listener should listen for communication. The log file is simply the file in which the data will be stored.

Command to send data from the victim machine: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>

This sends the results of a command executed on the victim machine across the network. <COMMAND> is the command issued on the victim machine; the IP address and port belong to the analysis machine where Netcat is listening.
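
As a concrete sketch (the IP address 192.168.1.50, port 2222, and file name are hypothetical):

On the analysis machine: nc -v -l -p 2222 > victim-date.txt
On the victim machine: date | nc 192.168.1.50 2222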

4.  The first step in the investigation is to retrieve the time and date on the victim machine along with some basic system details. This information is helpful when an investigation spans multiple devices. The time and date should be compared against a trustworthy time server. The date command can be used to retrieve this information.

Command: date

Warning: This is very important for time sensitive evidence to be presented in a case (Jones, Bejtlich and Rose).

Command: ifconfig

Ifconfig retrieves network information such as the machine's IP address and MAC address.

Command: uname -a

This command retrieves information such as the system's hostname and operating system version.

5.  Current network connections should be analyzed next to see whether any unusual connections are established to the computer or any unexpected ports are listening. Netstat is a native command that can be used to see these connections and all open ports.

Command: netstat -an

Warning: Look for established connections that are not authorized or that are communicating with unfamiliar IP addresses. Ports higher than 900 are not normally opened by the operating system and should be flagged as suspicious (Jones, Bejtlich and Rose).

6.  Lsof is a native Linux command that stands for "list open files." It can be used to see the executables running on open ports, and it displays the files each process has opened. Unknown processes accessing a port should be flagged as suspicious and analyzed.

Command: lsof -n
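
To narrow the listing to network sockets only, lsof's -i option can be used:

Command: lsof -i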

7.  The Linux command ps can be used to view running processes, so suspicious processes flagged earlier can be viewed in more detail. Note that if another process is found to have started around the same time as a suspicious process, it should also be flagged; the two processes might have been started by the same attack or service.

Command: ps aux
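
To compare start times directly (useful for spotting processes launched together), ps can print each process's full start time with a custom output format:

Command: ps -eo pid,lstart,cmd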

8.  It is also important to check the users currently logged into the machine, both remotely and locally. The users command shows logged-in users.

Command: users

Warning: A non-authorized user may be logged in or hijacking an account remotely (Jones, Bejtlich and Rose).

9.  The internal routing table should be examined to ensure a hacker or rogue process has not manipulated traffic from the device. Netstat can be used to view the routing table. Unfamiliar routes or gateways should be flagged as suspicious and used to formulate a hypothesis.

Command: netstat -rn

10.  The lsmod command can be used to view all loaded kernel modules.

Command: lsmod

Warning: There is a chance that a kernel module has been trojaned; if so, it may be discovered by reviewing all loaded kernel modules (Jones, Bejtlich and Rose).

11.  Scheduled jobs on a Linux machine can be viewed with crontab. An attacker with the right privileges can schedule malicious jobs to run at designated times. A simple for loop can be used to list the jobs each user has scheduled.

Command: for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done

Warning: A hacker can run a job at odd times, such as 2:00 AM, which would likely go unnoticed by most users (Jones, Bejtlich and Rose).

12.  Mount or df can be used to see all mounted file systems.

Command: df

Warning: Hackers can mount a file system as a way to transfer data to the victim machine.

13.  The top command can be used to show the processes (including services) currently running and the resources they consume. Foreign entries should be marked as suspicious and researched.

Command: top

Startup scripts for services can also be seen in the /etc/init.d directory.

Command: ls /etc/init.d

14.  Process dumps are important for reviewing the actions of a process. A live map of a process's memory can be found in the /proc file system, and a dump of that memory can be taken with gdb. The strings command can then be used to pull out any words or sentences found in the dump file. This material can be reviewed to gain an understanding of what actions a process or executable performs.

Command to find a process's memory blocks: cat /proc/<PID>/maps

The address range for each mapped memory region appears in the first column of the /proc/<PID>/maps output.

Command: gdb --pid=<PID>

dump memory <NETWORK DRIVE LOCATION TO WRITE MEMORY> <ADDRESS RANGE>

Ex: dump memory /mnt/test 0x75d5000 0xb77c8000

Command: strings <NAME OF DUMP FILE>

15.  After the volatile information is analyzed, non-volatile information can be examined. This includes all logs, incriminating files stored on the system, internet history, stored email, and any other physical files on disk. Manual investigation can be used to review this material.

16.  When performing a live response investigation, it is important to be paranoid and research anything that is not an obviously familiar service, process, file, or activity.

Jones, Keith J., Richard Bejtlich, and Curtis W. Rose. Real Digital Forensics. Upper Saddle River, NJ: Addison-Wesley, 2006. Print.