Somethingk: Tech Blog

LIVECAP Project v1.0

By K4Paul | April 2, 2013 | Forensics, Windows | Automated Tools, Live Analysis, Livecap

There are tools available, such as Windows Forensic Toolchest, that automate a live Windows forensic investigation. However, these tools are proprietary and require a purchase fee. The Livecap project, started by Francis Mensah, is an open source Windows forensic alternative. The tool was developed entirely as a contribution to anyone interested in the open source forensic community. It is publicly available on Google Code (http://code.google.com/p/livecap-project/).

The Livecap project is a forensic framework intended to simplify the task of forensic live capture. It is designed to automate the live forensic investigation and provide a formatted HTML report of the findings.


All the user needs to do is specify the source of the tools that will be used, in addition to a few configuration details, and Livecap does the rest. Livecap adheres to standard forensic practices, such as not doing anything that could tamper with forensic evidence on the victim workstation from which information is being captured. Through the use of a client/server architecture, Livecap transfers all of its data from the victim workstation to the forensic workstation over a TCP/IP connection. Where this approach is not feasible, the tool also supports other storage means, including a mounted remote drive and attached USB storage. It is, however, recommended that the client/server TCP/IP connection be used, with the client run from a CD-ROM on the victim workstation. This guarantees the least interference with forensic evidence.

Best Practices: Windows Live Analysis

By K4Paul | April 2, 2013 | Forensics, Windows | Best Practices, Live Analysis

There are some instances where a computer cannot be powered off for an investigation. For these circumstances a live incident response is performed. Another advantage of live investigations is that both volatile and non-volatile data can be analyzed. The victim machine is the target of these investigations. These best practices are specific to tools for Windows machines; however, some of the commands will work in other environments.

1.  In performing a digital forensic live incident response investigation, it is important to set up the environment correctly. A disk or network drive needs to be created containing standalone executables of the tools that will be used to uncover evidence. Recommended tools include:

    • Sysinternals Suite (http://technet.microsoft.com/en-us/sysinternals) – Contains numerous forensic tools.
    • FPort (http://www.scanwith.com/Fport_download.htm) – Enumerates open ports and the executables running on them.
    • UserDump (http://www.microsoft.com/en-us/download/details.aspx?id=4060, contained in User Mode Process Dumper) – Creates memory dumps of specific processes.
    • NetCat (http://nmap.org/ncat) – Creates TCP channels between devices that can be used to pass information.

Warning: The reason only standalone executables are used in an investigation is that software installers write to disk, potentially overwriting data. If incriminating data was deleted, there is still a chance of uncovering it, because deletion does not remove it from disk. The only way to completely get rid of data is to overwrite it. Since software installers could overwrite such data, they are not to be used. This is also why live investigations do not involve any type of write to the victim machine’s disk (Jones, Bejtlich and Rose).

2.  Since data cannot be written to disk, it is best to map a network drive or use Netcat to transfer information between the analyzing machine and the victim machine.
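
For example, a drive on the forensic workstation can be mapped with the native net use command (a sketch; the IP address and share name below are hypothetical placeholders):

Command: net use Z: \\<FORENSIC WORKSTATION IP>\<SHARE NAME>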

Warning: All files created or tools used need to include a checksum for validation against tampering. A checksum is the value produced by hashing a file; if even one character in the code or file is changed, the hash will produce a different checksum. This helps validate content. A specific application version will have a unique checksum different from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).

3.  The first step in the investigation is to retrieve the time and date on the victim machine. It is helpful to have this information when an investigation spans multiple devices. The time and date should be compared against a trustworthy server.
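
On Windows, the native date and time commands with the /t switch display the current values without prompting to change them (the output can be sent over Netcat as described above):

Command: date /t
Command: time /t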

Warning: This is very important for time sensitive evidence to be presented in a case (Jones, Bejtlich and Rose).

4.  Current network connections are analyzed next to see if there are any unusual connections established to the computer or ports listening. The native command netstat -an can be used to see these connections. It also shows all open ports.

Warning: Look for established connections that are not authorized or that are communicating with unfamiliar IP addresses. Ports higher than 515 are not normally opened by the operating system and should be flagged as suspicious (Jones, Bejtlich and Rose).

5.  FPort is used to see the executables running on open ports. Unknown processes accessing a port should be flagged as suspicious and analyzed.
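
FPort needs no arguments for a basic listing; run the executable from the tools disk:

Command: fport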

6.  On machines older than Windows 2003, NetBIOS names were used instead of the IP address to label a machine in connection records found in the event log. To validate that the machine’s identity is unique, the command nbtstat -c can be used.

Warning: Hackers could change a machine’s NetBIOS name, perform an attack and then change it back or to another name. The logs would then show the changed name and not the current one, leading investigators to a dead end. This is why it is important to check for a unique NetBIOS name (Jones, Bejtlich and Rose).

7.  It is also important to check the users currently logged into a machine remotely and locally. The Sysinternals tool PsLoggedOn can perform this check.
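
Run with no arguments, PsLoggedOn shows local and remote logon sessions on the current machine (a remote computer name can also be supplied):

Command: psloggedon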

Warning: A non-authorized user may be logged in or hijacking an account remotely (Jones, Bejtlich and Rose).

8.  The internal routing table should be examined to ensure a hacker or process has not manipulated the traffic from a device. The command netstat -rn can be used to view the routing table. Unfamiliar routes or connections should be flagged as suspicious and used to formulate a hypothesis.

9.  The Sysinternals tool, PsList, can be used to view running processes. Processes flagged earlier can be viewed. It should be noted that if another process was found to have started around the same time as the suspicious process, it should also be flagged. The two processes might have been started by the same attack or service.
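
Run with no arguments, PsList lists each process along with its elapsed time, which helps spot processes started around the same moment:

Command: pslist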

10.  The Sysinternals tool, PsService, is used to look at running services. Services that do not contain descriptions are not obvious services maintained by the operating system and are suspicious.
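
Run with no arguments, PsService lists installed services along with their status information:

Command: psservice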

Warning: Services are used to hide attacking programs and should be analyzed carefully (Jones, Bejtlich and Rose).

11.  Scheduled jobs on a Windows machine can be viewed with the at command. An attacker with the right privileges can schedule malicious jobs to run at designated times.
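
Run with no arguments, at lists the jobs currently scheduled on the machine:

Command: at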

Warning: A hacker can run a job at odd times, such as 2:00 AM, which would likely go unnoticed by most users (Jones, Bejtlich and Rose).

12.  Examining open files may reveal information more relevant to an investigation. The Sysinternals tool, PsFile, is used to look at all remotely opened files that cannot be immediately viewed on the victim machine.
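
Run with no arguments, PsFile lists the files on the machine that are opened remotely:

Command: psfile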

13.  Process dumps are important in reviewing the actions of a process. The tool, UserDump, can be used to create a dump file of a suspicious process.
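
UserDump takes the process ID of the target process and, optionally, an output path (the PID and file name below are placeholders):

Command: userdump <PID> <OUTPUT DUMP FILE>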


14.  Next, the Sysinternals tool Strings can be used to pull out any words or sentences found in the dump file. This material can be reviewed to gain an understanding of what actions a process or executable performs.
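
Strings takes the dump file as its argument; the output can be redirected to a file for review:

Command: strings <DUMP FILE> > <RESULT FILE>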



15.  After the volatile information is analyzed, non-volatile information can be examined. This includes all logs, incriminating files stored on the system, internet history, stored emails or any other physical file on disk. Manual investigation can be used to review this material.

16.  When performing a live response investigation, it is important to be paranoid and research anything that is not obviously a familiar service, process, file or activity.

Best Practices: Live Capture and Analysis of Network Based Evidence

By K4Paul | April 2, 2013 | Linux, Network Capture | Best Practices, Live Capture

Network captures and traffic analyses can provide further information and understanding of the activity of a compromised machine. This guide is specific to Linux-based tools.

1.  Recommended tools for capturing network-based evidence files include:

    • Netcat (http://nmap.org/ncat) – Create TCP channels between devices that can be used to pass information.
    • Native Linux commands and tools (Explained in further detail throughout the guide)
      • Tcpdump
      • Hd (hexdump)
      • Tcpdstat (http://staff.washington.edu/dittrich/talks/core02/tools/tools.html) – Breaks down traffic patterns and provides average transfer rates for a given libpcap-formatted capture file.
      • Snort (http://www.snort.org/) – An open source intrusion prevention and detection system.
      • Tcptrace (http://www.tcptrace.org/) – Provides data on connections, such as elapsed time, bytes and segments sent/received, retransmissions, round trip times, window advertisements and throughput.
      • Tcpflow (https://github.com/simsong/tcpflow) – A tool that captures and stores communications in a way convenient for protocol analysis and debugging.

2.  There are four types of information that can be retrieved with network-based evidence (Jones, Bejtlich and Rose).

    • Full content data – Full content data includes the entire network communications recorded for analysis. It consists of every bit present in the actual packets including headers and application information.
    • Session data – Data that includes the time, parties involved and duration of the communication.
    • Alert data – Data that has been predefined as interesting.
    • Statistical data – Summary data used for reporting on traffic patterns.

3.  Since data cannot be written to disk, it is best to map a network drive or use Netcat to transfer information between the analyzing machine and the victim machine.

Warning: All files created or tools used need to include a checksum for validation against tampering. A checksum is the value produced by hashing a file; if even one character in the code or file is changed, the hash will produce a different checksum. This helps validate content. A specific application version will have a unique checksum different from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).

4.  If using Netcat to transfer logs, the following commands can be used:

Command to set up a Netcat listener on the host machine: nc -v -l -p <PORT> > <LOG FILE> (for a tcpdump capture file, use the extension .lpc)

The port number is any port on which the Netcat listener should listen for communication. The log file is simply a file for the data to be stored in.

Command to send data from a victim machine: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>

Basically, this sends the results of a command performed on the victim machine to the listener. <COMMAND> is the command issued on the victim machine. The IP address and port are those of the host machine with Netcat listening.

5.  Begin by capturing traffic.

Command: tcpdump -n -i <NETWORK INTERFACE> -s <BYTES TO CAPTURE PER PACKET>

Pipe the output over Netcat or send it to a file on a network drive with the additional parameter: -w <FILE PATH>.lpc
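
For example (a sketch; the interface name and file path below are hypothetical):

Ex: tcpdump -n -i eth0 -s 65535 -w /mnt/evidence/capture.lpc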

Warning: The file will show evidence of any unknown communications. It will not provide detail on any results or items actually obtained in the communications (Jones, Bejtlich and Rose).

6.  The tool tcpdstat can be used from the analyzing machine to gain statistical data on the general flow of traffic found in the tcpdump file. Statistics help describe the traffic patterns and communication protocols used over a period of time.

Command: tcpdstat <TCP DUMP FILE> > <RESULTS FILE>

Warning: Telnet and FTP are communication protocols that transfer data in clear text. However, smarter intruders will use other protocols and encrypt their communications.

7.  Snort can be used to find alert data (Jones, Bejtlich and Rose).

Command: snort -c <LOCATION OF SNORT.CONF OR ANOTHER RULESET> -r <TCPDUMP FILE> -b -l <RESULT DIRECTORY>
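
For example (a sketch; the paths below are hypothetical):

Ex: snort -c /etc/snort/snort.conf -r capture.lpc -b -l /mnt/evidence/snort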

Warning: Snort will only raise flags and alerts according to the rule set provided. Be familiar with the rules used in order to know what type of traffic may pass unnoticed by Snort.


8.  The tool tcptrace is used to gain session information.

Command: tcptrace -n -r <TCPDUMP FILE> > <RESULT FILE PATH>

Warning: Session data can provide evidence of suspicious communications of abnormal length. Numerous attempted sessions over a short period of time could show signs of a brute force attack on the network (Jones, Bejtlich and Rose).

9.  Tcpflow organizes and prints out full content data on a TCP stream from a given log.

Command: tcpflow -r <TCPDUMP FILE> port <SOURCE PORT NUMBER> and <DESTINATION PORT NUMBER>

The results will be stored in a file formatted with the following name structure:

[timestampT]sourceip.sourceport-destip.destport[--VLAN][cNNNN]

To read the results, a hex dump tool is required. Linux environments include a native tool, hd, to perform the read.

Command: hd <TCPFLOW FILENAME> > <RESULT FILE PATH>

Warning: This is a great tool that can visually show commands and inputs an intruder used in a particular stream; however, an investigator has to be aware of suspicious ports in order to retrieve quality pieces of evidence. The other tools used in this guide help identify those communications (Jones, Bejtlich and Rose).

10.  When analyzing network-based pieces of evidence, it is important to be paranoid and research anything that is not obviously a familiar service, process, file or activity. The evidence found can help administrators understand weaknesses in the system in order to strengthen security and improve the standing of a case in court.

Jones, Keith J., Richard Bejtlich, and Curtis W. Rose. Real Digital Forensics. Upper Saddle River (N. J.): Addison-Wesley, 2006. Print.

Best Practices: Linux Live Analysis

By K4Paul | April 2, 2013 | Forensics, Linux | Best Practices, Live Analysis

There are some instances where a computer cannot be powered off for an investigation. For these circumstances a live incident response is performed. Another advantage of live investigations is that both volatile and non-volatile data can be analyzed, providing more evidence than a typical offline analysis of a hard drive. The victim machine is the target of these investigations. These best practices are specific to tools for Linux machines.

1.  In performing a digital forensic live incident response investigation, it is important to set up the environment correctly. Linux environments come with a variety of pre-installed tools that are useful in the investigation. However, for additional tools not already installed on the system, a disk or network drive needs to be created containing standalone executables of the tools that will be used to uncover evidence. Recommended tools include:

    • NetCat (http://nmap.org/ncat) – Creates TCP channels between devices that can be used to pass information.
    • Native Linux commands and tools (Explained in further detail throughout the guide)
      • Netstat
      • Lsof
      • Ps
      • Lsmod
      • Crontab
      • Df
      • Top
      • Ifconfig
      • Uname
      • Gdb
      • Strings

Warning: The reason only standalone executables are used in an investigation is that software installers write to disk, possibly overwriting data. If incriminating data was deleted by a user, there is still a chance of uncovering it, because deletion does not remove it from disk. The only way to completely get rid of data is to overwrite it. Since software installers could overwrite such data, they are not to be used. This is why live investigations do not involve any type of write to the victim machine’s disk (Jones, Bejtlich and Rose).

2.  Since data cannot be written to disk, it is best to map a network drive or use Netcat to transfer information between the analyzing machine and the victim machine.
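
For example, a share exported from the forensic workstation can be mounted with the native mount command (a sketch; NFS is assumed here, and the IP address and paths are hypothetical):

Command: mount -t nfs <FORENSIC WORKSTATION IP>:/<EXPORTED SHARE> /mnt/evidence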

Warning: All files created or tools used need to include a checksum for validation against tampering. A checksum is the value produced by hashing a file; if even one character in the code or file is changed, the hash will produce a different checksum. This helps validate content. A specific application version will have a unique checksum different from all other versions of the software. A good tool for creating checksums is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290).

3.  If using Netcat to transfer logs, the following commands can be used:

Command to set up a Netcat listener on the host machine: nc -v -l -p <PORT> > <LOG FILE>

The port number is any port on which the Netcat listener should listen for communication. The log file is simply a file for the data to be stored in.

Command to send data from a victim machine: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>

Basically, this sends the results of a command performed on the victim machine to the listener. <COMMAND> is the command issued on the victim machine. The IP address and port are those of the host machine with Netcat listening.
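
For example, to send the output of the date command to the listener (a sketch; the IP address and port are hypothetical):

Ex: date | nc 192.168.1.50 3000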

4.  The first step in the investigation is to retrieve the time and date on the victim machine, along with some basic system details. It is helpful to have this information when an investigation spans multiple devices. The time and date should be compared against a trustworthy server. The date command can be used to retrieve this information.

Command: date

Warning: This is very important for time sensitive evidence to be presented in a case (Jones, Bejtlich and Rose).

Command: ifconfig

Ifconfig retrieves network information such as the machine’s IP address and MAC address.

Command: uname -a

This command retrieves information such as the system’s hostname and operating system version.

5.  Current network connections are analyzed next to see if there are any unusual connections established to the computer or ports listening. Netstat is a native command that can be used to see these connections. It also shows all open ports.

Command: netstat -an

Warning: Look for established connections that are not authorized or that are communicating with unfamiliar IP addresses. Ports higher than 900 are not normally opened by the operating system and should be flagged as suspicious (Jones, Bejtlich and Rose).

6.  Lsof is a native Linux command that stands for “List Open Files.” It can be used to see the executables running on open ports, and it displays the files that processes have opened. Unknown processes accessing a port should be flagged as suspicious and analyzed.

Command: lsof -n

7.  The Linux command ps can be used to view running processes. Suspicious processes flagged earlier can be viewed in more detail. It should be noted that if another process is found to have started around the same time as the suspicious process, it should also be flagged. The two processes might have been started by the same attack or service.

Command: ps -aux

8.  It is also important to check the users currently logged into the machine remotely and locally. The users command can be used to see logged-in users.

Command: users

Warning: A non-authorized user may be logged in or hijacking an account remotely (Jones, Bejtlich and Rose).

9.  The internal routing table should be examined to ensure a hacker or process has not manipulated the traffic from a device. Netstat can be used to view the routing table. Unfamiliar routes or connections should be flagged as suspicious and used to formulate a hypothesis.

Command: netstat -rn

10.  The lsmod command can be used to view all loaded kernel modules.

Command: lsmod

Warning: There is a chance that a kernel module could have been trojaned; if this is the case, it can be discovered by viewing all loaded kernel modules (Jones, Bejtlich and Rose).

11.  Scheduled jobs on a Linux machine can be viewed with crontab. An attacker with the right privileges can schedule malicious jobs to run at designated times. A simple for loop can be used to see the jobs each user has scheduled.

Command: for user in $(cut -f1 -d: /etc/passwd); do crontab -u $user -l; done

Warning: A hacker can run a job at odd times, such as 2:00 AM, which would likely go unnoticed by most users (Jones, Bejtlich and Rose).

12.  Mount or df can be used to see all mounted file systems.

Command: df

Warning: Hackers can mount a file system as a way to transfer data to the victim machine.

13.  The top command can be used to show the processes and services currently running. Foreign entries should be marked as suspicious and researched.

Command: top

Further service startup scripts can be seen in the /etc/init.d directory.

Command: ls /etc/init.d

14.  Process dumps are important in reviewing the actions of a process. A live view of a process’s memory can be found in the /proc file system. Afterwards, the strings command can be used to pull out any words or sentences found in the dump file. This material can be reviewed to gain an understanding of what actions a process or executable performs.

Command to find process and memory block: cat /proc/<PID>/maps

The maps file lists the address ranges of each region of memory mapped by the process; these are the addresses used in the dump command below.

Command: gdb --pid=<PID>

dump memory <NETWORK DRIVE LOCATION TO WRITE MEMORY> <START ADDRESS> <END ADDRESS>

Ex: dump memory /mnt/test 0x75d5000 0xb77c8000

Command: strings <NAME OF DUMP FILE>
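
Continuing the example above, the readable text in the dump written to /mnt/test can be extracted and saved for review (the output path is hypothetical):

Ex: strings /mnt/test > /mnt/test_strings.txt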

15.  After the volatile information is analyzed, non-volatile information can be examined. This includes all logs, incriminating files stored on the system, internet history, stored emails or any other physical file on disk. Manual investigation can be used to review this material.

16.  When performing a live response investigation, it is important to be paranoid and research anything that is not obviously a familiar service, process, file or activity.

Jones, Keith J., Richard Bejtlich, and Curtis W. Rose. Real Digital Forensics. Upper Saddle River (N. J.): Addison-Wesley, 2006. Print.

Overview of Basic Windows Live Investigation Tools

By K4Paul | February 13, 2013 | Forensics, Windows | Command, PsFile, PsList, PsLoggedOn, PsService, Strings, Sysinternals Suite, UserDump

These are tools I have used for a live investigation of a target Windows machine and recommend to other users. These tools are freely available and provide a clean GUI or command-line executable.

Sysinternals Suite contains numerous forensic tools for a Windows environment. A few helpful tools for a live investigation include:

    • PsLoggedOn – Checks the users currently logged into a machine remotely and locally. A non-authorized user may be logged in or hijacking an account remotely, and this tool will display the account.
    • PsList – Views running processes. If a process was found to have started around the same time as a suspicious process, it should also be flagged as suspicious; the two processes might have been started by the same attack or service.
    • PsService – Looks at running services. Services that do not contain descriptions are not obvious services maintained by the operating system and are suspicious.
    • PsFile – Views all remotely opened files that cannot be immediately seen on the victim machine.
    • Strings – Pulls out any words or sentences found in a target file.

FPort is a tool that enumerates ports and executables running on the ports. Unknown processes accessing a port should be flagged as suspicious and analyzed.

UserDump is a tool used to create memory dumps of specific processes. Process dumps are important in reviewing the actions of a process. After the dump file is created, the Sysinternals tool Strings can be used to pull out any words or sentences found in the dump file. This material can be reviewed to gain an understanding of what actions a process or executable performs.

File Checksums

By K4Paul | February 13, 2013 | Forensics, Linux, Windows | Checksum, CMD, Command, Terminal

All files created or tools used in a forensic investigation need to include a checksum for validation against tampering. A checksum is the value produced by hashing a file; if even one character in the code or file is changed, the hash will produce a different checksum. This helps validate content. A specific application version will have a unique checksum different from all other versions of the software.

A good tool for creating checksums on Windows is File Checksum Integrity Verifier (http://support.microsoft.com/kb/841290). Tool use is very simple.

Command: 
<File Checksum Integrity Verifier EXECUTABLE> <FILE TO CHECKSUM>
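
For example (the executable and target path below are hypothetical):

Ex: fciv.exe C:\tools\nc.exe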

A good tool for creating checksums, pre-installed in most Linux environments, is md5sum.
Command: 
md5sum <FILE TO CHECKSUM>

Ncat for Live Incident Response

By K4Paul | February 13, 2013 | Forensics, Linux, Windows | CMD, Command, File Transfer, Ncat, Terminal

When a system is vital to daily operations, it often cannot be taken offline for duplication. Also, because it cannot risk the chance of a state change, forensic tools cannot be downloaded onto the system. In a court case, the installation of tools could be considered tampering with the evidence, because there is a chance the tools could overwrite important data. The same goes for saving data on the victim machine. A live incident response looks to collect data from a machine without changing its environment. I recommend mapping a network drive or, preferably, using Ncat to transfer information between the analyzing machine and the victim machine during a live investigation.

Ncat comes pre-installed on most Linux distributions and can be called with the nc command. For Windows, a portable executable can be downloaded from the Nmap project (http://nmap.org/ncat).

If using Ncat to transfer logs, the following commands can be used:

Command to set up an Ncat listener on the host machine: 
Linux: nc -v -l -p <PORT> > <LOG FILE>
Windows: <NCAT EXECUTABLE> -v -l -p <PORT> > <LOG FILE>

The port number is any port on which the Ncat listener should listen for communication. The log file is simply a file on the analyzing host machine for the data to be stored in.

Command to send data from a victim machine: 
Linux: <COMMAND> | nc <IP ADDRESS OF LISTENING MACHINE> <PORT>
Windows: <COMMAND> | <NCAT EXECUTABLE> <IP ADDRESS OF LISTENING MACHINE> <PORT>

Basically, the command sends the results of a command performed on the victim machine to the listening host machine. <COMMAND> is the command issued on the victim machine. The IP address and port are those of the host machine with Ncat listening. The connection can be closed with Ctrl+C or Ctrl+D, or by closing the terminal/command prompt. Once closed, the listener will output all received data to the output file.

Not only can Ncat be used to send command output, it can also be used to listen for text or file transfers.
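
For example, a whole file can be pushed to the listener (a sketch; the file names, IP address and port below are hypothetical):

Listener: nc -v -l -p 3000 > received_file.txt
Sender: nc 192.168.1.50 3000 < file_to_send.txt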


Overall, it is an easy-to-use, clean tool for transferring information between machines.

Recovering Data with Autopsy

By K4Paul | March 3, 2012 | Forensics, Linux | Autopsy, Data Recovery

A little while back, a friend brought a USB drive to me and asked if I could attempt to retrieve any data off the device. After some research, I found the open source tools Autopsy and Sleuthkit, which can be used for recovering data. I found these tools very helpful in retrieving deleted data off a drive. The web interface is not the easiest to use, but it is still an effective recovery tool.

Autopsy is a web-based user interface that utilizes the Sleuthkit forensics toolkit. Backtrack comes with Autopsy and Sleuthkit already installed, but on any other Debian-based Linux system they can be installed with:

sudo apt-get install autopsy
sudo apt-get install sleuthkit

Autopsy performs forensics on a disk or storage image file, anything from an image of a hard drive partition to a USB device. An image file can be created from any attached drive. In Linux, to view the current drives and partitions, use the command:

fdisk -l

The System column provides a description of each partition; in my case, the last item printed was the attached USB device. Once the desired drive to perform forensics on has been located, create an image of it.

dd if=<Device Boot Location> of=<Saving Location>.img bs=2048
Example: dd if=/dev/sdb1 of=usb.img bs=2048

This will take a few minutes.

On Backtrack R1, when first trying to run Autopsy (typing ‘autopsy’ in the terminal), I ran into an error saying autopsy was not installed.

This happens if Autopsy does not have an initialized evidence locker. Find and run Autopsy from Applications -> Backtrack -> Forensics, under Forensic Suites. A new terminal will open, prompting for the full path to an evidence locker location. A valid, existing location has to be given or else Autopsy will error again.

When autopsy is successfully running, it will begin servicing a website user interface that can be accessed in a browser by going to:

http://localhost:9999/autopsy

On this webpage, choose to create a new case. Autopsy will ask background questions about the forensic case. Since this is most likely for personal use, these questions do not matter much; fill them out however you like.

Continuing, once prompted for an image file, enter the full path to the one created earlier. Depending on the type of storage image, select Disk or Partition. When I performed this, I was analyzing a USB image and selected Partition. The third question asks for the desired type of import; choose whichever method you prefer. I like to use Symlink.

Next, the data integrity section can be ignored. The last section needs to be verified: check to make sure the file system type is correct. Then complete the process and add the case.

Now the image can be searched for lost files. Choose to “Analyze” the image, then select “File Analysis” from the top left of the window. If this button is disabled, the wrong image type (Disk or Partition) was most likely selected; re-create the case and choose the other option. File Analysis provides a view of all files and directory contents found on the image. Now you can search through the files and ideally find the lost file to recover. Once it is found, the file can be exported to your local system.

Good Luck! This is just one method of data recovery.

I received the most help from this article in performing this type of forensics.

Scanning With Nmap

By K4Paul | February 23, 2012 | Backtrack, Fingerprinting, Kali, Linux, Penetration Testing, Windows | CMD, Command, nmap, Terminal

Nmap is an effective network-scanning tool that can be used for host discovery and open port/service discovery. It can be downloaded from the Nmap site (http://nmap.org/).

In my experience, the scans below can be used to find hidden or special services not located on common ports. Different services respond to different packet messages. The -p flag specifies a port range and is not required. However, when I stated the range, I found more running services than when it was not stated. My theory is that nmap, on a basic scan, looks at popular ports and not necessarily all ports.

  • Find UDP services: nmap -sU <ADDRESS> -p1-6000
  • Basic service scan: nmap -v <ADDRESS> -p1-6000
  • Basic all-service scan: nmap -A <ADDRESS> -p1-6000
  • Null port scan (does not set any bits in the TCP flag header): nmap -sN <ADDRESS> -p1-6000
  • FIN port scan (sets just the TCP FIN bit): nmap -sF <ADDRESS> -p1-6000
  • Xmas port scan (sets the FIN, PSH and URG flags): nmap -sX <ADDRESS> -p1-6000

Backtrack Reverseraider

By K4Paul | February 5, 2012 | Backtrack, Exploit, Penetration Testing | Backtrack, Brute Force, Command, Reverseraider

This is a tool used to brute force subdomains and domains for a specified website. It accomplishes the attack through the use of wordlists.

Command use:

  • cd /pentest/enumeration/reverseraider
  • ./reverseraider -d <domain> -w <wordlist file; a few can be found in the included wordlist directory>
