How to Run It

This guide walks you through setting up and running Heph4estus for distributed reconnaissance.

Before You Begin

Make sure you've met all the requirements before proceeding: the steps below assume an AWS account with credentials configured locally, plus working installs of Terraform, Docker, Go, and the AWS CLI.
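
A quick way to confirm the core tooling is in place is to check each CLI from your shell. This is only a sanity-check sketch; no specific versions are required here, so treat it as a presence check rather than an exact match:

# Confirm the tools used throughout this guide are installed
terraform version
docker --version
go version
aws --version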

1. Clone the Repository

First, clone the Heph4estus repository and navigate to the project directory:

git clone <repository-url>
cd heph4estus

2. Deploy Infrastructure with Terraform

Heph4estus uses Terraform to provision the necessary AWS infrastructure. Run the following commands:

# Initialize Terraform
cd terraform/environments/dev
terraform init

# Review the planned changes
terraform plan

# Apply the changes
terraform apply

After Terraform completes, it will output several important values that you'll need in the next steps. Make note of the following outputs:

  • ecr_repository_url: The URL of the ECR repository
  • state_machine_arn: The ARN of the Step Functions state machine
  • s3_bucket_name: The name of the S3 bucket for storing scan results
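
Rather than copying these values by hand, you can read them back at any point with terraform output (available in Terraform 0.14 and later). The snippet below is a minimal sketch and assumes you are still in the terraform/environments/dev directory:

# Capture the Terraform outputs into shell variables
ECR_REPO=$(terraform output -raw ecr_repository_url)
STATE_MACHINE_ARN=$(terraform output -raw state_machine_arn)
S3_BUCKET=$(terraform output -raw s3_bucket_name)

If you set these variables now, you can skip the placeholder assignments shown in the later steps.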

3. Build and Push the Container Image

Next, build the Docker image containing Nmap and push it to the ECR repository:

# Build the Docker image
docker build -t nmap-scanner .

# Use the ecr_repository_url from Terraform output
ECR_REPO=<ecr_repository_url>

# Extract the registry URL
ECR_REGISTRY=$(echo $ECR_REPO | cut -d/ -f1)

# Login to ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin $ECR_REGISTRY

# Tag and push the image
docker tag nmap-scanner:latest $ECR_REPO:latest
docker push $ECR_REPO:latest
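
If you want to confirm the push before moving on, you can ask ECR to list the images in the repository. This is a minimal sketch; it assumes the repository name is everything after the registry hostname in the repository URL:

# Derive the repository name from the repository URL (everything after the first "/")
ECR_REPO_NAME=$(echo $ECR_REPO | cut -d/ -f2-)

# List the images ECR currently holds for this repository
aws ecr describe-images --repository-name $ECR_REPO_NAME --region us-east-1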

4. Create a Targets File

Create a file named targets.txt with a list of targets to scan, one per line:

example.com -sV -p 80,443
10.0.0.0/24 -sS -p 22
192.168.1.1 -A

Format: <target> [nmap options]

If no options are specified, the default option (-sS) is used. Currently, the project only supports scanning targets that are reachable from the internet.
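
If you already have a plain list of hosts, a small loop can turn it into a targets file in this format. This is only a sketch: hosts.txt is a hypothetical input file with one hostname or CIDR per line, and the options appended here are just an example:

# Build targets.txt from a hypothetical hosts.txt, appending the same options to every entry
while read -r host; do
  echo "$host -sV -p 80,443" >> targets.txt
done < hosts.txt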

5. Build the Producer Application

Build the producer application that will submit scan jobs:

# Build the producer
go build -o bin/producer cmd/producer/main.go

6. Start Scanning

Now you can start the scanning process:

# Use the state_machine_arn from Terraform output
export STATE_MACHINE_ARN=<state_machine_arn>

# Run the producer with your targets file
./bin/producer -file targets.txt

This will submit the targets to the Step Functions state machine, which will orchestrate the scanning process using ECS tasks.
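
You can follow the orchestration from the AWS CLI while the scans run. The commands below are a minimal sketch using the standard Step Functions CLI; substitute an executionArn from the first command's output into the second:

# List recent executions of the state machine
aws stepfunctions list-executions --state-machine-arn $STATE_MACHINE_ARN --max-results 10

# Inspect one execution in detail (use an executionArn from the listing above)
aws stepfunctions describe-execution --execution-arn <execution-arn>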

7. Viewing Results

Scan results are stored in JSON format in the S3 bucket created by Terraform:

# Use the s3_bucket_name from Terraform output
S3_BUCKET=<s3_bucket_name>

# List scan results
aws s3 ls s3://$S3_BUCKET/scans/

# Download a specific result
aws s3 cp s3://$S3_BUCKET/scans/example.com_1646347520.json .
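
If you'd rather pull everything down at once, you can sync the whole scans/ prefix to a local directory (a minimal sketch; ./results is just an example destination):

# Download all scan results to a local directory
aws s3 sync s3://$S3_BUCKET/scans/ ./results/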

8. Clean Up

When you're done with your scanning operations, you can clean up the AWS resources to avoid incurring additional costs:

# First, empty the S3 bucket
aws s3 rm s3://$S3_BUCKET/scans/ --recursive

# Then destroy the infrastructure
cd terraform/environments/dev
terraform destroy
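
Note that the destroy step can fail if the ECR repository still contains images. If that happens, delete the pushed image first and run terraform destroy again. This sketch reuses the ECR_REPO_NAME variable derived in the step 3 sketch, which is an assumption about your shell session:

# Remove the image pushed earlier so the ECR repository can be deleted
aws ecr batch-delete-image --repository-name $ECR_REPO_NAME --image-ids imageTag=latest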

Troubleshooting

If you encounter issues while running Heph4estus, check the following:

  • Ensure all AWS credentials and permissions are correctly configured
  • Verify that the Docker image was successfully pushed to ECR
  • Check CloudWatch Logs for error messages from the ECS tasks (see the sketch after this list)
  • Ensure your targets are reachable from the internet (the current limitation noted in step 4)
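
For the CloudWatch Logs check, tailing the ECS task log group from the CLI (aws logs tail, available in AWS CLI v2) is usually the quickest route. The log group name below is a placeholder, since the actual name depends on what the Terraform configuration created:

# Tail the ECS task logs (substitute the log group created by Terraform)
aws logs tail <ecs-task-log-group> --follow --region us-east-1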

Next Steps

Now that you've run Heph4estus end to end, explore the Architecture section to understand how the system works in detail, or check out the Use Cases for examples of how it can be applied in different scenarios.