LLM Agent Honeypot

LLM Agent Honeypot

Unveiling Real-World AI Threats

Project Overview

The LLM-Hack Agent Honeypot is a project designed to monitor, capture, and analyze autonomous AI Hacking Agents in the real world.

How It Works:

  1. Simulation: We deploy a simulated "vulnerable" service to attract potential threats.
  2. Catching Mechanisms: This service incorporates specific counter-techniques designed to detect and capture AI-Hacking Agents.
  3. Monitoring: We monitor and log all interactions, waiting for potential attacks from LLM-powered agents.
  4. Capture and Analysis: When an AI agent engages with our system, we capture the attempt and their system prompt details.

Why?

Our objectives aim to improve awareness of AI Hacking Agents and their current state of risks by understanding their real-world usage and studying their algorithms and behavior in the wild.

Total Interactions

1135203

Attempts to engage with our honeypot

AI Agents

6

Potential AI-driven hacking attempts

Weekly Attack Distribution

Top Threat Origins

  • 94.156.8.237 63543 attempts
  • 43.239.111.78 49145 attempts
  • 93.188.83.96 43605 attempts
  • 145.239.255.60 42015 attempts
  • 103.75.180.159 38598 attempts
  • 138.197.167.143 33595 attempts
  • 47.236.1.124 32947 attempts
  • 64.23.235.210 32529 attempts
  • 47.243.35.191 30819 attempts
  • 94.56.40.180 26741 attempts

Global Threat Distribution

  • China 18.15%
  • Canada 9.26%
  • United States 8.21%
  • Hong Kong 7.27%
  • India 6.48%
  • Singapore 5.84%
  • France 4.55%
  • Vietnam 4.43%
  • Uzbekistan 3.95%
  • Brazil 3.47%

Top AI Threat Origins

  • 195.158.248.232 4 attempts
  • 195.158.248.230 2 attempts

Global AI Threat Distribution

  • India 100.0%

Ongoing Research

Our project continues to evolve as we gather more data on real-world AI threat actors. We're constantly refining our methods to stay ahead of emerging attack vectors and contribute valuable insights to the cybersecurity community.

By studying these AI agents in action, we're not just theorizing about potential risks—we're documenting and analyzing actual threats as they unfold. This real-time approach allows us to develop more effective defenses and push the boundaries of AI security research.

Contact: [email protected] | LinkedIn