LLM Agent Honeypot
Unveiling Real-World AI Threats
Project Overview
The LLM-Hack Agent Honeypot is a project designed to monitor, capture, and analyze autonomous AI Hacking Agents in the real world.
How It Works:
- Simulation: We deploy a simulated "vulnerable" service to attract potential threats.
- Catching Mechanisms: This service incorporates specific counter-techniques designed to detect and capture AI-Hacking Agents.
- Monitoring: We monitor and log all interactions, waiting for potential attacks from LLM-powered agents.
Why?
Our objectives aim to improve awareness of AI Hacking Agents and their current state of risks by understanding their real-world usage and studying their algorithms and behavior in the wild.
Total Interactions
7819042
Attempts to engage with our honeypot
Potential AI Agents
7
Passed prompt injection detection
Confirmed AI Agents
1
Passed both prompt injection and temporal analysis
Monthly AI Hacking Rate and Attack Distribution
Potential AI Agents Origins
- 195.158.248.232 4 attempts
- 195.158.248.230 2 attempts
- 43.154.253.197 1 attempts
Potential AI Agents Distribution
- India 85.71%
- Hong Kong 14.29%
Confirmed AI Agents Origins
- 43.154.253.197 1 attempts
Confirmed AI Agents Distribution
- Hong Kong 100.0%
Top Threat Origins
- 159.203.11.24 156368 attempts
- 93.188.83.96 71071 attempts
- 89.252.190.72 64023 attempts
- 94.156.8.237 63543 attempts
- 47.243.35.191 62142 attempts
- 91.132.146.181 57180 attempts
- 103.75.180.159 55397 attempts
- 176.124.198.49 54713 attempts
- 157.173.196.166 49810 attempts
- 43.239.111.78 49176 attempts
Global Threat Distribution
- China 16.82%
- United States 14.58%
- Singapore 7.64%
- Hong Kong 6.9%
- India 6.85%
- Germany 5.3%
- Canada 4.61%
- Indonesia 2.98%
- The Netherlands 2.85%
- Vietnam 2.3%
Ongoing Research
Our project continues to evolve as we gather more data on real-world AI threat actors. We're constantly refining our methods to stay ahead of emerging attack vectors and contribute valuable insights to the cybersecurity community.
By studying these AI agents in action, we're not just theorizing about potential risks—we're documenting and analyzing actual threats as they unfold. This real-time approach allows us to develop more effective defenses and push the boundaries of AI security research.