The Dark Forest Theory

"The universe is a dark forest. Every civilization is an armed hunter stalking through the trees... In this forest, hell is other people. An eternal threat that any life that exposes its own existence will be swiftly wiped out." — Liu Cixin, The Dark Forest (book two of the Three-Body Problem trilogy)

With the rise of GenAI, the Internet has become a dark forest. AI-powered attackers can scan, discover, and exploit vulnerabilities at unprecedented speed and scale. Every exposed service is a signal that attracts these tireless hunters.

AI Attacks: From Minutes to Seconds

AI-powered attacks are accelerating at an alarming rate. What once took hackers hours now takes seconds. Human defenders simply cannot keep up—the window to detect and respond has collapsed entirely.

⏱️ Time to Full Compromise (↓ 94% since 2022)

2022: 7 min → 2023: 2:07 → 2024: 51s → 2025: 27s

Source: CrowdStrike 2026 Global Threat Report

⚠️ New Vulnerabilities (↑ 100% since 2022)

2022: 25K → 2023: 29K → 2024: 40K → 2025: 50K

Source: CVE Details / NVD

The math is clear: faster exploits + more vulnerabilities = inevitable breach.
Unless you're invisible.

"In the age of GenAI, preemptive capabilities—not detection and response—are the future of cybersecurity."

— Gartner, September 2025

VISIBILITY = VULNERABILITY

AI agents can now autonomously scan for exposed services, discover vulnerabilities, generate working exploits, and compromise systems—all without human intervention, 24/7, at near-zero cost. Every open port is an invitation.

UK AISI: AI Now Completes Expert-Level Cyber Attack Tasks (Apr 2026)

The UK AI Security Institute's first Frontier AI Trends Report documents a rapid acceleration in AI cyber capability: the best models now complete apprentice-level cyber attack tasks 50% of the time, up from under 9% in late 2023. In 2025, AISI tested the first-ever model able to complete expert-level attack tasks that typically require 10+ years of human experience. The duration of cyber attack tasks AI can complete unassisted is doubling roughly every eight months — from under ten minutes in early 2023 to over an hour by mid-2025.

Claude Code Security: AI Finds Decades-Old Vulnerabilities (Feb 2026)

Anthropic's Claude Code Security, powered by Opus 4.6, reads and reasons about codebases like a senior security researcher—tracing data flows, understanding component interactions, surfacing complex logic vulnerabilities that pattern-matching tools miss entirely. In internal testing on open-source software running across enterprise systems and critical infrastructure, it found vulnerabilities that had gone undetected for decades. Markets reacted immediately: CrowdStrike −8%, Cloudflare −8.1%, Zscaler −5.5%, Okta −9.2%.

Stanford ARTEMIS: AI Outperforms Human Hackers (Dec 2025)

Stanford's ARTEMIS AI agent outperformed 9 of 10 professional penetration testers in a live enterprise environment, discovering vulnerabilities with 82% accuracy. At $18/hour vs $60/hour for humans, AI hackers are now "dangerously close" to matching—and beating—human capabilities.

CVE Exploits in 10-15 Minutes, $1 Per Exploit (Aug 2025)

AI systems can automatically generate working exploits for newly published CVEs in just 10-15 minutes, at roughly $1 per exploit, compressing the traditional patch window from weeks to minutes. The generated exploits are publicly available in the researchers' database.

AI finds and exploits vulnerabilities faster than you can patch.
NHP eliminates the attack surface entirely — invisible services can't be exploited.

PUBLIC = HARVESTED

Even if your data is public and you're not worried about exploitation, you still can't control who consumes it. AI bots crawl the Internet 24/7, at near-zero cost, instantly harvesting any content that becomes available. Your public data becomes part of AI training datasets — permanently, without your consent.

Perplexity AI Bypasses robots.txt Restrictions (Aug 2025)

Cloudflare accused Perplexity AI of using "stealth crawling" techniques to bypass robots.txt directives, accessing content from websites that explicitly disallow AI scraping.

PC Gamer Report →
Reddit Sues Anthropic for Unauthorized Scraping (Jun 2025)

Reddit filed a lawsuit against Anthropic, alleging the company used automated bots to scrape Reddit user data without consent to train Claude, violating terms of service and user privacy.

AP News Report →
OpenAI Models "Memorize" Copyrighted Content (Apr 2025)

Research revealed that OpenAI's models may have "memorized" copyrighted content during training, raising serious legal and ethical questions about AI training data sources and consent.

TechCrunch Report →

robots.txt won't save you. AI companies ignore it. Lawsuits come too late.
NHP makes your content invisible to unauthorized bots.
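For reference, robots.txt is a voluntary convention. A file like the sketch below (PerplexityBot and GPTBot are the published crawler user-agents of Perplexity and OpenAI) asks a bot to stay away, but nothing enforces compliance; that is exactly the gap the cases above exposed:

```
# robots.txt: a request, not an access control
User-agent: PerplexityBot
Disallow: /

User-agent: GPTBot
Disallow: /
```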

The NHP Paradigm: Invisible by Default, Authenticate First, Connect Later

NHP (Network-Infrastructure Hiding Protocol) inverts the traditional security model. Instead of exposing services and authenticating after a connection is established, NHP hides everything by default—services only become visible after cryptographic authentication proves the client's identity.

Attack Surface    | Traditional: Connect First   | NHP: Authenticate First
------------------|------------------------------|---------------------------
Port Scanning     | Services visible to scanners | All ports appear closed
DNS Enumeration   | Domain records public        | NXDOMAIN for unauthorized
Pre-Auth Exploits | Attack before login          | No connection possible
DDoS Attacks      | Known IP can be flooded      | IP hidden from attackers
Zero-Day Exploits | Service exposed to attacks   | Service unreachable
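A minimal sketch of the authenticate-first idea, using a shared-secret HMAC "knock" with a freshness window. This is illustrative only: the real NHP protocol uses public-key cryptography, and all names here are hypothetical.

```python
import hashlib
import hmac
import time

SECRET = b"demo-pre-shared-key"   # hypothetical; real NHP uses asymmetric keys
WINDOW = 30                       # seconds a knock remains valid

def make_knock(agent_id, now=None):
    """Agent side: sign agent_id + timestamp so the server can verify both."""
    ts = int(now if now is not None else time.time())
    msg = f"{agent_id}|{ts}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {"agent_id": agent_id, "ts": ts, "mac": mac}

def verify_knock(knock, now=None):
    """Server side: default-deny; accept only fresh, correctly signed knocks."""
    current = now if now is not None else time.time()
    if abs(current - knock["ts"]) > WINDOW:
        return False  # stale packet: basic replay protection
    msg = f"{knock['agent_id']}|{knock['ts']}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, knock["mac"])

knock = make_knock("agent-42")
print(verify_knock(knock))                      # valid knock → True
print(verify_knock(dict(knock, mac="0" * 64)))  # forged signature → False
```

An unauthenticated scanner never gets this far: until a knock verifies, the service sends nothing back at all.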

NHP Architecture and Workflow

👤 NHP-Agent

Client-side component that initiates cryptographic knock requests to authenticate and gain access.

🛡️ NHP-Server

Cryptographically verifies identity of agents, coordinates with Auth Provider, and grants time-limited access tokens.

🚧 NHP-AC

Access Controller that enforces default-deny policy and opens ports only for verified agents.

[Diagram: the Control Plane contains the 🛡️ NHP-Server and 🔐 Auth Provider; the Data Plane contains the Resource Requestor (👤 NHP-Agent) and the Resource Provider (🚧 NHP-AC guarding the 🖥️ Resource).]

① Knock: Agent → Server
② Allow: Server → AC
③ Grant: Server → Agent
④ Connect: Agent → Resource
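The four steps can be sketched as a toy simulation of the three components. Class and method names here are illustrative, not the OpenNHP API; the point is the default-deny Access Controller that only opens a port after the server verifies a knock.

```python
class AccessController:
    """NHP-AC: default-deny; opens a port only on server instruction."""
    def __init__(self):
        self.allowed = set()               # (agent_id, port) pairs currently open

    def allow(self, agent_id, port):       # step ② Allow: Server → AC
        self.allowed.add((agent_id, port))

    def connect(self, agent_id, port):     # step ④ Connect: Agent → Resource
        return "connected" if (agent_id, port) in self.allowed else "dropped"


class NHPServer:
    """NHP-Server: checks identity with the Auth Provider, then coordinates."""
    def __init__(self, ac, auth_provider):
        self.ac = ac
        self.auth = auth_provider

    def knock(self, agent_id, credential, port):   # step ① Knock: Agent → Server
        if not self.auth(agent_id, credential):
            return None                            # silent drop: stay invisible
        self.ac.allow(agent_id, port)              # step ② Allow
        return {"agent_id": agent_id, "port": port}  # step ③ Grant


auth_provider = lambda agent_id, cred: cred == "valid-token"  # toy Auth Provider
ac = AccessController()
server = NHPServer(ac, auth_provider)

print(ac.connect("attacker", 443))    # no knock → "dropped" (port looks closed)
grant = server.knock("agent-42", "valid-token", 443)
print(ac.connect(grant["agent_id"], grant["port"]))  # → "connected"
```

Note the failure mode in step ①: an invalid knock gets no reply at all, so a scanner cannot even confirm the server exists.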

NHP and TLS: Complementary, Not Competing

NHP operates at OSI Layer 5 (Session), TLS at Layer 6 (Presentation). Together, they provide defense in depth.

L7 Application: HTTP, SMTP, etc.
L6 Presentation: 🔐 TLS
L5 Session: 🛡️ NHP
L4 Transport: TCP / UDP
L3 Network: IP
L2 Data Link: Ethernet
L1 Physical

🔐 TLS: Encrypt After Connect

The service must be reachable before the TLS handshake can even begin. TLS protects data in transit, but attackers can still find and probe the service.

🛡️ NHP: Authenticate Before Connect

The service stays invisible until the client authenticates: no handshake is possible without cryptographic proof of identity.

Join the Zero Trust Revolution

The dark forest is here. The question is whether your infrastructure will be visible or invisible.