Autonomous Penetration Testing: Solving Capture-the-Flag Challenges with LLMs

Outlet Title

2025 Cyber Awareness and Research Symposium (CARS)

Document Type

Conference Proceeding

Publication Date

2025

Abstract

This study evaluates the ability of GPT-4o to autonomously solve beginner-level offensive security tasks by connecting the model to OverTheWire's Bandit capture-the-flag game. Of the 25 levels that were technically compatible with a single-command SSH framework, GPT-4o solved 18 unaided and another two after minimal prompt hints, for an overall 80% success rate. The model excelled at single-step challenges involving Linux filesystem navigation, data extraction or decoding, and straightforward networking, often producing the correct command in one shot and at speeds surpassing human performance. Failures involved multi-command scenarios that required persistent working directories, complex network reconnaissance, daemon creation, or interaction with non-standard shells. These limitations largely reflect architectural choices in the harness rather than a lack of general exploit knowledge. The results demonstrate that large language models (LLMs) can automate a substantial portion of the novice penetration-testing workflow, potentially lowering the expertise barrier for attackers and offering productivity gains for defenders who use LLMs as rapid reconnaissance aides. Further, the unsolved tasks reveal specific areas where secure-by-design environments might frustrate simple LLM-driven attacks, informing future hardening strategies. Beyond offensive cybersecurity applications, the results suggest the potential to integrate LLMs into cybersecurity education as practice aids.
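The single-command SSH framework described above can be sketched as a small Python harness: the model's reply is reduced to one shell command, which is then executed on the target level over SSH. This is an illustrative reconstruction only, not the authors' implementation; the function names, the use of `sshpass`, and the host/port values for Bandit are assumptions.

```python
import re
import subprocess


def extract_command(llm_response: str) -> str:
    """Reduce an LLM reply to a single shell command.

    Accepts either a fenced ```bash``` block or a bare command line.
    (Hypothetical helper; the paper's parsing logic may differ.)
    """
    match = re.search(r"```(?:bash|sh)?\n(.+?)\n```", llm_response, re.DOTALL)
    return (match.group(1) if match else llm_response).strip()


def run_over_ssh(user: str, password: str, command: str,
                 host: str = "bandit.labs.overthewire.org",
                 port: int = 2220) -> str:
    """Run one command on a Bandit level via the ssh CLI.

    Assumes `sshpass` is installed for password auth; each call is a
    fresh session, which is why multi-command state (e.g. a persistent
    working directory) cannot survive between steps.
    """
    result = subprocess.run(
        ["sshpass", "-p", password, "ssh",
         "-o", "StrictHostKeyChecking=no",
         "-p", str(port), f"{user}@{host}", command],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout
```

Because every level interaction is one stateless SSH invocation, this design directly explains the reported failure modes: any task needing a persistent shell, a long-lived daemon, or an interactive non-standard shell falls outside what a single command can accomplish.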
