# OpenPEN Agent
AI-powered web application security scanner. OpenPEN Agent combines ZAP (an open-source web scanner) with an LLM to analyze scan findings and produce actionable security reports.
## How it works
The agent runs in two phases that can be executed together or independently:
1. **Analysis** — ZAP crawls and scans the target(s), then the agent sends each finding to an LLM for independent assessment. Findings are analyzed in parallel batches for throughput. The LLM returns a structured JSON assessment with severity, confidence, exploitability, and remediation. Results are saved as `analysis.json`, `alerts.json`, and `llm-log.json`.
2. **Reporting** — The agent takes the analyzed findings and asks the LLM to fill in a report template, producing `report.md`. This can run immediately after analysis (`--produce-report`) or later from a previous scan's `analysis.json` using the `report` command.
```
ZAP spider → ZAP active scan → Collect alerts
  → LLM analyzes each alert (parallel batches, with retries)
  → (optional) LLM generates report from template
  → Output: analysis.json, alerts.json, llm-log.json, report.md
```
## Quick start
The container image bundles ZAP, Erlang/OTP, and the agent. You just need an LLM endpoint.
```sh
podman run --rm \
  --network host \
  -e LLM_API_BASE=http://localhost:11434/v1 \
  -e LLM_MODEL=devstral-small-2:24b \
  -v ./results:/reports:z,U \
  git.krispy.tech/openpen/agent:latest \
  scan https://example.com --produce-report
```
Results appear in `./results/`:

```
results/
├── analysis.json   # Structured LLM assessments (schema v1.0.0)
├── alerts.json     # Raw ZAP alerts
├── llm-log.json    # Full LLM call history with token usage
└── report.md       # LLM-generated security report (if requested)
```
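The JSON outputs are easy to post-process with standard tools. A minimal sketch, assuming `python3` is on the PATH; the sample file below is a trimmed, illustrative stand-in for a real `analysis.json`:

```sh
# Create a trimmed stand-in for a real analysis.json (illustrative data only)
cat > analysis.json <<'EOF'
{"schema_version": "1.0.0",
 "findings_summary": {"total": 2, "analyzed": 2, "analysis_failed": 0,
   "by_assessed_severity": {"critical": 1, "high": 1}}}
EOF

# Print the per-severity counts using only the Python standard library
python3 -c '
import json
with open("analysis.json") as f:
    summary = json.load(f)["findings_summary"]
print(summary["by_assessed_severity"])
'
# → {'critical': 1, 'high': 1}
```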
To run without Docker, see [Local development](#local-development).
## Commands

### scan

Scan one or more targets, analyze findings, and optionally generate a report.

```
scan TARGET [TARGET...]
```

### report

Generate a report from a previous scan's `analysis.json`, without re-scanning or re-analyzing.

```
report PATH
```
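For example, to regenerate a report from an earlier run's output directory without re-scanning — the image tag, mount path, and model mirror the quick-start example and are assumptions about your setup:

```sh
podman run --rm \
  --network host \
  -e LLM_API_BASE=http://localhost:11434/v1 \
  -e LLM_MODEL=devstral-small-2:24b \
  -v ./results:/reports:z,U \
  git.krispy.tech/openpen/agent:latest \
  report /reports/analysis.json
```

Because `report` only re-runs the LLM templating step, this is a cheap way to iterate on report templates against a fixed set of findings.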
## Output files

### analysis.json

The primary structured output. Contains the LLM's assessment of every finding alongside the scanner's original rating. Schema version is 1.0.0.
```json
{
  "schema_version": "1.0.0",
  "metadata": {
    "model": "devstral-small-2:24b",
    "api_base": "http://localhost:11434/v1",
    "scanner": {
      "name": "ZAP",
      "version": "2.16.1",
      "scans_performed": [
        { "type": "spider", "recurse": true },
        { "type": "active_scan", "recurse": true }
      ],
      "scan_policy": "default"
    },
    "targets": ["https://example.com"],
    "timestamp": "2026-03-25T14:30:00Z"
  },
  "findings_summary": {
    "total": 12,
    "analyzed": 11,
    "analysis_failed": 1,
    "by_assessed_severity": {
      "critical": 1,
      "high": 3,
      "medium": 4,
      "low": 1,
      "info": 1,
      "false_positive": 1
    }
  },
  "findings": [
    {
      "id": "1",
      "alert_name": "SQL Injection",
      "url": "https://example.com/login",
      "scanner_assessment": { "risk": "High", "confidence": "Medium" },
      "llm_assessment": {
        "severity": "Critical",
        "confidence": "High",
        "category": "Injection",
        "exploitability": "High",
        "business_impact": "Full database compromise",
        "explanation": "SQL injection via the username parameter.",
        "remediation": "Use parameterized queries."
      },
      "description": "...",
      "solution": "...",
      "analysis_error": null,
      "evidence": "Parameter: username, Attack: ' OR 1=1--"
    }
  ]
}
```
### alerts.json

Raw ZAP alerts, unmodified.

### llm-log.json

Audit trail of every LLM call made during the run, including token usage and attempt counts. Used for debugging and cost tracking.

### report.md

Human-readable security report generated by the LLM from a markdown template.
## Settings

Settings can come from CLI flags, environment variables, and/or `config.toml`. When the same setting is given in multiple places, CLI flags take priority over environment variables, which take priority over the config file.

Pass `--config FILE` to overlay your own config on top of the built-in defaults. Only the fields you include are overridden. See the shipped `config.toml` for defaults.
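For instance, an overlay that only swaps the model and raises throughput might look like this. The values are illustrative; every key shown appears in the settings table below, and unlisted fields keep their built-in defaults:

```toml
# Passed via --config FILE; only these fields are overridden
[llm]
model = "devstral-small-2:24b"   # example model name
parallel_requests = 8            # illustrative value
analysis_retries = 5             # illustrative value

[output]
dir = "./results"
verbose = true
```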
| Setting | CLI | Env | Config |
|---|---|---|---|
| LLM endpoint | `--api-base URL` | `LLM_API_BASE` | `[llm] api_base` |
| Model | `--model NAME` | `LLM_MODEL` | `[llm] model` |
| API key | — | `LLM_API_KEY` | — |
| Parallel requests | `--llm-parallelism N` | — | `[llm] parallel_requests` |
| Analysis retries | `--analysis-retries N` | — | `[llm] analysis_retries` |
| Output directory | `--output DIR` | — | `[output] dir` |
| Verbose | `--verbose` | — | `[output] verbose` |
| Strict mode | `--strict` | — | `[output] strict` |
| Report on scan | `--produce-report` | — | `[report] produce_report_on_scan` |
| Report template | `--report-template FILE` | — | `[report] template` |
| ZAP URL | `--zap-url URL` | — | `[zap] base_url` |
### Strict mode

By default, findings that fail LLM analysis (bad JSON after all retries) are recorded with `"llm_assessment": null` and the scan exits successfully. With `--strict`, the scan writes all output files and then exits with a non-zero status code if any findings failed analysis. Useful as a CI gate.
### Analysis retries

When the LLM returns invalid JSON, the agent retries by appending the bad response and a correction prompt. JSON mode (`response_format: json_object`) is enabled by default and automatically disabled if the backend doesn't support it. Default: 3 retries.
### Prompts

The system prompt, finding analysis prompt, retry prompt, and report generation prompt are configurable in `config.toml` under `[prompts]`.
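The authoritative key names live in the shipped `config.toml`; the field name below is a hypothetical placeholder meant only to show the shape of an overlay:

```toml
# Hypothetical [prompts] overlay — check the shipped config.toml for real keys
[prompts]
system = "You are a web application security analyst..."
```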
## Local development

### Prerequisites

- Gleam >= 1.15.1
- Erlang/OTP 28
### Setup

Start ZAP headless:

```sh
docker run --rm -p 8080:8080 ghcr.io/zaproxy/zaproxy:bare \
  /zap/zap.sh -daemon -host 0.0.0.0 -port 8080 -config api.disablekey=true
```
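Once the container is up, you can confirm the daemon is reachable before scanning. This uses ZAP's standard JSON API; `curl` and the port mapping above are assumptions:

```sh
# Responds with the ZAP version as JSON once the daemon is ready
curl -s http://localhost:8080/JSON/core/view/version/
```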
Start an LLM (e.g. Ollama):

```sh
ollama run devstral-small-2:24b
```
Run the agent:

```sh
LLM_API_BASE=http://localhost:11434/v1 \
LLM_MODEL=devstral-small-2:24b \
gleam run -- \
  scan https://example.com \
  --config config.toml \
  --output ./results \
  --produce-report
```
### Build and test

```sh
gleam deps download
gleam test
gleam format --check src/ test/
```
### Building the container

```sh
docker build -t openpen-agent -f Containerfile .
```
## CI/CD

Pushes to `main` run tests. Tagging `v*` builds and pushes the container image to `git.krispy.tech/openpen/agent`.