The Scan Optimizer is a prioritisation layer over Hugin's scanning surface. It analyses captured proxy traffic and recommends which endpoints are most likely to be vulnerable and which checks are worth running — so a 10,000-flow project's scan plan focuses on the high-value targets instead of testing everything equally.
The scoring is rule-based and feedback-tuned (not LLM-driven). Feedback is persisted to ~/.hugin/scan_optimizer_feedback.json, so subsequent plans get sharper as you mark hits and misses.
## Scoring Heuristics
Each (endpoint, check) candidate accumulates points along multiple dimensions. Examples (constants from scan_optimizer.rs):
- Mutation method — POST / PUT / PATCH / DELETE → +15 points
- Parameter count — 3+ params → +10, with per-extra-param bonus capped at +20
- Sensitive path segments — admin / login / token / payment / etc. add to the score
- Auth state — authenticated endpoints score higher than fully public ones
- Recently captured — newer flows weighted higher than stale ones
- Past finding density — endpoints that produced findings before in this project score higher (feedback loop)
The exact weights are tunable in source (SCORE_* constants). The result is a per-(endpoint, check) score; the plan exposes the top-N tuples.
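The heuristics above can be sketched as a small scoring function. This is a minimal illustration only: the constant names and values here are assumptions for the example, not the actual SCORE_* constants from scan_optimizer.rs, and the real implementation covers more dimensions (auth state, recency, finding density).

```rust
// Illustrative weights -- the real SCORE_* constants in scan_optimizer.rs may differ.
const SCORE_MUTATION_METHOD: u32 = 15;
const SCORE_PARAM_BASE: u32 = 10;
const SCORE_PARAM_CAP: u32 = 20;
const SCORE_SENSITIVE_SEGMENT: u32 = 12; // hypothetical value

struct Candidate<'a> {
    method: &'a str,
    path: &'a str,
    param_count: u32,
}

fn score(c: &Candidate) -> u32 {
    let mut s = 0u32;
    // Mutating methods earn a flat bonus.
    if matches!(c.method, "POST" | "PUT" | "PATCH" | "DELETE") {
        s += SCORE_MUTATION_METHOD;
    }
    // 3+ params: base bonus plus a capped per-extra-param bonus.
    if c.param_count >= 3 {
        s += SCORE_PARAM_BASE + ((c.param_count - 3) * 5).min(SCORE_PARAM_CAP);
    }
    // Sensitive path segments each add to the score.
    for seg in ["admin", "login", "token", "payment"] {
        if c.path.contains(seg) {
            s += SCORE_SENSITIVE_SEGMENT;
        }
    }
    s
}

fn main() {
    let c = Candidate { method: "POST", path: "/admin/token", param_count: 5 };
    println!("{}", score(&c)); // 15 + (10 + 10) + (12 + 12) = 59
}
```

Because every rule is additive, tuning a weight shifts the ranking predictably without retraining anything — one reason a rule-based scorer stays debuggable where an LLM-driven one would not.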
## Adaptive Learning
After each scan run you can submit feedback: which (endpoint, check) pairs produced findings, which were false positives, and which were a waste of time. The feedback updates per-project weights persisted in ~/.hugin/scan_optimizer_feedback.json, so the next plan is better targeted.
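One plausible shape for this feedback loop is a per-(endpoint, check) multiplier that is nudged up on confirmed findings and down on wasted checks, then applied to the base heuristic score. The structure and update factors below are hypothetical; the actual format persisted to ~/.hugin/scan_optimizer_feedback.json may differ.

```rust
use std::collections::HashMap;

/// Per-(endpoint, check) weight multipliers. Hypothetical structure --
/// the real feedback file layout may differ.
#[derive(Default)]
struct Feedback {
    weights: HashMap<(String, String), f64>,
}

impl Feedback {
    /// Nudge the multiplier up for a useful check, down for a wasted one.
    fn learn(&mut self, endpoint: &str, check: &str, useful: bool) {
        let w = self
            .weights
            .entry((endpoint.to_string(), check.to_string()))
            .or_insert(1.0);
        // Multiplicative update, clamped so a single run can't zero a pair out.
        *w = (*w * if useful { 1.25 } else { 0.8 }).clamp(0.25, 4.0);
    }

    /// Apply the learned multiplier to a base heuristic score.
    fn adjusted(&self, endpoint: &str, check: &str, base: f64) -> f64 {
        let w = self
            .weights
            .get(&(endpoint.to_string(), check.to_string()))
            .copied()
            .unwrap_or(1.0);
        base * w
    }
}

fn main() {
    let mut fb = Feedback::default();
    fb.learn("/admin/token", "sqli", true);
    fb.learn("/admin/token", "sqli", true);
    // Two confirmed findings raise the pair's effective score.
    println!("{:.2}", fb.adjusted("/admin/token", "sqli", 40.0));
}
```

Clamping the multiplier keeps the feedback loop a bias rather than a veto: even a repeatedly wasted check retains a floor score, so the optimizer can re-test it if the endpoint changes later.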
## Producing a Plan
The analyze action scans the project's flows and intelligence and returns a ranked list. The recommend_checks action narrows to one specific endpoint. The profile action analyses a single flow and outputs a per-flow plan. The quick action returns a fast top-N pass for time-constrained workflows. The stealth action produces a low-noise plan for sensitive engagements.
## MCP
The scan_optimizer MCP tool exposes seven actions:
- analyze — full prioritised plan for the active project
- quick — fast top-N variant
- stealth — low-noise plan
- recommend_checks — recommend checks for one endpoint
- profile — analyse one flow + per-flow plan
- stats — past plan accuracy (precision / recall vs actual findings)
- learn — submit feedback (mark plan items as useful / wasted)
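As a structural sketch, the seven actions could be modelled as an enum dispatched by the tool handler. This is an illustration of the action surface only, not Hugin's actual types; the parameter names and defaults are assumptions.

```rust
/// The seven scan_optimizer actions. Illustrative model, not Hugin's real types.
#[derive(Debug)]
enum Action {
    Analyze,
    Quick { top_n: usize },
    Stealth,
    RecommendChecks { endpoint: String },
    Profile { flow_id: u64 },
    Stats,
    Learn { endpoint: String, check: String, useful: bool },
}

/// Parse a bare action name; hypothetical default for quick's top-N.
fn parse_action(name: &str) -> Option<Action> {
    match name {
        "analyze" => Some(Action::Analyze),
        "quick" => Some(Action::Quick { top_n: 10 }),
        "stealth" => Some(Action::Stealth),
        "stats" => Some(Action::Stats),
        // recommend_checks, profile and learn need arguments from the caller.
        _ => None,
    }
}

fn main() {
    println!("{:?}", parse_action("quick").unwrap());
}
```

Modelling actions as an exhaustive enum means the compiler flags any handler that forgets to cover a new action when one is added.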
Combine with Auto Mode for hands-off scanning — the LLM agent uses Optimizer output to decide which checks to run, then learns from the outcome to feed the next iteration.