A Campaign is a persistent Intruder attack with full lifecycle controls — start, pause, resume, stop — and durable state. Where a one-shot Intruder run lives only in memory, a Campaign is stored in SQLite and survives Hugin restarts: pick up where you left off, inspect previous runs, rename / duplicate / re-run.
## What’s a Campaign vs an Intruder Run?
Both run the same engine — payload generators, processors, grep extract, etc. The difference is persistence and lifecycle:
| | Intruder Run | Campaign |
|---|---|---|
| Persistence | In-memory | SQLite (`CampaignRecord`) |
| Resume after restart | No | Yes |
| Status visibility | Active session only | List view across all campaigns |
| Pause / resume | Per-attack | Per-campaign |
| Bulk operations | None | Duplicate, rename, multi-select |
For a quick fuzz on a captured request, use Intruder. For a long-running attack that needs to survive across sessions or that you want to revisit later, use a Campaign.
## Attack Modes

Campaigns support the four standard attack modes, the same ones surfaced in Intruder:

- `sniper` — single payload set, single position
- `battering_ram` — single payload set, multiple positions (same value at each)
- `pitchfork` — multiple payload sets, indexed in parallel
- `cluster_bomb` — multiple payload sets, Cartesian product

Mode is normalised case-insensitively (e.g. `pitchfork`, `Pitchfork`, and `pitch-fork` all map to the same enum).
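The normalisation rule and the per-mode request counts can be sketched as follows. This is a minimal illustration assuming a simple strip-separators-and-lowercase rule, not Hugin's actual code:

```rust
#[derive(Debug, PartialEq)]
enum AttackMode {
    Sniper,
    BatteringRam,
    Pitchfork,
    ClusterBomb,
}

/// Case-insensitive normalisation: drop separators and lowercase, so
/// "Pitchfork", "pitch-fork", and "PITCH_FORK" all map to the same variant.
fn parse_mode(s: &str) -> Option<AttackMode> {
    let key: String = s
        .chars()
        .filter(|c| c.is_ascii_alphanumeric())
        .collect::<String>()
        .to_ascii_lowercase();
    match key.as_str() {
        "sniper" => Some(AttackMode::Sniper),
        "batteringram" => Some(AttackMode::BatteringRam),
        "pitchfork" => Some(AttackMode::Pitchfork),
        "clusterbomb" => Some(AttackMode::ClusterBomb),
        _ => None,
    }
}

/// How many requests each mode generates for the given payload-set sizes
/// and number of positions (the engine streams payloads; this only counts).
fn request_count(mode: &AttackMode, set_sizes: &[usize], positions: usize) -> usize {
    match mode {
        // each payload tried at each position in turn
        AttackMode::Sniper => set_sizes[0] * positions,
        // same payload inserted at every position simultaneously
        AttackMode::BatteringRam => set_sizes[0],
        // sets advance in lockstep, so the shortest set bounds the run
        AttackMode::Pitchfork => set_sizes.iter().copied().min().unwrap_or(0),
        // full Cartesian product of all sets
        AttackMode::ClusterBomb => set_sizes.iter().product(),
    }
}
```

The count difference matters in practice: two sets of 1,000 payloads are 1,000 requests in pitchfork but 1,000,000 in cluster bomb.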
## Creating a Campaign
- Campaigns view → + New Campaign
- Set name + description
- Configure the attack: target request, position markers, payload generators, processors, grep matchers — same fields as a single Intruder attack
- Save → the campaign is persisted as a `CampaignRecord` with status `Idle`
## Campaign Kinds

Two workload kinds today; the discriminator lives in the `ScheduleMeta.kind` field on the campaign description JSON:

- `intruder` (default, Community + Pro) — classic payload attack: base request + positions + payload sets + rate limit. Runs through `handlers::intruder::start_attack_internal`.
- `bac` (Pro only) — scheduled BAC audit. Runs through the `bac_audit audit` MCP action with params from `ScheduleMeta.bac_config` (same shape the MCP tool consumes: `project_id`, `identity_ids`, `flow_ids`, `request_budget`, `delay_ms`, `disable_*` toggles). Community tier skips `kind=bac` campaigns with a warn log.
Example envelope for a nightly BAC sweep:
```json
{
  "text": "Nightly BAC sweep",
  "kind": "bac",
  "schedule": "0 2 * * *",
  "bac_config": {
    "identity_ids": ["baseline-profile-id", "user-b-profile-id"],
    "request_budget": 10000,
    "delay_ms": 100
  }
}
```
New kinds are cheap to add — one field on `ScheduleMeta` + one dispatch arm in `handlers::campaigns_scheduler`.
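The discriminator-plus-dispatch shape might look roughly like this. The names mirror the docs above but the structs are hypothetical simplifications, not Hugin's real types:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum CampaignKind {
    Intruder, // default: classic payload attack
    Bac,      // Pro only: scheduled BAC audit
}

struct ScheduleMeta {
    kind: CampaignKind,
    // schedule, bac_config, etc. elided
}

/// Returns a label for the code path a campaign would take. Community tier
/// skips `bac` campaigns with a warning, as described above.
fn dispatch(meta: &ScheduleMeta, is_pro: bool) -> &'static str {
    match meta.kind {
        CampaignKind::Intruder => "intruder::start_attack_internal",
        CampaignKind::Bac if is_pro => "bac_audit",
        CampaignKind::Bac => {
            eprintln!("warn: skipping kind=bac campaign on Community tier");
            "skipped"
        }
    }
}
```

Adding a kind is then one new enum variant plus one new match arm, which is why the docs call it cheap.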
## Lifecycle

```
Idle → Running → Paused → Running → Completed
              ↘ Stopped
```

- `start` — kick off the attack; transitions to Running
- `pause` — suspend mid-attack; resumable
- `resume` — continue from where pause left off
- `stop` — terminate; results so far are kept
- `status` — current state + progress counters
- `results` — paginated result set
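The legal transitions can be modelled as a small state machine. This is a sketch of the diagram above, not the actual handler code:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum Status {
    Idle,
    Running,
    Paused,
    Completed,
    Stopped,
}

/// Apply a lifecycle action to the current status; anything not listed
/// (e.g. resuming an Idle campaign) is rejected with None.
fn transition(from: Status, action: &str) -> Option<Status> {
    use Status::*;
    match (from, action) {
        (Idle, "start") => Some(Running),
        (Running, "pause") => Some(Paused),
        (Paused, "resume") => Some(Running),
        (Running | Paused, "stop") => Some(Stopped),
        // completion is driven by the engine exhausting payloads,
        // not by a user action; modelled here as an internal event
        (Running, "complete") => Some(Completed),
        _ => None,
    }
}
```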
## Right-Click Actions
- Duplicate — clone the campaign config (fresh state)
- Rename — change the display name
- Delete — remove the campaign + its results
## Use Cases
- Long credential sprays that take hours and you don’t want to babysit
- Engagement-spanning attacks you’ll come back to over multiple sessions
- Reproducible runs — saved config can be re-executed verbatim
- Triage queue — keep “things I want to attack later” durably configured
## What Campaigns Are NOT
This implementation is intentionally a single-attack lifecycle wrapper, not a multi-step orchestration engine. There is no built-in:
- Chained-step pipeline (output of step 1 feeds payloads of step 2)
- Per-step retry policies
- Cross-attack data flow
For multi-step automation, build a Workflow that triggers further actions when a finding fires, or chain attacks manually by extracting values from one campaign’s results into the wordlist for the next.
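The manual-chaining step can be sketched like this: filter one campaign's results down to the grep-extracted values of interest and feed them out as the next run's wordlist. The `(status, extract)` tuple shape is illustrative, not Hugin's real result schema:

```rust
/// Collect grep-extracted values from interesting results (here: HTTP 200)
/// for use as the payload wordlist of a follow-up campaign.
fn results_to_wordlist(results: &[(u16, Option<&str>)]) -> Vec<String> {
    results
        .iter()
        .filter_map(|&(status, extract)| {
            // keep only values that were actually extracted from 200 hits
            if status == 200 {
                extract.map(String::from)
            } else {
                None
            }
        })
        .collect()
}
```

Writing that list to a file and pointing the next campaign's payload generator at it closes the loop without any built-in cross-attack data flow.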
## MCP

The `campaigns` MCP tool exposes the full lifecycle:

- `list`, `get`, `create`, `update`, `delete`
- `duplicate`, `rename`
- `start`, `pause`, `resume`, `stop`
- `status` — live progress
- `results` — paginated result set
- `pitchfork` — pitchfork-mode helper