Hugin includes a built-in AI assistant powered by OpenRouter, Anthropic, Google Gemini, or local Ollama models. The assistant lives in the Assistant sidebar tab with four sub-tabs: Chat, Agent, Auto, and History.
## Provider Support
Three provider types (`ProviderType` enum):

- OpenAICompatible (default) — works with OpenRouter, OpenAI, Groq, Together, Ollama, LM Studio, vLLM, Azure OpenAI. Just point `base_url` at the right endpoint.
- Anthropic — Claude models with 90% cache savings via `cache_control` blocks.
- Google — Gemini via the `generateContent` API with `cachedContents` support.
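Switching providers is a config change, not a code change. A minimal sketch of an Anthropic setup, where the model ID is illustrative rather than a tested value:

```toml
# Hypothetical Anthropic setup; the model ID is an example, not a recommendation.
[assistant]
enabled = true
provider = "anthropic"
model = "claude-3-5-sonnet-latest"  # any Claude model your key can access
cache_enabled = true                # benefits from cache_control savings
```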
Ollama is supported through the OpenAICompatible provider with `base_url = "http://localhost:11434/v1"`.
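A minimal local setup might look like the sketch below; the model name is whatever you have pulled locally, and `llama3.3` is only an assumption:

```toml
# Local Ollama sketch; model name is an assumption (run `ollama list` to see yours).
[assistant]
enabled = true
provider = "open_a_i_compatible"
base_url = "http://localhost:11434/v1"
model = "llama3.3"
# No api_key_encrypted shown here; a local Ollama endpoint does not require one
# (assumption: Hugin treats the key as optional for keyless endpoints).
```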
## Chat
Streaming chat with markdown rendering. When a flow is selected in HTTP History, the assistant automatically includes request/response context (method, URL, key headers, status, body size).
## Per-Tab AI Buttons
The following tabs have dedicated AI actions:
- Repeater — “Analyze Request” + right-click context menu
- Findings — “Triage” + “Draft Report with AI”
- Intruder — “Analyze Results” + “AI Payloads”
- Scanner — “Explain Finding”
- Search — Natural language search
- Match & Replace — “Generate with AI”
- Decoder — “What is this?”
- Workflows — “Create with AI”
- Intercept — “AI Modify” for NL request rewriting
- Sitemap — “Analyze Attack Surface”
- Cookie Jar — “Analyze Cookies”
- Comparer — “Explain Differences”
- Sequencer — “Analyze Randomness”
- Scopes — “Suggest with AI”
- Plugins/Scripts — “Generate with AI”
## Configuration
Add an `[assistant]` section to `~/.hugin/config.toml`:
```toml
[assistant]
enabled = true
provider = "open_a_i_compatible"  # or "anthropic", "google"
base_url = "https://openrouter.ai/api/v1"
api_key_encrypted = "..."         # encrypted at rest
model = "meta-llama/llama-3.3-70b-instruct"
max_tokens = 4096
temperature = 0.7
max_concurrent_llm = 5            # global LLM rate-limit semaphore
max_concurrent_agents = 3         # explore/auto agent sessions
cache_enabled = true
max_tokens_per_day = 0            # 0 = unlimited; useful for free tiers

# Per-task model routing for cost optimization:
# [assistant.task_routes.summarize]
# provider = "open_a_i_compatible"
# model = "openai/gpt-4o-mini"
```
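Uncommented, a routing table could look like the sketch below. Only `summarize` appears in the config comments; the `triage` task name and the Anthropic model ID are assumptions for illustration:

```toml
# Hypothetical routing; task names beyond "summarize" and the model IDs are assumptions.
[assistant.task_routes.summarize]
provider = "open_a_i_compatible"
model = "openai/gpt-4o-mini"        # cheap model for bulk summaries

[assistant.task_routes.triage]      # assumed task name, mirroring the Findings "Triage" action
provider = "anthropic"
model = "claude-3-5-sonnet-latest"  # stronger model where accuracy matters
```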
Provider keys are encrypted at rest using the OS keyring or a derived key.