Deep Research, MCP Tools, and the End of Vendor Lock-In
· Hivemind Team
This Is a Big One
Hivemind 2026.03.01 is our biggest release yet. Deep research, MCP integration, a migration wizard, per-agent security controls, and a lot more. Here's what shipped and why it matters.
Deep Research Tool
Agents can now do real multi-step iterative web research — not single-shot queries, but autonomous research loops that follow leads, cross-reference sources, and synthesize findings across multiple steps.
The deep research tool is fully LLM-agnostic. It works with whatever provider you've configured, local or cloud. Point it at Ollama running Qwen3-Coder locally, or at Claude through the Anthropic API — the research pipeline doesn't care. Your agents plan their research strategy, execute searches, evaluate results, and iterate until they have what they need.
This is the kind of capability that used to require a dedicated SaaS product. Now it's a tool your agents can invoke alongside everything else they already do.
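The plan → search → evaluate → iterate loop described above can be sketched in a few lines. Everything here is illustrative: `ResearchLoop`, `StubProvider`, and the prompts are assumptions for the sketch, not Hivemind's actual API.

```ruby
# Minimal sketch of an iterative research loop (hypothetical names throughout).
Finding = Struct.new(:query, :summary)

class ResearchLoop
  def initialize(provider, max_steps: 3)
    @provider = provider   # anything that responds to #complete, local or cloud
    @max_steps = max_steps
  end

  # Follow leads until the step budget runs out or no lead remains.
  def run(question)
    findings = []
    query = question
    @max_steps.times do
      summary = @provider.complete("Summarize results for: #{query}")
      findings << Finding.new(query, summary)
      query = @provider.complete("Next lead to follow from: #{summary}")
      break if query.strip.empty?
    end
    findings
  end
end

# Stub standing in for any configured backend (Ollama, Anthropic, etc.).
class StubProvider
  def complete(prompt)
    "response(#{prompt.length} chars)"
  end
end

research = ResearchLoop.new(StubProvider.new, max_steps: 2)
research.run("What is MCP?").each { |f| puts f.query }
```

The point of the shape: the loop only depends on a `complete` method, which is why swapping Ollama for the Anthropic API changes nothing in the pipeline.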
MCP Tool Bridge with OAuth
This one positions Hivemind squarely in the MCP ecosystem. Agents can now execute MCP tools with proper OAuth token support, powered by an Anthropic Agent SDK sidecar.
What this means in practice: if a service exposes MCP-compatible tools, your Hivemind agents can use them with authenticated access. The OAuth flow is handled automatically. You configure the provider once, and every agent that needs it gets scoped access.
MCP is becoming the standard protocol for AI tool integrations. This release makes Hivemind a first-class participant in that ecosystem.
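To make "configure once, scoped access per agent" concrete, here is a sketch of what a provider entry and its access check could look like. The field names and structure are assumptions for illustration, not Hivemind's real configuration schema.

```ruby
# Hypothetical shape of an MCP provider entry (illustrative field names).
mcp_provider = {
  name: "github",
  oauth: { client_id: ENV["GITHUB_CLIENT_ID"].to_s, scopes: ["repo:read"] },
  agents: %w[researcher reviewer]  # agents granted scoped access
}

# Scoped access: an agent may use the provider's tools only if listed.
def authorized?(provider, agent)
  provider[:agents].include?(agent)
end

puts authorized?(mcp_provider, "researcher")  # prints true
puts authorized?(mcp_provider, "intruder")    # prints false
```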
Web Migration Wizard for OpenClaw
We already had `rake openclaw:migrate` for command-line imports. Now there's a full GUI migration wizard built into the web interface.
Walk through each step visually: select your OpenClaw directory, preview what will be imported, choose which agents, skills, and settings to bring over, and confirm. Your agents, configurations, and integrations come with you. Secrets get moved into the encrypted vault. Skills get scanned before they run.
If you've been on the fence about switching, the barrier just dropped to zero.
Agent Identity and Personality
Three changes that make agents dramatically more useful out of the box:
Capability-aware identity prompts. Agents now know what tools they have access to and adjust their behavior accordingly. An agent with web search and file access behaves differently than one with only chat — because it knows what it can do.
Reworked agent templates. We rewrote the default templates with real personality and memory awareness. Agents aren't generic chatbots anymore. They have context about their role, their team, and their history.
Editable custom_soul in the UI. The team settings page now lets you edit the `custom_soul` — the personality and instructions that shape how your agents think and communicate. This was previously config-file-only. Now non-technical team members can tune agent behavior directly from Mission Control.
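The three changes above come together at prompt-assembly time. A minimal sketch, assuming a function that merges name, tools, and `custom_soul` (the method and wording are illustrative, not Hivemind's actual template code):

```ruby
# Sketch: capability-aware identity prompt assembly (hypothetical helper).
def identity_prompt(name:, tools:, custom_soul: nil)
  lines = ["You are #{name}, an agent on this team."]
  lines << custom_soul unless custom_soul.nil?   # UI-editable personality
  lines << if tools.empty?
             "You have no tools; respond using chat alone."
           else
             "You can use these tools: #{tools.join(', ')}."
           end
  lines.join("\n")
end

puts identity_prompt(name: "Scout", tools: %w[web_search file_read],
                     custom_soul: "Be concise and cite sources.")
```

An agent with an empty tool list gets the chat-only line instead, which is the "knows what it can do" behavior in miniature.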
Local LLM Flexibility
We went deep on local model support in this release:
Any OpenAI-compatible API. Hivemind now works with llama.cpp, LocalAI, and anything else that speaks the OpenAI-compatible protocol. This was a hard-won lesson — we're making sure you're never locked into a single inference backend.
Per-agent model settings. Temperature, context size, and other model parameters are now configurable per individual agent. Your research agent can run with a large context window and moderate temperature while your code agent stays focused with low temperature and tight parameters.
Provider creation in the UI. Add and configure LLM providers directly from the management page. No more editing config files to add a new backend.
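The per-agent settings described above amount to layering overrides on provider defaults. A sketch of that merge, where the keys mirror common OpenAI-compatible options and the scheme itself is an assumption:

```ruby
# Sketch: per-agent parameters layered over provider defaults (illustrative).
PROVIDER_DEFAULTS = { temperature: 0.7, num_ctx: 8192 }.freeze

def model_params(overrides = {})
  PROVIDER_DEFAULTS.merge(overrides)
end

research_agent = model_params(num_ctx: 32_768)    # wide context for synthesis
code_agent     = model_params(temperature: 0.1)   # tight and deterministic
puts research_agent[:num_ctx]    # 32768
puts code_agent[:temperature]    # 0.1
```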
Security
Security isn't a section we tack on at the end — it's foundational. This release ships three significant improvements:
Per-agent network egress controls. Each agent gets its own network allowlist. Agent A can reach `api.github.com` and nothing else. Agent B can reach your internal services. A compromised agent can't phone home to an arbitrary endpoint.
Rack Attack configuration. Rate limiting and brute force protection are configured out of the box. No additional setup required.
Skill security scanner and analyzer. Skills and plugins are scanned for security issues before they run. Dangerous patterns — shell injection vectors, unrestricted network access, filesystem writes outside the workspace — get flagged for review.
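The egress model reduces to a simple membership check per agent. A sketch with an assumed allowlist structure (Hivemind enforces this at the network layer; the Ruby here is only to show the semantics):

```ruby
# Sketch of a per-agent egress check (illustrative data, hypothetical hosts).
ALLOWLISTS = {
  "agent_a" => ["api.github.com"],
  "agent_b" => ["internal.example.com", "vault.example.com"]
}.freeze

def egress_allowed?(agent, host)
  ALLOWLISTS.fetch(agent, []).include?(host)
end

puts egress_allowed?("agent_a", "api.github.com")    # true
puts egress_allowed?("agent_a", "evil.example.com")  # false (blocked)
```

Note the default: an agent with no allowlist entry can reach nothing, which is the safe failure mode for a compromised or misconfigured agent.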
Developer Experience
Local development Docker setup. Contributors can spin up the full Hivemind stack locally with Docker. Fork, clone, `docker compose up`, and you're developing. This matters for open-source adoption — the easier it is to contribute, the faster the project moves.
Release candidate build support. We now have a proper RC workflow for testing before cutting releases. This means more stable releases and a clear path for community testing.
Bug Fixes
The small stuff that adds up:
- Dynamic `num_ctx` sizing for Ollama — no more silent prompt truncation
- Fixed content block flattening for Ollama and llama.cpp adapters
- Corrected `ask_user` response routing through team chat
- Removed bracketed name prefixes from agent responses
- Added human name to team chat context
- Fixed `INTERNAL_API_SECRET` generation when key exists but value is empty
- Updated Discord invite link to permanent version
What's Next
This release lays the groundwork for the next phase: formal capability declarations, the dual-context architecture, and multi-tenant isolation. Every feature in this release — the MCP bridge, the per-agent controls, the security scanner — is a building block toward that roadmap.
Try it out. Break it. Tell us what you think.
Your agents just got a serious upgrade.