OpenClaw 3.11 & 3.12: Dashboard Rewrite, Fast Mode, and 8 Security Fixes You Should Care About

OpenClaws.io Team

@openclaws

March 13, 2026

9 min read

We skipped the 3.11 blog post. Not because nothing happened—3.11 was packed—but because 3.12 landed four days later and it made more sense to cover them together. Two releases, one post.

Here's everything that matters.

Control UI: Dashboard v2

The Control UI got a ground-up rewrite. Not a reskin—a rethink.

The old dashboard was a single page that tried to do everything. The new one splits into dedicated views: Overview, Chat, Config, Agent, and Sessions. Each does one thing well. There's a command palette (Cmd+K) for power users, mobile bottom tabs for phone access, and the chat view now supports slash commands, message search, export, and pinned messages.

This is the work of @BunsDev in PR #41503, and it's the kind of contribution that changes how people interact with OpenClaw daily. If you manage your instance through the browser, this is a different experience now.

Fast Mode: GPT-5.4 and Claude, One Toggle

Fast mode used to be a vague concept—some models had it, the toggle was inconsistent, and the behavior varied across interfaces.

3.12 unifies it. There's now a single /fast toggle that works across TUI, Control UI, and ACP. For OpenAI, it shapes requests for GPT-5.4's fast tier. For Anthropic, it maps directly to the API's service_tier parameter. Both are verified live—if your account doesn't have fast-tier access, the system tells you instead of silently degrading.

Per-model config defaults mean you can set fast mode as the default for specific models while keeping standard mode for others. Session-level overrides let you flip it mid-conversation.
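As a rough illustration of what a per-model default could look like, here is a hypothetical config sketch. The key names (`models`, `fastMode`) are illustrative guesses, not the documented schema; check the OpenClaw config reference for the real field names.

```json
{
  "models": {
    "openai/gpt-5.4": { "fastMode": true },
    "anthropic/claude": { "fastMode": false }
  }
}
```

A session-level `/fast` toggle would then override whichever default applies to the active model.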

One toggle, two providers, consistent behavior everywhere. That's the whole story.

Ollama: First-Class Citizen (3.11)

Ollama went from "supported" to "first-class" in 3.11.

The onboarding wizard now has a dedicated Ollama path with two modes: Local (everything runs on your machine) and Cloud + Local (cloud models for heavy lifting, local for privacy-sensitive tasks). There's browser-based cloud sign-in, curated model suggestions based on your hardware, and smart handling that skips unnecessary local pulls when you're using cloud models.

This matters because local models are the privacy answer. No API keys, no data leaving your machine, no monthly bill. For users who want that, the setup path is now as smooth as the cloud provider path. Thanks @BruceMacD.

Separately, Ollama, vLLM, and SGLang all moved onto the provider-plugin architecture in 3.12. Provider-owned onboarding, model discovery, picker setup, and post-selection hooks are now modular. If you're building custom provider integrations, this is the pattern to follow.

Multimodal Memory: Images and Audio (3.11)

OpenClaw's memory system learned to see and hear.

3.11 adds opt-in multimodal indexing for memorySearch.extraPaths. Point it at a folder of images or audio files, and the memory search can now surface them in context. Under the hood, it uses Gemini's gemini-embedding-2-preview model with configurable output dimensions and automatic reindexing when dimensions change.
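To make the opt-in concrete, here is a hypothetical config sketch. `memorySearch.extraPaths` and the `gemini-embedding-2-preview` model name come from the release notes; the nesting and the other key names (`multimodal`, `enabled`, `outputDimensions`) are illustrative, so verify them against the docs before copying.

```json
{
  "memorySearch": {
    "extraPaths": ["~/notes/screenshots"],
    "multimodal": {
      "enabled": true,
      "model": "gemini-embedding-2-preview",
      "outputDimensions": 1536
    }
  }
}
```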

This is gated behind explicit opt-in—no surprise indexing of your photo library. But for users who want their agent to remember visual context (screenshots, diagrams, receipts), it's a meaningful capability addition. Thanks @gumadeiras.

iOS and macOS: Native Getting Better (3.11)

Two significant native platform improvements landed in 3.11:

iOS Home Canvas (@ngutman): The iOS app got a bundled welcome screen with a live agent overview that refreshes on connect, reconnect, and foreground return. Floating controls were replaced with a docked toolbar, the layout adapts to smaller phones, and chat now opens in the resolved main session instead of a synthetic iOS session. Also: a TestFlight beta release flow with Fastlane support.

macOS Chat UI (@ImLukeF): The macOS chat composer now has a model picker, persists thinking-level selections across relaunch, and properly syncs session models across providers.

These aren't headline features, but they compound. Every time the native experience gets a little smoother, the gap between "I use OpenClaw through Telegram" and "I use OpenClaw natively" gets smaller.

Kubernetes: A Starting Point

3.12 adds a starter Kubernetes install path: raw manifests, Kind setup for local testing, and deployment documentation. Thanks @sallyom, @dzianisv, and @egkristi.

This isn't a production-hardened Helm chart—it's a starting point for teams that want to run OpenClaw on K8s. Expect this to evolve rapidly based on community feedback.

Subagents: sessions_yield

A small but important primitive for orchestration: sessions_yield lets an orchestrator agent end the current turn immediately, skip any queued tool work, and carry a hidden follow-up payload into the next session turn.

Why this matters: in multi-agent workflows, sometimes the orchestrator needs to bail out of a turn early—maybe a higher-priority task arrived, maybe the current tool chain is going down a wrong path. Before this, you had to wait for all queued work to complete. Now you can cut the line. Thanks @jriff (#36537).
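The mechanics are easier to see in miniature. The following is a hypothetical sketch, not the OpenClaw API: `TurnRunner`, `yieldTurn`, and the follow-up handling are invented stand-ins for the `sessions_yield` behavior described above.

```typescript
// Hypothetical sketch of the sessions_yield pattern. All names here
// are invented; only the behavior mirrors the description above.
type Task = { name: string; run: () => string };

class TurnRunner {
  private queue: Task[] = [];
  private pendingFollowUp: string | null = null;

  enqueue(task: Task): void {
    this.queue.push(task);
  }

  // Analogue of sessions_yield: end the turn now, drop queued tool
  // work, and stash a hidden payload for the next turn.
  yieldTurn(followUp: string): void {
    this.queue = [];
    this.pendingFollowUp = followUp;
  }

  // Runs one turn: a carried payload (if any) is delivered first,
  // then whatever tool work is still queued.
  runTurn(): string[] {
    const results: string[] = [];
    if (this.pendingFollowUp !== null) {
      results.push(`follow-up: ${this.pendingFollowUp}`);
      this.pendingFollowUp = null;
    }
    while (this.queue.length > 0) {
      results.push(this.queue.shift()!.run());
    }
    return results;
  }
}

const runner = new TurnRunner();
runner.enqueue({ name: "slow-tool", run: () => "slow-tool result" });
runner.yieldTurn("higher-priority task arrived");
console.log(runner.runTurn()); // the queued tool never runs
```

The key property: after the yield, the next turn starts with the follow-up payload and an empty queue, rather than grinding through stale tool calls first.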

Security: 8 Advisories

This is the section you should actually read carefully if you're running OpenClaw in production. Between 3.11 and 3.12, there are 8 GitHub Security Advisories:

Critical: WebSocket Origin Validation (3.11)

**GHSA-5wcw-8jjv-m286** — Browser origin validation was skipped when proxy headers weren't present. In trusted-proxy mode, this opened a cross-site WebSocket hijacking path that could grant `operator.admin` access to untrusted origins. Fixed: origin validation now runs on all browser connections regardless of proxy headers.
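The shape of the fix can be sketched in a few lines. This is a minimal, hypothetical check, not OpenClaw's actual code; the point is that the origin test no longer depends on proxy headers being present.

```typescript
// Minimal sketch of unconditional origin validation for browser
// WebSocket upgrades. The function name and allowlist are illustrative.
const ALLOWED_ORIGINS = new Set(["https://openclaw.example.com"]);

function allowBrowserUpgrade(headers: Record<string, string | undefined>): boolean {
  // Pre-fix behavior skipped this check when proxy headers (e.g.
  // x-forwarded-for) were absent. Post-fix: the Origin header is
  // validated on every browser connection, proxied or not.
  const origin = headers["origin"];
  if (origin === undefined) return false;
  return ALLOWED_ORIGINS.has(origin);
}

console.log(allowBrowserUpgrade({ origin: "https://evil.example.net" })); // false
```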

3.12 Security Fixes

| Advisory | What | Impact |
| --- | --- | --- |
| GHSA-99qw-6mr3-36qr | Workspace plugins auto-loaded without trust | Cloned repos could execute plugin code silently |
| GHSA-pcqg-f7rg-xfvv | Invisible Unicode in exec approval prompts | Zero-width characters could spoof reviewed commands |
| GHSA-9r3v-37xh-2cf6 | Unicode normalization bypass in exec detection | Fullwidth/invisible chars evaded heuristic checks |
| GHSA-f8r2-vg7x-gh8m | Exec allowlist case sensitivity on POSIX | Patterns could overmatch across case/directory boundaries |
| GHSA-r7vr-gr74-94p8 | Non-owner access to /config and /debug | Authorized non-owners could reach owner-only surfaces |
| GHSA-rqpp-rjj8-7wv8 | Shared-token scope self-declaration | Device-less tokens could self-declare elevated scopes |
| GHSA-vmhq-cqm9-6p7q | Browser profile persistence via browser.request | Write-scoped callers could persist admin-only browser profiles |
| GHSA-2rqg-gjgv-84jm | Agent workspace boundary override | External callers could override gateway workspace boundaries |

The Unicode-related fixes (GHSA-pcqg, GHSA-9r3v) are particularly worth understanding. Attackers were using zero-width and fullwidth Unicode characters to make malicious commands appear benign in approval prompts. The fix normalizes Unicode and strips invisible formatting before both display and detection.
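The defense is standard Unicode hygiene: compatibility-normalize first, then strip invisible characters. A minimal sketch of the idea (not OpenClaw's actual implementation):

```typescript
// NFKC folds compatibility characters like fullwidth "ｒｍ" down to
// ASCII "rm"; the regex then drops zero-width characters that render
// as nothing in an approval prompt but change what actually runs.
function normalizeForReview(cmd: string): string {
  return cmd
    .normalize("NFKC")
    .replace(/[\u200B-\u200D\u2060\uFEFF]/g, "");
}

// Fullwidth letters plus a zero-width space: looks benign on screen,
// but differs from the command the shell would receive.
const spoofed = "ｒｍ\u200B -rf /tmp/x";
console.log(normalizeForReview(spoofed)); // "rm -rf /tmp/x"
```

Running the same normalization before both display and detection is what closes the gap: the reviewer and the heuristics now see the same string.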

The workspace plugin fix (GHSA-99qw) changes a default: plugins in cloned repositories no longer auto-load. You now need an explicit trust decision. This is a breaking change for anyone relying on implicit workspace plugin loading—and that's intentional.

Bottom line: update to 3.12. These aren't theoretical vulnerabilities.

The Rest

There's a long tail of fixes across both releases that won't fit in a blog post but matter to specific users:

  • Telegram: Model picker persistence, HTML chunking, preview delivery dedup—four separate fixes addressing message reliability
  • Kimi Coding: Native Anthropic format restored for tool calls, User-Agent header fix for subscription auth, Ollama compatibility for kimi-k2.5:cloud
  • Mattermost: Block streaming dedup, reply media delivery with local file uploads
  • BlueBubbles/iMessage: Self-chat echo dedup without broad webhook suppression
  • Windows: Native update path fixed—no more dying on missing git or node-llama-cpp
  • Sandbox: Write operations no longer silently create empty files
  • Discord (3.11): Configurable auto-archive duration for threads

What Changed (Combined)

| Area | 3.11 | 3.12 |
| --- | --- | --- |
| UI | iOS Home Canvas, macOS model picker | Control UI dashboard v2, command palette |
| Models | Ollama first-class onboarding, OpenCode Go provider | GPT-5.4 fast mode, Claude fast mode, provider-plugin arch |
| Memory | Multimodal image/audio indexing with Gemini embeddings | |
| Infra | | Kubernetes manifests, subagent sessions_yield |
| Security | WebSocket origin validation (critical) | 7 advisories: plugins, Unicode, scopes, workspace |
| Platforms | Discord thread archiving, Feishu image fix | Slack Block Kit, Mattermost fixes, Windows native update |
| Stability | Telegram preview delivery (4 fixes), agent failover improvements | Cron dedup, session discovery, sandbox write fix |

Two releases. Zero filler. Update now.
