u/ClassicMain

Open WebUI v0.9.3 (and v0.9.4 hotfix) is out — massive performance wins, message editing finally fixed
r/OpenWebUI


The big stuff

🚀 Massive performance improvements to loading

  • Chat history maps now load from normalized message records, slashing overhead on long conversations.
  • Prompt list and prompt-tag pages load much faster for non-admin users — accessible prompts are now filtered in a single DB query instead of doing per-prompt permission checks. If you've got a large prompt library, this one is going to feel huge.
  • Per-user memory lookups and deletions are also way faster at scale (memory user filter is now indexed).
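The single-query pattern behind the prompt-list speedup can be sketched like this (table and column names are made up for illustration — this is not Open WebUI's actual schema; the point is pushing the permission filter into the database instead of checking each prompt in application code):

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prompt (id INTEGER PRIMARY KEY, title TEXT, group_id TEXT)")
conn.executemany(
    "INSERT INTO prompt (title, group_id) VALUES (?, ?)",
    [("Summarize", "staff"), ("Translate", "admins"), ("Review", "staff")],
)

user_groups = ["staff"]

# Before: one permission check per prompt in Python (slow for big libraries).
# After: a single filtered query returns only the accessible prompts.
placeholders = ",".join("?" * len(user_groups))
rows = conn.execute(
    f"SELECT title FROM prompt WHERE group_id IN ({placeholders})",
    user_groups,
).fetchall()
print([r[0] for r in rows])  # → ['Summarize', 'Review']
```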

✏️ Assistant response editing and continuation — finally fixed

You can now edit and restructure assistant output items from a dedicated editor view and continue generating from the edited state with full prior context preserved. This includes reasoning blocks, tool calls, and text content — meaning you can edit the model's reasoning content too (depending on provider). Long overdue and a game-changer for iterating on outputs.

Other notable additions

  • 🔄 Replaceable tool embed updates — Pipes and Tools can now overwrite previously emitted rich-UI embeds in-place via a replace flag. Live dashboards and progress panels that update without stacking duplicates are now a thing.
  • 🔇 Voice Mode mute control — Dedicated mute toggle with an "M" shortcut and auto-unmute after assistant playback. No more accidental interruptions from background noise.
  • 🗑️ Delete from conversation menu — Delete the current conversation directly from the chat menu without searching the full chat list.
  • ⬆️ Scroll to Top shortcut — Long conversations get a Scroll to Top action in the chat menu.
  • 🧭 Unified model unload controls — Admins can unload running models from the model selector (Ollama and llama.cpp show loaded-state indicators).
  • 👥 {{USER_GROUPS}} prompt variable — System and template prompts now expand to the user's group memberships, so prompts can adapt to role/access context automatically.
  • 🔎 Brave LLM Context as a new web search provider with configurable context token budget.
  • 🧮 LaTeX copy shortcut — Click rendered LaTeX to copy the raw formula.
  • 🎙️ STT file extension controls — Admins can configure which audio extensions are accepted for speech-to-text uploads.
  • 🛂 MCP OAuth server URL setting — Static OAuth tool server setups can define a separate OAuth server URL for cases where auth endpoints are hosted separately.
  • 🔐 Public chat sharing permission control — Admins can control whether users are allowed to create publicly shareable chats.
  • 🚀 Smarter function dependency installs — Already-preinstalled deps are now skipped, improving startup speed.
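The replaceable-embed pattern might look roughly like this from a Tool or Pipe. Note the event type and payload keys ("embeds", "content", "replace") are assumptions for illustration, not the documented API — check the release notes for the real field names. The stand-in emitter at the bottom just records events so the sketch runs outside Open WebUI:

```python
import asyncio

async def progress_panel(__event_emitter__):
    # Emit the same panel three times; with a replace flag, the UI would
    # update one embed in place instead of stacking three copies.
    for pct in (0, 50, 100):
        await __event_emitter__({
            "type": "embeds",  # assumed event type
            "data": {
                "content": f"<progress value='{pct}' max='100'></progress>",
                "replace": True,  # overwrite the previously emitted embed
            },
        })

# Tiny stand-in emitter: records events instead of rendering them.
events = []

async def record(event):
    events.append(event)

asyncio.run(progress_panel(record))
print(len(events))  # → 3 updates emitted; in the UI only the last stays visible
```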

Notable fixes worth calling out

  • Chat input is no longer blocked by unrelated background tasks after a response completes.
  • Regeneration no longer leaves chats stuck in a permanent loading state.
  • Chat settings and chat controls now autosave properly — system prompts and parameters are no longer lost when refreshing or navigating away before sending.
  • Knowledge collections selected via the chat input selector now persist after reloads and chat switches.
  • Non-blocking STT processing — speech-to-text no longer blocks the server event loop, so other users stay responsive under concurrent load.
  • Streaming token analytics are now accurate across Responses API and OpenAI-compatible providers.
  • Several security hardening fixes: sanitized spreadsheet HTML previews, blocked untrusted external image URLs, validated webhook avatar URLs.

⚠️ Important upgrade note

This release includes database schema changes. Back up your database before upgrading in production. If you're running multi-worker, multi-server, or load-balanced — all instances must be updated simultaneously. Rolling updates aren't supported and will fail due to schema incompatibility.
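For the default single-node SQLite setup, a minimal backup sketch using the stdlib's online backup API (the "data/webui.db" path is an assumption — adjust to your data directory; Postgres deployments should use pg_dump instead, and either way stop the server first so the snapshot is consistent):

```python
import sqlite3
from contextlib import closing

def backup_sqlite(src_path: str, dest_path: str) -> None:
    # sqlite3's online backup API copies pages safely, unlike a raw file
    # copy of a database that might be mid-write.
    with closing(sqlite3.connect(src_path)) as src, \
         closing(sqlite3.connect(dest_path)) as dst:
        src.backup(dst)

# Typical invocation (path is hypothetical — adjust to your deployment):
# backup_sqlite("data/webui.db", "webui-pre-upgrade.db")
```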

Full release notes: https://github.com/open-webui/open-webui/releases

u/ClassicMain — 5 days ago

Inline Visualizer v2.1.0 — Every bug you reported, fixed. Plus pre-styled bare tags, accent palette, and a chart catalog the model actually uses.

v2.1.0 dropped today. This is the post-launch hardening release I wanted to ship the moment v2.0 was out — except instead of just fixing bugs, the more I dug into the skill and the runtime, the more new things I ended up adding.

So: massive feature additions AND every reported bug fixed. Both in the same release.

🆕 What's new

Pre-styled bare tags — write less, get more

Drop vanilla <button>, <input>, <select>, <table>, <details>, <kbd>, and friends into a visualization. They come out looking native to the host UI: theme-aware colors, focus rings, hover states, the works.

Add a class or style attribute and the default styling opts out — the model can still go fully custom when the design calls for it.

The full menu: forms (<button>, <textarea>, <select>, <label>, <fieldset>, <legend>, every common <input> type), content (<kbd> keyboard pills, <hr> flat dividers, <details> / <summary> with rotating chevron, <blockquote> accent callouts, <mark> highlights), tables (header pill, row hover, tabular-num alignment for numeric columns), and <dl> definition lists in three layouts (stacked glossary, two-column grid card, inline pill row).

The result: smaller visualization payloads, faster rendering, and a consistent look across visualizations — without the model having to re-style the same primitives every time.

9-color accent palette

Set data-accent="teal" (or coral, pink, gray, blue, green, amber, red) on any element and focus rings, checkboxes, radio buttons, and any var(--accent) reference recolor to match. The default stays purple. Names match the chart ramps, so a finance dashboard with green accents reads naturally next to its green charts. Light and dark themes are handled automatically.

More chart types & patterns

The skill now teaches the model:

  • Charts: stacked bars / areas, radar, KPI cards with sparklines, progress bars, ranking strips, KPI donuts, and custom-shape charts (thermometers, batteries, fuel gauges)
  • Components: comparison cards, slider-driven explainers, tabs, step-through walkthroughs

More libraries

ECharts, Plotly, vis-network, and Tone.js / Wavesurfer now have first-class entries in the skill's CDN catalog with vetted URLs.

Accessibility

  • aria-invalid="true" paints a red border on text inputs / textareas / selects to flag validation errors
  • Keyboard users get a clear accent-colored focus outline; mouse focus stays subtle

🐛 Every v2.0 bug, fixed

These were all reported by you in the GitHub issue queue. Thank you.

  • Visualizations stayed blank when the model's "Thinking" section was expanded. The wrapper now recovers correctly whether the response sits inside or outside the reasoning subtree.
  • HTML export was missing chart data. Downloaded files now run embedded charts properly when opened standalone.
  • Visualizations occasionally ate the prose around them — hiding text before @@@VIZ-START and after @@@VIZ-END. Marker detection and post-finalize cleanup are now much harder to confuse.
  • text.charCodeAt is not a function console errors that blocked rendering — reported most often by Claude-via-LiteLLM users but seen across multiple providers. Defensive guards throughout the streaming pipeline now keep the iframe alive even when an upstream message arrives in an unexpected shape.
  • Charts vanishing at end of streaming — D3, vis-network, and ECharts charts used to lose their rendered SVG / canvas during the final paint pass. Fixed.
  • new vis.DataSet(...) throwing vis is not defined — the recommended CDN URL was pointing at the build that requires vis-data loaded separately. Now points at the standalone bundle that just works.

🔧 Plus a stack of robustness improvements

  • Smarter handling of providers that wrap responses in reasoning blocks — Bedrock-hosted Haiku 4.5 (and any future provider doing the same) now renders correctly instead of staying blank
  • More resilient script chain — a single broken inline script no longer stalls every later script in the same visualization
  • Hidden-tab charts have a documented fix for the 0×0 init trap in Plotly, ECharts, and vis-network
  • Visualizations no longer feedback-loop on 100vh / 100vw layouts (iframe used to grow taller every measurement cycle with viewport units)
  • <style> blocks survive Open WebUI's chat sanitizer — the iframe re-inflates the rules automatically
  • Better Svelte compatibility — message detection updated for regenerate / edit / branch flows in newer Open WebUI versions
  • Marker debris cleanup — leftover @@@VIZ-START / @@@VIZ-END text and stray closing tags occasionally bled into chat after a long viz finished streaming. Multi-pass cleanup catches them.
  • Internal HTML-token regression guard — the plugin now refuses to load with a clear error if certain dangerous literals get reintroduced (caught a real bug during this release; saves hours of debugging next time)
  • Iframe no longer goes silently dormant if any single bootstrap step fails — each step is independently guarded
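The marker-debris cleanup can be sketched as a multi-pass sweep (illustrative only — the plugin's actual cleanup in tool.py is more involved; this just shows the idea of repeating a pass until nothing changes):

```python
import re

# Matches leftover streaming markers like @@@VIZ-START / @@@VIZ-END.
MARKER = re.compile(r"@@@VIZ-(?:START|END)\b")

def strip_marker_debris(text: str) -> str:
    prev = None
    while prev != text:  # repeat until a full pass changes nothing
        prev = text
        text = MARKER.sub("", text)
    return text.strip()

print(strip_marker_debris("All done. @@@VIZ-END"))  # → "All done."
```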

📦 Get it

Same install path as v2.0:

  1. Paste tool.py into Workspace → Tools
  2. Paste SKILL.md into Workspace → Knowledge as a skill named visualize
  3. Turn native function calling on and attach both to your model
  4. Enable Settings → Interface → "Allow iframe same origin"

https://github.com/Classic298/open-webui-plugins

Star the repo ⭐ if you want to see what I ship next.

Or drop a reaction on the full release notes: https://github.com/Classic298/open-webui-plugins/releases/tag/v1.0.18

Show me what you build with it.
