OpenWebUI - Package Updates
-
[2.5.2]
- Update OpenWebUI to 0.3.23
- Full changelog
- WebSocket Redis Support: Enhanced load-balancing capabilities for multi-instance setups, improving performance and reliability in the WebUI.
- Adjustable Chat Controls: Introduced width-adjustable chat controls, enabling a personalized and more comfortable user interface.
- i18n Updates: Improved and updated the Chinese translations.
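For multi-instance deployments behind a load balancer, the WebSocket support above is typically backed by a shared Redis instance. A minimal sketch of the relevant environment variables (names follow Open WebUI's documented configuration; the Redis host and port are illustrative placeholders for your own setup):

```shell
# Enable WebSocket support and back it with a shared Redis instance
# so multiple Open WebUI replicas can coordinate sessions.
export ENABLE_WEBSOCKET_SUPPORT="true"
export WEBSOCKET_MANAGER="redis"
export WEBSOCKET_REDIS_URL="redis://redis-host:6379/0"  # hypothetical host
```

All replicas must point at the same Redis URL for session state to be shared correctly.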
-
[2.5.3]
- Update OpenWebUI to 0.3.26
- Full changelog
-
[2.5.4]
- Update OpenWebUI to 0.3.28
- Full changelog
-
[2.5.5]
- Update OpenWebUI to 0.3.29
- Update Ollama to 0.3.12
- Full changelog
- Fix the NLTK `punkt_tab` issue
-
[2.5.6]
- Update OpenWebUI to 0.3.30
- Full changelog
- Update Available Toast Dismissal: Enhanced user experience by ensuring that once the update available notification is dismissed, it won't reappear for 24 hours.
- Ollama /embed Form Data: Corrected inaccuracies in the /embed form data so it matches Ollama's specification.
- O1 Max Completion Tokens Issue: Resolved a compatibility issue with OpenAI's o1 models' `max_completion_tokens` parameter to ensure smooth operation.
- Pip Install Database Issue: Fixed a critical issue where database changes during pip installations were reverting and not saving chat logs, now ensuring data persistence and reliability in chat operations.
- Chat Rename Tab Update: Fixed the browser tab title so it updates when a chat is renamed, keeping tab titles consistent.
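The o1 fix above reflects an OpenAI API quirk: o1-family models reject the legacy `max_tokens` parameter and expect `max_completion_tokens` instead. A minimal, hypothetical sketch of how a client might choose the right field when building a chat-completion request body (the helper name is illustrative, not Open WebUI's actual code):

```python
def build_chat_payload(model: str, messages: list, token_limit: int) -> dict:
    """Build a chat-completion request body, choosing the token-limit
    field by model family. Hypothetical helper, not Open WebUI's
    actual implementation."""
    payload = {"model": model, "messages": messages}
    if model.startswith("o1"):
        # o1 models reject max_tokens and require max_completion_tokens
        payload["max_completion_tokens"] = token_limit
    else:
        payload["max_tokens"] = token_limit
    return payload
```

Sending `max_tokens` to an o1 model returns a 400 error from the API, which is why the proxy layer has to translate the parameter.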
-
[2.5.7]
- Update OpenWebUI to 0.3.32
- Full changelog
-
[2.5.8]
- Update Ollama to 0.3.13
-
[2.5.9]
- Update Ollama to 0.3.14
-
[2.5.10]
- Update OpenWebUI to 0.3.35
- Full changelog
-
[2.6.0]
- Update Ollama to 0.4.0
- Support for Llama 3.2 Vision (i.e. Mllama) architecture
- Sending follow-on requests to vision models will now be much faster
- Fixed issues where stop sequences would not be detected correctly
- Ollama can now import models from Safetensors without a Modelfile when running `ollama create my-model`
- Fixed issue where redirecting output to a file on Windows would cause invalid characters to be written
- Fixed issue where invalid model data would cause Ollama to error
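For the Safetensors import noted above, the workflow is roughly: place the model's safetensors weights and config files in one directory and run `ollama create` from it, with no Modelfile required. A hedged sketch (the directory layout and model name are illustrative):

```shell
# Illustrative layout: a directory containing Hugging Face-style
# safetensors weights plus config/tokenizer files.
#   my-model/
#     config.json
#     tokenizer.json
#     model-00001-of-00002.safetensors
#     model-00002-of-00002.safetensors
cd my-model
# As of Ollama 0.4.0, no Modelfile is needed for this import path.
ollama create my-model
ollama run my-model
```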
-
[2.6.1]
- Update Ollama to 0.4.1
-
[2.6.2]
- Update Ollama to 0.4.2
-
[2.7.0]
- Update OpenWebUI to 0.4.0
- Full Changelog
- User Groups: You can now create and manage user groups, making user organization seamless.
- Group-Based Access Control: Set granular access to models, knowledge, prompts, and tools based on user groups, allowing for more controlled and secure environments.
- Group-Based User Permissions: Easily manage workspace permissions. Grant users the ability to upload files, delete, edit, or create temporary chats, as well as define their ability to create models, knowledge, prompts, and tools.
- LDAP Support: Newly introduced LDAP authentication adds robust security and scalability to user management.
- Enhanced OpenAI-Compatible Connections: Added prefix ID support to avoid model ID clashes, plus explicit model IDs for APIs that lack a '/models' endpoint, ensuring smooth operation with custom setups.
-
[2.7.1]
- Update OpenWebUI to 0.4.1
- Full Changelog
- Enhanced Feedback System: Introduced a detailed 1-10 rating scale for feedback alongside thumbs up/down, preparing for more precise model fine-tuning and improving feedback quality.
- Tool Descriptions on Hover: Easily access tool descriptions by hovering over the message input, providing a smoother workflow with more context when utilizing tools.
-
[2.7.2]
- Update OpenWebUI to 0.4.2
- Full Changelog
- Knowledge Files Visibility Issue: Resolved the bug preventing individual files in knowledge collections from displaying when referenced with '#'.
- OpenAI Endpoint Prefix: Fixed the issue where certain OpenAI connections that deviate from the official API spec weren't working correctly with prefixes.
- Arena Model Access Control: Corrected an issue where arena model access control settings were not being saved.
- Usage Capability Selector: Fixed the broken usage capabilities selector in the model editor.
-
[2.7.3]
- Update Ollama to 0.4.3
- Full Changelog
- Tülu 3: Tülu 3 is a leading instruction-following model family, offering fully open-source data, code, and recipes from the Allen Institute for AI.
- Mistral Large: A new version of Mistral Large with improved long context, function calling, and system prompt support.
- Fixed crash that occurred on macOS devices with low memory
- Resolved performance issues that occurred in Ollama versions 0.4.0-0.4.2
- Fixed issue that would cause `granite3-dense` to generate empty responses
- Fixed crashes and hanging caused by KV cache management
-
[2.7.4]
- Update OpenWebUI to 0.4.4
- Full Changelog
- Translation Updates: Refreshed Catalan, Brazilian Portuguese, German, and Ukrainian translations, further enhancing the platform's accessibility and improving the experience for international users.
- Mobile Controls Visibility: Resolved an issue where the controls button was not displaying on the new chats page for mobile users, ensuring smoother navigation and functionality on smaller screens.
- RAG Query Generation Issue: Addressed a significant problem where RAG query generation occurred unnecessarily without attached files, drastically improving speed and reducing delays during chat completions.
- Legacy Event Emitter Support: Reintroduced compatibility with legacy "citation" types for event emitters in tools and functions, providing smoother workflows and broader tool support for users.