
OpenWebUI

10 Topics 73 Posts
  • OpenWebUI - Package Updates

    Pinned
    1 Vote
    12 Posts
    302 Views
    girishG

    [0.10.0]

    Update OpenWebUI to 0.1.119. Update Ollama to 0.1.32. Full changelog:
    - Enhanced RAG Embedding Support: Ollama and OpenAI models can now be used as the RAG embedding model.
    - Seamless Integration: Copy 'ollama run ' directly from the Ollama page to easily select and pull models.
    - Tagging Feature: Add tags to chats directly via the sidebar chat menu.
    - Mobile Accessibility: Swipe left and right on mobile to effortlessly open and close the sidebar.
    - Improved Navigation: The admin panel now supports pagination for the user list.
    - Additional Language Support: Added Polish language support.
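    The "Copy 'ollama run'" flow mentioned in the changelog maps onto the standard Ollama CLI; a quick sketch (the model name here is just an example):

    ```shell
    # Download a model from the Ollama library, then chat with it interactively.
    ollama pull mistral
    ollama run mistral

    # List the models the local Ollama instance currently has installed.
    ollama list
    ```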
  • Access Ollama Base URL from n8n

    Unsolved
    0 Votes
    1 Post
    13 Views
    No one has replied
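    For anyone checking this from n8n or another client: a quick way to verify that an Ollama base URL is reachable is to probe Ollama's standard HTTP API (the host and port below are the Ollama defaults; adjust for your setup):

    ```shell
    # Returns the Ollama version if the endpoint is reachable.
    curl http://localhost:11434/api/version

    # Lists the models available at that endpoint.
    curl http://localhost:11434/api/tags
    ```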
  • Should ollama be part of this app package?

    5 Votes
    18 Posts
    487 Views
    coniunctioC

    @girish said in Should ollama be part of this app package?:

    Local ollama is now integrated. You have to reinstall the app though.

    Keep your expectations in check. It probably won't work great if you don't have a good CPU and we have no GPU integration yet. It's very slow with low end CPUs. I am not an expert on the RAM/CPU/GPU requirements. Feel free to experiment.

    I have been using the workaround of disabling local Ollama in the Cloudron app and running a separate (external) Docker installation of Ollama with a dedicated GPU on the same hardware, then linking that Ollama instance to the Cloudron instance of Open-WebUI. Somehow, this configuration is faster on a NAS purchased in 2018 with an add-on 8 GB NVIDIA GPU than on my more recently purchased M1 MacBook Pro with 16 GB RAM and an integrated GPU. An additional bonus of running the Cloudron Open-WebUI instead of the localhost version on my Apple silicon MBP is that I can use my local LLMs on my mobile devices in transit, even when my laptop is shut down.
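    A rough sketch of that setup, assuming the NVIDIA Container Toolkit is installed on the host; the container name, volume, and model are placeholders, and OLLAMA_BASE_URL is the Open WebUI setting that points it at an external Ollama:

    ```shell
    # Run Ollama in its own container with GPU access
    # (upstream's documented invocation for NVIDIA GPUs).
    docker run -d --gpus=all \
      -v ollama:/root/.ollama \
      -p 11434:11434 \
      --name ollama \
      ollama/ollama

    # Pull a model into the external instance.
    docker exec -it ollama ollama pull llama2

    # Then point the Cloudron Open WebUI app at the container, e.g.:
    #   OLLAMA_BASE_URL=http://<host-ip>:11434
    ```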

  • The models load in the void

    1 Vote
    6 Posts
    150 Views
    girishG

    Just saw this post - LLaMA Now Goes Faster on CPUs

  • Can't open web UI

    Solved
    1 Vote
    3 Posts
    88 Views
    Dennis47200D

    Correct, problem was solved shortly after posting this.

  • let's collect some metrics

    4 Votes
    17 Posts
    237 Views
    L

    One thing I would like to have as an option is a bell sound when the generation has completed. It helps me be productive elsewhere instead of waiting.

    Oh, I would suggest overriding the initial memory allocation and ramping it up to as much RAM as you can spare.

  • 3 Votes
    7 Posts
    93 Views
    L

    @jdaviescoates I think you tried the mixtral one already. How about one of the puffin or orca models?

  • Changing the admin password doesn't work

    Solved
    0 Votes
    2 Posts
    28 Views
    jdaviescoatesJ

    Doh!

    It does work, I just hadn't scrolled down to see the all-important Update password button!

    [screenshot: image.png]

  • The advanced settings are reset every time

    2 Votes
    4 Posts
    81 Views
    girishG

    Yeah, upstream is in heavy development, so I guess some of these issues are expected.

  • Various configuration issues

    0 Votes
    3 Posts
    76 Views
    girishG

    @LoudLemur said in Various configuration issues:

    Cloudron's instructions say to change the password and email. You can change the password OK, but I don't think there is a field to change the email address.

    I can confirm this. There is actually no way to change the email. Can you report/ask this upstream and link it here? Thanks.