We would have to adjust the package to pre-configure the app to optionally use Redis. However, this is only relevant for multi-instance setups, which Cloudron does not do anyway, and for instances with many concurrent users, which is likely not the main use case for this app on Cloudron. So for now, this optional Redis support is out of scope.
From the changelog: "Enhanced load balancing capabilities for multiple instance setups"
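For reference, a minimal sketch of what such a pre-configuration could look like, assuming the websocket/Redis environment variables documented upstream (the variable names and the Redis URL below are assumptions, not anything the Cloudron package sets today):

```
# Sketch only: optional Redis for multi-instance / load-balanced setups.
# Variable names taken from the upstream Open Web UI docs; treat as an assumption.
ENABLE_WEBSOCKET_SUPPORT=true
WEBSOCKET_MANAGER=redis
WEBSOCKET_REDIS_URL=redis://redis:6379/0
```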
The Open Web UI project initially started out as an Ollama frontend. To run local models, one needs GPU support, and Cloudron currently does not have GPU support. Over time, Open Web UI has shifted to being a frontend for OpenAI API-compatible services. It is also now possible to run Ollama on another machine that has a GPU and then point Open Web UI to use that Ollama.
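For example, pointing the app at a remote Ollama server is just a matter of setting the Ollama base URL (this can also be done in the admin connections settings); a rough sketch, where the host is a placeholder and 11434 is Ollama's default port:

```
# Sketch: make Open Web UI talk to an Ollama instance on another machine with a GPU.
# Replace the host with an address reachable from the Cloudron server.
OLLAMA_BASE_URL=http://gpu-machine.lan:11434
```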
I guess we can mark this as stable since it's still useful while we figure out GPU support in Cloudron.
@vladimir-d In the future, I believe we should let the app handle most of its configuration itself rather than through environment variables.
Variables such as RAG_EMBEDDING_MODEL are all set via the application UI and saved in the config.json file.
As for the initial issue of this topic, I solved it by resetting the embedding model engine.
@Jack-613 this is a Cloudron-related forum, so if you are not using OpenWebUI on Cloudron, you should probably ask in the upstream project instead.
@eddowding said in Which models know most about Doughnut Economics?:
Does it matter much if you can add docs to them and have them answer intelligently?
I tried that yesterday. It didn't help in the slightest.