@Jack-613 this is a Cloudron related forum, so if you are not using OpenWebUI on Cloudron, then you should probably ask in the upstream project instead.
@eddowding said in Which models know most about Doughnut Economics?:
Does it matter much if you can add docs to them and have them answer intelligently?
I tried that yesterday. It didn't help in the slightest.
@shrey said in Using Mistral API seems broken on cloudron:
@nebulon Thanks!
Indeed, that was it.
RAM was set to 2GB, whereas it seems to require a minimum of 5.5GB to function.
That's right, and if you can give it at least 8-16GB of RAM you will see a huge difference. These models are meant to consume a lot of resources for now, but it's getting better.
@girish No problem. We lost all the content of our conversations in OpenWebUI, but we'd rather that happen now than in six months, after six months of use. We're always in favor of more stability, and it's better that this happens at the start, as it has now.
@Lanhild thanks, will check it out. Apps using sqlite cannot be considered stable until we figure out a proper backup strategy. Cloudron's backup strategy is to copy files, and this doesn't work well with databases in the data directory: the data can be inconsistent.
Also, given that there is no automatic migration from sqlite to postgres, we might have to publish a completely new package.
Personally, I disabled the local Ollama because my Cloudron doesn't have a GPU, and running on CPU is too painful.
Instead, I activated a bunch of OpenAI-compatible provider APIs,
but in the end I realized that I just need OpenRouter to access all of them.
[image: 1714667254980-1b065e93-d797-4b8a-a65e-48dbe6e00216-image.png]
With OpenRouter, you can even block providers that log your queries,
which I will file as a feature request for Open-WebUI.
[image: 1714667397659-3999b3bc-9a90-4c26-82b9-bc0eeb2fd5dc-image.png]
@girish Perfect thank you! This actually made it work for me!
Steps:
Generate an API key in your OpenWebUI.
Add the header `Authorization: Bearer {{api_key}}` to a POST request to `{{OpenWebUI_URL}}/ollama/api/generate` with a body of:

```json
{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}
```
From there I get the response via the API.
To use it with n8n, I can just use the regular request node.
The URL `{{OpenWebUI_URL}}/ollama/` gets forwarded to the internal Ollama endpoint. So if you want to access other parts of the Ollama API, you can just use their documentation at https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-completion.
In the docs, replace `localhost:11434/api/` with `{{OpenWebUI_URL}}/ollama/api/`.
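The steps above can be sketched as a small standard-library Python helper. This is a minimal sketch, not an official client: the base URL, API key, and model name are placeholders you would replace with your own, and `stream: false` is assumed so the endpoint returns a single JSON object instead of a stream.

```python
import json
import urllib.request


def build_generate_request(base_url, api_key, model, prompt):
    """Build the POST request for OpenWebUI's forwarded Ollama generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # assumption: ask for one JSON object, not a stream
    }
    return urllib.request.Request(
        f"{base_url}/ollama/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",  # the API key generated in OpenWebUI
            "Content-Type": "application/json",
        },
        method="POST",
    )


def ollama_generate(base_url, api_key, model, prompt):
    """Send the request and return the parsed JSON response."""
    req = build_generate_request(base_url, api_key, model, prompt)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example call (placeholder URL and key, not executed here):
# ollama_generate("https://chat.example.com", "sk-...", "llama2", "Why is the sky blue?")
```

Splitting request construction from sending makes it easy to point the same helper at any other Ollama endpoint by changing the path.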
One thing I would like to have as an option is a bell sound when the generation has completed. It helps me be productive elsewhere instead of waiting.
Oh, I would suggest overriding the initial memory allocation and ramping it up to as much RAM as you can spare.
Doh!
It does work, I just hadn't scrolled down to see the all-important Update password button!
[image: 1711579505762-4ff139f4-757d-45f0-9ad9-0d94e4d591b3-image.png]
@LoudLemur said in Various configuration issues:
Cloudron's instructions say to change the password and email. You can change the password OK, but I don't think there is a field to change the email address.
I can confirm this. There is actually no way to change the email. Can you report/ask this upstream and link it here? Thanks.