Auto-tagging Ollama setup
-
Hello. It would be interesting to know whether the auto-tagging AI feature (v2.9) is possible on a Cloudron install, and eventually to add the setup process to the docs.
https://docs.linkwarden.app/self-hosting/ai-worker -
Could Ollama be a Cloudron service, usable by other apps?
Would an "external" Ollama - at least if running on the same server - be against Cloudron rules? Otherwise, it could make sense to package it for convenience... (IMHO) -
Should we bake Ollama into the app package as an optional component? For the moment, if you have an external Ollama instance, you can set that variable in Linkwarden to point to it.
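For the moment that looks roughly like the following. The variable name below is a placeholder (the actual name is in the Linkwarden AI-worker docs linked above), and the host is an assumption; port 11434 is Ollama's default.

```shell
# Placeholder variable name -- check the Linkwarden AI-worker docs for the real one.
# Host is an assumption; 11434 is Ollama's default port.
OLLAMA_BASE_URL="http://my-ollama-host:11434"

# Sanity-check that the external Ollama instance is reachable.
# GET /api/tags is Ollama's documented endpoint for listing installed models.
curl -s "$OLLAMA_BASE_URL/api/tags"
```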
-
@necrevistonnezr wrote:
> Could Ollama be a Cloudron service, usable by other apps? Would an "external" Ollama - at least if running on the same server - be against Cloudron rules? Otherwise, it could make sense to package it for convenience... (IMHO)
I think that idea is somewhat discussed here: https://forum.cloudron.io/topic/11576/access-ollama-base-url-from-n8n -
@girish
I guess there's a point to packaging Ollama into the app, since that is what the hosted version of Linkwarden does as well. They also recommend a lightweight model in the docs (phi3:mini-4k), so there should be no issues with resources.
@girish
Smaller in size than phi3:mini-4k is llama3.2:1b.
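Whichever model ends up in the package, the tagging call boils down to a POST against Ollama's /api/generate endpoint. A minimal sketch of the request body; /api/generate and its "model"/"prompt"/"stream" fields are Ollama's documented API, but the prompt wording here is purely illustrative, not Linkwarden's actual prompt:

```python
import json

# Either model name from this thread works here; llama3.2:1b is the smaller one.
MODEL = "llama3.2:1b"

def build_tagging_request(title: str, excerpt: str) -> dict:
    """Build the JSON body for POST <ollama-host>:11434/api/generate.

    The prompt text is an illustration only; Linkwarden's real prompt differs.
    """
    prompt = (
        "Suggest up to five short, comma-separated tags for this bookmark.\n"
        f"Title: {title}\n"
        f"Excerpt: {excerpt}"
    )
    # stream=False asks Ollama for a single JSON response instead of a stream.
    return {"model": MODEL, "prompt": prompt, "stream": False}

body = build_tagging_request("Ollama on Cloudron", "Packaging the AI worker for Linkwarden")
print(json.dumps(body))
```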