Cloudron dockerfile for LibreChat is missing tools needed for RAG and OCR
-
hi there. i have been using the librechat experimental app for a few days and i gotta say, i love how elegant it is compared to OpenWebUI and its graveyard of semi-working scripts and functions.
i did notice that the system as deployed by Cloudron right now cannot do OCR for uploaded files, nor can it handle its expected RAG functionality: the Dockerfile for the Cloudron deployment does not currently include the local RAG API server the app requires for all of that. i have an ollama endpoint running elsewhere on my tailnet that i would like to use.
this seems to be supported, but you still need the rag server running. i had little to no success deploying it on another server and pointing the cloudron app to it over tailscale.
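for reference, my understanding is that pointing LibreChat at an externally hosted rag_api instance comes down to a single env var; a sketch (the tailnet hostname and port below are placeholders from my setup, not anything official):

```shell
# .env — point LibreChat at a RAG API running on another machine
# hostname/port are placeholders for a tailscale node running rag_api
RAG_API_URL=http://rag-server.tailnet-name.ts.net:8000
```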
for the sake of cognitive ease, here is the server's github:
additionally, here is the documentation for the OCR stuff:
- https://www.librechat.ai/docs/features/ocr
- https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/ocr
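from my reading of those two pages, the OCR side is just a block in `librechat.yaml` plus a couple of env vars; a sketch of what i believe the structure looks like (treat the field names and defaults as my reading of the docs, not gospel):

```yaml
# librechat.yaml — OCR block, per the object_structure/ocr page
ocr:
  mistralModel: "mistral-ocr-latest"   # model used for OCR
  apiKey: "${OCR_API_KEY}"             # resolved from the env
  baseURL: "${OCR_BASEURL}"            # optional; falls back to mistral's API
  strategy: "mistral_ocr"              # default strategy per the docs
```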
i think the ideal state would be to maybe have cloudron use a locally-hosted ollama server preloaded with one of the lightweight embedding models, like:
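if i am reading the rag_api configuration right, wiring it to a local ollama server for embeddings is a handful of env vars on the rag server side; a sketch (the model name and tailnet URL are examples i picked, so verify against the rag_api README):

```shell
# rag_api env — use an ollama server for embeddings
EMBEDDINGS_PROVIDER=ollama
OLLAMA_BASE_URL=http://ollama.tailnet-name.ts.net:11434   # placeholder tailnet host
EMBEDDINGS_MODEL=nomic-embed-text                         # one lightweight option
```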
here is the page i found where they describe what that server is and how to add it:
it might be helpful as well to ship default `.env` and `librechat.yaml` files that have every option on this page pre-populated but commented out by prepending `#` to each line:
i would be happy to put together a default or example `.env` and `librechat.yaml` for use by the cloudron team if that is something y'all want. i have been soaked in the documentation for a bit now and think i could aggregate something. seems not even the upstream docs have that.