Cloudron Forum

msprout (@msprout)

Posts: 15 | Topics: 3 | Shares: 0 | Groups: 0 | Followers: 0 | Following: 0

Posts


  • Cloudron dockerfile for LibreChat is missing tools needed for RAG and OCR
    msprout

    Hi there. I have been using the LibreChat experimental app for a few days and I gotta say, I love how elegant it is compared to OpenWebUI and its graveyard of semi-working scripts and functions.

    I did notice that the app as Cloudron deploys it right now cannot do OCR on uploaded files, and its expected RAG functionality does not work either, because the Dockerfile for the Cloudron package does not include the RAG API server that the app requires to handle all of that. I have an Ollama endpoint running elsewhere on my tailnet that I would like to use for this.

    This seems to be supported, but you still need the RAG server running somewhere. I have had little to no success deploying it on another server and pointing the Cloudron app at it over Tailscale.

    For convenience, here is the RAG server's image on GitHub Container Registry:

    • http://ghcr.io/danny-avila/librechat-rag-api-dev:latest

    additionally, here is the documentation for the OCR stuff:

    • https://www.librechat.ai/docs/features/ocr
    • https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/ocr

    I think the ideal state would be for Cloudron to use a locally hosted Ollama server preloaded with one of the lightweight embedding models, like:

    • nomic-embed-text
    • mxbai-embed-large
    • all-minilm.

    Here is the page I found where they describe what that server is and how to add it:

    • https://www.librechat.ai/docs/configuration/rag_api

    It might also be helpful to ship default .env and librechat.yaml files that have every option on this page pre-populated but commented out by prepending # to each line:

    • https://www.librechat.ai/docs/configuration/dotenv

    I would be happy to put together an example .env and librechat.yaml for the Cloudron team if that is something y'all want; a rough sketch of the direction is below. I have been soaked in the documentation for a while now and think I could aggregate something. It seems not even the upstream docs have that.
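    As a rough illustration of what I mean, here is the kind of thing I would pre-populate in a default .env. The variable names are the ones I remember from the rag_api docs linked above, and the tailnet hostnames and embedding model are placeholders from my own setup, so double-check everything against the docs rather than treating this as a drop-in config:

    # --- RAG API (file uploads / embeddings) ---
    # Where LibreChat can reach the RAG API server
    RAG_API_URL=http://rag-api.example-tailnet.ts.net:8000
    # Tell the RAG API to use Ollama for embeddings instead of OpenAI
    EMBEDDINGS_PROVIDER=ollama
    EMBEDDINGS_MODEL=nomic-embed-text
    # Where the RAG API can reach the Ollama server (a tailnet address in my case)
    OLLAMA_BASE_URL=http://ollama.example-tailnet.ts.net:11434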

    LibreChat

  • Pangolin on Cloudron - Your own tunneled reverse proxy with authentication (Cloudflare Tunnel replacement)
    msprout

    I would love for this one to be available.

    As it stands, I have actually rolled my own reverse proxy with authentication by leveraging Tailscale and Cloudron's "Relay" app/feature. As long as my Cloudron machine is on the same tailnet, I just set the relay to point to the Tailscale address and port.

    Tailscale has a neat web interface that helpfully shows which processes are listening on which ports, so you'll also know whether a service is reachable from other machines on the tailnet.

    Finally, with the relay app, Cloudron offers the option to make apps available only to certain users; I have found that this works well for identity management too. Whatever Cloudron uses for its universal login tech got the thumbs up from my buddy in InfoSec. 😄

    I really like how much more flexibility Pangolin offers. What I have is super rigid in the sense that you have two choices for authentication: allow everyone, or allow just local Cloudron users. I would like to be able to put some apps behind a browser-native password prompt, restrict others to certain IPs, and so on. That would be sweet.

    App Wishlist

  • A lightweight socks5 or Web proxy
    msprout

    Hi all. Longtime fan and customer of Cloudron. It's basically one of four services I actually recommend to people.

    Anywho, Cloudron has helped me successfully offboard from most of the non-FOSS software I had become reliant on. However, the one thing I think I'm still missing is a light, simple SOCKS5 proxy (or a web proxy tool, if you're like me and old enough to remember the era when those were used; they are just very user-friendly).

    I can use the VPN deployment (which works great), but most of the people in my life just need a way to use certain blocked apps at work. They're never gonna have the opportunity to install a cert or VPN software on their work machines, so this is basically the only lever available in that case.

    For reference, I saw that Proxmox Helper Scripts offers HevSocks5Server.

    Thanks all.

    Feature Requests

  • Minio removing the interface for community edition
    msprout

    I am so disappointed in Minio. I understand that finding a business model for FOSS software is a pain in the butt, but there really is something to be said for how valuable "free at home; cost at work" can be as a model. It is how Bitwarden has managed to actually take a chunk out of 1Password's market share.

    Minio

  • Cloudron dockerfile for LibreChat is missing tools needed for RAG and OCR
    msprout

    Wonderful as always Girish and team. 🙂 Thank you.

    LibreChat

  • How to use local GPU with remote LibreChat?
    msprout

    Imo: serve Ollama on the machine that has the GPU, either bare metal or with the GPU passed through to a VM (Proxmox/virtualized); make sure Ollama listens beyond localhost by setting the server address in its systemd unit / start script (it's in the Ollama docs; sorry, on mobile); connect both machines to a Tailscale tailnet; then point the Ollama settings in LibreChat's two config files at your GPU machine's tailnet IP or hostname. I have found that this pathway is pretty robust. I haven't noticed any real slowdown, and my VPS and homelab are over 4,000 miles apart. 😊 A rough sketch of the relevant config is below.
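    Roughly, from memory, so treat this as a sketch and verify against the Ollama and LibreChat docs; the tailnet IP and model name are placeholders:

    # On the GPU machine: make Ollama listen on all interfaces, not just localhost
    # (sudo systemctl edit ollama.service, then restart the service)
    [Service]
    Environment="OLLAMA_HOST=0.0.0.0"

    # In librechat.yaml on the LibreChat side: add Ollama as a custom endpoint
    endpoints:
      custom:
        - name: "Ollama"
          apiKey: "ollama"                          # any non-empty string is fine for Ollama
          baseURL: "http://100.64.0.10:11434/v1/"   # the GPU machine's tailnet IP or hostname
          models:
            default: ["llama3.1"]
            fetch: true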

    LibreChat librechat tunnel

  • How to use local GPU with remote LibreChat?
    msprout

    Zrok/OpenZiti looks hella cool though.

    If you are committed to self-hosting, you can check out applications/protocols like NetBird, Headscale, innernet, or plain ol' vanilla WireGuard (a minimal example of the last one below).
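    For the vanilla WireGuard option, a minimal wg0.conf on the server side looks something like this (keys and addresses are placeholders; the client gets a mirror-image config pointing at the server's public endpoint):

    [Interface]
    # This machine's tunnel address and private key
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    # One client allowed to connect
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32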

    LibreChat librechat tunnel

  • A lightweight socks5 or Web proxy
    msprout

    (this is off topic, but, I am a big fan of Cloudron and a swag designer by trade — I have a T-shirt design launching this June at UNIQLO. If Cloudron ever wanted to make a run of swag like tshirts, mugs, posters, stickers, etc [say, for an anniversary celebration], please consider me! Swag stores these days can be built and serviced basically for free.)

    Feature Requests

  • A lightweight socks5 or Web proxy
    msprout

    I like that idea! Having just looked it up, 3proxy has a ridiculous number of features. The only thing I would personally recommend or request is a means of giving a friend a URL and letting them browse sites via my proxy without needing to configure their own local browser.

    Back in the early 2000s (when I was but a wee eighth grader), we called these tools "Web proxies," but really all they were doing was grabbing the user's request via an input form, transmitting it to a backend using CGI, fetching the content of the page server-side, and presenting it on the user's screen via an iframe as if it were native.

    I wonder if this could be done in a more modern way using a tool like Neko Browser (https://github.com/m1k1o/neko), set up to deploy with your SOCKS5 proxy server pre-configured in its Firefox instance; see the sketch below.

    Just some thoughts. Glad I struck a chord with others too. 🙂
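    In case it helps picture it, here is a very rough sketch: a docker-compose service for neko's Firefox image plus a Firefox policies.json that forces all browsing through a SOCKS5 proxy. The image tag and NEKO_* variables are from my memory of the neko docs, and the policies.json mount path is a guess, so verify both before using any of this:

    # docker-compose.yml (sketch)
    services:
      neko:
        image: m1k1o/neko:firefox
        restart: unless-stopped
        shm_size: 2gb
        ports:
          - "8080:8080"
          - "52000-52100:52000-52100/udp"
        environment:
          NEKO_SCREEN: "1920x1080@30"
          NEKO_PASSWORD: "guestpass"        # placeholder
          NEKO_PASSWORD_ADMIN: "adminpass"  # placeholder
          NEKO_EPR: "52000-52100"
          NEKO_ICELITE: "1"
        volumes:
          # Mount path inside the image is an assumption; check the neko docs
          - ./policies.json:/usr/lib/firefox/distribution/policies.json

    # policies.json (Firefox enterprise policy: send everything through the SOCKS5 proxy)
    {
      "policies": {
        "Proxy": {
          "Mode": "manual",
          "SOCKSProxy": "my-socks5-host:1080",
          "SOCKSVersion": 5,
          "UseProxyForDNS": true,
          "Locked": true
        }
      }
    }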

    Feature Requests

  • Transcoding Errors?
    msprout

    @nebulon thanks for your effort anyway.

    Just for anyone else: I thought maybe this was an issue with memory allotment making transcoding trip up, but I tried resizing the RAM of my VPS instance as well as the app's limit within Cloudron, and no dice. It also didn't seem to be an issue that was solvable by switching to another encoder preset.

    Owncast

  • Pangolin on Cloudron - Your own tunneled reverse proxy with authentication (Cloudflare Tunnel replacement)
    msprout

    Also, the relay is on the app store screen — it's a button on the top right.

    App Wishlist

  • Pangolin on Cloudron - Your own tunneled reverse proxy with authentication (Cloudflare Tunnel replacement)
    msprout

    @visamp I did not; I use Proxmox as my hypervisor. But as long as TrueNAS can run a Tailscale node, you should be good to go with what I described.

    App Wishlist