
Discuss

Feedback, suggestions, anything else Cloudron related

1.1k Topics 9.0k Posts
  • 1 Votes
    13 Posts
    2k Views
    girishG
    @CBCUN It's on our TODO list. I don't have an ETA yet.
  • NFS network share

    Solved
    3
    0 Votes
    3 Posts
    396 Views
    randyjcR
    @girish Okay, I feel stupid. That worked, thank you. I'm using the Tailscale IP, so all traffic goes via the tunnel. My assumption is that it doesn't matter that the NFS traffic itself isn't encrypted, as long as it goes through the Tailscale tunnel?
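
    A minimal sketch of that setup, assuming a hypothetical Tailscale address (100.64.0.5) and export path for the NFS server; since Tailscale already encrypts traffic between peers with WireGuard, the NFS mount itself can stay plain:

        # /etc/fstab entry on the Cloudron host (IP and export path are placeholders).
        # Traffic to the 100.64.0.0/10 address rides the WireGuard-encrypted tunnel.
        100.64.0.5:/export/data  /mnt/nfs-data  nfs4  defaults,_netdev,noatime  0  0

        # Or mount it ad hoc for a quick test:
        sudo mount -t nfs4 100.64.0.5:/export/data /mnt/nfs-data
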
  • Borg backup to local/attached and S3 Compatible Object Storage

    16
    2 Votes
    16 Posts
    5k Views
    girishG
    In case anyone is using BorgBase: they seem to be having a big outage, possibly down for a week already. https://status.borgbase.com/status/default I found out from https://news.ycombinator.com/item?id=37115540
  • NetBird On Cloudron - Peer-to-Peer Wireguard VPN Network

    1
    0 Votes
    1 Posts
    184 Views
    No one has replied
  • [Guide] Auto Deploy of Hugo via Gitea + Drone CI

    7
    5 Votes
    7 Posts
    1k Views
    fbartelsF
    robi said in [Guide] Auto Deploy of Hugo via Gitea + Drone CI: "how this could all run from the same cloudron". It already could. Drone's runner (it would be the same with Gitea or Woodpecker) is a simple Go binary that just needs access to a local Docker socket. It's just me running jobs on my installation, so there's no risk of exposing data to others by accessing the Docker socket. And if you're serious about CI, you'll want optimised machines anyway (building in a ramdisk FTW), rather than running your CI jobs where they compete for resources with WordPress & co. Being a simple binary that doesn't hold any local data also means you don't need to back up your runners, and since they access a central node, they don't even need internet access. If you want to save costs, you can even let your developers run the runner on their local machines.
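
    A rough sketch of such a runner, assuming the standard drone/drone-runner-docker image and placeholder values for the server address and shared RPC secret; the runner only needs the local Docker socket and a route to the central Drone server:

        # Placeholder values: drone.example.com and <shared-secret>.
        docker run --detach \
          --name drone-runner \
          --restart always \
          --volume /var/run/docker.sock:/var/run/docker.sock \
          --env DRONE_RPC_PROTO=https \
          --env DRONE_RPC_HOST=drone.example.com \
          --env DRONE_RPC_SECRET=<shared-secret> \
          --env DRONE_RUNNER_CAPACITY=2 \
          --env DRONE_RUNNER_NAME=ci-box \
          drone/drone-runner-docker:1
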
  • 1 Votes
    7 Posts
    890 Views
    randyjcR
    You could also use rclone together with mergerfs to create a kind of layered setup, e.g. /mnt/remote, /mnt/local and /mnt/mergerfs. You then write locally, and it uploads to your remote storage in the background. You can look at https://github.com/l3uddz/cloudplow for inspiration.
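
    A minimal sketch of that layering, assuming a hypothetical rclone remote named remote: and the three mount points from the post; new files land on the local branch and a scheduled job (cloudplow, or a plain rclone move) pushes them to the remote later:

        # Expose the remote locally via rclone (remote name is an assumption).
        rclone mount remote:backups /mnt/remote --daemon --vfs-cache-mode writes

        # Overlay local and remote; new files are created on the first branch (/mnt/local).
        mergerfs -o defaults,allow_other,category.create=ff /mnt/local:/mnt/remote /mnt/mergerfs

        # Periodically move settled files to the remote, e.g. from cron.
        rclone move /mnt/local remote:backups --min-age 1h
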
  • Cloudron forum and GIT offline for long time?

    5
    0 Votes
    5 Posts
    479 Views
    robiR
    Some tar streaming would help with all the small files.
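
    For instance, a hedged sketch of streaming a directory of many small files as a single tar stream over SSH (host and paths are placeholders):

        # Pack everything into one compressed stream instead of transferring files one by one.
        tar -C /var/backups -czf - . | ssh user@backuphost 'cat > cloudron-backup.tar.gz'
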
  • LDAP port (security considerations)

    10
    1 Votes
    10 Posts
    988 Views
    potemkin_aiP
    @imc67 Geo-blocking feels like a more feature-rich solution that might be of help, but it's not exactly my cup of tea. I would guess that Cloudflare doesn't prevent anyone from accessing your web service directly (should they figure out the IP address, for example via an e-mail you've sent)?
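
    One common way to close that gap (not something Cloudflare does for you) is to firewall the web ports so only Cloudflare's published ranges can reach them. A sketch with ufw, fetching the authoritative list rather than hard-coding it; note this also blocks any legitimate direct access, so treat it as an illustration only:

        # Allow HTTP/HTTPS only from Cloudflare's published IPv4 ranges.
        for net in $(curl -s https://www.cloudflare.com/ips-v4); do
            sudo ufw allow proto tcp from "$net" to any port 80,443
        done
        # Then reject everything else on those ports.
        sudo ufw deny proto tcp from any to any port 80,443
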
  • Switch to Debian - Ubuntu Spying

    Moved
    17
    0 Votes
    17 Posts
    2k Views
    potemkin_aiP
    Guess CentOS / RedHat is out of consideration now...
  • Cloudron Dashboard OIDC Login Inconvenience - low priority

    5
    0 Votes
    5 Posts
    248 Views
    nebulonN
    That is fixed: https://forum.cloudron.io/topic/9625/branding-bug
  • icedrive as backup solution?

    23
    1 Votes
    23 Posts
    2k Views
    robiR
    @LoudLemur said in icedrive as backup solution?: "How would marking the bucket on idrive e2 as private rather than public affect usage with applications on the Cloudron?" With object storage, a public bucket means it's accessible via HTTP to anyone. Some people use that to host web assets or even entire static websites.
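
    A hedged sketch of flipping such a bucket to private with the aws CLI against an S3-compatible endpoint (endpoint URL and bucket name are placeholders); clients that authenticate with access keys keep working, only anonymous HTTP reads stop:

        # Set the bucket ACL to private (placeholder endpoint and bucket name).
        aws s3api put-bucket-acl \
          --endpoint-url https://e2.example-objectstorage.com \
          --bucket my-cloudron-backups \
          --acl private
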
  • Data backup

    5
    0 Votes
    5 Posts
    462 Views
    P
    Moving forward, what would a good backup policy be to prevent this issue from occurring in the future? Should I just get seed boxes and sync directly from there to my local machine?
  • Object Storage or Block Storage for backups of growing 60+ GB?

    61
    3 Votes
    61 Posts
    10k Views
    marcusquinnM
    @LoudLemur I find rsync too slow with zillions of small files. You'd think it would be quicker since it's incremental, but large compressed files beat many small files, and the same goes for storage space: compressed archives cost significantly less to store.
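
    A sketch of that trade-off with placeholder paths and a hypothetical rclone remote: rsync pays a per-file round trip for every small file, while a single compressed archive moves as one sequential transfer:

        # Per-file sync: metadata and checksum round trips for every small file.
        rsync -a /var/backups/ user@backuphost:/backups/

        # The same data as one compressed archive: pack once, transfer one large object.
        tar -czf /tmp/backup.tar.gz -C /var/backups .
        rclone copy /tmp/backup.tar.gz remote:backups/
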
  • Icedrive Storage Deal

    1
    0 Votes
    1 Posts
    220 Views
    No one has replied
  • 3 Votes
    9 Posts
    901 Views
    nebulonN
    Good point, I will remove them from the 1-click area on our install page.
  • Love to see it :D

    5
    7 Votes
    5 Posts
    501 Views
    murgeroM
    @robi Next to the likes of WordPress and cPanel, Cloudron has come such a long way!
  • Allow users to restart Apps

    6
    1 Votes
    6 Posts
    518 Views
    G
    OK, will work something out. Thanks.
  • 2 Votes
    2 Posts
    823 Views
    girishG
    @LoudLemur (Going by the topic subject) if the error is related to the PTR record, then the solution is not correct. The PTR record is set by your VPS provider, not by your DNS provider. The PTR record goes by many names: Reverse DNS, rDNS, PTR. Unlike "forward" DNS, which looks something up based on the domain name, "reverse" DNS looks it up based on the IP address. And who owns the IP address? The VPS provider. So you have to change it there. This means that to fix a PTR record issue, you have to fix it at SSDnodes and not at Porkbun or whatever name server you have. If SSDnodes does not let you set the PTR record, you have to ask them by raising a support ticket. More information at https://docs.cloudron.io/email/#ptr-record
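
    A quick way to check the current state, using a documentation-range placeholder IP; the answer to the reverse lookup has to come from whoever owns the IP block (the VPS provider), which is why that is where it gets changed:

        # Look up the PTR record for the server's IP (placeholder address).
        dig -x 203.0.113.10 +short

        # It should print the mail server's hostname; compare with the forward lookup.
        dig my.example.com +short
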
  • 3 Votes
    1 Posts
    143 Views
    No one has replied
  • Cloudron - Update to Cloudron version 7.5 Success!

    2
    6 Votes
    2 Posts
    314 Views
    necrevistonnezrN
    Same here in a home server situation.