Cloudron makes it easy to run web apps like WordPress, Nextcloud and GitLab on your server.



Discuss

1.2k Topics 10.5k Posts

Feedback, suggestions, anything else Cloudron related

  • What do you do?

    Pinned
    74
    7 Votes
    74 Posts
    44k Views
    robiR
    @nostrdev so glad you investigated when I recommended Cloudron. Glad to have you here.
  • Show me your dashboard :)

    Pinned
    65
    6 Votes
    65 Posts
    28k Views
    T
@scooke Just following the documentation for self-development/deploy; it is still basically Docker and there are good basic containers to start from. I had some more, but moved them to my locally running TrueNAS SCALE, using OIDC from Cloudron.
  • About backups to objects storage and DNS requests

    4
    0 Votes
    4 Posts
    31 Views
    jamesJ
    Hello @mpevhgnuragistes I had to look up FDN and found FortiGuard Distribution Network, is this correct?
  • Kleptomania

    2
    0 Votes
    2 Posts
    30 Views
    jamesJ
Hello @sebastian42 and welcome to the Cloudron Forum. What is Cloudron? @Sebastian42 said: "I just hope this is the right forum ....." This is not the correct forum, sorry.
  • Passkey Setup Requested Again After “Log out from all”

    1
    0 Votes
    1 Post
    18 Views
    No one has replied
  • Czech Translation for Cloudron Now 100% Complete 🇨🇿

    5
    6 Votes
    5 Posts
    46 Views
    archosA
@nebulon said in Czech Translation for Cloudron Now 100% Complete: "This is great! We will ship the next Cloudron version with Czech (internally 9.1 is released for new installs already, so it will be added to the next patch release)." That’s great news, thank you!
  • Struggling to Replace MinIO - Advice Welcome!

    7
    1 Vote
    7 Posts
    157 Views
    jadudmJ
    Depending on your appetite for loss, I would consider backups-in-depth. That is, one backup site is not a backup. Use rsync-based backup over SSHFS to Hetzner or similar. You will want to select "use hardlinks" and, if you want it, encryption. The use of hardlinks is, essentially, your de-duplication. (See below.)

    For a second layer of depth, I would consider a (daily? weekly? monthly?) backup of your primary backup site to a secondary. This could be a sync to AWS S3, for example. Note that any S3-based backup (B2, Cloudflare ObjectSomething, etc.) will have both a storage cost and an API cost. If you are dealing with millions of small files in your backups, the API costs will become real, because dedupe requires checking each object and then possibly transferring it (multiple PUT/GET requests per file). S3 can automatically keep multiple versions of a file; you could use this for an in-place rotation/update of files. If you are doing an S3 backup, you can use lifecycle rules to automatically move your S3 content to Glacier. This is much cheaper than "hot" S3 storage, but you pay a penalty if you download/delete too early or too often.

    As a third, cheap-ish option, get a 2- or 4-bay NAS that can run TrueNAS and put a pair of 8-12TB HDDs in it. Configure the disks as a ZFS mirrored pair. Run a cron job once per day/week to pull down the contents of the Hetzner box. (Your cron will want to, again, use rsync with hardlinks.) You now have a local machine mirroring your hot backups. It is arguably more expensive up front than some other options (~600 USD), but you don't have any "we might run out of space" issues. And because you're using it to pull, you don't have any weird networking problems: just SCP the data down (or rsync it down over SSH).

    Whatever you are doing, consider targeting two different destinations at two different times (per day/alternating/etc.). Or, consider some combination of backups that gives you multiple copies at multiple sites. That could be Hetzner in two regions with backups run on alternating days, or backing up to a storage box and pulling down a clone every day to a local NAS, or ... or ...

    Ultimately, your 150GB is small. If you're increasing by a few GB per week, you're likely to reach about 1TB/year. Not knowing your company's finances, this is generally considered a small amount of data. Trying to optimize for cost immediately is possibly less important than just getting the backups somewhere. Other strategies could involve backing up to the NAS locally first and then using a cron job to borg or rsync to a remote host (possibly more annoying to set up), etc. But you might have more dedupe options then. (borg has dedupe built in, I think.) I have a suspicion that your desire to use object storage might be a red herring. But, again, I don't know your constraints/budget/needs/concerns.

    Deduplication: if you use rsync with hardlinks, then each daily backup will automatically dedupe unchanged files. A hardlink is a pointer to a file. So, if you upload super_ai_outputs_day_1.md to your storage on Monday and it remains unchanged for the rest of time, then each subsequent day's backup is going to be a hardlink to that file. It will, for all intents and purposes, take up zero additional disk space. So, if you are backing up large numbers of small-to-medium-sized files that do not change, SSHFS/rsync with hardlinks will naturally dedupe your unchanging old data.

    This will not do binary deduplication across different files. So, if you're looking for a backup solution that would (say) identify that two 1GB files are identical in their middle 500MB and somehow dedupe that, you need more sophisticated tools and strategies. Rsync with hardlinks just makes sure that the same file, backed up every day, does not take (# days * size) of space; it takes the original size of the file plus a directory entry for each link.

    Note, though: if you copy a snapshot of your hardlinked backups to an object store, each file may be stored at full size for every day. I'm possibly wrong on that, but I'm not confident that most tools would know what to do with those hardlinks when copying to an object store. I think you'd end up multiplying your disk usage significantly, because your backup tool will have to create a copy of each file in the object store. (Most object stores have no notion of symlinks/hardlinks.) An experiment with a subset of the data, or even a few files, will tell you the answer. If you have other questions, you can ask here, or DM me.
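    The hardlink mechanism described above can be sketched in a few lines of Python (a minimal illustration of what rsync's "use hardlinks" de-duplication relies on; the directory and file names are made up):

    ```python
    import os
    import tempfile

    # Simulate two daily backup directories sharing an unchanged file via a
    # hardlink -- the same mechanism rsync-with-hardlinks uses for dedup.
    root = tempfile.mkdtemp()
    day1 = os.path.join(root, "day1")
    day2 = os.path.join(root, "day2")
    os.makedirs(day1)
    os.makedirs(day2)

    # Day 1: the file is uploaded once.
    src = os.path.join(day1, "report.md")
    with open(src, "w") as f:
        f.write("unchanged content\n")

    # Day 2: the file did not change, so link to it instead of copying.
    dst = os.path.join(day2, "report.md")
    os.link(src, dst)

    s1, s2 = os.stat(src), os.stat(dst)
    print(s1.st_ino == s2.st_ino)  # → True: both paths point at one inode (one copy on disk)
    print(s1.st_nlink)             # → 2: two directory entries reference that inode
    ```

    The data blocks exist once; each extra "day" costs only a directory entry, which is why unchanged files take essentially zero additional space.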
  • min vps specs for a cr mailserver?

    3
    3 Votes
    3 Posts
    63 Views
    humptydumptyH
    @luckow thank you
  • Wasn't there a 9.0.18?

    2
    1 Vote
    2 Posts
    35 Views
    nebulonN
That was just an intermediate release for pre-built images at cloud providers.
  • AI on Cloudron

    a.i
    259
    2 Votes
    259 Posts
    225k Views
    robiR
You're going to be seeing a lot more of these going forward. https://Cognitum.one is another impressive FPGA + rPi4 setup for fast, ultra-low-power local AI agents.
  • Backups redundant?

    backups
    8
    3 Votes
    8 Posts
    146 Views
    robiR
@girish said in Backups redundant?: "@robi said in Backups redundant?: 'Made me ask: to what would one restore this, if not Cloudron (it's kind of special!)?' There is no standardized format for these backups; it would have been great if there was one." One of my previous startups did have such a thing, even though the focus was on app migration from any to any.
  • API docs bug

    Solved
    10
    0 Votes
    10 Posts
    143 Views
    J
    @charlesnw said in API docs bug: Are the docs generated from a git repo by chance? Yes, it's public at https://git.cloudron.io/docs/docs . Anyone can contribute!
  • I built a thing using Cloudron - testers wanted

    cloudron hosting help wanted
    31
    3 Votes
    31 Posts
    486 Views
    jamesJ
Hello @3246 This cannot be in the API, since this is a file system setup.
  • Best practices for email security?

    7
    3 Votes
    7 Posts
    347 Views
    C
So, one approach I am using is to have two email servers on separate physical servers: current and archive. Archive is a mirror of current with respect to users: user1@domain.com and user2@domain.com have counterparts user1@archivedomain.com and user2@archivedomain.com. If I am user1, I can have both accounts on all my devices (Thunderbird, iOS, etc.). I then use imapsync (https://imapsync.lamiral.info) to migrate older emails from current to archive. The tool is ugly, but it works incredibly well. And since I only update the archive server once or twice per year, I can back it up less frequently: a backup from today has the same content as one generated six months ago. Another benefit is that I use SOGo EAS for current, and the smaller the mailbox, the better it behaves. Security is not much stronger, other than a smaller blast radius and the need to penetrate two accounts instead of one. If IMAP and POP3 could be disabled on the archive mail server, and 2FA (TOTP, passwordless, etc.) enabled for webmail access, that would be a better archive option.
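    The current-to-archive migration described above might look roughly like this (a hedged sketch only: the hostnames, user names, password files, and the 180-day cutoff are all placeholder assumptions, and --delete1 turns the copy into a move, so test without it first):

    ```python
    import shutil
    import subprocess

    # Build the imapsync invocation: move messages older than ~180 days from
    # the "current" server to the "archive" server. All values are placeholders.
    cmd = [
        "imapsync",
        "--host1", "mail.domain.com",
        "--user1", "user1@domain.com",
        "--passfile1", "current.pw",
        "--host2", "mail.archivedomain.com",
        "--user2", "user1@archivedomain.com",
        "--passfile2", "archive.pw",
        "--ssl1", "--ssl2",
        "--minage", "180",   # only touch messages older than 180 days
        "--delete1",         # remove them from "current" after copying (move, not copy)
    ]

    # Only run if imapsync is actually installed on this machine.
    if shutil.which("imapsync"):
        subprocess.run(cmd, check=True)
    ```

    Run per user (or loop over the user list); scheduling it once or twice a year via cron matches the archive-refresh cadence described above.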
  • Apps for file management/sharing/syncing

    28
    7 Votes
    28 Posts
    995 Views
    stalecontextS
Here you go, @robi @jdaviescoates: https://git.cathedral.gg/Ben/copyparty-cloudron-app If you want to wait for my frontend fork, here's what it's looking like.
  • Do insults work with AI dev assistants ?

    6
    2 Votes
    6 Posts
    315 Views
    SansGuidonS
With a Mistral subscription I get almost unlimited calls to the Mistral API at no extra cost, just by enabling the experiment mode in their subscription, because there is no way I want to pay per-token.
  • WorkOS with Cloudron

    Moved
    2
    0 Votes
    2 Posts
    125 Views
    jamesJ
Hello @mdc773 I have changed the title from "Ron" to "Cloudron", since I was a bit confused and assumed it meant Cloudron.
  • No reboot button in reboot notification

    reboot notifications
    8
    6 Votes
    8 Posts
    271 Views
    nebulonN
Most likely 9.1, as we are done with 9.0 patch releases.
  • Improve user listing

    5
    1 Vote
    5 Posts
    134 Views
    jdaviescoatesJ
    Excellent, thanks @nebulon
  • CCAI-P - JSON.parse error

    Moved ccai
    7
    0 Votes
    7 Posts
    192 Views
    humptydumptyH
@timconsidine No worries, and it's not that big of a deal, really. I just wanted to mention it to help with troubleshooting, and so others aren't discouraged from trying CCAI(P). I believe the staff wouldn't have considered this feature for 9.1 if it weren't for the success of CCAI. Kudos!