Cloudron Forum

jadudm (@jadudm)

Posts: 112 · Topics: 21 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • Why does Cloudron set 777 permissions for SSHFS?

    Ah. I see.

    My apologies. I am very used to being the same user on both the host and the target system. And, I'm thinking in terms of scp or sftp, not an SSHFS mount. The difference matters a great deal; your answer is clear, and I see why I was confused/wrong.

    My fog of confusion wafts away in the light of illumination. 🙏 Thank you.

    Support backup sshfs security

  • Long backups, local and remote, failing consistently

    (I have a suspicion that this is a variation on this post from a while back.)

    I have configured backups as follows:

    backup set                encr?  target       day    time   files  size
    bitwarden                 Y      storage box  daily  20:00   800    7MB
    photos                    N      storage box  S      03:00   300K   200GB
    photos                    N      NAS          Su     03:00   300K   200GB
    full (-music, -photos)    Y      NAS          MWF    03:00   18K    12GB
    music                     N      NAS          T      03:00   ?      600GB

    What I'm finding is that my Immich (photos) instance does not want to back up. To be more precise: Immich consistently fails a long way into the backup. In both cases, whether it is talking to a storage box (overseas, for me) or to my local NAS, the target is configured as an SSHFS mount. In each location I have set up a folder called $HOME/backups and used a subpath for each backup (e.g. photos, so that the full path becomes $HOME/backups/photos, $HOME/backups/vaults, etc.). In all cases, I'm using rsync with hardlinks.
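
    (For reference, the mechanics of one of those backup points look roughly like the following. This is a minimal sketch, not my actual configuration; the host, key, and snapshot paths are placeholders, and Cloudron performs the equivalent steps itself.)

        # Mount the remote backup folder over SSHFS (hypothetical host/paths)
        sshfs backupuser@storagebox.example.com:/home/backupuser/backups/photos \
              /mnt/backup-photos -o IdentityFile=/root/.ssh/backup_key

        # Incremental snapshot with hardlinks against the previous run,
        # which is what "rsync with hardlinks" amounts to
        rsync -a --delete \
              --link-dest=/mnt/backup-photos/2025-01-01-030000 \
              /path/to/app/data/ \
              /mnt/backup-photos/2025-01-02-030000/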

    I removed the photos (a large set with many files) and the music from the full backup set, because I want to target them separately for backup. And, I want to make sure my full backup completes.

    I can back up the bitwarden instance, because it is small. I have not yet seen the photos backup complete: I end up somewhere around 290K files, and then an SSH error drops the connection. I don't know what the root cause is. (And, I'm now waiting for another backup, because Immich kicked off an update... so, I have to wait.)

    I'll update this thread if/when it fails again. Possible root causes (that would be difficult for me to work around):

    1. Too many files. I would think rsync would have no problems.
    2. Files changing. Immich likes to touch things. Is it paused during backup? If not, could that be the problem? (There are tempfiles created as part of its processing; could those be included in the file set, then get processed/deleted before the backup reaches them, breaking the backup? But, pausing apps during backups is disruptive/not appropriate for a live system, so that's not actually a solution path. Ignore me.)
    3. Not enough RAM. Do I need to give the backup process more RAM?

    The NAS is a TrueNAS (therefore Debian) machine sitting next to the Cloudron host. Neither seems to be under any kind of RAM pressure that I can see. Neither is doing anything else of substance while the backups are happening.

    Unrelated: I do not know what happens when Immich updates, because I am targeting it with two backup points. Does that mean an app update will trigger a backup to both locations? Will it do so sequentially, or simultaneously?

    possible other solutions

    I would like the SSHFS backup to "just work." But, I'm aware of the complexity of the systems involved.

    Other solutions I could consider:

    1. Use object storage. I don't like this one. When using rsync with many files, I discovered that (on B2) I could end up paying a lot for transactions if I had a frequent backup, because rsync likes to touch so many things. This was the point of getting the NAS.
    2. Run my own object storage on the NAS. I really don't want to do that. And, it doesn't solve my off-site photos backup.
    3. Introduce JuiceFS on the Cloudron host. I could put JuiceFS on the Cloudron host. I dislike this for all of the obvious reasons. But, it would let me set up an SSHFS mount to my remote host, and Cloudron/rsync would think it was a local filesystem. This might only be pushing the problems downwards, though.
    4. Back up locally, and rsync the backup. I think I have the disk space for this. This is probably my most robust answer, but it is... annoying. It means I have to set up a secondary layer of rsync processes. On the other hand, I have confidence that if I set up a local volume, the Cloudron backup will "just work."
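
    If I go that way, the secondary layer is not much more than a scheduled rsync of the finished local backup. A sketch, with made-up paths and schedule (restic or similar would work just as well):

        # /etc/cron.d/offsite-sync (hypothetical): push the local Cloudron
        # backup directory to the NAS nightly, after Cloudron's own run
        30 4 * * * root rsync -aH --delete /mnt/cloudron-backups/ backupuser@nas.example.lan:/mnt/pool/cloudron-backups/

    The -H flag matters there: without it, the hardlinked snapshots would be expanded into full copies on the far side.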

    Ultimately, I'm trying to figure out how to reliably back things up. I think #4 is my best bet.

    Support backup sshfs rsync

  • Why does Cloudron set 777 permissions for SSHFS?

    Hi @james ,

    Fair enough. To be clear:

    If you make the mistake of using $HOME for the target directory, then the behavior of allow_other changes the permissions on $HOME to 777. The .ssh directory must exist under a home directory that is either 755, 751, or 750. (It probably can be something else...) Point being, "fool me twice," I have made this mistake on more than one system, and wondered why it is so hard to set up an SSHFS mountpoint. It is because it works once, and then not a second time, because the home directory permissions have changed, "breaking" SSH on the target system.
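
    (For anyone who trips over the same thing: recovery on the target host is just putting sane permissions back, roughly as below. The username and paths are examples, not anything Cloudron-specific.)

        # On the remote/target machine, after a mount has flipped $HOME to 777
        chmod 755 /home/backupuser                      # 750 or 751 also work
        chmod 700 /home/backupuser/.ssh
        chmod 600 /home/backupuser/.ssh/authorized_keys
        # sshd refuses publickey auth while the home directory is world-writable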

    Perhaps this is clearly described in the SSH mount docs, and I missed it, but it is a silent/invisible source of confusion when setting up SSHFS mountpoints.

    (An aside: I still don't know why any user other than the user I assign would need to access the mountpoint: I provide a private key and a username. Only that user should be able to carry out the SSHFS mount, and all of the writes should happen as that user. Why would I ever need some other user to be able to read my backups on a remote system?)

    We can re-close this as solved, because I more clearly understand Cloudron's behavior. Because two things can be true, I understand the behavior, and I still think it is incorrect: if I provide a private key and username, that is the user I expect all operations to happen as, and I do not expect permissions to be set so that any user of the remote system can read the files. But, expectations are tantamount to assumptions. 😃

    Support backup sshfs security

  • Why does Cloudron set 777 permissions for SSHFS?

    I'm struggling with this problem as well.

    I'm finding that when I try to set up an SSHFS mount with my TrueNAS box...

    1. Assuming the user is cloudback
    2. The path is /home/pool/dataset/cloudback
    3. I set my backup path to /home/pool/dataset/cloudback and my prefix to full

    Cloudron always changes the permissions on the directory /home/pool/dataset/cloudback to 777. This seems... grossly insecure. And, worse, it breaks SSH, because you can't have a filesystem above the .ssh directory with permissions that open.

    However, I also find that if I set the path deeper into the account (with no prefix), I avoid the permissions issue, and instead, I get backups that hang/lock, especially on Immich. (That could be unrelated.)

    My single biggest question is why is Cloudron setting perms to 777 anywhere?

    I'm trying again by creating a directory in the homedir, and using that as my base path. Then, within that, I'm using the "path" option to create subfolders. I don't have a reason I think this might help, but given comments above, I'm trying it. 🤷

    Support backup sshfs security

  • LF Kanban recommendations/experiences? (non-wekan)

    I've eyed both

    https://github.com/kanbn/kan

    and

    https://planka.app/community#strategy

    as being potentially friendly to Cloudron packaging. kan.bn in particular just wants a Postgres database, which Cloudron happily provides.

    Off-topic

  • Backuping to my Synology NAS

    I don't have a Synology, but can you use the Task Manager to run an rsync command periodically to pull copies of your backups from the VPS to your local NAS?
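
    A pull of that first kind might be as small as this one-liner, scheduled from the NAS side (the host and paths here are placeholders, and the Cloudron backup location depends on how you configured it):

        # Pull the Cloudron backup directory from the VPS down to the NAS
        rsync -aH --delete cloudron@vps.example.com:/var/backups/cloudron/ /volume1/backups/cloudron/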

    Or, if you're backing up to an object store from the VPS, you could use Cloud Sync to sync the S3-based backups to your local NAS?

    These are two ideas that come from some googling. Perhaps they'll inspire additional thoughts.

    Discuss backup synology

  • Garage, an open-source distributed storage service you can self-host to fullfill many needs

    Nevermind. It was temporary. I retried later. 🙂

    App Wishlist

  • Garage, an open-source distributed storage service you can self-host to fullfill many needs

    @timconsidine , I'd like to look at combining your package and mine. Should https://git.cloudron.io/timconsidine/cloudron-garages3-ui be public? It just says Retry later.

    App Wishlist

  • Garage, an open-source distributed storage service you can self-host to fullfill many needs

    This is my new favorite sequence of posts in this forum. Thank you, @robi , for the assist. 🙂

    App Wishlist

  • Garage, an open-source distributed storage service you can self-host to fullfill many needs

    @timconsidine , I'll look at Voltron-ing the two repositories together.

    App Wishlist

  • Best Backup Technology + Small UI Wish for Separating Technologies vs. Providers

    Everything I'm about to say is independent of your actual needs and what you want to achieve. Are you hosting for others? Do you need to be able to restore within hours? Then some of what I say is not for you. If you're instead looking for some "I messed up, everything is gone, and I need a way to recover, even if it takes me a few days," then some of what I say will be more applicable.

    I have generally built my backups in tiers, for redundancy and disaster recovery.

    1. I would back up the first tier to something close and "hot." That is, use rsync or a direct mount to your instance's local disk.
    2. I would consider an SSHFS-mounted disk a second option, if #1 does not have enough space.
    3. I would have a cron that then backs up your backup. If you back up 3x/day using Cloudron (using hardlinks, to preserve space), I would then do a daily copy to "somewhere else." That could be via restic/duplicati/etc. as you prefer.
    4. I would do a weekly dump to S3 (again, via cron if possible), and consider doing that as a single large file (if feasible), or multiple smaller files. Those could be straight tar files, or tar.gz if you think the data is compressible. Set up a lifecycle rule to move it quickly (one day?) to Glacier if you're thinking about cost.
    5. At the end of the month, keep only one monthly copy in Glacier. I'm not sure what the deletion costs would be if you delete that quickly, so some thought may need to be given here.
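
    As a concrete shape for tiers 3 and 4, with entirely hypothetical paths and bucket names:

        # /etc/cron.d/backup-tiers (sketch only)
        # Tier 3: nightly copy of the local Cloudron backups to a second box
        15 4 * * * root rsync -aH --delete /mnt/backups/ nas.example.lan:/pool/cloudron-backups/
        # Tier 4: weekly tarball streamed to S3; a bucket lifecycle rule can
        # then move objects to Glacier after a day if cost is the concern
        0 5 * * 0 root tar -czf - /mnt/backups/latest | aws s3 cp - s3://my-backup-bucket/weekly/cloudron-$(date +\%F).tar.gz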

    That's perhaps a lot, but it depends on your data/what you're doing.

    You could also go the other way: if you think your cloud backup costs will be too high, you could do the following:

    1. Pick up a $300 NAS and a pair of 8-16TB hard drives
    2. Install TrueNAS on it, and put the disks in a ZFS Mirror
    3. Set up a cron on the NAS to pull down your Cloudron backups on a periodic (daily/weekly) basis. restic or similar will be your friend here.
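
    If restic is the tool, one way to keep the "pull from the NAS" model is to rsync the backups down first and then snapshot them locally. A sketch, with invented hosts and paths (the repository needs a one-time restic init, plus a password via RESTIC_PASSWORD_FILE or similar):

        # On the NAS: pull the latest Cloudron backups, then snapshot them
        rsync -aH --delete cloudron@host.example.com:/var/backups/cloudron/ /mnt/tank/cloudron-pull/
        restic -r /mnt/tank/cloudron-restic backup /mnt/tank/cloudron-pull
        restic -r /mnt/tank/cloudron-restic forget --keep-daily 7 --keep-weekly 4 --prune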

    That's the... $800 or so solution, but you would weigh that cost against how much you're going to be paying in cloud storage. (That is, if you decide you're going to be paying $200+/year for backups, perhaps the NAS is going to start to look attractive.) The incremental backups should get smaller once you get the initial pull done (in terms of size to pull down).

    A version of the NAS approach is one where you buy a single external drive, run your backups locally, and pray the drive doesn't die underneath you, or worse, die right when you have to do a restore. I would personally chuck a single drive out the window, but some people love to gamble.

    Recovery from the offline backup will be annoying/painful. You'd have to upload it, and then configure your restore to point at it. However, it would be your "last ditch" recovery approach. This would be to my opening point: your backups are dictated, in no small part, by your budget and needs.

    If you have money to spare, use direct- or SSHFS-mounted disk, and just back up to it. If you are looking for some savings, you can price out S3-based storage (B2 tends to be cheapest, I think, but don't forget to estimate how many operations your backup will need; those API calls can get expensive if you have enough small objects in your backup). Moving to Glacier is possible if you use AWS, and it is significantly cheaper per TB. Having at least one disconnected backup (and a sequence of them) matters in the event of things like ransomware-style attacks (if that is a threat vector for you). Ultimately, each layer adds cost, complexity, and time to data recovery.

    Finally, remember: your backups are only as good as your recovery procedures and testing. If you never test your backups, you might discover you did something wrong all along, and have been wasting time from the beginning. I find Cloudron's backups to be remarkably robust, and was surprised (pleasantly!) by a recent restore. But, if you mangle backups via cron, etc., then you're just spending a lot of money moving zeros and ones around...

    Discuss backups

  • Garage packaging status, next steps

    Hi @timconsidine ,

    Good question. I didn't push further, given that @girish suggested this might be positioned to be an addon.

    https://forum.cloudron.io/post/116655

    @girish , do you think I should finish this as an app package, or do you think this is something that will land in the roadmap? Or, as we say, "two things can be true?"

    @timconsidine , I'm happy to bring this app to completion. Or, perhaps, contribute work to the core. I guess I'd look to the product team to provide some guidance.

    App Packaging & Development

  • Vaultwarden fails to start after update – DB migration error (SSO)

    The fix that @inibudi posted worked for me.

    Vaultwarden

  • Vaultwarden fails to start after update – DB migration error (SSO)

    Restoring the previous version from backup worked for me, and I then disabled automatic updates.

    Vaultwarden

  • Kudos regarding a great restore/migration experience

    Description

    I just migrated my entire Cloudron instance, and there were no problems.

    You can, and should, close this ticket after basking in your kudos. 👏 🍿 🍰

    Steps to reproduce

    1. I read the documentation for backing up and restoring Cloudron.
    2. I followed the directions.
    3. I migrated my instance from one machine to another.

    Specifically, I'm locally hosting, and was very impressed with how seamless the process was. My backups were via SSHFS mount, so when I brought up my new machine, and uploaded the backup config, it happily mounted the backup and began the restore process.

    I used the /etc/hosts trick to point to the new machine on the internal network, and when everything came up, I told my router to point to the new host as opposed to the old.
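
    (For anyone curious, the /etc/hosts "trick" is just a temporary entry on the machine you're testing from, pointing your Cloudron domains at the new box while public DNS/the router still points at the old one. The IP and domains below are made up.)

        # /etc/hosts on my laptop during the cutover
        192.168.1.50   my.example.com app1.example.com app2.example.com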

    Absolutely wonderful. Thank you.

    Logs

    YOUR_LOGS_GO_HERE
    

    Not this time!

    The Absence Of Troubleshooting Already Performed

    Just wanted to say how grateful I am for the work that goes into this. Thank you.

    System Details

    I moved from a Dell 7040MFF (bare metal) to a UGREEN NAS 2800.

    1. I put Proxmox on the NAS bare metal
    2. Built a VM on the NAS NVMe boot drive (this became my new Cloudron host)
    3. I put a pair of 8TB drives in ZFS RAID1 (7TB usable), and mounted a 6TB virtual disk living on that ZFS filesystem to the VM as /home

    This should put all of my Cloudron app data on the ZFS array. I backup over SSH to another machine with a similarly sized ZFS mirror.

    Cloudron Version

    9.0.15

    Ubuntu Version

    24.04.3

    Cloudron installation method

    Manual on a 24.04 Ubuntu Server VM.

    Discuss

  • Minio now in maintenance mode

    I did some explorations in packaging Garage, and I suspect the need for object storage on Cloudron with the fading of Minio is on the core team's radar:

    • https://forum.cloudron.io/topic/6461/garage-an-open-source-distributed-storage-service-you-can-self-host-to-fullfill-many-needs/16
    • https://forum.cloudron.io/topic/14689/garage-packaging-status-next-steps

    Like timka said, I didn't poke RustFS for reliability/newness reasons, and was reasonably pleased that Garage fits a similar niche/seems to fit the Cloudron model (self-hosting/limited-scale hosting).

    Minio

  • Garage, an open-source distributed storage service you can self-host to fullfill many needs

    @girish Glad to serve.

    If the goal is an addon, then...

    1. I think the backup piece should be straightforward? (I have some more info in the packaging thread about this.) Essentially, you want to make sure the SQLite DB is backed up, and then you back up the filesystem. I have questions about what happens if (say) a restore happens (e.g. what if the filesystem path changes?), but those things can be explored.
    2. There is an administrative API that (once you generate a secret/trusted admin API token) lets you do everything administratively via that API (bucket creation, etc.).
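
    To give a flavor of what that looks like day to day, the same operations are also available through the garage CLI inside the container. A sketch only; exact subcommand names have shifted a little between Garage releases, and the bucket/key names here are invented.

        garage status                                # cluster/node overview
        garage bucket create my-bucket
        garage key create my-app-key                 # "key new" on older releases
        garage bucket allow --read --write my-bucket --key my-app-key
        garage bucket website --allow my-bucket      # serve the bucket as a static site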

    As to a web interface, I would not recommend you create one for end-users. (I suspect this is not what you mean.) You have NextCloud, XBackBone, and other apps that can talk to an S3-compatible backend for file browsing. What people might need/want is a way to:

    • Create/remove buckets (which, on the backend, you'd use your secret admin key)
    • Create/remove keys and attach them to buckets
    • Create/remove administrative keys (for superusers who want to script things against the backend)
    • Bonus: the ability to designate a bucket as a static site, and then you do the DNS work on the backend to point either a subsubdomain at it (e.g. site.s3.example.com) or a whole new domain (e.g. someothersite.com -> site.s3.example.com)

    I suspect you could iterate towards this, if you wanted to. Release it with terminal-only management to start, and work towards an admin interface for common bucket creation/removal-type tasks.

    There are things that Garage does not do (e.g. lifecycle rules), so this is not a "full" S3 clone (the way Minio aspired to be). In this regard, SeaweedFS might offer more features (and a built-in admin UI)... so, it might be worth doing a spike to explore it as well. At a glance, it is similar, but it also is intended to scale (e.g. it does Raft consensus and Reed-Solomon EC if you want it) in a way that Garage does not. This might also be a reason to not use Seaweed.

    I can poke it with a stick if it would be valuable.

    Update, a few hours later: I would recommend sticking with Garage.

    App Wishlist

  • Garage packaging status, next steps

    Hi @scooke ,

    To your question, I followed the instructions on the Garage website, and I added a near-copy of that documentation to the README in the package I've developed. This is all documented in the git repository for the package that I linked to. I have Garage running on my Cloudron instance, and was able to create buckets, add content, and even serve static web content from within those buckets. I'm sorry your prior experience with this did not work for you.

    I am now trying to do that in a way that it would be considered for inclusion as a standard package, if the Cloudron team thinks it is worthwhile. If they don't, perhaps I'll do it for myself. 🤷

    App Packaging & Development

  • Garage packaging status, next steps

    Not much I can do with that statement. I have it packaged and working. I'm now in the weeds of how best to handle data backup and restore on Cloudron. Given that the Minio package must go away, this is at least a possibility that can be evaluated.

    App Packaging & Development

  • Garage packaging status, next steps

    Hm. Maybe I am wrong about the URL rewrites. I'll sleep on it.

    Because Garage puts the different APIs/functions on different ports... you might be right, @robi.

    I still think it would be nice to have aliases exposed in the manifest.

    App Packaging & Development