Cloudron Forum

jadudm (@jadudm)

Posts 131 · Topics 22 · Shares 0 · Groups 0 · Followers 0 · Following 0

Posts


  • file upload broken

    There's no log on the server side. The JS yields this message in the console:

    {
        "code": "E_UNPROCESSABLE_ENTITY",
        "message": "EROFS: read-only file system, mkdir '/app/code/.tmp'"
    }
    

    That happens when I drag and drop a file into a card, and the same if I use the "Attachment" feature to attach a doc to a card.

    Planka

  • First steps / discoveries

    For anyone poking the package...

    I was unsure how to change the password for the default user. Some digging around yielded the following...

    1. I had to generate a password hash. Log into the terminal for the app and run:
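    # -b: password on the command line; -n: print instead of updating a file;
    # -B: bcrypt; -C 10: cost factor. The sed swaps htpasswd's $2y prefix
    # for $2a, the variant Node bcrypt implementations generally accept.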
    htpasswd -bnBC 10 "" PASSWORD | tr -d ":\n" | sed 's/$2y/$2a/' && echo
    

    replacing the word PASSWORD with your password. I used a long combination of words and numbers with no spaces. This will output a bcrypt password hash:

    $2a$10$GpIUtD/NDyfpVkQuaVfDde7M5SjcHdcmN9e49kFlgoeVsmMnrQ0wm
    

    or similar. (Don't use that one.)

    2. Log into the terminal, and open up the Postgres prompt.
    update user_account set password = '$2a$10$GpIUtD/NDyfpVk...' where email = 'admin@cloudron.local';
    

    Again, put your password hash between the single quotes.
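    If it helps anyone: Cloudron's PostgreSQL addon exposes a connection URL in the app's environment, so from the app's Web Terminal the whole step can be done in one go. A sketch (the hash is a placeholder; the quoted heredoc stops the shell from expanding the $ characters in it):

    psql "$CLOUDRON_POSTGRESQL_URL" <<'SQL'
    UPDATE user_account SET password = '$2a$10$REPLACE_WITH_YOUR_HASH' WHERE email = 'admin@cloudron.local';
    SQL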

    Now, you can log in as the administrative user with the password you hashed.

    3. Log in using your SSO/Cloudron user.

    I think a package update might be needed. The OIDC users do not have any privileges out-of-the-box.

    https://github.com/plankanban/planka/issues/661

    It might be that setting OIDC_IGNORE_ROLES would allow OIDC users to create boards. There might be more env vars necessary, but that one seems critical. It seems like the administrative user cannot (out-of-the-box) modify the roles for OIDC users (https://github.com/plankanban/planka/issues/1112). Again, that env var may be part of the puzzle.

    In the meantime, I again opened up the Postgres prompt, and did:

    SELECT * FROM user_account;
    

    I found my username (which was my expected Cloudron username, so it was easy to find/there were no surprises), and then modified that row to give myself more permissions:

    UPDATE user_account SET role = 'projectOwner' WHERE username = 'MYUSERNAME';
    

    This gave me enough permissions to do things.

    My hope is that, with the right env vars set, I would not need to grant my OIDC users permissions via the admin user.
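
    If someone wants to experiment before a package update lands: assuming the package sources extra environment from a user-editable file (many Cloudron packages use /app/data/env.sh; that filename is a guess here, so check this package's docs), the experiment would be roughly:

    # hypothetical file location; verify against the package before relying on it
    export OIDC_IGNORE_ROLES=true

    followed by a restart of the app.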

    Planka

  • SSH remote copy always failed, falling back to sshfs copy

    I'm not sure where in the Box codebase this is, but the SSH backup behavior is strange. (Or, not documented sufficiently clearly for me to make sense of it.) I spent a bunch of time trying to figure this out as well, and ultimately gave up. However, @dummyzam is encountering many of the same kinds of confusion I did.

    Ideally:

    1. The remote directory should be the base for all operations, as far as backup site configuration is concerned.
    2. All operations should use absolute paths, rooted at join(remote dir, prefix). No backup operations should take place outside of that root path on the remote system.

    If those things are true, then it should be the case that given a target/remote directory of (say) /mnt/OnlyZpol/CloudronBackup/ and a prefix of backups, then all operations will be against /mnt/OnlyZpol/CloudronBackup/backups/*.
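
    In pseudo-shell terms, with every name below purely illustrative, the rooted behavior I'd expect is:

    # all remote operations composed from the configured root, never from $HOME
    REMOTE_DIR=/mnt/OnlyZpol/CloudronBackup
    PREFIX=backups
    ROOT="${REMOTE_DIR}/${PREFIX}"
    ssh user@nas.lan cp -aRl "${ROOT}/snapshot/app_xyz" "${ROOT}/2026-02-13-210356-064/app_xyz"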

    If $HOME is used by Box when doing SSHFS backups, then that should be documented somewhere.

    As I learned, the target/remote directory will be set to 777, which can be a problem if the user you're authenticating as lacks permissions, or if you make the mistake of using $HOME as your remote directory (as this can upset the permissions that SSHD expects for .ssh).

    Support backup restore zfs sshfs

  • SSH remote copy always failed, falling back to sshfs copy

    Is Cloudron using a correct/full path when issuing the cp over ssh? I've had this problem, too, and I know I'm not spanning ZFS pools. If we enter a root path into the config, shouldn't Cloudron use the full paths, for correctness/completeness/clarity, when issuing the cp?

    I don't think that helps much, but I've seen the same persistent issues that @dummyzam is pointing to.

    Support backup restore zfs sshfs

  • Garage packaging status, next steps

    It might be in the location block that you would need to add it:

    https://forgejo.tcjc.uk/cca/cloudron-garage-s3/src/commit/d8e09cf79461b108f369f2ec5a87f025a1967e64/nginx.conf#L65

    Which would parallel the fix described here:

    https://forum.cloudron.io/topic/14972/413-content-too-large-on-video-upload-inner-nginx-client_max_body_size-seems-too-low/2?_=1775394530576

    It might even be that @d19dotca can make that change to his own config for local testing. But, I agree, an officially packaged version will likely address this, and be integrated fully with backups, etc.

    App Packaging & Development

  • Garage packaging status, next steps

    @timconsidine Serving as your meat-based AI... 😄

    In theory, it would be an app issue. Because Nginx is serving as a proxy in front of Garage---it is handling the HTTPS and some domain routing to the app itself---the behavior of that HTTP server becomes "a thing." In this case, Nginx has a maximum client body size:

    https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size

    For many apps, this is never an issue; for apps that involve uploading files (Immich, Garage, etc.), clients routinely send large files. While I might think I'm connected to my Garage instance, what I'm actually connected to is Nginx, which is proxying my connection through to Garage in the backend. Therefore, the behavior of the proxy matters, and in this case, it has to do with the filesize limits of the proxy. When doing file uploads, we can easily exceed the per-request filesize limit on the proxy, and Nginx returns a 413 as a result.

    You wouldn't see it if you're using rsync backups and dealing with small files. However, a tarball backup can easily generate a request that comes in at gigabytes in size; as a result, Nginx says "no" and returns a 413.

    Hence @d19dotca's thinking that setting that value in the Nginx config to 0 would likely eliminate the error.
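
    For concreteness, the kind of stanza under discussion looks like the following; this is illustrative, not the package's actual config:

    # illustrative only -- see the package's real nginx.conf linked above
    location / {
        client_max_body_size 0;            # 0 disables nginx's request-size cap
        proxy_pass http://127.0.0.1:3900;  # Garage's documented default S3 API port
    }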

    All of that said, there's also the possible arrival of an officially maintained Garage package, which we might want to move to anyway. However, it is good to have options to experiment with!

    App Packaging & Development

  • Garage packaging status, next steps

    There's a good chance this is an Nginx error; we've seen this before on other packages. There's a limit on the front-side as to how large a file can be passed through an Nginx proxy. For example:

    https://forum.cloudron.io/topic/14972/413-content-too-large-on-video-upload-inner-nginx-client_max_body_size-seems-too-low/2?_=1775394530576

    It might be a similar problem here. Fixable, but it will require an update on the package.

    App Packaging & Development

  • Garage, an open-source distributed storage service you can self-host to fullfill many needs

    Hi @timconsidine , I know I won't have time to do a PR anytime soon, so I'll drop a note here. Huge kudos on bringing the package forward.

    The Garage state is stored entirely in SQLite databases. I can't remember the names of them... there are 2 or 3? So while you've spec'd the directories where they will live, that's only part of what needs to be done with them for a restorable Garage installation on Cloudron.

    https://docs.cloudron.io/packaging/addons/#sqlite

    You'll want to make sure they're explicitly called out in the manifest, so that they get backed up safely.

    If you don't, it is possible that a backup will fail to correctly capture all of the metadata about the Garage instance, and the result could be lost data upon restore. (That is, if a WAL file is not flushed, then the standard backup might capture the metadata DB in an inconsistent state, and if someone had to restore, they would have a corrupt and unrecoverable Garage installation.)
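
    If you want a belt-and-suspenders check regardless of the manifest, SQLite's online-backup dot-command produces a consistent copy even with WAL enabled. The paths here are invented; point it at wherever Garage actually keeps its databases:

    # .backup uses SQLite's backup API, so the copy stays consistent mid-write
    sqlite3 /app/data/garage/db.sqlite ".backup '/tmp/garage-db.bak'"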

    App Wishlist

  • Struggling to Replace MinIO - Advice Welcome!

    Depending on your appetite for loss, I would consider backups-in-depth. That is, one backup site is not a backup.

    1. Use rsync-based backup over SSHFS to Hetzner or similar. You will want to select "use hardlinks" and, if you want it, encryption. The use of hardlinks is, essentially, your de-duplication. (See below.)
    2. For a second layer of depth, I would consider a (daily? weekly? monthly?) backup of your primary backup site to a secondary. This could be a sync to AWS S3, for example. Note that any S3-based backup (B2, Cloudflare ObjectSomething, etc.) will have both a storage cost and an API cost. If you are dealing with millions of small files in your backups, the API costs will become real, because dedupe requires checking each object, and then possibly transferring it (multiple PUT/GET requests per file).
      1. S3 has the ability to automatically keep multiple versions of a file. You could use this to have an in-place rotation/update of files.
      2. If you are doing an S3 backup, you can use lifecycle rules to automatically move your S3 content to Glacier. This is much cheaper than "hot" S3 storage. But, you pay a penalty if you download/delete too early or too often. (See the sketch after this list.)
    3. As a third, cheap-ish option, go get a 2- or 4-bay NAS that can run TrueNAS, and put a pair of 8-12TB HDDs in it. Configure the disks in a ZFS mirrored pair. Run a cron job once per day/week to pull down the contents of the Hetzner box. (Your cron will want to, again, use rsync with hardlinks.) You now have a local machine mirroring your hot backups. It is arguably more expensive than some other options (~600USD up front), but you don't have any "we might run out of space" issues. And, because you're using it to pull, you don't have any weird networking problems: just SCP the data down. (Or, rsync it down over SSH.)
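
    The Glacier sketch mentioned in 2.2, with an invented bucket name (the aws s3api subcommand itself is real):

    # transition objects to Glacier 30 days after creation
    aws s3api put-bucket-lifecycle-configuration --bucket my-cloudron-backups \
      --lifecycle-configuration '{"Rules":[{"ID":"to-glacier","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":30,"StorageClass":"GLACIER"}]}]}'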

    Whatever you are doing, consider targeting two different destinations at two different times (per day/alternating/etc.). Or, consider having some combination of backups that give you multiple copies at multiple sites. That could be Hetzner in two regions, with backups run on alternating days, or it could be that you back up to a storage box and pull down a clone every day to a local NAS, or ... or ...

    Ultimately, your 150GB is small. If you're increasing by a few GB per week, you're saying that you are likely to have 1TB/year. Not knowing your company's finances, this is generally considered a small amount of data. Trying to optimize for cost, immediately, is possibly less important than just getting the backups somewhere.

    Other strategies could involve backing up to the NAS locally first, and then using a cron to borg or rsync to a remote host (possibly more annoying to set up), etc. But, you might have more "dedupe" options then. (borg has dedupe built in, I think, but...)

    I have a suspicion that your desire to use object storage might be a red herring. But, again, I don't know your constraints/budget/needs/concerns.


    Deduplication: If you use rsync with hardlinks, then each daily backup will automatically dedupe unchanged files. A hardlink is an additional directory entry pointing at the same underlying file (inode). So, if you upload super_ai_outputs_day_1.md to your storage on Monday, and it remains unchanged for the rest of time, then each subsequent day is going to be a hardlink to that file. It will, for all intents and purposes, take up zero disk space. So, if you are backing up large numbers of small-to-medium sized files that do not change, SSHFS/rsync with hardlinks is going to naturally dedupe your unchanging old data.

    This will not do binary deduplication of different files. So, if you're looking for a backup solution that would (say) identify that two 1GB files share an identical 500MB in the middle, and somehow dedupe that... you need more sophisticated tools and strategies. Rsync/hardlinks just makes sure that the same file, backed up every day, does not take (# days * size) space. It just takes the original size of the file plus a directory entry for each link.
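
    A minimal sketch of that rotation, with invented paths (--link-dest is the rsync flag doing the hardlinking):

    # files unchanged since the previous run become hardlinks, not copies
    TODAY=$(date +%F)
    rsync -a --link-dest=/backups/latest /data/ "/backups/${TODAY}/"
    ln -sfn "/backups/${TODAY}" /backups/latest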

    Note, though, if you copy a snapshot of your hardlinked backups to an object store, each file may take its full size for every day. I'm possibly wrong on that, but I'm not confident that most tools would know what to do with those hardlinks when copying to an object store. I think you'd end up multiplying your disk usage significantly, because your backup tool will have to create a copy of each file in the object store. (Most object stores do not have a notion of symlinks/hardlinks.) An experiment with a subset of the data, or even a few files, will tell you the answer to that question.

    If you have other questions, you can ask here, or DM me.

    Discuss

  • TLS Passthrough option for apps requiring end-to-end TLS

    Consider this an upvote/+1, along with kudos on @marcusquinn 's packaging of NetBird.

    Feature Requests

  • Long backups, local and remote, failing consistently

    Will do, James. I have not been able to recreate the held lock issue. I was starting/stopping jobs a fair bit at one point, and can't... be precise about where in the backup cycle those cancellations happened that a cleanup might not have happened. I will watch for it in the future.

    When I said there was no RAM pressure, I meant that was true for the server. However, my jobs all had 1GB of RAM. Your suggestion clued me in; because that value must be set after you set up the backup job, I had never noticed it before... or, not realized how critical it might be. I have bumped them all to 6GB of RAM, and so far, I've been seeing backup successes.

    Barring the question below, I'd say we could close this issue. The lesson learned is that I need to provide my backup tasks more RAM. Because I have some RAM to spare, I'm going aggressive, and giving things 6GB. I did not attempt to settle on a smaller amount, for anyone who comes along after--- I just gave the tasks a limit that I considered to be "a lot" in this context.

    I still see some things like the errors below. The backup completes successfully, but I'm unclear why there would be errors like these sprinkled throughout the backup. Is the relative path full/snapshot/app_... actually correct? Or, should that be a full path (e.g., the base path I provided at setup time joined with the relative path)? In the command that succeeds, it is a full path.

    Feb 13 16:11:00 box:shell filesystem: ssh -o "StrictHostKeyChecking no" -i /tmp/identity_file_d82bc09e-a419-4d60-84bf-95d631fd0ebb -p 22 user@nas.lan cp -aRl full/snapshot/app_c74efccf-d273-46c9-8afe-3fd427bb78c1 full/2026-02-13-210356-064/app_git.jadud.com_v1.37.4 errored BoxError: ssh exited with code 1 signal null
    Feb 13 16:11:00 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:82:23)
    Feb 13 16:11:00 at ChildProcess.emit (node:events:519:28)
    Feb 13 16:11:00 at maybeClose (node:internal/child_process:1101:16)
    Feb 13 16:11:00 at ChildProcess._handle.onexit (node:internal/child_process:304:5) {
    Feb 13 16:11:00 reason: 'Shell Error',
    Feb 13 16:11:00 details: {},
    Feb 13 16:11:00 stdout: <Buffer >,
    Feb 13 16:11:00 stdoutString: '',
    Feb 13 16:11:00 stdoutLineCount: 0,
    Feb 13 16:11:00 stderr: <Buffer 63 70 3a 20 63 61 6e 6e 6f 74 20 73 74 61 74 20 27 66 75 6c 6c 2f 73 6e 61 70 73 68 6f 74 2f 61 70 70 5f 63 37 34 65 66 63 63 66 2d 64 32 37 33 2d 34 ... 50 more bytes>,
    Feb 13 16:11:00 stderrString: "cp: cannot stat 'full/snapshot/app_c74efccf-d273-46c9-8afe-3fd427bb78c1': No such file or directory\n",
    Feb 13 16:11:00 stderrLineCount: 1,
    Feb 13 16:11:00 code: 1,
    Feb 13 16:11:00 signal: null,
    Feb 13 16:11:00 timedOut: false,
    Feb 13 16:11:00 terminated: false
    Feb 13 16:11:00 }
    Feb 13 16:11:00 box:storage/filesystem SSH remote copy failed, trying sshfs copy
    Feb 13 16:11:00 box:shell filesystem: cp -aRl /mnt/managedbackups/1ec6c6b4-7566-4369-b2ce-466968b00d5d/full/snapshot/app_c74efccf-d273-46c9-8afe-3fd427bb78c1 /mnt/managedbackups/1ec6c6b4-7566-4369-b2ce-466968b00d5d/full/2026-02-13-210356-064/app_git.jadud.com_v1.37.4
    Feb 13 16:11:07 box:backuptask copy: copied successfully to 2026-02-13-210356-064/app_git.jadud.com_v1.37.4. Took 7.889 seconds
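
    For anyone else reading: over SSH, a remote command runs in the login user's $HOME, so the failing cp above resolves full/snapshot/... against $HOME. A sketch of the difference, with an invented remote base path:

    # relative: only works if $HOME happens to be the backup root
    ssh user@nas.lan cp -aRl full/snapshot/app_xyz full/2026-02-13-210356-064/app_xyz
    # absolute: unambiguous wherever the SSH session lands
    ssh user@nas.lan cp -aRl /mnt/backups/full/snapshot/app_xyz /mnt/backups/full/2026-02-13-210356-064/app_xyz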
    
    Support backup sshfs rsync

  • Long backups, local and remote, failing consistently

    OK. Solution so far:

    1. I removed all backup sites and rebooted. (There's a question at the end.)
    2. I added a CIFS point (instead of SSHFS) to the local NAS.
    3. Gave the backup 5GB of RAM, and set the concurrency to 100.
    4. Waited an hour or two. Two? What is time.

    The backup for Immich succeeded.

    I may try an SSHFS backup with similar parameters, but I'll... be limited on the storage box with regards to concurrency. So, we'll see.

    QUESTION: I have noticed when app backups fail, there's sometimes a stale lock. Where is that lock? I would like to be able to remove the lock without having to reboot. Is it in the DB? A file? Where does Box keep those app backup locks?

    I'm not convinced I've solved my problem, but I'm starting to think the RAM for the backup(s) may matter, which I had never encountered before.

    Support backup sshfs rsync

  • Long backups, local and remote, failing consistently

    Interesting. I think I had missed that setting before.

    I tried two things, but now need to head to work.

    I created an SMB share on the NAS. I was able to establish a backup site... and, I just re-created an SSHFS mount per above, and gave it 6GB of RAM.

    Feb 11 09:16:30 box:taskworker Starting task 9902. Logs are at /home/yellowtent/platformdata/logs/tasks/9902.log
    Feb 11 09:16:30 box:taskworker Running task of type backup
    Feb 11 09:16:30 box:backuptask fullBackup: skipped backup ...
    Feb 11 09:16:30 box:tasks updating task 9902 with: {"percent":66.38461538461539,"message":"Backing up photos.jadud.com (17/23). Waiting for lock"}
    Feb 11 09:16:30 box:locks write: current locks: {"full_backup_task_846414c7-0abc-4ae1-8432-2430e5008342":null,"app_backup_a6dc2056-829f-46c4-bf31-7a93cba4af11":"9902"}
    Feb 11 09:16:30 box:locks acquire: app_backup_a6dc2056-829f-46c4-bf31-7a93cba4af11
    Feb 11 09:16:30 box:backuptask fullBackup: app photos.jadud.com backup finished. Took 0.002 seconds
    Feb 11 09:16:30 box:locks write: current locks: {"full_backup_task_846414c7-0abc-4ae1-8432-2430e5008342":null}
    Feb 11 09:16:30 box:locks release: app_backup_a6dc2056-829f-46c4-bf31-7a93cba4af11
    Feb 11 09:16:30 box:backuptask fullBackup: skipped backup ...
    Feb 11 09:16:30 box:tasks setCompleted - 9902: {"result":[],"error":null,"percent":100}
    Feb 11 09:16:30 box:tasks updating task 9902 with: {"completed":true,"result":[],"error":null,"percent":100}
    Feb 11 09:16:30 box:taskworker Task took 0.066 seconds
    Feb 11 09:16:30 Exiting with code 0
    

    If I try and kick off the backup, it starts up and exits immediately. Is there a lock floating somewhere? (Is that the full backup task lock?)

    No backups are running that I can see, but this is now a new behavior. I have rebooted the machine, and this does not change.

    No doubt, I've created this problem through my iterations.

    Support backup sshfs rsync

  • Long backups, local and remote, failing consistently

    Good questions. The configuration locally is that the machines all live behind an OpnSense router. Cloudron is hosted on a VM on a small machine (and has 24GB of RAM allocated to it, and does not show signs of RAM pressure), and the NAS itself is running TrueNAS w/ 40GB of RAM available (it is never under RAM pressure, as far as I can tell).

    cloudron.lan -> switch -> nas.lan

    Both machines are local. The cables could be poor; I can check. This is why I think the SSHFS failure on the Cloudron -> NAS connection is so worrying; there's no good reason why it should fail, from what I can tell.

    I can... understand that the SSHFS backup to the storage box might be troublesome, given the distances involved. The local connection, though, should "just work."

    I'll dig more into possible memory issues.

    Support backup sshfs rsync

  • Long backups, local and remote, failing consistently

    @james, do you have any thoughts?

    I had to reboot the server for updates yesterday; as a result, the Immich app is (again) trying to back up. It is now 14K files into another attempt. I have every belief that it will fail some 250K files into the backup.

    Do any of the strategies I've brainstormed sound better than the others from y'alls perspective?

    We can leave this thread open as I explore, but I think the answer is "I can't back up my photos by simply adding an SSHFS backup location." I apparently have to solve this some other way.

    Support backup sshfs rsync

  • Long backups, local and remote, failing consistently

    I could also use the fstab to mount an SSHFS filesystem to the remotes, and let Cloudron back up via filesystem there. This would move the management of the mount out of the hands of Cloudron, and into the hands of the OS.

    I don't know if that would help.
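
    For the record, the sort of entry I mean; host, user, and mountpoint are invented:

    # /etc/fstab -- mount the NAS over SSHFS at boot; Cloudron then sees a plain filesystem
    user@nas.lan:/backups  /mnt/nas-backups  fuse.sshfs  _netdev,allow_other,IdentityFile=/root/.ssh/id_ed25519  0  0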

    Support backup sshfs rsync

  • Long backups, local and remote, failing consistently

    The Immich (photos) backup ended as follows.

    Feb 10 03:11:21 box:backupformat/rsync sync: adding data/upload/upload/d354571e-1804-4798-bd79-e29690172c14/d9/d7/d9d762ae-5a69-461d-9387-84882f110276.jpg.xmp position 227458 try 1
    Feb 10 03:11:21 box:backupformat/rsync sync: processing task: {"operation":"add","path":"data/upload/upload/d354571e-1804-4798-bd79-e29690172c14/d9/d7/d9d762ae-5a69-461d-9387-84882f110276.jpg.xmp","reason":"new","position":227458}
    Feb 10 03:11:21 Exiting with code 70
    Feb 10 03:11:21 box:taskworker Terminated
    Feb 10 05:03:04 13:M 10 Feb 2026 10:03:04.004 * 10 changes in 300 seconds. Saving...
    Feb 10 05:03:04 13:M 10 Feb 2026 10:03:04.004 * Background saving started by pid 298
    

    I do not know for certain if this was the local or remote backup. Locally, the snapshot folder is dated Feb 9 03:13; remotely, Feb 9 02:35. Those... appear to be the creation times, per ls -ac.

    According to the logs, my music backup ran Tuesday at 3AM and completed in 1m30s or thereabouts. So, it took place 10m before this failure. The music backup would be against the NAS.

    Immich still wants to update.

    Are there any thoughts as to what I should consider doing to get to a successful backup of my photos?

    Absent a way for Cloudron to successfully back up Immich, I feel like the following are my options:

    1. JuiceFS would probably let rsync complete and support hardlinks. I would create an SSHFS mount via Juice from a folder on my localhost -> the target system. Then, I would mount that folder as a local volume (filesystem). As far as Cloudron would be concerned, it would be a regular filesystem. Downside? It's a moving piece in-between me and my files, and a point for data loss.
    2. I could use object storage, but I'm concerned about operation costs. An rsync -> object store approach with this many files means... probably hundreds of thousands of API calls for every backup. Depending on the provider, that ends up costing.
    3. Use tar? I feel that a tarball is really inefficient, since the photos don't change often/at all.
    4. Back up locally and rsync the backup. This would eat disk, but I have space to spare on the Cloudron host; it runs on a mirrored 8TB pair. If I keep three backups (monthly), I would end up with nearly a TB of data, but I could rsync that to the NAS and remote. The rotation would happen locally, I'd get off-site and local backups, and the cost would be that each photo takes 4x the space (original + 3 copies on the local filesystem for rsync rotation). The shipping step is sketched below.
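
    That shipping step could be a single cron entry (paths and host invented); rsync's -H preserves the hardlinks that make the rotation cheap on the far side:

    # crontab: push the local backup tree to the NAS nightly at 04:00
    0 4 * * * rsync -aH --delete /mnt/backups/ backup@nas.lan:/tank/cloudron-backups/
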
    Support backup sshfs rsync

  • Update on community packages

    I would ask, for simplicity, that you require the developer to put the JSON in a fixed/predictable path, and allow the user to paste the URL for the main GH repo. Asking users to find the "raw" link is likely hard/confusing. Put the onus on the person packaging, not the person installing?

    App Packaging & Development

  • Long backups, local and remote, failing consistently

    I'm 140k into another run. Took all day... will bump thread with results when there are results...

    Support backup sshfs rsync

  • Why does Cloudron set 777 permissions for SSHFS?

    Ah. I see.

    My apologies. I am very used to being the same user on both the host and the target system. And, I'm thinking in terms of scp or sftp, not an SSHFS mount. The difference matters a great deal; your answer is clear, and I see why I was confused/wrong.

    My fog of confusion wafts away in the light of illumination. 🙏 Thank you.

    Support backup sshfs security