

Cloudron Forum


davejgreen (@davejgreen)
Posts: 16 · Topics: 4 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • Help with migrating Cloudron to a new server
    davejgreen

    I tried again with a fresh install of Ubuntu 24.04 (this wipes all data on the server, so there is nothing left over from previous attempts). I checked that UID 1000 was available, installed Cloudron v9.0.17 (following everything to the letter), but I still ran into exactly the same problem. The new installation gave yellowtent UID 808, which does not match yellowtent's UID on the existing server (1000); that mismatch is the sole cause of the problem.

    The existing server was installed long before I worked here, so I don't know why yellowtent has UID 1000 there, while fresh Cloudron installs seem to be giving it 808. I read that UIDs of 1000 and over are for "normal" users, while those below 1000 are for "system" users. Has the yellowtent user been changed from a "normal" to a "system" user at some point? Or is the UID assigned by Ubuntu? Maybe different versions of Ubuntu do it differently? Or maybe there was some anomaly when our existing Cloudron instance was installed that caused yellowtent to get 1000 instead of 808.

    I don't really know anything about idmapping - I had a quick go at the setfacl approach, but it spat out hundreds of lines ending in "Operation not permitted". In any case, I figured we probably want to fix things so that yellowtent has the same UID and can continue using the existing backups after the migration. So, I did another fresh install of Ubuntu and started again. This time, once Cloudron had been installed and I had rebooted the server, I did the following on the new server:

    # Stop everything yellowtent is involved with:
    systemctl stop box
    systemctl disable box
    systemctl stop cloudron-syslog.service
    
    # Check this returns empty:
    ps -u yellowtent
    
    # Switch the UID:
    usermod -u 1000 yellowtent
    groupmod -g 1000 yellowtent
    
    # Fix ownership on everything owned by 808 (takes a few mins):
    find / -xdev -user 808 -exec chown -h 1000 {} \;
    find / -xdev -group 808 -exec chgrp -h 1000 {} \;
    
    # Restart stuff:
    systemctl start cloudron-syslog.service
    systemctl enable box
    systemctl start box
    

    After that, I was able to continue and complete a successful dry run restore on the new server.
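    For anyone following along, a quick sanity check after a UID switch like this (same UIDs and paths as above) can confirm nothing was missed by the chown/chgrp sweep:

```shell
# Confirm the user and group now have the intended IDs
id yellowtent

# List anything on the root filesystem still owned by the old UID/GID;
# an empty result means the ownership sweep got everything
find / -xdev \( -user 808 -o -group 808 \) -print 2>/dev/null
```

    Note that `-xdev` keeps find on the root filesystem, so any separately mounted data volumes would need their own sweep.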

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Just to clarify, the editing of /etc/hosts was irrelevant and did not cause any of the issues I was having. This is a business setup, and I have been instructed to create at least 2 users on the server, give them admin rights, and then prevent ssh-ing in as root, so the user accounts on the server are needed. I needed to add the server to our tailscale network before doing the restore, as that is how the server will access the backups. It seemed sensible to add the users and check ssh-ing between devices when I did that.

    I wondered if it would be worth adding a note to the migration docs about making sure the UID of the "yellowtent" user on the old server is available on the new server when Cloudron is installed?
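    Such a note could even suggest a preflight check before running the installer on the new server; a sketch (1000 being whatever UID the old server's yellowtent has):

```shell
# On the old server: record yellowtent's UID and GID
id -u yellowtent && id -g yellowtent

# On the new server, before installing Cloudron: check that UID is free
if getent passwd 1000 >/dev/null; then
  echo "UID 1000 is already taken: $(getent passwd 1000)"
else
  echo "UID 1000 is available"
fi
```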

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Ah, I think I understand the issue. On our existing server, yellowtent has UID 1000, which is why the office device receiving the NFS Tarball backups has everything owned by "1000:1000". But on the new server, I created my user before installing Cloudron, and it happened to be assigned UID 1000. Then I installed Cloudron, and it must have given yellowtent UID 808 because 1000 was taken. So the ownership does not match.

    Thank you, I believe I can sort this out now!

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    When I click Restore, I get the message: "Access denied. Create /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot and run "chown yellowtent:yellowtent /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs" on the server".

    The /mnt/managedbackups/cloudron-restore-validation folder itself looks like it is mounted correctly. It has permissions "drwxrwxrwx root root", and I can see the "nfs-tarballs" folder from our office device inside it.

    The "nfs-tarballs" folder, and all its contents, have permissions "drwxr-xr-x djg djg" (djg is my user, which happens to have UID 1000; on the office device there is no user with UID 1000, so these folders show "drwxr-xr-x 1000 1000"). The "nfs-tarballs" folder contains several folders named with date-times, as well as one called "snapshot". Inside these are the .tar.gz and .backupinfo files. Is the problem that there is already a "snapshot" folder here? (The message is asking me to create it, but it already exists.)

    The error message also says to change the ownership of the "nfs-tarballs" folder to yellowtent:yellowtent. If I do that, it changes the ownership to "808:808" on the office device (because that is where the files actually are) - is that what is intended? If I try this and click the Restore button again, I get the message "Failed to unmount existing mount". If I then unmount cloudron-restore-validation, and click the Restore button again, I get the error message: "Unable to create test file as 'yellowtent' user in /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs: EACCES: permission denied, open '/mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot/cloudron-testfile'. Check dir/mount permissions". Inside the "snapshot" folder there is a full set of .tar.gz and .backupinfo files, but no "cloudron-testfile". What should the "dir/mount" permissions be?
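    For what it's worth, the write test described in that last error can be reproduced by hand, which makes it easier to see whether the problem is local permissions or NFS ID mapping (a sketch, run as root on the new server; NFS exports generally compare numeric UIDs unless idmapping is configured, so yellowtent's UID on the client has to match the file owner on the office device):

```shell
# Try to create the same test file Cloudron attempts, as the yellowtent user
sudo -u yellowtent touch /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot/cloudron-testfile \
  && echo "write OK" \
  || echo "write failed: the NFS server sees yellowtent's numeric UID, not its name"

# Clean up if it succeeded
sudo -u yellowtent rm -f /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot/cloudron-testfile
```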

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Hi @james, thanks again for a quick response.

    No Left Over Files
    I think there may be a misunderstanding about the complete reinstall. I tried the rsync attempt last week, then erased everything on the new server to try again: wiped it clean, all files gone, new install of Ubuntu 24.04, no leftover files. Then I did this morning's attempt. There weren't any files or folders left over from the rsync attempt when I tried the restore today, so that cannot be the reason it didn't work.

    DNS
    I don't think I understand your point about the /etc/hosts change. I understand that to view the newly restored Cloudron instance in a browser on my local device, I will need to change my local device's /etc/hosts. But I don't need to do this to see the "Restore Cloudron" page; I can just enter the new server's IPv4 in the browser address bar. When I start the restore from there, doesn't everything happen on the new server? I didn't think my local device was doing anything other than showing me, through the browser window, what was happening on the new server. Once the restore has finished, I would then need to adjust my local /etc/hosts to view the newly restored Cloudron instead of the old existing one. Are you saying I need to adjust /etc/hosts on my local device before I do the restore? (If so, why, given that I'm doing a dry run and that we manage all DNS manually?)

    Next Steps?
    I'm unsure what to try next. I still have the error "Failed to unmount existing mount". I tried doing sudo umount /mnt/managedbackups/cloudron-restore-validation which appeared to work, and then I clicked the restore button again, but got the same error message. The first error message started with "Access denied." which makes me think the problem might be related to file and folder ownership and/or permissions, as I am often confused by these.

    How is this mounting meant to work? What should I try next?

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Hi @james, thanks for such a prompt response.

    I'm surprised that trying to migrate from one server to another using an NFS tarball backup on a local device is considered complex - although I am certainly finding it very complex! But I figured that was down to my inexperience in these things. Is it the NFS tarball backup that is unusual?

    Our DNS records are manually configured. I think I understand I probably don't need the new server to have the /etc/hosts entries until we do the actual migration. Our thinking was to keep the existing server live for as long as possible, so we could do the migration with these /etc/hosts entries on the new server, check it is working, then turn off the live server, switch the DNS records, and remove the /etc/hosts entries on the new server. (Though, maybe that's a bad idea and we should just turn the old server off first.) For other parts of my job I still need access to the existing live Cloudron instance, so I hadn't added anything to my local device's /etc/hosts yet - but was planning to do this once the install had completed so that I could check it. I did this with the earlier rsync-Filesystem attempt.

    Just to clarify, I did do a complete reinstall of the server before each attempt - installing an Ubuntu image from our provider and erasing all existing data, so I was starting with a clean slate each time. So the backups I rsync over on the earlier attempt were not still there when I tried again today.

    The mounting stuff I am a bit confused by (I'm relatively new to the idea of mounting file systems). Doing mount -l on the new server, I can see there is a mount: our-office-device:/export/nfs-cloudron-backups on /mnt/managedbackups/cloudron-restore-validation type nfs4 (with,various,config). I did not create this (or any other mount) manually. Our office device runs on NixOS, and I added a line for the new Cloudron server to its configuration:

      services.nfs.server = {
        enable = true;
        exports = ''
          /export/nfs-cloudron-backups <old-server-tailscale-IP>(rw,sync,no_subtree_check,no_root_squash)
          /export/nfs-cloudron-backups <new-server-tailscale-IP>(rw,sync,no_subtree_check,no_root_squash)
        '';
      };
    

    My understanding was that this would only allow a device with that IP address to mount the folder at /export/nfs-cloudron-backups, but maybe it 'pushed' the mount to the device it could see at that IP address? I'd be surprised if that was the case though, as I haven't set the mount location on the new server anywhere here, and with the old server, I still had to go and create the backup site which included it mounting to this location. The new server has a systemd service mnt-managedbackups-cloudron\x2drestore\x2dvalidation.mount which is active and mounted. I assume that must have been created when I first clicked the "Restore" button and then got the "Access denied. Create..." message? Is that something Cloudron does? Are there some directory permissions I need to check or change? Should I have manually created a mount so the new server could see the office device backup folder before clicking "Restore"?

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Thank you for the clear responses - much appreciated. I am now attempting a dry run of restoring to the new server (a remote VPS with Mythic Beasts) from a backup from the old server (a remote Linode server), but I am hitting some problems.

    I have reinstalled Ubuntu 24.04 on our new server (giving me a clean slate), and added it to our organisation's tailscale network and checked I can ssh between all the relevant devices. I have added several entries to the /etc/hosts file on the new server to point all our Cloudron domains to its IPv4 and IPv6 addresses. I have then installed on our new server the same Cloudron version as our existing server (9.0.17).

    On the existing server, I have set up a backup site using NFS (tarball) which stores complete backups on an office device that is also on our tailscale network. I have configured this office device to allow both Cloudron servers to see its NFS export. I have then downloaded the config file for the most recent backup.

    I have gone to the IPv4 of the new server in a browser, got to the Cloudron install page and clicked the "Looking to restore?" link. I have then uploaded the backup config file and it has populated the text fields for me. I ticked the "Dry Run" box and clicked "Restore". This gave me the error message:
    Access denied. Create /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot and run "chown yellowtent:yellowtent /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs" on the server

    So, on the new server, I did mkdir -p /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot. I could then see the backups from our office device at /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs on the new server. (How did they get there if all I did was make a directory?) I then did the chown as instructed and tried clicking "Restore" in the browser page again. This time I got the error:
    Failed to unmount existing mount
    Is this referring to a mount that was somehow made when I created the directory? Why does it need to unmount it, and why can't it do that? Do I need to unmount something?
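    A guess at part of this: if the mount is managed by a systemd .mount unit, a plain umount may not be enough, and stopping the unit is the cleaner way to take it down. A sketch (unit name as reported by systemctl; this is speculation, not a confirmed fix):

```shell
# Find the mount unit for the validation path
systemctl list-units --type=mount | grep -i managedbackups

# Stop the unit (this also unmounts) rather than calling umount directly
systemctl stop 'mnt-managedbackups-cloudron\x2drestore\x2dvalidation.mount'

# Verify nothing is mounted there any more
findmnt /mnt/managedbackups/cloudron-restore-validation || echo "not mounted"
```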

    Prior to this attempt, I tried the dry run restore by rsyncing the folder of tarballs from a recent backup onto the new server, and then using "Filesystem" as the storage provider (as mentioned in the docs). This successfully restored Cloudron with all users, but it failed to restore all the apps - they were all in an error state. It seemed to be looking for their backup files in the location they would have been at on the old server (a location whose path included a folder named with a big hash). Is this the better approach? And do I have to work out where the backups should be and try to recreate that location on the new server?

    Any further guidance would be much appreciated!

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Hi, we are looking to migrate our Cloudron to a new server and want to minimise down time for our staff. We have about 30 users, 130GB of emails, and recent full backups are around 160GB. We use Cloudron for our business website, emails, and internal chat among other things.

    I have reviewed the Docs at https://docs.cloudron.io/backups#move-cloudron-to-another-server, but I'm still a bit unsure of a few things:

    • Is migration of users and email boxes only possible by using the "looking to restore" button for restoring full Cloudron backups? And is that button only available when you first install Cloudron? We have a second server up and running with a newer version of Ubuntu and Cloudron, and we were thinking about importing each app individually (after backing it up from the old Cloudron onto a backup site hosted on the new Cloudron), but I'm not sure if there's a way to import the users and email boxes? If not, I guess this approach will not work for us; we'd have to use a third site for a full backup.

    • We have a premium monthly subscription on the old Cloudron instance. Do we need to purchase another one for the new Cloudron instance? (We have more than 2 apps.) And then cancel the old one once the migration is done? Or is there a way of using the same subscription throughout the migration?

    Support backup restore migration

  • Gitlab runners fail every 30th build
    davejgreen

    We solved this in the end by adding some cleanup to the GitLab runner's after_script in our .gitlab-ci.yml file:

      before_script:
        - stat -c %m /tmp
        - du -sh /tmp
        - ls -la /tmp
        - export XDG_CACHE_HOME=$(mktemp -d)

      # Each job leaks files into the gitlab-runner's private /tmp
      after_script:
        - rm -rf "$XDG_CACHE_HOME"
        - du -sh /tmp
        - ls -la /tmp
        - find /tmp -mindepth 1 -maxdepth 1 -user gitlab-runner -exec rm -rf {} +
        - du -sh /tmp
        - ls -la /tmp


    Once we added the logging lines (the du and ls calls), we could see exactly which files were getting left over.

    GitLab

  • Cannot login after restoring from backup to v.1.112.4
    davejgreen

    Last week, our GitLab app updated from v1.112.4 to v1.113.0 (or tried to) and we could no longer access GitLab. We would get a 500 error when trying to log in (with Cloudron single sign on as usual). I have been trying without any success since then to get our GitLab back.

    Problems restoring from backup
    Ideally, we'd just like to restore from a backup we took just before the update. I have tried this numerous times, and sometimes it appears to be running, but we still cannot log in. We just get the 500 error, and the logs show a lot of 500 and 502 errors. I've also seen log lines such as: NoMethodError (undefined method 'id' for nil): and RuntimeError (CI job token signing key is not set):, but I'm not sure what this means. I've tried working with ChatGPT to fix these, but I don't think ChatGPT really knows either.

    New Cloudron instance with old GitLab version
    We even tried getting a new VPS and installing a fresh, empty instance of Cloudron (v9.1.2 and then v9.1.3). I was sometimes able to install GitLab v1.112.4 and get that running. I then tried importing our backup, but we'd end up with the same 500 error when trying to log in. I'm not sure if this should work anyway though, as it is on a different domain name, so may not be compatible with the backup from the existing domain.

    Can't work out the "fix"
    I've looked at the information here: https://forum.cloudron.io/topic/15158/postgres-sql-error-update-from-v1.112.4-to-v1.113.0, which is about the same GitLab update, including the links to the GitLab issues. We have a similar log line => Healthcheck error: Error: connect ECONNREFUSED 172.18.x.x:80 that we are getting a lot, but I don't think I've seen the log lines about PG::CheckViolation: ERROR: check constraint "check_17a3a18e31" of relation "user_agent_details" is violated by some row in our case. I have tried following the "fixes", but I'm not really sure what I'm trying to do, and our case does not seem to match the information given. In PostgreSQL we already have an "organization_id" column, but our "user_agent_details" table is empty. In ~/gitlab/db/migrate the latest file is 20260212153542_add_work_item_custom_types_namespace_fk.rb, which looks like it is from 2 or 3 weeks earlier than this problem. Should there be one from the evening of the 2nd March, when our GitLab updated?

    Please help!
    Any help that gets us back into our GitLab would be greatly appreciated. (Then we might see if we can migrate our existing repo and issues away from GitLab into Forgejo!)

    GitLab

  • PostGres SQL Error update from v1.112.4 to v1.113.0
    davejgreen

    @joseph Yes, on a fresh install of Ubuntu 24.04, Cloudron 9.1.2, I installed GitLab 1.112.4 (18.8.4, the version before the update), and I cannot get it working, we have logs similar to the ones above with the line => Healthcheck error: Error: connect ECONNREFUSED 172.18.x.x:80. I have also tried this with Ubuntu 22.04 and got the same behaviour. Still haven't managed to get back into GitLab on our existing Cloudron instance either.

    GitLab

  • PostGres SQL Error update from v1.112.4 to v1.113.0
    davejgreen

    We are also having an issue after this update to GitLab, and we have not been able to get back into GitLab since. I have tried following the fixes in the posts and links given above, but with no success. I tried restoring our last backup on the previous GitLab version (1.112.4); this appeared to run, but it would not let us log in, and we just get "500: We're sorry, something went wrong on our end". We have even tried paying for a new VPS, installing a fresh Ubuntu 24.04, installing a fresh Cloudron instance, and installing GitLab v1.112.4, but I cannot get this to work either; we still get healthcheck errors similar to the one in the original post. (My intention was then to restore from our backup onto this fresh instance.) Any ideas why restoring to the old version is not working for us? Or how we can get back into our GitLab data? I'm a bit concerned that we cannot restore a backup from a day or two ago and access the app data.

    GitLab

  • Struggling to Replace MinIO - Advice Welcome!
    davejgreen

    Thanks for the responses. We are particularly interested in de-duplication. Does anyone know if Cloudron backing up to a Hetzner Storage Box will do de-duplicated backups? I was surprised when Backblaze didn't, but maybe I configured something wrong?

    Discuss

  • Struggling to Replace MinIO - Advice Welcome!
    davejgreen

    We have been using MinIO for our company backups for some time. Each nightly backup with MinIO takes about 2-3 hours. When Cloudron updated from v8 to v9, something broke with the MinIO regions, and we need to find an alternative anyway as MinIO has gone into maintenance mode.

    Requirements: We have about 150GB of data, increasing by a few GB a week. It is made up of a large number of small files, with new ones being added while many old ones stay the same. We want frequent (7 daily, 4 weekly, 12 monthly, etc.) de-duplicated backups. So the first backup would be the full 150GB, and subsequent backups would be snapshots that include the changed or new files (only a few GB) and "hard-links"(?) to the unchanged ones. We can tolerate an initial backup taking longer, but subsequent daily backups should be, e.g. 5-6 hours max. We had our Cloudron server (in "the cloud") backing up to an on-premises device with a ZFS file system and plenty of storage space. We are open to either using this set-up, or having storage somewhere else in "the cloud" for our backups.
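    The hard-link scheme described above is what rsync's --link-dest option implements, independent of any particular backup product; a minimal sketch with made-up paths:

```shell
# First snapshot: a full copy
rsync -a /data/ /backups/day1/

# Second snapshot: changed files are copied; unchanged files become
# hard links into day1, so they cost (almost) no extra space
rsync -a --link-dest=/backups/day1 /data/ /backups/day2/

# Unchanged files share one inode across both snapshots
stat -c '%i' /backups/day1/some-file /backups/day2/some-file
```

    Deleting an old snapshot directory then only frees the blocks no newer snapshot still links to, which is what makes the daily/weekly/monthly retention cheap.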

    Can anyone advise on a setup that would be a suitable replacement for MinIO?

    Does anyone know which backup options in Cloudron are intended to provide de-duplicated, incremental backup snapshots?

    Below is a breakdown of what we have tried so far and what the problems were:

    Garage, on the same on-premises device as MinIO: It was difficult to set up, but we got there in the end. However, we found each backup took progressively longer: 5 hours, then 7, then 16. I think the de-duplication was making things take longer the more snapshots we had. (We also did not like that the files were stored in a non-human-readable way. With MinIO, we can browse the backup files as a normal file system, but Garage just has chunks of bytes, so you can only access the data by using Garage.)

    NFS (and SSHFS) with rsync (as I believe tarball would just do a full copy of the data each time): These were just too slow. We first tried this when we had more data, and they would run for 24 hours and then get killed by Cloudron for taking too long. After reducing our data, we managed a complete backup, but it took around 13 hours each day, which isn't really workable.

    Backblaze B2 (rsync, in "the cloud"): The first backup seemed to work fine, but subsequent backups did not appear to be de-duplicating. We had four 150GB Backblaze backups, but the bucket was showing as 860GB in size, which is far more than if all four backups were full copies of the data. Is Backblaze meant to de-duplicate? We ticked the encrypted option in Cloudron - would it de-duplicate if it was not encrypted? Does encrypting it make it bigger?

    S3 API Compatible (v4) with PeaSoup in "the cloud": Too slow, and no de-duplication.

    Has anyone with similar requirements found a true replacement for MinIO yet?

    Discuss

  • Minio region issues after cloudron updates
    davejgreen

    It has happened for both of the version 9 updates we have received (v9.0.13 and v9.0.15), so I'm expecting it to keep happening with each version 9 update, unless an update specifically addresses the issue.

    Support minio backup update s3

  • Minio region issues after cloudron updates
    davejgreen

    Description

    Our Cloudron updates (from v8.3.2 to v9.0.13 at the start of Dec 2025, and from v9.0.13 to v9.0.15 on 20 Dec 2025) have caused our daily minio backup to fail with the error: "Error listing objects. code: AuthorizationHeaderMalformed message: The authorization header is malformed; the region is wrong; expecting 'eu-west-2'. HTTP: 400". The machine we are backing up to is configured with NixOS, so the minio service on it is pinned to a particular version that has not changed throughout what is described here. The minio service and our minio bucket are set to use region "eu-west-2". After trying a number of different backup types (with various other problems I won't go into here), I came back to try minio again.

    I changed the NixOS setting for our local machine's minio service to "us-east-1" and restarted the minio service on it. I then started a minio backup manually from the Cloudron UI. This failed immediately with the same error message as above, saying it was expecting region "eu-west-2". However, when I then changed the NixOS setting back to "eu-west-2" and started another minio backup from the Cloudron UI, it worked! I was able to get 2 successful backups until our Cloudron version updated to v9.0.15 the following day, after which we got the same error as above. I then repeated the steps described to get the backups working again.

    This seems to be a similar issue to:
    https://forum.cloudron.io/topic/14684/minio-s3-backup-configuration-no-longer-working-region-is-wrong-error?_=1766397645452
    and possibly:
    https://forum.cloudron.io/topic/13970/error-configuring-backup-cloudflare-r2-object-storage
    and we're looking into minio alternatives. In the meantime, is there something about Cloudron v9 that is interfering with minio S3 regions that could be put right?

    Steps to reproduce

    • Have a working daily minio backup with minio configured to use region "eu-west-2".
    • Receive a Cloudron v9 update. The next minio backup fails with the error: "Error listing objects. code: AuthorizationHeaderMalformed message: The authorization header is malformed; the region is wrong; expecting 'eu-west-2'. HTTP: 400"
    • Change the minio service's region setting on the device receiving the backup to, e.g. "us-east-1", and try another minio backup, which fails with the same error, expecting "eu-west-2".
    • Change the minio service's region setting back to "eu-west-2" (as it was before), and try another minio backup, which then succeeds.
    • Receive another Cloudron update and repeat.
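    For reference, the region toggle in the steps above corresponds to a single option in the office device's NixOS config (a sketch, assuming the stock services.minio NixOS module; our real config pins the package version too):

```nix
services.minio = {
  enable = true;
  region = "eu-west-2";  # temporarily switched to "us-east-1" and back, restarting minio each time
};
```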

    System Details

    Cloudron server (self-hosted) on Ubuntu v20.04.6 (I know we need to update this, but we're trying to get backups working first), Cloudron versions as mentioned above.

    Device receiving the backup is on Linux, using NixOS 25.05.

    Do let me know if any other information might be useful in debugging this,

    Many thanks.

    Support minio backup update s3