
Cloudron Forum


davejgreen (@davejgreen) · Posts: 16 · Topics: 4 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts


  • Help with migrating Cloudron to a new server
    davejgreen

    I tried again with a fresh install of Ubuntu 24.04 (this wipes all data on the server, so there is nothing left over from previous attempts). I checked that UID 1000 was available, installed Cloudron v9.0.17 (following everything to the letter), but I still ran into exactly the same problem. The new installation gave yellowtent the UID 808, which does not match yellowtent's UID on the existing server (1000); that mismatch is the sole cause of the problem.

    The existing server was installed long before I worked here, so I don't know why yellowtent has UID 1000 there, while fresh Cloudron installs seem to be giving it 808. I read that UIDs of 1000 and over are for "normal" users, while those below 1000 are for "system" users. Has the yellowtent user been changed from a "normal" to a "system" user at some point? Or is the UID assigned by Ubuntu? Maybe different versions of Ubuntu do it differently? Or maybe there was some anomaly when our existing Cloudron instance was installed that caused yellowtent to get 1000 instead of 808.

    I don't really know anything about idmapping - I had a quick go at the setfacl suggestion, but it spat out hundreds of lines ending in "Operation not permitted". In any case, I figured we should fix things so that yellowtent has the same UID and can continue using the existing backups after the migration. So, I did another fresh install of Ubuntu and started again. This time, once Cloudron had been installed and I had rebooted the server, I did the following on the new server:

    # Stop everything yellowtent is involved with:
    systemctl stop box
    systemctl disable box
    systemctl stop cloudron-syslog.service
    
    # Check this returns empty:
    ps -u yellowtent
    
    # Switch the UID:
    usermod -u 1000 yellowtent
    groupmod -g 1000 yellowtent
    
    # Fix ownership on everything owned by 808 (takes a few mins):
    find / -xdev -user 808 -exec chown -h 1000 {} \;
    find / -xdev -group 808 -exec chgrp -h 1000 {} \;
    
    # Restart stuff:
    systemctl start cloudron-syslog.service
    systemctl enable box
    systemctl start box
    

    After that, I was able to continue and complete a successful dry run restore on the new server.
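
    For anyone doing the same, here is a quick way I'd double-check the remap afterwards (a sketch; 808 was the old UID on our fresh install, adjust to yours, and the helper name is mine):

```shell
# List anything under a tree still owned by a given (old) numeric UID.
# Empty output means the chown/chgrp pass caught everything.
check_uid_leftovers() {
  find "$1" -xdev -user "$2" -print 2>/dev/null
}

# e.g. on the new server after the usermod/groupmod:
#   check_uid_leftovers / 808
```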

    Support backup restore migration

  • Struggling to Replace MinIO - Advice Welcome!
    davejgreen

    We have been using MinIO for our company backups for some time. Each nightly backup with MinIO takes about 2-3 hours. When Cloudron updated from v8 to v9, something broke with the MinIO regions, and we need to find an alternative anyway as MinIO has gone into maintenance mode.

    Requirements: We have about 150GB of data, increasing by a few GB a week. It is made up of a large number of small files, with new ones being added while many old ones stay the same. We want frequent (7 daily, 4 weekly, 12 monthly, etc.) de-duplicated backups. So the first backup would be the full 150GB, and subsequent backups would be snapshots that include the changed or new files (only a few GB) and "hard-links"(?) to the unchanged ones. We can tolerate the initial backup taking longer, but subsequent daily backups should take 5-6 hours at most. We had our Cloudron server (in "the cloud") backing up to an on-premises device with a ZFS file system and plenty of storage space. We are open to either using this set-up, or having storage somewhere else in "the cloud" for our backups.
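
    For anyone unfamiliar with the hard-link idea I mean, here is a minimal sketch using only coreutils - the same trick rsnapshot and rsync --link-dest use (paths are made up):

```shell
set -e
base=$(mktemp -d)                       # stand-in for the backup volume
mkdir -p "$base/snap.0"
echo "same" > "$base/snap.0/unchanged.txt"
echo "old"  > "$base/snap.0/changed.txt"

# Next snapshot: hard-link everything from the previous one (near-zero space)...
cp -al "$base/snap.0" "$base/snap.1"

# ...then replace only the files that actually changed. Unlinking first means
# the old snapshot keeps its own version (rsync does the same internally).
rm "$base/snap.1/changed.txt"
echo "new" > "$base/snap.1/changed.txt"
```

    Each snapshot then browses like a full copy of the data, but unchanged files share the same inode on disk, which is also why the backup target has to support hard links.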

    Can anyone advise on a setup that would be a suitable replacement for MinIO?

    Does anyone know which backup options in Cloudron are intended to provide de-duplicated, incremental backup snapshots?

    Below is a breakdown of what we have tried so far and what the problems were:

    Garage on the same on-premises device as MinIO. Was difficult to set up, but we got there in the end. However, we found each backup took progressively longer - 5 hours, then 7, then 16. I think the de-duplication was making things take longer the more snapshots we had. (We also did not like that the files were stored in a non-human-readable way. With MinIO, we can browse the backup files as a normal file system, but Garage just has chunks of bytes, so you can only access the data by using Garage.)

    NFS (and SSHFS) with rsync (as I believe tarball would just do a full copy of the data each time). These were just too slow. We first tried this when we had more data, and they would run for 24 hours and then get killed by Cloudron for taking too long. After reducing our data, we managed a complete backup, but it took around 13 hours each day, which isn't really workable.

    Backblaze B2 (rsync, in "the cloud") The first backup seemed to work fine, but subsequent backups did not appear to be de-duplicating. We had four 150GB Backblaze backups, but the bucket was showing as 860GB in size, so far more than if all four backups were full copies of the data. Is Backblaze meant to de-duplicate? We ticked the encrypted option in Cloudron - would it de-duplicate if it was not encrypted? Does encrypting it make it bigger?

    S3 API Compatible (v4) with PeaSoup in "the cloud": Too slow, and no de-duplication.

    Has anyone with similar requirements found a true replacement for MinIO yet?

    Discuss

  • Gitlab runners fail every 30th build
    davejgreen

    We solved this in the end by adding some cleanup to the GitLab runner's after_script in our .gitlab-ci.yml file:

      before_script:
        - stat -c %m /tmp
        - du -sh /tmp
        - ls -la /tmp
        - export XDG_CACHE_HOME=$(mktemp -d)

      # Each job leaks files into the gitlab-runner's private /tmp
      after_script:
        - rm -rf "$XDG_CACHE_HOME"
        - du -sh /tmp
        - ls -la /tmp
        - find /tmp -mindepth 1 -maxdepth 1 -user gitlab-runner -exec rm -rf {} +
        - du -sh /tmp
        - ls -la /tmp
    

    Once we added the logging lines, we could see which files were being left behind.
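
    The essence of the fix, stripped out of CI (a sketch; the variable name is mine):

```shell
# Give the job its own throwaway cache dir so tools that honour
# XDG_CACHE_HOME stop leaking files into the shared /tmp.
job_cache=$(mktemp -d)
export XDG_CACHE_HOME="$job_cache"

# ... job commands would run here ...

rm -rf "$job_cache"   # the after_script equivalent: always clean up
```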

    GitLab

  • Minio region issues after cloudron updates
    davejgreen

    It has happened for both the version 9 updates we have received, so v9.0.13, and v9.0.15. So, I'm expecting it to keep happening each time we get a version 9 update (unless that update specifically addresses the issue).

    Support minio backup update s3

  • Help with migrating Cloudron to a new server
    davejgreen

    Ah, I think I understand the issue. On our existing server, yellowtent has UID 1000. This is why the office device that is receiving the NFS Tarball backups has everything owned by "1000:1000". But on the new server, I created my user before installing Cloudron, and it happened to be assigned UID 1000. Then I installed Cloudron, and it must have assigned yellowtent UID 808 because 1000 was taken. So the ownership does not match.

    Thank you, I believe I can sort this out now!

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Just to clarify, the editing of /etc/hosts was irrelevant and did not cause any of the issues I was having. This is a business setup, and I have been instructed to create at least 2 users on the server, give them admin rights, and then prevent SSH-ing in as root, so the user accounts on the server are needed. I needed to add the server to our Tailscale network before doing the restore, as that is how the server will access the backups. It seemed sensible to add the users and check SSH-ing between devices while I was at it.

    I wondered if it would be worth adding a note to the migration docs about making sure the UID of the "yellowtent" user on the old server is available on the new server when Cloudron is installed?

    Support backup restore migration

  • PostGres SQL Error update from v1.112.4 to v1.113.0
    davejgreen

    We are also having an issue after this GitLab update, and we have not been able to get back into GitLab since. I have tried following the fixes in the posts and links given above, but with no success. I tried restoring to our last backup on the previous GitLab version (1.112.4); this appeared to run, but it would not let us log in - we just get "500: We're sorry, something went wrong on our end". We have even tried paying for a new VPS, installing a fresh Ubuntu 24.04, installing a fresh Cloudron instance, and installing GitLab v1.112.4, but I cannot get this to work either; we still get healthcheck errors similar to the one in the original post. (My intention was then to restore from our backup onto this fresh version.) Any ideas why restoring to the old version is not working for us? Or how we can get back into our GitLab data? I'm a bit concerned that we cannot restore a backup from a day or two ago and access the app data.

    GitLab

  • Minio region issues after cloudron updates
    davejgreen

    Description

    Our Cloudron updates (from v8.3.2 to v9.0.13 at the start of Dec 2025, and from v9.0.13 to v9.0.15 on 20 Dec 2025) have caused our daily minio backup to fail with the error: "Error listing objects. code: AuthorizationHeaderMalformed message: The authorization header is malformed; the region is wrong; expecting 'eu-west-2'. HTTP: 400". The machine we are backing up to is configured with NixOS, so the minio service on it is pinned to a particular version that has not changed throughout what is described here. The minio service and our minio bucket are set to use region "eu-west-2". After trying a number of different backup types (with various other problems I won't go into here), I came back to try minio again.

    I changed the NixOS setting for our local machine's minio service to "us-east-1" and restarted the minio service on it. I then started a minio backup manually from the Cloudron UI. This failed immediately with the same error message as above, saying it was expecting region "eu-west-2". However, when I then changed the NixOS setting back to "eu-west-2" and started another minio backup from the Cloudron UI, it worked! I was able to get 2 successful backups until our Cloudron version updated to v9.0.15 the following day, after which we got the same error as above. I then repeated the steps described to get the backups working again.

    This seems to be a similar issue to:
    https://forum.cloudron.io/topic/14684/minio-s3-backup-configuration-no-longer-working-region-is-wrong-error?_=1766397645452
    and possibly:
    https://forum.cloudron.io/topic/13970/error-configuring-backup-cloudflare-r2-object-storage
    and we're looking into minio alternatives. In the meantime, is there something about Cloudron v9 that is interfering with minio S3 regions that could be put right?

    Steps to reproduce

    • Have a working daily minio backup with minio configured to use region "eu-west-2".
    • Receive a Cloudron v9 update. The next minio backup fails with the error: "Error listing objects. code: AuthorizationHeaderMalformed message: The authorization header is malformed; the region is wrong; expecting 'eu-west-2'. HTTP: 400"
    • Change the minio service's region setting on the device receiving the backup to, e.g. "us-east-1", and try another minio backup, which fails with the same error, expecting "eu-west-2".
    • Change the minio service's region setting back to "eu-west-2" (as it was before), and try another minio backup, which then succeeds.
    • Receive another Cloudron update and repeat.
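
    For reference, the region on the NixOS side is pinned through the stock services.minio module; this is a fragment of what we toggle (assuming the standard module options, with the rest of our host config omitted):

```nix
# configuration.nix fragment on the device receiving the backups
services.minio = {
  enable = true;
  region = "eu-west-2";  # temporarily switched to "us-east-1" and back in the workaround
};
```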

    System Details

    Cloudron server (self-hosted) on Ubuntu v20.04.6 (I know we need to update this, but we're trying to get backups working first), Cloudron versions as mentioned above.

    Device receiving the backup is on Linux, using NixOS 25.05.

    Do let me know if any other information might be useful in debugging this,

    Many thanks.

    Support minio backup update s3

  • Cannot login after restoring from backup to v.1.112.4
    davejgreen

    Last week, our GitLab app updated from v1.112.4 to v1.113.0 (or tried to) and we could no longer access GitLab. We would get a 500 error when trying to log in (with Cloudron single sign on as usual). I have been trying without any success since then to get our GitLab back.

    Problems restoring from backup
    Ideally, we'd just like to restore from a backup we took just before the update. I have tried this numerous times, and sometimes it appears to run, but we still cannot log in. We just get the 500 error, and the logs show a lot of 500 and 502 errors. I've also seen log lines such as: NoMethodError (undefined method 'id' for nil): and RuntimeError (CI job token signing key is not set):, but I'm not sure what these mean. I've tried working with ChatGPT to fix them, but I don't think ChatGPT really knows either.

    New Cloudron instance with old GitLab version
    We even tried getting a new VPS and installing a fresh, empty instance of Cloudron (v9.1.2 and then v9.1.3). I was sometimes able to install GitLab v1.112.4 and get that running. I then tried importing our backup, but we'd end up with the same 500 error when trying to log in. I'm not sure if this should work anyway though, as it is on a different domain name, so may not be compatible with the backup from the existing domain.

    Can't work out the "fix"
    I've looked at the information here: https://forum.cloudron.io/topic/15158/postgres-sql-error-update-from-v1.112.4-to-v1.113.0, which is about the same GitLab update, including the links to the GitLab issues. We have a similar log line => Healthcheck error: Error: connect ECONNREFUSED 172.18.x.x:80 that we are getting a lot, but I don't think I've seen the log lines about PG::CheckViolation: ERROR: check constraint "check_17a3a18e31" of relation "user_agent_details" is violated by some row in our case. I have tried following the "fixes", but I'm not really sure what I'm trying to do, and our case does not seem to match the information given. In PostgreSQL we already have an "organization_id" column, but our "user_agent_details" table is empty. In ~/gitlab/db/migrate# the latest file is 20260212153542_add_work_item_custom_types_namespace_fk.rb, which looks like it is from 2 or 3 weeks earlier than this problem. Should there be one from the evening of the 2nd March, when our GitLab updated?

    Please help!
    Any help that gets us back into our GitLab would be greatly appreciated. (Then we might see if we can migrate our existing repo and issues away from GitLab into Forgejo!)

    GitLab

  • Help with migrating Cloudron to a new server
    davejgreen

    Hi, we are looking to migrate our Cloudron to a new server and want to minimise downtime for our staff. We have about 30 users, 130GB of emails, and recent full backups are around 160GB. We use Cloudron for our business website, emails, and internal chat, among other things.

    I have reviewed the Docs at https://docs.cloudron.io/backups#move-cloudron-to-another-server, but I'm still a bit unsure of a few things:

    • Is migration of users and email boxes only possible by using the "looking to restore" button for restoring full Cloudron backups? And is that button only available when you first install Cloudron? We have a second server up and running with a newer version of Ubuntu and Cloudron and were thinking about importing each app individually (after backing it up from the old Cloudron onto a backup site hosted on the new Cloudron), but I'm not sure if there's a way to import the users and email boxes. If not, I guess this approach will not work for us; we'd have to use a third site for a full backup.

    • We have a premium monthly subscription on the old Cloudron instance. Do we need to purchase another one for the new Cloudron instance? (We have more than 2 apps.) And then cancel the old one once the migration is done? Or is there a way of using the same subscription throughout the migration?

    Support backup restore migration

  • Help with migrating Cloudron to a new server
    davejgreen

    Hi @james, thanks again for a quick response.

    No Left Over Files
    I think you're misunderstanding what I said about a complete reinstall. I tried the rsync attempt last week, then erased everything on the new server to try again: wiped it clean, all files gone, new install of Ubuntu 24.04, no leftover files. Then I did this morning's attempt. There weren't any files or folders left over from the rsync attempt when I tried the restore today, so that cannot be the reason it didn't work.

    DNS
    I don't think I understand your point about the /etc/hosts change. I understand that to view the newly restored Cloudron instance in a browser on my local device, I will need to change my local device's /etc/hosts. But I don't need to do this to see the "Restore Cloudron" page; I can just enter the new server's IPv4 address in the browser address bar. Then when I start the restore from there, doesn't this happen on the new server? I didn't think my local device was doing anything other than showing me what was happening on the new server through the browser window. Once the restore has finished, then I would need to adjust my local /etc/hosts to view the newly restored Cloudron instead of the old existing one. Are you saying I need to adjust /etc/hosts on my local device before I do the restore? (If so, why, given that I'm doing a dry run and that we manage all DNS manually?)
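
    For context, the local override I'm referring to is just a hosts line like the following (made-up IP and domain):

```
# /etc/hosts on my local device, pointing the dashboard domain at the NEW server
203.0.113.10  my.example.com
```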

    Next Steps?
    I'm unsure what to try next. I still have the error "Failed to unmount existing mount". I tried sudo umount /mnt/managedbackups/cloudron-restore-validation, which appeared to work, and then I clicked the restore button again, but got the same error message. The first error message started with "Access denied.", which makes me think the problem might be related to file and folder ownership and/or permissions, as I am often confused by these.

    How is this mounting meant to work? What should I try next?

    Support backup restore migration