Cloudron Forum


davejgreen

@davejgreen
Posts: 9 | Topics: 4 | Shares: 0 | Groups: 0 | Followers: 0 | Following: 0

Posts


  • Gitlab runners fail every 30th build
    davejgreen

    We solved this in the end by adding some clean-up to the before_script and after_script in our .gitlab-ci.yml file:

      before_script:
        - stat -c %m /tmp
        - du -sh /tmp
        - ls /tmp -la
        - export XDG_CACHE_HOME=$(mktemp -d)

      # Each job leaks files into the gitlab-runner's private /tmp
      after_script:
        - rm -rf $XDG_CACHE_HOME
        - du -sh /tmp
        - ls /tmp -la
        - find /tmp -mindepth 1 -maxdepth 1 -user gitlab-runner -exec rm -rf {} +
        - du -sh /tmp
        - ls /tmp -la


    We could see the files that got left over when we added the logging lines.
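    As an illustration of what the find line does, here is a minimal sketch. It uses a throwaway temp directory instead of the runner's real /tmp, and omits the -user gitlab-runner filter so it runs as any user:

    ```shell
    #!/bin/sh
    set -eu
    work=$(mktemp -d)   # stand-in for the runner's /tmp
    mkdir "$work/leftover-cache" "$work/leftover-build"
    touch "$work/stray.tmp"
    # -mindepth 1 -maxdepth 1 matches only direct children of $work (never
    # $work itself); -exec ... {} + removes them all in one rm invocation.
    find "$work" -mindepth 1 -maxdepth 1 -exec rm -rf {} +
    ls -A "$work"   # prints nothing: the directory is now empty
    ```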

    GitLab

  • Minio region issues after cloudron updates
    davejgreen

    It has happened for both of the version 9 updates we have received (v9.0.13 and v9.0.15), so I'm expecting it to keep happening with each version 9 update, unless an update specifically addresses the issue.

    Support minio backup update s3

  • Struggling to Replace MinIO - Advice Welcome!
    davejgreen

    We have been using MinIO for our company backups for some time. Each nightly backup with MinIO takes about 2-3 hours. When Cloudron updated from v8 to v9, something broke with the MinIO regions, and we need to find an alternative anyway as MinIO has gone into maintenance mode.

    Requirements: We have about 150GB of data, increasing by a few GB a week. It is made up of a large number of small files, with new ones being added while many old ones stay the same. We want frequent (7 daily, 4 weekly, 12 monthly, etc.) de-duplicated backups. So the first backup would be the full 150GB, and subsequent backups would be snapshots that include the changed or new files (only a few GB) and "hard-links"(?) to the unchanged ones. We can tolerate an initial backup taking longer, but subsequent daily backups should take, e.g., 5-6 hours at most.

    We had our Cloudron server (in "the cloud") backing up to an on-premises device with a ZFS file system and plenty of storage space. We are open to either keeping this set-up or having storage somewhere else in "the cloud" for our backups.
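    The "hard-link" snapshot idea described above is what rsync's --link-dest option automates (e.g. rsync -a --delete --link-dest=/backups/latest /data/ /backups/2026-03-04/, with hypothetical paths). A minimal sketch of the underlying mechanism using plain coreutils:

    ```shell
    #!/bin/sh
    set -eu
    work=$(mktemp -d)
    cd "$work"
    mkdir snap1 snap2
    echo "unchanged" > snap1/file.txt
    # A hard link shares the inode with the original file, so the "copy"
    # in the new snapshot consumes no additional storage.
    ln snap1/file.txt snap2/file.txt
    stat -c %i snap1/file.txt snap2/file.txt   # both lines print the same inode
    ```

    With --link-dest, rsync does this per file: anything unchanged since the previous snapshot becomes a hard link into it, and only changed or new files are actually transferred and stored.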

    Can anyone advise on a setup that would be a suitable replacement for MinIO?

    Does anyone know which backup options in Cloudron are intended to provide de-duplicated, incremental backup snapshots?

    Below is a breakdown of what we have tried so far and what the problems were:

    Garage on the same on-premises device as MinIO. It was difficult to set up, but we got there in the end. However, each backup took progressively longer: 5 hours, then 7, then 16. I think the de-duplication was making things slower the more snapshots we had. (We also did not like that the files were stored in a non-human-readable way. With MinIO we can browse the backup files as a normal file system, but Garage stores opaque chunks of bytes, so you can only access the data through Garage.)

    NFS (and SSHFS) with rsync (as I believe the tarball option would just do a full copy of the data each time). These were just too slow. We first tried this when we had more data, and the backups would run for 24 hours and then get killed by Cloudron for taking too long. After reducing our data, we managed a complete backup, but it took around 13 hours each day, which isn't really workable.

    Backblaze B2 (rsync, in "the cloud"). The first backup seemed to work fine, but subsequent backups did not appear to be de-duplicating. We had four 150GB Backblaze backups, but the bucket was showing as 860GB in size, which is more than even four full copies of the data (4 x 150GB = 600GB). Is Backblaze meant to de-duplicate? We ticked the encrypted option in Cloudron; would it de-duplicate if it was not encrypted? Does encrypting the data make it bigger?

    S3 API Compatible (v4) with PeaSoup in "the cloud": Too slow, and no de-duplication.

    Has anyone with similar requirements found a true replacement for MinIO yet?

    Discuss

  • PostGres SQL Error update from v1.112.4 to v1.113.0
    davejgreen

    We are also having an issue with GitLab after this update, and have not been able to get back in since. I have tried following the fixes in the posts and links given above, but with no success. I tried restoring from our last backup on the previous GitLab version (1.112.4); this appeared to be running, but it would not let us log in: we just get "500: We're sorry, something went wrong on our end". We have even tried paying for a new VPS, installing a fresh Ubuntu 24.04, a fresh Cloudron instance, and GitLab v1.112.4, but I cannot get this to work either; we still get healthcheck errors similar to the one in the original post. (My intention was then to restore from our backup onto this fresh install.) Any ideas why restoring to the old version is not working for us, or how we can get back into our GitLab data? I'm a bit concerned that we cannot restore a backup from a day or two ago and access the app data.

    GitLab

  • Minio region issues after cloudron updates
    davejgreen

    Description

    Our Cloudron updates (from v8.3.2 to v9.0.13 at the start of Dec 2025, and from v9.0.13 to v9.0.15 on 20 Dec 2025) have caused our daily minio backup to fail with the error: "Error listing objects. code: AuthorizationHeaderMalformed message: The authorization header is malformed; the region is wrong; expecting 'eu-west-2'. HTTP: 400". The machine we are backing up to is configured with NixOS, so the minio service on it is pinned to a particular version that has not changed throughout what is described here. The minio service and our minio bucket are set to use region "eu-west-2". After trying a number of different backup types (with various other problems I won't go into here), I came back to try minio again.

    I changed the NixOS setting for our local machine's minio service to "us-east-1" and restarted the minio service on it. I then started a minio backup manually from the Cloudron UI. This failed immediately with the same error message as above, saying it was expecting region "eu-west-2". However, when I then changed the NixOS setting back to "eu-west-2" and started another minio backup from the Cloudron UI, it worked! I was able to get 2 successful backups until our Cloudron version updated to v9.0.15 the following day, after which we got the same error as above. I then repeated the steps described to get the backups working again.

    This seems to be a similar issue to:
    https://forum.cloudron.io/topic/14684/minio-s3-backup-configuration-no-longer-working-region-is-wrong-error?_=1766397645452
    and possibly:
    https://forum.cloudron.io/topic/13970/error-configuring-backup-cloudflare-r2-object-storage
    and we're looking into minio alternatives. In the meantime, is there something about Cloudron v9 that is interfering with minio S3 regions that could be put right?

    Steps to reproduce

    • Have a working daily minio backup with minio configured to use region "eu-west-2".
    • Receive a Cloudron v9 update. The next minio backup fails with the error: "Error listing objects. code: AuthorizationHeaderMalformed message: The authorization header is malformed; the region is wrong; expecting 'eu-west-2'. HTTP: 400"
    • Change the minio service's region setting on the device receiving the backup to, e.g. "us-east-1", and try another minio backup, which fails with the same error, expecting "eu-west-2".
    • Change the minio service's region setting back to "eu-west-2" (as it was before), and try another minio backup, which then succeeds.
    • Receive another Cloudron update and repeat.
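    For context on why the error message mentions the region at all: AWS Signature V4 (which MinIO's S3 API uses) embeds the region in the Credential scope of the Authorization header, so a region mismatch between client and server surfaces as AuthorizationHeaderMalformed. A sketch of what that scope looks like (the access key is a placeholder):

    ```shell
    #!/bin/sh
    set -eu
    ACCESS_KEY=EXAMPLEKEY        # placeholder, not a real key
    REGION=eu-west-2
    DATE=$(date -u +%Y%m%d)
    # SigV4 credential scope as it appears in the Authorization header:
    #   Authorization: AWS4-HMAC-SHA256 Credential=<scope>, SignedHeaders=..., Signature=...
    SCOPE="$ACCESS_KEY/$DATE/$REGION/s3/aws4_request"
    echo "$SCOPE"
    ```

    Since the region is part of the signed scope, both sides must agree on it exactly for the signature check to pass, which is consistent with the backup only working when the settings line up.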

    System Details

    Cloudron server (self-hosted) on Ubuntu v20.04.6 (I know we need to update this, but we're trying to get backups working first), Cloudron versions as mentioned above.

    Device receiving the backup is on Linux, using NixOS 25.05.

    Do let me know if any other information might be useful in debugging this,

    Many thanks.

    Support minio backup update s3

  • Cannot login after restoring from backup to v.1.112.4
    davejgreen

    Last week, our GitLab app updated from v1.112.4 to v1.113.0 (or tried to) and we could no longer access GitLab. We would get a 500 error when trying to log in (with Cloudron single sign on as usual). I have been trying without any success since then to get our GitLab back.

    Problems restoring from backup
    Ideally, we'd just like to restore from a backup we took just before the update. I have tried this numerous times, and sometimes it appears to be running, but we still cannot log in. We just get the 500 error, and the logs show a lot of 500 and 502 errors. I've also seen log lines such as NoMethodError (undefined method 'id' for nil): and RuntimeError (CI job token signing key is not set):, but I'm not sure what these mean. I've tried working with ChatGPT to fix them, but I don't think ChatGPT really knows either.

    New Cloudron instance with old GitLab version
    We even tried getting a new VPS and installing a fresh, empty instance of Cloudron (v9.1.2 and then v9.1.3). I was sometimes able to install GitLab v1.112.4 and get that running. I then tried importing our backup, but we'd end up with the same 500 error when trying to log in. I'm not sure if this should work anyway though, as it is on a different domain name, so may not be compatible with the backup from the existing domain.

    Can't work out the "fix"
    I've looked at the information here: https://forum.cloudron.io/topic/15158/postgres-sql-error-update-from-v1.112.4-to-v1.113.0, which is about the same GitLab update, including the links to the GitLab issues. We have a similar log line, Healthcheck error: Error: connect ECONNREFUSED 172.18.x.x:80, that we are getting a lot, but I don't think I've seen the log lines about PG::CheckViolation: ERROR: check constraint "check_17a3a18e31" of relation "user_agent_details" is violated by some row in our case. I have tried following the "fixes", but I'm not really sure what I'm trying to do, and our case does not seem to match the information given. In PostgreSQL we already have an "organization_id" column, but our "user_agent_details" table is empty. In ~/gitlab/db/migrate the latest file is 20260212153542_add_work_item_custom_types_namespace_fk.rb, which looks like it is from 2 or 3 weeks earlier than this problem. Should there be one from the evening of 2 March, when our GitLab updated?

    Please help!
    Any help that gets us back into our GitLab would be greatly appreciated. (Then we might see if we can migrate our existing repo and issues away from GitLab into Forgejo!)

    GitLab