Cloudron Forum


rlp10 (@rlp10)

Posts: 9 · Topics: 3 · Shares: 0 · Groups: 0 · Followers: 0 · Following: 0

Posts

  • Gitlab runners fail every 30th build
    rlp10

    Thank you, Nebulon, for your reply.

    The runners are on a different machine.

    I don't think the machine with the runners is running out of space, because all its partitions have plenty available (the root has 300GB free). I think the error is originating from the Cloudron machine (although I may be wrong).

    To try to confirm where the error originates, I will set up another GitLab runner client on a different machine and see whether it has the same problem. Hopefully that will isolate the error to either the Cloudron server or the runner machines.
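
In the meantime, a quick sanity check worth running on both machines (paths here are just examples): "No space left on device" can also mean inode exhaustion, which `df -h` alone will not show.

```shell
# Free space on the relevant filesystem(s).
df -h /

# Free inodes: a filesystem can report gigabytes free yet still
# fail writes with ENOSPC if it has run out of inodes.
df -i /
```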

    GitLab

  • Gitlab runners fail every 30th build
    rlp10

    We have GitLab installed as a Cloudron app on our VPS. Until recently, our gitlab-runner service was working fine. On Tuesday 3rd June we started getting unexpected failed jobs from the gitlab-runner. On further testing, it appears that the first 29 jobs pass as expected, and then the 30th job, and any subsequent jobs, fail until the service is stopped and restarted. The error seems to happen while the pipeline job is cloning a repository. This is an example log from one of the failed jobs:

    Running with gitlab-runner 17.11.0 (v17.11.0)
    on default_mydevice_somenumber A-number, system ID: id_number
    Preparing the "shell" executor 00:00
    Using Shell (bash) executor...
    Preparing environment
    Running on mydevice...
    Getting source from Git repository
    Fetching changes with git depth set to 20...
    Reinitialized existing Git repository in /var/lib/private/gitlab-runner/builds/A-number/0/OrgName/mainRepo/.git/
    Checking out 130d8ca7 as detached HEAD (ref is refs/merge-requests/294/head)...
    Skipping Git submodules setup
    Executing "step_script" stage of the job script 00:03
    $ export repo2=$(mktemp -d)
    $ GIT_TRACE=1 GIT_FLUSH=1 git -c core.progress=false clone https://gitlab-ci-token:$CI_JOB_TOKEN@my-self-hosted-git.com/OrgName/repo2 $repo2
    09:27:40.005972 git.c:476 trace: built-in: git clone https://gitlab-ci-token:[MASKED]@my-self-hosted-git.com/OrgName/repo2 /tmp/tmp.NDTArMtQAp
    Cloning into '/tmp/tmp.NDTArMtQAp'...
    09:27:40.009784 run-command.c:667 trace: run_command: git remote-https origin https://gitlab-ci-token:[MASKED]@my-self-hosted-git.com/OrgName/repo2
    09:27:40.009805 run-command.c:759 trace: start_command: /nix/store/805a5wv1cyah5awij184yfad1ksmbh9f-git-2.49.0/libexec/git-core/git remote-https origin https://gitlab-ci-token:[MASKED]@my-self-hosted-git.com/OrgName/repo2
    09:27:40.011889 git.c:772 trace: exec: git-remote-https origin https://gitlab-ci-token:[MASKED]@my-self-hosted-git.com/OrgName/repo2
    09:27:40.011944 run-command.c:667 trace: run_command: git-remote-https origin https://gitlab-ci-token:[MASKED]@my-self-hosted-git.com/OrgName/repo2
    09:27:40.011965 run-command.c:759 trace: start_command: /nix/store/805a5wv1cyah5awij184yfad1ksmbh9f-git-2.49.0/libexec/git-core/git-remote-https origin https://gitlab-ci-token:[MASKED]@my-self-hosted-git.com/OrgName/repo2
    warning: redirecting to https://my-self-hosted-git.com/OrgName/repo2.git/
    09:27:40.411977 run-command.c:667 trace: run_command: git index-pack --stdin --fix-thin '--keep=fetch-pack 1061622 on mydevice' --check-self-contained-and-connected
    09:27:40.412021 run-command.c:759 trace: start_command: /nix/store/805a5wv1cyah5awij184yfad1ksmbh9f-git-2.49.0/libexec/git-core/git index-pack --stdin --fix-thin '--keep=fetch-pack 1061622 on mydevice' --check-self-contained-and-connected
    09:27:40.415115 git.c:476 trace: built-in: git index-pack --stdin --fix-thin '--keep=fetch-pack 1061622 on mydevice' --check-self-contained-and-connected
    fatal: write error: No space left on device
    fatal: fetch-pack: invalid index-pack output
    Running after_script
    Running after script...
    $ rm -rf $repo2
    Cleaning up project directory and file based variables
    ERROR: Job failed: exit status 1

    The device on which the gitlab-runner runs has plenty of space, as does the Linode server. We tried increasing the RAM on the Linode server, but the 30th job onwards still failed after the first 29 passed. This behaviour suggests something is building up somewhere and eventually prevents further jobs from succeeding, but we are not sure what. We would appreciate any help or pointers to solve this issue.

    I wonder if the failure was caused by an update to the GitLab app. Perhaps someone else has reported similar problems.

    Thanks in advance.
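
One thing worth checking (purely a guess on our part): each job clones into a fresh `mktemp -d` under /tmp, so if /tmp on the runner host is a size-limited tmpfs and failed or interrupted jobs leave their clone directories behind, it could fill up after a fixed number of jobs. A quick sketch:

```shell
# Is /tmp its own mount (possibly a size-capped tmpfs)?
# findmnt prints nothing and exits non-zero if /tmp lives on the root fs.
findmnt /tmp || echo "/tmp is not a separate mount"

# Current usage of /tmp, and what is accumulating there.
df -h /tmp
du -sh /tmp/tmp.* 2>/dev/null | sort -h | tail
```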

    GitLab

  • Backup Fails: "Unknown system error -74"
    rlp10

    @girish Thanks for that information

    I'm pleased to say that the backup did run without encryption. Since I own the target backup machine, I will just proceed with that.

    I have to admit, I'm relieved to see a complete backup of my data!

    If anyone does get the chance, I would be grateful to know if Minio is preferred as a backup target in comparison with sshfs?

    Also thanks @nebulon for all your help troubleshooting this issue.

    Support backups sshfs

  • Backup Fails: "Unknown system error -74"
    rlp10

    @nebulon Ugh, it still didn't work, despite my having removed the directory where it previously failed. It is just failing somewhere else now.

    Following your latest post, I'm going to try a backup which is not encrypted to see if that works.

    Failing that, do you think it might be more reliable to back up to a Minio installation rather than using sshfs? I'm running an Ubuntu-based distro on the machine holding the backups, so it shouldn't be difficult to install, I imagine.

    I could even try it out in Docker first, just to see if that fixes it.

    Support backups sshfs

  • Backup Fails: "Unknown system error -74"
    rlp10

    @nebulon Thanks for that.

    Yes, it does seem to fail even when I back up just Nextcloud.

    Looking at the logs, it is always the same directory that fails. Inspecting it, it seems to contain files with very long names.

    I've therefore compressed the directory into a tarball and deleted the original, and I'm re-running the backup to see if it works now. Perhaps I'm hitting a limit on the length of filenames.
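
For anyone hitting the same thing, a quick sketch (GNU find assumed) to surface the longest file names under a directory. Note that encrypted backups add per-name overhead, so the effective limit can be well below the usual 255-byte per-component cap:

```shell
# Print the 10 longest file names (name only, not full path)
# under the current directory, longest first.
find . -printf '%f\n' 2>/dev/null | awk '{ print length($0), $0 }' | sort -rn | head
```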

    If it fails, then I will let you have the logs as you have kindly suggested.

    Support backups sshfs

  • Backup Fails: "Unknown system error -74"
    rlp10

    @nebulon Thanks for your reply and sorry for the delay - I do still have the same issue.

    I avoided the error by backing up with tarballs rather than rsync. However, I now want to sort it out properly with rsync and hard links, as tarballs take up too much space.

    I ran the backup again yesterday and I still have the same error message.

    To answer your questions:

    I have remounted and retriggered the backup on several occasions - it's always the same error.

    I am using encryption, yes.

    It does always seem to be the same application causing the problem. Looking at the ID and browsing /home/yellowtent on my server, I think the UUID refers to my Nextcloud installation. Admittedly, that is by far the app with the most data.
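
A quick way to check which app the UUID belongs to by size (assuming the usual Cloudron layout, where each app's data lives under a UUID-named directory in /home/yellowtent/appsdata; adjust the path if yours differs):

```shell
# Show per-app data sizes, smallest to largest; the largest entry
# is probably Nextcloud. The path is an assumption about the layout.
du -sh /home/yellowtent/appsdata/* 2>/dev/null | sort -h
```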

    Perhaps I could try running the backup with Nextcloud offline to see if that works. It wouldn't be a permanent solution, but it would be interesting to know if that works at least.

    Also, perhaps I should set up a separate "volume" for my Nextcloud data and migrate it over. I'm a little nervous about that, though, because I'd have to make sure that all the sharing links etc. in Nextcloud still work.

    Any other ideas how to troubleshoot this? Thanks in advance.

    Support backups sshfs

  • Shared Resources
    rlp10

    Hi folks

    Is it possible to set up a shared resource (like a meeting room) using SOGo? I have tried googling, but the suggestions all seem to be quite development-based (like having to make manual changes to the database!).

    Is this something that SOGo supports or not? Basically, it would just need to be a user that auto-accepts invitations, I guess.

    Many thanks
    Richard

    SOGo

  • Backup Fails: "Unknown system error -74"
    rlp10

    I am getting the following error each time I try to back up:

    "Unknown system error -74: Unknown system error -74, open ..." and then the path to my backup.

    I am backing up over sshfs. At first it seemed to work fine, but now it reliably fails with this same error.

    If I watch the backup, it seems to fail during the Nextcloud phase, and I wonder if the problem relates to Nextcloud.

    I have tried searching for that error number, but I can hardly find anything. My only hit is that it is possibly an error generated by Node.js. Perhaps Node is used to perform the backup, or elsewhere in the Cloudron infrastructure?
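
For reference on the number itself: Node.js (via libuv) reports failed system calls as negated errno values, so -74 should correspond to errno 74, which on Linux is EBADMSG ("Bad message"), an error that a FUSE mount such as sshfs can surface. A quick way to check the mapping locally (Python is used here only as a convenient errno table):

```shell
# Map errno 74 to its symbolic name and message (Linux).
python3 -c 'import errno, os; print(errno.errorcode[74], "-", os.strerror(74))'
# prints: EBADMSG - Bad message
```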

    I can SSH into the Cloudron server and browse the remote server under /backups_sshfs/. I can view the backups there, and it responds reasonably promptly.

    Otherwise, I'm out of ideas. Does anyone have any suggestions for how to troubleshoot this issue?

    Support backups sshfs

  • Jitsi Meet
    rlp10

    +1 for Jitsi Meet on Cloudron

    I'm using Kopano and I don't find it as reliable for establishing connections as the public Jitsi server. I would far prefer to use my own machine, though.

    Is this being actively worked on? Can I help at all, say by testing?

    Would it help if there was a small bounty for getting it working (if that is appropriate in this community)?

    App Wishlist