Repository archives ballooned to take up all space on disk
-
I just ran into an issue where the entire disk was consumed by Gitea repository archives, roughly 48GB.
Here's the top several lines of output from `du | sort -n -r`:

```
46659924 .
44862768 ./appdata
43136132 ./appdata/repo-archive
33266924 ./appdata/repo-archive/19
9538276 ./appdata/repo-archive/4
1835424 ./appdata/repo-archive/19/4e
1797076 ./repository
1741536 ./repository/mirror
1725744 ./appdata/packages
1721100 ./appdata/repo-archive/19/75
1544116 ./appdata/repo-archive/19/ad
1217724 ./repository/mirror/erpnext.git
1210888 ./repository/mirror/erpnext.git/objects
1187404 ./repository/mirror/erpnext.git/objects/pack
1005688 ./appdata/repo-archive/19/29
981364 ./appdata/repo-archive/19/b1
953240 ./appdata/repo-archive/19/25
949260 ./appdata/repo-archive/19/b4
943956 ./appdata/repo-archive/19/da
932688 ./appdata/repo-archive/19/c9
918308 ./appdata/repo-archive/19/9d
916680 ./appdata/repo-archive/19/2b
914320 ./appdata/repo-archive/19/6b
911324 ./appdata/repo-archive/19/ac
900540 ./appdata/repo-archive/19/df
899540 ./appdata/repo-archive/19/ca
898788 ./appdata/repo-archive/19/b8
897392 ./appdata/repo-archive/19/15
895184 ./appdata/repo-archive/19/2e
890984 ./appdata/repo-archive/19/65
890616 ./appdata/repo-archive/19/9b
889036 ./appdata/repo-archive/19/f7
838684 ./appdata/repo-archive/19/55
816740 ./appdata/repo-archive/19/5e
815840 ./appdata/repo-archive/19/89
```
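If the admin UI isn't reachable (for example because the disk is already full), the same cache can be inspected and cleared by hand. A rough sketch, assuming a shell in the app's data directory, i.e. the same working directory the du output above was taken from:

```sh
# Sketch only: run from the directory the du output above was taken in.
# repo-archive holds cached ZIP/TAR.GZ downloads that Gitea regenerates
# on demand, so clearing it should be safe (ideally with Gitea stopped).
du -sh ./appdata/repo-archive     # confirm how much the cache is using
rm -rf ./appdata/repo-archive/*   # drop the cached archives
```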
My Gitea instance is barely used and only has a few repos that I'm mirroring.
I was able to solve the issue by logging in as root (I hadn't changed the root password) and running the "Delete all repositories' archives (ZIP, TAR.GZ, etc.)" cron task from the Site Administration page. Now the app uses 3.3GB of disk. I hope "repository archives" aren't important, but everything seems to be working OK again... I decided to report this in case there's an upstream issue.
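If this keeps recurring, the cache can also be trimmed automatically. A minimal app.ini sketch, assuming Gitea's documented `[cron.archive_cleanup]` task; the exact defaults vary by version, and `DISABLE_DOWNLOAD_SOURCE_ARCHIVES` may not exist on older releases:

```ini
; Sketch only -- check the Gitea config cheat sheet for your version.
; Periodically delete generated repository archives older than a day.
[cron.archive_cleanup]
ENABLED = true
RUN_AT_START = true
SCHEDULE = @midnight
OLDER_THAN = 24h

; Optionally stop offering source-archive downloads entirely
; (newer Gitea versions only).
[repository]
DISABLE_DOWNLOAD_SOURCE_ARCHIVES = true
```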
-
This happened again today: 48+GB of apparently useless "repository archives", whatever those are. ~~Even login fails when the disk is full, so I can't even log in to fix it.~~ I was finally able to log in.
Is there a way to set disk quotas for apps, so that when they misbehave the whole Cloudron doesn't go belly up? Sigh.
-
Yeah, it probably has to do with the Update Mirrors cron task that runs every 10 minutes, since that's the only activity on the server. I did notice that the Update Mirrors run count was around 280 in both cases. Maybe a recent update is leaving junk behind when it updates mirrors that isn't being cleaned up.
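For what it's worth, 280 runs at one run every 10 minutes is roughly 2800 minutes, i.e. just under two days, which lines up with how quickly the disk filled both times. As a stopgap, the sync scheduler can be slowed down; a minimal app.ini sketch, assuming Gitea's documented `[cron.update_mirrors]` task (the hourly schedule is just an example value):

```ini
; Sketch only -- this cron controls how often Gitea checks which mirrors
; are due for a sync; per-repo mirror intervals are set on each mirror.
[cron.update_mirrors]
ENABLED = true
SCHEDULE = @every 1h
```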
-
Would https://github.com/pojntfx/octarchive be better?
-
Release 1.20.0 included these PRs related to mirroring; maybe one of them caused the issue: