Backups Failing Frequently
-
I use Backblaze B2 for backups.
They have been failing more often than not in the last week.
The error is always "InternalError: CPU too busy", yet my usage graphs show the CPU never going over 50%.
Is this a Cloudron problem or a B2 problem?
-
I find that failed backups are usually due to interrupted/slow network connections, not necessarily the source or the destination.
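To put rough numbers on that: at the ~1MBps these progress messages often report, even a modestly sized app snapshot needs hours of uninterrupted transfer, which is a long window for a flaky connection to drop. A back-of-envelope sketch (the sizes are just examples):

```python
def upload_hours(size_mb: float, rate_mbps: float) -> float:
    """Rough upload time in hours for a snapshot of size_mb at rate_mbps (MB/s)."""
    return size_mb / rate_mbps / 3600

# e.g. a ~25 GB snapshot at 1 MB/s needs about 7 hours of uninterrupted transfer
print(round(upload_hours(25376, 1), 1))  # → 7.0
```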
-
InternalError: CPU too busy
@Dave-Swift ok, that's an interesting message! Never seen that; I wasn't even aware one could detect that the CPU is too busy. If you go back to the failing backups, what do the logs say? Maybe the last 100-200 lines will give us an idea of where it is failing.
-
@girish here is the most recent one
Mar 18 00:49:48 box:tasks update 713: {"percent":21,"message":"Uploading backup 25376M@1MBps (example.com)"}
Mar 18 00:49:58 box:tasks update 713: {"percent":21,"message":"Uploading backup 25386M@1MBps (example.com)"}
Mar 18 00:50:05 box:shell backup-snapshot/app_a64992c5-6f6d-401e-9edc-fa8d1f52adfa: /usr/bin/sudo -S -E --close-from=4 /home/yellowtent/box/src/scripts/backupupload.js snapshot/app_a64992c5-6f6d-401e-9edc-fa8d1f52adfa tgz {"localRoot":"/home/yellowtent/appsdata/a64992c5-6f6d-401e-9edc-fa8d1f52adfa","layout":[]} errored BoxError: backup-snapshot/app_a64992c5-6f6d-401e-9edc-fa8d1f52adfa exited with code 1 signal null
Mar 18 00:50:05 box:tasks setCompleted - 713: {"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:163:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:360:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:382:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
Mar 18 00:50:05 box:tasks update 713: {"percent":100,"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:163:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:360:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:382:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:503:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
Mar 18 00:50:05 box:taskworker Task took 6604.595 seconds
Backuptask crashed
-
Mar 19 09:49:52 box:taskworker Starting task 1060. Logs are at /home/yellowtent/platformdata/logs/tasks/1060.log
Mar 19 09:49:52 box:tasks update 1060: {"percent":1,"message":"Backing up netdata.example.com (1/2)"}
Mar 19 09:49:52 box:tasks update 1060: {"percent":21,"message":"Snapshotting app netdata.example.com"}
Mar 19 09:49:52 box:backuptask snapshotApp: netdata.example.com took 0.026 seconds
Mar 19 09:49:52 box:services backupAddons
Mar 19 09:49:52 box:services backupAddons: backing up ["localstorage","proxyAuth"]
Mar 19 09:49:52 box:tasks update 1060: {"percent":21,"message":"Uploading app snapshot netdata.example.com"}
Mar 19 09:49:52 box:shell backup-snapshot/app_03cfa8c6-9930-4b76-8604-a9de46be6f08 /usr/bin/sudo -S -E --close-from=4 /home/yellowtent/box/src/scripts/backupupload.js snapshot/app_03cfa8c6-9930-4b76-8604-a9de46be6f08 tgz {"localRoot":"/home/yellowtent/appsdata/03cfa8c6-9930-4b76-8604-a9de46be6f08","layout":[]}
Mar 19 09:50:03 box:tasks update 1060: {"percent":21,"message":"Uploading backup 41M@4MBps (netdata.example.com)"}
Mar 19 09:50:13 box:tasks update 1060: {"percent":21,"message":"Uploading backup 55M@1MBps (netdata.example.com)"}
[...]
Mar 19 13:51:39 box:tasks update 1060: {"percent":21,"message":"Uploading backup 55M@0MBps (netdata.example.com)"}
On my side, backups are stuck on this line (it's been like that for a while now).
Don't know if it's related, but the percentage is the same and the symptom seems identical (no upload speed).
And my backup goes to the local filesystem, so it shouldn't be a network problem.
-
I've also had similar issues with backups the past few days.
2024-03-20T08:58:00.790Z box:shell backup-snapshot/app_c4a27e93-af6e-44e2-b7cf-9056fa01c5c3 code: null, signal: SIGKILL
2024-03-20T08:58:00.792Z box:taskworker Task took 10678.952 seconds
2024-03-20T08:58:00.793Z box:tasks setCompleted - 16169: {"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:164:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:361:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:504:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
2024-03-20T08:58:00.793Z box:tasks update 16169: {"percent":100,"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:164:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:361:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:383:5)\n at async fullBackup (/home/yellowtent/box/src/backuptask.js:504:29)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
BoxError: Backuptask crashed
It would be nice if the error, or the code it points to, had more info about what happened. Best I can tell, the process got killed somehow. I increased the backup task memory and will see what happens next time.
Edit: it didn't crash yesterday, so maybe it just needed more RAM.
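For what it's worth, the SIGKILL in the log plus "needed more RAM" fits an out-of-memory kill. An uploader's memory use is normally bounded by the part size it buffers, not by the archive size; here is a minimal sketch of that idea (illustrative only, not Cloudron's actual uploader, names made up):

```python
import io

def count_upload_parts(src, part_size=8 * 1024 * 1024):
    """Read a stream in fixed-size parts so peak memory stays at one part,
    no matter how large the archive is (illustrative sketch)."""
    parts = 0
    while True:
        chunk = src.read(part_size)
        if not chunk:
            break
        # a real uploader would send `chunk` to the storage backend here
        parts += 1
    return parts

# 20 MB of data in 8 MB parts -> 3 parts, but never more than 8 MB buffered
print(count_upload_parts(io.BytesIO(b"x" * (20 * 1024 * 1024))))  # → 3
```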
-
I got that "Uploading backup ***M@0MBps" too. I use Contabo Object Storage, but it happened on Backblaze and HetznerStorageBox as well. On some days it seems to get stuck for 8 hours until I cancel. Additional information: I use tgz backups with encryption enabled.
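Since several of us see transfers sitting at 0MBps for hours, a stall watchdog over the throughput samples would at least fail fast instead of hanging all night. A hypothetical sketch of the idea (not an existing Cloudron option, just illustration):

```python
def is_stalled(rate_samples, window=3):
    """True once the last `window` throughput samples (MB/s) are all zero,
    i.e. the transfer has made no progress recently (illustrative only)."""
    recent = rate_samples[-window:]
    return len(recent) == window and all(r == 0 for r in recent)

print(is_stalled([4, 1, 1, 0, 0, 0]))  # last three samples are zero -> True
print(is_stalled([4, 1, 0]))           # only one zero sample so far -> False
```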