Error 400 in backup process with Ionos S3 Object Storage
-
Hello @dsp76 I have marked the topic as unsolved again.
Were you unable to do this yourself, or was this more of a courtesy question?
-
I've had the same problem for about a week. The strange thing is that everything had been running flawlessly for a year. However, we recently set up a second Cloudron; its backup is stored in a different bucket and uses its own key. The problem occurs on both servers. Unfortunately, it's not deterministic.
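For reference, a comparable large multipart upload can be reproduced outside Cloudron with the aws-sdk v2 package that appears in the stack trace below. This is only a minimal sketch, not Cloudron's actual upload code; the endpoint, bucket, credentials and test file are placeholders:

// Minimal sketch - placeholder endpoint, bucket, credentials and test file.
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3({
    endpoint: 'https://s3.REGION.ionoscloud.com', // placeholder region
    accessKeyId: process.env.S3_KEY,
    secretAccessKey: process.env.S3_SECRET,
    s3ForcePathStyle: true,
    signatureVersion: 'v4'
});

// Stream a large test file as a managed (multipart) upload, similar in size to the backup.
const upload = s3.upload(
    { Bucket: 'ACME-BACKUP', Key: 'snapshot/upload-test.bin', Body: fs.createReadStream('/tmp/upload-test.bin') },
    { partSize: 100 * 1024 * 1024, queueSize: 4 }
);

upload.on('httpUploadProgress', (p) => console.log('progress', p.loaded, p.part));
upload.send((err, data) => {
    if (err) return console.error('upload failed with status', err.statusCode, err.code);
    console.log('upload finished at', data.Location);
});

The failing run from the backup task itself logs the following: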
03:00:33 box:storage/s3 Upload progress: {"loaded":95400493056,"part":920,"key":"backup/snapshot/app_da39dd94-29b5-4049-9aa5-76864ebc4608.tar.gz.enc"}
Oct 16 03:00:33 /home/yellowtent/box/node_modules/aws-sdk/lib/services/s3.js:712
Oct 16 03:00:33 resp.error = AWS.util.error(new Error(), {
Oct 16 03:00:33 ^
Oct 16 03:00:33
Oct 16 03:00:33 400: null
Oct 16 03:00:33 at Request.extractError (/home/yellowtent/box/node_modules/aws-sdk/lib/services/s3.js:712:35)
Oct 16 03:00:33 at Request.callListeners (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
Oct 16 03:00:33 at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
Oct 16 03:00:33 at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:686:14)
Oct 16 03:00:33 at Request.transition (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:22:10)
Oct 16 03:00:33 at AcceptorStateMachine.runTo (/home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:14:12)
Oct 16 03:00:33 at /home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:26:10
Oct 16 03:00:33 at Request.<anonymous> (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:38:9)
Oct 16 03:00:33 at Request.<anonymous> (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:688:12)
Oct 16 03:00:33 at Request.callListeners (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
Oct 16 03:00:33 code: 400,
Oct 16 03:00:33 region: null,
Oct 16 03:00:33 time: 2025-10-16T01:00:34.559Z,
Oct 16 03:00:33 requestId: null,
Oct 16 03:00:33 extendedRequestId: undefined,
Oct 16 03:00:33 cfId: undefined,
Oct 16 03:00:33 statusCode: 400,
Oct 16 03:00:33 retryable: false,
Oct 16 03:00:33 retryDelay: 20000
Oct 16 03:00:33 }
Oct 16 03:00:33
Oct 16 03:00:33 Node.js v20.18.0
Oct 16 03:00:34 box:shell backuptask: /usr/bin/sudo -S -E --close-from=4 /home/yellowtent/box/src/scripts/backupupload.js snapshot/app_da39dd94-29b5-4049-9aa5-76864ebc4608 tgz {"localRoot":"/home/yellowtent/appsdata/da39dd94-29b5-4049-9aa5-76864ebc4608","layout":[]} errored BoxError: backuptask exited with code 1 signal null
Oct 16 03:00:34 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19)
Oct 16 03:00:34 at ChildProcess.emit (node:events:519:28)
Oct 16 03:00:34 at ChildProcess.emit (node:domain:488:12)
Oct 16 03:00:34 at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
Oct 16 03:00:34 reason: 'Shell Error',
Oct 16 03:00:34 details: {},
Oct 16 03:00:34 code: 1,
Oct 16 03:00:34 signal: null
Oct 16 03:00:34 }
Oct 16 03:00:34 box:backuptask runBackupUpload: backuptask crashed BoxError: backuptask exited with code 1 signal null
Oct 16 03:00:34 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19)
Oct 16 03:00:34 at ChildProcess.emit (node:events:519:28)
Oct 16 03:00:34 at ChildProcess.emit (node:domain:488:12)
Oct 16 03:00:34 at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
Oct 16 03:00:34 reason: 'Shell Error',
Oct 16 03:00:34 details: {},
Oct 16 03:00:34 code: 1,
Oct 16 03:00:34 signal: null
Oct 16 03:00:34 }
-
james has marked this topic as solved
-
@james it still happens sometimes. I did some more digging in the log; in the middle it reports that it couldn't find the file ("Old backup not found").
Jan 14 05:26:29 box:storage/s3 Upload progress: {"loaded":40692513140,"part":304,"Key":"snapshot/app_APP_UUID_01.tar.gz.enc","Bucket":"ACME-BACKUP"}
Jan 14 05:47:09 box:storage/s3 Upload finished. {"$metadata":{"httpStatusCode":200,"requestId":"REQUEST_ID_01-ACCOUNT_01-REGION_01","attempts":3,"totalRetryDelay":40000},"Bucket":"ACME-BACKUP","ETag":"\"\"","Key":"snapshot/app_APP_UUID_01.tar.gz.enc","Location":"S3_ENDPOINT_01/ACME-BACKUP/snapshot/app_APP_UUID_01.tar.gz.enc"}
Jan 14 05:47:09 box:backuptask upload: path snapshot/app_APP_UUID_01.tar.gz.enc site SITE_UUID_01 uploaded: {"fileCount":11571,"size":40692513140,"transferred":40692513140}
Jan 14 05:47:09 box:tasks updating task TASK_ID_01 with: {"percent":75.1935483870967,"message":"Uploading integrity information to snapshot/app_APP_UUID_01.tar.gz.enc.backupinfo (REGISTRY.DOMAIN.TLD)"}
Jan 14 05:47:10 box:storage/s3 Upload progress: {"loaded":146,"total":146,"part":1,"Key":"snapshot/app_APP_UUID_01.tar.gz.enc.backupinfo","Bucket":"ACME-BACKUP"}
Jan 14 05:47:10 box:storage/s3 Upload finished. {"$metadata":{"httpStatusCode":200,"requestId":"REQUEST_ID_02-ACCOUNT_02-REGION_01","attempts":1,"totalRetryDelay":0},"ETag":"\"ETAG_01\"","Bucket":"ACME-BACKUP","Key":"snapshot/app_APP_UUID_01.tar.gz.enc.backupinfo","Location":"https://ACME-BACKUP.s3.REGION_01.ionoscloud.com/snapshot/app_APP_UUID_01.tar.gz.enc.backupinfo"}
Jan 14 05:47:10 box:backupupload upload completed. error: null
Jan 14 05:47:10 box:backuptask runBackupUpload: result - {"result":{"stats":{"fileCount":11571,"size":40692513140,"transferred":40692513140},"integrity":{"signature":"SIGNATURE_01"}}}
Jan 14 05:47:10 box:backuptask uploadAppSnapshot: REGISTRY.DOMAIN.TLD uploaded to snapshot/app_APP_UUID_01.tar.gz.enc. 4202.695 seconds
Jan 14 05:47:10 box:backuptask backupAppWithTag: rotating REGISTRY.DOMAIN.TLD snapshot of SITE_UUID_01 to path 2026-01-14-030000-896/app_REGISTRY.DOMAIN.TLD_VERSION_01.tar.gz.enc
Jan 14 05:47:10 box:tasks updating task TASK_ID_01 with: {"percent":75.1935483870967,"message":"Copying (multipart) snapshot/app_APP_UUID_01.tar.gz.enc"}
Jan 14 05:47:10 box:tasks updating task TASK_ID_01 with: {"percent":75.1935483870967,"message":"Copying part 1 - ACME-BACKUP/snapshot/app_APP_UUID_01.tar.gz.enc bytes=0-1073741823"}
Jan 14 05:47:10 box:tasks updating task TASK_ID_01 with: {"percent":75.1935483870967,"message":"Copying part 2 - ACME-BACKUP/snapshot/app_APP_UUID_01.tar.gz.enc bytes=1073741824-2147483647"}
Jan 14 05:47:10 box:tasks updating task TASK_ID_01 with: {"percent":75.1935483870967,"message":"Copying part 3 - ACME-BACKUP/snapshot/app_APP_UUID_01.tar.gz.enc bytes=2147483648-3221225471"}
Jan 14 05:47:10 box:tasks updating task TASK_ID_01 with: {"percent":75.1935483870967,"message":"Aborting multipart copy of snapshot/app_APP_UUID_01.tar.gz.enc"}
Jan 14 05:47:10 box:storage/s3 copy: s3 copy error when copying snapshot/app_APP_UUID_01.tar.gz.enc: NoSuchKey: UnknownError
Jan 14 05:47:10 box:backuptask copy: copy to 2026-01-14-030000-896/app_REGISTRY.DOMAIN.TLD_VERSION_01.tar.gz.enc errored. error: Old backup not found: snapshot/app_APP_UUID_01.tar.gz.enc
Jan 14 05:47:10 box:backuptask fullBackup: app REGISTRY.DOMAIN.TLD backup finished. Took 4203.103 seconds
Jan 14 05:47:10 box:locks write: current locks: {"full_backup_task_SITE_UUID_01":null}
Jan 14 05:47:10 box:locks release: app_backup_APP_UUID_01
Jan 14 05:47:10 box:tasks setCompleted - TASK_ID_01: {"result":null,"error":{"message":"Old backup not found: snapshot/app_APP_UUID_01.tar.gz.enc","reason":"Not found"},"percent":100}
Jan 14 05:47:10 box:tasks updating task TASK_ID_01 with: {"completed":true,"result":null,"error":{"message":"Old backup not found: snapshot/app_APP_UUID_01.tar.gz.enc","reason":"Not found"},"percent":100}
Jan 14 05:47:10 box:taskworker Task took 6429.474 seconds
Jan 14 05:47:10 BoxError: Old backup not found: snapshot/app_APP_UUID_01.tar.gz.enc
Jan 14 05:47:10 at throwError (/home/yellowtent/box/src/storage/s3.js:387:49)
Jan 14 05:47:10 at copyInternal (/home/yellowtent/box/src/storage/s3.js:454:16)
Jan 14 05:47:10 at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
Jan 14 05:47:10 at async Object.copy (/home/yellowtent/box/src/storage/s3.js:488:12)
Jan 14 05:47:10 at async Object.copy (/home/yellowtent/box/src/backupformat/tgz.js:282:5)
Jan 14 05:47:10 Exiting with code 0
I checked the bucket and can see it's still there:
app_APP_UUID_01.tar.gz.enc 37.90 GB 14.1.2026, 05:36:49
app_APP_UUID_01.tar.gz.enc.backupinfo 146 bytes 14.1.2026, 05:47:09
Please also check the timestamps.
What is causing the process to not find the file and abort?
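For reference, the failing rotation step can be approximated from the S3 API side with the @aws-sdk/client-s3 package that shows up in these logs. This is only a minimal sketch with placeholder endpoint, region, credentials and key names, not Cloudron's actual code:

// Minimal sketch - placeholder endpoint, region, credentials and key names.
const { S3Client, HeadObjectCommand, CreateMultipartUploadCommand,
        UploadPartCopyCommand, CompleteMultipartUploadCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({
    endpoint: 'https://s3.REGION.ionoscloud.com',
    region: 'REGION',
    forcePathStyle: true,
    credentials: { accessKeyId: process.env.S3_KEY, secretAccessKey: process.env.S3_SECRET }
});

const Bucket = 'ACME-BACKUP';
const sourceKey = 'snapshot/app_APP_UUID_01.tar.gz.enc';
const targetKey = 'copy-test/app_APP_UUID_01.tar.gz.enc';
const partSize = 1024 * 1024 * 1024; // 1 GiB ranges, matching the "bytes=0-1073741823" messages above

async function copySnapshot() {
    // Confirm the source object is visible to the API right now.
    const head = await s3.send(new HeadObjectCommand({ Bucket, Key: sourceKey }));

    // Server-side multipart copy, one 1 GiB range per part.
    const { UploadId } = await s3.send(new CreateMultipartUploadCommand({ Bucket, Key: targetKey }));
    const parts = [];
    for (let start = 0, partNumber = 1; start < head.ContentLength; start += partSize, partNumber++) {
        const end = Math.min(start + partSize, head.ContentLength) - 1;
        const res = await s3.send(new UploadPartCopyCommand({
            Bucket, Key: targetKey, UploadId, PartNumber: partNumber,
            CopySource: `${Bucket}/${sourceKey}`,
            CopySourceRange: `bytes=${start}-${end}`
        }));
        parts.push({ ETag: res.CopyPartResult.ETag, PartNumber: partNumber });
        console.log('copied part', partNumber, `bytes=${start}-${end}`);
    }
    await s3.send(new CompleteMultipartUploadCommand({
        Bucket, Key: targetKey, UploadId, MultipartUpload: { Parts: parts }
    }));
    console.log('copy finished');
}

copySnapshot().catch((err) => console.error('copy failed:', err.name, err.message));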
-
james has marked this topic as unsolved
-
So the actual app_APP_UUID_01.tar.gz.enc was created prior to the multipart copy attempt. Also, it seems the first two parts were copied fine, and then suddenly the S3 object could not be found for a brief period? Is this easily reproducible or more of an occasional hiccup?
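If it turns out to be an occasional hiccup, one way to narrow it down would be to poll the freshly uploaded snapshot key right after a backup finishes and see whether it ever drops out of view. A rough sketch, again with placeholder endpoint, region, credentials, bucket and key:

// Rough sketch - placeholder endpoint, region, credentials, bucket and key.
const { S3Client, HeadObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({
    endpoint: 'https://s3.REGION.ionoscloud.com',
    region: 'REGION',
    forcePathStyle: true,
    credentials: { accessKeyId: process.env.S3_KEY, secretAccessKey: process.env.S3_SECRET }
});

// Poll the snapshot key once per second for a few minutes and log any moment
// in which the API reports it as missing.
async function probe(Bucket, Key, seconds = 300) {
    for (let i = 0; i < seconds; i++) {
        try {
            const head = await s3.send(new HeadObjectCommand({ Bucket, Key }));
            console.log(new Date().toISOString(), 'visible, size', head.ContentLength);
        } catch (err) {
            console.log(new Date().toISOString(), 'NOT visible:', err.name);
        }
        await new Promise((resolve) => setTimeout(resolve, 1000));
    }
}

probe('ACME-BACKUP', 'snapshot/app_APP_UUID_01.tar.gz.enc').catch(console.error);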