Cloudron Forum


Error 400 in backup process with Ionos S3 Object Storage

Unsolved | Support
7 Posts | 3 Posters | 55 Views
  • dsp76 wrote (last edited by dsp76):
    #1

    Hi there,
    we back up to IONOS S3 Object Storage, and the backup task crashed with the following error:

    Aug 29 03:04:58 /home/yellowtent/box/node_modules/aws-sdk/lib/services/s3.js:712
    Aug 29 03:04:58 resp.error = AWS.util.error(new Error(), {
    Aug 29 03:04:58 ^
    Aug 29 03:04:58
    Aug 29 03:04:58 400: null
    Aug 29 03:04:58 at Request.extractError (/home/yellowtent/box/node_modules/aws-sdk/lib/services/s3.js:712:35)
    Aug 29 03:04:58 at Request.callListeners (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    Aug 29 03:04:58 at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    Aug 29 03:04:58 at Request.emit (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:686:14)
    Aug 29 03:04:58 at Request.transition (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:22:10)
    Aug 29 03:04:58 at AcceptorStateMachine.runTo (/home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:14:12)
    Aug 29 03:04:58 at /home/yellowtent/box/node_modules/aws-sdk/lib/state_machine.js:26:10
    Aug 29 03:04:58 at Request.<anonymous> (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:38:9)
    Aug 29 03:04:58 at Request.<anonymous> (/home/yellowtent/box/node_modules/aws-sdk/lib/request.js:688:12)
    Aug 29 03:04:58 at Request.callListeners (/home/yellowtent/box/node_modules/aws-sdk/lib/sequential_executor.js:116:18) {
    Aug 29 03:04:58 code: 400,
    Aug 29 03:04:58 region: null,
    Aug 29 03:04:58 time: 2025-08-29T01:04:58.491Z,
    Aug 29 03:04:58 requestId: null,
    Aug 29 03:04:58 extendedRequestId: undefined,
    Aug 29 03:04:58 cfId: undefined,
    Aug 29 03:04:58 statusCode: 400,
    Aug 29 03:04:58 retryable: false,
    Aug 29 03:04:58 retryDelay: 20000
    Aug 29 03:04:58 }
    Aug 29 03:04:58
    Aug 29 03:04:58 Node.js v20.18.0
    Aug 29 03:04:58 box:shell backuptask: /usr/bin/sudo -S -E --close-from=4 /home/yellowtent/box/src/scripts/backupupload.js snapshot/app_2315967d-42f4-4e64-9935-f62c3e6e858e tgz {"localRoot":"/home/yellowtent/appsdata/2315967d-42f4-4e64-9935-f62c3e6e858e","layout":[]} errored BoxError: backuptask exited with code 1 signal null
    Aug 29 03:04:58 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19)
    Aug 29 03:04:58 at ChildProcess.emit (node:events:519:28)
    Aug 29 03:04:58 at ChildProcess.emit (node:domain:488:12)
    Aug 29 03:04:58 at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
    Aug 29 03:04:58 reason: 'Shell Error',
    Aug 29 03:04:58 details: {},
    Aug 29 03:04:58 code: 1,
    Aug 29 03:04:58 signal: null
    Aug 29 03:04:58 }
    Aug 29 03:04:58 box:backuptask runBackupUpload: backuptask crashed BoxError: backuptask exited with code 1 signal null
    Aug 29 03:04:58 at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19)
    Aug 29 03:04:58 at ChildProcess.emit (node:events:519:28)
    Aug 29 03:04:58 at ChildProcess.emit (node:domain:488:12)
    Aug 29 03:04:58 at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
    Aug 29 03:04:58 reason: 'Shell Error',
    Aug 29 03:04:58 details: {},
    Aug 29 03:04:58 code: 1,
    Aug 29 03:04:58 signal: null
    Aug 29 03:04:58 }
    Aug 29 03:04:58 box:backuptask fullBackup: app www.REDACTED.com backup finished. Took 131.683 seconds
    Aug 29 03:04:58 box:locks write: current locks: {"backup_task":null}
    Aug 29 03:04:58 box:locks release: app_2315967d-42f4-4e64-9935-f62c3e6e858e
    Aug 29 03:04:58 box:taskworker Task took 298.068 seconds
    Aug 29 03:04:58 box:tasks setCompleted - 6937: {"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:170:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:369:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:391:5)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
    Aug 29 03:04:58 box:tasks update 6937: {"percent":100,"result":null,"error":{"stack":"BoxError: Backuptask crashed\n at runBackupUpload (/home/yellowtent/box/src/backuptask.js:170:15)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:369:5)\n at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:391:5)","name":"BoxError","reason":"Internal Error","details":{},"message":"Backuptask crashed"}}
    Aug 29 03:04:58 BoxError: Backuptask crashed
    Aug 29 03:04:58 at runBackupUpload (/home/yellowtent/box/src/backuptask.js:170:15)
    Aug 29 03:04:58 at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    Aug 29 03:04:58 at async uploadAppSnapshot (/home/yellowtent/box/src/backuptask.js:369:5)
    Aug 29 03:04:58 at async backupAppWithTag (/home/yellowtent/box/src/backuptask.js:391:5)
    

    Is there a reason why it failed?

    It doesn't always crash...


    • p44 (translator) wrote:
      #2

      Hello @dsp76

      • Have you tested with alternative storage providers or regions to determine whether this is specific to the Ionos S3 zone?
      • Which region are you using for your S3 bucket?
      • Where is the source data stored?
      • How frequently does this issue occur?
      • dsp76 wrote:
        #3

        Hi,
        1.) We used a Hetzner Storage Box before and it was pretty stable, but that's a while ago. We had to switch because we didn't want our backup location to be at the same provider as our server.

        2.) The IONOS S3 region is: eu-central-3 (Berlin)
        3.) Not sure what this means?
        4.) Just now, not in the days before.


        • p44 (translator) wrote:
          #4

          @dsp76

          1. I meant the origin of the data: in which data center (EU, US, and so on) is the source stored, and is it far away from the Ionos S3 region (the destination)?

          What I have in mind is that there could be some link issue between source and destination...

          If I'm not wrong, the Hetzner Storage Box service does not have any S3 endpoint, just SFTP, WebDAV, and a few others.

          So what I can suggest is to debug using another S3 service. You could try Backblaze, iDrive e2, or Exoscale.

          Can you also tell us more about the Cloudron instance's resources? How much RAM do you have? CPU? Is it a bare metal server or a VPS?

          I think all this information would help to better understand the issue.

          • nebulon (Staff) wrote:
            #5

            If we have a way to reliably reproduce the issue, we can see if there is a workaround for that S3 provider, but if this is not happening all the time, it is most likely an issue on the provider's end. Sadly, the S3 implementations of different providers do not always behave the same way. We use the AWS S3 SDK for all requests, which is the standard.
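
            (For reference, a minimal sketch of how an S3-compatible provider is typically wired up with the aws-sdk v2 client that appears in the stack trace above. This is not Cloudron's actual backup code; the IONOS endpoint URL and bucket name are assumptions for illustration only.)

            // Minimal sketch (not Cloudron's code): point aws-sdk v2 at an S3-compatible provider.
            // The endpoint URL and bucket name below are assumptions.
            const AWS = require('aws-sdk');

            const s3 = new AWS.S3({
                endpoint: 'https://s3.eu-central-3.ionoscloud.com', // assumed IONOS endpoint for eu-central-3
                region: 'eu-central-3',
                accessKeyId: process.env.S3_ACCESS_KEY_ID,
                secretAccessKey: process.env.S3_SECRET_ACCESS_KEY,
                signatureVersion: 'v4',
                s3ForcePathStyle: true // many non-AWS providers expect path-style URLs
            });

            // A HeadBucket call is a cheap way to check whether endpoint, region and
            // credentials are accepted before suspecting the backup task itself.
            s3.headBucket({ Bucket: 'my-backup-bucket' }, (error) => {
                if (error) console.error('S3 check failed:', error.statusCode, error.code);
                else console.log('S3 endpoint reachable');
            });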

            • dsp76 wrote:
              #6

              Hi there,
              it's a Virtual Dedicated Server at Hetzner, decently sized with 32 GB RAM and 8 cores.
              When I drop the error message into an AI, it also points towards the provider: suddenly an HTTP 400 while uploading...
              The affected app ID is the largest one, our own Docker registry with about 40 GB.

              https://docs.ionos.com/cloud/storage-and-backup/ionos-object-storage/overview/limitations

              Currently the backup uploads in multipart chunks of max. 512 MB. The backup process may use up to 10 GB of memory, which should be enough. For testing I reduced the part size to 128 MB. Let's see if that fixes it.
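
              (For illustration, a minimal sketch of how the multipart part size is typically set with the aws-sdk v2 managed uploader. This is not Cloudron's backupupload.js; bucket, key and file path are placeholders.)

              // Minimal sketch (not Cloudron's backupupload.js): setting the multipart part
              // size with aws-sdk v2's managed uploader. Bucket, key and file path are placeholders.
              const AWS = require('aws-sdk');
              const fs = require('fs');

              const s3 = new AWS.S3(); // endpoint/credentials as configured for the provider

              const upload = s3.upload(
                  {
                      Bucket: 'my-backup-bucket',              // placeholder
                      Key: 'snapshot/app_example.tar.gz',      // placeholder
                      Body: fs.createReadStream('/tmp/app_example.tar.gz')
                  },
                  {
                      partSize: 128 * 1024 * 1024, // 128 MB parts instead of 512 MB
                      queueSize: 4                 // number of parts uploaded in parallel
                  }
              );

              upload.send((error, data) => {
                  // A provider-side 400 on any part surfaces here as the final error.
                  if (error) console.error('upload failed:', error.statusCode, error.code);
                  else console.log('upload finished:', data.Location);
              });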

              I also opened a ticket with IONOS to find out what caused the error 400 at the given timestamp. I hope they find some helpful information in their logs.


              • dsp76 wrote:
                #7

                OK, after reducing the part size to 128 MB, the manual backup now finished successfully. Let's see if the regular nightly backup also works from now on.

