Backup Failed - looks like nextcloud
-
```
Oct 01 21:01:55 box:backuptask uploadAppSnapshot: nextcloud.rotovegas.nz uploaded to snapshot/app_b5917fc4-9fc2-4d51-b754-4f08cd02ea41. 65.989 seconds
Oct 01 21:01:55 box:backuptask rotateAppBackup: rotating nextcloud.rotovegas.nz to path 2024-10-01-080000-722/app_nextcloud.rotovegas.nz_v4.22.4
Oct 01 21:01:55 box:tasks update 6137: {"percent":61.86956521739132,"message":"Copying /mnt/cloudronbackup/snapshot/app_b5917fc4-9fc2-4d51-b754-4f08cd02ea41 to /mnt/cloudronbackup/2024-10-01-080000-722/app_nextcloud.rotovegas.nz_v4.22.4"}
Oct 01 21:01:55 box:shell copy execArgs: cp ["-dRl","/mnt/cloudronbackup/snapshot/app_b5917fc4-9fc2-4d51-b754-4f08cd02ea41","/mnt/cloudronbackup/2024-10-01-080000-722/app_nextcloud.rotovegas.nz_v4.22.4"]
Oct 01 21:02:50 box:shell copy: cp with args -dRl /mnt/cloudronbackup/snapshot/app_b5917fc4-9fc2-4d51-b754-4f08cd02ea41 /mnt/cloudronbackup/2024-10-01-080000-722/app_nextcloud.rotovegas.nz_v4.22.4 errored RangeError [ERR_CHILD_PROCESS_STDIO_MAXBUFFER]: stderr maxBuffer length exceeded
    at Socket.onChildStderr (node:child_process:519:14)
    at Socket.emit (node:events:518:28)
    at addChunk (node:internal/streams/readable:559:12)
    at readableAddChunkPushByteMode (node:internal/streams/readable:510:3)
    at Readable.push (node:internal/streams/readable:390:5)
    at Pipe.onStreamRead (node:internal/stream_base_commons:190:23) {
  code: 'ERR_CHILD_PROCESS_STDIO_MAXBUFFER',
  cmd: 'cp -dRl /mnt/cloudronbackup/snapshot/app_b5917fc4-9fc2-4d51-b754-4f08cd02ea41 /mnt/cloudronbackup/2024-10-01-080000-722/app_nextcloud.rotovegas.nz_v4.22.4'
}
Oct 01 21:02:50 box:backuptask copy: copied to 2024-10-01-080000-722/app_nextcloud.rotovegas.nz_v4.22.4 errored. error: copy errored with code ERR_CHILD_PROCESS_STDIO_MAXBUFFER message stderr maxBuffer length exceeded
Oct 01 21:02:50 box:taskworker Task took 169.826 seconds
Oct 01 21:02:50 box:tasks setCompleted - 6137: {"result":null,"error":{"stack":"BoxError: copy errored with code ERR_CHILD_PROCESS_STDIO_MAXBUFFER message stderr maxBuffer length exceeded\n at Object.copy (/home/yellowtent/box/src/storage/filesystem.js:173:26)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)","name":"BoxError","reason":"External Error","details":{},"message":"copy errored with code ERR_CHILD_PROCESS_STDIO_MAXBUFFER message stderr maxBuffer length exceeded"}}
Oct 01 21:02:50 box:tasks update 6137: {"percent":100,"result":null,"error":{"stack":"BoxError: copy errored with code ERR_CHILD_PROCESS_STDIO_MAXBUFFER message stderr maxBuffer length exceeded\n at Object.copy (/home/yellowtent/box/src/storage/filesystem.js:173:26)\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)","name":"BoxError","reason":"External Error","details":{},"message":"copy errored with code ERR_CHILD_PROCESS_STDIO_MAXBUFFER message stderr maxBuffer length exceeded"}}
copy errored with code ERR_CHILD_PROCESS_STDIO_MAXBUFFER message stderr maxBuffer length exceeded
    at Object.copy (/home/yellowtent/box/src/storage/filesystem.js:173:26)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
```
Ideas on what to do to fix it? Previously I have just deleted the snapshot folder on the destination, but the reasons for the error were different.
-
Deleting the snapshot folder on the destination did fix the issue again.
-
Hm, this looks like the backup task is running out of STDIO buffer while performing the backup snapshot rotation. If that is the case, I don't understand why deleting the snapshot folder would help here. But it's good that it works again for now.
I think for the next release we have to check how that copy process is spawned, to ensure its buffers cannot be exhausted.
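For reference, a minimal sketch of what such a fix could look like. This is not the actual Cloudron code; the function name and the 4 KB stderr tail are made up for illustration. The idea is to run cp via child_process.spawn and stream stderr instead of buffering all of it, so there is no maxBuffer limit to hit:

```js
// Hypothetical sketch: spawn cp and stream its stderr instead of buffering it.
// spawn() has no maxBuffer, so a chatty cp cannot kill the copy this way.
const { spawn } = require('child_process');

function copyWithHardlinks(src, dest) { // illustrative name, not Cloudron's API
    return new Promise((resolve, reject) => {
        const cp = spawn('cp', ['-dRl', src, dest], { stdio: ['ignore', 'ignore', 'pipe'] });

        let stderrTail = '';
        cp.stderr.setEncoding('utf8');
        cp.stderr.on('data', (chunk) => {
            // Keep only the last ~4 KB of stderr for error reporting.
            stderrTail = (stderrTail + chunk).slice(-4096);
        });

        cp.on('error', reject); // e.g. cp binary not found
        cp.on('exit', (code) => {
            if (code === 0) return resolve();
            reject(new Error(`cp exited with code ${code}: ${stderrTail}`));
        });
    });
}
```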
-
I don't know what an STDIO buffer is. It's all happening over a 1 Gb LAN, from the Cloudron server PC to a QNAP NAS via an SMB/CIFS share, if that helps.
-
@AartJansen we use a function called "child_process.exec" in nodejs. When you execute a command, the process writes to stdout and stderr. These outputs are captured by nodejs in internal buffers, and there is a limit to how much nodejs will store in them. That makes sense, because you don't want to capture many MBs of output in memory; that would just crash everything. When the backup executes cp, the output is apparently very large, it exhausts that size limit, and nodejs kills the copy, which fails the backup. This is the bug, and it is what @nebulon means by the STDIO buffer.
Now, the remaining question is why cp produces any output at all. I suspect it is emitting some warning like "cannot preserve permissions" because of the filesystem.
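To make the failure mode concrete, here is a small stand-alone repro (not Cloudron code; the shell loop and the 1 KB maxBuffer are artificial, chosen just to trigger the error quickly):

```js
// exec() buffers the child's stdout/stderr in memory. Once stderr exceeds
// maxBuffer, nodejs kills the child and reports ERR_CHILD_PROCESS_STDIO_MAXBUFFER,
// just like the cp in the backup log above.
const { exec } = require('child_process');

// Simulate cp printing one warning line per file (~7 KB of stderr in total).
const noisy = 'for i in $(seq 1 200); do echo "cp: cannot preserve permissions" 1>&2; done';

exec(noisy, { maxBuffer: 1024 }, (error, stdout, stderr) => {
    if (error) {
        console.log(error.code);    // ERR_CHILD_PROCESS_STDIO_MAXBUFFER
        console.log(error.message); // stderr maxBuffer length exceeded
    }
});
```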
-
Would it be more robust to use NFS?
-
@AartJansen In this specific case, it might help, because on NFS cp will not emit those stderr lines (NFS supports all the file attributes/permissions). But ultimately this is still a Cloudron bug, and we are fixing it.
-
That's great that you are fixing it; in the meantime, however, I'd like to have good backups.
I also changed the format from tgz to rsync. Would that play a role?
-
@AartJansen yes, the tgz format should not have this issue.
-
Awesome. I couldn't get NFS to mount, so I just set the tarball option instead. Thanks.
-