encrypted rsync error



  • I changed from tar to rsync backup earlier and now I'm getting this error when the backup task kicks in:

    { stack: 'BackupsError: Error uploading snapshot/app_b5a3d83f-df04-4a88-b271-88ad1d1262f6/u1hGMHLEh1jPYmzIBGciZg/2r910P5u4bsvWIPmPEBAyw/7aAhYJHDE8l7R-LcQc2gWg/4VnR7SuEYxJF49avZfUSyQKFVuwNyr0o-1X1oHP3Rt0/WJKGqW9jKiZzUtkOxc5a5g/2Xlt--dyciwxLESgFNZVGISpdGSE1CGKwF+UmA2W0llohw5IdHN7cWw1n5B0rI+txjb-WDauBicPU58yct6D9SyFZqVKeCs9zO+KQn-7-peQq96P5Pjp8QFF6alW4fC18saaTZI1GJ2itwWJNiacT1wUX45LLnfAyiVMgGkA17onxtOPlU8nOuvUOnAXgTeMPWAmfpY7z7+XeEHSc79PrKxD8-ep2cX9TRrT+PoULptThlJ4WWImYT8GqRHzxQS5. Message: Object name contains unsupported characters. HTTP Code: XMinioInvalidObjectName\n    at /home/yellowtent/box/src/backups.js:616:29\n    at f (/home/yellowtent/box/node_modules/once/once.js:25:25)\n    at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:64:9)\n    at emitTwo (events.js:126:13)\n    at ChildProcess.emit (events.js:214:7)\n    at Process.ChildProcess._handle.onexit (internal/child_process.js:198:12)',
      name: 'BackupsError',
      reason: 'external error',
      message: 'Error uploading snapshot/app_b5a3d83f-df04-4a88-b271-88ad1d1262f6/u1hGMHLEh1jPYmzIBGciZg/2r910P5u4bsvWIPmPEBAyw/7aAhYJHDE8l7R-LcQc2gWg/4VnR7SuEYxJF49avZfUSyQKFVuwNyr0o-1X1oHP3Rt0/WJKGqW9jKiZzUtkOxc5a5g/2Xlt--dyciwxLESgFNZVGISpdGSE1CGKwF+UmA2W0llohw5IdHN7cWw1n5B0rI+txjb-WDauBicPU58yct6D9SyFZqVKeCs9zO+KQn-7-peQq96P5Pjp8QFF6alW4fC18saaTZI1GJ2itwWJNiacT1wUX45LLnfAyiVMgGkA17onxtOPlU8nOuvUOnAXgTeMPWAmfpY7z7+XeEHSc79PrKxD8-ep2cX9TRrT+PoULptThlJ4WWImYT8GqRHzxQS5. Message: Object name contains unsupported characters. HTTP Code: XMinioInvalidObjectName' }
    

    The backup log says (all entries dated Jan 01 00:00:00, by the way) that the last few failed attempts were WordPress files with very long and, I reckon, problematic filenames.

    Ideas?



  • @msbt Looking at the length of the last part of the filename: it is 257 characters. On ext4, the maximum filename length is 255 bytes, which is why the upload fails.

    I can think of two solutions:

    1. Make the filename encryption optional, so we only encrypt the file contents but not the filenames themselves. That way, any filename that can exist on Cloudron can also be saved in minio. If we implement this, would the encrypted rsync feature still be useful to you?

    2. Figure out some complicated scheme to rename files, as outlined here - https://serverfault.com/questions/264339/how-to-increase-filename-size-limit-on-ext3-on-ubuntu



  • @girish thanks for the reply. I just checked the length of files in that wordpress via
    find /app/data/ -regextype posix-basic -regex '.*/.\{100,\}'
    and the longest it shows is 180 characters. There are, however, multiple files with umlauts (äöü) and other special characters (ß), and even though WP shows their names correctly in the admin panel, on the filesystem they appear as ?? (or ???? in some cases) instead of the character. Could those be the issue?

    In this particular case filename encryption is total overkill; it's just a small blog. But I'm guessing the backup will stumble over the special characters as well. I've returned to tar backup for now, as it seems to be working again (I only had one failed backup, probably because the filesystem was slow at the time), but maybe we should keep this in mind for other use cases.
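    One thing worth noting about the umlauts: the ext4 limit is 255 bytes, not characters, and umlauts occupy two bytes each in UTF-8. So a character count (like the one the `find` regex reports) can understate the on-disk length. A minimal illustration (the filename here is just a made-up example):

```python
# Umlauts (äöü) and ß are 2 bytes each in UTF-8, so the byte length of a
# filename can exceed its character count.
name = "Größenänderung"             # hypothetical German filename
print(len(name))                    # 14 characters
print(len(name.encode("utf-8")))    # 17 bytes - this is what counts against ext4's 255-byte limit
```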



  • @msbt With encryption, I think the maximum filename length that can be supported is around 140-150 characters. This is because encryption adds overhead to the final output. I will update our docs accordingly or try to make filename encryption optional. https://github.com/ncw/rclone/issues/2040 is a similar bug (we don't use rclone, but our encryption support is similar to rclone's).
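    The overhead is easy to estimate with a back-of-the-envelope model. Assuming an rclone-style name cipher (pad the name to the AES block size, then base32-encode the ciphertext - a representative scheme, not necessarily Cloudron's exact one), the encrypted name grows like this:

```python
import math

def encrypted_name_length(n_bytes, block=16):
    """Estimate the encrypted filename length under a hypothetical
    rclone-style scheme: pad the plaintext to the AES block size
    (at least one padding byte), then base32-encode the ciphertext."""
    padded = math.ceil((n_bytes + 1) / block) * block  # PKCS#7-style padding
    return math.ceil(padded / 5) * 8                   # base32 maps 5 bytes -> 8 chars

print(encrypted_name_length(143))  # 232 - still fits under 255
print(encrypted_name_length(144))  # 256 - just over ext4's 255-byte limit
print(encrypted_name_length(180))  # 312 - the ~180-char WordPress names cannot fit
```

    Under this model the break-even plaintext length is 143 bytes, which lines up with the 140-150 figure above.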



  • @girish thanks for the info and the extra feature, that's good to know and might come in handy! I'll also tell people not to use such long filenames, to avoid errors like this.