Backup feedback (minio)
-
After struggling with sshfs-based backups, I decided to try object-based storage. I ran into a few problems.
- Key length limitations (w/ rsync)
As the documentation notes, file name length limits can be a problem, especially when encryption is enabled. I hit this sooner than expected and wasn't able to back up successfully (a sketch at the end of this post illustrates how much encryption inflates names).
- max-parts limitation
Switching from rsync to tarball-based backups unfortunately also failed at some point:
Error uploading snapshot/app_xxx.tar.gz.enc. Message: Argument max-parts must be an integer between 0 and 2147483647 HTTP Code: InvalidArgument
I'm not certain, but my guess is that the backup in this case was simply too large to upload?
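If that guess is right, the arithmetic would be roughly this (a hypothetical sketch; the part-count cap and minimum part size are standard S3 limits, but the fixed-part-size uploader is an assumption on my part, not confirmed Cloudron behaviour):

```python
import math

MAX_PARTS = 10_000       # S3/minio cap on parts per multipart upload
MIN_PART = 5 * 1024**2   # 5 MiB minimum part size (last part exempt)

def min_part_size(total_bytes: int) -> int:
    """Smallest part size that keeps an upload under the part-count cap."""
    return max(MIN_PART, math.ceil(total_bytes / MAX_PARTS))

# A 400 GiB tarball needs ~41 MiB parts; an uploader hard-coded to
# 5 MiB parts would need 81,920 parts and blow past the cap.
tarball = 400 * 1024**3
print(f"{min_part_size(tarball) / 1024**2:.0f} MiB minimum part size")
```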
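And to illustrate the first issue, here is a minimal, hypothetical sketch (not Cloudron's actual scheme) of why per-segment filename encryption balloons object keys: each path segment is padded to the cipher block size, prefixed with an IV, and then base64-encoded.

```python
import base64

# Hypothetical scheme (not Cloudron's actual one): each path segment is
# padded to the AES block size, prefixed with an IV, then base64-encoded.
def encrypted_segment_len(name: str, block: int = 16, iv: int = 16) -> int:
    padded = (len(name.encode()) // block + 1) * block  # PKCS#7 padding
    return len(base64.urlsafe_b64encode(bytes(iv + padded)))

# Even a modest Nextcloud-style path roughly quintuples in length, so
# deeply nested trees can exceed backend key/name length limits.
path = "app_nextcloud/data/admin/files/projects/reports/q3-final.odt"
enc = sum(encrypted_segment_len(s) for s in path.split("/")) + path.count("/")
print(f"{len(path)} plaintext bytes -> ~{enc} encrypted bytes")
```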
-
@Robin I have encountered the same problem (1). NextCloud in particular produces file names that are too long once hashed with rsync & encryption. My solution was tgz & encryption, because going without encryption is not an option.
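For reference, that workaround boils down to something like this sketch (the paths and the BACKUP_PASSPHRASE variable are made up for illustration; openssl is assumed to be on the PATH):

```python
import subprocess
import tarfile

SRC = "/app/data"                      # hypothetical data directory
ARCHIVE = "/tmp/app_nextcloud.tar.gz"  # hypothetical output path

# Pack everything into one archive so only a single, short object key
# ever reaches the storage backend, sidestepping name length limits.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(SRC, arcname=".")

# Encrypt the whole archive; the passphrase is read from the environment.
subprocess.run(
    ["openssl", "enc", "-aes-256-cbc", "-pbkdf2",
     "-in", ARCHIVE, "-out", ARCHIVE + ".enc",
     "-pass", "env:BACKUP_PASSPHRASE"],
    check=True,
)
```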
I haven't hit your problem (2). My largest Cloudron backup is about 377 GB, and NextCloud takes up about 250 GB of that.
-
I will move this key length limitation as a feature request. I don't know of a way around it without either a) maintaining an offline index of file names or b) disabling file name encryption altogether. Is disabling filename encryption acceptable to you? If so, that is at least easier to add than a). Please make a post under the https://forum.cloudron.io/category/97/feature-requests section and I can look into this.
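For the record, a rough hypothetical sketch of what option a) could look like (none of this is existing Cloudron code): store each file under a fixed-length digest of its encrypted name, and keep a separate index object that maps digests back to the real names for restore.

```python
import hashlib
import json

index: dict[str, str] = {}  # digest -> encrypted file name

def short_key(encrypted_name: str, prefix: str = "snapshot/") -> str:
    """Return a fixed-length object key and remember the real name."""
    digest = hashlib.sha256(encrypted_name.encode()).hexdigest()
    index[digest] = encrypted_name  # needed to restore original names
    return prefix + digest          # always 64 hex chars, never too long

key = short_key("VGhpc0lzQVZlcnlMb25nRW5jcnlwdGVkRmlsZU5hbWU=")
# The index itself would be uploaded next to the backup (for example as
# snapshot/index.json) and fetched again before any restore:
index_blob = json.dumps(index).encode()
```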
For 2, this seems like a Cloudron bug. How big is your data roughly?
-
I will move this key length limitation as a feature request. I don't know of a way around it without either a) maintaining an offline index of file names or b) disabling file name encryption altogether.
Yes, I don't think there's another solution, at least not one I've been able to come up with.
Is disabling filename encryption acceptable to you? If so, that is at least easier to add than a).
That would certainly be better than it not working at all, so I can make a feature request for it.
In terms of the "just works" factor, it would be nicer to have this handled transparently (option a), though I understand that isn't at all straightforward. It might be worth studying how other backup tools handle this and putting it on the list as a longer-term project.
For 2, this seems like a Cloudron bug. How big is your data roughly?
Pretty big then, much smaller now. At the time I was testing the limits to see if I could break things, so I had around 400 GB stored, I think (most of it in a single syncthing instance); I don't recall the exact amount.