Thanks for clarifying. I think I was just looking in the wrong place / expecting this to work differently. No, I'm not aware of a UI for this on non-Cloudron instances, though it would of course be nice to have.
@girish Ooooo - so each backup stands on its own. That makes sense from a recovery perspective. I suspect it'd still be a groovy feature to add for the Nextcloud and Jellyfin types, though, so they know they have one good backup. In my case: disable backups of Jellyfin, and I'm back up and going.
@girish It's no problem - just a tad of overhead, and I use the S3 storage for other backups anyway.
Yeah, the volumes work great - in my instance, it looks like this:
- External hard drive hooked to the physical box via USB 3
- Hard drive volume created on the physical box, with an 8 TB VHDX on it
- 8 TB VHDX passed through to the Cloudron VM
- Cloudron VM mounts it as an XFS target
- Mount passed to Cloudron (the /mnt requirement tripped me up, but it's smart to require it)
- Volume mounted into the Minio container (/media/backuptarget)
- Minio reconfigured to default to /media/backuptarget instead of /app/data/data
(The last bit is why the Minio CLI fusses: /app/data/data is hardcoded in a few places.)
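For reference, a rough sketch of the mount steps inside the Cloudron VM. The device name, mount point, and volume name here are placeholders for my setup, not something Cloudron prescribes:

```shell
# Format the passed-through VHDX disk as XFS.
# /dev/sdb is an assumption - check `lsblk` for the actual device.
mkfs.xfs /dev/sdb

# Cloudron requires volume paths to live under /mnt.
mkdir -p /mnt/backuptarget
mount -t xfs /dev/sdb /mnt/backuptarget

# Persist the mount across reboots; nofail keeps the VM booting
# if the USB drive happens to be unplugged.
echo '/dev/sdb /mnt/backuptarget xfs defaults,nofail 0 2' >> /etc/fstab

# Then add /mnt/backuptarget as a volume in the Cloudron dashboard
# and mount it into the Minio app, where it shows up under /media.
```

Using the filesystem UUID (from `blkid`) instead of /dev/sdb in fstab is more robust, since USB device names can change between boots.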
@girish Weird - "./" was the first thing I tried when it didn't work on its own, but it threw an error about the command not being found and minio-credentials not being a folder. I resorted to trying bash in case I'd gotten it wrong, and that's the only time it at least tried to run something (but ran into those errors).
But I've just done it again and it worked fine, so I must have done something wrong the first time.
Sorry for reporting a non-issue, and thanks for getting back to me.
@RoboMod Sure, and sorry for the lack of response. From the logs in the other post, the error is `Message: Connection timed out after 300000ms HTTP Code: TimeoutError`. Maybe the server becomes unreachable because of some network error? Is that possible in your setup?