[Backups] Ability to add multiple storage provider/location
-
Don't forget about rclone.org
-
@robi
I actually don't like that the 3-2-1 strategy is managed by the main server, because if that server is compromised, your backups are compromised too. I think that if Cloudron wants to offer a better backup solution, a third-party piece of software/node should be in charge of replicating the "2" and "1" copies.
This would protect the backups from ransomware or a compromise of your server.
-
Borrowing an answer from StackOverflow that may work:
- Minio Cloudron instance: run `mc mirror` on a cron job.
If that works, it could just be a case of documenting it, and maybe adding a GUI to make it user-friendly @girish ?
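For the curious, the `mc mirror` idea above could look roughly like this. The alias names, endpoints, and bucket names are placeholders, not anything Cloudron ships; this is just a sketch of the cron-job approach:

```shell
# Register the two MinIO endpoints (names and URLs are hypothetical examples).
mc alias set primary https://minio.example.com ACCESS_KEY SECRET_KEY
mc alias set offsite https://backup.example.net ACCESS_KEY SECRET_KEY

# Cron entry: mirror the backup bucket to the off-site MinIO nightly at 02:00.
# 0 2 * * * mc mirror --overwrite primary/cloudron-backups offsite/cloudron-backups
```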
-
@marcusquinn
A full install of Cloudron wastes too many resources for many storage servers; we are talking about old CPUs (many of our storage servers have Haswell Xeons) or just 1 vCore (time4vps), and often without Docker support. -
@moocloud_matt said in [Backups] Ability to add multiple storage provider/location:
I actually don't like that the 3-2-1 is managed by the main server, because if that is compromised you will have compromised also your backup.
That's the problem with traditional backups.
The next-gen way of thinking about backups is simply having a much more resilient storage system. For example, your data is sprinkled across 8 places and you only need 5 of them to restore any file/object. There are some very clever and efficient algorithms for this m-of-n approach that remove the need for 3x replication.
Minio can do this, and as a community we can pool resources to have 20+ places and only need 7 or so to be available at any one time. Maybe even start a coop.
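For a sense of the efficiency gain, here is the back-of-the-envelope math using the numbers from the post (8 shards total, any 5 recover a file); with k data shards out of n total, the raw-storage overhead is n/k:

```shell
# Storage overhead: 5-of-8 erasure coding vs plain 3-way replication.
k=5; n=8
erasure_overhead=$(awk -v n="$n" -v k="$k" 'BEGIN { printf "%.2f", n / k }')
echo "5-of-8 erasure coding: ${erasure_overhead}x raw storage"   # 1.60x
echo "3-way replication:     3.00x raw storage"
```

So the m-of-n scheme tolerates three lost shards at roughly half the raw-storage cost of triple replication.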
-
@robi said in [Backups] Ability to add multiple storage provider/location:
For example, when your data is sprinkled across 8 places and you only need 5 to restore any file/object.
True, that's why we are using Ceph, but it's not efficient (storage-wise). To protect the files we need snapshots or versioning in Ceph too, because if bucket access is compromised on the Cloudron side, all files, even if they are split across multiple nodes, can still be deleted or encrypted. That reduces the whole advantage of software-defined/distributed storage to that of a normal NAS offered by the datacenter over NFS.
I would really like to analyze what pushed Proxmox to build a dedicated client for their storage server. What I have understood until now is that they want maximal protection made easy, which means the SSH key used by their hypervisor server cannot access the "2" and "1" copies of the backup.
I really don't care what software/stack Cloudron will use; I just want to move off Ceph for backups and use a better setup that is no less safe.
-
@moocloud_matt proper object storage will have versioning and dedup too.
If you have a source compromise, you have other issues; however, the push-to-store flow can be flipped into a pull-to-archive one, provided you have a mechanism to detect compromise and stop the pull, or to avoid overwrites.
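A minimal sketch of that pull-to-archive guard, assuming the archive host runs the script and the source host cannot log in to the archive. The paths, the canary-file convention, and the copy step are all illustrative assumptions, not anything Cloudron does today:

```shell
#!/bin/sh
# Pull-to-archive with a compromise check: before pulling, verify a canary
# file that normal operation should never modify. If its hash changed,
# assume compromise and refuse to pull, so the archive is not overwritten
# with encrypted or tampered data.

SRC=/tmp/pull-demo/source     # stands in for a mount of the source host
DST=/tmp/pull-demo/archive
mkdir -p "$SRC" "$DST"
printf 'intact\n'   > "$SRC/.canary"
printf 'app data\n' > "$SRC/data.txt"

# Hash recorded once at setup time, stored on the archive side.
EXPECTED=$(sha256sum "$SRC/.canary" | cut -d' ' -f1)

pull_archive() {
    actual=$(sha256sum "$SRC/.canary" | cut -d' ' -f1)
    if [ "$actual" != "$EXPECTED" ]; then
        echo "canary changed; refusing to pull" >&2
        return 1
    fi
    # In a real setup this would be an rsync/rclone pull over the network.
    cp -r "$SRC/." "$DST/"
}

pull_archive && echo "pulled OK"
```

The key property is that the decision to copy lives on the archive host, out of reach of whatever compromised the source.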
-
@robi said in [Backups] Ability to add multiple storage provider/location:
will have versioning and dedup too
True, but dedup is a "RAID"-equivalent feature scaled out to multiple nodes.
And versioning is not efficient when combined with backup tools, because most of them don't take advantage of the S3 versioning API. So we are back to the idea that if the main server is compromised, you want your backup to be safe; there is so much ransomware these days.
Yes, it's possible to do with S3, but it's not easy, requires a lot of coding on the Cloudron side, and covers just one protocol: no SFTP, NFS, or Samba, unless you want to use an S3-to-NFS proxy, or, like we do, Ceph to SMB.
Supporting one protocol is not how backups work in Cloudron; you see more and more protocol support over time. I just helped add SMB3 (with encryption) to that list.
And I think that is amazing for Cloudron users who are not geeks or professionals and want to keep their local Synology or QNAP NAS as a backup target.
A solution that only fits users like many of those commenting on this post is not enough. What I hope for is a solution that can scale from a classic rsync style up to 3-2-1 systems like Proxmox Backup or Borg.
-
@moocloud_matt for your Ceph replacement you should look at Cleversafe from IBM. They also have some partners that make it even better on the user side. Komprise comes to mind.
For Cloudron, a nice rclone integration would handle most of those options nicely.
-
Definitely having a need for this now as I look to further improve my DR plan. What I'm hoping to achieve… faster local backups to an external block storage disk daily (possibly even multiple times a day, given the criticality of email for some users), and perhaps once a week or every 2-5 days an external backup to something like Wasabi to avoid a complete disaster in a single datacentre.
-
I can only agree with the previous poster. I would be very happy about an implementation. I can't foresee how much effort it would take.
-
Might be a case of packaging the Rclone Web GUI as an app, then people can duplicate their main S3 backups anywhere else as a secondary routine.
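As a sketch of that secondary routine: once rclone has two remotes configured (the names and bucket below are hypothetical; "secondary" can be any of rclone's many supported backends), one command replicates the primary S3 backups elsewhere, and cron makes it routine:

```shell
# Mirror the main S3 backup bucket to a second provider, comparing by checksum.
rclone sync primary:cloudron-backups secondary:cloudron-backups --checksum

# Cron entry to run it daily at 03:30:
# 30 3 * * * rclone sync primary:cloudron-backups secondary:cloudron-backups --checksum
```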
-
@marcusquinn This could be an alternative solution, but I hope that a second backup target can be configured directly in Cloudron.