Best AWS S3 backup storage class
-
my 2 cents for this topic:
For backups you need fast save & restore options. The "normal" S3 buckets are fine, same for the backup solutions from your hoster, and same for snapshots. They are all fast enough to get a full recovery in a short time.
IMHO, solutions like Glacier are more or less something for archives. If you need to keep your data longer than your typical backup period, you need an archive solution. Especially if you run a business and your government tells you to store your business "papers" for at least 10 years. In that case it's more an archive topic than a backup topic. -
@luckow I understand your point, which I find really interesting. However, my point of view is slightly different: I've been running Cloudron on 2 servers for more than 4 years now and I have never needed a backup restoration. The event is rare enough that I can afford it costing half a day or more if it happens. The second point is the price: with Glacier Deep Archive, a TB costs ~$1/month.
-
@CarbonBee How much are you storing? AWS pricing in general is rarely competitive.
-
@marcusquinn Those are the standard S3 prices. Glacier Deep Archive is $0.0018/GB: https://aws.amazon.com/fr/s3/pricing/.
Moreover, we will use this solution for other needs (which require AWS) and I would like to use the same service for all our applications, which includes Cloudron backups. -
@marcusquinn said in Best AWS S3 backup storage class:
@CarbonBee How much are you storing? AWS pricing in general is rarely competitive.
This is the right question: how many GB do you need?
There are free options under certain limits such as hooking up to Google Drive or Scaleway Object Storage (recommended).
-
@marcusquinn said in Best AWS S3 backup storage class:
How much are you storing?
Relevant question indeed, sorry I missed it.
Well, my servers' backups weigh 145 GB and 190 GB respectively. I back them up daily and I would like to keep at least one week of backup history.
That means a bit more than 2 TB: (145 + 190) GB × 7 days ≈ 2.3 TB. -
I have been thinking about how to aggregate across multiple object storage backends via a single entry point, to which one can connect multiple different systems with varying capacities.
Splitting up the archives aids security, and spreading them across differing providers aids resiliency: a kind of coarse error-correction code (Reed-Solomon) across many nodes.
I wonder if such a thing is doable with a hierarchy of MinIO instances with different backends.
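For what it's worth, here is a toy Python sketch of that splitting idea. It uses a single XOR parity shard (RAID-5 style) as a simple stand-in for real Reed-Solomon coding; a real setup would use a proper erasure-coding library and track shard placement and original data length as metadata.

```python
import functools
import operator

def split_with_parity(data: bytes, n_shards: int) -> list:
    """Split data into n_shards equal pieces plus one XOR parity shard."""
    shard_len = -(-len(data) // n_shards)              # ceiling division
    padded = data.ljust(shard_len * n_shards, b"\0")   # pad to an even split
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(n_shards)]
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*shards))
    return shards + [parity]

def rebuild_missing(shards: list) -> list:
    """Recover a single missing shard (marked None) by XOR-ing the survivors."""
    survivors = [s for s in shards if s is not None]
    rebuilt = bytes(functools.reduce(operator.xor, col) for col in zip(*survivors))
    return [rebuilt if s is None else s for s in shards]

# Each of the 4 resulting shards would go to a different storage provider;
# any single one can be lost and rebuilt from the other three.
shards = split_with_parity(b"pretend this is a backup archive", 3)
shards[1] = None                                       # one provider disappears
assert rebuild_missing(shards) == split_with_parity(b"pretend this is a backup archive", 3)
```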
-
@robi That would be interesting, but it does not solve my current problem, which is: "How to back up Cloudron on S3 Glacier Deep Archive".
One solution would be to back up to a regular S3 bucket and then use an AWS lifecycle rule to move the objects into Glacier. But then they would not be accessible through the Glacier API, only S3's, and the transition from S3 to Glacier would be costly.
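For reference, a minimal boto3 sketch of that lifecycle approach (the bucket name and the "backups/" prefix are assumptions, not Cloudron defaults):

```python
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-cloudron-backups",              # assumed bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "backups-to-deep-archive",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},  # assumed backup prefix
            "Transitions": [{
                "Days": 1,                     # move objects 1 day after creation
                "StorageClass": "DEEP_ARCHIVE",
            }],
        }]
    },
)
```

Each transitioned object is billed as a lifecycle transition request, which is where the cost comes from when a backup consists of many small files.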
In the near future, could it be possible to integrate the Glacier API?
-
My 2 cents here (disclaimer: I am by no means an expert on Cloudron backups, nor on AWS).
I do not think Glacier is currently integrable with Cloudron backups. Cloudron expects an API it can download from on demand, whereas Glacier expects you to request objects in advance; they are only made available some time later.
Cloudron backups could potentially be made to work with Glacier, but it would be a much bigger project than just adding an integration.
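To make the mismatch concrete, here is a minimal boto3 sketch (bucket and key names are made up) of what any restore from Deep Archive has to do before a download is even possible:

```python
import boto3

s3 = boto3.client("s3")

# Step 1: ask AWS to restore the object. This call returns immediately,
# but the restore itself can take hours (up to ~48h on the Bulk tier).
s3.restore_object(
    Bucket="my-cloudron-backups",
    Key="backups/snapshot.tar.gz",
    RestoreRequest={
        "Days": 7,                                  # keep the restored copy for 7 days
        "GlacierJobParameters": {"Tier": "Bulk"},   # cheapest, slowest retrieval tier
    },
)

# Step 2: poll until the restore completes; only then does a normal GET work.
head = s3.head_object(Bucket="my-cloudron-backups", Key="backups/snapshot.tar.gz")
print(head.get("Restore"))   # 'ongoing-request="true"' while still in progress
```

A backup system that wants to stream a restore on demand would have to issue these requests for every object and then wait, which is why this is a bigger project than a storage-provider integration.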