Fire declared in OVH SBG2 datacentre building
-
http://travaux.ovh.net/?do=details&id=49471&
Per https://twitter.com/olesovhcom/status/1369504527544705025, "We recommend to activate your Disaster Recovery Plan."
-
@p44 said in Fire declared in OVH SBG2 datacentre building:
Has any Cloudron user been impacted?
My company has been impacted, but thankfully not our production servers: only our CI and our secondary Object Storage. They are not Cloudron servers though.
-
@p44 All our client-facing services, yes.
But for internal use, our GitLab CI is currently down. We are not sure yet if we are part of the destroyed datacenter or just the offline ones, because the OVH manager page is currently down ^^.
As it's only internal, we are not in a big hurry to recover from backups, but we'll probably do it once the manager is back up.
-
@girish said in Fire declared in OVH SBG2 datacentre building:
Picture of the fire https://pbs.twimg.com/media/EwGqV17XMAMF_wa?format=jpg&name=large
https://mobile.twitter.com/abonin_DNA/status/1369538028243456000
Very horrible to see.
-
@girish said in Fire declared in OVH SBG2 datacentre building:
@p44 Yes, some of our users lost their servers. We are helping them restore. At least, I only know of those who contacted us on support.
I hope everything can be back operational ASAP. Next week we will consolidate our disaster recovery plan.
-
We are also affected.
Cloudron is running internal systems like CRM and Confluence. I have an offsite backup at Exoscale; it was tested some months ago.
But now I tried to look at the bucket to get the restore ID (filename) and encountered an internal server error at Exoscale.
Support said that our backup is about 2 TB and that, because of its structure, which seems to have several delimiters, I can't get a listing of the backup.
I am trying "s3cmd ls s3://***** --recursive" to get the prefix and a backup ID.
So if someone has had a similar experience, I would like to hear tips.
Regards, Jens
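For anyone stuck at the same step, here is a minimal sketch of pulling candidate backup IDs out of an s3cmd listing. The bucket name, directory layout, and file names below are made up for the demo (a sample listing stands in for the real s3cmd output); the idea is that the timestamped directory containing a box_* file is the full-backup ID.

```shell
# Real command (replace the placeholder bucket):
#   s3cmd ls s3://YOUR-BUCKET --recursive > listing.txt
# Sample listing standing in for the s3cmd output:
cat > listing.txt <<'EOF'
2021-03-09 02:00      1234   s3://my-bucket/snapshot/app_git.example.com.tar.gz
2021-03-10 02:00      5678   s3://my-bucket/2021-03-10-020000-123/box_v5.6.0.tar.gz
2021-03-10 02:00      9012   s3://my-bucket/2021-03-10-020000-123/app_crm.example.com.tar.gz
EOF

# Keep only entries whose directory holds a box_* file (i.e. full backups),
# then strip everything but the timestamped directory name:
awk '{print $4}' listing.txt \
  | grep '/box_' \
  | sed 's|/box_.*||; s|.*/||' \
  | sort -u
# -> 2021-03-10-020000-123
```

With the real listing, the printed directory names are the candidates to plug in as the backup ID when restoring.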
-
@jensbee4 I hope you can restore all of your data, and maybe you can find help here to retrieve the backup listing.
-
@jensbee4 I wrote a quick guide now on how to determine the backup id. Please ignore typos/grammar since this was a bit of a rush job with many people asking how to do this - https://docs.cloudron.io/guides/backup-id/ . For your case, I think you want https://docs.cloudron.io/guides/backup-id/#s3-aws-cli (no need to pass --recursive etc). Let me know if I can improve the guide further.
-
For people using Cyberduck with OVH, it does not work well because connections to Strasbourg time out...
I've found an undocumented workaround:
- Download the OVH Public Cloud Storage Connection Profile
- Modify the XML file by adding:
<key>Region</key> <string>WAW</string>
before </dict>
Instead of WAW, put your own target region.
Cyberduck is up again!
Good luck...
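For reference, the edited profile ends up shaped roughly like this. This is only a sketch: everything except the added Region key is a stand-in for whatever keys your downloaded profile already contains, and WAW should be replaced by your own region code.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- ...existing keys from the downloaded profile stay as-is... -->
    <!-- Added: pin the region so Cyberduck does not probe the unreachable Strasbourg endpoint -->
    <key>Region</key>
    <string>WAW</string>
</dict>
</plist>
```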
-
@girish said in Fire declared in OVH SBG2 datacentre building:
Picture of the fire https://pbs.twimg.com/media/EwGqV17XMAMF_wa?format=jpg&name=large
https://mobile.twitter.com/abonin_DNA/status/1369538028243456000
Oh my goodness!! That's insane!
I'm betting OVH is gonna have some good deals coming up
-
@girish Thanks for the guide. This is a first step.
The problem on my side is that I get an internal server error from Exoscale. I checked with several clients. I will wait to see if support can give me a list of buckets...
An important question: is my information correct that every backup is a full backup?
Can I ask the Exoscale support to delete old objects? Maybe everything from 2020? Regards, Jens
-
@jensbee4
I got the hint to use the Exoscale CLI, and it seems to work; I am retrieving the backup list.
-
@jensbee4 said in Fire declared in OVH SBG2 datacentre building:
The problem on my side is that I get an internal server error from Exoscale. I checked with several clients. I will wait to see if support can give me a list of buckets...
Oh, that's unfortunate. I am surprised that an object storage service can fail with just 2 TB.
An important question: is my information correct that every backup is a full backup?
It's not! Some backups are the backup of a single app; these are made right before we update an app. I created a task to make this clearer (https://git.cloudron.io/cloudron/box/-/issues/775).
If you want to give a person instructions on how to clean up backups, you can tell them that if a directory contains a file named box_xxx, then it's a full backup; otherwise, it's not. In general, if you have backups from this year, you can safely remove everything from 2020, I guess. Alternatively, the backend people must have some mechanism to temporarily stash those things in some other bucket instead of deleting them.