Cloudron Forum

Issue with backups listings and saving backup config in 6.2

Solved | Support | Tags: backups, ovh
d19dotca
#15

    So here's what I see right now...

    The endpoint seems to be s3 (which I can't explain since I'm on 6.2.0), per the screenshot here:
[screenshot]

    And to confirm I'm on 6.2.0...
[screenshot]

So if 6.2 was supposed to change from s3 to storage (and I'm sure it did when I saw it yesterday), it seems like it has somehow reverted now, which doesn't make any sense to me either, lol. So many confusing things in this one.

    --
    Dustin Dauncey
    www.d19.ca

d19dotca

      @girish Oh I mean I definitely upgraded to 6.2 earlier, and am still running it, so I'm not sure how the storage part that I saw yesterday has suddenly changed to s3 now. Very strange.

So I guess there are two issues then? In this case the storage endpoint isn't correct, since now that it's working in my environment it's using s3 (I just don't know how it changed suddenly, or was it always s3 and I'm just losing my mind? lol). And the other issue appears to be losing the Region value.

girish (Staff)
#16

@d19dotca The endpoint URLs are hardcoded in the UI. Meaning, this whole "dropdown" list showing different storage providers etc. is really just a way of setting up "Other S3 compatible" with presets (hope that explanation makes sense).

It's possible that the browser did not refresh after the 6.2 update. This would mean the frontend you are using may still be the 6.1 one and thus still using the s3 subdomain. Just an idea.
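To illustrate the idea (a rough Node.js sketch only, not Cloudron's actual code, and the preset names/values here are made up), each dropdown entry can be thought of as pre-filling the same generic "S3 compatible" settings:

// Hypothetical sketch: every provider in the dropdown is just a canned set of
// values for the generic S3-compatible backend. Names and values are illustrative.
const PRESETS = {
    'ovh-objectstorage': { endpoint: 'https://s3.bhs.cloud.ovh.net', region: 'bhs', pathStyle: false },
    's3-compatible':     { endpoint: '', region: '', pathStyle: true }
};

// Picking a provider merges the preset with the user's bucket and credentials
// before it is saved as the backup_config setting.
function buildBackupConfig(provider, { bucket, accessKeyId, secretAccessKey }) {
    const preset = PRESETS[provider] || PRESETS['s3-compatible'];
    return { provider, bucket, accessKeyId, secretAccessKey, ...preset };
}

console.log(buildBackupConfig('ovh-objectstorage', {
    bucket: 'cloudron-backups',
    accessKeyId: '<accesskey>',
    secretAccessKey: '<secretkey>'
}));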


d19dotca
#17

        @girish said in Issue with backups listings and saving backup config in 6.2:

        It's possible that maybe the browser did not refresh after 6.2 update

I don't think that's the case, as I see this issue on two different computers, one of them running in a private browser session, so I don't think it's a cache issue here. Plus, like I said, I swear I saw the endpoint change to storage after the update last night. It's only today that it seems to have changed back to s3, and I can't explain how. haha.

        --
        Dustin Dauncey
        www.d19.ca

d19dotca
#18

I can also add that the backup I ran earlier this morning was still successful using the OVH Object Storage backend. If it's truly on version 6.2 as it appears to be, then I guess that rules out the storage endpoint as an issue, right? The logs from the earlier failed backup also indicated a missing Region.

I'm not 100% sold on this being caused by the change from s3 to storage endpoints yet (even though mine still shows s3 despite being on 6.2), haha. I think there's something bigger afoot here, especially if we're seeing the Region go missing too after the upgrade.

Sorry for making things confusing, just trying to update as I learn more, lol. I don't mean to be moving the goalposts.

          --
          Dustin Dauncey
          www.d19.ca


girish (Staff)
#19

@d19dotca Let's debug the main thing first. I cannot get the storage endpoint to work. I still get the same error as you did initially: "Failed Dependency: The XML you provided was not well-formed or did not validate against our published schema".

So, let's start with that. Why does it work for you now when it didn't previously?


girish (Staff)
#20

@d19dotca OK, sorry, I can confirm you are not imagining things. If I use OVH in the dropdown, it does not work. If I use it via "S3 compatible", it works (i.e. testing both with the storage subdomain). It should be easy now to figure out what is different.


d19dotca
#21

@girish lol, okay, glad to know I'm not crazy. I am still wondering how it changed from storage to s3, as I didn't change the preset at all; it still shows OVH Object Storage, not the S3-compatible one (though I realize they appear to use the same code in the backend).

                --
                Dustin Dauncey
                www.d19.ca

girish (Staff)
#22

                  @d19dotca Found it! Phew.

The storage subdomain only supports the path-style API, whereas the s3 subdomain supports the subdomain-style API. The path-style API is already deprecated (though they have backtracked a bit on it) - https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/ . So, the s3 subdomain is the way to go with its vhost-based API access.

Now, for the Cloudron UI, all the named providers use the vhost style. This is why storage does not work. The 'Compat' option (and minio) uses the path style for compat reasons.
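To make the two styles concrete (a small sketch using the OVH endpoints from this thread, not Cloudron code), the same bucket and object are addressed like this:

// Sketch: how one bucket/object maps to a request URL under the two S3 styles.
function objectUrl(endpoint, bucket, key, pathStyle) {
    const url = new URL(endpoint);
    if (pathStyle) {
        // path style (legacy): https://<endpoint>/<bucket>/<key>
        url.pathname = `/${bucket}/${key}`;
    } else {
        // vhost style: https://<bucket>.<endpoint-host>/<key>
        url.hostname = `${bucket}.${url.hostname}`;
        url.pathname = `/${key}`;
    }
    return url.toString();
}

// storage only answers the path-style form:
console.log(objectUrl('https://storage.bhs.cloud.ovh.net', 'cloudron-backups', 'box.tar.gz', true));
// => https://storage.bhs.cloud.ovh.net/cloudron-backups/box.tar.gz

// s3 supports the vhost-style form that the named providers use:
console.log(objectUrl('https://s3.bhs.cloud.ovh.net', 'cloudron-backups', 'box.tar.gz', false));
// => https://cloudron-backups.s3.bhs.cloud.ovh.net/box.tar.gz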


girish (Staff)
#23

@d19dotca It seems that, apart from trying to figure out why storage magically changed to s3, we are good. I suspect there may be some UI bug. I'm just clicking around the UI to see if it's something obvious to reproduce.


d19dotca
#24

                      @girish Awesome, thanks Girish! 🙂 Thanks for looking into that. Sorry for causing so much confusion, glad we got to the bottom of it though.

                      --
                      Dustin Dauncey
                      www.d19.ca


d19dotca
#25

@girish Actually, sorry, one more thing I'm wondering about... if we believe the storage vs s3 thing in the GUI is just a possible GUI issue, wouldn't that mean it's in fact using storage then, since that's what the 6.2 code points it to and the database points to it too? And if it's indeed using the storage endpoint, then my last backup worked, so why would that work if storage isn't the right endpoint? lol. Sorry, just trying to wrap my head around it and make sure we've resolved it.

                        --
                        Dustin Dauncey
                        www.d19.ca


girish (Staff)
#26

@d19dotca Sure. So the db in 6.2 says storage and also has path style disabled. This means that when a user goes to the Backups view, they will see storage but backups will fail. Also, if you go to Backups -> Configure and click Save without making any changes, you will get that XML error.

wouldn't that mean it's in fact using storage then since that's what the 6.2 code points it to and the database points to it too

Correct.

And if it's indeed using the storage endpoint then my last backup worked, so why would that work if storage isn't the right endpoint?

This I am not so sure about (and I guess it's the cause of much of the confusion for us). At least from my tries here, I couldn't get it to work with the storage endpoint on 6.2. I don't see how it can work with path style disabled. I guess one way to know for sure is to go to the EventLog and see if any backup succeeded after the 6.2 update but before the backup config was changed? Sadly, I think we don't log the backup config in the event log.


girish (Staff)
#27

@d19dotca Also, looking at your post here - https://forum.cloudron.io/post/27080 - somehow you managed to get s3 in the endpoint URL. I think this is why the backup succeeded. Also, to clarify, what is in the db and what is in the UI are in "sync".


d19dotca
#28

                              @girish said in Issue with backups listings and saving backup config in 6.2:

                              Somehow you managed to get s3 in endpoint URL. I think this is why the backup succeeded. Also, to clarify, what is in the db and what is in the UI are in "sync".

That's the thing, I didn't do that, at least not intentionally. haha. I was about to change to the generic S3-compatible type, but before I did, I noticed the Region was empty, so I left the preset as it was, set the Region, and saved, then tried the backup again to see if it was successful - and it was. Is it possible that saving it somehow overwrote the storage endpoint back to s3? Maybe another part of the code needs to be modified?

For the database, it showed the storage endpoint earlier this morning, and when I run the same command now it shows the s3 endpoint instead. So I think maybe part of the confusion earlier was that when I saved the changes (where all I did was edit the Region from null to BHS), it overwrote it back to s3 from storage. Do you think that's possibly what happened, if you look at the code side of it?

Here's what the DB looks like now; compare it to the link above where I pasted it this morning, before I saved the change to fill in the Region field...

ubuntu@cloudron:~$ mysql -uroot -ppassword -e "SELECT * FROM box.settings WHERE name='backup_config'"
mysql: [Warning] Using a password on the command line interface can be insecure.
name:  backup_config
value: {"provider":"ovh-objectstorage","format":"tgz","memoryLimit":4294967296,"schedulePattern":"00 00 7,19 * * *","retentionPolicy":{"keepWithinSecs":172800},"bucket":"cloudron-backups","prefix":"","accessKeyId":"<accesskey>","secretAccessKey":"<secretkey>","endpoint":"https://s3.bhs.cloud.ovh.net","region":"bhs","signatureVersion":"v4","uploadPartSize":1073741824,"encryption":null}

                              --
                              Dustin Dauncey
                              www.d19.ca

d19dotca
#29

I can confirm too, in a recent test, that if I opt to use the S3-compatible method and put in the https://storage.bhs.cloud.ovh.net/ endpoint, my backups succeed, and that also jibes with what I wrote last year. So I still wonder if the issue truly is with the storage endpoint. Or is that explained by the path-style differences you spoke about earlier? I'm not familiar with that part at all, so I'm not sure if that explains why it works fine when using the S3-compatible method instead of the dedicated OVH Object Storage method for the storage endpoint URL.

That page you linked to definitely shows the s3 endpoint URL, but knowing OVH, their docs tend to be outdated / not actively maintained, so I wonder if it should still show the storage endpoint at all, since that's the endpoint URL shown everywhere in the GUI in the OVH Control Panel and the OpenStack backend. Just throwing it out there. I think you have a better handle on this now than I do, so I'll leave it to you. haha. But if I can help troubleshoot any further, I'll be happy to. For now, I've moved to the S3-compatible one and used the s3 endpoint as you suggested.

                                --
                                Dustin Dauncey
                                www.d19.ca


girish (Staff)
#30

                                  @d19dotca said in Issue with backups listings and saving backup config in 6.2:

                                  I'm not familiar with that part at all, so not sure if that explains why it works fine when using the s3-compatible method instead of the dedicated OVH Object Storage method for the storage endpoint URL

I can explain at a high level. Essentially, given an endpoint, say s3.objectstorage.com, it's a question of how to access the objects of a bucket foo. There are two "styles" of accessing the S3 REST API: one is https://foo.s3.objectstorage.com (the vhost style) and the other is https://s3.objectstorage.com/foo (the path style). The former, as you can see, requires a subdomain to be set up per bucket. The latter is deprecated for various reasons, including security.

When you choose 'OVH', we set the path style to false. When you choose 'S3 compat', we set the path style to true. This is why storage works with S3 compat - it only supports the legacy API style.
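For reference, here is roughly how that choice looks in code. This is only a sketch assuming the AWS SDK for JavaScript (v2), where the option is called s3ForcePathStyle; Cloudron's actual backend wiring may differ.

const AWS = require('aws-sdk');

// Sketch: a named provider like 'OVH' behaves like pathStyle=false (vhost style),
// while 'S3 compat' and minio behave like pathStyle=true (path style).
function makeS3Client(endpoint, pathStyle) {
    return new AWS.S3({
        endpoint,                        // e.g. https://s3.bhs.cloud.ovh.net
        s3ForcePathStyle: pathStyle,     // false => vhost style, true => path style
        signatureVersion: 'v4',
        region: 'bhs',
        accessKeyId: '<accesskey>',
        secretAccessKey: '<secretkey>'
    });
}

// vhost style, works against the s3 subdomain:
const s3Vhost = makeS3Client('https://s3.bhs.cloud.ovh.net', false);
// path style, what the storage subdomain requires:
const s3Path = makeS3Client('https://storage.bhs.cloud.ovh.net', true);

// e.g. listing backups in the bucket with either client:
s3Path.listObjectsV2({ Bucket: 'cloudron-backups' }, (err, data) => {
    if (err) return console.error(err);
    console.log(data.Contents.map((o) => o.Key));
});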

I think OVH does not display s3 in the UI because those are OpenStack endpoints. In fact, getting S3 API access in OVH is (as you know) super geeky: you have to download this openrc.sh and then use a CLI tool to generate access keys etc. It's really a second-class citizen / afterthought. (The S3 API is actually just an optional addon in OpenStack. Lots of people implemented OpenStack 5-6 years ago but afaik they are all in various states of decay; only huge/massive companies use it now since it's a behemoth.)

                                  For now, I've moved to the s3-compatible one and used the s3 endpoint as you suggested

I pushed a 6.2.1. You can update to that and use the OVH backend, which will use the s3 subdomain.

                                  In any case, if you opened a ticket with OVH, let me know what they say.


d19dotca
#31

                                    @girish Ah that helps explain it then, thanks Girish! 🙂 I really appreciate the time and educational aspect to this. Thanks again!

                                    --
                                    Dustin Dauncey
                                    www.d19.ca


girish (Staff)
#32

@d19dotca To add to this, this is why minio has path style set to true: it would be a pain for selfhosters to create a subdomain (DNS and certs) for every bucket they create.

                                      Edit: just looked this up now. In minio, one can set MINIO_DOMAIN to enable vhost style per https://docs.min.io/docs/minio-server-configuration-guide.html . I have to test if this works with Cloudron's domain alias feature.
