


Cloudron Forum


Garage packaging status, next steps

App Packaging & Development
22 Posts 6 Posters 1.1k Views 7 Watching
  jadudm (#1)

    summary

    Garage is "an S3 object store so reliable you can run it outside data centers."

    Given that the open-source version of MinIO recently went into maintenance mode, it seems appropriate to have another S3 option on Cloudron.

    I think I have a more-than-proof-of-concept, but not-yet-production-ready, package for Garage. I'm starting this thread so there is a place to discuss what would be necessary to make it a "real" Cloudron package.

    related posts

    • https://forum.cloudron.io/post/116583
    • https://forum.cloudron.io/post/116585

    building the package

    1. Check out the repository (https://git.jadud.com/jadudm/cloudron-garage)
    2. export DOMAIN=s3.example.com
    3. make install

    (If the package has value, I'm of course happy to see the git repository move.)

    The build relies on the value of the environment variable DOMAIN; it is easiest to set that variable in the shell before running make. The Makefile invokes the cloudron CLI tool; it is assumed you have that tool installed, as well as the ability to push/pull from a container registry.

    I cannot provide support for people who have not set up their development environment for Cloudron packaging.

    alias domains

    The installation will set three alias domains for this app. If you installed at s3.example.com, it will set:

    • admin.s3.example.com
    • api.s3.example.com
    • *.web.s3.example.com

    NOTE: These sub-subdomains are likely not covered by a certificate. It may be that the installation should instead create aliases like api-s3, admin-s3, and so forth, all off a root domain.

    inside the package

    The package launches two services:

    • caddy
    • garage

    Garage is a single Rust binary, but it serves on ports 3900, 3902, and 3903. (The verdict is still out on 3901.) These appear to be "secure by default" ports, in that nothing comes back on any of them without valid credentials (API keys). That said:

    https://admin.s3.example.com/health
    

    should be a valid health check. Because this is the only valid health check, the caddyfile sets up a respond directive for api.DOMAIN, admin.DOMAIN, and DOMAIN so that /health works on all three domains.
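A sketch of that caddyfile arrangement (illustrative only, not the shipped file; the {$DOMAIN} placeholder is an assumption, while ports 3900 and 3903 are the ones named above):

```
api.{$DOMAIN} {
	# respond takes precedence over reverse_proxy in Caddy's default directive order,
	# so /health is answered here and everything else is proxied
	respond /health "OK" 200
	reverse_proxy localhost:3900
}

admin.{$DOMAIN} {
	respond /health "OK" 200
	reverse_proxy localhost:3903
}
```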

    It might be that caddy would be unnecessary if the manifest's healthCheckPath worked differently. The solution here was to proxy inside the container.

    what is that wildcard domain?

    Garage can expose buckets as static websites. In this way, it might be possible for users to:

    1. Upload static content to a bucket
    2. Add an alias to this package
    3. Add a redirect to the caddyfile
    4. Profit!

    (That's an old internet meme.)

    Using a wildcard alias works, given that Cloudron passes the alias value straight through as a DNS A record. Therefore, it is possible to proxy everything that comes in to *.web.s3.example.com, add a rule to the caddyfile, and... it should work? In testing, I was able to confirm that buckets would serve content; I did not try layering a new domain on top, but believe it would work.

    The caddyfile is in /app/data/, and therefore will be backed up. This should mean that changes there would persist on a backup/restore.
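The wildcard piece itself is nearly a one-liner in the caddyfile (again a sketch, with the same assumed {$DOMAIN} placeholder; 3902 is Garage's web port named above):

```
*.web.{$DOMAIN} {
	reverse_proxy localhost:3902
}
```

Note that serving *.web.{$DOMAIN} over HTTPS needs a wildcard certificate, which circles back to the certificate caveat above.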

    what works, what doesn't?

    The package seems to come up, and I can follow the instructions in the README to

    1. Create layouts, buckets, and keys (using the garage command, in the terminal for the app)
    2. Access the buckets using mc (Minio Client) to ls, cp and so on
    3. Declare a bucket to be a website, and serve content from it
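For readers who have not used Garage before, those steps look roughly like this in practice (a sketch following the Garage quick-start; command spellings and flags vary somewhat between Garage versions, and the bucket/key names are made up):

```
# in the app's terminal: lay out the single node
garage status                                  # note this node's ID
garage layout assign -z dc1 -c 10G <node-id>
garage layout apply --version 1

# create a bucket and a key, and grant access
garage bucket create demo-bucket
garage key create demo-key
garage bucket allow --read --write demo-bucket --key demo-key

# optionally, declare the bucket a website (step 3)
garage bucket website --allow demo-bucket

# from a client machine, via the MinIO client (mc)
mc alias set garage https://api.s3.example.com <access-key> <secret-key>
mc cp ./index.html garage/demo-bucket/
mc ls garage/demo-bucket
```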

    I have experimented briefly with the Administrative API, and it works. However, it likely does not work in this version of the package: I just noticed that I fail to actually set the admin_token. So, that needs to be fixed. (Or perhaps it is being set, but to a random value.)

    what needs to be done?

    1. Backup and restore. I have not yet tackled this. /app/data/meta/db.sqlite holds the metadata for all of the block storage in the S3 instance. It needs to be handled with care: failure to care for this DB likely loses all of the data in the instance.
    2. Migrations. See "Backup and restore."
    3. Enable the key/value store. This is probably easy, as it is another service, on port 3904. It needs a proxy entry in the caddyfile and possibly another alias.
    4. I don't know how/if aliases can be defined in the manifest. It would be nice.
    5. ... ?

    W.r.t. #4: I can't use httpPorts for the wildcard sub-subdomain, but I'd rather not have to tell users it's a step they have to follow. Should the manifest support adding alias values? If so, could it support values like *.web.CLOUDRON_APP_DOMAIN? I don't want everything appearing on the root domain of the Cloudron instance by default; I want things to appear under the domain provided by the user.

    I use Cloudron on a DXP2800 NAS w/ 8TB in ZFS RAID1

    robi (#2)

      Since you have Caddy, it can also proxy to the admin port via /admin instead, no?

      Same for /api to the API port & KV store.

      SQLite is pretty robust, running on billions of devices rather invisibly, so I wouldn't worry too much. Like you said, a snapshot of it could be useful, but the use case may yet need to be discovered.

      Cloudron backup takes care of everything in /app/data so there will be a nested redundancy if used for backup.

      On another note:

      Does Garage have any S3 Gateway features?

      Conscious tech

      jadudm (#3)

        I appreciate your input, @robi . I'm hoping to hear from other packagers and the product team. Your questions and comments illuminate the kinds of things that need to be considered in this package, so I'll use this as an opportunity to capture them more fully, and I've woven in some packaging questions for the Cloudron team and other packagers along the way.

        Since you have Caddy, it can also proxy to the admin port via /admin instead, no?
        Same for /api to the API port & KV store.

        I don't think this is a good idea. The Administrative API may (now, or in the future) have an /admin path that maps to some behavior. To override/rewrite pathways on an API is to fundamentally change the present/future behavior of the API. Given that Cloudron is an infrastructure platform, this means that I would be packaging a "broken" version of Garage (because, by rewriting pathways, I'd be changing the behavior of Garage). It might not be broken now, but I would be setting up a future where it becomes broken. So, I think rewriting pathways on any of the ports (and, in doing so, altering the behavior of Garage) is a bad idea.

        Adding the K/V store is trivial. I just haven't done it.

        I stand by my assertion that the CloudronManifest needs to be expanded.

        • I cannot use httpPorts to insert a wildcard into the DNS, because it claims that *.web.domain.tld is not a proper domain, and
        • I cannot insert aliases via manifest (but I can via cloudron install), which do allow me to insert A records into DNS

        So, either httpPorts needs to allow me to insert wildcard domains, or I need an aliases entry in the manifest (preferably mapped to an array). There may be another way, but I think that altering application behavior---especially for an S3-compatible API and administrative server---is the wrong approach. I also think including instructions that tell users to add aliases is a bad approach, but... at least there is precedent (e.g. packages that have default user/pass combos like admin/changeme123).

        SQLite is pretty robust running on Billions of devices rather invisibly, so I wouldn't worry too much. Like you said a snapshot of it could be useful, but the use case may yet need to be discovered.
        Cloudron backup takes care of everything in /app/data so there will be a nested redundancy if used for backup.

        Perhaps this is a bit direct, but I'm going to state it this way: you are a bit casual with my data. And, for that matter, everyone's data who relies on Cloudron or (say) the package I am working on. My premise is that if someone is using this package, they should be able to trust that their data will not be corrupted or lost by the normal day-to-day operation of Cloudron, including a package update or upgrade. That happens through careful thought and engineering.

        Two links that I think are educational in this regard:

        • SQLite documentation: https://sqlite.org/backup.html
        • A good explainer: https://oldmoe.blog/2024/04/30/backup-strategies-for-sqlite-in-production/

        Because SQLite writes temporary files while in operation, you cannot simply copy it. I assume that @girish and @nebulon took these kinds of things into account when they introduced the sqlite addon under localstorage. But, I don't know. I'm confident they, or someone, will help answer my uncertainty.
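To make the distinction concrete: SQLite provides an online backup API that copies a live database consistently, which a plain cp cannot guarantee. A minimal, self-contained Python illustration (using a throwaway database, not Garage's actual file):

```python
import os
import sqlite3
import tempfile

def safe_backup(db_path: str, backup_path: str) -> None:
    """Snapshot a live SQLite database via the online backup API.

    Unlike cp, this copies pages under SQLite's own locking, so the
    result is consistent even if writers are active.
    """
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    try:
        src.backup(dst)
    finally:
        dst.close()
        src.close()

# demo: build a small database and back it up while a connection is still open
tmp = tempfile.mkdtemp()
db_path = os.path.join(tmp, "meta.sqlite")
bak_path = os.path.join(tmp, "meta.snapshot.sqlite")

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE blocks (id INTEGER PRIMARY KEY, hash TEXT)")
conn.executemany("INSERT INTO blocks (hash) VALUES (?)", [("a1",), ("b2",)])
conn.commit()                      # deliberately left open: the DB is "in use"

safe_backup(db_path, bak_path)

rows = sqlite3.connect(bak_path).execute("SELECT COUNT(*) FROM blocks").fetchone()[0]
print(rows)  # -> 2
```

Presumably the Cloudron backup machinery does something equivalent for files declared in the manifest, but that is exactly the question below.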

        Therefore, my question is: if I have indicated that the metadata database that Garage relies on is specified in localstorage, can I be confident that the standard Cloudron backup mechanism will properly backup that SQLite file, even if it is in use? For the Garage package, I have included it in my localstorage property, but I don't know what happens when I do that, because the packaging/manifest docs are not very specific about what including it means.
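A sketch of what such a manifest fragment might look like (the option key sqlitePaths here is a placeholder guess of mine, not a verified field name; the real name is whatever the localstorage addon docs specify):

```json
{
  "addons": {
    "localstorage": {
      "sqlitePaths": ["/app/data/meta/db.sqlite"]
    }
  }
}
```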

        I also know I need to handle database migrations. When Garage goes from v2.1.0 to v2.2.0 (or, more likely, v2 to v3), it might make changes to how it structures its metadata.

        • https://garagehq.deuxfleurs.fr/documentation/operations/upgrading/
        • https://garagehq.deuxfleurs.fr/documentation/operations/durability-repairs/
        • https://garagehq.deuxfleurs.fr/documentation/operations/recovering/

        These things are my responsibility as a packager. Because Cloudron does not appear to provide any "upgrade" hooks, I believe/think I need to write my startup scripts to always assume that the current boot could be one where I am upgrading---either minor or major. And, because that SQLite file, if damaged/lost, will mean losing all of the data stored by Garage, it is important to get the start/restart sequence right.

        Coupled with my uncertainty (currently) about how SQLite is handled by Cloudron, I don't even know if I can roll back to a Cloudron snapshot safely.

        Cloudron says this:

        Sqlite files that are actively in use cannot be backed up using a simple cp. Cloudron will take consistent, portable backups of Sqlite files specified in this option.

        So: I am assuming that Cloudron does the right thing w/ SQLite files w.r.t. backups.

        Garage suggests that for minor upgrades, a garage repair before upgrading is enough, and for major upgrades, more is necessary. However, the package doesn't "know" when it is minor vs. major. (I do, but the package doesn't... unless I build that logic into the package startup.) So, I suspect I need to pretend, on every boot, that it might be a major upgrade. (For packagers: is this the right strategy/assumption?)

        This is what Garage recommends for major upgrades:

        1. Disable API access (for instance in your reverse proxy, or by commenting the corresponding section in your Garage configuration file and restarting Garage)
        2. Check that your cluster is idle
        3. Make sure the health of your cluster is good (see garage repair)
        4. Stop the whole cluster
        5. Back up the metadata folder of all your nodes, so that you will be able to restore it if the upgrade fails (data blocks being immutable, they should not be impacted)
        6. Install the new binary, update the configuration
        7. Start the whole cluster
        8. If needed, run the corresponding migration from garage migrate
        9. Make sure the health of your cluster is good
        10. Enable API access (reverse step 1)
        11. Monitor your cluster while load comes back, check that all your applications are happy with this new version

        Now, some of this comes "for free" when a package is being upgraded, because I will be doing this at startup (in my start.bash), which is before the garage service is running. Therefore, I can take as "given" the things that involve being in a shut-down state.

        (A note on step 5: backing up the metadata folder probably means garage meta snapshot --all.)

        Every time the container starts, I think I need to do a garage repair. I think I also need to follow the steps above (snapshot, migrate, and so on). This way, if I rebuild the container and go from v2.1.0 to v3.0.0, I am guaranteed that I have 1. repaired the database recently, 2. taken a snapshot that is robust, safe, and not in active operation, and 3. migrated to the most recent table schema. This should ensure that, when I start garage server, the application is working against the right table schemas.
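Sketched as a start.bash fragment (assumptions: the garage CLI is on PATH inside the container, and these subcommand spellings match the installed Garage version; check them against the Garage docs before relying on this):

```shell
#!/bin/bash
set -euo pipefail

prepare_garage_upgrade() {
    # 1. repair the metadata/tables before anything else touches them
    garage repair --yes tables
    # 2. take a consistent snapshot of the metadata DB
    garage meta snapshot --all
    # 3. migrate to the running binary's schema, in case this boot is an upgrade
    garage migrate
}

# run the sequence only where garage is actually installed,
# i.e. inside the app container, before garage server starts
if command -v garage >/dev/null 2>&1; then
    prepare_garage_upgrade
fi
```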

        What concerns me is the configuration. I am likely going to need to pay attention to updates/upgrades, and determine if (say) a v2.1.0 configuration would break a v3.0.0 deployment. If so, then I'm going to (possibly) package both the v2.1.0 and v3.0.0 binary into the image, and "fall back" to the v2 binary when a v2 configuration on a user's installation is detected. That way, I can prompt users as part of the upgrade to make sure to go in, update their configs, and reboot. (Or, something.)

        This is not yet a problem, but it is directly tied to the SQLite database, which is, in effect, the Garage filesystem. Loss or corruption of that DB is loss of data.

        Does Garage have any S3 Gateway features?

        I don't think so. If you need one, you can package versitygw, which is dedicated to that purpose.

        A question for packagers: how do we test packages? Does anyone have strategies for automating testing of package development/updates/upgrades?

          jadudm (#4)

          Hm. Maybe I am wrong about the URL rewrites. I'll sleep on it.

          Because Garage puts the different APIs/functions on different ports... you might be right, @robi.

          I still think it would be nice to have aliases exposed in the manifest.

            scooke (#5)

            I tried Garage a few times... never got it working. It keeps getting touted as the Minio replacement, but I'm not too sure about that.

            A life lived in fear is a life half-lived

              jadudm (#6)

              Not much I can do with that statement. I have it packaged and working. I'm now in the weeds of how best to handle data backup and restore on Cloudron. Given that the Minio package must go away, this is at least a possibility that can be evaluated.

                scooke (#7)

                @jadudm said in Garage packaging status, next steps:

                the weeds

                Tell us when "the weeds" are dealt with. That's what I'm getting at. You actually have items being backed up to it, really? I also got it up and running, but then could never figure out how to actually get data INTO it. Instructions, and others' suggestions (not that there are many) didn't help at all.

                  jadudm (#8)

                  Hi @scooke ,

                  To your question, I followed the instructions on the Garage website, and I added a near-copy of that documentation to the README in the package I've developed. This is all documented in the git repository for the package that I linked to. I have Garage running on my Cloudron instance, and was able to create buckets, add content, and even serve static web content from within those buckets. I'm sorry your prior experience with this did not work for you.

                  I am now trying to do that in a way that it would be considered for inclusion as a standard package, if the Cloudron team thinks it is worthwhile. If they don't, perhaps I'll do it for myself. 🤷

                    timconsidine (App Dev, #9)

                    @jadudm what is the status of your package now?

                    Indie app dev, scratching my itches, lover of Cloudron PaaS, communityapps.appx.uk

                      jadudm (#10)

                      Hi @timconsidine ,

                      Good question. I didn't push further, given that @girish suggested this might be positioned to be an addon.

                      https://forum.cloudron.io/post/116655

                      @girish , do you think I should finish this as an app package, or do you think this is something that will land in the roadmap? Or, as we say, "two things can be true?"

                      @timconsidine , I'm happy to bring this app to completion. Or, perhaps, contribute work to the core. I guess I'd look to the product team to provide some guidance.

                        timconsidine (App Dev, #11)

                        @jadudm understandable status

                        I'm just reviewing my options for organising my system.

                          robi (#12)

                          More options is better than less IMHO

                            timconsidine (App Dev, #13)

                            @robi said in Garage packaging status, next steps:

                            More options is better than less IMHO

                            In that spirit, I made my own package, principally so I could learn about Garage:
                            https://forum.cloudron.io/post/117911

                              d19dotca (#14)

                              @timconsidine hi Tim,

                              I tried to use the community app but I get a 413 response code (which suggests the payload is too large). It happens at the very first backup attempt.

                              Any suggestions? Is there anything I may have missed, or anything you had to do after deploying the app? Did you ever run into the 413?

                              I’m using tarball and encryption in case that matters at all.

                              --
                              Dustin Dauncey
                              www.d19.ca

                                jadudm (#15)

                                There's a good chance this is an Nginx error; we've seen this before on other packages. There's a limit on the front-side as to how large a file can be passed through an Nginx proxy. For example:

                                https://forum.cloudron.io/topic/14972/413-content-too-large-on-video-upload-inner-nginx-client_max_body_size-seems-too-low/2?_=1775394530576

                                It might be a similar problem here. Fixable, but it will require an update on the package.

                                  d19dotca (#16)

                                  @jadudm Agreed. But isn't that something configurable in the package? It doesn't look like a user-facing setting, but I assume it can be taken care of in the package code? I'm not super familiar with packaging apps just yet, so correct me if I'm wrong.

                                  Probably worth setting the client max body size to 0 for unlimited since there will be all sorts of sizes to an app that’s used for backups. I’m just surprised nobody else has run into this issue, it happens immediately upon backup.

                                    timconsidine (App Dev, #17)

                                    @d19dotca errr...

                                    No I have not experienced that.
                                    Personally I use rsync not tarball.
                                    I will try a tarball backup later.

                                    If it's a backup issue, is it an app issue?
                                    I don't understand this.

                                    EDIT: that's not meant to be unhelpful - I'm just not sure where to start looking; will ask AI.
                                    But did you see that an official version is close?

                                    https://forum.cloudron.io/post/122927

                                      jdaviescoates (#18)

                                      I just saw that @girish is looking at this, so it sounds like there might be an official package in the pipeline:

                                      @girish said:

                                      The garage app is packaged, just reviewing it and have to a get an initial package out.

                                      Edit: heh, and also just realised that @timconsidine just linked to that too! 😆

                                      I use Cloudron with Gandi & Hetzner

                                        jadudm (#19)

                                        @timconsidine Serving as your meat-based AI... 😄

                                        In theory, it would be an app issue. Because Nginx is serving as a proxy in front of Garage---it is handling the HTTPS and some domain routing to the app itself---the behavior of that HTTP server becomes "a thing." In this case, Nginx has a maximum client body size:

                                        https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size

                                        For many apps, this is never an issue; for apps that involve uploading files (Immich, Garage, etc.), clients routinely send large files. While I might think I'm connected to my Garage instance, what I'm actually connected to is Nginx, which is proxying my connection through to Garage in the backend. Therefore, the behavior of the proxy matters, and in this case, it has to do with the filesize limits of the proxy. When doing file uploads, we can easily exceed the per-request filesize limit on the proxy, and Nginx returns a 413 as a result.

                                        You wouldn't see it if you're using rsync backups and dealing with small files. However, a tarball backup can easily generate a request that comes in at gigabytes in size; as a result, Nginx says "no" and returns a 413.

Hence @d19dotca's thinking that setting that value to 0 in the Nginx config would likely eliminate the error.

                                        All of that said, there's also the possible arrival of an officially maintained Garage package, which we might want to move to anyway. However, it is good to have options to experiment with!
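For anyone patching the package by hand in the meantime, the relevant override would look something like the following. This is a sketch, assuming the package exposes an editable Nginx server/location block for the proxy; the exact file and location are package-specific:

```nginx
# Allow arbitrarily large uploads through the proxy.
# A value of 0 disables the per-request body size check entirely,
# so Nginx stops returning 413 for multi-gigabyte PUTs.
client_max_body_size 0;

# Optionally stream request bodies straight through to Garage
# instead of buffering the whole upload on disk first.
proxy_request_buffering off;
```

After changing the config, the app (or Nginx inside it) needs a restart for the new limit to take effect.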

                                        I use Cloudron on a DXP2800 NAS w/ 8TB in ZFS RAID1

                                        timconsidineT 1 Reply Last reply
                                        2
• jadudmJ jadudm

                                          timconsidineT Offline
                                          timconsidine
                                          App Dev
                                          wrote last edited by
                                          #20

                                          @jadudm thanks for the clarification
                                          will certainly bear that in mind for future issues

for this one on my Garage app, I'm not sure what to do. If an App Store version is coming, even an unstable one, that might be better for those wanting Garage.

I will try to find time to check my Garage packaging code with this issue in mind.

                                          Indie app dev, scratching my itches, lover of Cloudron PaaS, communityapps.appx.uk

                                          1 Reply Last reply
                                          0
