Cloudron Forum

Mastodon

Running a federated Mastodon instance will take up A LOT of space and RAM - be prepared!

34 Posts, 11 Posters, 5.7k Views

robi

      @jdaviescoates you'd have to do that through rclone manually. Cloudron only supports S3 as a backup target, not App volumes.
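
Roughly, the manual rclone route might look like this (the remote name, bucket, and local path below are placeholders; check where your app actually keeps its data before copying anything):

# define an S3-compatible remote once (interactive prompts)
rclone config

# copy the app's data directory into the remote bucket;
# adjust the local path to wherever your Cloudron app stores its files
rclone copy /app/data linode-s3:my-backup-bucket/mastodon --progress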

jdaviescoates #25

      @robi sounds like maybe @scooke has managed it previously?

dxciBel #26

So I ran into this exact issue pretty soon after starting to host an instance with ~30 users a few weeks ago. I'm now trying to migrate the media files to S3 storage, in my case Linode Object Storage.

        I've found this awesome guide which makes things pretty clear, but there's a question that came up when I read through it. In the section about configuring the bucket it says:

        Also, these instructions are specific to manual deployments, you may need to modify paths slightly for docker or other automatic deployments.

Just to make sure, and please excuse me if this question seems kind of inane, but: how exactly do I migrate my media files via Cloudron? Do I do it via the server console (Linode) or the Cloudron terminal? Is there anything extra I need to be aware of, any instructions that differ from this GitHub page?

        Just trying to make sure I don't break everything while trying to migrate to external media storage. Please be patient. 🙏

scooke #27

          @dxciBel I might be wrong, and I hope I am for your sake, but when you change storage systems I think you will basically be starting over. There is a way to move buckets if origin and destination are both already S3. For me, even though I was one user on my own instance, at the time the amount of data I had stored was huge, so I just called it a loss, changed to Minio, and let the instance slowly repopulate all the data it needed. The one thing I did do was SAVE all my own images, headers, icons, etc., for my instance.

          In the process of writing this I did find these:

          https://stanislas.blog/2018/05/moving-mastodon-media-files-to-wasabi-object-storage/
- the author recommends AGAINST Wasabi, but it's worth reading for the tooling used to move data from regular storage to S3 with the aws CLI. Almost halfway down is the command you want, which you'll (obviously) have to adjust for your use-case: aws s3 sync public/system/ s3://my-bucket/ --endpoint-url=https://s3.wasabisys.com. "aws" is the command, "s3 sync" is the operation, "public/system" is where the data currently lives inside your Cloudron app, and "s3://my-bucket" is your Minio bucket. The endpoint URL is the last bit you need to include; if it doesn't work at first, tweak the endpoint a few times (e.g. drop the https). Not sure how it will go, but it looks straightforward.

          Here are a few more. I suggest reading all of them before embarking on your journey:
          https://github.com/cybrespace/cybrespace-meta/blob/master/s3.md
          https://chrishubbs.com/2022/11/19/hosting-a-mastodon-instance-moving-asset-storage-to-s3/
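
To make the adjustment a bit more concrete, the sync pointed at a Minio bucket instead of Wasabi might look like this (the bucket name, endpoint URL, and local path are placeholders, and where public/system actually lives inside the Cloudron app is something you should verify first):

# store the bucket's access key and secret for the aws CLI
aws configure

# push the existing media tree into the Minio bucket;
# many guides also add --acl public-read so browsers can fetch the objects
aws s3 sync public/system/ s3://my-mastodon-media/ \
  --endpoint-url=https://minio.example.com --acl public-read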

jdaviescoates #28

@scooke said in Running a federated Mastodon instance will take up A LOT of space and RAM - be prepared!:

Almost halfway down there is this command which is what you want, and which you'll (obviously) have to adjust for your use-case: aws s3 sync public/system/ s3://my-bucket/ --endpoint-url=https://s3.wasabisys.com.

            Yeah, I've not attempted moving files but it's obviously possible.

See also this guide, which has a similar command to the above:

            https://thomas-leister.de/en/mastodon-s3-media-storage/

Indeed, in their set-up they not only move everything over but also do some clever nginx configuration to serve files locally if they exist and, if not, fetch them from S3 and cache them locally.

doodlemania2 (App Dev) #29

              @jdaviescoates I can +1 the need for a cache - moving to S3 works great but it does slow things way down. I am trying to find a way to do a local nginx cache with CR but coming up short so far.

shanelord01 #30

                @jdaviescoates It doesn't seem possible to change the NGINX settings for this to work in Cloudron, unless I'm missing something?

jdaviescoates #31

                  @shanelord01 I've no idea to be honest, but it would be nice if it was possible.

                  @doodlemania2 may work it out, or perhaps @staff can assist

dxciBel #32

@scooke As it turns out, with the help of the blog post you found, it was possible. Moving storage to S3 was rather easy; you just have to add the information to env.production. I moved the files to the bucket with the aws tool mentioned in the linked post, but s3cmd would most likely work as well. The last hurdle was making the bucket publicly accessible, since all the copied files are private by default. I made a policy.json file using this support doc from Linode as an example and voilà, everything works again.
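
For anyone following along, the env.production additions look roughly like this (placeholder values for a Linode Object Storage bucket; variable names can vary between Mastodon versions, so double-check against the guide you're using), plus flipping the already-copied objects to public with s3cmd:

# S3 media storage settings added to env.production (placeholder values)
S3_ENABLED=true
S3_BUCKET=my-mastodon-media
S3_REGION=eu-central-1
S3_ENDPOINT=https://eu-central-1.linodeobjects.com
S3_HOSTNAME=my-mastodon-media.eu-central-1.linodeobjects.com
AWS_ACCESS_KEY_ID=xxxxxxxx
AWS_SECRET_ACCESS_KEY=xxxxxxxx

# make objects copied before the bucket policy existed publicly readable
s3cmd setacl s3://my-mastodon-media --acl-public --recursive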

doodlemania2 (App Dev) #33

Yeah, the same doc shows how to set up an nginx cache - I'm gonna try to hack something together to see if I can front that in CR somehow. It could serve some other generic purposes too, for other systems that could benefit from an S3 cache.

robi #34

@doodlemania2 SeaweedFS may be what you're looking for as a cache/gateway for object storage.

                        https://forum.cloudron.io/post/56024
