Running a federated Mastodon instance will take up a LOT of space and RAM - be prepared!
-
@LoudLemur said in Running a federated Mastodon instance will take up a LOT of space and RAM - be prepared!:
A win for Cloudron would be to offer a combined federation app - Mastodon + Scaleway, where the Mastodon instance also generates a user, Access Key ID and Secret Access Key which the user could then plug into a Scaleway Object Storage bucket
+1 I'm too scared to run Mastodon on my server due to it eating all the disk. This would help.
-
@Sam_uk said in Running a federated Mastodon instance will take up a LOT of space and RAM - be prepared!:
I'm too scared to run Mastodon on my server due to it eating all the disk. This would help.
Could just spin up a new VPS for it (that's what I did, and swiftly ran out of space! But then increased the size...)
And/or just immediately go and set up some storage elsewhere as a Volume, and set up the app to use that volume as its data storage directory, as per https://docs.cloudron.io/apps/#data-directory
-
@jdaviescoates said in Running a federated Mastodon instance will take up a LOT of space and RAM - be prepared!:
@Sam_uk said in Running a federated Mastodon instance will take up a LOT of space and RAM - be prepared!:
I'm too scared to run Mastodon on my server due to it eating all the disk. This would help.
Could just spin up a new VPS for it (that's what I did, and swiftly ran out of space! But then increased the size...)
And/or just immediately go and set up some storage elsewhere as a Volume, and set up the app to use that volume as its data storage directory, as per https://docs.cloudron.io/apps/#data-directory
People like you make this place the brilliant community it is! Thanks.
-
@scooke said in Running a federated Mastodon instance will take up a LOT of space and RAM - be prepared!:
A win for Cloudron would be to offer a combined federation app - Mastodon + Scaleway, where the Mastodon instance also generates a user, Access Key ID and Secret Access Key which the user could then plug into a Scaleway Object Storage bucket (or there could be a drop-down menu offering S3, Scaleway or Minio), and also set the .env.production settings correctly. I realize doing that is relatively simple if you've done it before, but otherwise it could prove to be a barrier.
+1, something like that would be great, because I've just gone and created a bucket on Scaleway (easy enough) but now I'm at a loss as to how I actually connect it to a volume. Which type of mount point should I use for the Volume?
-
@jdaviescoates you'd have to do that through rclone manually. Cloudron only supports S3 as a backup target, not App volumes.
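If you do go the manual rclone route, a minimal sketch might look like this (remote name, endpoint, bucket, and mount point are all placeholders):

```
# One-off: define an S3-compatible remote, either via the interactive
# `rclone config` wizard or a ~/.config/rclone/rclone.conf entry like:
#
#   [scaleway]
#   type = s3
#   provider = Scaleway
#   access_key_id = YOUR_ACCESS_KEY
#   secret_access_key = YOUR_SECRET_KEY
#   endpoint = s3.fr-par.scw.cloud

# Then mount the bucket so it behaves like a local directory
mkdir -p /mnt/mastodon-media
rclone mount scaleway:my-bucket /mnt/mastodon-media \
  --daemon \
  --vfs-cache-mode writes   # buffer writes locally before uploading
```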
-
So I ran into this exact issue very soon after starting to host an instance with ~30 users a few weeks ago. I'm now trying to migrate the media files to S3 storage, in my case Linode Object Storage.
I've found this awesome guide, which makes things pretty clear, but a question came up as I read through it. In the section about configuring the bucket, it says:
Also, these instructions are specific to manual deployments, you may need to modify paths slightly for docker or other automatic deployments.
Just to make sure, and please excuse me if this question seems kind of inane: how exactly do I migrate my media files via Cloudron? Do I do it via the server console (Linode) or the Cloudron terminal? Is there anything extra I need to be aware of, any instructions that differ from this GitHub page?
Just trying to make sure I don't break everything while trying to migrate to external media storage. Please be patient.
-
@dxciBel I might be wrong, and I hope I am for your sake, but when you change storage systems I think you will basically be starting over. There is a way to move buckets if origin and destination are both already S3. For me, even though I was the only user on my own instance, the amount of data I had stored at the time was huge, so I just called it a loss, changed to Minio, and let the instance slowly repopulate the data it needed. The one thing I did do was SAVE all my own images, headers, icons, etc., for my instance.
In the process of writing this I did find these:
https://stanislas.blog/2018/05/moving-mastodon-media-files-to-wasabi-object-storage/
- the author recommends AGAINST Wasabi, but you'll want to read it for the tools used to move data from regular storage to S3 with aws-cli. Almost halfway down there is the command you want, which you'll (obviously) have to adjust for your use case:
aws s3 sync public/system/ s3://my-bucket/ --endpoint-url=https://s3.wasabisys.com
"aws" is the command, "s3 sync" is the subcommand, "public/system" refers to where the data currently lives in your Cloudron image, and "s3://my-bucket" is your Minio bucket. I guess you need to include that last bit too; if it doesn't work initially, tweak the endpoint a few times, like removing the https... not sure how that will go, but it looks straightforward. (There's a rough example adapted for this setup after the links below.) Here are a few more; I suggest reading all of them before embarking on your journey:
https://github.com/cybrespace/cybrespace-meta/blob/master/s3.md
https://chrishubbs.com/2022/11/19/hosting-a-mastodon-instance-moving-asset-storage-to-s3/
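To make that concrete, here's a rough sketch of what the sync might look like for a Cloudron-hosted Mastodon pushing to a Linode bucket. The local data path, bucket name, and endpoint are all assumptions to verify against your own setup:

```
# Credentials for the target bucket (placeholders)
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY

# Dry run first to see what would be copied
aws s3 sync /path/to/mastodon/public/system/ s3://my-bucket/ \
  --endpoint-url=https://us-east-1.linodeobjects.com --dryrun

# Then run it for real
aws s3 sync /path/to/mastodon/public/system/ s3://my-bucket/ \
  --endpoint-url=https://us-east-1.linodeobjects.com
```
-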
@scooke said in Running a federated Mastodon instance will take up a LOT of space and RAM - be prepared!:
Almost halfway down there is the command you want, which you'll (obviously) have to adjust for your use case: aws s3 sync public/system/ s3://my-bucket/ --endpoint-url=https://s3.wasabisys.com. "aws" is the command, "s3 sync" is the subcommand, "public/system" refers to where the data currently lives in your Cloudron image, and "s3://my-bucket" is your Minio bucket. I guess you need to include that last bit too; if it doesn't work initially, tweak the endpoint a few times, like removing the https... not sure how that will go, but it looks straightforward.
Yeah, I've not attempted moving files but it's obviously possible.
See also this guide, which has a similar command to the one above.
https://thomas-leister.de/en/mastodon-s3-media-storage/
Indeed, in their set-up they not only move everything over but also do some clever nginx tricks to serve files locally if they exist, and otherwise fetch them from S3 and cache them locally.
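For reference, the approach in that guide boils down to something like this. It's a heavily simplified sketch (cache path, zone name, hostname, and bucket URL are all placeholders), and note it would need to live outside Cloudron's managed nginx:

```
# Sketch of the proxy-cache idea from the linked post, written out as a
# standalone nginx vhost for illustration
cat > /etc/nginx/conf.d/mastodon-media-cache.conf <<'EOF'
proxy_cache_path /var/cache/nginx/mastodon-media levels=1:2
                 keys_zone=mastodon_media:10m max_size=10g inactive=48h;

server {
    listen 80;   # TLS config omitted for brevity
    server_name media.example.com;

    location / {
        proxy_cache mastodon_media;
        proxy_cache_valid 200 48h;   # keep good responses cached for 2 days
        proxy_set_header Host my-bucket.us-east-1.linodeobjects.com;
        proxy_pass https://my-bucket.us-east-1.linodeobjects.com;
    }
}
EOF
```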
-
@jdaviescoates I can +1 the need for a cache - moving to S3 works great but it does slow things way down. I am trying to find a way to do a local nginx cache with CR but coming up short so far.
-
@jdaviescoates It doesn't seem possible to change the NGINX settings for this to work in Cloudron, unless I'm missing something?
-
@shanelord01 I've no idea to be honest, but it would be nice if it was possible.
@doodlemania2 may work it out, or perhaps @staff can assist
-
@scooke As it turns out, with the help of the blog post you found, it was possible. Moving storage to S3 was rather easy; you just have to add the information to .env.production. I moved the files to the bucket with the aws tool mentioned in the linked post, but s3cmd would most likely work as well. The last hurdle was making the bucket publicly accessible, since all the copied files are private by default.
I made a policy.json file using this support doc from Linode as an example and voilà, everything works again.
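For anyone following along, here's roughly what the two pieces look like. First, a sketch of the .env.production additions (variable names per Mastodon's S3 support; values are placeholders for a Linode bucket):

```
S3_ENABLED=true
S3_BUCKET=my-bucket
S3_REGION=us-east-1
S3_ENDPOINT=https://us-east-1.linodeobjects.com
S3_HOSTNAME=my-bucket.us-east-1.linodeobjects.com
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY
```

And a minimal public-read bucket policy of the sort that Linode doc describes, applied with the aws CLI (bucket name and endpoint are placeholders):

```
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::my-bucket/*"]
  }]
}
EOF

aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json \
  --endpoint-url=https://us-east-1.linodeobjects.com
```
-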
Yeah, the same doc shows how to set up an Nginx cache - I'm gonna try to hack something together to see if I can front that in CR somehow. Could serve some other generic purposes too for other systems that could benefit from an S3 cache.
-
@doodlemania2 SeaweedFS may be what you're looking for to cache/gateway object storage.
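If it helps, SeaweedFS ships an S3-compatible gateway on its all-in-one server. A quick sketch per its README (data directory is a placeholder; I haven't tried fronting Mastodon with it):

```
# Start an all-in-one SeaweedFS node with the S3 gateway enabled
weed server -dir=/data/seaweedfs -s3

# The S3 gateway listens on port 8333 by default, so S3 clients can
# point straight at it, e.g.:
aws s3 ls --endpoint-url=http://localhost:8333
```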