Backing up media cache
-
@nichu42 I've been hosting my own Mastodon and Pixelfed instances for a while now on Cloudron. If you search for my own posts you'll see my journey from frustrated user to happy user. I've been using self-hosted Minio instances for my media storage needs for both Mastodon and Pixelfed (and a bunch of other uses, such as my entire Cloudron backups to a 2TB Servarica VPS).
-
ALL media goes to the Minio instances, even my own posted stuff. So, in a sense, it's being backed up. But if your instance crashed for whatever reason, or you moved it, then once restored with the same Minio settings you'd still have all the media (except for what has been culled due to storage settings).
-
Not sure what you need here. Pretty much ALL apps in a Cloudron are being proxied through a server, whether nginx or apache or what have you. It's all based on Docker. So I'll say it again... ALL my web apps, federated or not, just work on Cloudron. There are steps to follow for some of them to get them working optimally, but most of that info is in the documentation, or can be found in these forums. I'll share one which is crucial - federated apps on Cloudron need to be on a subdomain whose main domain has another app being hosted, such as a WP app. So my Mastodon is social.futurnumerique.com and I have a WP app at futurnumerique.com (the media on that WP site are also stored on a Minio instance using a WP plugin called Media Cloud).
-
nichu42 replied to scooke on Jan 4, 2023, 11:55 AM last edited by nichu42 Jan 4, 2023, 12:16 PM
@scooke said in Backing up media cache:
- ALL media goes to the Minio instances, even my own posted stuff. So, in a sense, it's being backed up. But if your instance crashed for whatever reason, or you moved it, then once restored with the same Minio settings you'd still have all the media (except for what has been culled due to storage settings).
So, all media files exist on the S3 only? That's not exactly what I would call a backup, but it seems it's the common way to go. I understand that uploaded files are much less important than user profiles, posted texts, etc. So OK, I'm open to doing this.
- Not sure what you need here. Pretty much ALL apps in a Cloudron are being proxied through a server, whether nginx or apache or what have you. It's all based on Docker.
All I want is a working backup for disaster recovery. I am willing to get another storage (a third one) if necessary. I can follow instructions (like for doing an Ubuntu upgrade, as I recently did). I understand that I am supposed to make some configuration changes. But I don't know anything about nginx, I don't know where to start, I don't know what applies to Cloudron-based installations, what I would need to skip, or what I need to do differently when going through the instructions for Mastodon/S3 that can be found online.
I really don't want to sound ungrateful (I am in fact very thankful for your insights), but I have chosen to use Cloudron because "Cloudron lets you focus on using the apps and not worry about system administration" (that's what it says on the website).
-
@nichu42 I think you are overthinking this, approaching Cloudron like you have had to approach other server management tools, even if it was just your own manual management. Cloudron really, truly, does it all for you. You install Mastodon from the App Store, follow a few extra steps that are laid out in the Mastodon app dashboard (that is, the app's dashboard within Cloudron, not the Mastodon settings) to get federation working, and then in the same app dashboard set up the S3/Minio storage (in the File Manager view), and bam... you're gold. You literally, seriously, do not need to worry about nginx, reverse proxies, backups, updating the VPS or Ubuntu... Cloudron does it all. Once you've installed Cloudron you may never need to ssh into the server again! Unlike so many other server and app management offerings, Cloudron does an amazing job of putting all the complexity behind a simple GUI.
As for user-posted media, I would think it's normal that you save your own generated media on your laptop or computer before uploading it anyway, no? Same with other users.
Have you even installed Cloudron yet? Start there, on a fresh Ubuntu server (lots of recent support posts here are due to people installing Cloudron on top of an existing Docker or nginx installation, behind Cloudflare, etc.). It's gotta be fresh. Read the docs. It's very simple and straightforward.
-
@scooke
Thank you for getting back to me. Yes, I agree - I may be overthinking the backup thing when it comes to media data. Okay, I'll scrap that.
Anyway, I do have a running VPS with Cloudron Premium and Mastodon installed. The backup now takes ~4 hours and thus already conflicts with scheduled updates. I'm willing to move media data to S3 (even though I don't need the extra storage space itself), but I have no clue what to do. The instructions posted above don't really help, as I have no idea about nginx proxies and such, as mentioned before. I wouldn't even know where to make the changes. This is pretty frustrating.
-
@nichu42 the updates will be tried again later, so this shouldn't be a problem. Maybe just move the update schedule to a later time.
As for moving the media cache to S3, I will put a note in our docs on how to do this and link it here. But you don't really need to change nginx proxies and stuff.
-
@nichu42 said in Backing up media cache:
The instructions posted above don't really help, as I have no idea about nginx proxies and such, as mentioned before. I wouldn't even know where to make the changes. This is pretty frustrating.
Just starting to use S3 for your Mastodon app's media storage doesn't require doing anything with nginx proxies.
Literally all you need to do is use the File Manager to edit your
/app/data/env.production
and add something like this:
# Store data using S3 object
S3_ENABLED=true
S3_BUCKET=bucket-name
AWS_ACCESS_KEY_ID=<key_id>
AWS_SECRET_ACCESS_KEY=<secret_key>
S3_REGION=fr-par
S3_PROTOCOL=https
S3_HOSTNAME=s3.fr-par.scw.cloud
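One hedged footnote: depending on the provider, Mastodon may also want an explicit endpoint alongside the hostname, e.g. (placeholder value):
S3_ENDPOINT=https://s3.fr-par.scw.cloud
as shown in the Scaleway example later in this thread. After saving, restart the app from the Cloudron dashboard so Mastodon picks up the new environment.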
The problem in your case is, I think, that you'd also like to move your existing data over to the S3 storage too. I've not personally done that, so I can't help other than pointing to the guide linked from the one I shared previously, which explains how to do it, i.e. read this:
https://github.com/cybrespace/cybrespace-meta/blob/master/s3.md
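For a rough idea of what that guide boils down to, here's a minimal, hedged sketch using the aws CLI; the local path, bucket name, and endpoint are all assumptions (on Cloudron the Mastodon media directory may live elsewhere under /app/data), so verify them against the guide before running anything:
# One-off copy of existing local media into the bucket (placeholders throughout)
aws s3 sync /app/data/public/system/ s3://bucket-name/ \
  --endpoint-url https://s3.fr-par.scw.cloud \
  --acl public-read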
Good luck!
-
nichu42 replied to doodlemania2 on Jan 5, 2023, 6:13 PM last edited by nichu42 Jan 5, 2023, 6:18 PM
Thank you all for coming to my rescue! I really appreciate it.
I think I'll just start by making the settings in the env.production file and take it from there. I plan to use IDrive e2, where I got a 30-day free trial. A last question for now: once I do this, visitors' browsers will load media directly from S3 and no longer from my server, is that right? (At least I think that's what that nginx proxy thing is all about.)
If I want to avoid this, I'll have to make the vhost settings as in https://stanislas.blog/2018/05/moving-mastodon-media-files-to-wasabi-object-storage/#setting-up-a-nginx-reverse-proxy-with-cache-for-the-bucket, correct?
@doodlemania2: Can you give me a hint how to do this in Cloudron?
-
@nichu42 said in Backing up media cache:
Can you give me a hint how to do this in Cloudron?
I'd like an idiot's guide to doing this too
-
@nichu42 I didn't do the cache bit, which can't easily be done in Cloudron; media loads directly from "s3" (mine is also on IDrive). It's definitely slower, and you SHOULD use a cache (in or very near to your Cloudron server) if you have more than a few dozen users.
I was thinking about packaging up a VERY simple custom Cloudron app that did exactly that, but figured my scenario was too esoteric.
Once you put the details into your .env file and restart, all new media will go to "s3", but you'll have to repopulate your bucket with the historical stuff using tootctl - and then you'll be able to rm -rf the cache folder on your VPS!
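For what it's worth, a hedged sketch of that tootctl step (run from a terminal inside the Mastodon app; the available flags vary by Mastodon version, so check tootctl help media first):
# Refetch remote media so it gets re-downloaded into the new S3 bucket
tootctl media refresh --days 7
# Report how much media storage is in use afterwards
tootctl media usage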
-
@jdaviescoates indeed, an idiot's guide for an already running Cloudron/Mastodon app would be nice.
Got DeepL running with a simple guide, so I hope adding an S3 bucket can be just as simple.
-
Hi, I don't want to start a new thread. I need to connect Mastodon to an S3 bucket on IDrive storage. I am trying to add the bucket's configuration to the /app/data/env.production file. I have tried just about everything: bucket private, bucket public, setting a CNAME record with a subdomain pointing to the bucket - but after restarting the app, uploading images either ends with a 500 error, or the image uploads but can't be opened, as if it were hidden. Has anyone solved a similar problem? I must have tried everything and I don't know how to set the configuration file anymore. Thank you all very much for any advice.
-
@archos it could be a CORS issue?
Have you already done what is described here: https://www.idrive.com/object-storage-e2/faq-dashboard#cross_origin ?
-
Thank you for your reply. I tried it now, but the result is the same.
I think I'm making a mistake in the hostname. Should the bucket be public or private? I've been playing with this all day; I really don't know anymore.
Do I need a CNAME DNS record?
-
@archos said in Backing up media cache:
Should the bucket be public or private?
From my limited experience I think either should work, but you probably want it to be private (I think public means the bucket itself can be publicly browsed)
@archos said in Backing up media cache:
Do I need a CNAME DNS record?
I don't think so, unless you're trying to set up an S3 hostname using one of your domains, but I've no idea if that's actually possible when you're not self-hosting the S3 buckets yourself. I tried a while ago but then gave up (although it seems this might be possible with iDrive https://www.idrive.com/object-storage-e2/faq-dashboard#use-domain but that's just adding additional complexity, so I'd focus on just getting it working for now!)
@archos said in Backing up media cache:
I think I'm making a mistake in the hostname.
Quite possible.
The annoying thing is there doesn't seem to be any standard, agreed-upon way it all works, and it seems different providers want different things.
On Scaleway S3 this is what my working settings look like for Mastodon:
# Store media on Scaleway S3 object
S3_ENABLED=true
S3_BUCKET=example_bucket_name
AWS_ACCESS_KEY_ID=example_key_id
AWS_SECRET_ACCESS_KEY=example_secret_access_key
S3_REGION=fr-par
S3_PROTOCOL=https
S3_HOSTNAME=s3.fr-par.scw.cloud
S3_ENDPOINT=https://s3.fr-par.scw.cloud
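A quick, hedged way to sanity-check a setup like this after restarting the app: upload a test image and confirm its URL is served from the bucket rather than your instance. The path below is only an illustration of the shape such a URL takes:
# Response headers should come from the S3 endpoint, not your Mastodon server
curl -I https://s3.fr-par.scw.cloud/example_bucket_name/media_attachments/...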
-
@jdaviescoates Thank you very much. I will try to create a bucket on Scaleway; the price is almost the same, so hopefully it will work.
-
@archos good luck!
-
@archos Note, you also have to do the CORS stuff on Scaleway too, so it won't work without doing that as well. See:
https://www.scaleway.com/en/docs/storage/object/api-cli/setting-cors-rules/
But before you can do that you need to set up the AWS CLI stuff, see:
https://www.scaleway.com/en/docs/storage/object/api-cli/object-storage-aws-cli/
And before you can do that you need API Keys:
https://www.scaleway.com/en/docs/identity-and-access-management/iam/how-to/create-api-keys/
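Once the CLI is configured, the CORS step itself boils down to something like this hedged sketch (the bucket name, origin, and endpoint are placeholders, and the exact rules you need may differ):
# cors.json - a minimal rule letting browsers on your Mastodon domain fetch media
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://social.example.com"],
      "AllowedMethods": ["GET"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3600
    }
  ]
}
# Apply the rule to the bucket
aws s3api put-bucket-cors \
  --bucket example_bucket_name \
  --cors-configuration file://cors.json \
  --endpoint-url https://s3.fr-par.scw.cloud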
Good luck!
-
Thank you very much. It doesn't look very easy, but I will try and let you know. Thanks again for your help.