Best posts made by nichu42
-
RE: LibreTranslate
@xarp
Another vote for LibreTranslate. It seems that Mastodon v4 now offers translation using the LibreTranslate API, so it would be sweet to have this running on the same server.
-
RE: "An unexpected error occurred" – web app problem while Metatext works fine on iOS
@thoresson I had the same issue. Deleting the browser cache and cookies solved the problem for me.
-
RE: Run s3_media_upload script
@girish Yay! Thank you.
I am all new to this Linux game, so I wasn't aware I could just set the environment variables like that. For everyone else, this is what you need to do:
──────────────────────────────
- Set up S3 with Synapse. See my post here: https://forum.cloudron.io/post/60415
- Create a database.yaml file in /app/data/configs that contains the Postgres database credentials. You can find those in the existing homeserver.yaml file:
user: xxx
password: xxx
database: xxx
host: postgresql
- Create a script (e.g., s3cleanup.sh) with the following contents:
#!/bin/bash
cd /app/data/configs
export AWS_ACCESS_KEY_ID=[your S3 compatible access key]
export AWS_SECRET_ACCESS_KEY=[your S3 compatible secret access key]
/app/code/env/bin/s3_media_upload update /app/data/data/media_store 1m
/app/code/env/bin/s3_media_upload upload --delete --endpoint-url https://yours3storageendpoint.com /app/data/data/media_store [your s3_bucket_name]
- Run the s3cleanup.sh script.
It will look up media that hasn't been touched for 1m (= 1 month) or whatever you set above. The value needs to be an integer, followed by either m = month(s), d = day(s) or y = year(s).
It will create a cache.db file that refers to the media that matches your criteria.
In the second step, it will upload all files from the cache.db to your s3 storage and delete the local copies.
The output looks like this:
Syncing files that haven't been accessed since: 2022-12-25 14:59:14.674154
Synced 603 new rows
100%|████████████████████████████████████| 603/603 [00:00<00:00, 16121.24files/s]
Updated 0 as deleted
100%|████████████████████████████████████| 603/603 [03:25<00:00, 2.93files/s]
Uploaded 603 media out of 603
Uploaded 3203 files
Uploaded 263.6M
Deleted 603 media
Deleted 3203 files
Deleted 263.6M
Edit: Added path /app/data/configs to script to make it work as cron job.
Edit 2: Added more choices for duration suffixes in the 's3_media_upload update' job.
Disclaimer: This is to the best of my knowledge and understanding. It worked for me, but I accept no liability for loss of data on your server caused by my incompetence.
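If you want to test the script by hand before scheduling it, you can do that from a shell inside the Synapse app. The commands below are only a sketch: the app domain is a placeholder, and how you register the cron job depends on your own setup.
# open a shell inside the Synapse app (matrix.example.com is a placeholder for your app's domain)
cloudron exec --app matrix.example.com

# inside the container: make the script executable and run it once manually
chmod +x /app/data/configs/s3cleanup.sh
/app/data/configs/s3cleanup.sh

# illustrative crontab entry: run the cleanup every Sunday at 04:00 and keep a log next to the script
0 4 * * 0 /app/data/configs/s3cleanup.sh >> /app/data/configs/s3cleanup.log 2>&1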
-
Mailbox quota < 1 GB
I am very stingy when it comes to my server resources, so I'd love to be able to assign quotas of less than 1 GB to some users (those who only need their mailbox to identify and receive server messages). Could you adapt the slider or introduce an input field instead?
-
RE: Request: Include S3 Storage Module
Sure! It is quite easy, once you know what you are doing.
Take the following code and replace the <S3_...> placeholders with your S3-compatible storage's details.
media_storage_providers:
- module: s3_storage_provider.S3StorageProviderBackend
  store_local: True
  store_remote: True
  store_synchronous: True
  config:
    bucket: <S3_BUCKET_NAME>
    region_name: <S3_REGION_NAME>
    endpoint_url: <S3_LIKE_SERVICE_ENDPOINT_URL>
    access_key_id: <S3_ACCESS_KEY_ID>
    secret_access_key: <S3_SECRET_ACCESS_KEY>
Make sure the endpoint URL starts with https://
Copy the code to your clipboard. Open the file manager of your Synapse app in the Cloudron dashboard, navigate to configs, and open the homeserver.yaml file. Paste the code from your clipboard at the end of the configuration file.
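Optionally, before saving you can sanity-check the credentials and endpoint from any machine that has the AWS CLI installed. This is just a generic AWS CLI check, not part of the Synapse setup itself; replace the placeholders with the same values as above.
# verify the bucket is reachable with the credentials Synapse will use
export AWS_ACCESS_KEY_ID=<S3_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<S3_SECRET_ACCESS_KEY>
aws s3 ls s3://<S3_BUCKET_NAME> --endpoint-url <S3_LIKE_SERVICE_ENDPOINT_URL>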
Save and restart the app. -
RE: LibreTranslate with Cloudron Mastodon Server
@shanelord01 My VPS is at ~ 30% most of the time, but I found the LT API to be awfully slow. Mastodon also gave too many 503 errors (due to timeouts? I don't know). And the translation quality is just too poor compared to DeepL. However, I'm happy to have LT available via Cloudron and will certainly give it another try at a later time. I still like the idea of in-house translation without submitting data to a third party.
-
RE: Request: Include S3 Storage Module
@nichu42
OK, I found the error: the config file expects the endpoint URL to start with https://.
I have the S3 storage up and running now. If anyone needs help, please feel free to reach out to me. -
RE: Cloudron 7.3.4: "Analyze Disk" doesn't do anything and no statistics shown...
Even stranger here: It used to work, but it stopped updating 3 days ago. No manual update either.
Jan 04 12:46:21 box:tasks startTask - starting task 630 with options {}. logs at /home/yellowtent/platformdata/logs/tasks/630.log
Jan 04 12:46:21 box:shell startTask spawn: /usr/bin/sudo -S -E /home/yellowtent/box/src/scripts/starttask.sh 630 /home/yellowtent/platformdata/logs/tasks/630.log 0 400
Jan 04 12:46:21 box:shell startTask (stdout): sudo: unable to resolve host 1001585-634: Name or service not known
Jan 04 12:46:21 box:shell startTask (stdout): Running as unit: box-task-630.service
Jan 04 12:46:22 box:shell startTask (stdout): Finished with result: exit-code processes terminated with: code=exited/status=50 runtime: 856ms time consumed: 731ms
Jan 04 12:46:22 box:shell startTask (stdout): Service box-task-630 failed to run
Jan 04 12:46:22 box:shell startTask (stdout): Service box-task-630 finished with exit code 1
Jan 04 12:46:22 box:shell startTask code: 1, signal: null
Jan 04 12:46:22 box:tasks startTask: 630 completed with code 1
Jan 04 12:46:22 box:tasks startTask: 630 done. error: { stack: "TypeError: Cannot read properties of undefined (reading 'contents')\n" +
  ' at getDisks (/home/yellowtent/box/src/system.js:106:60)\n' +
  ' at processTicksAndRejections (node:internal/process/task_queues:96:5)\n' +
  ' at async updateDiskUsage (/home/yellowtent/box/src/system.js:182:19)',
  message: "Cannot read properties of undefined (reading 'contents')"
Latest posts made by nichu42
-
RE: LibreTranslate with Cloudron Mastodon Server
@nebulon
I think Mastodon's translate button only appears if the poster has set a language for their toot that differs from your own language settings.
It could be that the LT API relies on the source and destination language settings that are submitted by Mastodon. That could explain the errors I received, because not everyone sets the language correctly. Maybe the DeepL API is more flexible here.
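For illustration, this is roughly what a translation request to a self-hosted LibreTranslate instance looks like; the hostname and API key are placeholders, not values from this thread.
# ask LibreTranslate to translate a German toot into English
curl -s https://libretranslate.example.com/translate \
  -H "Content-Type: application/json" \
  -d '{"q": "Hallo Welt", "source": "de", "target": "en", "format": "text", "api_key": "YOUR_API_KEY"}'
# if the source language is missing or wrong, "source": "auto" makes LibreTranslate
# detect it itself, which is slower and less reliable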
-
RE: LibreTranslate with Cloudron Mastodon Server
@shanelord01
What do you think about performance and quality? I was quite disappointed and returned to DeepL for now. -
RE: Run s3_media_upload script
@girish Yes, correct: How to set these environment variables with Cloudron?
-
RE: Run s3_media_upload script
@girish Thank you for responding!
Yes, this thread is about the script that you have linked (https://github.com/matrix-org/synapse-s3-storage-provider#regular-cleanup-job). It is part of Cloudron's Synapse installation and can be found in /app/code/env/bin.
I had already managed to make the database config as you have mentioned in your post.
The problem is: The script uses "Boto3" (AWS SDK for Python) which expects the S3 credentials either to be saved in the config file ~/.aws/credentials or as environment variables, see
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
Please correct me if I'm wrong, but Cloudron doesn't grant me access to either of these. That's why I mentioned you in this thread. I think you'd have to enable one of these options to make the script work.
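For anyone unfamiliar with boto3, these are the two standard ways it picks up credentials. This is generic boto3 usage with placeholder keys, not something Cloudron-specific.
# option 1: environment variables, exported before the script runs
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# option 2: a shared credentials file in the user's home directory
mkdir -p ~/.aws
cat > ~/.aws/credentials <<'EOF'
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF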
-
RE: Run s3_media_upload script
@robi said in Run s3_media_upload script:
@nichu42 right, here are the options:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
Yes, that's what I figured.
But I have no idea how to make any of these options work with Cloudron.
The file system is read-only, so I cannot put a config file where boto3 expects it (~/.aws/credentials). That's why I thought that maybe @girish has to enable the use of environment variables.
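One possible workaround (untested on Cloudron, just standard boto3 behaviour): boto3 also honours the AWS_SHARED_CREDENTIALS_FILE environment variable, so the credentials file could live in the writable /app/data area instead of the read-only home directory.
# point boto3 at a credentials file inside the writable /app/data tree
export AWS_SHARED_CREDENTIALS_FILE=/app/data/configs/aws-credentials
cat > "$AWS_SHARED_CREDENTIALS_FILE" <<'EOF'
[default]
aws_access_key_id = [your S3 compatible access key]
aws_secret_access_key = [your S3 compatible secret access key]
EOF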
-
RE: Run s3_media_upload script
@robi homeserver.yaml is the configuration file for Synapse. It will not start without it.
The S3 configuration is correct: Synapse uploads new media to the bucket.
However, it seems that boto3 needs a different configuration. -
RE: Run s3_media_upload script
@robi
The example command is what I stated above.
The S3 credentials cannot be submitted on the command line. They are in the homeserver.yaml, but it seems that boto3 (whatever that is) doesn't read them and instead expects environment variables to be set. That's why I thought this might be something that needs to be done by Cloudron.
-
RE: Run s3_media_upload script
@robi
Thank you for your response. I think I made a major step forward.
The script expects a database.yaml file to be present. It needs to include the user, password, database and host entries that can be found and copied over from the homeserver.yaml file.
Once you have prepared this, the script will create the cache.db on its own after you run the following command:
s3_media_upload update /app/data/data/media_store 1m
1m means all files that haven't been touched for a month will be used.
Now the script is ready to upload. This can be triggered with the following command:
s3_media_upload upload --delete /app/data/data/media_store s3_bucket_name
Unfortunately, here I am stuck again. The script returns the following error message:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
The script documentation states "This module uses boto3, and so the credentials should be specified", and refers to https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#guide-configuration
Here I am completely lost again and don't even know where to start. Is this maybe something that needs to be done from Cloudron, @girish?
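For reference, what eventually worked for me (see the s3cleanup.sh script earlier on this page) is to export the credentials right before calling the upload. The key values and endpoint below are placeholders.
# export the S3 credentials so boto3 can find them, then upload and delete local copies
export AWS_ACCESS_KEY_ID=[your S3 compatible access key]
export AWS_SECRET_ACCESS_KEY=[your S3 compatible secret access key]
/app/code/env/bin/s3_media_upload upload --delete --endpoint-url https://yours3storageendpoint.com /app/data/data/media_store [your s3_bucket_name]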