I think Mastodon's translate button only appears if the poster has set a language for their toot that's different from your own language settings.
It could be that the LT API relies on the source and destination languages submitted by Mastodon. That would explain the errors I received, because not everyone sets the language correctly. Maybe the DeepL API is more flexible here.
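For illustration, here's a minimal sketch of what such a translation request might look like, assuming "LT" refers to LibreTranslate and that Mastodon passes the toot's declared language as `source` and your own setting as `target`. The instance URL is a placeholder, not a real endpoint:

```python
import requests

# Placeholder URL for a self-hosted LibreTranslate instance.
LT_URL = "https://translate.example.com/translate"

def translate(text, source_lang, target_lang):
    # LibreTranslate's /translate endpoint uses the source and target
    # language codes exactly as supplied by the client (here: Mastodon).
    resp = requests.post(LT_URL, json={
        "q": text,
        "source": source_lang,   # language the poster set on the toot
        "target": target_lang,   # language from your own settings
        "format": "text",
    })
    resp.raise_for_status()
    return resp.json()["translatedText"]

# If a toot is tagged with the wrong language, "source" is wrong and the
# request can fail or return a bad translation.
print(translate("Hello world", "en", "de"))
```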
@privsec Their Basic endpoint has a character limit; this self-hosted version doesn't. Plus there's the privacy aspect, but mainly it's a way to use LanguageTool in the browser without that character limit on long inputs, like webmail, blog posts, and other CMS long-text edit fields.
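As a rough sketch of what that looks like against a self-hosted instance: LanguageTool exposes a /v2/check endpoint that takes the whole text in one request. The URL below is a placeholder for your own installation:

```python
import requests

# Placeholder URL for a self-hosted LanguageTool instance (e.g. the Cloudron app).
CHECK_URL = "https://languagetool.example.com/v2/check"

def check_text(text, language="en-US"):
    # The /v2/check endpoint takes the full text as form data; on a
    # self-hosted instance there is no character limit like on the
    # public Basic endpoint.
    resp = requests.post(CHECK_URL, data={"text": text, "language": language})
    resp.raise_for_status()
    return resp.json()["matches"]

# Example: check a long draft (webmail, blog post, CMS field) in one go.
for match in check_text("This are a example sentence with a error."):
    print(match["message"], "->", match["replacements"][:3])
```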
I want to add queue control to n8n tasks. I saw this is commonly done with RabbitMQ, but I use Kafka on other projects and figured it could be used as a queue as well. So I tried to set up a local Kafka with Docker, but it is not simple. I guess RabbitMQ is friendlier for this kind of use.
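For what it's worth, here's a rough sketch of how Kafka could act as a simple task queue, using the kafka-python package. The broker address, topic name, and task payload are just placeholders, and this is separate from n8n's own queue mode (which, as far as I know, uses Redis):

```python
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # placeholder broker address
TOPIC = "n8n-tasks"         # placeholder topic name

# Producer side: enqueue a task as a JSON message.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"workflow": "example", "payload": {"id": 123}})
producer.flush()

# Consumer side: a consumer group behaves like a work queue -- each
# message is handled by only one consumer in the group.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="n8n-workers",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print("processing task:", message.value)
```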
@girish Yes, true, 2FA in OpenVPN Connect is good to have. More importantly, we need 2FA on the frontend, because if it's secured by a password alone, users will very likely reuse the same password everywhere or pick a very weak one.
So for now, if you could enable 2FA in the frontend, that would be very helpful.
My mistake. I restarted the app too early.
After the full download of the almost 8 GB EN n-gram dataset, you need to give your Cloudron instance enough time to unpack and move the files to (by default) /app/data/ngram/en.
In my case, it took more than 15 minutes before everything was in place.
If you restart the app too early, the zip file will not be unzipped completely. In that case the error occurs and you have to start again from the beginning.
Search your logs for ===> en ngram dataset has been installed.
@nebulon Not that I could tell, other than that there were no error messages in the logs when I tried importing via the app. I could see it was an image problem from the CLI's progress output.
Oh, and I had to boost both my Wallabag app AND its Redis to 5 or 6 GB of RAM each to manage 1200 articles. Anything less than 5 GB didn't seem to work. Afterwards I lowered both to under 1 GB and it's all good.