@vladimir What other configuration does RediSearch actually need, though? I think it needs to be laid out specifically so that the amazing Cloudron team can figure out what, if anything, can safely be added to the current Redis server configuration, or whether the configuration belongs in the WordPress server/app instead. When you deploy RediSearch, does it error on anything? What errors does it show in the user interface and the logs, for example?
@girish Thanks, after looking through a long list, it turns out it was Snipe-IT, which we don't use and I had only installed as a test. So I'm stopping it for now, but it may be worth knowing in case the app itself is a bit hungrier than it should be for some reason.
```
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 * Running mode=standalone, port=6379.
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # Server initialized
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Nov 10 13:05:05 15:M 10 Nov 21:05:05.039 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
```
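For reference, here is a minimal sketch of applying the three fixes these warnings name, assuming you have root on the host (the values come straight from the log messages; persistence paths vary by distribution):

```sh
# Apply the kernel settings the Redis startup warnings ask for (run as root on the host).
sysctl -w net.core.somaxconn=511   # raise the listen backlog so Redis' 511 can be enforced
sysctl -w vm.overcommit_memory=1   # let background saves fork even under low memory
echo never > /sys/kernel/mm/transparent_hugepage/enabled   # disable THP

# Persist the sysctl values across reboots:
cat >> /etc/sysctl.conf <<'EOF'
net.core.somaxconn = 511
vm.overcommit_memory = 1
EOF
# The THP setting needs re-applying at boot, e.g. via /etc/rc.local as the warning suggests.
```

Note that restarting Redis is required after disabling THP for the change to take effect.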
@girish Thanks, this fixed the issue! However, during subsequent testing I found that Redis goes down roughly 30% of the time when I stop the app. For testing I am using Scrumblr, but I have seen similar issues with GitLab too. Is this a local issue on my VPS, or could there be something else going on?
As I said on the other thread, I seem to have managed to install Cloudron on Ubuntu 20.04 LTS by following your advice to use ./cloudron-setup --version 5.6.3. I've disabled automatic upgrades and I'll wait a bit before upgrading to version 6.
All is good!
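For anyone else landing here, this is roughly what a version-pinned install looks like (a sketch: the download URL is the one from the Cloudron docs and may change, while the --version flag is exactly what was used above):

```sh
# Fetch the setup script and pin the installed version instead of taking the latest release.
wget https://cloudron.io/cloudron-setup
chmod +x ./cloudron-setup
./cloudron-setup --version 5.6.3   # pin to 5.6.3; omit --version for the latest
```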
@necrevistonnezr Indeed, previous versions of Cloudron would allocate more memory for redis based on the app's memory limit, and this limit was not settable by the user. In 5.2, we have made the redis limits visible and configurable by the user. Ideally, our 5.2 migration script could have been smarter and allocated more memory to redis.
Wanted to leave some notes here... The server was an EC2 R5 xlarge instance. It worked well, but when you resized any app, it would just hang, and the whole server would eventually stop responding. One curious thing was that the server had 32GB of RAM and ~20GB of it was sitting in buff/cache in the free -m output; I have never seen the kernel cache that much. We also found this backtrace in the dmesg output:
With the explanation: "By default Linux uses up to 40% of the available memory for file system caching. After this mark has been reached, the file system flushes all outstanding data to disk, causing all following IOs to go synchronous. For flushing this data out to disk there is a time limit of 120 seconds by default. In the case here, the IO subsystem is not fast enough to flush the data within 120 seconds." Crazy 🙂 After we put those settings in, it actually worked (!). Still cannot believe that choosing the right AWS instance type is that important.
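The exact settings used aren't quoted in the thread, so treat this as an assumption: the usual mitigation for this "blocked for more than 120 seconds" symptom is to lower the dirty-page writeback thresholds so the kernel flushes earlier and in smaller batches, along these lines:

```sh
# Commonly recommended values for slow IO subsystems (assumption, not the
# confirmed fix from this thread): flush dirty pages sooner and in smaller batches.
sysctl -w vm.dirty_background_ratio=5   # start background writeback at 5% of RAM
sysctl -w vm.dirty_ratio=10             # force synchronous writes at 10% of RAM
# Add the same keys to /etc/sysctl.conf to persist them across reboots.
```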
It would be great if you could update this thread with your findings. Also, please note that the memory settings for the addons are currently not preserved across app restores or even server reboots. We are working on a fix and you can follow its status at https://git.cloudron.io/cloudron/box/issues/566