High latency with redis
-
@harveywillj
I suggest getting a second Cloudron server as a staging / development server.
There you can install the WordPress from a backup and debug to your heart's content without interrupting the live site. Even if you destroy the whole staging server, the live one is still up and running. A clone of the WordPress app would also work, but since it is still on the same server, it might not be advisable for a real staging / development approach.
But the app clone approach might help you debug what is causing the problem here.
A clone will have its own redis service.
This would also show if only the clone with redis enabled is affected or both.
That would narrow it down a bit.
-
As mentioned, I do have a second server and second Cloudron instance and have tested on that, and it's the same. The thing is, this was happening right from the start when I installed WordPress. I'm not sure if it's just a misconfiguration of redis upon install and that's what's going on, but I don't really know about that stuff.
-
This is a back-and-forth email chain with Joseph @ Cloudron. I can't really format it any better as it's straight from a Word doc and the forum has removed all the formatting. Apologies.
Hi,
I am not sure if there is an issue with the Redis application or how I can go about diagnosing what is causing this issue. On my WordPress install, the redis latency can go as high as 10000ms, stay there, and then drop back down to very low numbers. Normally it spikes to around 500-1000ms and causes extreme slowdowns on our site. Our server is more than capable of running WordPress/WooCommerce, being a 12-core AMD EPYC 4464P with 64 GB RAM and NVMe SSDs. Can you give me some guidance or help me diagnose what could be causing this? I have set up caching, Cloudflare etc., as well as optimising the database and installing high-speed keys, but nothing helps. Most of the time the latency will stay quite high, but when I restart the WordPress instance it clears and is okay again for a while with hardly any issues. I have attached a screenshot of the latency. Any help would be great, please.
But to answer your question: it's hard to know where the issue is without debugging. Ultimately, WP and Redis are just running in docker containers. You can do docker ps to get the container ids. Given they are just local processes, I am not sure why there is so much delay. You have to do some basic query checks and see. For example, there is a redis button in the Web Terminal of the WordPress app. Maybe you can write a script to query redis locally from the redis container and then later from the WP container to localize the problem. The redis config is in /etc/redis/redis.conf of the redis container if you want to inspect it.
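For example, something along these lines (a rough sketch; it assumes redis-cli is available in the app container, and uses the CLOUDRON_REDIS_* environment variables the platform exposes to apps with the redis addon):

# list the containers; the redis container name includes the app id
docker ps --format '{{.ID}}  {{.Names}}' | grep -i redis
# measure latency from inside the redis container itself
# (add -a <password> if the instance requires auth)
docker exec -it <redis-container-id> redis-cli --latency
# then measure from the WordPress app container over the docker network
docker exec -it <wp-container-id> bash -c 'redis-cli -h "$CLOUDRON_REDIS_HOST" -p "$CLOUDRON_REDIS_PORT" -a "$CLOUDRON_REDIS_PASSWORD" --latency'

If the numbers are low from the redis container but high from the WP container, the delay is between the containers rather than in redis itself.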
Thanks for the reply. I'm just very stuck on what I am doing. I installed Cloudron and paid for it thinking I would get some support and that it would be user-friendly. I have installed New Relic on my server but am not sure where to look for redis. I do see lots of redis containers, which I am not sure is correct or not, please see the images. A push in the right direction would be great.
Hi,
redis, like the other addons on Cloudron, runs isolated in its own docker container. As you can see in the screenshot you attached, each app which uses redis gets its own redis instance. I am not familiar with New Relic or how to configure it to be useful in such a setup.
Do you see any specific redis instance having an issue and, if so, which app is using it? (You can correlate those by the id of the container; the redis container name will contain the corresponding app id.)
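For example (a sketch, assuming the redis-<app-id> container naming visible in your screenshot):

docker ps --filter 'name=redis-' --format '{{.ID}}  {{.Names}}'
# the UUID in each container name is the app id, which you can match
# against the app shown in the Cloudron dashboard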
It's a WordPress instance I am having issues with. I just checked the latency now and it has spiked to 15 seconds, which of course is extremely high. I'm not familiar with New Relic either; I just installed it as people recommended it for monitoring. Do you offer any one-time payments for support in regards to this? I'm not sure if it's a bad install (I have reinstalled already though) or something within my WordPress. I do have Query Monitor installed on WordPress but nothing looks out of the ordinary, and redis is connected fine.
Hi,
where do you see this latency, and is the server overall impacted by it or just the one app (the WordPress instance)? We are not WordPress experts at Cloudron who can debug such things. Do you have any extra WordPress addons installed which might cause that?
Did you reinstall the app with the same result, or did you reinstall the server?
Once you have identified which WordPress, and thus maybe which redis container, is affected, it may help to increase its memory limit.
Hi,
I only have one app on this server (WordPress Dev). I tried reinstalling the server as well as the WordPress instance and importing the WordPress content back over, but it's the same issue. My redis service has 16 GB, which is plenty. I am trying to diagnose this, but it's quite hard to replicate even when removing plugins etc., as it's something that happens over time. How can I connect to the redis server? Someone who is helping me said this: “The problem with the config is that the redis service itself is not running on the live server and the defined hostname is "redis-d1942f3e-0d8f-488c-be87-05b9884bcd8d" - which I believe is from Cloudron.”
Hi,
I am not quite following. In the screenshot you attached earlier, you had various redis-<id> containers running. I am not sure where those would come from if not from other apps.
Furthermore, redis does run locally, just in a docker container on localhost, so that is not the issue here either. Both WordPress and redis, amongst other services, run isolated on the same host.
If you open a webterminal into the app through the Cloudron dashboard, there is a button at the top to connect to the app's redis service.
Sorry, that screenshot was from my other Cloudron installation; the one I have issues with just has a single WordPress install on it. I have since uninstalled New Relic as I am not sure what I'm looking at within there.
Just had a reply from the person helping me:
“I've managed to connect to the Redis service and have changed your eviction policy to "allkeys-lfu". It should allow Redis to automatically manage memory by removing the least useful items. However, when I tried to run CONFIG REWRITE I got the following error: (error) ERR Rewriting config file: Permission denied. This seems to be common with managed environments like Cloudron, where Redis runs with restricted permissions: the Redis server process doesn't have permission to modify its own configuration file. This generally means that the eviction policy change is not permanent. I also can't find the redis.conf file anywhere. Do you have any idea where I can find it?”
Hi,
that is correct; the redis instance is fully managed by the platform and even runs on a mostly read-only file system for various reasons. The config file is at /run/redis.conf within the container, but any changes there are not persistent, as it is generated on startup.
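The policy can still be changed at runtime for testing (a sketch using standard redis-cli commands, e.g. from the redis session in the app's Web Terminal; the change only lasts until the redis process restarts):

CONFIG GET maxmemory-policy
CONFIG SET maxmemory-policy allkeys-lfu
# CONFIG REWRITE fails here with "Permission denied", as you saw,
# because the generated config file is not writable by the redis process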
I am not a redis expert, but if the eviction policy were keeping too many keys in your case, memory should be the bottleneck, which you indicated it is not.
But it would be great if you would share any findings on your side. Ideally, of course, this would all be handled in the forum, as other users of larger WordPress instances on Cloudron might be able to give more insights. We are only experts on the Cloudron system, not the individual apps.
Hi, thanks for the insight. I have had another response from them:
“I've found that, thanks. However, I get the following error when I try to modify the redis.conf file: Error response from daemon: container rootfs is marked read-only. This seems to be a common issue with Cloudron containers, as they are designed to be read-only for security purposes. I would suggest contacting Cloudron support so they can configure the redis.conf on their end by changing the eviction policy to "allkeys-lfu".”
How can we edit the conf file, or change it from read-only and then back again?
Hi,
as the configuration file is created on startup, changes to it are by design not persistent. To test this one change though, you can use normal docker tooling to exec into the redis container, change the conf file in /run, and then restart the redis process inside the container:
supervisorctl restart redis
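Putting that together, roughly (a sketch; it assumes the generated /run/redis.conf contains a maxmemory-policy line, and the edit is lost whenever the container is recreated):

# on the host: find and enter the app's redis container
docker ps --filter 'name=redis-' --format '{{.ID}}  {{.Names}}'
docker exec -it <redis-container-id> /bin/bash
# inside the container: /run is writable even though the rootfs is read-only
sed -i 's/^maxmemory-policy .*/maxmemory-policy allkeys-lfu/' /run/redis.conf
# restart the redis process so it re-reads the generated config
supervisorctl restart redis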
Hi, thanks for the info. Just had another reply: “Hi Harvey, I've also taken a quick look and it seems like the issue may be the key eviction policy that is set by Cloudron - "maxmemory-policy noeviction". This policy means that when Redis reaches its memory limit (set by maxmemory), it will not evict any keys and will instead return errors on write operations. This can cause the site to hang when memory fills up.”
I don't think this would be the issue though, as my memory for redis is set to 16 GB and usage has never really gone above a few GB. I'm not sure if they are suggesting I move away from Cloudron, as there are quite a lot of limitations.
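For what it's worth, the actual usage and limits can be confirmed from the redis session (standard redis-cli commands; a sketch):

INFO memory                  # compare used_memory_human with maxmemory_human
CONFIG GET maxmemory
CONFIG GET maxmemory-policy
INFO stats                   # evicted_keys stays 0 under noeviction; writes fail with OOM errors instead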
-
@james Okay, I do have other WordPress installs running fine. Obviously I don't really want to re-create the website, as it has taken me months to do, and I only really saw the issue when it was pushed live and started getting traffic, which makes sense.
To note, these WordPress installs are on a different server. But when cloning the install to the "second" server it does the same thing.
-
More than happy to provide any access you need to my Cloudron instance or WP install :). Just very frustrating, as I have had people look into the issue but they can't get very far due to Cloudron limitations: not allowing things, files not being persistent, etc.
-
@harveywillj said in High latency with redis:
Okay, I do have other WordPress installs running fine.
That indicates that this WordPress install has something special going on: either plugins, custom code, the theme, or something else.
But since it is something this custom, it does not fall into the scope of the forum or generic Cloudron mail support. I suggest either requesting specialized support, which will cost extra money, or debugging this problem yourself.
Start with the plugins and cross-compare them with your other WordPress instances that do not have this issue. Unfortunately, I am unable to assist any further than this.
-
And to answer your question about plugins: I am not sure. I have the standard plugins like WooCommerce, Elementor etc. Nothing obscure. I turned on Redis 15 mins ago and it's already at 500ms and rising. Logs are below:
Jun 12 08:55:40 ==> Starting supervisor
Jun 12 08:55:40 2025-06-12 07:55:40,317 INFO Included extra file "/etc/supervisor/conf.d/redis-service.conf" during parsing
Jun 12 08:55:40 2025-06-12 07:55:40,317 INFO Included extra file "/etc/supervisor/conf.d/redis.conf" during parsing
Jun 12 08:55:40 2025-06-12 07:55:40,317 INFO Set uid to user 0 succeeded
Jun 12 08:55:40 2025-06-12 07:55:40,319 INFO RPC interface 'supervisor' initialized
Jun 12 08:55:40 2025-06-12 07:55:40,319 CRIT Server 'inet_http_server' running without any HTTP authentication checking
Jun 12 08:55:40 2025-06-12 07:55:40,319 INFO supervisord started with pid 1
Jun 12 08:55:41 2025-06-12 07:55:41,320 INFO spawned: 'redis' with pid 13
Jun 12 08:55:41 2025-06-12 07:55:41,321 INFO spawned: 'redis-service' with pid 14
Jun 12 08:55:41 13:C 12 Jun 2025 07:55:41.326 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Jun 12 08:55:41 13:C 12 Jun 2025 07:55:41.326 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Jun 12 08:55:41 13:C 12 Jun 2025 07:55:41.326 * Redis version=7.4.2, bits=64, commit=00000000, modified=0, pid=13, just started
Jun 12 08:55:41 13:C 12 Jun 2025 07:55:41.326 * Configuration loaded
Jun 12 08:55:41 13:M 12 Jun 2025 07:55:41.326 * monotonic clock: POSIX clock_gettime
Jun 12 08:55:41 13:M 12 Jun 2025 07:55:41.327 # Failed to write PID file: Permission denied
Jun 12 08:55:41 13:M 12 Jun 2025 07:55:41.327 * Running mode=standalone, port=6379.
Jun 12 08:55:41 13:M 12 Jun 2025 07:55:41.327 * Server initialized
Jun 12 08:55:41 13:M 12 Jun 2025 07:55:41.327 * Ready to accept connections tcp
Jun 12 08:55:41 Redis service endpoint listening on http://:::3000
Jun 12 08:55:42 2025-06-12 07:55:42,391 INFO success: redis entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Jun 12 08:55:42 2025-06-12 07:55:42,391 INFO success: redis-service entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Jun 12 08:55:55 [GET] /healthcheck
Jun 12 08:56:18 [GET] /healthcheck
Jun 12 08:56:42 13:M 12 Jun 2025 07:56:42.009 * 10000 changes in 60 seconds. Saving...
Jun 12 08:56:42 13:M 12 Jun 2025 07:56:42.011 * Background saving started by pid 36
Jun 12 08:56:42 36:C 12 Jun 2025 07:56:42.143 * DB saved on disk
Jun 12 08:56:42 36:C 12 Jun 2025 07:56:42.144 * Fork CoW for RDB: current 2 MB, peak 2 MB, average 1 MB
Jun 12 08:56:42 13:M 12 Jun 2025 07:56:42.213 * Background saving terminated with success
Jun 12 08:57:36 [GET] /healthcheck
Jun 12 08:58:02 13:M 12 Jun 2025 07:58:02.382 * 10000 changes in 60 seconds. Saving...
Jun 12 08:58:02 13:M 12 Jun 2025 07:58:02.384 * Background saving started by pid 42
Jun 12 08:58:02 42:C 12 Jun 2025 07:58:02.541 * DB saved on disk
Jun 12 08:58:02 42:C 12 Jun 2025 07:58:02.542 * Fork CoW for RDB: current 7 MB, peak 7 MB, average 4 MB
Jun 12 08:58:02 13:M 12 Jun 2025 07:58:02.585 * Background saving terminated with success
Jun 12 08:59:33 13:M 12 Jun 2025 07:59:33.849 * 10000 changes in 60 seconds. Saving...
Jun 12 08:59:33 13:M 12 Jun 2025 07:59:33.851 * Background saving started by pid 43
Jun 12 08:59:34 43:C 12 Jun 2025 07:59:34.037 * DB saved on disk
Jun 12 08:59:34 43:C 12 Jun 2025 07:59:34.038 * Fork CoW for RDB: current 3 MB, peak 3 MB, average 2 MB
Jun 12 08:59:34 13:M 12 Jun 2025 07:59:34.052 * Background saving terminated with success
Jun 12 09:01:12 13:M 12 Jun 2025 08:01:12.992 * 10000 changes in 60 seconds. Saving...
Jun 12 09:01:12 13:M 12 Jun 2025 08:01:12.994 * Background saving started by pid 44
Jun 12 09:01:13 44:C 12 Jun 2025 08:01:13.210 * DB saved on disk
Jun 12 09:01:13 44:C 12 Jun 2025 08:01:13.211 * Fork CoW for RDB: current 3 MB, peak 3 MB, average 2 MB
Jun 12 09:01:13 13:M 12 Jun 2025 08:01:13.297 * Background saving terminated with success
Jun 12 09:02:14 13:M 12 Jun 2025 08:02:14.082 * 10000 changes in 60 seconds. Saving...
Jun 12 09:02:14 13:M 12 Jun 2025 08:02:14.085 * Background saving started by pid 45
Jun 12 09:02:14 45:C 12 Jun 2025 08:02:14.348 * DB saved on disk
Jun 12 09:02:14 45:C 12 Jun 2025 08:02:14.349 * Fork CoW for RDB: current 3 MB, peak 3 MB, average 2 MB
Jun 12 09:02:14 13:M 12 Jun 2025 08:02:14.387 * Background saving terminated with success
Jun 12 09:03:15 13:M 12 Jun 2025 08:03:15.053 * 10000 changes in 60 seconds. Saving...
Jun 12 09:03:15 13:M 12 Jun 2025 08:03:15.055 * Background saving started by pid 46
Jun 12 09:03:15 46:C 12 Jun 2025 08:03:15.222 * DB saved on disk
Jun 12 09:03:15 46:C 12 Jun 2025 08:03:15.224 * Fork CoW for RDB: current 2 MB, peak 2 MB, average 1 MB
Jun 12 09:03:15 13:M 12 Jun 2025 08:03:15.256 * Background saving terminated with success
Jun 12 09:04:22 13:M 12 Jun 2025 08:04:22.408 * 10000 changes in 60 seconds. Saving...
Jun 12 09:04:22 13:M 12 Jun 2025 08:04:22.410 * Background saving started by pid 47
Jun 12 09:04:22 47:C 12 Jun 2025 08:04:22.655 * DB saved on disk
Jun 12 09:04:22 47:C 12 Jun 2025 08:04:22.657 * Fork CoW for RDB: current 4 MB, peak 4 MB, average 2 MB
Jun 12 09:04:22 13:M 12 Jun 2025 08:04:22.714 * Background saving terminated with success
Jun 12 09:05:23 13:M 12 Jun 2025 08:05:23.070 * 10000 changes in 60 seconds. Saving...
Jun 12 09:05:23 13:M 12 Jun 2025 08:05:23.074 * Background saving started by pid 48
Jun 12 09:05:23 48:C 12 Jun 2025 08:05:23.399 * DB saved on disk
Jun 12 09:05:23 48:C 12 Jun 2025 08:05:23.400 * Fork CoW for RDB: current 9 MB, peak 9 MB, average 5 MB
Jun 12 09:05:23 13:M 12 Jun 2025 08:05:23.404 * Background saving terminated with success
Jun 12 09:05:55 [GET] /healthcheck
Jun 12 09:06:34 13:M 12 Jun 2025 08:06:34.360 * 10000 changes in 60 seconds. Saving...
Jun 12 09:06:34 13:M 12 Jun 2025 08:06:34.364 * Background saving started by pid 54
Jun 12 09:06:34 54:C 12 Jun 2025 08:06:34.701 * DB saved on disk
Jun 12 09:06:34 54:C 12 Jun 2025 08:06:34.702 * Fork CoW for RDB: current 5 MB, peak 5 MB, average 3 MB
Jun 12 09:06:34 13:M 12 Jun 2025 08:06:34.706 * Background saving terminated with success
Jun 12 09:07:35 13:M 12 Jun 2025 08:07:35.026 * 10000 changes in 60 seconds. Saving...
Jun 12 09:07:35 13:M 12 Jun 2025 08:07:35.030 * Background saving started by pid 55
Jun 12 09:07:35 55:C 12 Jun 2025 08:07:35.592 * DB saved on disk
Jun 12 09:07:35 55:C 12 Jun 2025 08:07:35.594 * Fork CoW for RDB: current 3 MB, peak 3 MB, average 2 MB
Jun 12 09:07:35 13:M 12 Jun 2025 08:07:35.666 * Background saving terminated with success
Jun 12 09:08:57 13:M 12 Jun 2025 08:08:57.634 * 10000 changes in 60 seconds. Saving...
Jun 12 09:08:57 13:M 12 Jun 2025 08:08:57.638 * Background saving started by pid 56
Jun 12 09:08:58 56:C 12 Jun 2025 08:08:58.208 * DB saved on disk
Jun 12 09:08:58 56:C 12 Jun 2025 08:08:58.210 * Fork CoW for RDB: current 6 MB, peak 6 MB, average 3 MB
Jun 12 09:08:58 13:M 12 Jun 2025 08:08:58.243 * Background saving terminated with success
Jun 12 09:10:18 13:M 12 Jun 2025 08:10:18.012 * 10000 changes in 60 seconds. Saving...
Jun 12 09:10:18 13:M 12 Jun 2025 08:10:18.018 * Background saving started by pid 57
Jun 12 09:10:18 57:C 12 Jun 2025 08:10:18.598 * DB saved on disk
Jun 12 09:10:18 57:C 12 Jun 2025 08:10:18.600 * Fork CoW for RDB: current 4 MB, peak 4 MB, average 2 MB
Jun 12 09:10:18 13:M 12 Jun 2025 08:10:18.622 * Background saving terminated with success
@harveywillj said in High latency with redis:
Jun 12 09:01:12 13:M 12 Jun 2025 08:01:12.992 * 10000 changes in 60 seconds. Saving...
10k changes in 60 seconds seems like a lot.
But to take a step back: for many apps, redis is a persistent store (like a database). In the context of WP, redis is optional and is just an object cache. See https://wordpress.org/plugins/wp-redis/ . Is there a reason you want to use redis at all? Just disable redis and use the local file cache for objects. In all cases, a local cache will be faster than redis. The reason redis is even present is because some people back in the day insisted on it for reasons I don't remember. In fact, we should probably remove redis altogether from the WP packages.
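As an aside, that log line is just redis's RDB snapshot rule firing, not anything WP-specific: the stock redis defaults include a "save 60 10000" rule (snapshot when 10000 or more keys changed within 60 seconds), and each snapshot forks the process. A sketch to confirm the rule and the write volume (standard redis-cli commands):

CONFIG GET save      # shows the active snapshot rules
INFO stats           # instantaneous_ops_per_sec, total_commands_processed
INFO persistence     # rdb_changes_since_last_save, latest_fork_usec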
-
Oh really, that's interesting. Is this local cache something I have to set up or is it default behaviour in WP? I just figured redis helps a lot, especially with backend admin jobs. We have a large website pushing £3M+ a year with 2k+ products and counting.
-
@harveywillj redis (in WP) has nothing to do with backend admin jobs; it's used as an object cache in WP. There are many levels and kinds of caching. But if you want object caching, you can use a plugin like W3 Total Cache and set it up to use APCu (an in-memory cache). There are many options to cache and minify pages on disk, or something like Cloudflare to proxy requests.
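If you go the APCu route, it is worth checking first that the extension is actually available in the app container (a sketch from the app's Web Terminal; whether APCu ships enabled in the Cloudron WP package is an assumption to verify):

php -m | grep -i apcu                               # lists loaded PHP extensions
php -r 'var_dump(function_exists("apcu_store"));'   # true if APCu is usable from the CLI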
I will look into making redis disabled by default in any case.