Shouldn't we get an alert when a service container fails / is unhealthy?
-
Hello,
I got an alert when some backups were failing to run ("request timedout"), but it took me longer than I'd like to realize that the issue was not the backup endpoint, but rather that a redis container was in a bad state for a particular WordPress app install. Restarting the affected redis container didn't help either; I had to restart Docker entirely for it to resolve. Backups are succeeding again now, thankfully.
My concern, though, is that if a service container like redis isn't in a healthy state, shouldn't we be alerted soon after it happens? If not, then I'd suggest adding that. I know we can get alerts for certain events, but I'm realizing that those apparently don't apply to services, which is unfortunate.
-
Usually, the services go down only when they run out of memory. That situation is monitored and covered by the "App ran out of memory" alert. I can change that text to "App/Service ran out of memory".
The other situations where services go down are during updates, when the disk is full, or due to some corruption. When that happens, the apps go down as well and you get "App down" notifications.
This is why there isn't a separate alert for services.
Did you find out why redis was down and why a docker restart fixed it? I wonder if restarting docker just freed up memory.
@girish it definitely didn't look like a memory issue: when I found it in the orange state, it was showing only about 20% usage of its allocated memory. Mind you, it could have hit a memory limit earlier, restarted, and then failed to come up for some reason? It was odd: the logs showed it was ready for connections, but nothing happened after that. There weren't any errors that I could find. I'll review the logs in more detail later on, though, and attach them here if I can.
I suppose it could be related to the 8.3.1 update, as that was installed after the last successful backup, but I'm doubtful of that since it was running fine on 8.3.0 earlier, and this isn't related to the PostgreSQL issues found previously.
-
@girish, I appear to be running into the same issue again. I got an alert that my backup failed with "timedout", and when I checked the Services page, it showed one redis instance orange instead of green. Restarting the redis container doesn't seem to resolve the issue; things tend to look better when I restart docker entirely.
But even that isn't always enough. The logs show no issues for the redis container, and later on, a different redis container hit the same issue. It's very odd. This is definitely a recent issue for me; it seems related to when I installed 8.3.1, but I'm not certain yet.
Logs look fine in the container:
Mar 24 22:16:03 ==> Starting supervisor
Mar 24 22:16:04 2025-03-25 05:16:04,052 INFO Included extra file "/etc/supervisor/conf.d/redis-service.conf" during parsing
Mar 24 22:16:04 2025-03-25 05:16:04,053 INFO Included extra file "/etc/supervisor/conf.d/redis.conf" during parsing
Mar 24 22:16:04 2025-03-25 05:16:04,053 INFO Set uid to user 0 succeeded
Mar 24 22:16:04 2025-03-25 05:16:04,059 INFO RPC interface 'supervisor' initialized
Mar 24 22:16:04 2025-03-25 05:16:04,059 CRIT Server 'inet_http_server' running without any HTTP authentication checking
Mar 24 22:16:04 2025-03-25 05:16:04,059 INFO supervisord started with pid 1
Mar 24 22:16:05 2025-03-25 05:16:05,061 INFO spawned: 'redis' with pid 13
Mar 24 22:16:05 2025-03-25 05:16:05,062 INFO spawned: 'redis-service' with pid 14
Mar 24 22:16:05 13:C 25 Mar 2025 05:16:05.092 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Mar 24 22:16:05 13:C 25 Mar 2025 05:16:05.099 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Mar 24 22:16:05 13:C 25 Mar 2025 05:16:05.099 * Redis version=7.4.2, bits=64, commit=00000000, modified=0, pid=13, just started
Mar 24 22:16:05 13:C 25 Mar 2025 05:16:05.099 * Configuration loaded
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.099 * monotonic clock: POSIX clock_gettime
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.105 # Failed to write PID file: Permission denied
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.105 * Running mode=standalone, port=6379.
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.106 * Server initialized
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.106 * Loading RDB produced by version 7.4.2
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.106 * RDB age 169 seconds
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.106 * RDB memory usage when created 3.71 Mb
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.130 * Done loading RDB, keys loaded: 7119, keys expired: 0.
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.130 * DB loaded from disk: 0.024 seconds
Mar 24 22:16:05 13:M 25 Mar 2025 05:16:05.130 * Ready to accept connections tcp
Mar 24 22:16:05 Redis service endpoint listening on http://:::3000
Mar 24 22:16:06 2025-03-25 05:16:06,473 INFO success: redis entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Mar 24 22:16:06 2025-03-25 05:16:06,474 INFO success: redis-service entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Mar 24 22:21:06 13:M 25 Mar 2025 05:21:06.032 * 10 changes in 300 seconds. Saving...
Mar 24 22:21:06 13:M 25 Mar 2025 05:21:06.032 * Background saving started by pid 26
Mar 24 22:21:06 26:C 25 Mar 2025 05:21:06.049 * DB saved on disk
Mar 24 22:21:06 26:C 25 Mar 2025 05:21:06.049 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 0 MB
Mar 24 22:21:06 13:M 25 Mar 2025 05:21:06.132 * Background saving terminated with success
Mar 24 22:26:07 13:M 25 Mar 2025 05:26:07.054 * 10 changes in 300 seconds. Saving...
Mar 24 22:26:07 13:M 25 Mar 2025 05:26:07.055 * Background saving started by pid 27
Mar 24 22:26:07 27:C 25 Mar 2025 05:26:07.073 * DB saved on disk
Mar 24 22:26:07 27:C 25 Mar 2025 05:26:07.073 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 0 MB
Mar 24 22:26:07 13:M 25 Mar 2025 05:26:07.155 * Background saving terminated with success
Notice that it's listening and ready for connections. However, one difference I do notice between a container in this state and the green ones is the lack of health check access logs. On a working one, for example, I see this:
Mar 24 22:16:03 ==> Starting supervisor
Mar 24 22:16:04 2025-03-25 05:16:04,123 INFO Included extra file "/etc/supervisor/conf.d/redis-service.conf" during parsing
Mar 24 22:16:04 2025-03-25 05:16:04,124 INFO Included extra file "/etc/supervisor/conf.d/redis.conf" during parsing
Mar 24 22:16:04 2025-03-25 05:16:04,124 INFO Set uid to user 0 succeeded
Mar 24 22:16:04 2025-03-25 05:16:04,128 INFO RPC interface 'supervisor' initialized
Mar 24 22:16:04 2025-03-25 05:16:04,129 CRIT Server 'inet_http_server' running without any HTTP authentication checking
Mar 24 22:16:04 2025-03-25 05:16:04,129 INFO supervisord started with pid 1
Mar 24 22:16:05 2025-03-25 05:16:05,130 INFO spawned: 'redis' with pid 12
Mar 24 22:16:05 2025-03-25 05:16:05,132 INFO spawned: 'redis-service' with pid 13
Mar 24 22:16:05 12:C 25 Mar 2025 05:16:05.138 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Mar 24 22:16:05 12:C 25 Mar 2025 05:16:05.138 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
Mar 24 22:16:05 12:C 25 Mar 2025 05:16:05.138 * Redis version=7.4.2, bits=64, commit=00000000, modified=0, pid=12, just started
Mar 24 22:16:05 12:C 25 Mar 2025 05:16:05.138 * Configuration loaded
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.138 * monotonic clock: POSIX clock_gettime
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.139 # Failed to write PID file: Permission denied
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.139 * Running mode=standalone, port=6379.
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.140 * Server initialized
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.140 * Loading RDB produced by version 7.4.2
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.140 * RDB age 170 seconds
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.140 * RDB memory usage when created 10.12 Mb
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.231 * Done loading RDB, keys loaded: 28836, keys expired: 0.
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.231 * DB loaded from disk: 0.091 seconds
Mar 24 22:16:05 12:M 25 Mar 2025 05:16:05.231 * Ready to accept connections tcp
Mar 24 22:16:05 Redis service endpoint listening on http://:::3000
Mar 24 22:16:06 2025-03-25 05:16:06,477 INFO success: redis entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Mar 24 22:16:06 2025-03-25 05:16:06,481 INFO success: redis-service entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Mar 24 22:19:39 [GET] /healthcheck
Mar 24 22:21:06 12:M 25 Mar 2025 05:21:06.083 * 10 changes in 300 seconds. Saving...
Mar 24 22:21:06 12:M 25 Mar 2025 05:21:06.083 * Background saving started by pid 30
Mar 24 22:21:06 30:C 25 Mar 2025 05:21:06.148 * DB saved on disk
Mar 24 22:21:06 30:C 25 Mar 2025 05:21:06.149 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 0 MB
Mar 24 22:21:06 12:M 25 Mar 2025 05:21:06.183 * Background saving terminated with success
Mar 24 22:26:07 12:M 25 Mar 2025 05:26:07.060 * 10 changes in 300 seconds. Saving...
Mar 24 22:26:07 12:M 25 Mar 2025 05:26:07.061 * Background saving started by pid 31
Mar 24 22:26:07 31:C 25 Mar 2025 05:26:07.124 * DB saved on disk
Mar 24 22:26:07 31:C 25 Mar 2025 05:26:07.125 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 0 MB
Mar 24 22:26:07 12:M 25 Mar 2025 05:26:07.161 * Background saving terminated with success
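For anyone comparing their own containers, the health endpoint can also be probed by hand. This is only a sketch: `probe_health` is an illustrative helper, port 3000 is the service endpoint shown in the logs above, and it assumes curl is available (on a Cloudron host you would typically run the curl inside the container via docker exec).

```shell
# probe_health: fetch a healthcheck URL with a short timeout and report status.
# On the host, a rough equivalent against the redis addon would be:
#   docker exec <redis-container> curl -fsS -m 5 http://localhost:3000/healthcheck
probe_health() {
  if curl -fsS -m 5 "$1" >/dev/null 2>&1; then
    echo "healthcheck OK"
  else
    echo "healthcheck FAILED or timed out"
  fi
}

# Example run against a port nothing is listening on:
probe_health "http://127.0.0.1:1/healthcheck"
```

A healthy container should answer quickly; a container in the stuck state described above would presumably hit the timeout path.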
Here's what it looks like in the UI:
-
I also face similar issues with apps such as WordPress or Shaarli, which sometimes stop responding to requests: they do not crash, but appear unhealthy to Cloudron or to any monitoring tool. It became annoying to have to restart apps manually several times a week, to the point that I had to script something to cover the missing monitoring / auto-restart for the more fragile apps running on Cloudron. As discussed in a thread on the Cloudron forum, this kind of healthcheck should ideally be part of the Cloudron platform. See https://forum.cloudron.io/post/101225 for the thread I mentioned and the mitigation (script) I'm using.
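The watchdog is roughly of this shape (a simplified sketch, not the actual script from the thread; the URL and app location are placeholders, and the restart is shown as a dry-run echo, where a real version would invoke the Cloudron CLI, e.g. `cloudron restart --app`):

```shell
# check_and_restart: probe an app URL; if it does not answer, trigger a restart.
# Sketch only: the restart is a dry-run echo here; on a real host you would
# swap in something like `cloudron restart --app "$app"`.
check_and_restart() {
  url="$1"; app="$2"
  if curl -fsS -m 10 "$url" >/dev/null 2>&1; then
    echo "$app ok"
  else
    echo "would restart $app"
  fi
}

# Run from cron every few minutes, once per fragile app (placeholder values):
check_and_restart "http://127.0.0.1:1/" "blog.example.com"
```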
-
When this happens again, can you check dmesg? Do you see any OOM messages? I can't think of a reason why those redis instances go down otherwise.
-
@girish No OOM errors seen. They often barely get close to the memory limit, either; it's usually only about 15-30% memory usage for the redis containers. I looked at dmesg but do not see anything yet. I'll take another look when it happens again.
Interestingly though, I do see a similar issue with the Graphite service too, which I filed as a separate report in case they're different. The main difference to me is that it doesn't block the backup process, so it's less of a blocker for me at the moment, but something that should still get resolved.
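For next time, the dmesg check can be wrapped in a tiny helper (a sketch; `oom_scan` is just an illustrative name, and the patterns match the usual kernel OOM-killer lines):

```shell
# oom_scan: filter log lines (e.g. from `dmesg -T`) for OOM-killer activity.
oom_scan() {
  grep -iE 'out of memory|oom-kill|killed process' || echo "no OOM events found"
}

# Intended usage on the host:
#   dmesg -T | oom_scan
# Demonstration with a sample kernel line:
printf 'Out of memory: Killed process 1234 (redis-server)\n' | oom_scan
# → Out of memory: Killed process 1234 (redis-server)
```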