I've synced the expiration to 7 days for now, to keep churn low until we have a reason to change it.
Rather than making it configurable, I would prefer a sensible default, since so far no one else has complained.
I think this means that the postgres server ran out of available connection slots, possibly due to too many clients or buggy clients. Some of those slots are reserved for admin/superuser connections, I guess so that maintenance is still possible.
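If you want to verify this, here's a rough sketch (assuming you can open a psql superuser shell against the server; credentials and host depend on your setup):

```
# How many connection slots are in use vs. the configured maximum
psql -U postgres -c "SELECT count(*) AS in_use FROM pg_stat_activity;"
psql -U postgres -c "SHOW max_connections;"
# Slots held back for superuser/maintenance logins
psql -U postgres -c "SHOW superuser_reserved_connections;"
```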
Which apps do you have installed that use postgres? (I think you have to dig into the CloudronManifests to figure that out.)
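One rough way to spot them, assuming you have the app package sources checked out locally (each app declares its addons in its CloudronManifest.json):

```
# List manifests that declare the postgresql addon
grep -l '"postgresql"' */CloudronManifest.json
```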
@vladimir Usually for LAMP apps, we tend to create separate apps, i.e. LAMP 7.3, 7.4, and in the future 8, are totally separate apps. So your app won't update automatically.
For addons, which are shared across apps, we don't have a way to do this. I guess one idea is to have the app "pin" itself to a specific MySQL version; that way we would have multiple MySQL databases with different versions running in parallel. Doable, but quite complicated! I think @luckow also reported that MySQL 8 broke his Drupal app.
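To illustrate the pinning idea (purely hypothetical, not something Cloudron supports today): it would effectively mean running several MySQL server versions side by side, roughly like:

```
# Hypothetical: two MySQL versions in parallel, each in its own
# container on its own host port
docker run -d --name mysql57 -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql:5.7
docker run -d --name mysql80 -e MYSQL_ROOT_PASSWORD=secret -p 3307:3306 mysql:8.0
```

Each app would then connect to whichever version it pinned itself to.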
The good news is that I don't expect such a big upgrade any time soon.
@girish I guess I'm wondering, though, why it'd say "Not available yet"... is that because I had restarted the server a few hours earlier? I don't normally notice that when I restart, though; it usually still shows data. Is it possible there's a bug here?
If the restarts are losing that data, then I'd think that's a bug, right, especially if it shows for some services but not all? To me it seems like either the collection isn't completing properly when it runs, which could explain why it shows values for some services but not all, or it's losing data it should be keeping. My gut tells me there's a bug here. Or am I way off?
I guess it's okay since we have a workaround to run that command when it happens; my brain is just wondering why it happened in the first place and how it could be prevented.
Okay... I may be coming around to this working properly after all. lol. Maybe I've been wrong this whole time in thinking it wasn't working correctly.
So, coincidentally, I was checking the mail server logs and saw another example of the same message go through to the same recipient from the same mail server. It was listed in the logs as "just now", so I quickly checked mxtoolbox and found that only 4 blocklists had listed it at that time, none of which were ones I was using.
Here is how it looked at the very moment I checked when it was "just now" in the logs:
Edit: Checking about 6 minutes later, I see the blocklists have already been updated to include more (Spamhaus Zen, in this case, would have caught it had it been listed about 5 minutes earlier):
So I guess we can probably mark this as resolved, as I now see conclusive evidence that the various blocklists just didn't have it listed until a few minutes after the message was received. Given how quickly they adapted, this spam attack on one of my users from those mail servers must have come right at the beginning of a spam wave. Kind of neat, actually, to see how real-time these lists are. haha.
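For anyone curious, you can spot-check a sending IP against one of these lists yourself with a plain DNS lookup (192.0.2.1 below is just a placeholder; substitute the actual sending IP):

```
# DNSBL lookups reverse the IP's octets and query the list's zone;
# an answer like 127.0.0.x means listed, NXDOMAIN/empty means not listed
dig +short 1.2.0.192.zen.spamhaus.org
```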
I think one idea is to patch the user ids to match the existing cloudron user ids (in the database directly). I don't know if this will work, but I am just trying this out with @yusf and will report back.
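For the record, the sort of thing I mean is roughly this (completely hypothetical; the real table and column names, and even which database client applies, depend on the app's schema):

```
# Hypothetical: point the app's user record at the existing Cloudron user id
mysql appdb -e "UPDATE users SET external_id = '<cloudron-user-id>' WHERE username = 'alice';"
```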
@freetommy If I understand you correctly, what you are saying is that when email forwarding is enabled in rainloop, the forwarding is done directly to the destination domain via port 25 instead of using Cloudron's email relay. Did I get that right?
@nebulon I thought it was perhaps the Broken Links Checker on one of my WordPress installs, as @marcusquinn noted it's widely known to sometimes cause such issues, but I've disabled that and I'm still getting these "mysql was restarted (OOM)" notifications (although they don't seem to appear in the event log?).
I note that the most recent one happened 5 minutes after my backups are due to start, and the previous two times it was 7-9 mins before updates were due to run.
I wonder if either of those (backup/update) processes might have something to do with it?
I guess I could just give mysql more memory and not worry, but it'd be nice to know what's happening and why...
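If anyone wants to dig further, a quick way to confirm whether the kernel OOM killer fired around those times (standard Linux commands, nothing Cloudron-specific):

```
# Kernel log entries from OOM kills
journalctl -k | grep -i -A2 'out of memory'
# or, on systems without a persistent journal (readable timestamps):
dmesg -T | grep -i 'killed process'
```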
@robi is right. @ryan, can you send us an email to support@ to move your plan to the free plan? Currently, we have to do this manually, i.e. move an account from a paid plan (even if cancelled) -> free plan.
@drpaneas Just give that redis more memory. The "e1ca318a" is the app id; if you put it in the search in the Apps view, you will know which app's redis needs a bit more memory. It was reported earlier that Pixelfed's redis needs more memory after the update. I guess this is because of the update to Redis 5.
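To see how close that redis is to its limit, something like this should work (a sketch; I'm assuming the addon container is named redis-<app id>, so adjust to whatever `docker ps` shows):

```
# Current memory footprint of the app's redis addon
docker exec redis-<app-id> redis-cli info memory | grep used_memory_human
```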