Resizing an app in Cloudron seems to take much longer than it used to
d19dotca last edited by girish
This may just be me misremembering since I don't do this often, but lately I've been resizing some of my apps to follow my own "standard" for each app type, and it seems to take the app offline much longer than it used to. When I did this several months back, I thought the app was only offline for roughly a minute, if that. Now it seems to take almost 2-5 minutes depending on the app (image size?), and I'm not sure if that's honestly how it's always been and I just never noticed, or if it's a performance regression over the last several Cloudron updates. Anyone else notice this, or is it just me?
Edit: Slightly different topic, but in case it has always taken this long and I'm just going crazy, lol... why does it take this long to change the memory of an app? I can change the memory instantly on any of the system services, so why do app changes take so much longer to do the same thing?
Changing the memory limit currently recreates the Docker container for the app, which also means the app is restarted. Essentially, if you restart the app without resizing the memory limit, it should take more or less the same time, since container recreation itself is usually quite fast.
I was waiting for someone to notice before optimizing
At the container level, adjusting the memory limit does not require us to re-create or restart the container. Indeed, this is how the Services view memory limit code works: it does not re-create or restart the service. For the app memory limit, however, we re-create and restart the container because that was what was easier, and we haven't optimized that code path. IIRC, the reason for this was that some apps adjust their settings based on the memory allocated to the container. For example, I think WordPress configures itself to run more Apache processes when given more memory. This logic is part of the app's startup script, so restarting the app handles these cases.
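As an illustration (not Cloudron's actual code), a startup script of this kind might read the cgroup memory limit and size its Apache worker pool from it. The `workers_for` helper, the 50 MiB-per-worker ratio, and the fallback default are all made up for the sketch:

```shell
#!/bin/bash
# Hypothetical sketch of a startup script that scales a worker pool to the
# container's memory limit (roughly what a WordPress-style package might do).

# Derive a worker count from a memory limit in bytes: one worker per
# 50 MiB, with a floor of 5. Both numbers are invented for illustration.
workers_for() {
    local mem_bytes=$1
    local workers=$(( mem_bytes / (50 * 1024 * 1024) ))
    [ "$workers" -lt 5 ] && workers=5
    echo "$workers"
}

# Inside a container, the limit can be read from the cgroup filesystem
# (cgroup v2 path shown; cgroup v1 uses memory/memory.limit_in_bytes).
# Fall back to 1 GiB when the file is absent or unlimited ("max").
mem_limit=$(cat /sys/fs/cgroup/memory.max 2>/dev/null || echo $((1024 * 1024 * 1024)))
[ "$mem_limit" = "max" ] && mem_limit=$((1024 * 1024 * 1024))

echo "MaxRequestWorkers $(workers_for "$mem_limit")"
```

Because the limit is only read here, at startup, a changed memory limit only takes effect after the app restarts, which is why recreating or at least restarting the container handles these cases.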
d19dotca last edited by
@girish @nebulon - Thank you for your responses. It seems like this could be improved a bit further though, no? I ask because when I deploy an app for the first time, it passes the "Creating container" step pretty quickly, but when I resize it, that step in particular seems to take a lot longer. Theoretically they should take the same amount of time, no? But that isn't what I observe in my environment.
Yes, it should theoretically take the same amount of time (at least, in my demo setup it is the same). In any case, I think the best fix is to skip recreating the container entirely when just the memory limits are changed.
d19dotca last edited by
@girish Ah interesting, maybe it's all in my head then. haha. I can run some more tests if it'll help. Ultimately I just want to reduce downtime to near-zero whenever possible.
But you mentioned that some apps require the restart, so does that mean they'll also require the container to be recreated?
I'm okay with it the way it is if it prevents other issues. It was just a concern because I thought the time had actually increased lately whenever I change the memory, but maybe I'm wrong there and it's all in my head.
Whatever you feel works best to minimize downtime, while ideally not causing issues for apps that may require the full restart/recreation, that will be best.
Correct. We don't need to re-create the container to change the memory limit. But we still need to restart it after adjusting the memory limit because of limitations in our packaging.
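For reference, the Docker CLI does support changing the limit in place: `docker update` rewrites the cgroup limits of a running container without recreating it. A minimal sketch of the update-then-restart flow described above (the `resize_app` helper and the `my-app` container name are made up; setting `DOCKER=echo` turns it into a dry run):

```shell
#!/bin/bash
# Sketch of resizing without container recreation: update the cgroup limit
# in place, then restart so the app's startup script re-reads it.
# Set DOCKER=echo for a dry run; leave it unset to run for real
# (which requires a running Docker daemon).
DOCKER=${DOCKER:-docker}

resize_app() {
    local container=$1 limit=$2
    # docker update changes memory limits on a running container in place
    $DOCKER update --memory "$limit" --memory-swap "$limit" "$container"
    # a plain restart is enough for the startup script to see the new limit
    $DOCKER restart "$container"
}

# Dry run against a hypothetical container name:
DOCKER=echo resize_app my-app 1g
```

The dry run just prints the two Docker commands that would be issued, which is the cheap path: no container teardown, only a restart.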
I think over time we have learned that it is not a good idea to set up Apache to auto-scale based on the container memory limit; those things depend heavily on the app and plugin usage. I think Java apps require a restart at this point, since the JVM gets the heap size passed in as a flag on startup; maybe there is a workaround for this as well, have to investigate.
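On the Java point, one possible workaround (not something Cloudron necessarily does) is to drop the fixed heap flag and let the JVM size its heap from the container's cgroup limit, which it reads itself since JDK 10; `app.jar` here is just a placeholder:

```shell
# Instead of a fixed heap flag like -Xmx512m, size the heap as a
# percentage of the container's memory limit (container support is on
# by default since JDK 10; app.jar is a hypothetical application):
java -XX:MaxRAMPercentage=75.0 -jar app.jar
```

A restart is still needed to pick up a new limit, but the startup command no longer has to be rewritten whenever the limit changes.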
But at that point, we can at least make the memory limit code not re-create the container, which I think is where the bulk of the slowness is.