trying to scale N8N
I am running into a problem where n8n under heavy load will easily take down Cloudron. On the n8n website they talk about "queue mode", where one main instance of n8n drives worker instances.
For queue mode it is important to have n8n using Postgres or MySQL, and to have Redis running to coordinate the worker instances.
Beyond that... I am wondering whether it is going to be possible to make this happen while using Cloudron.
My current problem is that a load spike will bring in 2000 requests in a few minutes, which reliably crashes n8n. It seems premature to upgrade the whole server just to handle a load spike.
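For reference, a rough sketch of what the queue-mode setup described above looks like outside Cloudron. All hostnames, credentials, and the concurrency value below are placeholder assumptions, not values from this thread:

```shell
# Shared settings for the main instance and every worker.
# Workers need the same Redis and database access as the main instance.
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.example.com   # placeholder host
export QUEUE_BULL_REDIS_PORT=6379
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=pg.example.com          # placeholder host
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=changeme            # placeholder credential

# Main instance: receives webhooks and pushes executions onto the Redis queue.
n8n start

# On each worker machine, with the same env vars set:
n8n worker --concurrency=10
```

The point is that workers pull jobs from Redis and read/write the same database, which is why both have to be reachable from wherever the workers run.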
You would have to expose Postgres and keep the credentials the same, and both are features not currently available in Cloudron.
subven:
n8n in queue mode using workers seems similar to GitLab using GitLab Runner, but it isn't.
n8n workers are separate instances acting as workers for the main instance, so it's easy to scale. For n8n on Cloudron I don't see this as beneficial, since you could just increase the usable resources for your n8n app. Starting another n8n as a worker would just produce some overhead on your system. To add n8n worker instances from somewhere outside of your Cloudron, Redis and the database have to be reachable from the outside, which has not been the case so far.
So if your single instance brings down your server, maybe try to increase the usable CPU/RAM/SSD resources, and if that is not sufficient, scale up your server?
More or less that's correct, but the difference is that n8n workers need access to the database and Redis for the queue.
And they don't work correctly (they add a lot of delay) if you use webhooks.
subven:
> the difference is that n8n workers need access to the database and Redis for the queue. And they don't work correctly (they add a lot of delay)
The delay is actually what I was most excited about. n8n is really happy to crash during load spikes which may only last a few minutes. I was hoping that queue mode would allow me to accept 5000 requests over, say, 5 minutes, and then work through them over the course of an hour.
Just two notes on that:
- Remember that n8n is not an open-source project but uses a Sustainable Use License / fair-code model, so you have to pay if your income from it is more than 5k/year (you should check their website for the exact number; that is just the figure I remember).
- In that case queue mode can improve stability, but do it outside Cloudron, or at least that's my suggestion. Also, reply to the webhook at the start of the flow; do not wait for the process to finish before replying, or if a worker crashes, the requests queued/accepted by that worker will never be answered.
jdaviescoates:
> remember that n8n is not an OpenSource project but a Sustainable Use License/faircode so you have to pay if your income on it is more than 5k /y
I don't see any such numbers over at https://github.com/n8n-io/n8n/blob/master/LICENSE.md
I was not sure about that after the change to the fair-code license, but it was written in the Commons Clause license.
@roofboard Per this doc, the default is EXECUTIONS_PROCESS=own. This spins up a process for each execution - so 2000 processes in your case. Have you tried EXECUTIONS_PROCESS=main? Also, be sure to bump up the memory limit. Note the memory limit is an upper limit, so once the burst goes away, other apps can use the memory.
2000 requests doesn't seem like much at all to warrant additional workers and complexity (of course, this depends on what the execution is doing). Is it just I/O stuff?
There is also a good explanation of all this here.
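The EXECUTIONS_PROCESS change can be applied with the Cloudron CLI, the same way the Baserow variable is set later in this thread. The domain n8n.example.com below is a placeholder for your app's location, not a value from this thread:

```shell
# Run all executions in the main n8n process instead of
# spawning one process per execution.
# n8n.example.com is a placeholder; substitute your app's domain.
cloudron env set --app n8n.example.com EXECUTIONS_PROCESS=main
cloudron restart --app n8n.example.com
```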
At the end of the day, I found the culprit.
Also upgraded to a bigger server.
While yes, n8n will happily crash itself - I had a few performance bottlenecks, and for now n8n seems to be happy with the traffic. Here is a breakdown of the solution without setting up queue mode.
- n8n uses Postgres, so bumping up the RAM for n8n alone is not enough - that just grows the list of RAM-eating active processes. You also have to bump up the RAM for Postgres.
- I am using Baserow for logging and follow-up task queueing. By default Baserow only has 3 auth workers, which is not enough to cover heavy load. So I had to install the CLI and run the command: cloudron env set --app baserow.draglabs.com BASEROW_AMOUNT_OF_GUNICORN_WORKERS=9
Conclusion - Step 1 significantly improved the performance of n8n, dropping normal RAM consumption by about 30%.
Step 2 caused active processes to complete 10x faster, further reducing the n8n footprint.
Now that I have upgraded to 24 GB, I am only using 8.
@roofboard Hello, how are you?
Could you tell me how you did the n8n queue mode installation? I've tried and couldn't, and I don't understand your explanation very well.
@hugoo_souza10 Yes, that is accurate: I increased the "gunicorn" workers in Baserow, allowing for more writes and reads. Essentially I never got queue mode working, but I did optimize the instance for maximum performance. It has a limit somewhere around 10 requests per second.
@girish I wanted to find the simplest way to set up queue mode on n8n; I've looked a lot and can't find it. Searching around, I came across this conversation.
@roofboard Can you explain to me how you did it? This would be just fantastic for scaling requests for people whose machines are crashing.
The matter of gunicorn workers is specific to Baserow, which I use for logging n8n actions.
To change the Baserow env variables, use the Cloudron CLI and run the command from my earlier post:

cloudron env set --app baserow.draglabs.com BASEROW_AMOUNT_OF_GUNICORN_WORKERS=9

That should do it.
However, for n8n the best you can do is give it 8 GB or 16 GB of RAM, and increase the Postgres RAM to 4 GB.