trying to scale N8N
-
@roofboard
Just two notes on that: remember that n8n is not an open-source project but uses a Sustainable Use License (fair-code), so you have to pay if your income from it is more than 5k/year (you should check their website for the exact number; that's just the figure I remember).
- In that case, running n8n in queue mode can improve stability, but do it outside Cloudron, or at least that's my suggestion. Also, respond to the webhook at the start: don't wait for the process to finish before replying, or if a worker crashes, the queued requests accepted by that worker will never get a reply.
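For context, n8n's queue mode roughly looks like the sketch below, based on n8n's documented env vars (`EXECUTIONS_MODE`, `QUEUE_BULL_REDIS_HOST`); the Redis host and worker count are placeholders, and this multi-process setup is exactly what is hard to run inside a single Cloudron app:

```shell
# Main instance: accepts webhooks and pushes executions onto a Redis queue
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.example.com  # placeholder Redis host
export QUEUE_BULL_REDIS_PORT=6379
n8n start

# Each worker (separate machine or container) pulls jobs from the same queue
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.example.com
n8n worker --concurrency=10
```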
-
@MooCloud_Matt said in trying to scale N8N:
Just two notes on that: remember that n8n is not an open-source project but uses a Sustainable Use License (fair-code), so you have to pay if your income from it is more than 5k/year (you should check their website for the exact number; that's just the figure I remember).
I don't see any such numbers over at https://github.com/n8n-io/n8n/blob/master/LICENSE.md
-
@jdaviescoates
I was not sure about that after the change to the fair-code license, but it was written in the Commons Clause License. -
@roofboard Per this doc, the default is EXECUTIONS_PROCESS=own. This spins up a process for each execution - so 2000 processes in your case. Have you tried with EXECUTIONS_PROCESS=main? Also, be sure to bump up the memory limit. Note the memory limit is an upper limit, so once the burst goes away, other apps can use it. 2000 requests doesn't seem like much at all to warrant additional workers and complexity (of course, this depends on what the execution is doing). Is it just I/O stuff?
There is also a good explanation of all this here.
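For reference, that switch can be made with the same `cloudron env set` form used later in this thread; the n8n app domain below is a placeholder:

```shell
# Run all executions inside the main n8n process instead of spawning
# one child process per execution (n8n.example.com is a placeholder domain)
cloudron env set --app n8n.example.com EXECUTIONS_PROCESS=main
```

Cloudron applies env-var changes when the app restarts.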
-
At the end of the day, I found the culprit.
Also upgraded to a bigger server. While yes, n8n will happily crash itself - I had a few performance bottlenecks, and for now n8n seems to be happy with the traffic. Here is a breakdown of the solution without setting up queue mode.
- n8n uses Postgres, so bumping up the RAM for n8n alone is not enough - that just grows the list of RAM-eating active processes. You also have to bump up the RAM for Postgres.
- I am using Baserow for logging and follow-up task queueing. By default Baserow only has 3 auth workers, which is not enough to cover heavy load. So I had to install the CLI and run cloudron env set --app baserow.draglabs.com BASEROW_AMOUNT_OF_GUNICORN_WORKERS=9
Conclusion:
- Step 1 significantly improved the performance of n8n, dropping normal RAM consumption by about 30%.
- Step 2 caused active processes to complete 10x faster, further reducing the n8n footprint. Now that I upgraded to 24 GB, I am only using 8.
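The two steps above, sketched as a config snippet (step 1 is done in the Cloudron dashboard, so it appears only as a comment; the Baserow domain is the one quoted in the post):

```shell
# Step 1: raise the memory limits for both the n8n app and its Postgres
# service in the Cloudron dashboard (Services view) - no CLI step shown here.

# Step 2: give Baserow more gunicorn workers so logging reads/writes keep up
cloudron env set --app baserow.draglabs.com BASEROW_AMOUNT_OF_GUNICORN_WORKERS=9
```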
-
@roofboard Hello, how are you?
Could you tell me how you did the n8n queue mode installation? I've tried but couldn't manage it, and I don't understand your explanation very well -
@hugoo_souza10 I think @roofboard enabled queuing in baserow and not in n8n.
What are you trying to do/solve ? Maybe you can open a separate thread with your issue.
-
@hugoo_souza10 yes, that is accurate - I increased the "gunicorn" workers in Baserow, allowing for more writes and reads. Essentially I never got queue mode working, but I did optimize the instance for maximum performance. It has a limit somewhere around 10 requests per second.
-
@roofboard Can you explain to me how you did it? This would be just fantastic for scaling requests for people whose machines are crashing.
-
@hugoo_souza10
The gunicorn workers setting is specific to Baserow, which I use for logging n8n actions. To change the Baserow env variables, just go into the Baserow terminal and type
export BASEROW_AMOUNT_OF_GUNICORN_WORKERS=20
That should do it.
However, for n8n the best you can do is give it 8 GB or 16 GB of RAM, and increase the Postgres RAM to 4 GB.
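One caveat worth noting: an `export` typed in the app terminal only lasts for that shell session, so it will not survive an app restart. To make the setting persistent, the Cloudron CLI form from earlier in this thread can be used (domain as quoted in the earlier post):

```shell
# Persist the worker count across app restarts via the Cloudron CLI;
# a plain `export` in the app terminal is lost when the session ends
cloudron env set --app baserow.draglabs.com BASEROW_AMOUNT_OF_GUNICORN_WORKERS=20
```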
-
@zonzonzon Services view - https://docs.cloudron.io/services/#configure