This seems to be a duplicate of https://forum.cloudron.io/topic/8677/how-to-use-redis-in-n8n
Merging your topic.

james
Posts
-
How to use redis in n8n -
Hello @ikka
Redis needs to be added by the Cloudron staff.
I have created an internal ticket to review this. -
Recurrent Cloudron Downtime - Request for Support -
Hello @Felipe.rubilar
If you feel comfortable doing so, please provide the logs of your system, so people here can take a look.
Also, since you are monitoring CPU, RAM, storage, etc., please provide some screenshots of the weeks where this issue arises. -
Federation testing fails unless port 8448 is forwarded to 443 -
Also, you have to use the base domain in the federation tester, not e.g. synapse.cloudron.club. -
New to Cloudron & Matrix/Element -
Hello @stefanwirtz
@stefanwirtz said in New to Cloudron & Matrix/Element:
after this step I tried verifying the settings via SSH on my VPS server and only got error messages.
What did you try to verify and how, and what was the error message?
.well-known is served for the root domain, see https://docs.cloudron.io/domains/#well-known-locations. Quote:
Requires app on bare domain
In the above example, an app must be installed on the bare domain https://cloudron.club for Cloudron to be able to respond to well-known queries.
Did you follow this step?
Matrix federation can be tested with this web tool: https://federationtester.matrix.org, where you enter the bare domain, e.g. cloudron.club, and not synapse.cloudron.club. -
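For context, a minimal sketch of the kind of delegation document the bare domain serves at /.well-known/matrix/server (the hostname here is just the example subdomain from above; the actual content depends on your setup):

```json
{
  "m.server": "synapse.cloudron.club:443"
}
```

This is what lets federation work without forwarding port 8448: other homeservers query the bare domain's well-known location and are told where (and on which port) to reach Synapse.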
Integrated Redis -
Hello @firmansi
Thanks for the feedback.
I was only able to confirm it was working with a debug startup.
So it is very good to read from you that it is in fact working. -
Integrated Redis -
@firmansi
I see you have noticed the update: https://forum.cloudron.io/post/111557 -
Gigantic blunder caused by a lack of warnings -
Hello @tsu.douady and welcome to the Cloudron Community Forum.
Cloudron configures outgoing mail for the GitLab app automatically.
@tsu.douady said in Gigantic blunder caused by a lack of warnings:
I really feel like it shouldn't be this easy to end up in this situation. It's really badly lacking a big warning somewhere to tell me that it's a bad idea to attach an inbox containing thousands of existing emails to the Gitlab app.
Good point.
We will add this warning to https://docs.cloudron.io/packages/gitlab/#incoming-email -
Integrated Redis -
Hello @firmansi
Thanks for the clarification, I will look into it. -
Integrated Redis -
Hello @firmansi
Are you referring to the latest release v0.6.19 with the following patch note?
Efficient Redis Connection Management: Implemented a shared connection pool cache to reuse Redis connections, dramatically reducing the number of active clients. This prevents connection exhaustion errors, improves performance, and ensures greater stability in high-concurrency deployments and those using Redis Sentinel.
Is this the Redis feature you are looking for?
Since REDIS_SENTINEL_HOSTS is meant for scaled hosting environments like K8S, I highly doubt you are referring to that one. -
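The idea behind that patch note can be sketched in a few lines of Python. This is purely illustrative; the names and structure are not n8n's actual implementation:

```python
# Illustrative sketch of a shared connection-pool cache: callers with
# identical connection settings get the same pool object back instead
# of each one opening its own Redis client.
_pool_cache = {}

def get_pool(host: str, port: int, db: int = 0) -> dict:
    """Return a cached pool for these settings, creating it on first use."""
    key = (host, port, db)
    if key not in _pool_cache:
        # A real implementation would create a Redis connection pool here;
        # a plain dict stands in for it in this sketch.
        _pool_cache[key] = {"settings": key, "connections": []}
    return _pool_cache[key]

a = get_pool("redis", 6379)
b = get_pool("redis", 6379)  # same settings -> same shared pool object
c = get_pool("redis", 6380)  # different settings -> separate pool
```

Because repeated lookups with the same settings return the same pool, the number of open clients is bounded by the number of distinct configurations rather than by the number of callers, which is what prevents connection exhaustion.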
Integrated Redis -
I am looking into it.
-
all-in-one wp migration cannot be scheduled! -
Hello @ImatBagjaGumilar
Please provide more details about your issue.
When are you scheduling the All in One WP Migration backups? Do you see anything in the app logs for this timeframe? -
3.93.1 was the last stable release of n8n (fixed in 3.97.0) -
Hello @umnz
Thanks for reporting this. -
Deploying Anubis (AI Crawler Filtering) on a Cloudron Server -
Hello @hareen
That sounds great! Would you be willing to write up a detailed how-to post? -
SFTP to LAMP -
Issue was reproduced and is being looked into.
-
SFTP to LAMP -
Yes, only app operators and admins can use SFTP.
-
Missing model even with configured Api Key -
Second try.
I got it working in LibreChat much faster.
I added this to /app/data/env:
PERPLEXITY_API_KEY="YOUR-API-TOKEN-GOES-HERE"
After reading https://www.librechat.ai/docs/configuration/librechat_yaml/ai_endpoints/perplexity and adding the sample config to /app/data/librechat.yaml, I found multiple issues with it.
First there was an error about something being wrong in the yaml, and after fixing that, the logs complained:
Error 400 Invalid model 'llama-3-sonar-small-32k-chat'. Permitted models can be found in the documentation at https://docs.perplexity.ai/guides/model-cards.
So I fixed and tweaked this config to:
version: 1.2.4
endpoints:
  custom:
    - name: "Perplexity"
      apiKey: "${PERPLEXITY_API_KEY}"
      baseURL: "https://api.perplexity.ai/"
      models:
        default:
          - "sonar-deep-research"
          - "sonar-reasoning-pro"
          - "sonar-reasoning"
          - "sonar-pro"
          - "sonar"
          - "r1-1776"
        fetch: false
      titleConvo: true
      titleModel: "sonar"
      summarize: false
      summaryModel: "sonar"
      forcePrompt: false
      dropParams:
        - "stop"
        - "frequency_penalty"
      modelDisplayLabel: "Perplexity"
Which is working when using sonar, sonar-pro, sonar-reasoning and sonar-reasoning-pro, but for some reason sonar-deep-research returns nothing.
I hope this helps and solves your issue.
-
Missing model even with configured Api Key -
Hello @p44
Yes, I did...
Sorry for that, I will look into LibreChat also. -
Hetzner Storage Box not working -
Hello @JueBam
Did you put your SSH public key into the Storage Box?
Maybe you can give us some more details so we can figure out this issue together.