roofboard
Posts
-
System fails to create new certs, all certs are currently invalid. -
System fails to create new certs, all certs are currently invalid.
BTW this is how it's done.
By going into Google Admin -> Apps -> Google Workspace -> Gmail
you can create a domain forwarding rule with a forwarding destination.
Used together, you can tell Google to forward all mail (or all mail that does not have a valid recipient) to another server - in this case, Cloudron. -
System fails to create new certs, all certs are currently invalid.
@girish yes, I was trying to solve the problem and managed to fix it. That said, if Sync DNS could show a popup that allows the user to choose which types of records get synced, that would be huge!
-
System fails to create new certs, all certs are currently invalid.
Figured it out... something was wrong with the API token. I switched to the global API key and it started working.
So annoying though, because running the DNS sync broke my MX records and email went offline.
Feature request! It would be awesome to have the ability to disable MX record sync. I have a custom configuration and Cloudron always breaks it!
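For anyone else hit by this, a quick way to confirm whether the MX records survived a sync (the domain is a placeholder; dig ships with most distros):
dig +short MX yourdomain.com             # what resolvers currently return
dig +short MX yourdomain.com @1.1.1.1    # ask a public resolver directly to rule out local caching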
-
System fails to create new certs, all certs are currently invalid.
Running v7.7.1 (Ubuntu 22.04.1 LTS).
None of my Cloudron apps are working right now; all services are experiencing HTTPS errors. When I log into Cloudron and click renew certs, I get the following error on all services.
{
  "domain": "git.draglabs.com",
  "errorMessage": "message: Max auth failures reached, please check your Authorization header. statusCode: 403 code:9109"
}
When I try to change from "Let's Encrypt Prod - Wildcard" to "Let's Encrypt Prod" I get the following error.
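Since error 9109 comes from Cloudflare's API, a quick sanity check is to verify the DNS token itself (real Cloudflare endpoint; the variable name is just a placeholder, and this check only applies to API tokens, not the global key):
curl -s https://api.cloudflare.com/client/v4/user/tokens/verify \
  -H "Authorization: Bearer $CLOUDFLARE_API_TOKEN"
# "success": true means the token itself is still valid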
-
Updates not possible "BoxError: Version info mismatch"
Worked on v7.3, same issue.
-
CLI Update command not working.
Things I have done:
Checked to see if server time and local time were synced.
I have logged out of the CLI and back in.
I have used the flag --no-backup.
I have rebooted the server.
Cloudron finishes the update, but it seems like after about 120 seconds it just stops communicating with the CLI.
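One hedged workaround while the CLI is quiet: ask the box API directly whether the update actually finished (domain, app id, and token variable below are placeholders; field names may differ between versions):
curl -s -H "Authorization: Bearer $CLOUDRON_TOKEN" "https://my.domain.com/api/v1/apps/<app-id>"
# look at installationState / runState in the JSON to see whether the update completed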
-
CLI Update command not working.
Cloudron CLI update error:
I am working to update some apps. I have chained together many commands, but I keep getting the error below.
[1] $ cloudron -V
✘ 5.4.0
$ cloudron update --app n8n.shield.com --appstore-id io.n8n.cloudronapp@2.47.0; \
sleep 200; \
cloudron update --app n8n.shield.com --appstore-id io.n8n.cloudronapp@2.48.0; \
sleep 200; \
=> Waiting for app to be updated
=> Queued
=> Backup - Snapshotting app n8n.sellersshield.com ..
App update error: Client network socket disconnected before secure TLS connection was established
I have about 30 updates to do... so it would be really awesome if the socket would not get disconnected after a few minutes....
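A rough, untested sketch of wrapping the chain in a retry loop instead of fixed sleeps - it assumes the CLI exits non-zero when an update call fails (app and versions taken from the command above):
for version in 2.47.0 2.48.0; do
  for attempt in 1 2 3; do
    cloudron update --app n8n.shield.com --appstore-id "io.n8n.cloudronapp@${version}" && break
    sleep 60   # back off and try again if the socket drops mid-update
  done
done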
-
unable to receive mail sent to admin@cloudron.local
Having this problem where I cannot receive mail sent to admin@cloudron.local - is there a way to capture that mailbox? I have tried creating a domain cloudron.local so I could assign the catch-all to myself, but it is not working.
In WordPress, for example, to change an email address you must click a link in a confirmation email. This is really difficult if the confirmation email keeps bouncing...
How do I set up a redirect so that Cloudron will send mail addressed to cloudron.local to a specific address?
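In the meantime, an untested stopgap for the WordPress case specifically: change the admin email from the command line so no confirmation mail to admin@cloudron.local is needed. This assumes WP-CLI is available inside the app container; the app domain and address below are placeholders and the exact invocation may differ:
cloudron exec --app blog.example.com -- wp option update admin_email admin@example.com --allow-root
# updating the option directly skips the confirmation-link flow that wp-admin enforces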
-
Way to boost Security
Something like CrowdSec is surely a step in the right direction. Unfortunately, I do not know enough about security to write the guide on configuring these solutions. However, I know enough to know that it is desperately needed, especially for SMB and enterprise applications.
In general, wouldn't it be nice to have a control center where you can see who is accessing your server, what they are doing, and whether anything suspicious is going on?
-
Way to boost Security
Looking for a way to make Cloudron more secure - or rather, to have more auditable security. Last year I spent time trying to get bitninja.io to play nice with Cloudron but ultimately gave up because BN was crashing my services and it was too risky to keep fiddling around... Actually, I was unable to remove it and had to migrate to a whole new server to restore normal functionality.
Has anyone had luck getting Cloudron to play nice with an active threat detection program? How did you set it up, and which ones did you use?
The security Venn diagram:
Cloudron
Easy to install, and it kind of builds a nice wall: if you don't have the password, you cannot get in.
BitNinja or other security software
Actively scans your system looking for signs of an attack, suspicious traffic, logins from strange destinations, and strange queries.
REQUESTED FEATURE
Low-level plugin support for BitNinja or other active threat management software.
Note - this could be a revenue source for Cloudron, as you would get a referral fee any time someone creates a paid account on an external service like this. -
How to automate migrating/importing apps from Cloudron to Cloudron via API
Here is the start of an n8n workflow.
Copy and paste all the JSON into your n8n instance; it will detect the JSON and create the nodes.
We are using query auth here because we do not know how to do the Bearer token (see the curl sketch after the JSON below). We are also having trouble figuring out how to tell the destination Cloudron to pull a backup from a remote location.
{ "meta": { "instanceId": "8d8e1b7ceae09105ea1231dc6c31045b9d36dc713f8e22970f6eacf9f1f4d996" }, "nodes": [ { "parameters": {}, "id": "4be687c9-4a4b-4be3-a8fc-d1405a66fa95", "name": "When clicking \"Execute Workflow\"", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 680, 320 ] }, { "parameters": { "url": "https://my.demo.cloudron.io/api/v1/apps", "authentication": "genericCredentialType", "genericAuthType": "httpQueryAuth", "options": {} }, "id": "33c66a41-753b-491a-bf03-8683f86b95c5", "name": "PullAppsFromSource", "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.1, "position": [ 900, 320 ], "credentials": { "httpHeaderAuth": { "id": "11", "name": "RobCloudron-Demo" }, "httpQueryAuth": { "id": "12", "name": "Query Auth account" } } }, { "parameters": { "jsCode": "// Loop over input items and add a new field called 'myNewField' to the JSON of each one\nfor (const item of $input.all()) {\n item.json.myNewField = 1;\n}\n\nreturn $input.all();" }, "id": "a1d203d0-4d01-421a-9d88-7299f8f515ce", "name": "ManipulateResponse", "type": "n8n-nodes-base.code", "typeVersion": 1, "position": [ 1120, 320 ] }, { "parameters": { "url": "=https://my.demo.cloudron.io/api/v1/apps/{{ $json.apps[0].id }}/backups", "authentication": "genericCredentialType", "genericAuthType": "httpQueryAuth", "options": {} }, "id": "9c925efa-458f-4599-8172-7d9fafc3ce23", "name": "PullBackupsFromSource", "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.1, "position": [ 1320, 320 ], "credentials": { "httpHeaderAuth": { "id": "11", "name": "RobCloudron-Demo" }, "httpQueryAuth": { "id": "12", "name": "Query Auth account" } } } ], "connections": { "When clicking \"Execute Workflow\"": { "main": [ [ { "node": "PullAppsFromSource", "type": "main", "index": 0 } ] ] }, "PullAppsFromSource": { "main": [ [ { "node": "ManipulateResponse", "type": "main", "index": 0 } ] ] }, "ManipulateResponse": { "main": [ [ { "node": "PullBackupsFromSource", "type": "main", "index": 0 } ] ] } } }
-
Favorite VPS providers?
Recently switched to Contabo and they have been awesome. They have servers in many countries, which is cool.
-
rsync for cloudron CLI
CLI "cloudron sync" to upgrade "push" and "pull"
The cloudron push and cloudron pull commands are limited in functionality, especially when pushing and pulling recursively through many directories, or when trying to get special behavior like only transmitting files which have changed.
So the feature request would be to wrap rsync and let us use a command like "cloudron sync" with the same options as rsync when moving files to and from the server.
See this post for a detailed use case.
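For reference, this is roughly the rsync behavior the proposed "cloudron sync" would wrap (paths and host here are placeholders; rsync only transfers files that have changed by default):
rsync -az --delete ./wp-content/plugins/ user@server:/path/to/app/data/wp-content/plugins/
# -a recurses and preserves permissions/timestamps, -z compresses, --delete mirrors deletions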
-
sftp is not enough
@girish the issue with using it directly is that I do not want to expose my root keys. Honestly, I have done that in the past, but now that I am setting up automation on the server I want its access to be scoped. I mean, we kind of have something with cloudron exec - it allows me to do almost anything. However, I am having big trouble running rsync through it. That would be a game changer.
I mean... I could rsync from the server to my runner, but that just sounds crazy.
-
sftp is not enough
The script command is like an inline .sh.
However, I was not able to use the push command because it was not getting the subdirectories. What I would really like to see is an example of using cloudron exec in combination with rsync. I was not able to get that working, but rsync would be the ideal way to push and pull files from the server.
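A rough, untested sketch of how that combination could look. rsync invokes its transport as "<rsh> <host> <command...>", so a tiny wrapper script can drop the placeholder host and hand the rest to cloudron exec; this assumes rsync is installed inside the app container and that cloudron exec passes stdin/stdout through unmodified.
cloudron-rsh:
#!/bin/sh
# rsync calls this as "<rsh> <host> <command...>" - discard the placeholder host
shift
exec cloudron exec --app members.domain.com -- "$@"
Invocation ("app" is just the placeholder host the wrapper throws away):
rsync -av -e ./cloudron-rsh ./${name}/ app:/app/data/wp-content/plugins/${name}/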
-
sftp is not enough
@BrutalBirdie Can you share some of your CI commands?
I guess I am just going to upload a zip archive and then log in with exec to decompress it...
#does not work because it does not recursively copy files
script -q -c 'cloudron login --password ${CLOUDRON_PASS} --username ${CLOUDRON_USER} my.domain.com \
cloudron push --app members.domain.com ${name} /app/data/wp-content/plugins/${name}'
#does not work because of unknown errors
script -q -c 'cloudron login --password ${CLOUDRON_PASS} --username ${CLOUDRON_USER} my.domain.com \
rsync -avz -e 'cloudron exec --app members.domain.com --' ${name} /app/data/wp-content/plugins/'
#does not work because of TTY errors
cloudron login --password ${CLOUDRON_PASS} --username ${CLOUDRON_USER} my.domain.com
cloudron push --app members.domain.com ${name} /app/data/wp-content/plugins/${name}
#This managed to work!!! The errors above came from the single quotes - you cannot expand vars inside single quotes.
script -q -c "cloudron login --password ${CLOUDRON_PASS} --username ${CLOUDRON_USER} my.domain.com; \
cloudron push --app members.domain.com ${name}.zip /app/data/wp-content/plugins/ ; \
cloudron exec --app members.domain.com -- unzip -o /app/data/wp-content/plugins/${name}.zip -d /app/data/wp-content/plugins/ ; sleep 3; \
cloudron exec --app members.domain.com -- rm /app/data/wp-content/plugins/${name}.zip ; \
cloudron exec --app members.domain.com -- chown -R www-data:www-data /app/data/wp-content/plugins/${name} "
-
sftp is not enough
@BrutalBirdie So... how do you get around all the CI errors which come from having TTY disabled in the CI runner?
-
sftp is not enough
@roofboard
Yah, I see it now. I should have gone for the CLI first -- reinventing the wheel over here! -
sftp is not enough
@girish
Hmmm, breaking is a problem.
Let's start with the motivation.
I have a website which needs regular updates coming from gitlab-ci.
-- Originally
I was just going to SFTP the files over, but the files are in folders with subdirectories and you cannot recursively delete using SFTP.
-- Backup plan
Make a special user on the host machine which can access the data folders in yellowtent. With this new Linux user I can access just the directories which need updating.
-- I would have used my root keys
But that did not sound like a good idea; I don't like the idea of storing my private key on GitLab.
-- I would have installed something on the container
But that would have the issue of getting de-configured on upgrade.
-- Maybe I can use the Cloudron CLI???
Can I launch the web terminal from the CLI? That would be super awesome because I can install the CLI on a GitLab runner.
-- It worked last night
I was able to automate pushing files to the website, but if that is going to break... how is it going to break?
My ideal would be to ssh directly into the container and be able to do anything...
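For what it's worth, cloudron exec with no command already drops into an interactive shell inside the app's container, which is pretty close to that ideal and scoped to a single app rather than the whole host (app domain reused from the examples above):
cloudron exec --app members.domain.com
# opens an interactive shell inside that app's container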