@svallory Accept the self-signed certs and log in to the dashboard. Once logged in, I would first go to Settings and check for updates, updating all the way to Cloudron 6. This is because Let's Encrypt made a change in the last few months that makes cert renewal fail on the Cloudron side. Once updated: Domains -> Renew all certs.
@nebulon said in Managing SSL certs via Cloudron CLI:
you have to "forget" the page in your browser
Yes, or visit the site in an incognito session. Clearing these entries from a Chrome profile is slightly more complicated, but doable as well:
https://msutexas.edu/library/clearhsts.php
@staypath Continuing my conversation with myself:
Posting this here in case anyone else comes across the same question: I found that configuring fail2ban to use the systemd backend was the trick:
[sshd]
enabled = true
port = ssh
# Default file-based log settings, disabled in favour of the systemd journal:
#logpath = %(sshd_log)s
#backend = %(sshd_backend)s
backend = systemd
# Ban after a single failed attempt, for 14 days:
maxretry = 1
bantime = 14d
@mastadamus If you use the Namecheap API, you don't need port 80, because Cloudron will use Let's Encrypt's DNS automation to get certs. Note that you will sometimes need to type "https://" explicitly, because some browsers default to connecting on port 80 and rely on the redirect to reach the https site. In addition, Cloudron sets HSTS, so future connections go directly to 443 with no redirect dance.
One other note: the .sql file originally failed to import because the dump contained the following line:
CREATE SCHEMA public;
I commented that line out and the import ran fine. I'm guessing there is a pg_dump option to skip the CREATE statement, but someone smarter than me probably knows it. I hadn't used Postgres a day in my life until yesterday, so I'm still getting to know it. I have only used MySQL, MariaDB and MS SQL, all of which have an option to dump without the CREATE statement, but it was easier to comment that line out of the SQL file than to look up the option and do a new dump.
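If anyone hits the same thing, a quick way to strip that line from an existing dump instead of editing it by hand (assuming the statement appears exactly as above, at the start of a line):

```shell
# Example dump containing the offending line (stand-in for the real dump file):
printf 'CREATE SCHEMA public;\nCREATE TABLE example (id int);\n' > dump.sql

# Delete the CREATE SCHEMA line in place, keeping a .bak copy of the original:
sed -i.bak '/^CREATE SCHEMA public;/d' dump.sql

cat dump.sql
```

The rest of the dump is untouched, so the import should proceed as if the line had been commented out.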
@girish It's been solved. I did a docker logout and tried again. I got the error again, but this time it was a bit more detailed. Tried once more and it worked.
Thanks!
Thank you all for your answers! Yes, I was curious whether Cloudron did something special with the networking. I will contact the network administrators and try to fix the issue with them.
@girish said in [BUG] Automount fail on reboot with Ubuntu 20.04:
Currently, there is no solution to use DNS names in mounts
Don't worry, I understand that this is an exception and probably not a priority. Thanks for the consideration.
@girish Thanks! This ended up being a combination of issues, I believe: throttling on the backups due to using rsync, encryption, and a few file names that were too long. I've changed some settings around and it seems to be resolved. I appreciate the help from both @robi and yourself.
baobab is available on Ubuntu, and its GUI has that capability.
Other options here:
https://serverfault.com/questions/386784/linux-trigger-a-real-time-alarm-on-a-low-disk-space-condition
but if the C code listed in the answer there is the approach, we might as well do something similar in the box code.
Similar here: https://www.linux.com/training-tutorials/get-disk-space-email-alerts-your-linux-servers/
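For a lighter-weight check than the C-based approach, something along the lines of that tutorial can be sketched in a few lines of shell (the threshold value is an arbitrary example; adjust to taste):

```shell
#!/bin/sh
# Warn about any filesystem above THRESHOLD percent usage.
# THRESHOLD=90 is an example value, not a recommendation.
THRESHOLD=90
df -P | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5
    sub(/%/, "", use)              # strip the trailing % sign
    if (use + 0 >= t)
        print $6 " is at " use "% usage"
}'
```

Run from cron and piped to mail(1), this reproduces the email-alert setup from the linux.com link without extra dependencies.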
Having it be a part of the standard system health check would be