Support

Get help for your Cloudron

3.3k Topics 23.1k Posts
  • Docs

    Pinned Moved
    2 Votes
    21 Posts
    6k Views
    girish
    @taowang thanks, I have made them all the same now.
  • External Provider (OIDC / OAuth) - Google Workspace

    Unsolved oidc
    1 Votes
    3 Posts
    23 Views
    L
    Hi James. I've been through this documentation and reread it at least a dozen times to figure out where I might be going wrong. In User Directory (/#/user-directory), there's a Provider referenced as "Other." I'm using:
    Server URL: ldaps://ldap.google.com:636 (or ldaps://ldap.google.com)
    Base DN: dc=mydomain,dc=com,dc=br
    Filter: (objectClass=person)
    Username field: uid
    Bind DN/Username (optional): credential-generated-by-google
    Bind Password (optional): psw-generated-by-google
    When I save without the "Accept Self-signed certificate" option checked, I get the error "self-signed certificate." When I save with it checked, I get "Incorrect bind password." From everything I've read, it seems that for Google Workspace I would need to make Cloudron use the certificate generated by Google Workspace LDAP. From the server where Cloudron is installed, I can perform tests and listings using the command:
    LDAPTLS_CERT=/root/cert.crt \
    LDAPTLS_KEY=/root/cert.key \
    ldapsearch -x \
      -H ldaps://ldap.google.com:636 \
      -D "credential-generated-by-google" \
      -w 'psw-generated-by-google' \
      -b dc=mydomain,dc=com,dc=br \
      '(objectClass=person)' uid
    The problem is that without the certificate, the integration doesn't work, and that's what I understand is happening with Cloudron. Does that make sense? Can I force Cloudron to use the Google-generated certificate? Is there another way to do this integration that I haven't figured out yet? Best regards
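    (For reference, LDAPTLS_CERT / LDAPTLS_KEY are the standard OpenLDAP client-certificate variables; the same settings can be expressed in an ldaprc file, as in the sketch below. Whether Cloudron's built-in LDAP client honours either mechanism is an assumption to verify, not something confirmed here:)
    # ~/.ldaprc -- equivalent of the environment variables used in the ldapsearch test above
    TLS_CERT /root/cert.crt
    TLS_KEY  /root/cert.key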
  • Disk usage update problem

    Unsolved
    1 Votes
    4 Posts
    46 Views
    nebulon
    Do you see any errors in the system logs at /home/yellowtent/platformdata/logs/box.log when you trigger a refresh of the disk usage?
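    (One quick way to check is to follow the log while triggering the refresh; the grep line is just a generic filter, not a specific error to expect:)
    tail -f /home/yellowtent/platformdata/logs/box.log
    grep -i error /home/yellowtent/platformdata/logs/box.log | tail -n 20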
  • Migrating active Server

    Unsolved backups restore
    2 Votes
    5 Posts
    57 Views
    W
    Yes @james, I have also thought of using that, but manually running it on all mailboxes sounds like a pain I don't want to go through. The way I currently envision it is writing a script around imapsync that uses the Cloudron API to get all mailboxes and impersonates the mailbox users to run imapsync on each of them automatically. Then again, maybe blocking the port is the way to go, so I don't have to write that script; the whole backup-and-restore process should be done quite quickly anyway. I am wondering, though, whether others have gone through this as well?
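    (A rough sketch of the loop described above. The mailbox-listing endpoint and JSON field names are assumptions to check against the Cloudron API docs, and the hosts, token and passwords are placeholders:)
    CLOUDRON="https://my.example.com"
    TOKEN="<api-token>"
    DOMAIN="example.com"
    curl -s -H "Authorization: Bearer ${TOKEN}" \
        "${CLOUDRON}/api/v1/mail/${DOMAIN}/mailboxes" \
      | jq -r '.mailboxes[].name' \
      | while read -r box; do
          imapsync --host1 old.example.com --user1 "${box}@${DOMAIN}" --password1 '<pw1>' \
                   --host2 new.example.com --user2 "${box}@${DOMAIN}" --password2 '<pw2>'
        done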
  • Backup failed - Logs unavailable. Maybe the logs were logrotated.

    Unsolved
    1 Votes
    4 Posts
    37 Views
    nebulon
    Does the folder /home/yellowtent/platformdata/logs/tasks/ even exist? Maybe you purged all subfolders there? If it doesn't exist, run:
    mkdir -p /home/yellowtent/platformdata/logs/tasks/
    chown yellowtent:yellowtent /home/yellowtent/platformdata/logs/tasks/
  • Are successful/failed login attempts logged anywhere?

    Solved failed logs
    1 Votes
    8 Posts
    94 Views
    M
    awesome as always, much obliged
  • domain name change

    Unsolved domains
    1 Votes
    7 Posts
    81 Views
    J
    You should do this - https://docs.cloudron.io/troubleshooting/#domain-issues-expiry
  • Clean up external users that have accessed gitea instance

    Solved
    1 Votes
    4 Posts
    45 Views
    infogulch
    With help from Grok: https://grok.com/share/bGVnYWN5_baedbb67-0507-41f0-b26d-29da9f1b7f94
    Exported the users to delete with:
    mysql --user=${CLOUDRON_MYSQL_USERNAME} --password=${CLOUDRON_MYSQL_PASSWORD} --host=${CLOUDRON_MYSQL_HOST} ${CLOUDRON_MYSQL_DATABASE} -e "SELECT id FROM user WHERE FROM_UNIXTIME(created_unix) > '2025-03-01'" > /app/data/users-to-delete.txt
    Deleted the header row with vim, then switched to the git user:
    sudo -u git bash
    Ran a command that uses the CLI to purge each user in the list:
    while read -r id; do /home/git/gitea/gitea -c /run/gitea/app.ini admin user delete --id "$id" --purge; done < /app/data/users-to-delete.txt
    All bad users cleared except one (not sure why), which I deleted manually. Done!
  • iDrive E2 Backups Failing More Frequently

    Solved
    1 Votes
    16 Posts
    250 Views
    P
    @d19dotca said in iDrive E2 Backups Failing More Frequently:
    "What I had to do was delete the region hostname and re-enable it, which created my backup buckets somewhere else and allowed me to proceed without issue. The unfortunate part of this though was that it deletes all the files of course in the prior region, so what I did was first download a few important backups from it to my local disk and reloaded afterwards."
    I had the same rate-limiting issue with IDrive E2, but with a twist: my bucket was originally blazing fast in one region. After I deleted and recreated it (like you did), it became painfully slow—the exact opposite of your experience! I suspect their 'region reassignment' doesn't guarantee consistent performance, maybe due to uneven server loads or hidden throttling. Moral of the story? With IDrive E2, if your bucket works fine, don't touch it—even if they tempt you with 'better regions.'
  • Services and all apps down due to cgroups error

    Solved cgroups docker
    1 Votes
    16 Posts
    137 Views
    infogulch
    OK, if I switch it to "Direct disk" it fails to boot, but the "GRUB (Legacy)" kernel option seems to boot fine.
  • 0 Votes
    15 Posts
    639 Views
    d19dotca
    @girish Yes, I no longer got the Gmail issues once I created my own system service to implement the iptables rule, so it seemed to do the trick. The iptables rule to add is really just this:
    iptables -t nat -I POSTROUTING -s 172.18.0.0/16 -o enp3s0f0 -j SNAT --to-source {FLOATING_IP}
    Where FLOATING_IP is replaced with whatever IP address is used in the DNS records for the MX record. I suppose it could be further improved to apply only to the mail container rather than all Docker traffic, and of course the interface would have to be dynamic too. I guess an alternative is for me to create additional MX records with the other IP addresses, but then it's done manually and prone to mistakes/issues.
    In my opinion, we really need an option in Cloudron to select the outbound IP/interface for the mail component, in order to avoid the Gmail issues (and any other provider that uses FCrDNS to verify or reject email in the future). I recognize this may not be a common concern, since most people probably have only one IP address of each type, so the DNS records Cloudron sets up automatically will use both the IPv4 and IPv6 address. But for those of us using a floating/failover IP address that we want to be the one true IP address in use, this becomes an issue without that workaround in place.
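    (A minimal oneshot systemd unit along the lines of the "system service" mentioned above; the unit name, interface and IP address are placeholders to adapt:)
    # /etc/systemd/system/mail-snat.service
    [Unit]
    Description=SNAT Docker mail traffic to the floating IP
    After=docker.service
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # 203.0.113.10 stands in for the floating IP; enp3s0f0 for the outbound interface
    ExecStart=/usr/sbin/iptables -t nat -I POSTROUTING -s 172.18.0.0/16 -o enp3s0f0 -j SNAT --to-source 203.0.113.10
    [Install]
    WantedBy=multi-user.target
    Enable it with: systemctl daemon-reload && systemctl enable --now mail-snat.service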
  • 0 Votes
    7 Posts
    101 Views
    sponch
    Same here (since sending through Brevo (Sendinblue)).
  • 1 Votes
    22 Posts
    266 Views
    U
    @girish Thanks for this. Access given / mail sent. Appreciate the help.
  • Memory exhausted on wp site affecting entire server

    Unsolved docker resources
    1 Votes
    6 Posts
    103 Views
    J
    @mbarria for example, like this:
    root@my:~# docker inspect mail | grep Memory
        "Memory": 536870912,
        "MemoryReservation": 0,
        "MemorySwap": -1,
        "MemorySwappiness": null,
    The Memory field is the maximum memory the app can use. See https://docs.docker.com/reference/api/engine/version/v1.47/#tag/Container/operation/ContainerInspect (if you click on "200" it expands to show the response fields).
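    (For reference, the Memory value is in bytes, so the example above works out to 512 MB:)
    echo $((536870912 / 1024 / 1024))   # -> 512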
  • Email client config when using external SMTP server

    Solved
    0 Votes
    5 Posts
    89 Views
    avatar1024
    @nebulon wonderful, thank you for the clarification!
  • Trying out minio as a backup destination

    Solved minio backup
    0 Votes
    6 Posts
    119 Views
    A
    Thanks, I don't use Cloudflare, but that's a good thing to be aware of.
  • Connection to Google's doubleclick from cloudron server

    Solved analytics privacy
    2 Votes
    3 Posts
    38 Views
    girish
    Just tried this on the demo server. https://my.demo.cloudron.io (username/password is cloudron) [image: 1751631070055-fffc8926-44d5-49c4-af87-f77371ab2e98-image-resized.png]
  • Backups before retention policy not being deleted. Bug?

    Solved backups cleanup-backups
    1 Votes
    16 Posts
    2k Views
    james
    Hello @shrey This is expected. Each Cloudron release and update has to treat every system the same and can not respect changes made by the user. Changes made to the core services need to be managed by the user manually.
  • * 1.0 SPF_SOFTFAIL SPF: sender does not match SPF record (softfail)

    Solved email spf
    1 Votes
    3 Posts
    62 Views
    J
    Are you saying you are getting SPF_SOFTFAIL for all your emails? Note that the SPF fail here is for the domain of the incoming mail and not your domain itself.
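    (To see which hosts the sending domain actually authorizes, its SPF record can be looked up directly; the domain below is a placeholder:)
    dig +short TXT sender-domain.example | grep -i spf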
  • tarExtract pipeline error: Invalid tar header

    Solved restore nextcloud header tar
    1 Votes
    24 Posts
    1k Views
    P
    @girish Periodic integrity checks – for example, verifying the backup archive and reporting issues proactively – could help discover corruption early, before a restore is needed. This would give admins a chance to re-run or fix backups in time. What do you think about such a feature?
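    (Until something like that exists, a manual spot-check is possible; this assumes an unencrypted tarball backup, since encrypted backups would need decrypting first:)
    tar -tzf /path/to/backup.tar.gz > /dev/null && echo "archive readable" || echo "archive corrupt"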