Support

Get help for your Cloudron

3.4k Topics 24.3k Posts
  • 1 Vote
    16 Posts
    4k Views
    robiR
    @d19dotca Yes, the limits are there to protect against the noisy neighbor problem which exists when many processes are competing for the same resources and ONE uses up more than their fair share. Technically we could have all 30 Apps be set to 1+GB on a 16GB RAM system and it would work fine until one App behaved badly. Then the system would be in trouble as the OOM killer would select a potentially critical service to kill. With limits, the system is happy, and the killing happens in containers instead.
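    A rough way to see the per-process idea from a shell - a ulimit address-space cap stands in here for the container cgroup limit Cloudron actually uses, and the sizes are arbitrary: the capped process fails on its own, while the rest of the system keeps running.

    ```shell
    # Cap this subshell's address space at ~200 MB, then try to allocate 500 MB.
    # The greedy process dies inside its own limit instead of inviting the
    # system-wide OOM killer.
    ( ulimit -v 200000; python3 -c "x = 'a' * (500 * 1024 * 1024)" ) 2>/dev/null \
      || echo "greedy process was stopped by its own limit"
    ```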
  • 0 Votes
    6 Posts
    2k Views
    d19dotcaD
    So I'm pretty convinced the issue was the way I wrote the CAA records. I think my DNS provider didn't need the double quotes, and including them is what caused the issue: right after introducing the CAA records, I suddenly had certificate renewal errors. When I used a DNS check tool to look up the CAA records for Google, Mozilla and others, none of them had double quotes in the value, but mine did. Everything worked fine again after I removed the double quotes, so I am sure that was the issue. I suspect the double quotes were being taken literally as part of the string, so letsencrypt.org is not the same as "letsencrypt.org" in a DNS CAA record. I was later able to find the logs from that early morning, and they confirm this conclusion: CAA record for <domain> prevents issuance. So for anyone who comes across this later: make sure you're not using double quotes. haha.
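    For reference, a correct CAA record in standard zone-file syntax looks like this (example.com stands in for the real domain). The double quotes below belong to the zone-file presentation format, not to the stored value - which is exactly why typing them into a provider's web form can end up storing them as literal characters:

    ```
    ; allow only Let's Encrypt to issue certificates for this name
    example.com.  3600  IN  CAA  0 issue "letsencrypt.org"
    ```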
  • Minio backup fails for no reason

    Solved minio backups
    16
    0 Votes
    16 Posts
    5k Views
    girishG
    @thibaud I replied to you on support@ but the issue is that there is some long file name. The current rsync+encryption backup has a file-name length limitation - https://docs.cloudron.io/backups/#encryption . There is a feature request at https://forum.cloudron.io/topic/3057/consider-improvements-to-the-backup-experience-to-support-long-filenames-directory-names . Run the following command in /home/yellowtent/appsdata to find the long file names: find . -type f -printf "%f\n" | awk '{ print length(), $0 | "sort -rn" }' | less
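    To illustrate what that pipeline reports, here is a small variant run against made-up file names (the sort is done outside awk, which gives the same ordering):

    ```shell
    demo="$(mktemp -d)" && cd "$demo"
    touch short.txt a-much-longer-file-name-example.txt
    # Print each file name prefixed with its length, longest first:
    find . -type f -printf "%f\n" | awk '{ print length(), $0 }' | sort -rn
    # → 35 a-much-longer-file-name-example.txt
    #    9 short.txt
    ```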
  • Backup feedback (minio)

    Solved feature-request
    6
    1 Vote
    6 Posts
    2k Views
    girishG
    Marking as solved since a feature request is open at https://forum.cloudron.io/topic/3057/consider-improvements-to-the-backup-experience-to-support-long-filenames-directory-names
  • Cloudron update exited with code 1 and no space left in /boot

    Solved ubuntu kernel
    8
    0 Votes
    8 Posts
    2k Views
    mehdiM
    @d19dotca I think the -f flag on install just "cleans up" the installation: it notices that some files required by certain packages were removed (by the rm in the previous line), so it removes those packages. What actually frees up the space is the rm; the apt-get -f install just makes the system notice that the packages in question are not installed anymore.
  • Backblaze backups failed most of the time

    backup backblaze
    2
    0 Votes
    2 Posts
    640 Views
    girishG
    @vladimir Can you try this change https://git.cloudron.io/cloudron/box/-/commit/bedcd6fccf58830b316318699375bc1f582a5d7a ? The file on Cloudron is /home/yellowtent/box/src/storage/s3.js. Essentially, change the timeout from 3000 * 1000 to 0, and also change maxRetries to 10. You don't need to restart anything after the change, since the code changes are picked up immediately. (See also https://forum.cloudron.io/topic/3680/backup-issues-with-minio)
  • 0 Votes
    3 Posts
    1k Views
    BrutalBirdieB
    @nebulon said in App Install & Uninstall fails with `Error : Inactive - Error getting IP of postgresql service`: I doubt this is related to your subdomains. Can you verify that unbound on the Cloudron is running and maybe restart the postgres addon? You are 100% right. The postgres service was simply not running. Silly me, I could have checked that myself.
  • How do you manage secrets/credentials during runtime?

    Solved secrets env
    12
    0 Votes
    12 Posts
    4k Views
    marcusquinnM
    @saikarthik Nope, I just don't like Amazon's ethics. https://www.ethicalconsumer.org/company-profile/amazoncom-inc
  • Add ldap auth to custom webapp?

    Solved proxyauth
    2
    1 Vote
    2 Posts
    869 Views
    mehdiM
    @saikarthik Yes, the proxy addon seems good for what you are looking for. However, it's not available yet; it will only be released with Cloudron 6 (I think the devs estimated about 2 weeks, but it's only an estimate). Also, it does not allow for fine-grained control, so if you want to restrict only a few things, you'll have to do that manually - and in that case, yes, you can take inspiration from the Surfer app, for example.
  • How to add files into /app/data?

    Solved
    3
    0 Votes
    3 Posts
    1k Views
    saikarthikS
    @nebulon Thanks! For anyone else, this is how I did it. In 'Dockerfile', I added the files into the /app/code directory using:

        COPY public /app/code/temp-public

    Then, in 'start.sh', I added the following to ensure it only copies the files over on the first run:

        if [[ -z "$(ls -A /app/data/public)" ]]; then
            echo "==> Add public files on first run"
            cp -r /app/code/temp-public/* /app/data/public/
        else
            echo "==> Do not override existing public folder"
        fi
  • Why is my backup drive full?

    Solved backups backup
    20
    1 Vote
    20 Posts
    3k Views
    girishG
    @ei8fdb said in Why is my backup drive full?: I've moved the backup directories from /var/backups/ to my external drive. If I want to restore from those backup directories, do I need the contents of the /var/backups/snapshot directory also? The snapshot directory is not required for restoring, BUT it is required for the actual backups to work (think of it as a working directory for backups). It's important not to remove that directory when doing backups! Since Cloudron uses hard links from the actual backups to the snapshot directory, it's not really taking up extra space.
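    The hard-link point is easy to verify yourself (the paths below are illustrative, not Cloudron's actual layout): two directory entries for the same inode store the data only once.

    ```shell
    demo="$(mktemp -d)" && cd "$demo"
    printf 'backup payload\n' > backup-file
    ln backup-file snapshot-file   # hard link: a second name for the same inode
    stat -c '%h' backup-file       # → 2 (two names, one copy of the data)
    cmp backup-file snapshot-file && echo "identical content, stored once"
    ```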
  • 0 Votes
    4 Posts
    1k Views
    girishG
    @wu-lee do you know why it had failed to renew previously?
  • Is cloudron down?

    Solved
    3
    0 Votes
    3 Posts
    1k Views
    nebulonN
    The appstore server seems to be up normally. Also the warning/error you mentioned is not fatal. Are there any other errors being shown somewhere in the logs?
  • Backup issues with minio

    Solved minio backups timeout
    4
    0 Votes
    4 Posts
    2k Views
    S
    @savity Worked perfectly, thanks!
  • File Manager available while App is installed

    Solved filemanager
    4
    0 Votes
    4 Posts
    989 Views
    robiR
    Thanks for the detail, @girish. Noted, @nebulon - perhaps if the different stages of the startup process were tagged, one could track this more easily.
  • Backups are not removed from aws after retention passed

    Solved backups aws cleanup-backups
    9
    0 Votes
    9 Posts
    3k Views
    girishG
    @carbonbee So, you are saying that there are backups in S3 that are not listed in the Cloudron dashboard? And those backups are not getting removed?
  • 7 Votes
    6 Posts
    2k Views
    d19dotcaD
    For completeness' sake: https://docs.cloudron.io/backups/#restore-email
  • Alternative to "oauth proxy"?

    Solved
    6
    0 Votes
    6 Posts
    3k Views
    girishG
    https://forum.cloudron.io/topic/3682/proxyauth-addon is the new alternative which uses LDAP.
  • 0 Votes
    12 Posts
    3k Views
    girishG
    I will mark this as solved for now but I suspect this will come back some day.
  • App updates were stuck, until I manually checked an App

    updates
    7
    0 Votes
    7 Posts
    2k Views
    robiR
    @girish this was a different Cloudron from the one with the DNS MX issue, which is still persisting ;-/