  • "docker-volumes" is filling my whole server storage

    Solved Support
    1 Votes
    25 Posts
    758 Views
    D

    Hi Girish,

    First, thank you for taking the time to provide such a detailed and technical explanation. I genuinely appreciate it, as I find it very insightful.

    As I mentioned in an earlier post, the TMP directory that fills up the most is indeed Emby's because we move a lot of data there. We add a substantial amount of content regularly. However, it's not just Emby that has this issue. Several other applications also exhibit similar behavior. The list of these applications, which I posted earlier, includes:

    Emby
    N8N
    Cal
    Uptime Kuma
    OpenWebUI
    Penpot

    And a few other unidentified applications
    This list is not exhaustive, but these are the primary culprits.

    To answer your questions:

    a) Yes, the directory that had around 83GB of data was indeed Emby's TMP directory. This directory fills up quickly as we add more content. However, other applications also contribute to the storage issue, just not at the same scale as Emby.

    b) I add files to Emby using the File Manager. We do not use Emby's upload feature, even though we have an Emby Premiere instance. We prefer to manage our files directly through Cloudron because we trust its infrastructure and want to use it as intended.

    I understand that the issue is not inherently with Cloudron itself. My observation is that this docker-volume issue only started appearing after I added a new storage volume to my server. To clarify, I added an OVH NAS-HA as a new volume to the server, and then I mounted it on Cloudron through the admin panel. This docker-volume did not exist, or at least was not noticeable, before this addition.

    Since moving Emby to the new volume, I see my data on the new volume as expected, but it also appears to be duplicated in the TMP directory on the server's disk. This duplication was not happening before the new volume was added. Hence, my hypothesis is that there might be a configuration that wasn't fully adjusted to accommodate the new volume, causing files to be stored in both locations.

    I'm not entirely sure how the backend works, so these are just observations and hypotheses based on my use of Cloudron.
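
    In case it helps anyone debugging something similar: a rough way to see which docker volumes are actually eating the disk (just a sketch using standard Docker and coreutils commands, nothing Cloudron-specific) is:

        docker system df -v                                             # per-image, per-container and per-volume disk usage
        du -sh /var/lib/docker/volumes/* 2>/dev/null | sort -h | tail   # the largest volumes, smallest to largest

    Comparing those numbers against the app's data directory on the new volume should show whether the data is genuinely duplicated or just reported twice.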

  • Customize PostgreSQL settings/limits

    Solved Support
    1 Votes
    6 Posts
    515 Views
    M

    @girish perfect, thanks - appreciate it! I'm almost certain there could be ways to improve those queries, but it's always better to have a failsafe in place 😉
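
    For reference, if the failsafe in question is something like PostgreSQL's statement_timeout (an assumption on my part; the database name and the 30s value below are only placeholders), it can be set per database like this:

        # abort any query in this database that runs longer than 30 seconds
        psql -c "ALTER DATABASE mydb SET statement_timeout = '30s';"

    New connections pick the setting up automatically; existing sessions keep their old value until they reconnect.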

  • "I can't solve the 'no space left on device' issue."

    Solved Support
    1 Votes
    2 Posts
    181 Views
    girishG

    @freetommy most likely the container needs to be re-pulled. If you go to the Repair view, does it let you enable Recovery Mode? If so, enable recovery mode and afterwards disable it (this is a hack to re-pull the Docker image).
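
    If the Repair view toggle isn't available for some reason, a rough manual equivalent (just a sketch; the dashboard route above is the supported one, and <repo>/<tag> below are placeholders for the app's actual image) is to re-pull the image with Docker directly:

        docker images | grep <repo>      # find the image the app container uses
        docker pull <repo>:<tag>         # fetch its layers again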

  • Cloudron not showing correct LVM space

    Solved Support
    1 Votes
    3 Posts
    232 Views
    CameronEdmondsonC

    Hi @girish, that has fixed it, thank you for your support.

    Regards

    Chloe Edmondson

    They/Them (why pronouns matter)

  • Nextcloud not responding after disk full incident

    Solved Support
    1 Votes
    7 Posts
    541 Views
    girishG

    @whitespace I see that earlier in the thread you said you had no backups. But I just wanted to double check if you had any (even ancient ones)? The reason is that config.php has passwordsalt and secret, which are used to hash the db entries. https://github.com/nextcloud/server/issues/34780 has some suggestions which may or may not work.

    As for your question, I'm not a Nextcloud expert, but if I had to guess, user settings and passwords will be reset, since the data directory only contains the files.
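
    If an old backup does turn up, the relevant values can be pulled out of its config.php with something like this (the backup path is illustrative):

        # print the old passwordsalt and secret so they can be compared with / copied into the current config.php
        grep -E "passwordsalt|'secret'" /path/to/old-backup/config/config.php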

  • Issues with Guacamole and Open Project

    Unsolved Support
    1 Votes
    4 Posts
    432 Views
    girishG

    @gnulab it will start using any free disk space automatically. Maybe try going to the Repair section of the app: click Enable Recovery Mode and then click Disable Recovery Mode. This will recreate the container and maybe it will work now. Can you try?

  • Cloudron is running out of space

    Support
    0 Votes
    4 Posts
    345 Views
    nebulonN

    The collabora app image is fairly large due to all the language support packages. So far we haven't found a way to make those dynamically selectable.

    Also, it appears you have a lot of data for the system itself. Possibly you have some old leftover backups in /var/backups which could be purged, provided you have configured a proper backup storage?
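
    A quick way to check whether old snapshots are what is taking the space (just an inspection sketch; double-check before deleting anything, in case these are your only backups):

        du -sh /var/backups/* | sort -h      # list backup directories from smallest to largest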

  • Cannot update due to one tiny app filling up 20GB drive

    Solved Support
    0 Votes
    19 Posts
    1k Views
    scookeS

    @jdaviescoates Phew, my thoughts and prayers worked!

  • 0 Votes
    9 Posts
    717 Views
    robiR

    @jdaviescoates you could set up a cron job that gathers usage info and prints out a webpage you can monitor with Kuma for a keyword. When a threshold is met, you add/remove the keyword and fire off an alert via Telegram. See the sketch below.

    You could do something similar with a change-detection app or an n8n workflow.
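
    A minimal sketch of that idea, assuming a plain web root at /var/www/html and a Kuma keyword monitor watching for DISK_OK (the paths, threshold and keywords are all made up for illustration):

        #!/bin/sh
        # write disk usage of / to a tiny status page; Uptime Kuma flips to "down"
        # (and can notify Telegram) when the DISK_OK keyword disappears
        USAGE=$(df --output=pcent / | tail -1 | tr -dc '0-9')
        if [ "$USAGE" -ge 90 ]; then
            echo "DISK_ALERT ${USAGE}%" > /var/www/html/disk-status.html
        else
            echo "DISK_OK ${USAGE}%" > /var/www/html/disk-status.html
        fi

    Run it from cron every few minutes, e.g. */5 * * * * /usr/local/bin/disk-status.sh.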

  • Disk usage issue

    Solved Support
    1 Votes
    6 Posts
    523 Views
    jdaviescoatesJ

    @girish said in Disk usage issue:

    https://git.cloudron.io/cloudron/box/-/commit/67cde5a62cf0394c8bf2d78ee3408e5995a220e7 is the fix if you want to patch it yourself.

    Thanks, I wouldn't know where to start with that, but I guess it's useful for others who do 🙂

  • 0 Votes
    2 Posts
    250 Views
    nebulonN

    Have you seen https://docs.cloudron.io/troubleshooting/#recovery-after-disk-full with steps to bring a system back from such a disk full event?

    Regarding Nextcloud, you should be able to restore it to the previous version then.

  • out of space - normal process not working.

    Solved Support
    0 Votes
    13 Posts
    955 Views
    R

    @subven 🤦

    Word wrap!!! The problem was staring me in the face, but I could not see it because I did not scroll to the right...

    Thanks for the help!!!!

    I took a cert and key file from another server and renamed them to whatever the error asked for: first thedomain.com.cert, then thedomain.com.key, and so forth until it loaded. It is working now!

  • 1 Votes
    20 Posts
    2k Views
    girishG

    @robi @timconsidine good catch, I don't think it is. Will fix. Opened https://git.cloudron.io/cloudron/box/-/issues/832 to track internally

  • 0 Votes
    20 Posts
    1k Views
    robiR

    @shan 7.3.3 I believe.

  • out of space error leading to missing certs

    Solved Support
    1 Votes
    22 Posts
    1k Views
    robiR

    @subven said in out of space error leading to missing certs:

    @girish there is no way to trigger certificate renewal over the (SSH) console?

    I'd like an answer to this question too, as I just ran into the missing cert problem.

    Having deleted all the conf/cert files and gotten nginx started, the UI is still not accessible after a box restart. All apps are inaccessible too.

    A box restart seems to recreate /etc/nginx/applications/my.domain.conf, but doesn't check whether /home/yellowtent/platformdata/nginx/certs/my.domain.host.cert is there.

    How are they regenerated from the CLI?
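
    Not an official answer, but as a stop-gap (purely a sketch based on the paths mentioned above; my.domain.host is a placeholder for whatever filename the nginx error asks for): a self-signed placeholder cert at the expected location should at least let nginx and the dashboard come up, after which certificate renewal can be retried from the UI:

        cd /home/yellowtent/platformdata/nginx/certs
        openssl req -x509 -newkey rsa:2048 -nodes -days 7 \
            -subj "/CN=my.domain.host" \
            -keyout my.domain.host.key -out my.domain.host.cert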

  • 0 Votes
    3 Posts
    490 Views
    S

    @nebulon Thanks for confirming! 🙂

  • 0 Votes
    3 Posts
    494 Views
    jeauJ

    @girish great, thanks! It works 😉

  • Docker Images Taking huge space

    Solved Support
    0 Votes
    5 Posts
    924 Views
    girishG

    @ei8fdb right, the graphs don't update immediately, but every 6 hours or so. Otherwise, the disk spins a lot 🙂

    I will mark this thread as solved, but this problem will go away soonish. I think only ~20 apps are left to be moved to the new base image. The 7.1 release also updates all the internal containers to use the latest image. Should all be done by the end of this month.
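
    Until then, a quick way to see how much of the disk the per-app images account for (standard Docker commands, nothing Cloudron-specific):

        docker images --format '{{.Repository}}:{{.Tag}} {{.Size}}'      # one line per image with its size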

  • Backup job killed my cloudron

    Discuss
    0 Votes
    4 Posts
    563 Views
    O

    @scooke
    @murgero

    I am working on getting my FreeNAS NFS mount working for the backups, but it seems a bit difficult to get the permissions right.

    If you have experience with NFS, you might have an answer to my problem here...

    https://forum.cloudron.io/topic/5927/cloudron-backups-on-truenas-nfs-mount/3?_=1636891033300

    Thanks anyway!
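
    For what it's worth, the mount side itself is usually just a standard NFS mount (sketch; the TrueNAS export path and the local mount point below are placeholders, and the permission issues typically come down to the maproot/mapall settings on the TrueNAS export rather than anything on the Cloudron side):

        mkdir -p /mnt/cloudron-backups
        mount -t nfs truenas.local:/mnt/tank/cloudron-backups /mnt/cloudron-backups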

  • Help Cloudron offline

    Solved Support
    8
    0 Votes
    8 Posts
    1k Views
    D

    @girish

    UPDATE: Finally found the culprit; there was a huge backup file in /var/backups/snapshot#

    Did the internal DNS restart steps, restarted Docker, and my sites are working again. Still having trouble accessing the Cloudron admin page (complaints about certificates etc.); will wait and see if this resolves itself.
    Dimitri

    UPDATE: /home/yellowtent/box/setup/start.sh did the trick; the Cloudron dashboard is also available again.
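
    For anyone hitting the same thing, the sequence above roughly corresponds to (a sketch of the usual commands; service names as on a standard Cloudron/Ubuntu install):

        systemctl restart unbound     # Cloudron's internal DNS resolver
        systemctl restart docker      # restart the container runtime
        systemctl restart box         # the Cloudron box/dashboard service

    followed by /home/yellowtent/box/setup/start.sh if the dashboard still does not come back.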