

    Cloudron Forum


    Solved: Disk is (suddenly) full on 1TB drive, can't access Cloudron

    Support
    Tags: nginx, disk space, dashboard
    • shan (last edited by girish)

      Hello. A few days ago my disk usage was around ~20% when I checked. Today I got an alert that my disk was full, yet the usage reported by running du on the server doesn't add up to anything close to that. I can't access Cloudron because the disk is completely full. It appears to have filled up rapidly within a few hours, with no changes on my end.

      I've had problems with my separate local backup server recently and thought maybe Cloudron had fallen back to storing backups on the local disk, but I checked /var/backups and there is nothing there larger than a few KB. Is there another place the backups might be stored?

      I got a glimpse of the disk usage statistics on the Cloudron dashboard before I lost access; around 90% of the usage was attributed to the green "System" category. Not sure if that is helpful for diagnosing what's going on.

      Seeking any tips for debugging this, thanks! (really hoping it's not a data corruption issue..)

      • marcusquinn

        Frustrating!

        Cross-post of a thread we had on this same subject a while back:

        • https://forum.cloudron.io/topic/4604/disk-space-should-never-bring-a-whole-server-down

        We're not here for a long time - but we are here for a good time :)
        Jersey/UK
        Work & Ecommerce Advice: https://brandlight.org
        Personal & Software Tips: https://marcusquinn.com

        • imc67 (translator) @shan

          @shan @girish I've seen a similar effect, but I caught it in time because I use a separate Zabbix server with triggers.

          This was my situation, after a long, long search:

          1. Cloudron doesn't notify the admin by mail when something goes wrong (like "backup not succeeded" or "CIFS connection lost"), so only after a few days did I notice that backups were failing because of a CIFS disconnection.
          2. I reconnected and everything seemed fine, except that in Zabbix I noticed the disk usage graph kept climbing.
          3. Long story short: when I unmounted the CIFS share, I found the "hidden" backups at the mount path (written there before reconnecting)! I deleted all the backups there and mounted again: SOLVED.

          This same issue occurred on 2 of my 4 Cloudron Premium servers.
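          The failure mode described above can be guarded against in a pre-backup script. This is a hypothetical sketch (not Cloudron's actual code): it refuses to write when the target is not an active mountpoint, so a disconnected CIFS share can no longer silently fill the root disk.

```shell
#!/bin/sh
# Hypothetical pre-backup guard (not from Cloudron's source): refuse to
# write a backup when the target directory is not an active mountpoint.
# `mountpoint` is part of util-linux and exits non-zero for plain dirs.
require_mounted() {
    if ! mountpoint -q -- "$1"; then
        echo "backup target $1 is not mounted; refusing to write" >&2
        return 1
    fi
}

# Example: require_mounted /mnt/cifs-backups && run_backup /mnt/cifs-backups
```

          With a guard like this, a lost CIFS connection becomes a loud, early failure instead of days of backups quietly written to the local disk.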

          • girish (Staff) @shan

            @shan Like @imc67 suggested, are you able to check the local file system after unmounting your remote backup?

            Meanwhile, I will investigate why mount failure is not reported as a backup error.

            • shan

              Thanks for the responses guys!

              I realized that my backup server had been unmounted since it bugged out the other day, and backups were being stored locally in the /mnt/backups/snapshots folder. After purging that folder, the disk space issue is resolved.
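              If you suspect the same thing, one way to confirm it (a sketch assuming the /mnt/backups path from this thread; adjust to your setup) is to unmount the share and then measure what is hiding underneath the mountpoint on the local disk:

```shell
#!/bin/sh
# Hedged sketch: after `umount /mnt/backups`, anything still visible
# under the mountpoint lives on the root disk. This helper lists the
# largest entries beneath a directory so hidden backups stand out.
hidden_usage() {
    dir="$1"
    du -sk -- "$dir"/* 2>/dev/null | sort -rn | head -n 5
}

# Example (after unmounting): hidden_usage /mnt/backups
```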

              However! My cloudron instance is still inaccessible.

              I followed all of the steps in this troubleshooting guide, to no avail.

              After cleaning up disk space and rebooting, both nginx and unbound were in an error state. unbound restarted right away, but nginx had issues with old certs preventing it from restarting. After purging the old certs (which the troubleshooting guide says is safe 🙂), nginx was able to restart and is now running.

              Unfortunately, my cloudron instance is still inaccessible and I'm not sure why. All other services mentioned in the troubleshooting guide are working properly (docker, mysql, box) according to the logs.

              As far as I can tell everything is working properly, I just can't access my cloudron instance and don't know where to go from here. Any ideas for troubleshooting?

              EDIT: Looks like nginx just died again for some reason. It restarted successfully once after I purged old certs, but now has the same error again even though the certs are gone. 🤔

              This is the error it's giving me when I run nginx -t:

              nginx: [emerg] cannot load certificate key "/home/yellowtent/platformdata/nginx/cert/_.myserver.net.key": 
              PEM_read_bio_PrivateKey() failed (SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: ANY PRIVATE KEY)
              
              • girish (Staff) @shan

                @shan Delete the nginx config files as well and then systemctl restart box. This will regenerate the nginx configs and cert files. After that, you will be able to access the dashboard. Go into each app's Location view and click save. That will regenerate the nginx config of each app.

                (This tedious process is automated/fixed in next release.)

                • shan @girish

                  @girish I've deleted the nginx conf file (/home/yellowtent/platformdata/nginx/nginx.conf) and am encountering a new error. It seems systemctl restart box did not regenerate it.

                  [emerg] open() "/etc/nginx/nginx.conf" failed (2: No such file or directory)
                  
                  • girish (Staff) @shan

                    @shan Oh, my bad. I should have been clearer that only app configs have to be deleted. Anyway, please run /home/yellowtent/box/setup/start.sh which will create nginx config files.

                    • shan @girish

                      @girish that seems to have fixed all my problems! Can access the dashboard again. Looking forward to the next release when this is automated lol. 🙏

                      • Topic has been marked as a question by girish
                      • Topic has been marked as solved by girish
                      • nebulon (Staff)

                        The root cause of this was that the backup continued even though the backup disk was not mounted. With that established, we were able to find the bug that caused this and possibly other similar issues.

                        The check for the mountpoint itself was correct, but its result was simply ignored by the code. This oversight will be fixed in the next release and should prevent such cases for mounted backup volumes in the future.
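                        The bug pattern nebulon describes can be illustrated with a toy sketch (hypothetical code, not the actual Cloudron source): the mountpoint check runs, but nothing acts on its result, so the write proceeds either way.

```shell
#!/bin/sh
# Toy illustration of the bug class described above (not Cloudron code).

# Buggy: the check runs, but its result is explicitly discarded,
# so the "backup" lands under the (possibly unmounted) path anyway.
backup_buggy() {
    mountpoint -q -- "$1" || true   # result computed... then ignored
    touch "$1/backup.tar.gz"
}

# Fixed: abort as soon as the target is not an active mountpoint.
backup_fixed() {
    mountpoint -q -- "$1" || return 1
    touch "$1/backup.tar.gz"
}
```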

                        • shan @nebulon

                          @nebulon hey, not sure if you've already implemented this fix, but I just had the exact same issue happen again this morning on a fully up-to-date Cloudron instance. Trying to remember what I did to fix this...

                          • girish (Staff) @shan

                            @shan yes, the fix is coming in 7.3.3. What you have to do first is run journalctl -u nginx -fa. It will say some cert/conf file is bad. Just delete those files and systemctl restart nginx to get it running. Then, systemctl restart box.

                            • shan @girish

                              @girish the error it gives is the same as the one above; it wasn't clear that I actually just needed to delete the application certs (just found their location again at nginx/applications):

                              nginx: [emerg] cannot load certificate key "/home/yellowtent/platformdata/nginx/cert/_.myserver.net.key": 
                              PEM_read_bio_PrivateKey() failed (SSL: error:0909006C:PEM routines:get_name:no start line:Expecting: ANY PRIVATE KEY)
                              

                              Deleting the application certs allowed nginx to restart, but my webserver is still not running, gah

                              • girish (Staff) @shan

                                @shan that message is saying that the cert cannot be loaded. Did you remove that file? (You also have to remove the .cert file along with the .key file.) If you did remove it, then go into /etc/nginx/applications and delete the conf files that reference the above cert.
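                                Finding the configs that still reference a deleted certificate can be scripted. This is a small sketch assuming the /etc/nginx/applications layout mentioned above; the helper name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: list nginx config files under a directory that
# still reference a given (now deleted) certificate file, so they can
# be removed along with it. Directory layout per this thread.
find_confs_for_cert() {
    cert="$1"
    confdir="$2"
    grep -rl -- "$cert" "$confdir" 2>/dev/null
}

# Example: find_confs_for_cert "_.myserver.net.key" /etc/nginx/applications
```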

                                • shan @girish

                                  @girish I didn't remove that file, just the application certs, and nginx is now running fine according to systemctl status nginx; it's just that the webserver isn't loading. Unbound & box are fine too. Should I still delete that file even though nginx is running?

                                  • girish (Staff) @shan

                                    @shan what do you mean by "webserver"? Do you mean the dashboard? nginx is the webserver.

                                    • shan @girish

                                      @girish yeah the dashboard

                                      • girish (Staff) @shan

                                        @shan you have to systemctl restart box; it will regenerate the nginx config needed for the dashboard. Then, when you refresh in the browser, you might have to accept a self-signed certificate and log in (that's OK). Then, go to Domains -> Renew Certs and you should be back.

                                        BTW, it's safe to delete configs and certs because it's all in the database and code. Renew Certs above does not actually get a new cert; it syncs the existing cert in the db to disk.

                                        • shan @girish

                                          @girish alright, I was able to get the dashboard back, thanks! When can we expect the update with the fix?

                                          • robi @shan

                                            @shan 7.3.3 I believe.

                                            Life of Advanced Technology
