Cloudron Forum · Baserow
Baserow didn't let me access some tables then crashed when restarting it

gabrielle (#5)

I am running into a similar issue: I cannot access about two thirds of the content of one table (the record rows are displayed in white with no content and I can't click on them). I cannot figure out what the issue is, and it makes some views of the table crash (but not others).
In Cloudron, Baserow shows an error: FileSystem Error: failed to register layer: write /app/code/web-frontend/node_modules/node-sass/build/Release/obj.target/binding.node: no space left on device
It's failing to update to 1.29.2 (currently at 1.29.0).
Weirdly, I have 4 GB of space left on my server and the update is only about 1 GB.

    Is there another device I should look at?
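
[Editor's note: the "failed to register layer" message appears to come from Docker while it unpacks the new app image on the server, so it is the host's disk that fills up, not a filesystem inside the app. A minimal sketch for checking this from an SSH session on the server (docker system df is a standard Docker command; the exact mount point may differ on your setup):

    df -h /            # free space on the root filesystem, where Docker usually stores images
    docker system df   # space consumed by Docker images, containers and volumes
]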

nebulon (Staff, #6)

Maybe other filesystems like /tmp or /run ran out of disk space temporarily? Can you open a web terminal into the app and run df right after the issue happens?

gabrielle (#7)

I've retried the update; df says:

Filesystem  1K-blocks      Used  Available  Use%  Mounted on
overlay      30297152  27992580    2288188   93%  /
tmpfs           65536         0      65536    0%  /dev
shm             65536        16      65520    1%  /dev/shm
/dev/root    30297152  27992580    2288188   93%  /tmp
tmpfs         1974552         0    1974552    0%  /proc/acpi
tmpfs         1974552         0    1974552    0%  /proc/scsi
tmpfs         1974552         0    1974552    0%  /sys/firmware
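
[Editor's note: the filesystem backing / is at 93%, which lines up with the "no space left on device" error during the image update. To find what is actually consuming the space, a sketch like this (run from an SSH session on the server; -x keeps du on a single filesystem) lists the largest top-level directories:

    du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -15
]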

nebulon (Staff, #8)

That seems OK-ish. Rereading the previous messages here, just to clarify: the original issue was about loading/displaying a huge table, and then a second issue appeared where updating the app fails?

gabrielle (#9)

Yes, exactly. Now the app also cannot create new backups. It seems to be keeping backups from Nov 15th onwards, even though the backup policy is set to keep them for 2 days only. I don't see how to delete specific backups to make more room, nor why Cloudron doesn't manage to clear them.
Any chance we could hop on a call to figure this out?

nebulon (Staff, #10)

Cloudron will keep the backups of the last update, regardless of the retention policy; maybe this is what you see from the 15th. Since you are trying to make room for backups, I assume you keep your backups on the local disk, which is strongly discouraged: if the server or the disk runs into trouble, you might lose them. Overall, both issues might be connected if the system is running low on disk space. /tmp will also have trouble if the app needs to create large temporary files for big tables (and we are not experts on Baserow internals).
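
[Editor's note: in the df output above, /tmp is mounted on the root disk, so large temporary files count against the same 93% usage. A quick sketch for checking this from the app's web terminal:

    df -h /tmp                                   # /tmp shares the root disk per the df output above
    du -ah /tmp 2>/dev/null | sort -rh | head    # biggest entries under /tmp
]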

gabrielle (#11)

Good call on the backup storage location; I moved it somewhere else yesterday.
Cloudron managed to create new backups last night and delete the old ones.
However, it's still failing to update to the next Baserow version. It seems to create a backup every time it tries, filling up the disk, and then the update fails because there is not enough disk space left.
Any idea what I could do to fix this?

nebulon (Staff, #12)

I am confused: have you configured backups to be on remote storage or not?

gabrielle (#13)

Yes, the backups are on remote storage.
The system now says:
Free space: 2.99 GB and /var/backups (Old Backups): 3.93 GB
I'd like to remove these old backups; how do I do that?

nebulon (Staff, #14)

If new backups are being made to the remote storage, you can safely just remove the local ones at /var/backups/ via SSH, with something like rm -rf /var/backups/* (the * keeps the folder itself).
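
[Editor's note: a slightly more careful sequence, assuming the /var/backups location mentioned above, verifies what is there before deleting anything:

    du -sh /var/backups     # total size of the old local backups
    ls -lh /var/backups     # inspect what would be deleted
    # after confirming that recent backups exist on the remote storage:
    rm -rf /var/backups/*   # remove the contents, keep the directory itself
]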
