Cloudron Forum

Backup is failing for UpTime Kuma on 9.0.15

Solved | Support | backup | 18 Posts, 4 Posters, 746 Views
potemkin_ai (#9)

    Yep - it does.
    Stop & start didn't help, unfortunately.

nebulon (Staff) (#10)

If you still get the same Docker error about an already existing container named sqlite-<appid>, can you check with the Docker CLI whether a dead container exists and, if so, whether removing it gets things working again?

potemkin_ai (#11), replying to nebulon (#10)

@nebulon there seem to be no dead ones:

ubuntu@cloudron:~$ sudo docker ps -a | grep sqlite | wc -l
1
ubuntu@cloudron:~$ sudo docker ps -a | grep sqlite
bbb52ef9e686   cloudron/louislam.uptimekuma.app:202511081423470000   "sqlite3 /app/data/d…"   2 weeks ago   Created   sqlite-be0be218-57bf-427b-abb6-b7660943eaf6

There aren't many non-running containers - only two Nextcloud workers with exit(0) and another service that I stopped explicitly.

nebulon (Staff) (#12)

So it looks like there is a dangling container with the conflicting name sqlite-be0be218-57bf-427b-abb6-b7660943eaf6. I'm not sure why it didn't exit cleanly, but since it still exists (stopped), no new container can be created for the next backup.

The fix here would probably be to purge it with docker rm bbb52ef9e686, after which the backup should work again. Before you do this, maybe check with docker logs bbb52ef9e686 whether there is anything useful about why it failed.

potemkin_ai (#13)

            Thank you, that fixed the issue.
There were no logs in the container.

Any ideas what the nature of the sqlite container for Uptime Kuma is? I was under the impression that sqlite requires no server side...

nebulon (Staff) (#14)

That is true, no server is required for sqlite. However, to make a correct and consistent backup, one cannot just copy the sqlite file on disk, as data might not have been fully flushed to disk. This is why the localstorage addon in Cloudron has a way to signal to the system which file is an sqlite database, and the backup will spin up a container to make a proper database dump. See https://docs.cloudron.io/packaging/addons#localstorage

potemkin_ai (#15), replying to nebulon (#14)

@nebulon thank you. And yes - I understand why the work-around is required. Would you mind sharing the source code so I can read the exact logic?

I've been wondering about the best way to achieve that but have never seen a good practical approach - I'd love to see how you do it without shutting the container down, if you don't mind sharing, of course.

If that's too complicated - never mind; please feel free to close the issue. The question is now purely to satisfy my curiosity.

joseph has marked this topic as solved
nebulon (Staff) (#16)

This is really just a docker exec into the container and then running sqlite3 /path/to/database.db .dump > dump.sql
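Outside of Cloudron's actual implementation, the same dump can be sketched with Python's built-in sqlite3 module: Connection.iterdump() emits the same kind of SQL statements as the CLI's .dump command. The file and table names below are illustrative, not Uptime Kuma's real schema.

```python
import os
import sqlite3
import tempfile

# Throwaway database standing in for the app's sqlite file
# (path and schema are illustrative, not Uptime Kuma's real ones).
tmp = tempfile.mkdtemp()
db_path = os.path.join(tmp, "kuma.db")

src = sqlite3.connect(db_path)
src.execute("CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, status TEXT)")
src.execute("INSERT INTO heartbeat (status) VALUES ('up')")
src.commit()

# Equivalent of `sqlite3 kuma.db .dump > dump.sql`: iterdump() yields
# the SQL statements needed to recreate the database.
dump_sql = "\n".join(src.iterdump())
src.close()

with open(os.path.join(tmp, "dump.sql"), "w") as f:
    f.write(dump_sql)
```

Restoring is then just feeding dump.sql back through sqlite3 into an empty database.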

robi (#17)

Once the DB gets busy or big, or the storage gets slow, consider copying the .db file first and then dumping from a DB file that isn't changing.

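A sketch of that copy-first idea using Python's sqlite3 module, with illustrative file names. Note that a plain cp can catch a hot database mid-write; Connection.backup() instead copies page-by-page under SQLite's locking, so the snapshot stays consistent even while the live database is being written.

```python
import os
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
live_path = os.path.join(tmp, "kuma.db")
snap_path = os.path.join(tmp, "kuma-snapshot.db")

# Live database that a writer may still be touching (illustrative schema).
live = sqlite3.connect(live_path)
live.execute("CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, status TEXT)")
live.execute("INSERT INTO heartbeat (status) VALUES ('up')")
live.commit()

# Snapshot first: sqlite3's online backup API takes a consistent copy,
# unlike a raw file copy of a database that is being written to.
snap = sqlite3.connect(snap_path)
live.backup(snap)

# Now dump from the snapshot, which nothing is writing to.
dump_sql = "\n".join(snap.iterdump())
snap.close()
live.close()
```

The same two-step shape works with the sqlite3 CLI as well, using its .backup command in place of Connection.backup().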
potemkin_ai (#18)

                      Thanks!!
