Cloudron Forum
Help with migrating Cloudron to a new server

Support · Solved
Tags: backup, restore, migration
15 Posts · 3 Posters · 268 Views
james (Staff)

    Hello @davejgreen
    This is a rather complex backup setup, which is outside the scope of the nominal Cloudron use case.

    @davejgreen said:

    I have added several entries to the /etc/hosts file on the new server to point all our Cloudron domains to its IPv4 and IPv6 addresses.

    This should be done on your local device instead, so you can reach the domain names without a DNS lookup; the fixed entry overrides DNS resolution.

    @davejgreen said:

    So, on the new server, I did mkdir -p /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot. I could then see the backups from our office device at /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs on the new server. (How did they get there if all I did was make a directory?) I then did the chown as instructed and tried clicking "Restore" in the browser page again. This time I got the error:
    Failed to unmount existing mount
    Is this referring to a mount that was somehow made when I created the directory? Why does it need to unmount it, and why can't it do that? Do I need to unmount something?

    This is indeed strange.
    Did you configure the NFS mount on the new system manually?
    Can you run mount -l and see whether this NFS mount is there?
    Also, if it is mounted, there may be a systemd mount unit for it, named mnt-managedbackups.*.
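To check both of those at once, a quick inspection sketch (the mount path is taken from this thread; the unit-name pattern is simply how systemd escapes that path, so treat it as an assumption):

```shell
# List active mounts and filter for NFS (prints nothing if no NFS mount exists).
mount -l | grep nfs || true

# systemd names a .mount unit after the escaped mount path, so any
# auto-created unit under /mnt/managedbackups shows up here.
systemctl list-units --type=mount 'mnt-managedbackups*' || true

# If a unit exists, its status shows what created and started the mount.
systemctl status 'mnt-managedbackups-cloudron\x2drestore\x2dvalidation.mount' || true
```

The `|| true` guards just keep the sketch harmless on machines where the mount or unit does not exist.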

    @davejgreen said:

    I tried the dry run restore by rsyncing the folder of tarballs from a recent backup onto the new server, and then using "Filesystem" as the storage provider (as mentioned in the docs).

    This means that the location was already filled with data?
    I would advise resetting the new server again and only attempting the dry run restore from NFS.
    The rsync of files to the same location might have caused issues with the mounting and access of files.

    davejgreen · #5

    Hi @james, thanks for such a prompt response.

    I'm surprised that trying to migrate from one server to another using an NFS tarball backup on a local device is considered complex - although I am certainly finding it very complex! But I figured that was down to my inexperience in these things. Is it the NFS tarball backup that is unusual?

    Our DNS records are manually configured. I think I understand that I probably don't need the new server to have the /etc/hosts entries until we do the actual migration. Our thinking was to keep the existing server live for as long as possible, so we could do the migration with these /etc/hosts entries on the new server, check it is working, then turn off the live server, switch the DNS records, and remove the /etc/hosts entries on the new server. (Though maybe that's a bad idea and we should just turn the old server off first.) For other parts of my job I still need access to the existing live Cloudron instance, so I hadn't added anything to my local device's /etc/hosts yet - but I was planning to do this once the install had completed so that I could check it. I did this with the earlier rsync-Filesystem attempt.

    Just to clarify, I did do a complete reinstall of the server before each attempt - installing an Ubuntu image from our provider and erasing all existing data, so I was starting with a clean slate each time. So the backups I rsync over on the earlier attempt were not still there when I tried again today.

    The mounting stuff I am a bit confused by (I'm relatively new to the idea of mounting file systems). Doing mount -l on the new server, I can see there is a mount: our-office-device:/export/nfs-cloudron-backups on /mnt/managedbackups/cloudron-restore-validation type nfs4 (with,various,config). I did not create this (or any other mount) manually. Our office device runs on NixOS, and I added a line for the new Cloudron server to its configuration:

      services.nfs.server = {
        enable = true;
        exports = ''
          /export/nfs-cloudron-backups <old-server-tailscale-IP>(rw,sync,no_subtree_check,no_root_squash)
          /export/nfs-cloudron-backups <new-server-tailscale-IP>(rw,sync,no_subtree_check,no_root_squash)
        '';
      };

    My understanding was that this would only allow a device with that IP address to mount the folder at /export/nfs-cloudron-backups, but maybe it 'pushed' the mount to the device it could see at that IP address? I'd be surprised if that was the case though, as I haven't set the mount location on the new server anywhere here, and with the old server, I still had to go and create the backup site which included it mounting to this location. The new server has a systemd service mnt-managedbackups-cloudron\x2drestore\x2dvalidation.mount which is active and mounted. I assume that must have been created when I first clicked the "Restore" button and then got the "Access denied. Create..." message? Is that something Cloudron does? Are there some directory permissions I need to check or change? Should I have manually created a mount so the new server could see the office device backup folder before clicking "Restore"?


      james (Staff) · #6

      Hello @davejgreen

      @davejgreen said:

      I'm surprised that trying to migrate from one server to another using an NFS tarball backup on a local device is considered complex - although I am certainly finding it very complex! But I figured that was down to my inexperience in these things. Is it the NFS tarball backup that is unusual?

      Sorry, what I referred to was the network isolation setup including the NFS on a device in your office.

      @davejgreen said:

      Our thinking was to keep the existing server live for as long as possible, so we could do the migration with these /etc/host entries on the new server, check it is working, then turn off the live server, switch the DNS records, and remove the /etc/hosts entries on the new server.

      Yes, that is the flow.
      But the /etc/hosts change is not needed ON the new server; it is needed on your local device, so that your device resolves the names to the new server during the dry-run restore.

      @davejgreen said:

      Doing mount -l on the new server, I can see there is a mount: our-office-device:/export/nfs-cloudron-backups on /mnt/managedbackups/cloudron-restore-validation type nfs4 (with,various,config).

      This was created by the attempted restore, so that is good.
      But since there were folders and files left over from your rsync attempt (if I understood that correctly), those might interfere with the mounting and restore process.

      @davejgreen said:

      but maybe it 'pushed' the mount to the device it could see at that IP address?

      That should not be the case, unless your NixOS system has SSH access to the new server and runs some provisioning-style setup for NFS. An NFS export only grants the listed clients permission to mount the share; the client still has to perform the mount itself.

      @davejgreen said:

      Should I have manually created a mount so the new server could see the office device backup folder before clicking "Restore"?

      No.
      This should be done by Cloudron.


        davejgreen · #7

        Hi @james, thanks again for a quick response.

        No Leftover Files
        I think you misunderstood what I said about a complete reinstall. I made the rsync attempt last week, then erased everything on the new server to try again: wiped it clean, all files gone, a fresh install of Ubuntu 24.04, no leftover files. Then I made this morning's attempt. There were no files or folders left over from the rsync attempt when I tried the restore today, so that cannot be the reason it didn't work.

        DNS
        I don't think I understand your point about the /etc/hosts change. I understand that to view the newly restored Cloudron instance in a browser on my local device, I will need to change my local device's /etc/hosts. But I don't need to do this to see the "Restore Cloudron" page; I can just enter the new server's IPv4 address in the browser address bar. Then, when I start the restore from there, doesn't everything happen on the new server? I didn't think my local device was doing anything other than showing me, through the browser window, what was happening on the new server. Once the restore has finished, I would then need to adjust my local /etc/hosts to view the newly restored Cloudron instead of the old existing one. Are you saying I need to adjust /etc/hosts on my local device before I do the restore? If so, why, given that I'm doing a dry run and that we manage all DNS manually?

        Next Steps?
        I'm unsure what to try next. I still have the error "Failed to unmount existing mount". I tried doing sudo umount /mnt/managedbackups/cloudron-restore-validation which appeared to work, and then I clicked the restore button again, but got the same error message. The first error message started with "Access denied." which makes me think the problem might be related to file and folder ownership and/or permissions, as I am often confused by these.

        How is this mounting meant to work? What should I try next?


          james (Staff) · #8

          Hello @davejgreen

          @davejgreen said:

          I tried the rsync attempt last week, then I erased everything on the new server to try again.

          Ah! Thanks for the clarification. I indeed did not understand you correctly.

          @davejgreen said:

          Then when I start the restore from there, doesn't this happen on the new server?

          https://docs.cloudron.io/backups/#dry-run-restore

          Dry run skips DNS updates. The new server won't be publicly accessible - you access it using /etc/hosts entries on your local machine.

          A dry run will not change any DNS records.
          This also means that as soon as you start the dry-run restore and a redirect to your restored domain happens, your DNS will still resolve to the production server.
          This is why you need to update /etc/hosts on your local device beforehand.
          If you don't, you can end up with a mixed view of the dashboard: cached content, content served by the old server, and content served by the new system.
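For illustration, the entries on the local device could look like this (hypothetical domains and a placeholder IP; substitute your real Cloudron domains and the new server's address):

```
# /etc/hosts on the LOCAL device, not on the new server
203.0.113.10  my.example.com  app.example.com  mail.example.com
```

Each Cloudron domain that should resolve to the new server during the dry run needs an entry here.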

          @davejgreen said:

          The first error message started with "Access denied." which makes me think the problem might be related to file and folder ownership and/or permissions, as I am often confused by these.

          This could be the issue.
          With the partial mount that is failing, can you review the access permissions on the mounted NFS share?

            davejgreen · #9

            When I click Restore, I get the message: "Access denied. Create /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot and run "chown yellowtent:yellowtent /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs" on the server".

            Then, the /mnt/managedbackups/cloudron-restore-validation folder looks like it is mounted correctly. It has permissions "drwxrwxrwx root root", and I can see the "nfs-tarballs" folder from our office device inside it.

            The "nfs-tarballs" folder, and all its contents, have permissions "drwxr-xr-x djg djg" (djg is my user, which happens to have the UID 1000, on the office device there is no user assigned to 1000 and these folders show "drwxr-xr-x 1000 1000"). The "nfs-tarballs" folder contains several folders named with date-times, as well as one called "snapshot". Inside these are the .tar.gz and .backupinfo files. Is the problem that there is already a "snapshot" folder here? (The message is asking me to create it, but it already exists.)

            The error message also says to change the ownership of the "nfs-tarballs" folder to yellowtent:yellowtent. If I do that, it changes the ownership to "808:808" on the office device (because that is where the files actually are) - is that what is intended? If I try this and click the Restore button again, I get the message "Failed to unmount existing mount". If I then unmount cloudron-restore-validation, and click the Restore button again, I get the error message: "Unable to create test file as 'yellowtent' user in /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs: EACCES: permission denied, open '/mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot/cloudron-testfile'. Check dir/mount permissions". Inside the "snapshot" folder there is a full set of .tar.gz and .backupinfo files, but no "cloudron-testfile". What should the "dir/mount" permissions be?


              james (Staff) · #10

              Hello @davejgreen

              @davejgreen said:

              The "nfs-tarballs" folder, and all its contents, have permissions "drwxr-xr-x djg djg"

              This sounds like the issue.
              Since the Cloudron user yellowtent can't access that, the restore fails.

              How does this setup work on the live/production system?
              What permissions are set there so it can access the NFS share?

              @davejgreen said:

              If I do that, it changes the ownership to "808:808" on the office device (because that is where the files actually are) - is that what is intended?

              So what are the permissions on the live/production NFS share?
              Can we compare the two?

              We have to figure out why the live/production system has no permission issues while the new one does.
              I assume solving that would solve everything.
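One way to compare, sketched below, is to reproduce Cloudron's own access check by hand on both servers (the paths and the yellowtent user come from this thread; the exact check Cloudron performs internally is an assumption):

```shell
# Numeric UID/GID of Cloudron's service user on this machine.
id yellowtent || true

# Numeric owner and mode of the backup directory as this NFS client sees it.
stat -c '%u:%g %A %n' /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs || true

# Reproduce the check: create, then remove, a test file as the yellowtent user.
sudo -u yellowtent sh -c 'touch /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot/cloudron-testfile \
  && rm /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs/snapshot/cloudron-testfile' \
  && echo "write OK" || echo "write denied"
```

If the old server prints "write OK" and the new one prints "write denied", the difference in the `id`/`stat` output usually explains why.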

                davejgreen · #11

                Ah, I think I understand the issue. On our existing server, yellowtent has UID 1000, which is why everything on the office device that receives the NFS tarball backups is owned by "1000:1000". But on the new server I created my own user before installing Cloudron, and it happened to be assigned UID 1000. When I then installed Cloudron, it must have assigned yellowtent UID 808 because 1000 was already taken. So the ownership does not match.
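This is easy to confirm, since NFS stores numeric IDs and each host maps them to whichever local account holds that number. A small check sketch (the path and the UIDs 1000/808 are taken from this thread):

```shell
# Raw numeric owners of the backup files, with no name translation.
ls -ln /mnt/managedbackups/cloudron-restore-validation/nfs-tarballs || true

# Which local accounts hold those numbers on THIS machine, if any.
getent passwd 1000 808 || true
```

On the old server UID 1000 maps to yellowtent; on the new server it maps to the personal account, which is exactly the mismatch.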

                Thank you, I believe I can sort this out now!


                  jdaviescoates · #12

                  @davejgreen said:

                  on the new server, I created my user before installing cloudron

                  FYI, afaict all the issues you've faced were caused by doing things that aren't needed 🙂

                  e.g. creating a user, as above, plus the hosts-file editing mentioned previously.

                  I've migrated my Cloudron server probably 3 or 4 times without any issues 🙂

                  I use Cloudron with Gandi & Hetzner

                    davejgreen · #13

                    Just to clarify, the editing of /etc/hosts was irrelevant and did not cause any of the issues I was having. This is a business setup, and I have been instructed to create at least two users on the server, give them admin rights, and then prevent SSH-ing in as root, so the user accounts on the server are needed. I needed to add the server to our Tailscale network before doing the restore, as that is how the server will access the backups. It seemed sensible to add the users and check SSH-ing between devices while I was at it.

                    I wondered if it would be worth adding a note to the migration docs about making sure the UID of the "yellowtent" user on the old server is available on the new server when Cloudron is installed?

                      james (Staff) · #14

                      Hello @davejgreen

                      @davejgreen said:

                      I wondered if it would be worth adding a note to the migration docs about making sure the UID of the "yellowtent" user on the old server is available on the new server when Cloudron is installed?

                      That might be a good addition.
                      But this also stems from the installation instructions not being followed to the letter: if one runs only the Cloudron setup and nothing else, the issue does not arise.
                      Maybe the Cloudron installation script could even check whether a user with UID 1000 is already present.
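A hypothetical pre-install check of that kind could be as simple as the following sketch (this is not part of the actual Cloudron installer):

```shell
# Warn if UID 1000 is already taken: Cloudron would then give yellowtent a
# different UID (e.g. 808), and numeric ownership on backups created by a
# UID-1000 yellowtent on another server would no longer match.
if existing=$(getent passwd 1000); then
  echo "warning: UID 1000 is already in use by '${existing%%:*}'"
fi
```

Whether a warning or a hard failure is appropriate would be a design decision for the installer.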

                        james (Staff) · #15

                        For future readers: this issue could also be resolved on the NFS host by granting additional users access. With setfacl you can allow multiple users to access files and folders.

                        One could run the following commands on the NFS host so that all existing folders and files, as well as future files, can be accessed by UID 808:

                        # as user root or run with sudo
                        
                        # Apply ACLs to existing content
                        setfacl -R -m u:808:rwx /mnt/nfs-backup
                        
                        # Apply default ACLs for future content:
                        setfacl -R -d -m u:808:rwx /mnt/nfs-backup
                        

                        But for this to work, the NFS host must use NFSv4 with ID mapping configured properly.
                        Also, root_squash may block expected access, but your posted config:
                          services.nfs.server = {
                            enable = true;
                            exports = ''
                              /export/nfs-cloudron-backups <old-server-tailscale-IP>(rw,sync,no_subtree_check,no_root_squash)
                              /export/nfs-cloudron-backups <new-server-tailscale-IP>(rw,sync,no_subtree_check,no_root_squash)
                            '';
                          };

                        has no_root_squash set explicitly, as documented at https://docs.cloudron.io/guides/nfs-share#exposing-a-directory

