Cloudron Forum · Support
Task 'mvvolume.sh' dies from oom-kill during move of data-directory from local drive to NFS share

Solved · Support · migration
16 Posts · 6 Posters · 1.1k Views
jdeighton (#6):

I upped the VM memory allocation from 24 GB to 30 GB (the whole box has 32 GB; I wanted to leave a little for the hypervisor).

Same problem. Partway through the file copy, the oom-kill steps in and spoils the party. The total-vm size reported by the oom-killer was a little larger, so maybe something is using a lot of RAM during that process, but I cannot for the life of me think what it would be.

Have ordered more physical RAM for the server; I'll add that in the next service window and re-try this to see if it helps.

What's odd is that outside the VM there is no evidence of that amount of memory usage. The Proxmox host itself tracks memory usage and it doesn't change significantly during that time, and the system monitoring inside the Ubuntu VM shows a peak memory usage of less than 10 GB overall.
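For anyone reproducing this, one way to see where the memory actually goes is to sample the kernel's dirty/writeback counters while the copy runs; page-cache memory does not show up in any process's RSS, which could explain why per-process monitors show so little (a diagnostic sketch, Linux-only):

```shell
# Sample page-cache counters during the copy: on a large buffered write
# to NFS, Dirty and Writeback can grow far beyond any process's RSS.
grep -E '^(MemFree|Cached|Dirty|Writeback):' /proc/meminfo
```

Running this in a loop (for example under `watch`) alongside the migration shows whether dirty pages, rather than process memory, are what balloons.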

joseph has marked this topic as solved.
jdeighton (#7):

Just to close this one out: I added more physical RAM to the box, upped the VM's allocation to 48 GB, and re-ran the migration; it worked fine. I'm still not sure why moving files should cause this type of oom-kill, but more RAM was the solution.

robi (#8):

Perhaps the memory was consumed in another context than plain process RAM, like a namespace, or a temporary I/O log that is usually short-lived, except where it isn't.

This reminds me of the old days when copying floppies was inordinately slow, since only a few blocks were copied at a time, with a lot of context switches, until someone introduced the xcopy algorithm: as much of the source as would fit was read into memory at once, then written to the destination at once, minimizing block copies and the back-and-forth context switching. Later it was improved further to not use so much memory and to work in nice large chunks, optimizing the drives' reads and writes (heat, errors) and time.

If the code copied in predefined chunks, memory use would stay bounded and not fill up on larger runs.
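The chunked approach described above can be sketched with plain `dd`, which reuses one fixed-size buffer instead of queueing the whole file (a minimal sketch using a throwaway temp file; real paths would differ):

```shell
# dd reads one 4 MiB block, writes it, and reuses the same buffer, so
# resident memory stays bounded no matter how large the file is.
# conv=fsync flushes data to the destination before dd exits.
src=$(mktemp) && dst=$(mktemp)
head -c $((8 * 1024 * 1024)) /dev/zero > "$src"   # stand-in for a big data file
dd if="$src" of="$dst" bs=4M conv=fsync 2>/dev/null
cmp -s "$src" "$dst" && echo "copy verified"
```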

        Conscious tech

girish (Staff, #9):

@robi said:

If the code could copy in predefined chunks, then the memory will be bound and not fill up for larger runs.

In this situation at least, the code is executing the cp tool 🤔 Strange behavior, really.

robi (#10):

@girish said:

In this situation at least, the code is executing the cp tool. Strange behavior, really.

That's the issue. See here: https://serverfault.com/questions/156431/copying-large-directory-with-cp-fills-memory
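The usual explanation in threads like that one is dirty page cache: buffered writes to the NFS mount pile up in memory faster than they are flushed. One hedged mitigation, if that is indeed the cause here, is to cap how much dirty data the kernel will buffer before forcing writeback (values below are illustrative, and changing them needs root):

```shell
# Illustrative values; needs root. Capping dirty memory forces writeback
# to the NFS share to start early instead of letting buffered writes
# balloon until the OOM killer fires.
old_dirty=$(sysctl -n vm.dirty_bytes)
old_bg=$(sysctl -n vm.dirty_background_bytes)
sysctl -w vm.dirty_bytes=$((256 * 1024 * 1024))           # hard cap at ~256 MiB
sysctl -w vm.dirty_background_bytes=$((64 * 1024 * 1024)) # start flushing at ~64 MiB
# ... run the migration ...
sysctl -w vm.dirty_bytes="$old_dirty"                     # restore afterwards
sysctl -w vm.dirty_background_bytes="$old_bg"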

girish (Staff, #11):

@robi Interesting link. I haven't found why it takes so much memory, though; do you happen to have any information on this?

Do you think it just loads all the files into memory and works off a massive list one by one? If that's the case, yeah, this is a problem. No easy fix other than rolling our own cp, it seems.
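For what it's worth, a hand-rolled recursive copy with bounded buffers is not much code. This is a hypothetical sketch (not Cloudron's actual mvvolume.sh logic), using find to walk the tree and dd to stream each file through a fixed 4 MiB buffer:

```shell
# Hypothetical bounded-memory copy: mirror the directory layout first,
# then copy files one at a time through dd's fixed-size buffer.
bounded_copy() {
    src=$1 dst=$2
    ( cd "$src" || exit 1
      find . -type d -exec sh -c 'mkdir -p "$2/$1"' _ {} "$dst" \;
      find . -type f -exec sh -c \
          'dd if="$1" of="$2/$1" bs=4M conv=fsync 2>/dev/null' _ {} "$dst" \;
    )
}
```

Note that each file still goes through the page cache, so this alone would not avoid dirty-page buildup; it only bounds the userspace buffer.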

robi (#12):

                @girish Here's what I found:

                https://www.qwant.com/?q=long+run+copies+via+cp+causing+oom+memory

                Various OOM workarounds:
                https://github.com/rfjakob/earlyoom
                https://github.com/hakavlad/nohang
                https://github.com/facebookincubator/oomd
                https://gitlab.freedesktop.org/hadess/low-memory-monitor/
                https://github.com/endlessm/eos-boot-helper/tree/master/psi-monitor

                Understanding OOM Score adjustment:
                https://last9.io/blog/understanding-the-linux-oom-killer/

                Possible LowFree issue:
                https://bugzilla.redhat.com/show_bug.cgi?id=536734

                Parallel vs sequential copy:
                https://askubuntu.com/questions/1471139/fuse-zip-using-cp-reported-running-out-of-virtual-memory

And lastly, since you use stdio for output, perhaps it fills up somehow. If it were redirected to disk, it might be different.

                That might also enable resuming a failed restore.
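Of the mitigations linked above, the lightest-touch one is OOM score adjustment: mark the copy process itself as the preferred victim, so that if memory does run out the kernel kills the copy rather than something critical. A sketch (Linux-only; raising a process's own score needs no privileges):

```shell
# 1000 means "kill me first". Children inherit the score, so launching
# the copy from this shell covers the cp/dd processes too.
echo 1000 > /proc/self/oom_score_adj
cat /proc/self/oom_score_adj   # cat inherits the parent's score
# e.g. then run the migration from this shell:
# cp -a /mnt/local/appsdata/. /mnt/nfs/appsdata/
```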

potemkin_ai (#13):

I once lost data due to some weird find commands in the shell script; I wonder why we don't just use rsync.

joseph (Staff, #14):

                    I think rsync requires a server, no?

necrevistonnezr (#15):

@joseph said:

I think rsync requires a server, no?

No, not even over SSH. There's a daemon mode and it can work as a server, but it's not required.
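To illustrate the point: a local-to-local rsync touches no daemon and no SSH at all (a minimal sketch with throwaway temp dirs):

```shell
# Plain local rsync: no rsyncd, no SSH. -a preserves permissions, times
# and symlinks; the trailing slash copies the directory's contents.
src=$(mktemp -d) && dst=$(mktemp -d)
echo "app data" > "$src/config.json"
rsync -a "$src/" "$dst/"
ls "$dst"   # → config.json
```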

joseph (Staff, #16):

@necrevistonnezr I guess you mean the rsync binary. Sure, that will work "locally", but the real benefit of the rsync protocol comes from having a server component. That server component will not be used with NFS mounts (the topic of this thread). If you want to use rsync just locally, there are other tools you can use as well.

I don't have any more context than that one sentence, though 🙂 I was just replying in passing.
