Cloudron Forum


Volumes: red dot but mounted

Solved | Support | Tags: volumes, sshfs
14 Posts, 3 Posters, 1.4k Views

mononym #1

Hello.
In the Volumes interface, my sshfs volumes are indicated with a red dot: "Could not determine mount failure reason". Remounting does not turn the dot green either. But I can access the File Browser without problems, and the apps that use these volumes can read their data fine.
I don't know if this appeared after the latest update. Current Platform Version: v8.1.0 (Ubuntu 20.04.4 LTS).

joseph (Staff) #2

@mononym Can you run this on your server?

    systemctl status mnt-volumes-8aa128f595aa40f99f8aa6d4b7c1bed2.mount
    

The 8aa128f595aa40f99f8aa6d4b7c1bed2 above is the volume id. You can get the volume id from the URL of the volume's file manager.

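If you are not sure which mount unit belongs to which volume, you can also list all of them at once. A minimal sketch using standard systemd tooling (the mnt-volumes-* pattern simply follows the unit names shown in this thread):

    # list every volume mount unit and its current state, including failed/inactive ones
    systemctl list-units --all 'mnt-volumes-*.mount'
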
joseph marked this topic as a question

mononym #3

      ● mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount - volume-name
           Loaded: loaded (/etc/systemd/system/mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount; enabled; vendor preset>
           Active: failed (Result: exit-code) since Fri 2024-11-22 18:31:09 UTC; 5 days ago
            Where: /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
             What: account@address.of.external.storage:/folder
            Tasks: 13 (limit: 9425)
           Memory: 2.8M
           CGroup: /system.slice/mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
                   ├─    892 sshfs account@address.of.external.storage:/folder /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxx>
                   └─2279000 ssh -x -a -oClearAllForwardings=yes -oport=23 -oIdentityFile=/home/yellowtent/platformdata/s>
      
      Warning: journal has been rotated since unit was started, output may be incomplete.
      
nebulon (Staff) #4

Can you also try to remount this via systemctl restart mnt-volumes-xxxxx.mount, and then check the logs with journalctl --system as well as journalctl -u mnt-volumes-xxxxx.mount?


mononym #5

@nebulon

          systemctl restart mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
          journalctl --system

          Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 892 (sshfs) in control group while starting unit. Ignoring.
          Dec 11 19:44:09 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
          Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 2279000 (ssh) in control group while starting unit. Ignoring.
          Dec 11 19:44:09 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
          Dec 11 19:44:09 1001507-513 systemd[1]: Mounting xxxxxx...
          Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
          Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
          Dec 11 19:44:09 1001507-513 mount[122713]: read: Connection reset by peer
          Dec 11 19:44:09 1001507-513 systemd[121068]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Succeeded.
          Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
          Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
          Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Mount process exited, code=exited, status=1/FAILURE
          Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Failed with result 'exit-code'.
          Dec 11 19:44:09 1001507-513 systemd[1]: Failed to mount xxxxxx.
          

          journalctl -u mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount

          Dec 11 19:38:41 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 892 (sshfs) in control group while starting unit. Ignoring.
          Dec 11 19:38:41 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
          Dec 11 19:38:41 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 2279000 (ssh) in control group while starting unit. Ignoring.
          Dec 11 19:38:41 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
          Dec 11 19:38:41 1001507-513 systemd[1]: Mounting xxxxxx...
          Dec 11 19:38:41 1001507-513 mount[121215]: read: Connection reset by peer
          Dec 11 19:38:42 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Mount process exited, code=exited, status=1/FAILURE
          Dec 11 19:38:42 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Failed with result 'exit-code'.
          Dec 11 19:38:42 1001507-513 systemd[1]: Failed to mount xxxxxx.
          
nebulon (Staff) #6

This very much looks like the mounts are not actually active. Can you check with mount to see whether they are listed? Otherwise my guess is that the files you are seeing are actually on the root disk, in the same folder where the mountpoint would be.

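One way to check this is with findmnt, which only shows filesystems that are really mounted. A small sketch (the mountpoint path is a placeholder taken from the redacted output above):

    # show only active sshfs (FUSE) mounts
    findmnt -t fuse.sshfs

    # or check one specific mountpoint; no output means it is not mounted
    findmnt /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx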

mononym #7

@nebulon

Can you check with mount

How are the volumes identified? I can't find them with mount | grep "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx".

the files you are seeing are actually on the root disk, in the same folder where the mountpoint would be

It's odd, because I don't upload the files through the app; I transfer them directly from my computer to the volume via FTP.

nebulon (Staff) #8

If you can't find it in the mount output, it is not actually mounted. Using the FTP connection to the volume via Cloudron would end up using the same internal filesystem functionality as the file manager app.

It very much sounds like the volume is thus not mounting the remote storage but is using that mountpoint folder like a local folder.

Either way, first you have to find out how to mount this. Maybe try to mount it via normal Linux tooling outside of Cloudron, to ensure that what you provide to Cloudron itself is correct. Unfortunately the log output and the mount error do not reveal much.

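A manual test outside Cloudron could look like the sketch below. The host, port, key path and remote folder are illustrative placeholders modelled on the redacted values in the systemctl output above, not actual settings:

    # create a scratch mountpoint and try the sshfs mount by hand
    mkdir -p /tmp/sshfs-test
    sshfs -o IdentityFile=/path/to/private-key,port=23 \
        account@address.of.external.storage:/folder /tmp/sshfs-test

    # verify it shows up as mounted, then unmount again
    findmnt /tmp/sshfs-test
    fusermount -u /tmp/sshfs-test

If the manual mount fails with the same "Connection reset by peer" error, the problem is on the SSH/sshfs side rather than in Cloudron.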

mononym #9

@nebulon said in Volumes: red dot but mounted:

Using the FTP connection to the volume via Cloudron would end up using the same internal filesystem functionality as the file manager app.

Sorry, I wasn't clear. I'm using the FTP connection from the storage provider to transfer files, not going via Cloudron. So it's curious that these files could end up in the mountpoint folder.

The problem only occurs with volumes mounted via sshfs. Is using cifs a better option?

nebulon (Staff) #10

Alright, something is definitely off, it seems; I'm a bit out of ideas. Have you managed to get the mountpoint working without Cloudron, using sshfs directly? sshfs is usually quite a good option, and most likely there is just some small detail off somewhere. If you get it to work using Linux tools directly, we can most likely check the difference, as Cloudron really just generates a config file for systemd to mount it via sshfs. There is not much that is specific to Cloudron.

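For reference, a systemd mount unit for an sshfs volume generally looks something like this. This is only an illustrative sketch with placeholder values, not Cloudron's actual generated file:

    # /etc/systemd/system/mnt-volumes-<volume-id>.mount (illustrative sketch)
    [Unit]
    Description=volume-name

    [Mount]
    # systemd requires the unit file name to match the Where= path
    What=account@address.of.external.storage:/folder
    Where=/mnt/volumes/<volume-id>
    Type=fuse.sshfs
    Options=allow_other,IdentityFile=/path/to/private-key,port=23

    [Install]
    WantedBy=multi-user.target

Comparing such a unit against a working manual sshfs command is one way to spot a configuration difference.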

mononym #11

@nebulon said in Volumes: red dot but mounted:

If you get it to work using Linux tools directly, we can most likely check the difference, as Cloudron really just generates a config file for systemd to mount it via sshfs.

I managed to connect with my private key and the sftp command.

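For reference, such a connection test might look like this sketch (key path, port and host are placeholders):

    # connect interactively over SFTP with the same key and port as the volume
    sftp -i /path/to/private-key -P 23 account@address.of.external.storage

Note that a successful sftp login only shows that the credentials and the SFTP subsystem work; the sshfs mount itself can still fail for other reasons.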

mononym #12

@nebulon said in Volumes: red dot but mounted:

It very much sounds like the volume is thus not mounting the remote storage but is using that mountpoint folder like a local folder.

I checked the file manager of the volume and there were two "abandoned" folders, so I was confident that the volume was not mounted. I deleted these two folders. When I then edited the sshfs mount and pasted the identical SSH private key, the mount worked!

Funnily enough, the second sshfs volume had no folders or files, but it now mounts as well.

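If stray files in the mountpoint are suspected again, a quick check before remounting could be (the path is a placeholder for the volume's mountpoint):

    # empty output means the mountpoint directory is clean; anything listed
    # here was written to the root disk while the volume was unmounted
    ls -A /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
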
nebulon (Staff) #13

Ah, so the reason in the end was that the folder wasn't empty and thus the mount command failed?

nebulon marked this topic as solved

mononym #14

@nebulon

It's not so clear. The volume disconnected again. The "Remount Volume" button did not help, but "Edit Volume" and saving (with unchanged settings) worked.
