Volumes: red dot but mounted
-
Hello.
In the Volumes interface, my sshfs volumes are indicated with a red dot: "Could not determine mount failure reason". Remounting does not turn the dot green either. But I can flawlessly access the File Browser and rely on the data in the respective apps.
I don't know if this appeared after the latest update. Current Platform Version v8.1.0 (Ubuntu 20.04.4 LTS).
-
-
● mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount - volume-name
     Loaded: loaded (/etc/systemd/system/mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount; enabled; vendor preset>
     Active: failed (Result: exit-code) since Fri 2024-11-22 18:31:09 UTC; 5 days ago
      Where: /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       What: account@address.of.external.storage:/folder
      Tasks: 13 (limit: 9425)
     Memory: 2.8M
     CGroup: /system.slice/mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
             ├─ 892 sshfs account@address.of.external.storage:/folder /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxx>
             └─2279000 ssh -x -a -oClearAllForwardings=yes -oport=23 -oIdentityFile=/home/yellowtent/platformdata/s>

Warning: journal has been rotated since unit was started, output may be incomplete.
-
systemctl restart mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
journalctl --system
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 892 (sshfs) in control group while starting unit. Ignoring.
Dec 11 19:44:09 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 2279000 (ssh) in control group while starting unit. Ignoring.
Dec 11 19:44:09 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:44:09 1001507-513 systemd[1]: Mounting xxxxxx...
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 mount[122713]: read: Connection reset by peer
Dec 11 19:44:09 1001507-513 systemd[121068]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Succeeded.
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Mount process exited, code=exited, status=1/FAILURE
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Failed with result 'exit-code'.
Dec 11 19:44:09 1001507-513 systemd[1]: Failed to mount xxxxxx.
journalctl -u mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
Dec 11 19:38:41 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 892 (sshfs) in control group while starting unit. Ignoring.
Dec 11 19:38:41 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:38:41 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 2279000 (ssh) in control group while starting unit. Ignoring.
Dec 11 19:38:41 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:38:41 1001507-513 systemd[1]: Mounting xxxxxx...
Dec 11 19:38:41 1001507-513 mount[121215]: read: Connection reset by peer
Dec 11 19:38:42 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Mount process exited, code=exited, status=1/FAILURE
Dec 11 19:38:42 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Failed with result 'exit-code'.
Dec 11 19:38:42 1001507-513 systemd[1]: Failed to mount xxxxxx.
-
-
Can you check with mount?
How are the volumes identified? I can't find them with
mount | grep "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
The files you are seeing are probably on the actual root disk, in the same folder where the mountpoint would be.
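You can verify this with standard util-linux tools (using the real volume id in place of the placeholder). mountpoint reports whether the path is an actual mount, and findmnt prints nothing at all if it is just a plain directory on the root disk:
mountpoint /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
findmnt /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
For a working sshfs volume, the findmnt SOURCE column would show account@address.of.external.storage:/folder.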
It's odd, because I don't use the app to upload the files; I upload them directly from my computer via FTP to the volume.
-
If you can't find it in the mount output, it is not actually mounted. Using the FTP connection to the volume via Cloudron would end up using the same internal filesystem functionality as the file manager app. It very much sounds like the volume is thus not mounting the remote storage but is using that mountpoint folder like a local folder.
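If that is the case, df should report the root filesystem type (e.g. ext4 on /) for that path rather than fuse.sshfs:
df -hT /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx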
Either way, first you have to find out how to mount this. Maybe try to mount it via normal Linux tooling outside of Cloudron to ensure that what you provide to Cloudron itself is correct. Unfortunately, the log output and the mount error do not reveal much.
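For example, something roughly like this should reproduce the failure outside of Cloudron (the test directory and key path are placeholders; the real identity file path is truncated in the status output above, so substitute the actual one):
mkdir -p /tmp/volume-test
sshfs account@address.of.external.storage:/folder /tmp/volume-test -o port=23 -o IdentityFile=/path/to/key -o sshfs_debug
fusermount -u /tmp/volume-test
If that fails with the same "Connection reset by peer", the problem is on the ssh/storage side rather than in Cloudron.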
-
@nebulon said in Volumes: red dot but mounted:
Using the ftp connection to the volume via Cloudron would end up using the same internal filesystem functionality as the filemanager app.
Sorry, I wasn't clear. I'm using the FTP connection from the storage provider to transfer files, not via Cloudron. So it's curious that these files could end up in the mountpoint folder.
The problem only occurs with volumes mounted via sshfs. Is using cifs a better option?
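(By cifs I mean mounting the provider's Samba share directly, roughly along these lines, with cifs-utils installed and the share name and credentials file being placeholders on my side:)
mount -t cifs //address.of.external.storage/share /mnt/test -o credentials=/path/to/credentials,uid=1000,gid=1000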
-
Alright, something is off for sure, it seems; I'm a bit out of ideas. Have you managed to get the mountpoint working without Cloudron, using sshfs directly? sshfs is usually quite a good option, and most likely there is just some small detail off somewhere. If you get it to work using Linux tools directly, we can most likely check the difference, as Cloudron really only generates a config file for systemd to mount it via sshfs. There is not much that is specific to Cloudron.
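For reference, the generated config is essentially just a plain systemd mount unit roughly like the one below (simplified, with a placeholder key path, and not the exact file Cloudron writes):
[Unit]
Description=volume-name

[Mount]
What=account@address.of.external.storage:/folder
Where=/mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Type=fuse.sshfs
Options=port=23,IdentityFile=/path/to/key,allow_other,_netdev

[Install]
WantedBy=remote-fs.target
Once a manual sshfs invocation works for you, comparing its options against a unit like this usually reveals the differing detail.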