Volumes: red dot but mounted
-
Hello.
In the Volumes interface, my sshfs volumes are shown with a red dot: "Could not determine mount failure reason". Remounting does not turn the dot green either. Yet I can flawlessly access the data through the File Browser and use it in the respective apps.
I don't know whether this appeared after the latest update. Current Platform Version: v8.1.0 (Ubuntu 20.04.4 LTS)
-
@mononym Can you run this on your server?
systemctl status mnt-volumes-8aa128f595aa40f99f8aa6d4b7c1bed2.mount
The 8aa128f595aa40f99f8aa6d4b7c1bed2 above is the volume id. You can get the volume id from the URL of the volume's file manager.
-
● mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount - volume-name
     Loaded: loaded (/etc/systemd/system/mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount; enabled; vendor preset>
     Active: failed (Result: exit-code) since Fri 2024-11-22 18:31:09 UTC; 5 days ago
      Where: /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       What: account@address.of.external.storage:/folder
      Tasks: 13 (limit: 9425)
     Memory: 2.8M
     CGroup: /system.slice/mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
             ├─ 892 sshfs account@address.of.external.storage:/folder /mnt/volumes/xxxxxxxxxxxxxxx>
             └─2279000 ssh -x -a -oClearAllForwardings=yes -oport=23 -oIdentityFile=/home/yellowtent/platformdata/s>
Warning: journal has been rotated since unit was started, output may be incomplete.
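(Side note, not part of the original instructions: if the volume id is not easy to find, all Cloudron volume mount units can be listed at once via systemd's pattern matching, using the mnt-volumes- prefix seen in the status above.)
systemctl list-units --all 'mnt-volumes-*.mount'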
-
Can you also try to remount this via
systemctl restart mnt-volumes-xxxxx.mount
and then also check the logs from
journalctl --system
as well as
journalctl -u mnt-volumes-xxxxx.mount
?
-
systemctl restart mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
journalctl --system
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 892 (sshfs) in control group while starting unit. Ignoring.
Dec 11 19:44:09 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 2279000 (ssh) in control group while starting unit. Ignoring.
Dec 11 19:44:09 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:44:09 1001507-513 systemd[1]: Mounting xxxxxx...
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 mount[122713]: read: Connection reset by peer
Dec 11 19:44:09 1001507-513 systemd[121068]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Succeeded.
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 udisksd[655]: udisks_mount_get_mount_path: assertion 'mount->type == UDISKS_MOUNT_TYPE_FILESYSTEM' failed
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Mount process exited, code=exited, status=1/FAILURE
Dec 11 19:44:09 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Failed with result 'exit-code'.
Dec 11 19:44:09 1001507-513 systemd[1]: Failed to mount xxxxxx.
journalctl -u mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount
Dec 11 19:38:41 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 892 (sshfs) in control group while starting unit. Ignoring.
Dec 11 19:38:41 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:38:41 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Found left-over process 2279000 (ssh) in control group while starting unit. Ignoring.
Dec 11 19:38:41 1001507-513 systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Dec 11 19:38:41 1001507-513 systemd[1]: Mounting xxxxxx...
Dec 11 19:38:41 1001507-513 mount[121215]: read: Connection reset by peer
Dec 11 19:38:42 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Mount process exited, code=exited, status=1/FAILURE
Dec 11 19:38:42 1001507-513 systemd[1]: mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount: Failed with result 'exit-code'.
Dec 11 19:38:42 1001507-513 systemd[1]: Failed to mount xxxxxx.
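(Aside: to keep that unit's log readable, the output can be limited to the most recent entries; the unit name here is the same placeholder used throughout this thread.)
journalctl -u mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount -n 50 --no-pager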
-
This very much looks like the mounts are not actually active. Can you check with
mount
to see them listed? Otherwise I would guess that the files you are seeing are actually on the root disk, in the same folder as the mountpoint would be.
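(For illustration, with the placeholder volume id used above: findmnt prints the filesystem mounted at a path and returns a non-zero exit code if nothing is mounted there, and sshfs mounts show up as fuse.sshfs in the mount output.)
findmnt /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
mount | grep fuse.sshfs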
-
Can you check with mount
How are the volumes identified? I can't find them with
mount | grep "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
the files you are seeing are actually on the root disk, in the same folder as the mountpoint would be
It's odd, because I don't upload the files through the app but directly from my computer via FTP to the volume.
-
If you can't find it in the
mount
output it is not actually mounted. Using the ftp connection to the volume via Cloudron would end up using the same internal filesystem functionality as the filemanager app. It very much sounds like the volume is thus not mounting the remote storage but uses that mountpoint folder like a local folder.
Either way, first you have to find out how to mount this. Maybe try to mount this via normal linux tooling outside of Cloudron to ensure what you provide to Cloudron itself is correct. Unfortunately the logs output and the mount error do not reveal much.
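(A minimal sketch of such a manual test, assuming a hypothetical key path and test directory; the host, remote folder and port 23 are taken from the status output earlier in this thread.)
mkdir -p /mnt/sshfs-test
sshfs -o IdentityFile=/path/to/private_key -p 23 account@address.of.external.storage:/folder /mnt/sshfs-test
ls -la /mnt/sshfs-test
umount /mnt/sshfs-test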
-
@nebulon said in Volumes: red dot but mounted:
Using the ftp connection to the volume via Cloudron would end up using the same internal filesystem functionality as the filemanager app.
Sorry, I wasn't clear. I'm using the FTP connection from the storage provider to transfer files, not going through Cloudron. So it's curious that these files could end up in the mountpoint folder.
The problem only occurs with volumes mounted via sshfs. Would using cifs be a better option?
-
Alright, something is off for sure, and I'm a bit out of ideas. Have you managed to get the mountpoint working without Cloudron, using sshfs directly? sshfs is usually quite a good option and most likely there is just some small detail off somewhere. If you get it to work using linux tools directly we can most likely check the difference, as Cloudron only really just generates a config file for systemd to mount it via sshfs. There is not much specific to Cloudron.
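(If it helps to compare against a manual mount, the unit file that was generated for the volume can be printed with systemd's own tooling; the id is again the placeholder used above.)
systemctl cat mnt-volumes-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.mount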
-
@nebulon said in Volumes: red dot but mounted:
If you get it to work using linux tools directly we can most likely check the difference, as Cloudron only really just generates a config file for systemd to mount it via sshfs.
I managed to connect with my private key and the
sftp
command.
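(For the record, a connectivity test along those lines might look like this; the key path is a placeholder, and the port is the one visible in the earlier status output.)
sftp -i /path/to/private_key -P 23 account@address.of.external.storage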
-
@nebulon said in Volumes: red dot but mounted:
It very much sounds like the volume is thus not mounting the remote storage but uses that mountpoint folder like a local folder.
I checked the file manager of the volume and there were two "abandoned" folders, so I was now confident that the volume was not mounted. I deleted these two folders. When I then edited the sshfs mount and pasted the identical ssh private key again, the mount worked!
Funnily enough, the second sshfs volume had no folders/files, but it also mounts now.
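(A quick way to verify this, using the placeholder id from above: if findmnt reports nothing, i.e. the volume is not mounted, the ls shows whatever is left over on the local root disk inside the mountpoint folder rather than on the remote storage.)
findmnt /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx || ls -A /mnt/volumes/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx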
-
nebulon has marked this topic as solved
-
Ah, so the reason in the end was that the folder wasn't empty and thus the mount command failed?