Help with Wasabi mounting
-
@privsec So, the df output says
/dev/sda3 157G 25G 126G 17% /
. Doesn't this mean it's working as expected? I think the Cloudron graphs are probably wrong because we haven't really accounted for this FUSE use case (since FUSE mounts don't appear in the df output). What we do is: get the df output, then subtract away the du output size of each app. I think in your case the app is in a FUSE mount, so the code blindly subtracts it from the disk figure and shows wrong numbers.
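To illustrate with made-up numbers (not your actual byte counts), the accounting looks roughly like this:

```shell
# Hypothetical figures matching the df output above:
df_used_gb=25   # df reports 25G used on /
app_du_gb=7     # du of the app's data directory

# If the app data lives on the root disk, subtracting per-app du from
# df's figure correctly isolates "everything else":
echo $((df_used_gb - app_du_gb))   # -> 18

# But if the app data sits on a FUSE mount, those 7G were never part of
# df's 25G in the first place, so the subtraction undercounts the real
# root-disk usage.
```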
-
@girish If that is the case, then how come I see data in the mounted path but not in the Wasabi bucket?
In the data directory I see no files, but via the File Manager I see these files.
But in Wasabi I see nothing. Something isn't right and I don't know what it is.
-
@privsec I remember the Backblaze B2 UI does not always show the latest items in the object store. Could it be similar with Wasabi? If you create a file in the filesystem, do you see it in Wasabi immediately? Another idea is to install rclone on another machine and list the remote Wasabi bucket. What objects/files do you see?
-
@girish Nope, I added 7 gigs of files and nothing is in Wasabi.
Do you mean set up a second instance of rclone on another machine, set up the remote there and see if rclone syncs the files with the 2nd machine?
Is there a better/different way to do file mounting?
I only set up rclone based on @robi's request/recommendation.
-
@privsec You can try s3fs; Wasabi even has an article about it - https://wasabi-support.zendesk.com/hc/en-us/articles/115001744651-How-do-I-use-S3FS-with-Wasabi- . Also, for a start, I would test the mount without Cloudron involved at all, just to make sure it's not Cloudron related.
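For example, a quick hand-check of any mount, independent of Cloudron (the helper name is made up, and the rclone remote/bucket below are just this thread's examples):

```shell
#!/bin/sh
# Write a marker file into DIR, then check whether the given listing
# command reports it on the remote side.
# Usage: verify_mount DIR LIST_CMD [ARGS...]
verify_mount() {
  dir=$1; shift
  marker="mount-test-$$.txt"
  touch "$dir/$marker"
  if "$@" | grep -q "$marker"; then
    echo "OK: $marker visible in remote listing"
  else
    echo "FAIL: $marker not visible remotely"
  fi
  rm -f "$dir/$marker"
}

# e.g.:
#   verify_mount /mnt/volumes/nextcloud rclone ls wasabi:nextcloud.myprivsec
```

If the FAIL branch fires, the write never left the local disk and the mount itself is the problem.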
-
If we can't validate the rclone mount, don't continue with apps & Cloudron.
Same for s3fs if you're going to try that again.
The reason you see files locally but not remotely is that the mount is broken or never succeeded, so everything written ends up in a plain local directory.
The remote never receives data through the mount.
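One quick way to check whether the path is really a live FUSE mount is to look at /proc/mounts (rclone and s3fs register fstypes like fuse.rclone and fuse.s3fs); the mountpoint below is the one from this thread:

```shell
#!/bin/sh
# Succeed if the given path is currently a FUSE mountpoint per /proc/mounts.
is_fuse_mount() {
  awk -v p="$1" '$2 == p && $3 ~ /^fuse/ {found=1} END {exit !found}' /proc/mounts
}

if is_fuse_mount /mnt/volumes/nextcloud; then
  echo "mounted: writes go to the remote"
else
  echo "NOT mounted: writes land on the local disk"
fi
```

If this prints "NOT mounted" after a reboot, that would explain files piling up locally while the bucket stays empty.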
-
Other than running
rclone lsd remote:bucket
and
rclone ls wasabi:nextcloud.myprivsec
I do not know how to test rclone's connection. I just created an empty text file called "hello.txt" via touch hello.txt in the mounted folder on my server, and it did not get sent to Wasabi.
So that means the connection is not working, right?
Could this be due to unmounting after a reboot? I believe one occurred after setting the mount up. How do I get the mount to persist through reboots?
I am going to try out the s3fs once more right now and will report back on that.
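(Side note on the reboot question: a mount started by hand does not survive a reboot; a common approach is a systemd unit that remounts at boot. A sketch only -- the unit name is made up, and the remote name, bucket, and mountpoint are this thread's examples, so adjust to your setup:)

```ini
# /etc/systemd/system/rclone-wasabi.service
[Unit]
Description=rclone FUSE mount of the Wasabi bucket
After=network-online.target
Wants=network-online.target

[Service]
# Type=notify works with recent rclone; fall back to Type=simple otherwise
Type=notify
ExecStart=/usr/bin/rclone mount wasabi:nextcloud.myprivsec /mnt/volumes/nextcloud --allow-other
ExecStop=/bin/fusermount -u /mnt/volumes/nextcloud
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now rclone-wasabi.service` makes it start now and on every boot.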
-
Ok, so I went through installing and running s3fs, and now I am getting this:
root@vxxxx:~# s3fs xxxx.xxxx /mnt/volumes/nextcloud -o passwd_file=/etc/passwd-s3fs -o url=https://s3.eu-central-1.wasabisys.com -o use_path_request_style -o dbglevel=info -f -o curldbg
[CRT] s3fs.cpp:set_s3fs_log_level(257): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_mountpoint_attribute(4193): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
[CRT] s3fs.cpp:s3fs_init(3378): init v1.82(commit:unknown) with GnuTLS(gcrypt)
[INF] s3fs.cpp:s3fs_check_service(3754): check services.
[INF] curl.cpp:CheckBucket(2914): check a bucket.
[INF] curl.cpp:prepare_url(4205): URL is https://s3.eu-central-1.wasabisys.com/myprivsec.nextcloud/
[INF] curl.cpp:prepare_url(4237): URL changed is https://s3.eu-central-1.wasabisys.com/myprivsec.nextcloud/
[INF] curl.cpp:insertV4Headers(2267): computing signature [GET] [/] [] []
[INF] curl.cpp:url_to_host(100): url is https://s3.eu-central-1.wasabisys.com
*   Trying 130.117.252.11...
* TCP_NODELAY set
* Connected to s3.eu-central-1.wasabisys.com (130.117.252.11) port 443 (#0)
* found 138 certificates in /etc/ssl/certs/ca-certificates.crt
* found 414 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.s3.eu-central-1.wasabisys.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,OU=EssentialSSL Wildcard,CN=*.s3.eu-central-1.wasabisys.com
* start date: Thu, 24 Jan 2019 00:00:00 GMT
* expire date: Sat, 23 Jan 2021 23:59:59 GMT
* issuer: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET /myprivsec.nextcloud/ HTTP/1.1
> host: s3.eu-central-1.wasabisys.com
> User-Agent: s3fs/1.82 (commit hash unknown; GnuTLS(gcrypt))
> Accept: */*
> Authorization: AWS4-HMAC-SHA256 Credential=xxxxxx/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=xxxxx
> x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
> x-amz-date: 20201206T014829Z
< HTTP/1.1 200 OK
< Content-Type: application/xml
< Date: Sun, 06 Dec 2020 01:48:29 GMT
< Server: WasabiS3/6.2.3223-2020-10-14-51cd02c (head02)
< x-amz-bucket-region: eu-central-1
< x-amz-id-2: xxxxx
< x-amz-request-id: xxxxx
< Transfer-Encoding: chunked
* Connection #0 to host s3.eu-central-1.wasabisys.com left intact
[INF] curl.cpp:RequestPerform(1940): HTTP response code 200
However, files are not updating. If I upload directly to Wasabi, they do not show on the CLI, and vice versa.
-
Ok, I had never tried moving files while that command above was running.
Now that I have tried moving content over, the debug output scrolls like a packet sniffer as the data transfers.
So does that mean I always have to make sure that command is running in an SSH session?
I feel like that's way too manual and too much upkeep, especially if I have to keep an SSH session open at all times.
So what am I missing?
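(Note: the mount process does not actually need a live SSH session -- both tools can detach from the terminal. A sketch, using this thread's bucket and paths as placeholders:)

```shell
# rclone: --daemon forks the mount into the background
rclone mount wasabi:nextcloud.myprivsec /mnt/volumes/nextcloud --daemon

# s3fs backgrounds itself by default; the -f flag in the debug run above
# is exactly what kept it in the foreground. Without -f (and the debug
# options) the shell can be closed:
s3fs myprivsec.nextcloud /mnt/volumes/nextcloud \
  -o passwd_file=/etc/passwd-s3fs \
  -o url=https://s3.eu-central-1.wasabisys.com \
  -o use_path_request_style
```

(To survive reboots as well, a systemd unit or /etc/fstab entry is still needed on top of this.)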
-
Ok, well I just don't get it.
It's almost as if Wasabi and Cloudron are only syncing some information and not all. And honestly, it's irritating the crap out of me to the point where I might just cancel the subscription I bought banking on getting Nextcloud and Wasabi to work.
This should not be this difficult or problem-prone.
The base directory has no data in it, which is what I told it to do.
In this photo, it is evident that there is data inside the /mnt/volumes/nextcloud folder
However, in my Wasabi bucket, there is nowhere near as much data.
Why is Cloudron only syncing a piece of the full app folder?
-
In addition, I thought maybe it was a folder/directory issue, so I created a one-level-deeper folder called wasabi on my server (full path
/mnt/volumes/nextcloud/wasabi
) and then I ran the command
s3fs xxxx /mnt/volumes/nextcloud/wasabi -o passwd_file=/etc/passwd-s3fs -o url=https://s3.eu-central-1.wasabisys.com
and then I went to the Nextcloud dashboard and moved the directory. Then I went to Wasabi to see if anything got moved to it, and there is nothing.
-
And probably my final note:
I can't access the console of Nextcloud either. It keeps looping: first this one, then this one, and then it goes back to the first.
Keep in mind, this is a brand new, fresh install, not even logged into yet, and my cache is cleared out as well.