Solved Help with Wasabi mounting
-
Good afternoon,
What I want to accomplish is as follows:
I need to set up external storage through Wasabi to act as the main HDD for the server. Then I want to mount additional buckets for each individual app to use specifically. The backups would go to an additional bucket within Wasabi.
I currently have 4 buckets, but will eventually have one bucket for each app, one bucket for backups of the Cloudron config, and one bucket for a backup of everything (I am still looking into how to accomplish this piece). But for now, let's assume I need 3 of the 4 Wasabi buckets mounted into Cloudron for app access (ultimately to keep as little data on the VPS itself as possible).
I have a user with a group already made in Wasabi. In my app (Nextcloud) I can change how much of a data quota each specific user gets. All user data will be stored in a single Wasabi bucket; this drive will be broken up into boxes via the user quotas and Nextcloud's account functionality.
I also have Firefly III and Rainloop as well, which will each have their own separate Wasabi bucket (if possible; if they have to share a bucket then so be it). The problem is that Cloudron says you can mount a drive, but I do not see how. The Wasabi docs are not helpful and neither are Cloudron's.
The docs I have found are for backups (https://docs.cloudron.io/backups/) and for storage (https://docs.cloudron.io/storage/). The storage one is the one I was hoping would help, but alas, it doesn't. It says it's possible, then vaguely provides info, and I am back to being lost.
I just found this guide (https://dotlayer.com/how-to-mount-s3-wasabi-digital-ocean-storage-bucket-on-centos-and-ubuntu-using-s3fs/) but several of its steps are not working for me.
Can anyone hold my hand to achieve this?
-
@robi so this doesn’t really answer my question.
I want my wasabi bucket to contain all of my nextcloud data. So the app directory and all is on wasabi.
I don’t want an external drive connected for remote storage.
If I have 100 users and they are all allowed 50 GB of space, then I would need a minimum of 5 TB of space. The 180 GB SSD in my VPS won't cut it.
Since Nextcloud natively stores everything in the app directory, I need to move that to a block storage solution.
Under Resources/Storage for Nextcloud is where I'd assume I would add block storage to replace my current data directory. But I don't see how to do this.
@privsec said in Help with Wasabi mounting:
@robi so this doesn’t really answer my question.
I want my wasabi bucket to contain all of my nextcloud data. So the app directory and all is on wasabi.
I don’t want an external drive connected for remote storage.
If I have 100 users and they are all allowed 50 GB of space, then I would need a minimum of 5 TB of space. The 180 GB SSD in my VPS won't cut it.
Since Nextcloud natively stores everything in the app directory, I need to move that to a block storage solution.
@privsec That's not the place to add it (NC).
The place to look into adding it is in the container for NC.
From the Cloudron UI, the NC App Config > Resources > Storage section is where you would move the app storage to.
To make this happen you need a filesystem-based mount point.
This is where rclone comes in: it can mount remote filesystems and object stores into the local filesystem. See it now?
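To make that concrete, here is a minimal sketch of what an rclone-based Wasabi mount could look like. The remote name "wasabi", the bucket name, the region endpoint, and the mount path are all placeholder assumptions, not values from this thread.

```shell
# Sketch only: remote name, bucket, endpoint, and mount path are placeholders.
# Create the remote non-interactively (equivalent to answering `rclone config`):
rclone config create wasabi s3 \
    provider=Wasabi \
    access_key_id=YOUR_ACCESS_KEY \
    secret_access_key=YOUR_SECRET_KEY \
    endpoint=s3.eu-central-1.wasabisys.com

# Mount the bucket into the local filesystem:
mkdir -p /mnt/volumes/nextcloud
rclone mount wasabi:my-bucket /mnt/volumes/nextcloud \
    --vfs-cache-mode writes --daemon

# A successful FUSE mount shows up in the mount table:
mount | grep /mnt/volumes/nextcloud
```

The `--vfs-cache-mode writes` flag makes the mount behave more like a normal filesystem for applications that rewrite files in place.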
-
@robi
That is my problem, I believe. My system, for whatever reason, won't allow any mount points.
I have followed these guides
https://wasabi-support.zendesk.com/hc/en-us/articles/115001744651-How-do-I-use-S3FS-with-Wasabi-
https://www.interserver.net/tips/kb/mount-s3-bucket-centos-ubuntu-using-s3fs/
And neither of them is working. I'll have to check out rclone, as nothing else is working at the moment.
@robi
So I have installed rclone and went through the config step. However, I am getting this:
This is the same, if not a similar, error message I was getting without rclone. What resolves this?
-
@privsec Copy/paste issue? Double-check by connecting via Mountain Duck or a similar desktop S3FS/object-storage tool to validate the credentials.
-
@robi I manually type it in. In case the key is bad or wrong, I just generated a new key, and I will try with Mountain Duck to ensure it's working.
-
@privsec Avoid typing it in manually, as that is very error-prone. Sometimes the keys contain 'Il' and you can't tell which is the L and which is the i.
-
@robi I disabled SSH, and am using my VPS provider's "ssh" console.
Tbh, I don't know how to re-enable the root user.
I ran this https://github.com/akcryptoguy/vps-harden and ever since, I can't get SSH to work. So I am forced to do it all by hand.
@privsec You likely need to edit /etc/ssh/sshd_config.
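As a hedged sketch of what that edit usually looks like (the exact directives depend on what the hardening script actually changed, so treat these as assumptions):

```shell
# Assumption: the hardening script set PermitRootLogin to "no" or
# "prohibit-password". Flip it back; this works from the provider's web
# console even when SSH itself is down.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config

# If password logins were disabled too, this re-enables them (optional):
# sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config

# Validate the config before restarting, so a typo can't lock you out:
sshd -t && systemctl restart sshd
```

Re-enabling root password login is a security trade-off; re-harden once the troubleshooting is done.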
-
@robi Thank you.
I have access again for easier troubleshooting. I ended up reinstalling the OS and everything, and once it all came back up I was able to add mounts.
Though it's odd: I changed my data directory for Nextcloud to /mnt/volumes/nextcloud, and not all content is there.
If I click the file manager, it takes me to what I'd assume is the cloud drive from Wasabi, but there are files there that are not in Wasabi.
In addition, when I try to view the logs for any application OR for Cloudron itself, I am getting
-
@privsec As an example
That is on my server via CLI
And this
That is within the nextcloud bucket for my app.
-
I am uploading 7.23 GB worth of files. On my VPS the SSD had 131.59 GB free out of 156 GB.
When the uploads finish, I'll report back.
-
So, I confirmed it's saving data to the SSD of the VPS rather than sending it to the cloud.
The uploads are not complete yet, but it's evident that they are being stored locally rather than in the cloud.
Below is my Nextcloud app storage setting. "Nextcloud" is the actual rclone-mounted folder.
Am I doing something wrong?
-
@privsec Did I understand correctly that /mnt/volumes/nextcloud is an rclone Wasabi mount, and you have also moved the app data directory of Nextcloud to this location? If so, what you have done seems correct. I don't know about the correctness of rclone itself, nor how rclone mounting really works. Does the mount show up in the df -h output?
You say there is a discrepancy between what's in the filesystem and what's in Wasabi, so this looks like an rclone issue?
-
can you paste the output of:
df -h
mount
How did you set up rclone (OS package, latest, beta)?
Is rclone still running? What version is it?
-
@girish It might be an rclone issue
@robi
I installed rclone by following the steps at https://rclone.org/install/:
1. curl https://rclone.org/install.sh | sudo bash
2. rclone config
3. rclone mount remote:path /path/to/mountpoint --vfs-cache-mode full --daemon
df -h
root@v2202012134940134654:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           798M   11M  788M   2% /run
/dev/sda3       157G   25G  126G  17% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2       976M  146M  764M  17% /boot
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/5e80dc05d8a594a5a0c51aa5602ca124b428e0edba09181759e184497e45a875/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/903a9ff390459afb42b38eaa6b07c0f30f4f45fa413e154e35f766d3bcec7893/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/6af811d56e81cce0af9c14fcb1f0eadb6466a4a4eaa6bfd750eed95d5207c688/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/fbdaaedecbb9dbeeb0212e3dadb0cb32697ad523af7cbaf713a25bd567536edb/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/0c70992b7b8b2c704fc0a080263eb8eb09ae8518fc7054518ce58b75b4ad7e0d/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/f0d4d24114f30306a670e596a11d42b5fde57d30cee607746760dba613331055/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/ed4fd11e18c018a69bde9307d18a20f7380621126baf20e6de96949cb2a0cb19/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/c03dea8bf938e7044b8f9eb746dfbdcce955e69534c2c6a82b86c22c5aaaca5c/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/2eb8958e2598de5291a0895da411df947b97a6f873022b5eab415b6b2f8678b6/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/57f8ec7cc792bcc8e7f8c6a3a24c89cc2ef74bfcf794423a422d75aec0825144/merged
tmpfs           798M     0  798M   0% /run/user/0
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/acd1b16c14ff94c587d697ff2a57c1c52df183d034ac7a7067f6a6b251a88977/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/c65f5aa0b3af3f75055a43b3a8d529b0a3fd51d23806538e4c5b8d2173ac4489/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/299711c86c910f26718ad9670963b511d86726d61cb67716d1f8566dbd83addd/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/043dea9ba03986e56e9fbd7f3760faf2dd4538db77839be209c049e74648a0be/merged
overlay         157G   25G  126G  17% /var/lib/docker/overlay2/0d3609e2d1f83f95df827fce2598eb713edf161e1a8bf0a01045621e1061ad95/merged
-
@privsec From a casual reading of rclone, it seems that it's a FUSE mount. Isn't the mount point supposed to appear in the df -h output? Also, see https://linoxide.com/linux-command/list-mounted-drives-on-linux/ . I think it's not mounted properly (I also don't know if this mount persists across reboots; maybe you rebooted in the middle).
-
@privsec So, the df output says /dev/sda3 157G 25G 126G 17% /. Doesn't this mean it's working as expected? I think the Cloudron graphs are probably wrong because we haven't really accounted for this FUSE use case (since it doesn't appear in the df output). What we do is: get the df output, then subtract the du output size of each app. In your case, the app is in a FUSE mount, so the code will blindly subtract from the disk output and show wrong numbers.
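A rough sketch of the accounting just described, to show why a FUSE-mounted app directory skews the numbers. The per-app data path is illustrative, not Cloudron's actual layout:

```shell
# Sketch: total disk usage minus the sum of per-app directory sizes.
# /path/to/appsdata is a hypothetical per-app data root.
total_used_kb=$(df --output=used -k / | tail -n 1 | tr -d ' ')

apps_kb=0
for d in /path/to/appsdata/*/; do
    [ -d "$d" ] || continue
    sz=$(du -sk "$d" | cut -f1)   # if this dir is a FUSE mount, du counts
    apps_kb=$((apps_kb + sz))     # remote objects that never touched /
done

echo "non-app usage: $((total_used_kb - apps_kb)) KB"
```

When an app directory is really a remote mount, `du` counts remote objects while `df /` does not, so the subtraction produces misleading graphs.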
-
@girish If that is the case, then how come I see data in the mounted path but not in the Wasabi bucket?
In the data directory I see no files.
But via the file manager
I see these files.
But in Wasabi I see... Something isn't right, and I don't know what it is.
-
@privsec I remember the Backblaze B2 UI does not always show the latest items in the object store. Could it be similar with Wasabi? If you create a file in the filesystem, do you see it in Wasabi immediately? One other idea is to install rclone on another machine and list the remote Wasabi mount. What objects/files do you see?
-
@girish Nope, I added 7 gigs of files and nothing is in Wasabi.
Do you mean set up a second instance of rclone on another machine, set up the remote there, and see if rclone shows the files on the second machine?
Is there a better/different way to do file mounting?
I only tried rclone on @robi's recommendation.
-
@privsec You can try s3fs, because Wasabi even has an article about it - https://wasabi-support.zendesk.com/hc/en-us/articles/115001744651-How-do-I-use-S3FS-with-Wasabi- . Also, for a start, I would test the mount without Cloudron involved at all, just to make sure it's not Cloudron-related.
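A hedged sketch of that standalone test, following the shape of Wasabi's s3fs article; the bucket name, region endpoint, and credentials are placeholders:

```shell
# Credentials file, as in Wasabi's s3fs article (placeholder key pair):
echo 'YOUR_ACCESS_KEY:YOUR_SECRET_KEY' > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

# Mount into a throwaway directory that Cloudron knows nothing about:
mkdir -p /mnt/wasabi-test
s3fs my-bucket /mnt/wasabi-test \
    -o passwd_file=/etc/passwd-s3fs \
    -o url=https://s3.eu-central-1.wasabisys.com \
    -o use_path_request_style

# Round trip: write locally, then confirm the object shows up remotely
# (in the Wasabi console, or via `rclone ls` from another machine).
echo probe > /mnt/wasabi-test/probe.txt
```

If probe.txt never appears in the bucket, the mount is broken regardless of anything Cloudron does.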
-
If we can't validate the rclone mount, don't continue with apps & Cloudron.
Same for s3fs if you're going to try that again.
The reason you see files locally but not remotely is that something is wrong with the mount, or it didn't work, so all files written are only in a local directory.
The remote is never getting data via the mount.
-
@privsec The latter.
Verify the mount on your server OS first, and afterwards resume with Cloudron.
-
Other than running rclone lsd remote:bucket and rclone ls wasabi:nextcloud.myprivsec, I do not know how to test rclone's connection. I just created an empty text file called "hello.txt" via touch hello.txt in the mounted folder on my server, and it did not get sent to Wasabi.
So then that means the connection is not working, right?
Could this be due to unmounting after a reboot? I believe one occurred after setting the mount up. How do I get this to persist through reboots?
I am going to try out the s3fs once more right now and will report back on that.
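On the persistence question: one common way to make an rclone mount survive reboots is a systemd unit. This is a sketch only; the unit name, remote, and mount path below are assumptions based on the paths mentioned in this thread.

```shell
# Sketch: create a systemd unit that mounts the remote at boot.
cat > /etc/systemd/system/rclone-nextcloud.service <<'EOF'
[Unit]
Description=rclone mount of Wasabi bucket for Nextcloud data
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount wasabi:nextcloud.myprivsec /mnt/volumes/nextcloud --vfs-cache-mode full
ExecStop=/bin/fusermount -u /mnt/volumes/nextcloud
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now rclone-nextcloud.service
```

systemd also restarts the mount if the rclone process dies, which a one-off `rclone mount --daemon` in an SSH session does not.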
-
Ok, so I went through the s3fs install and ran it, and now I am getting this:
root@vxxxx:~# s3fs xxxx.xxxx /mnt/volumes/nextcloud -o passwd_file=/etc/passwd-s3fs -o url=https://s3.eu-central-1.wasabisys.com -o use_path_request_style -o dbglevel=info -f -o curldbg
[CRT] s3fs.cpp:set_s3fs_log_level(257): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_mountpoint_attribute(4193): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
[CRT] s3fs.cpp:s3fs_init(3378): init v1.82(commit:unknown) with GnuTLS(gcrypt)
[INF] s3fs.cpp:s3fs_check_service(3754): check services.
[INF] curl.cpp:CheckBucket(2914): check a bucket.
[INF] curl.cpp:prepare_url(4205): URL is https://s3.eu-central-1.wasabisys.com/myprivsec.nextcloud/
[INF] curl.cpp:prepare_url(4237): URL changed is https://s3.eu-central-1.wasabisys.com/myprivsec.nextcloud/
[INF] curl.cpp:insertV4Headers(2267): computing signature [GET] [/] [] []
[INF] curl.cpp:url_to_host(100): url is https://s3.eu-central-1.wasabisys.com
*   Trying 130.117.252.11...
* TCP_NODELAY set
* Connected to s3.eu-central-1.wasabisys.com (130.117.252.11) port 443 (#0)
* found 138 certificates in /etc/ssl/certs/ca-certificates.crt
* found 414 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_256_GCM_SHA384
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.s3.eu-central-1.wasabisys.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: OU=Domain Control Validated,OU=EssentialSSL Wildcard,CN=*.s3.eu-central-1.wasabisys.com
* start date: Thu, 24 Jan 2019 00:00:00 GMT
* expire date: Sat, 23 Jan 2021 23:59:59 GMT
* issuer: C=GB,ST=Greater Manchester,L=Salford,O=Sectigo Limited,CN=Sectigo RSA Domain Validation Secure Server CA
* compression: NULL
* ALPN, server accepted to use http/1.1
> GET /myprivsec.nextcloud/ HTTP/1.1
host: s3.eu-central-1.wasabisys.com
User-Agent: s3fs/1.82 (commit hash unknown; GnuTLS(gcrypt))
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=xxxxxx/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=xxxxx
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20201206T014829Z
< HTTP/1.1 200 OK
< Content-Type: application/xml
< Date: Sun, 06 Dec 2020 01:48:29 GMT
< Server: WasabiS3/6.2.3223-2020-10-14-51cd02c (head02)
< x-amz-bucket-region: eu-central-1
< x-amz-id-2: xxxxx
< x-amz-request-id: xxxxx
< Transfer-Encoding: chunked
* Connection #0 to host s3.eu-central-1.wasabisys.com left intact
[INF] curl.cpp:RequestPerform(1940): HTTP response code 200
However, files are not updating. If I upload directly to Wasabi, they do not show in the CLI, and vice versa.
-
Ok, I had never tried moving files with the command in that screenshot above running.
Now that I have tried moving content over, its output looks like a packet sniffer as data transfers over.
So does that mean I always have to keep that command running in an SSH session?
I feel like that's way too manual and too much upkeep, especially if I have to have an SSH session open at all times.
So what am I missing?
-
Ok, well I just don't get it.
It's almost as if Wasabi and Cloudron are only syncing some information and not all. And honestly, it's irritating the crap out of me, to the point where I might just cancel the subscription I bought banking on getting Nextcloud and Wasabi to work.
This should not be this difficult or problem-prone. App settings:
The base directory has no data in it, which is what I told it to do.
In this photo, it is evident that there is data inside the /mnt/volumes/nextcloud folder.
However, in my Wasabi bucket there is nowhere near as much info.
Why is Cloudron only syncing a piece of the full app folder?
-
In addition, I thought maybe it was a folder/directory issue, so I created a one-level-deeper folder called wasabi on my server (full path /mnt/volumes/nextcloud/wasabi) and then ran the command s3fs xxxx /mnt/volumes/nextcloud/wasabi -o passwd_file=/etc/passwd-s3fs -o url=https://s3.eu-central-1.wasabisys.com and then went to the Nextcloud dashboard and moved the directory. Then I went to Wasabi to see if anything got moved to it, and there is nothing.
-
And I can't do the remaining steps either, per
https://wasabi-support.zendesk.com/hc/en-us/articles/115001744651-How-do-I-use-S3FS-with-Wasabi-
When I try to follow those steps, I get this:
-
And probably my final note:
I can't access the console of Nextcloud either. It keeps looping. First this one:
And then this one:
And then it goes back to the first. Keep in mind, this is a brand-new fresh install, not even logged into yet, and my cache is cleared out as well.
-
@privsec Does the app say "Running" in the Cloudron dashboard? The Web Terminal won't work until the app is running.
-
@privsec I will probably give s3fs a try tomorrow and see if I can write a guide for it
(I have never tried s3fs or rclone mount either, so let's see if I am successful).
-
@girish
It is now saying:
@girish Is there another mounting solution?
It is absolutely pivotal that I get the app directories off my server's drive.
-
@privsec Is the goal here to have a large amount of external space (at the right price) attached to the server? If you are hosted on a VPS, is there an option to attach external storage? Many VPS providers like time4vps, Hetzner, etc. have quite cheap external disks that can be attached to a server. The reason I ask is that even if an s3fs or rclone mount does work, it's probably going to be quite slow - for example, I see posts like https://serverfault.com/questions/396100/s3fs-performance-improvements-or-alternative . We have no experience with how well these mounts work, or whether they work at all. Maybe files go missing, maybe it's unstable; we don't know.
-
There is also https://forum.rclone.org/ , maybe you can ask there about rclone mounting issues.
-
So I found some interesting posts:
- https://www.reddit.com/r/aws/comments/dplfoa/why_is_s3fs_such_a_bad_idea/
- https://www.reddit.com/r/aws/comments/a5jdth/what_are_peoples_thoughts_on_using_s3fs_in/
The point of both of them is that s3fs is not a real file system, and as such file permissions won't work. It's also not consistent (i.e. if you write and then read back, it may not be the same). This makes it pretty much unsuitable for use as an app data directory.
What might work is adding it as external storage inside Nextcloud - https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/external_storage_configuration_gui.html. I didn't suggest this earlier because I think you said you didn't want this in your initial post. Can you recheck? I think it will provide what you want: you can just add the Wasabi storage space inside Nextcloud and it will appear as a folder for your users (you can create one bucket per user or something).
@privsec So, I think it's best not to go down this s3fs route; it won't work reliably.
(If you were banking on s3fs somehow, for refunds etc., not a problem - just reach out to us on support@.)
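For what it's worth, the external storage the docs describe can also be scripted with Nextcloud's occ tool instead of the admin UI. This is a hedged sketch: the mount name, bucket, and credentials are placeholders, and the exact occ invocation inside a Cloudron container may differ from the plain `sudo -u www-data php occ` shown here.

```shell
# Enable the External Storage app, then register a Wasabi S3 mount.
sudo -u www-data php occ app:enable files_external

sudo -u www-data php occ files_external:create \
    /Wasabi amazons3 amazons3::accesskey \
    -c bucket=user-bucket \
    -c hostname=s3.eu-central-1.wasabisys.com \
    -c port=443 -c use_ssl=true -c use_path_style=true \
    -c key=YOUR_ACCESS_KEY -c secret=YOUR_SECRET_KEY

# Confirm the mount registered:
sudo -u www-data php occ files_external:list
```

Per-user buckets could then be restricted with the `files_external:applicable` command, so each mount is visible to a single user.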
-
Thank you for your detailed responses.
I'll have to play around with the external storage options for Nextcloud to see if that will work out or not.
It's an interesting workaround/hack, lol.
-
There may be another "advanced" way via Minio.
Minio can be used as a storage gateway.
See if you can connect Minio to Wasabi; if gateway mode works, then see if you can mount the local Minio instance in Linux. Even if via s3fs, it would be local.
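A hedged sketch of the gateway idea above. Minio's S3 gateway mode existed in 2020-era releases (it has since been removed from newer Minio builds), and the credentials, bucket, and endpoints here are placeholders:

```shell
# Run Minio as an S3 gateway in front of Wasabi (listens on :9000 by default):
export MINIO_ACCESS_KEY=WASABI_ACCESS_KEY
export MINIO_SECRET_KEY=WASABI_SECRET_KEY
minio gateway s3 https://s3.eu-central-1.wasabisys.com

# Then point s3fs at the *local* gateway endpoint instead of Wasabi directly:
s3fs my-bucket /mnt/volumes/nextcloud \
    -o passwd_file=/etc/passwd-s3fs \
    -o url=http://127.0.0.1:9000 \
    -o use_path_request_style
```

The win, if it works, is that the FUSE layer talks to a low-latency local endpoint while Minio handles the slow remote hop.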
-
@robi But wouldn't that be un-ideal, since S3 isn't a filesystem, and a filesystem is what I am wanting?
-
@privsec We tried this once and it was unbearably slow. Personally, I'd look for ways to reduce the Nextcloud storage needs and use a natively mounted drive from the host, and just offer an archive Wasabi S3 bucket where people can move things no longer needed for regular access, to save space on the main NC.
-
@marcusquinn Cyber Duck/Mountain Duck is very good as a platform for making Wasabi/S3 storage quite accessible to users if they need it.
-
@marcusquinn
I cannot get a mounted drive on my host to actually work. I have tried both rclone and s3fs; both gave me the same result of no errors but no syncing. I have decided (at least for now, till I find a better solution) to give each user on Nextcloud a Wasabi bucket as an external storage drive, and restrict access so only that user can reach their bucket.
I am now working on figuring out how to remove the built-in files for Nextcloud, so that only the external storage folder is listed, with the Nextcloud folders/files living on the Wasabi drive.
In regards to Cyber Duck/Mountain Duck: what's the difference between them and the external storage connection in Nextcloud? Is there a performance difference?
-
@privsec Sorry, I meant just mounted local host storage, not mounting external S3, which will have such high latency that it makes NC painful to use.
-
@privsec I realise host-local storage isn't cheap and Wasabi is, which is why we once tried doing what you are. Maybe you'll have a different experience, but we just found the juice wasn't worth the squeeze, especially with large numbers of small files.
It might work for something like video storage, where there are fewer files, but I just don't fancy the slow user experience and would always opt for speed over price, since time is so much more expensive if users are spending too long waiting on their file services to show results.
-
@privsec Just reading this; not sure if you've seen it? I'm not saying it can't be done, just that we gave up. Maybe this helps? I'm still reading: https://autoize.com/s3-compatible-storage-for-nextcloud/
-
From reading the above article, it sounds like Cloudron might need 2 versions of Nextcloud packaged, or at least an install-time option to set it up for S3 storage.
Probably needs one of the @appdev team to have a read and confirm or deny.
I agree it would be interesting and our past experience might have been what this article above solves.
Let's get more opinions on this, someone else might also have a need and interest and be willing to try themselves too.
-
@marcusquinn Ill have to give that article above a read.
Anything I can do to help make launching costs low is sought after, while keeping user experience high
-
@marcusquinn said in Help with Wasabi mounting:
From reading the above article, it sounds like Cloudron might need 2 versions of Nextcloud packaged, or at least an install-time option to set it up for S3 storage
Imho no second app is necessary; the drawback, though, is that the S3 backend can only be enabled before any data is added to the app.
I am, however, sceptical whether adding network latency to file access is such a smart idea, especially for a PHP application.
-
So, after reviewing this article: Cloudron does not respect the Nextcloud config in this manner.
When I include that piece of info in the config file, Nextcloud either errors out or does not use the S3 bucket as its primary storage.
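For context, this is a sketch of the objectstore fragment the article adds to Nextcloud's config.php. All values are placeholders; Nextcloud only honours this block on an instance that has no data yet, and as noted above, Cloudron's packaging does not respect it.

```shell
# Printed for reference only: the config.php fragment from the article's
# approach, with placeholder bucket and credentials.
cat <<'EOF'
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket' => 'nextcloud-primary',
        'hostname' => 's3.eu-central-1.wasabisys.com',
        'key' => 'YOUR_ACCESS_KEY',
        'secret' => 'YOUR_SECRET_KEY',
        'use_ssl' => true,
        'use_path_style' => true,
    ],
],
EOF
```

With this backend, Nextcloud stores objects under opaque `urn:oid:` keys, so the bucket is not browsable without the Nextcloud database.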
-
@privsec
I am giving you advanced options, as you already know what is ideal, i.e. get a VPS with a large amount of disk mounted locally.
-
TBH I can see the advantages of having a separate Nextcloud instance, with just the Files app enabled, as a web interface for S3: it would give users a slower cold-storage option alongside a faster standard NC setup for their daily hot-storage needs.
So I wouldn't dismiss this aim, just with speed cautions, and the caution in the article that the storage would then be unbrowsable directly without the NC database metadata interface, so it also carries mass-disruption risks if that database were lost.
-
@marcusquinn said in Help with Wasabi mounting:
@privsec Just reading this, not sure if you've read? I'm not saying i can't be done, just that we gave up. Maybe this helps? I'm still reading: https://autoize.com/s3-compatible-storage-for-nextcloud/
This is along the same lines as my advanced object store usage option above.
I didn't know this was possible from within NC, but it appears they engineered it in. Very smart & clever.
They are essentially caching everything recent in /tmp and background-syncing to the object store over time.
The user experience is good, as it's mostly local, and the capacity is greatly extended, as the object store is vast. I've designed this for other use cases at IBM, and wrote their Redbook on it, hence the advanced prior knowledge.
For this article to be applied in Cloudron, we'd need a new packaged NC App that is configured for this before the setup/install. Then one can point it at a local Minio instance or external S3 object store.