Cloudron Forum

jadudm (@jadudm)

Posts

  • Garage, an open-source distributed storage service you can self-host to fulfill many needs

    @girish Glad to serve.

    If the goal is an addon, then...

    1. I think the backup piece should be straightforward? (I have some more info in the packaging thread about this.) Essentially, you want to make sure the SQLite DB is backed up, and then you back up the filesystem. I have questions about what happens if (say) a restore happens (e.g. what if the filesystem path changes?), but those things can be explored.
    2. There is an administrative API that (once you generate a secret/trusted admin API token) lets you do everything administratively via that API (bucket creation, etc.).

    As to a web interface, I would not recommend you create one for end-users. (I suspect this is not what you mean.) You have NextCloud, XBackBone, and other apps that can talk to an S3-compatible backend for file browsing. What people might need/want is a way to:

    • Create/remove buckets (which, on the backend, you'd use your secret admin key)
    • Create/remove keys and attach them to buckets
    • Create/remove administrative keys (for superusers who want to script things against the backend)
    • Bonus: the ability to designate a bucket as a static site, and then you do the DNS work on the backend to point either a subsubdomain at it (e.g. site.s3.example.com) or a whole new domain (e.g. someothersite.com -> site.s3.example.com)

    I suspect you could iterate towards this, if you wanted to. Release it with terminal-only management to start, and work towards an admin interface for common bucket creation/removal-type tasks.
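
    A rough sketch of what that admin surface looks like from the command line, assuming the trusted admin token has been generated and the Admin API is reachable at admin.s3.example.com. The /health endpoint is documented; the /v1/... paths below are my reading of the Garage admin API spec and should be checked against the version being packaged:

    # Placeholder values; the token comes from the admin section of garage.toml.
    ADMIN="https://admin.s3.example.com"
    ADMIN_TOKEN="replace-with-your-admin-token"

    # Unauthenticated health check.
    curl -fsS "$ADMIN/health"

    # Cluster status with the bearer token (path is an assumption; verify it).
    curl -fsS -H "Authorization: Bearer $ADMIN_TOKEN" "$ADMIN/v1/status"

    # Create a bucket with a global alias (path/body are assumptions; verify them).
    curl -fsS -H "Authorization: Bearer $ADMIN_TOKEN" \
         -H "Content-Type: application/json" \
         -X POST "$ADMIN/v1/bucket" \
         -d '{"globalAlias": "my-bucket"}'

    An admin UI for the tasks listed above would essentially be a thin front-end over calls like these.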

    There are things that Garage does not do (e.g. lifecycle rules), so this is not a "full" S3 clone (the way Minio aspired to be). In this regard, SeaweedFS might offer more features (and a built-in admin UI)... so, it might be worth doing a spike to explore it as well. At a glance, it is similar, but it also is intended to scale (e.g. it does Raft consensus and Reed-Solomon EC if you want it) in a way that Garage does not. This might also be a reason to not use Seaweed.

    I can poke it with a stick if it would be valuable.

    Update, a few hours later: I would recommend sticking with Garage.

    App Wishlist

  • Garage packaging status, next steps

    Hi @scooke ,

    To your question, I followed the instructions on the Garage website, and I added a near-copy of that documentation to the README in the package I've developed. This is all documented in the git repository for the package that I linked to. I have Garage running on my Cloudron instance, and was able to create buckets, add content, and even serve static web content from within those buckets. I'm sorry your prior experience with this did not work for you.

    I am now trying to do that in a way that could be considered for inclusion as a standard package, if the Cloudron team thinks it is worthwhile. If they don't, perhaps I'll do it for myself. 🤷

    App Packaging & Development

  • Garage packaging status, next steps

    Not much I can do with that statement. I have it packaged and working. I'm now in the weeds of how best to handle data backup and restore on Cloudron. Given that the Minio package must go away, this is at least a possibility that can be evaluated.

    App Packaging & Development

  • Garage packaging status, next steps

    Hm. Maybe I am wrong about the URL rewrites. I'll sleep on it.

    Because Garage puts the different APIs/functions on different ports... you might be right, @robi.

    I still think it would be nice to have aliases exposed in the manifest.

    App Packaging & Development

  • Garage packaging status, next steps

    I appreciate your input, @robi . I'm hoping to hear from other packagers and the product team. Your questions/comments help illuminate the kinds of things that need to be considered in this package, so I'll use this as an opportunity to capture them more fully, and I've woven some packaging questions in along the way for the Cloudron team and other packagers.

    Since you have Caddy, it can also proxy to the admin port via /admin instead, no?
    Same for /api to the API port & KV store.

    I don't think this is a good idea. The Administrative API may (now, or in the future) have an /admin path that maps to some behavior. To override/rewrite pathways on an API is to fundamentally change the present/future behavior of the API. Given that Cloudron is an infrastructure platform, this means that I would be packaging a "broken" version of Garage (because, by rewriting pathways, I'd be changing the behavior of Garage). It might not be broken now, but I would be setting up a future where it becomes broken. So, I think rewriting pathways on any of the ports (and, in doing so, altering the behavior of Garage) is a bad idea.

    Adding the K/V store is trivial. I just haven't done it.

    I stand by my assertion that the CloudronManifest needs to be expanded.

    • I cannot use httpPorts to insert a wildcard into the DNS, because it claims that *.web.domain.tld is not a proper domain, and
    • I cannot insert aliases via manifest (but I can via cloudron install), which do allow me to insert A records into DNS

    So, either httpPorts needs to allow me to insert wildcard domains, or I need an aliases entry in the manifest (preferably mapped to an array). There may be another way, but I think that altering application behavior---especially for an S3-compatible API and administrative server---is the wrong approach. I also think including instructions that tell users to add aliases is a bad approach, but... at least there is precedent (e.g. packages that have default user/pass combos like admin/changeme123).

    SQLite is pretty robust running on Billions of devices rather invisibly, so I wouldn't worry too much. Like you said a snapshot of it could be useful, but the use case may yet need to be discovered.
    Cloudron backup takes care of everything in /app/data so there will be a nested redundancy if used for backup.

    Perhaps this is a bit direct, but I'm going to state it this way: you are a bit casual with my data. And, for that matter, everyone's data who relies on Cloudron or (say) the package I am working on. My premise is that if someone is using this package, they should be able to trust that their data will not be corrupted or lost by the normal day-to-day operation of Cloudron, including a package update or upgrade. That happens through careful thought and engineering.

    Two links that I think are educational in this regard:

    • SQLite documentation: https://sqlite.org/backup.html
    • A good explainer: https://oldmoe.blog/2024/04/30/backup-strategies-for-sqlite-in-production/

    Because SQLite writes temporary files while in operation, you cannot simply copy it. I assume that @girish and @nebulon took these kinds of things into account when they introduced the sqlite addon under localstorage. But, I don't know. I'm confident they, or someone, will help answer my uncertainty.

    Therefore, my question is: if I have indicated that the metadata database Garage relies on is specified in localstorage, can I be confident that the standard Cloudron backup mechanism will properly back up that SQLite file, even while it is in use? For the Garage package, I have included it in my localstorage property, but I don't know what happens when I do that, because the packaging/manifest docs are not very specific about what including it means.
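
    For what it's worth, if the localstorage handling turns out not to cover this, a belt-and-suspenders fallback is an application-level copy via SQLite's own backup command, taken at startup or on a timer. A minimal sketch (the DB path is the one this package uses; the destination name is arbitrary):

    # Use SQLite's online backup (safe while the DB is open), not a plain cp.
    DB=/app/data/meta/db.sqlite
    sqlite3 "$DB" ".backup '/app/data/meta/db.sqlite.bak'"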

    I also know I need to handle database migrations. When Garage goes from v2.1.0 to v2.2.0 (or, more likely, v2 to v3), it might make changes to how it structures its metadata.

    • https://garagehq.deuxfleurs.fr/documentation/operations/upgrading/
    • https://garagehq.deuxfleurs.fr/documentation/operations/durability-repairs/
    • https://garagehq.deuxfleurs.fr/documentation/operations/recovering/

    These things are my responsibility as a packager. Because Cloudron does not appear to provide any "upgrade" hooks, I believe I need to write my startup scripts to always assume that the current boot could be one where I am upgrading---either minor or major. And, because that SQLite file, if damaged/lost, means losing all of the data stored by Garage, it is important to get the start/restart sequence right.

    Coupled with my uncertainty (currently) about how SQLite is handled by Cloudron, I don't even know if I can roll back to a Cloudron snapshot safely.

    Cloudron says this:

    Sqlite files that are actively in use cannot be backed up using a simple cp. Cloudron will take consistent, portable backups of Sqlite files specified in this option.

    So: I am assuming that Cloudron does the right thing w/ SQLite files w.r.t. backups.

    Garage suggests that for minor upgrades, a garage repair before upgrading is enough, and for major upgrades, more is necessary. However, the package doesn't "know" when it is minor vs. major. (I do, but the package doesn't... unless I build that logic into the package startup.) So, I suspect I need to pretend, on every boot, that it might be a major upgrade. (For packagers: is this the right strategy/assumption?)

    This is what Garage recommends for major upgrades:

    1. Disable API access (for instance in your reverse proxy, or by commenting the corresponding section in your Garage configuration file and restarting Garage)
    2. Check that your cluster is idle
    3. Make sure the health of your cluster is good (see garage repair)
    4. Stop the whole cluster
    5. Back up the metadata folder of all your nodes, so that you will be able to restore it if the upgrade fails (data blocks being immutable, they should not be impacted)
    6. Install the new binary, update the configuration
    7. Start the whole cluster
    8. If needed, run the corresponding migration from garage migrate
    9. Make sure the health of your cluster is good
    10. Enable API access (reverse step 1)
    11. Monitor your cluster while load comes back, check that all your applications are happy with this new version

    Now, some of this comes "for free" when a package is being upgraded, because I will be doing this at startup (in my start.bash), which is before the garage service is running. Therefore, I can take as "given" the things that involve being in a shut-down state.

    Of those eleven steps, the ones that assume a stopped cluster are covered by the container lifecycle. The one annotation worth adding: step 5, backing up the metadata folder of all your nodes, probably means garage meta snapshot --all.

    Every time the container starts, I think I need to do a garage repair. I think I also need to follow the steps above (e.g. snapshot, migrate, etc.). This way, if I rebuild the container and go from v2.1.0 to v3.0.0, I am guaranteed that I have 1. repaired the database recently, 2. taken a snapshot that is robust/safe/not in active operation, and 3. migrated to the most recent table schema. This should ensure that, when I start garage serve, the application is working against the right table schemas.
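
    A minimal sketch of the version bookkeeping that would let start.bash tell a plain restart from a minor or major upgrade. The version-file path and the parsing of garage --version are assumptions, and the actual snapshot/repair/migrate invocations are left as placeholders because some of them talk to a running node:

    #!/bin/bash
    set -euo pipefail

    # Remember which Garage version last ran against this data directory.
    VERSION_FILE=/app/data/last-garage-version
    NEW_VERSION="$(garage --version | head -n1 | awk '{print $2}')"
    OLD_VERSION="$(cat "$VERSION_FILE" 2>/dev/null || echo "$NEW_VERSION")"

    if [ "$OLD_VERSION" != "$NEW_VERSION" ]; then
        echo "=> Garage upgrade detected: $OLD_VERSION -> $NEW_VERSION"
        if [ "${OLD_VERSION%%.*}" != "${NEW_VERSION%%.*}" ]; then
            echo "=> Major upgrade: snapshot metadata, then run garage migrate once the node is up"
        else
            echo "=> Minor upgrade: a repair pass should be enough"
        fi
    fi
    echo "$NEW_VERSION" > "$VERSION_FILE"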

    What concerns me is the configuration. I am likely going to need to pay attention to updates/upgrades, and determine if (say) a v2.1.0 configuration would break a v3.0.0 deployment. If so, then I'm going to (possibly) package both the v2.1.0 and v3.0.0 binary into the image, and "fall back" to the v2 binary when a v2 configuration on a user's installation is detected. That way, I can prompt users as part of the upgrade to make sure to go in, update their configs, and reboot. (Or, something.)

    This is not yet a problem, but it is directly tied to the SQLite database, which is the Garage filesystem. Loss or corruption of that DB is loss of data.

    Does Garage have any S3 Gateway features?

    I don't think so. If you need one, you can package versitygw, which is dedicated to that purpose.

    A question for packagers: how do we test packages? Does anyone have strategies for automating testing of package development/updates/upgrades?

    App Packaging & Development

  • Garage packaging status, next steps

    summary

    Garage is "an S3 object store so reliable you can run it outside data centers."

    Given that the open source version of Min.io recently went into maintenance mode, it feels appropriate to have another S3 option on Cloudron.

    I think I have a more-than-proof-of-concept, but not-ready-for-production package for Garage. I'm starting this thread here so there is a place to discuss what would be necessary to bring it into readiness for a "real" Cloudron package.

    related posts

    • https://forum.cloudron.io/post/116583
    • https://forum.cloudron.io/post/116585

    building the package

    1. Check out the repository (https://git.jadud.com/jadudm/cloudron-garage)
    2. export DOMAIN=s3.example.com
    3. make install

    (If the package has value, I'm of course happy to see the git repository move.)

    The build relies on the value of the environment variable DOMAIN; it will be easiest if you set that variable in the shell before running make. Within the Makefile, the cloudron tool is run; it is assumed you have this tool installed, as well as the ability to push/pull from a registry.

    I cannot provide support for people who have not set up their development environment for Cloudron packaging.

    alias domains

    The installation will set three alias domains for this site. If you installed at s3.example.com, then it will set:

    • admin.s3.example.com
    • api.s3.example.com
    • *.web.s3.example.com

    NOTE: These sub-sub domains are likely not covered by a certificate. It may be that the installation would rather create aliases like api-s3, admin-s3, and so forth, all off a root domain.

    inside the package

    The package launches two services:

    • caddy
    • garage

    Garage is a single Rust binary, but serves on ports 3900, 3902, and 3903. (Verdict is still out on 3901.) These appear to be "secure by default" ports, in that nothing comes back on any of them without valid credentials (API keys). That said:

    https://admin.s3.example.com/health
    

    should be a valid health check. Because this is the only valid health check, the caddyfile sets up a respond directive for api.DOMAIN, admin.DOMAIN, and DOMAIN so that /health works on all three domains.

    It might be that caddy would be unnecessary if the manifest's healthCheckPath worked differently. The solution here was to proxy inside the container.
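
    For a sense of what that internal proxying looks like, here is the shape of a Caddyfile a start script might generate. This is a sketch, not the packaged file: the listen port (8000 stands in for whatever httpPort the manifest declares), the hostname-to-port routing, and the auto_https handling are assumptions; the Garage ports are the ones listed above.

    cat > /app/data/Caddyfile <<EOF
    {
        auto_https off
    }

    http://${CLOUDRON_APP_DOMAIN}:8000, http://api.${CLOUDRON_APP_DOMAIN}:8000 {
        respond /health 200
        reverse_proxy localhost:3900
    }

    http://admin.${CLOUDRON_APP_DOMAIN}:8000 {
        respond /health 200
        reverse_proxy localhost:3903
    }

    http://*.web.${CLOUDRON_APP_DOMAIN}:8000 {
        reverse_proxy localhost:3902
    }
    EOF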

    what is that wildcard domain?

    Garage can expose buckets as static websites. In this way, it might be possible for users to:

    1. Upload static content to a bucket
    2. Add an alias to this package
    3. Add a redirect to the caddyfile
    4. Profit!

    (That's an old internet meme.)

    Using a wildcard alias works, given that Cloudron passes the alias value straight through as a DNS A record. Therefore, it is possible to proxy everything that comes in to *.web.s3.example.com, add a rule to the caddyfile, and... it should work? In testing, I was able to confirm that buckets would serve content; I did not try layering a new domain on top, but believe it would work.

    The caddyfile is in /app/data/, and therefore will be backed up. This should mean that changes there would persist on a backup/restore.

    what works, what doesn't?

    The package seems to come up, and I can follow the instructions in the README to

    1. Create layouts, buckets, and keys (using the garage command, in the terminal for the app)
    2. Access the buckets using mc (Minio Client) to ls, cp and so on
    3. Declare a bucket to be a website, and serve content from it
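
    Roughly, the commands behind those three steps, as I understand the Garage quick start and the mc docs. Names and sizes are placeholders, and the exact garage subcommands/flags vary a bit between versions, so check them against what the package ships:

    # 1. Single-node layout, a bucket, and an access key (in the app's terminal).
    garage status                                   # prints the node ID
    garage layout assign -z dc1 -c 10G <node-id>
    garage layout apply --version 1
    garage bucket create my-bucket
    garage key create my-app-key                    # prints the key ID and secret
    garage bucket allow --read --write my-bucket --key my-app-key

    # 2. Access the bucket with the Minio client.
    mc alias set garage https://s3.example.com <key-id> <secret>
    mc cp ./hello.txt garage/my-bucket/
    mc ls garage/my-bucket/

    # 3. Declare the bucket to be a website.
    garage bucket website --allow my-bucket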

    I have experimented briefly with the Administrative API, and it works. However, it likely does not work in this version of the package, because I just noticed that I fail to actually set the admin_token. So, that needs to be fixed. (Or, perhaps it is being set... but, it's random.)

    what needs to be done?

    1. Backup and restore. I have not yet tackled this. At /app/data/meta/db.sqlite is the metadata for all of the block storage in the S3 instance. This needs to be handled with care. Failure to care for this DB likely loses all of the data in the instance.
    2. Migrations. See "Backup and restore."
    3. Enable the key/value store. This is probably easy, as it is another service at port 3904. It will be a proxy value in the caddyfile and a possible alias.
    4. I don't know how/if aliases can be defined in the manifest. It would be nice.
    5. ... ?

    W.R.T. #4: I can't use httpPorts for the wildcard subsubdomain... but, I'd rather not have to tell users it's a step they have to follow. Should the manifest support adding alias values? If so, could the manifest support things like *.web.CLOUDRON_APP_DOMAIN? I don't want everything appearing on the root domain of the Cloudron instance by default; I want things to appear under the domain provided by the user.

    App Packaging & Development

  • Garage, an open-source distributed storage service you can self-host to fulfill many needs

    update

    I've poked the package with a stick, added more to the README, and have run into a few interesting things about this package.

    s3 functionality

    I was able to create a bucket and put stuff in it. This seems to be a core function of an S3-compatible object server.

    administrative server

    Garage has a notion of having an administrative API at one port. I can use httpPorts to bind this port, and I can use it. For example, if I have

    s3.example.com

    as the root domain, then

    admin.s3.example.com

    can be the home for the Admin API. And, I was able to create an administrative API token, and using it, access the admin API.

    static websites

    This one is tricky. In theory, if I configure part of the garage.toml correctly:

    [s3_web]
    bind_addr = "[::]:3902"
    # This wants to be set dynamically in the startup.
    # That way, it can grab a Cloudron variable.
    root_domain = ".web.s3.example.com"
    index = "index.html"
    

    I can serve static sites out of buckets. However, this implies domain name manipulation. And, possibly, a wildcard. As in, I'd like

    *.web.s3.example.com to resolve to s3.example.com, and for Garage to pick it up (internally) on port 3902.

    I have explored this manually (by manipulating DNS settings in Cloudflare), but even though I have configured a bucket to serve static content, I can't (yet) convince it to serve something up.

    While it might be that this functionality has to be sacrificed, I think it would be a nice way (if it was baked in) to manage 100% static sites. However, that would be new machinery: a way to map domains to buckets.
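
    As for the "set dynamically in the startup" comment in the config above: if garage.toml is generated fresh on each boot, the web section could simply be templated from the Cloudron-provided domain. A sketch, with the file path and generation approach as assumptions:

    # Emit the [s3_web] section while generating /app/data/garage.toml at startup,
    # so root_domain always follows the app's installation domain.
    cat >> /app/data/garage.toml <<EOF

    [s3_web]
    bind_addr = "[::]:3902"
    root_domain = ".web.${CLOUDRON_APP_DOMAIN}"
    index = "index.html"
    EOF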

    backups

    I'm not convinced the backups are good. Specifically

    <path>/meta/db.sqlite

    is the metadata database for the Garage instance. This is, as far as I can tell, all of the information about where all of the files are stored. Losing this database is tantamount to losing all of the data. I think. So, making sure it backs up correctly matters. And, it is clear that updates will need to do things like garage repair and garage migrate, in the event of migrations/changes to this metadata database.

    Ah:

    Since Garage v0.9.4, you can use the garage meta snapshot --all command to take a simultaneous snapshot of the metadata database files of all your nodes. This avoids the tedious process of having to take them down one by one before upgrading. Be careful that if automatic snapshotting is enabled, Garage only keeps the last two snapshots and deletes older ones, so you might want to disable automatic snapshotting in your upgraded configuration file until you have confirmed that the upgrade ran successfully. In addition to snapshotting the metadata databases of your nodes, you should back-up at least the cluster_layout file of one of your Garage instances (this file should be the same on all nodes and you can copy it safely while Garage is running).

    (Emphasis mine.)

    So, the backup process is something I'll need to investigate further. It might be that some manual/scripted management of this database file---and dumping it---will be needed to make the backup process robust.

    (Given that Cloudron does backups before upgrades, as long as the SQLite DB is snapshotted correctly on backup, I think it will be fine.) I suspect that a cron will need to be installed for this package that---daily?---runs the snapshot command, rotates DBs, and those are part of the backup. (I have a suspicion that Cloudron packages handle this kind of thing in the start.sh scripts?)
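
    A sketch of that daily snapshot idea, assuming it runs from a cron entry (or a loop in start.sh) inside the container; where the snapshots land on disk, and whether any extra rotation is needed, would need to be confirmed against the Garage docs:

    #!/bin/bash
    # Take a metadata snapshot so the nightly Cloudron backup always includes a
    # consistent copy of the Garage metadata alongside the live db.sqlite.
    set -euo pipefail

    garage meta snapshot --all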

    healthcheck URL

    The manifest assumes that the health check URL is on the main app. In this case, if I have

    s3.example.com

    and the Admin API is at

    admin.s3.example.com

    (defined in httpPorts), I want the health check URL to be

    https://admin.s3.example.com/health

    because that is where Garage put it. I don't think I can do that with the manifest as-designed.
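
    For completeness, what I want the platform to check versus what the internal proxy makes possible (domains as in the example above):

    # The health endpoint Garage itself exposes lives on the admin port/domain.
    curl -fsS https://admin.s3.example.com/health

    # With the caddy respond directive in place, the same path also answers on
    # the main and api domains, which is what lets healthCheckPath stay put.
    curl -fsS https://s3.example.com/health
    curl -fsS https://api.s3.example.com/health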

    summary

    I think the package is off to a good start. I have questions, but most of them are described above, and I'll probably figure things out. The health check and static site subdomains, though, might not be easily solved.

    App Wishlist

  • Garage, an open-source distributed storage service you can self-host to fulfill many needs

    I started work on a package; this was under a different thread, but it is probably more appropriate to mention here:

    https://forum.cloudron.io/post/116583

    App Wishlist

  • Docs - Alternative to Notion / Outline with OIDC, GDPR compliant, PDF Export (with template) etc...

    Yep. There are multiple alternatives. When/if they're appropriate as a drop-in for Minio and (for packagers) as an extension, there's more than one path. Looking only at things that seem to be "live"/viable:

    | Service | URL | License | Notes |
    | --- | --- | --- | --- |
    | Garage | https://garagehq.deuxfleurs.fr/ | GNU AGPL | EU-based, compatible with many common clients, might serve as a static site/fileserver as well |
    | SeaweedFS | https://github.com/seaweedfs/seaweedfs | Apache 2.0 | Can run as a single binary; can grow its storage area by adding to a list of paths (which would play well with Cloudron's volume mount model) |
    | VersityGW | https://github.com/versity/versitygw | Apache 2.0 | Essentially proxies S3 straight to the filesystem, allowing access to files either through the S3 API or directly through the underlying filesystem. (Sounds easy to backup.) |
    | RustFS | https://github.com/rustfs/rustfs | Apache 2.0 | Explicitly supports a single disk/single node deployment, but it looks like it wants direct access to disk mounts. |
    | Ozone | https://github.com/apache/ozone | Apache 2.0 | Apache Foundation object store project. Handles HDFS and intended to scale. Not really appropriate for Cloudron's use-case. |

    Garage and/or Seaweed are likely the most mature of this bunch. Versity might be the simplest.

    the start of a package

    After looking at the Garage repo, it was apparent that it should be very packageable. All the right things are broken out as environment variables.

    https://git.jadud.com/jadudm/cloudron-garage

    I was able to:

    • Push this to a private registry I'm hosting on my Cloudron
    • Use the cloudron build and cloudron update sequence to make changes and improve it on my Cloudron instance
    • Use the terminal to create a zone/location, assign it some space, and create a bucket
    • Create a key for that bucket
    • Use mc (minio client) to put a file in the bucket and list the contents of the bucket
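
    The build-and-iterate loop behind that, roughly (the registry and app location are placeholders; check the cloudron CLI flags against its current help output):

    # Build and push the image, then swap the already-installed app over to it.
    cloudron build --tag registry.example.com/jadudm/cloudron-garage:0.1.0
    cloudron update --app s3.example.com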

    There's a bunch more that would need to be done. A few thoughts, mostly random:

    • The SQLite metadata needs to be backed up correctly. It might be that I've already done everything necessary by using localstorage and pointing it at that metadata DB.
    • I used httpPorts to map almost all of the endpoints that are supposed to be public, but... I'm not sure I wired everything up correctly in the config. Something was right (since I could use the API), but I did not test (say) the admin API, and I did not expose the K/V database API. (Which... could be handy to expose.)
    • The docs say httpPort is optional/not required, but the command line tools disagree. The docs should be updated.
    • I didn't try and play with SSO, but I don't know if I have to? Or, there's a bunch to think about there. I think garage is kinda secure out-of-the-box (with no keys configured by default, etc.), but that doesn't mean I'm confident. As a result, perhaps SSO isn't necessary?
    • I did not experiment with exposing anything as a web page. The notion that I could push to a bucket and use that as a static site server (as opposed to creating surfer instances, say) is compelling. But, I'd have to figure out how to map the domains...

    If this is a direction things go, I'd be glad to be a sounding board/help out.

    Because this is really about Garage, not Notion/etc., I'll continue commenting here: https://forum.cloudron.io/post/116584

    App Wishlist

  • Is the OIDC Addon a kind of... "instant App Proxy" for packaging apps?

    Absolutely. That's an excellent and thorough answer, @james . Many, many thanks.

    Possible text:

    This add-on is intended for applications that already support OpenID Connect-based (OIDC) authentication. Apply this add-on to use Cloudron as an OpenID provider with an application that supports OIDC-based authentication.

    Possible improvement for proxyauth:

    The proxyauth add-on allows you to put an authentication wall in front of an application. This is useful in situations where the application has no authentication mechanisms of its own, or where there are features that you cannot easily turn off (for the general public), and you want to restrict access using Cloudron's built-in user and group management tools.

    I think the idea being that, in both cases, a bit more "why" might help.

    Either way, thank you. All of my questions are answered (including "oh, hey, there was an app packaging topic!").

    App Packaging & Development

  • Is the OIDC Addon a kind of... "instant App Proxy" for packaging apps?

    Description

    How does the OIDC addon work?

    (Is there a "packaging" tag? It seems no? So, I'm filing this question under "Support." Should be an easy one.)

    Steps to reproduce

    https://docs.cloudron.io/packaging/addons/#oidc

    I could experiment, but I'd rather ask.

    Does this add-on let me do the following:

    1. Package an app that is (at some level) insecure
    2. "Wrap" an OIDC/Cloudron login around it
    3. Choose where to go after authentication

    In other words, is this "App Proxy" for arbitrary apps? Can I make it point at my local Cloudron instance, and get "magic" OAuth (with a redirect... to myself?) for an app that I'd rather not modify/extend to have OIDC?

    The reason I ask is that I'd like to package something, but it 1) allows user creation in an unrestricted manner, and 2) I don't want to deal with that. I'd rather put it behind Cloudron's OIDC (as a first step) and, once authenticated, bounce users through to the app. This would still require people to create a second account, but I can live with that. At least I'd know that I can restrict access using Cloudron's groups feature, and therefore get a reasonably secure app with minimal effort.

    Logs

    I haven't tried anything yet, so there are no logs.

    However, another sentence or two in the Addons section of the docs for the OIDC add-on might be useful, so we know how it works/how to use it.

    Troubleshooting Already Performed

    None. I looked at the docs, and it is not obvious from the docs what this add-on does.

    System Details

    I don't know that it matters, at the moment.

    Generate Diagnostics Data

    Tricorder malfunction.

    Cloudron Version

    8.x.2, I think. I forget the 'x'. 3?

    Ubuntu Version

    24.04.

    Cloudron installation method

    A long time ago, on an SSD far, far away...

    Output of cloudron-support --troubleshoot

    N/A

    App Packaging & Development

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    @nebulon , per request:

    https://forum.cloudron.io/topic/14525/improving-user-experience-with-ssh-keys-for-sshfs-and-volume-mounts

    Many, many thanks. And, if I find anything useful, I'll update that thread. Or, this one, and cross-link.

    Support backups sshfs

  • Improving user experience with SSH keys for SSHFS and volume mounts

    feature statement

    As a user, I want copy-paste to "just work" when pasting SSH private keys into Cloudron.

    context

    When setting up SSHFS, either for backups or volume mounts, a private key is needed. These typically have the form

    -----BEGIN OPENSSH PRIVATE KEY-----
    MULTIPLE/ASDFLAKSJDFLKAJASDFLKJASDF
    LINES/ASDFASDFKLJASDLFJKSADFLKJASDF
    OF/ASDFLKJASDFLKJASDFLKJASDFLJKASDL
    BASE64/ASDFJKLASDFLKJASDLFJKASDFLKJ
    DATA/ANDPADDING=
    -----END OPENSSH PRIVATE KEY-----
    

    As a user, I might be copy-pasting this from a number of places.

    1. I might cat a private key on my terminal, and have to use a three-key sequence (CTRL-SHIFT-C) to copy
    2. I might cat a private key in a web terminal, and have to CTRL-INS to copy (because that is how the web terminal is configured)
    3. I might use Bitwarden/Vaultwarden, and have it generate a keypair for me. That key will then have a "copy icon" that I can click for both the public and private keys
    4. I might use a web gui in another product (e.g. TrueNAS Scale) to generate the keys, and copy-paste out of a web text area

    In each case, the way whitespace is handled may vary.

    Further, it appears (based on skimming things on the web) that SSH defines the protocol, but there are not good definitions for how SSH keys should be stored. That is, the bytestream representation for communicating them between client and server is specified, but it is a bit up-in-the-air as to how they should be stored at rest.

    On inspection, it looks like it is common for a MIME encoding to be used on the Base64 content. Base64 does not consider a space to be a valid character. Some encodings, like MIME, specify maximum line lengths, but the use of spaces/newlines/etc. as separators should be ignored.

    https://en.wikipedia.org/wiki/Base64

    (Apologies for not linking to authoritative sources/RFCs.)

    the problem

    Long story short: when I paste a private key into Cloudron, I am pasting a lot of text into a small text area. How whitespaces or linebreaks are or are not used once I hit "Save" or "Submit" is invisible to me as a user. However, it is clear that it has impact.

    1. When I copy-paste and carefully preserve line breaks, it appears to work.
    2. When I use Bitwarden, and copy-paste from an auto-generated keypair, it appears to fail.

    replicating the error

    1. Go to your Bitwarden install
    2. Generate and save an SSH keypair
    3. Copy the private key
    4. Create an SSHFS volume mount
    5. Paste in the private key
    6. On another system, add the public key to the authorized_keys file
    7. It should fail.

    It is also possible that there is some kind of subtle user error taking place; however, I'm uncertain where to look in my Cloudron instance to debug this under the covers.

    what i want as a user

    I want things to "just work."

    In this case, I would like Cloudron to either:

    1. Warn me my key is not well-formatted, or
    2. Make a best effort to format the key appropriately behind-the-scenes

    If I paste something like this (the Bitwarden example):

    -----BEGIN OPENSSH PRIVATE KEY----- MULTIPLE/ASDFLAKSJDFLKAJASDFLKJASDF LINES/ASDFASDFKLJASDLFJKSADFLKJASDF ... -----END OPENSSH PRIVATE KEY-----
    

    with whitespaces instead of newlines, I expect Cloudron to write it to disk replacing my spaces with newlines, so it becomes:

    -----BEGIN OPENSSH PRIVATE KEY-----
    MULTIPLE/ASDFLAKSJDFLKAJASDFLKJASDF
    LINES/ASDFASDFKLJASDLFJKSADFLKJASDF ... 
    -----END OPENSSH PRIVATE KEY-----
    

    if that is necessary to "make it just work." Or, I expect it to complain, and tell me the format is invalid. Either way, I don't want to be able to paste a key and then have SSH failures that are inscrutable. (SSHFS mount failed for unknown reason, or whatever the vague error case is.)
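
    To illustrate how mechanical the "best effort" option could be, here is a sketch of a normalize-then-validate step. Filenames are placeholders, and it assumes an unencrypted OpenSSH key whose newlines were flattened to spaces:

    #!/bin/bash
    set -euo pipefail

    # Rebuild a pasted OpenSSH private key: strip the BEGIN/END markers, remove
    # the stray spaces from the base64 body, and re-wrap it at 70 columns.
    normalize_key() {
        local flat body
        flat="$(tr '\n' ' ' < "$1")"
        body="$(printf '%s\n' "$flat" \
            | sed -e 's/.*-----BEGIN [A-Z ]* PRIVATE KEY-----//' \
                  -e 's/-----END [A-Z ]* PRIVATE KEY-----.*//' \
            | tr -d ' ')"
        printf '%s\n' '-----BEGIN OPENSSH PRIVATE KEY-----'
        printf '%s\n' "$body" | fold -w 70
        printf '%s\n' '-----END OPENSSH PRIVATE KEY-----'
    }

    normalize_key pasted_key.txt > id_fixed
    chmod 600 id_fixed

    # If the key now parses, this prints "<bits> <SHA256:...> <comment> (<type>)";
    # if not, that is the cue to show the user an error instead of failing
    # silently at mount time.
    ssh-keygen -l -f id_fixed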

    other solutions I'd think work for me as a user

    I'd also be happy to:

    1. Have Cloudron generate the keypair for me, and let me copy the key(s) (pub/priv) to my local machine. Or, you could put them on a page and say "copy these and don't lose them." Either way, if you control key generation, you guarantee that I can't mess them up. (Or, if I mess them up elsewhere, that's my problem, not yours).
    2. Upload a file for the key. It would be OK if I uploaded the keyfile. This way, I can inspect it on disk, and the upload process won't (shouldn't?) mangle the file en route.

    The spirit here is that I'm excited about anything that doesn't have invisible errors.

    fun find

    https://superuser.com/questions/1444319/how-to-check-ssh-key-version-locally

    You can do

    ssh-keygen -l -f <file>
    

    and if it is a valid pub or priv keyfile, it will spit out

    <bits> <SHA> <comment> (<type>)
    

    which may be a good check to add to the backend after writing the key. Then, you could either get a valid SHA, or you could say "Could not generate SHA of SSH key; see <docs> for more info."

    side note: types of key

    Some (probably poorly written) systems only accept RSA keys (vs ED25519, etc.). This probably has to do with OpenSSL version(s) that are installed.

    If there are any known limitations to Cloudron's use of pub/priv keypairs (e.g. "Cloudron can only use RSA keys up to 2048 bits"), then that should be communicated to the user up front. I think Cloudron is fine with any valid kind of SSH key, but that would be invisible to me at the moment.

    Feature Requests

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    @nebulon , will do. I realized I can probably also dig around on my instance and look at what the mounting scripts are doing to debug further. Many thanks.

    Support backups sshfs

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    And...

    Reading

    https://superuser.com/questions/1477472/openssh-public-key-file-format

    and digging in to some of the RFCs a bit deeper, it seems like this is a complex, largely unspecified space.

    It might be good if Cloudron:

    1. Was clear about what format it could ingest, and
    2. Considered accepting a file upload for the private key

    as opposed to dealing with copy-paste. But, either way... being clear about what was expected from us for the key (at least as far as Cloudron is concerned) would be good.

    Support backups sshfs

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    And, while I'm at it...

    This came up because I had set up:

    1. Backups
    2. An SSHFS mount for NextCloud
    3. A separate SSHFS mount for Navidrome

    All of these connections worked. I even went through multiple backup cycles.

    Then, this afternoon, the mounts all failed.

    I cannot determine what caused it. I was able to reset some keys, and get mounts to work. But, now, my mounts are failing again, and I suspect I'm going to find permissions/other issues. I cannot yet get to a root cause.

    1. I am very suspicious of Cloudron's SSHFS mount code. Given that it seems to make aggressive permission changes, I'm worried. That said,
    2. It could be something about TrueNAS Scale. That said, it is "just" a Debian. On the other hand, I've never worked with ZFS or TrueNAS. So... is there something going on, where permissions are shifting?

    What bothers me is that I can, from both my Cloudron host and my local machine, use the SSH keys in question without difficulty. So, I am not inclined to believe that TrueNAS is doing something odd, given that the standard SSH from a Linux command line can connect, but Cloudron fails to make mounts. Something is breaking, and I don't know if I have the right logs/tools to debug what is going on in Box.
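
    One way to take box out of the equation when this happens is to reproduce the mount by hand on the Cloudron host, using the same options box writes into its systemd mount unit (visible in the logs in the original report below). Hostnames and paths here are from my setup:

    # Run as root on the Cloudron host.
    mkdir -p /mnt/sshfs-test
    sshfs cbackup@nas.lan:/mnt/poolone/cbackup /mnt/sshfs-test \
        -o allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect

    ls -la /mnt/sshfs-test      # does a bare sshfs mount work outside of box?
    fusermount -u /mnt/sshfs-test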

    Happy to do what I can to help.

    Support backups sshfs

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    Another lesson learned. @nebulon , the SSHFS mounting code is kinda fragile, I think. This is still on 8.3.2.

    In setting up a volume mount, I tried pasting in an SSH private key.

    If I paste in

    -----BEGIN ... ----- asdfkljasdflkjasdf alsdkfjals kdfjalskdjf asdlfjkasdlfkjasldfkj -----END ...------
    

    then things do not work. However, if I carefully reformat my key:

    -----BEGIN ... -----
    asdfkljasdflkjasdf
    alsdkfjals
    kdfjalskdjf
    asdlfjkasdlfkjasldfkj
    -----END ...------
    

    and paste it in, then the key works. This matters because I stored my key in a custom field in Bitwarden, and hit the "copy" button in the Bitwarden browser gui. The key came out somewhat mangled.

    I would argue the whitespace was safe to split on, and could have been reformatted easily into a good key. However, I had to paste it into Cloudron exactly right, or else I got auth failures.

    Maybe that is on me, but it feels like, when setting up SSH mounts, splitting and reformatting on whitespace would be safe. Given that the whitespace issues are invisible to me (and Cloudron does not help me debug it... nor do the auth.log messages on the remote server), it might be nice if the GUI was a bit more forgiving, or able to give me a hint.

    Food for thought, anyway. I don't know if/how much of my issues have been this vs. other challenges. (I know the permissions issue is real, and repeatable. This also seems to be repeatable.)

    Good luck; the v9 firehose seems real...

    Support backups sshfs

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    This solved the problem.

    (Editing later: "this" means that mounting a path like $HOME/subdir solved the problem, because the permissions on $HOME remained 755, while the permissions on subdir were still changed to 777. This is good, because $HOME has to be 755, or SSH will fail. But...)

    I'm still concerned that the remote directory becomes

    drwxrwxrwx 3 cbackup cbackup    3 Nov  3 14:33 aloe
    

    which seems awfully permissive. In this instance, I don't have a security threat (or, if someone gets onto the NAS, this is the least of my problems). But once I'm SSH'd into a machine via SSHFS, I'd think that drwx------ would be fine. (Put another way: once Cloudron has the private key, it should not need to set permissions on the remote directory at all... unless this is somehow related to symlinking, or what rsync wants to do, or...)

    Either way, many thanks for the good ideas. I think I'm moving forward. We'll call this one closed.

    Support backups sshfs

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    Good thought. I had added a prefix, which didn't make a difference (because I was mounting $HOME), but that might make all the difference. I'll report back after the experiment.

    Support backups sshfs

  • Does mounting a backup location via SSHFS change the remote directory permissions?

    Description

    I have set up a TrueNAS Scale host. We'll call this nas.lan, and my Cloudron host cloudron.lan. They're both internal (10.x.y.z) addresses that my local DNS server has provided static DHCP entries for. I can ping them, etc.

    It seems that configuring a directory on nas.lan via the Cloudron Backups SSHFS option changes the directory permissions from 755 to 777, which breaks ssh.

    1. On my Cloudron host, I created an SSH keypair.
    2. I created a user, cbackup, on nas.lan.
    3. I provided the public key for the cbackup user to nas.lan (this is part of the GUI-driven user creation process in TrueNAS).
    4. I ssh into cloudron.lan, and I can then use the private key I created to ssh into nas.lan. This tells me the key works.
      1. I can also do this from another host in the network, if I move the key. So, I believe the key is good. And, multiple machines can hit nas.lan and log in as the cbackup user with an SSH key.
    5. I go to cloudron.lan, and think "this is excellent, I will now configure SSHFS for backups." It is important to note that I am excited about moving my backups to a ZFS mirrored pair of drives, served from nas.lan, and mounted from cloudron.lan via SSHFS.
    6. I go to the Backups portion of the admin, and choose "Configure" to set up my SSHFS-mounted backup location.
    7. I enter all of the information. It is correct.
    8. I get a failure for unknown reasons.

    Now, here's what's cool.

    1. When I first create the cbackup user on nas.lan, I can see that the home directory has permissions 755.
    2. When I ssh in with my key, I can see that the home directory has 755.
    3. If I create files, my home directory remains 755.
    4. If I create directories, my home directory remains 755.
    5. If I wait a bit, just to see what happens, for no reason, I can ssh in and my permissions are 755.
    6. If I restart nas.lan, to see if something magical happens on restart, nothing magic happens, and my cbackup user has a home directory with permissions 755.

    Now, if I go to the configuration for backups on cloudron.lan, and try and configure an SSHFS mount on the NAS, the mount fails. If I log into the NAS shell via the browser, su to root, and look at my cbackup user's home directory... it has permissions 777.

    Question: Does the SSHFS mount do anything to change the permissions of the home directory on the remote system? Why, after trying to configure an SSHFS backup mount would the home directory on the remote system change from 755 to 777?

    Steps to reproduce

    1. chmod 755 /mnt/poolone/cbackup (this is $HOME)
    2. Confirm that permissions on $HOME are 755
    3. SSH into the machine using the private key from ssh on the command line
    4. Confirm 755 permissions
    5. Create things, do things, log out, restart nas.lan, etc., observe a non-changing home directory with perms 755
    6. SSHFS setup from Cloudron
    7. Cannot SSH into the machine
    8. Sneak into the machine using the web terminal on nas.lan, and confirm that $HOME now has perms 777
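
    A simple way to catch the moment the mode flips, run on nas.lan while re-attempting the Cloudron SSHFS configuration:

    # Poll the home directory's permissions once a second; watch for 755 -> 777.
    while true; do
        stat -c '%a %U:%G %n' /mnt/poolone/cbackup
        sleep 1
    done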

    Logs

    If I confirm permissions 755 and SSH in, everything is fine. Below are the logs from an attempt to mount the SSHFS backup location.

    2025-11-02T20:15:26.944Z box:backups setStorage: validating new storage configuration
    2025-11-02T20:15:26.944Z box:backups setupManagedStorage: setting up mount at /mnt/backup-storage-validation with sshfs
    2025-11-02T20:15:26.946Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10
    2025-11-02T20:15:30.113Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
    2025-11-02T20:15:37.521Z box:shell Failed to mount
    
    2025-11-02T20:15:37.525Z box:shell mounts: /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10 errored BoxError: mounts exited with code 3 signal null
        at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19)
        at ChildProcess.emit (node:events:519:28)
        at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
      reason: 'Shell Error',
      details: {},
      code: 3,
      signal: null
    }
    2025-11-02T20:15:37.525Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation
    2025-11-02T20:15:40.090Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
    2025-11-02T20:15:42.535Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation errored BoxError: mountpoint exited with code null signal SIGTERM
        at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:72:23)
        at ChildProcess.emit (node:events:519:28)
        at maybeClose (node:internal/child_process:1105:16)
        at ChildProcess._handle.onexit (node:internal/child_process:305:5) {
      reason: 'Shell Error',
      details: {},
      stdout: <Buffer >,
      stdoutLineCount: 0,
      stderr: <Buffer >,
      stderrLineCount: 0,
      code: null,
      signal: 'SIGTERM'
    }
    2025-11-02T20:15:42.536Z box:shell mounts: systemd-escape -p --suffix=mount /mnt/backup-storage-validation
    2025-11-02T20:15:42.551Z box:shell mounts: journalctl -u mnt-backup\x2dstorage\x2dvalidation.mount\n -n 10 --no-pager -o json
    2025-11-02T20:15:42.570Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/rmmount.sh /mnt/backup-storage-validation
    2025-11-02T20:15:50.084Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
    
    

    Troubleshooting Already Performed

    See above.

    System Details

    Generate Diagnostics Data

    I'll send this if it seems warranted.

    Cloudron Version

    8.3.2

    Ubuntu Version

    No LSB modules are available.
    Distributor ID: Ubuntu
    Description:    Ubuntu 24.04.2 LTS
    Release:        24.04
    Codename:       noble
    

    Cloudron installation method

    A long time ago. Manual.

    Output of cloudron-support --troubleshoot

    I can clean up my ipv6 at some point. I nuked it further up the chain, too.

    Vendor: Dell Inc. Product: OptiPlex 7040
    Linux: 6.8.0-86-generic
    Ubuntu: noble 24.04
    Processor: Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz
    BIOS Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz  CPU @ 2.4GHz x 4
    RAM: 32729416KB
    Disk: /dev/nvme0n1p2  734G
    [OK]    node version is correct
    [FAIL]  Server has an IPv6 address but api.cloudron.io is unreachable via IPv6 (ping6 -q -c 1 api.cloudron.io)
    Instead of disabling IPv6 globally, you can disable it at an interface level.
            sysctl -w net.ipv6.conf.enp0s31f6.disable_ipv6=1
            sysctl -w net.ipv6.conf.tailscale0.disable_ipv6=1
    For the above configuration to persist across reboots, you have to add below to /etc/sysctl.conf
            net.ipv6.conf.enp0s31f6.disable_ipv6=1
            net.ipv6.conf.tailscale0.disable_ipv6=1
    
    Support backups sshfs