@nebulon , per request:
Many, many thanks. And, if I find anything useful, I'll update that thread. Or, this one, and cross-link.
As a user, I want copy-paste to "just work" when pasting SSH private keys into Cloudron.
When setting up SSHFS, either for backups or volume mounts, a private key is needed. These typically have the form:
-----BEGIN OPENSSH PRIVATE KEY-----
MULTIPLE/ASDFLAKSJDFLKAJASDFLKJASDF
LINES/ASDFASDFKLJASDLFJKSADFLKJASDF
OF/ASDFLKJASDFLKJASDFLKJASDFLJKASDL
BASE64/ASDFJKLASDFLKJASDLFJKASDFLKJ
DATA/ANDPADDING=
-----END OPENSSH PRIVATE KEY-----
As a user, I might be copy-pasting this from a number of places. For example, I might:

- cat a private key on my terminal, and have to use a three-key sequence (CTRL-SHIFT-C) to copy
- cat a private key in a web terminal, and have to use CTRL-INS to copy (because that is how the web terminal is configured)

In each case, the way whitespace is handled may vary.
Further, it appears (based on skimming things on the web) that SSH defines the protocol, but there are not good definitions for how SSH keys should be stored. That is, the bytestream representation for communicating them between client and server is specified, but it is a bit up-in-the-air as to how they should be stored at rest.
On inspection, it looks like it is common for a MIME-style encoding to be used for the Base64 content. Base64 does not consider a space to be a valid character. Some encodings, like MIME, specify maximum line lengths, and whitespace (spaces, newlines, etc.) used as separators is supposed to be ignored by decoders.
https://en.wikipedia.org/wiki/Base64
(Apologies for not linking to authoritative sources/RFCs.)
Long story short: when I paste a private key into Cloudron, I am pasting a lot of text into a small text area. How whitespace and linebreaks are handled once I hit "Save" or "Submit" is invisible to me as a user. However, it clearly has an impact.
It is also possible that there is some kind of subtle user error taking place (e.g. in the authorized_keys file on the remote host); however, I'm uncertain where to look in my Cloudron instance to debug this under the covers.
I want things to "just work."
In this case, I would like Cloudron to do one of two things.
If I paste something like this (the Bitwarden example):
-----BEGIN OPENSSH PRIVATE KEY----- MULTIPLE/ASDFLAKSJDFLKAJASDFLKJASDF LINES/ASDFASDFKLJASDLFJKSADFLKJASDF ... -----END OPENSSH PRIVATE KEY-----
with whitespaces instead of newlines, I expect Cloudron to write it to disk replacing my spaces with newlines, so it becomes:
-----BEGIN OPENSSH PRIVATE KEY-----
MULTIPLE/ASDFLAKSJDFLKAJASDFLKJASDF
LINES/ASDFASDFKLJASDLFJKSADFLKJASDF ...
-----END OPENSSH PRIVATE KEY-----
if that is necessary to "make it just work." Or, I expect it to complain, and tell me the format is invalid. Either way, I don't want to be able to paste a key and then have SSH failures that are inscrutable. (SSHFS mount failed for unknown reason, or whatever the vague error case is.)
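As a sketch of what I mean by "replacing my spaces with newlines": the base64 body can be isolated between the BEGIN/END markers, stripped of all whitespace, and re-wrapped. This is my own illustration, not Cloudron's code; the 70-column wrap matches what ssh-keygen emits, but any wrap width decodes the same.

```shell
# Hypothetical sketch: normalize a pasted OpenSSH private key whose
# newlines were mangled into spaces (e.g. by a password-manager copy button).
# Reads the pasted text on stdin, writes a re-wrapped key on stdout.
normalize_openssh_key() {
  awk '
    { buf = buf " " $0 }                     # slurp all input into one string
    END {
      # Isolate the base64 body between the PEM-style markers.
      sub(/.*-----BEGIN OPENSSH PRIVATE KEY-----/, "", buf)
      sub(/-----END OPENSSH PRIVATE KEY-----.*/, "", buf)
      gsub(/[ \t\r\n]/, "", buf)             # drop every whitespace character
      print "-----BEGIN OPENSSH PRIVATE KEY-----"
      for (i = 1; i <= length(buf); i += 70) # re-wrap at 70 columns
        print substr(buf, i, 70)
      print "-----END OPENSSH PRIVATE KEY-----"
    }'
}
```

A validator could also do the inverse: after normalizing, refuse the input if the markers are missing or the body is not valid base64.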
I'd also be happy to help test other approaches. The spirit here is that I'm excited about anything that doesn't allow invisible errors.
https://superuser.com/questions/1444319/how-to-check-ssh-key-version-locally
You can do
ssh-keygen -l -f <file>
and if it is a valid pub or priv keyfile, it will spit out
<bits> <SHA> <comment> (<type>)
which may be a good check to add to the backend after writing the key. Then, you could either get a valid SHA, or you could say "Could not generate SHA of SSH key; see <docs> for more info."
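A minimal sketch of that backend check (my own illustration; the function name and error string are made up, and ssh-keygen's exit status is what carries the verdict):

```shell
# Hypothetical check: after writing the pasted key to disk, ask ssh-keygen
# to fingerprint it. Success prints "<bits> <SHA> <comment> (<type>)";
# failure means the file is not a usable key.
validate_ssh_key() {
  keyfile="$1"
  if fingerprint=$(ssh-keygen -l -f "$keyfile" 2>/dev/null); then
    echo "key OK: $fingerprint"
  else
    echo "Could not generate SHA of SSH key; see the docs for more info." >&2
    return 1
  fi
}
```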
Some (probably poorly written) systems only accept RSA keys (vs ED25519, etc.). This probably has to do with OpenSSL version(s) that are installed.
If there are any known limitations to Cloudron's use of pub/priv keypairs (e.g. "Cloudron can only use RSA keys up to 2048 bits"), then that should be communicated to the user up front. I think Cloudron is fine with any valid kind of SSH key, but that would be invisible to me at the moment.
@nebulon , will do. I realized I can probably also dig around on my instance and look at what the mounting scripts are doing to debug further. Many thanks.
And...
Reading
https://superuser.com/questions/1477472/openssh-public-key-file-format
and digging in to some of the RFCs a bit deeper, it seems like this is a complex, largely unspecified space.
It might be good if Cloudron generated the keypair itself and simply showed me the public key to install on the remote host, as opposed to dealing with copy-paste. But, either way... being clear about what is expected of the key (at least as far as Cloudron is concerned) would be good.
And, while I'm at it...
This came up because I had set up several SSHFS connections (for both backups and volume mounts).
All of these connections worked. I even went through multiple backup cycles.
Then, this afternoon, the mounts all failed.
I cannot determine what caused it. I was able to reset some keys, and get mounts to work. But, now, my mounts are failing again, and I suspect I'm going to find permissions/other issues. I cannot yet get to a root cause.
What bothers me is that I can, from both my Cloudron host and my local machine, use the SSH keys in question without difficulty. So, I am not inclined to believe that TrueNAS is doing something odd, given that the standard SSH from a Linux command line can connect, but Cloudron fails to make mounts. Something is breaking, and I don't know if I have the right logs/tools to debug what is going on in Box.
Happy to do what I can to help.
Another lesson learned. @nebulon , the SSHFS mounting code is kinda fragile, I think. This is still on 8.3.2.
In setting up a volume mount, I tried pasting in an SSH private key.
If I paste in
-----BEGIN ... ----- asdfkljasdflkjasdf alsdkfjals kdfjalskdjf asdlfjkasdlfkjasldfkj -----END ...------
then things do not work. However, if I carefully reformat my key:
-----BEGIN ... -----
asdfkljasdflkjasdf
alsdkfjals
kdfjalskdjf
asdlfjkasdlfkjasldfkj
-----END ...------
and paste it in, then the key works. This matters because I stored my key in a custom field in Bitwarden, and hit the "copy" button in the Bitwarden browser gui. The key came out somewhat mangled.
I would argue the whitespace was safe to split on, and could have been reformatted easily into a good key. However, I had to paste it into Cloudron exactly right, or else I got auth failures.
Maybe that is on me, but it feels like when setting up SSH mounts, splitting and reformatting on whitespace would be safe. Given that the whitespace issues are invisible to me (and Cloudron does not help me debug them... nor do the auth.log messages on the remote server), it might be nice if the GUI were a bit more forgiving, or able to give me a hint.
Food for thought, anyway. I don't know if/how much of my issues have been this vs. other challenges. (I know the permissions issue is real, and repeatable. This also seems to be repeatable.)
Good luck; the v9 firehose seems real...
This solved the problem.
(Editing later: "this" meaning: mounting a path like $HOME/subdir solved the problem, because the permissions on $HOME remained 755, while the permissions on subdir were still changed to 777. This is good, because $HOME has to be 755, or SSH will fail. But...)
I'm still concerned that the remote directory becomes
drwxrwxrwx 3 cbackup cbackup 3 Nov 3 14:33 aloe
which seems awfully permissive. In this instance, I don't have a security threat (or, if someone gets onto the NAS, this is the least of my problems). But once I'm SSH'd into a machine via SSHFS, I'd think that drwx------ would be fine. (Put another way: once Cloudron has the private key, it should not need to set permissions on the remote directory at all... unless this is somehow related to symlinking, or what rsync wants to do, or...)
Either way, many thanks for the good ideas. I think I'm moving forward. We'll call this one closed.
Good thought. I had added a prefix, which didn't make a difference (because I was mounting $HOME), but that might make all the difference. I'll report back after the experiment.
I have set up a TrueNAS Scale host. We'll call this nas.lan, and my Cloudron host cloudron.lan. They're both internal (10.x.y.z) addresses that my local DNS server has provided static DHCP entries for. I can ping them, etc.
It seems that configuring a directory on nas.lan via the Cloudron Backups SSHFS option changes the directory permissions from 755 to 777, which breaks ssh.
- I created a user, cbackup, on nas.lan.
- I added an SSH public key for the cbackup user to nas.lan (this is part of the GUI-driven user creation process in TrueNAS).
- I can ssh into cloudron.lan, and I can then use the private key I created to ssh into nas.lan. This tells me the key works.
- I can connect to nas.lan and log in as the cbackup user with an SSH key.
- I log into cloudron.lan, and think "this is excellent, I will now configure SSHFS for backups."

It is important to note that I am excited about moving my backups to a ZFS mirrored pair of drives, served from nas.lan, and mounted from cloudron.lan via SSHFS.

Now, here's what's cool.
- As the cbackup user on nas.lan, I can see that the home directory has permissions 755.
- If I reboot nas.lan, to see if something magical happens on restart, nothing magic happens, and my cbackup user still has a home directory with permissions 755.

Now, if I go to the configuration for backups on cloudron.lan, and try to configure an SSHFS mount on the NAS, the mount fails. If I log into the NAS shell via the browser, su to root, and look at my cbackup user's home directory... it has permissions 777.
Question: Does the SSHFS mount do anything to change the permissions of the home directory on the remote system? Why, after trying to configure an SSHFS backup mount, would the home directory on the remote system change from 755 to 777?
I can repeatedly:

- chmod 755 /mnt/poolone/cbackup (this is $HOME)
- ssh to nas.lan on the command line, etc., and observe a non-changing home directory with perms 755
- attempt the SSHFS mount from Cloudron against nas.lan, and confirm that $HOME now has perms 777

If I confirm permissions 755 and SSH in, everything is fine. Below are the logs from an attempt to mount the SSHFS backup location.
2025-11-02T20:15:26.944Z box:backups setStorage: validating new storage configuration
2025-11-02T20:15:26.944Z box:backups setupManagedStorage: setting up mount at /mnt/backup-storage-validation with sshfs
2025-11-02T20:15:26.946Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10
2025-11-02T20:15:30.113Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
2025-11-02T20:15:37.521Z box:shell Failed to mount
2025-11-02T20:15:37.525Z box:shell mounts: /usr/bin/sudo -S /home/yellowtent/box/src/scripts/addmount.sh [Unit]\nDescription=backup-storage-validation\n\nRequires=network-online.target\nAfter=network-online.target\nBefore=docker.service\n\n\n[Mount]\nWhat=cbackup@22:/mnt/poolone/cbackup\nWhere=/mnt/backup-storage-validation\nOptions=allow_other,port=22,IdentityFile=/home/yellowtent/platformdata/sshfs/id_rsa_22,StrictHostKeyChecking=no,reconnect\nType=fuse.sshfs\n\n[Install]\nWantedBy=multi-user.target\n\n 10 errored BoxError: mounts exited with code 3 signal null
at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:137:19)
at ChildProcess.emit (node:events:519:28)
at ChildProcess._handle.onexit (node:internal/child_process:294:12) {
reason: 'Shell Error',
details: {},
code: 3,
signal: null
}
2025-11-02T20:15:37.525Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation
2025-11-02T20:15:40.090Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
2025-11-02T20:15:42.535Z box:shell mounts: mountpoint -q -- /mnt/backup-storage-validation errored BoxError: mountpoint exited with code null signal SIGTERM
at ChildProcess.<anonymous> (/home/yellowtent/box/src/shell.js:72:23)
at ChildProcess.emit (node:events:519:28)
at maybeClose (node:internal/child_process:1105:16)
at ChildProcess._handle.onexit (node:internal/child_process:305:5) {
reason: 'Shell Error',
details: {},
stdout: <Buffer >,
stdoutLineCount: 0,
stderr: <Buffer >,
stderrLineCount: 0,
code: null,
signal: 'SIGTERM'
}
2025-11-02T20:15:42.536Z box:shell mounts: systemd-escape -p --suffix=mount /mnt/backup-storage-validation
2025-11-02T20:15:42.551Z box:shell mounts: journalctl -u mnt-backup\x2dstorage\x2dvalidation.mount\n -n 10 --no-pager -o json
2025-11-02T20:15:42.570Z box:shell mounts /usr/bin/sudo -S /home/yellowtent/box/src/scripts/rmmount.sh /mnt/backup-storage-validation
2025-11-02T20:15:50.084Z box:apphealthmonitor app health: 19 running / 0 stopped / 0 unresponsive
See above.
I'll send this if it seems warranted.
8.3.2
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 24.04.2 LTS
Release: 24.04
Codename: noble
A long time ago. Manual.
cloudron-support --troubleshoot output is below. I can clean up my IPv6 at some point. I nuked it further up the chain, too.
Vendor: Dell Inc. Product: OptiPlex 7040
Linux: 6.8.0-86-generic
Ubuntu: noble 24.04
Processor: Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz
BIOS Intel(R) Core(TM) i5-6500T CPU @ 2.50GHz CPU @ 2.4GHz x 4
RAM: 32729416KB
Disk: /dev/nvme0n1p2 734G
[OK] node version is correct
[FAIL] Server has an IPv6 address but api.cloudron.io is unreachable via IPv6 (ping6 -q -c 1 api.cloudron.io)
Instead of disabling IPv6 globally, you can disable it at an interface level.
sysctl -w net.ipv6.conf.enp0s31f6.disable_ipv6=1
sysctl -w net.ipv6.conf.tailscale0.disable_ipv6=1
For the above configuration to persist across reboots, you have to add the below to /etc/sysctl.conf:
net.ipv6.conf.enp0s31f6.disable_ipv6=1
net.ipv6.conf.tailscale0.disable_ipv6=1
Many thanks, @james. The text was absolutely helpful. Also, that diff hint is gold. I may end up with a PR against the docs when I'm done, because that kind of hint is a nice trick for quickly figuring out all the places an app touches the disk.
Hi all,
I'm looking for some insight into packaging. Invariably, I get started packaging an app, and then I run into something about the Cloudron model that slows me down, and then weeks/months go by, because I can't dedicate lots of time to this kind of work.
I'm going to pick Passbolt as my most recent example of a "started and then got stuck" problem. I thought "It would be nice to have a 100% open source password vault on Cloudron." That, and it looks easier to use than Bitwarden for my use case. I dug in a bit, and it seems straightforward enough:
BLUF: All three approaches seem to be stuck on the non-writeable filesystem, and the inability for packages to map arbitrary paths inside the container to paths outside the container (e.g. --volumes).
If there's another way to solve this packaging puzzle, I would appreciate suggestions. Otherwise, I'd love having the ability to add constrained volume mappings to the manifest. E.g.
[
{ "path": "/var/log", "destination": "log" },
...
]
where the "destinations" would be under /app/mappings/... or similar.
I started by wondering if I could use their Docker image as a base image, and just build some layers on top. What appears to be a blocker is that their container is writeable; so, Passbolt expects to be able to write logs to /var/log and so on. Because Cloudron expects containers to be read-only, this approach rapidly fails.
I do not know of any way, in the Cloudron app model, to map arbitrary paths inside the container to paths outside the container. (Put another way: I cannot set up --volume mappings.) I cannot ln -s in a Dockerfile and have it persist. (If there is a way, I do not know what it is.) So, finding a way to grant write access to arbitrary paths within the container feels like a non-starter. (And I don't think I can prevent the app from engaging in this behavior, short of making upstream patches.)
Unless someone can provide a nudge, I think this approach does not work.
I decided I would look at the Ubuntu installation instructions, on the theory that I would use those to build a container. This looked promising; they have a Debian/Ubuntu package, so installation should be one apt-get call!
Before I got very far, I realized this will still have the same problem as the Docker-based approach. Passbolt assumes it has a clean Debian installation to work with. And, the package installs a database and who-knows-what-else. (I already have a DB; I don't want apt-get to try and install MariaDB inside of the container...) So, this is much the same as installing Cloudron: it wants a whole machine, and expects to do all the work.
Using their Ubuntu package does not seem like the right approach to building a new container.
I could build it from scratch. They document it. To do this "right," I think I would:
Unfortunately, I'm still going to run into the non-writeable filesystem. The code I built will still want to write to places that are impossible under the Cloudron security model. I... could try and patch their code, but ultimately, I'm going to need a way to make some things in /etc and /var editable. I think. Or, I have to find a way to modify the code/redirect those mappings. That feels complex and fragile as Passbolt updates their software; I don't want to maintain patchfiles against an upstream.
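The kind of redirection I have in mind could look like this (a sketch under my assumptions: Cloudron's root filesystem is read-only at runtime while /run, /tmp and /app/data are writable, so the links would have to be created while building the image; all paths are illustrative):

```shell
# Sketch: replace a path the app insists on writing to with a symlink
# into a writable area. Intended to run at image build time
# (e.g. from a Dockerfile RUN step).
redirect_writable_path() {
  src="$1"   # path the app writes to, e.g. /var/log/passbolt
  dst="$2"   # writable destination, e.g. /run/passbolt/logs
  rm -rf "$src"
  ln -s "$dst" "$src"
}
```

The runtime start script would then mkdir -p the destination before launching the app, since /run starts empty on every boot.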
I could make a PR against their codebase that attempts to move all file write operations into one fixed location in the filesystem, and make that location configurable via ENV var. (Or, multiple locations configurable via multiple ENV vars.)
I don't know if they'd take that PR. (There's a lot of work in this approach. I'm not going to walk this path today.) It seems like, if the app was developed differently (meaning: if the app only wrote to the disk where I told it to), then this problem wouldn't exist. I'd set those environment variables and my uncle would be named "Bob."
I feel like I read how to solve this, somewhere... but now I can't remember/find it.
The app expects the certs to be in /etc/ssl/certs... and I know Cloudron manages those. However, this feels like the least of my packaging problems at this point, and I know that the app should "just" listen to HTTP on the port declared in the manifest... but. I'll tackle that if/when I can tackle the other challenges.
I'm out of questions, really. At some level, it feels like this should be easier/more straightforward---like I'm missing something. But, maybe I'm not.
For anyone doing a setup on Cloudron, I found my way here. I was specifically trying to set up monitoring of a Minio instance on my installation.
When you run mc admin prometheus generate ALIAS, you will need to make sure you set that alias up as having enough permissions to actually read everything. Using an alias that has limited access controls (e.g. read-only on a single bucket) will not work; Prometheus will not be able to query the Minio API.

At this point, you can add some dashboards. I used the "import" option, and copy-pasted JSON from these dashboards:
https://grafana.com/grafana/dashboards/13502-minio-dashboard/
At that point, I had live dashboards for my Minio installation.
I have no idea yet what log volume will look like, or rotation, or... or... but, I might update this thread (or start another) later as I discover those things. I found myself having to dig around for this information/experiment. Perhaps I just didn't read the correct docs.
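For anyone wiring this up, the scrape job that mc admin prometheus generate prints looks roughly like the following (from memory, so treat it as illustrative; the token and target are placeholders, and the fields may vary by Minio version):

```yaml
scrape_configs:
  - job_name: minio-job
    bearer_token: <token printed by mc>
    metrics_path: /minio/v2/metrics/cluster
    scheme: https
    static_configs:
      - targets: ['minio.example.com']
```

This block gets appended to the Prometheus configuration, after which the dashboards above have data to draw.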
Understood. And, yes: the support side would be horrible. Thank you!
I'm going to re-open this thread for a moment.
I ran into this with a Minio instance.
I intentionally set the instance up so that:

- I use rclone to periodically clone this instance to B2.

In short: I have no interest in Cloudron trying to handle backups for my object store. I currently replicate it periodically to B2 and (soon) to a second Minio object store, off-prem. (I was about to set up the off-prem replication when I encountered all of this.)
My Cloudron is hosted on a physical machine with a 1TB NVMe drive, and I have a 1TB SSD for backups. It simply is not possible for my backup drive to handle backing up Minio. This was a conscious choice on my part, because I knew I could disable backups on the Minio app. After digging around the forums, I discovered that I have to disable auto-updates for the app, and manually handle those---so I can manually skip the backups when updating.
However, I'd really like to benefit from automatic application updates.
I recognize the danger to users of having a way to disable backups on an app completely. But, my backups disk is now almost full, because I didn't realize it was backing up Minio on software update. Once Minio grew to a point that it could not fit on the backup disk, that update started failing. It is only a matter of time until other app updates start to fail, because they do not fit in the remaining space, either. But, really... Minio (from my point of view, once I disabled updates) should never have hit my backup disk.
Admittedly there is a kind of Catch-22 here. However, once you give users the ability to put their data in separate locations, and disable backups on an app, it is reasonable (I would claim) for the user to expect backups to be disabled.
Apologies for raising this thread from the dead, but it feels like the "disable backups for this app" feature is actually not doing what it says. Or, it needs to be clearer: "Disable your scheduled backups, but not the backups that run when the app auto-updates, at which point we'll run a backup on the app and its data regardless. If you don't want Cloudron to back up your data, also disable auto-updates, so you can manually update the app and manually skip backups with each update."
@jdaviescoates and @girish: Excellent. Thank you. I can work with this. Very much appreciated.
Hi all,
I can't find this question asked elsewhere. I'm hoping there's a simple answer.
What is cached, where, that does this? How can I clear that cache so I can log in as Bob?
Many thanks,
M
I've started work on this, and will update this thread when I have it in a repo. That might be later today, it might be in another day or two. I managed to get:
And, at that point, I have more environment variables to set.
So, it seems possible/I'll make the work public shortly.
Agreed. I'm not offering thoughts from a spirit of "GIVE UP!," by any means. It is more from the perspective of "I think this one is trickier than it seems at first glance."
But, I am still learning. So, the staff may say "this is actually easy!" Or, they might say "Yep, it's kinda tricky." And, as a result, we all learn more.
I would welcome input from a member of the team on this.
Docker is intended to run a single process in a single container. When you want to run multiple services, you run multiple containers.
This is where you (typically) would use some kind of tool to orchestrate or compose those containers. For example, a docker-compose.yml will define a set of services, and how they connect and interact with each other.
Cloudron is designed to host singleton containers. Unlike industrial-scale platform-as-a-service offerings, it provides limited tooling for defining connections to other services. The manifest allows you to connect to the services that exist; for example, I can say "connect to the Cloudron-provided Postgres server." However, I have no way to say "I have chosen to run an S3 server/Minio at location X; please use it." As a result, it puts a significant burden on the user. Further, there is no way within the package to say "this should not boot if that service is not present." You have to write custom code in order to provide that logic.
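To make that concrete, the service wiring a CloudronManifest.json can express is roughly this (a sketch from memory; the addon names shown are ones I believe exist, but treat the exact fields as illustrative):

```json
{
  "addons": {
    "postgresql": {},
    "localstorage": {}
  }
}
```

There is no analogous stanza for "use my S3 server at location X"; that has to be hand-wired through app configuration or environment variables you manage yourself.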
Further, the Docs app itself wants to run multiple services. The frontend is separate from multiple backends in their design. The app itself has orchestration concerns and considerations.
So while I appreciate people saying "but it is all there!," we're not discussing what is required to make this a production-grade package.
I'm forgetting things, I'm sure. I estimate 80-120h of work in this, and it is essentially devops work. It should bill at $85+/hour. Further, I'm notorious for underestimating how long it takes to develop features by a factor of 2x-4x. So, I think this is work worth at least $8K-$12K---to say nothing of having to maintain the package, against a large, fast-moving target. (And, while it is open source, governments tend to be very careful about accepting changes from third parties, because there are significant security and compliance burdens they must bear.)
Maybe I'm just accustomed to deployments in Ansible and Terraform, and am overstating the difficulty of this deploy. However, my experience is that when a system is designed to run one way, and you want it to run some other way, there's significant work involved.
So, in return, please forgive my ignorance. I may be misunderstanding things about packaging for Cloudron, and you may be right: this may be easier than I think.
I've started poking at a package for Planka to refresh myself on packaging, because it is a singleton Node app that only has Postgres as an external dependency. It is an example, in my mind, of an app that fits the Cloudron model perfectly. However, anything that wants orchestration beyond the core services Cloudron provides---especially when some of those services are custom components internal to the application itself---is, in my mind, significantly more effort.
Ah. I tend to look at the compose file, because if there is a large list of additional services, cramming them into a single container can become a problem from my point of view.
I agree 100% with your assessment. (I think that was what I was implying with my #2.)
If the Cloudron team thought this was good to add to the stable of apps, I'd give it a go. And, I'd want to do the work to integrate it into whatever build/test frameworks that are in place. However, my workplace is going through some complex and public difficulties, so my energy at the end of the day tends to be limited. So, that's a "yes, but" on diving in on packaging right now.