At this point, I don't think the issue is having something that can be packaged. I've already packaged Prosody and it's working fine on my Cloudron. The problem is all the "hacky" things I had to do to make it work that aren't currently compatible with how packaging works - and Snikket will need to do some of those same things. So, our issue is still getting enough attention from the Cloudron team to get XMPP over the finish line.
-
Snikket Server - Your own messaging server in a box
@robi said in Snikket Server - Your own messaging server in a box:
Maybe it's a simple thing to adjust in the code and have a fork.
Then there is the maintenance issue.
Yes, the maintenance would be quite an issue. My guess is that Snikket will put zero effort into not breaking customizations they told people not to make in the first place.
Prosody (and ejabberd) take care to be backwards compatible with older configurations, and follow a well-documented deprecation process when they're going to make breaking changes.
-
Snikket Server - Your own messaging server in a box
@jdaviescoates What it requires is that your username matches the root domain it is deployed to, which means it needs to be installed as the "root" application on that domain. This is incompatible with Cloudron, and frankly with how many organizations operate. It's very common to have XMPP installed somewhere like xmpp.domain.com, and the protocol supports configuration to use the root domain for accounts. This is one of the areas where Snikket is firmly standing by "it should be so simple and require no configuration" - so either you install it at the root or you live with ugly usernames like user@xmpp.domain.com.
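For anyone curious how that protocol-level configuration works: XMPP uses DNS SRV records for delegation, so accounts can stay user@domain.com even when the server itself runs on xmpp.domain.com. A rough sketch with placeholder names (not my actual records):
# Hypothetical SRV records for domain.com pointing clients and other servers
# at xmpp.domain.com (placeholders only):
#   _xmpp-client._tcp.domain.com. 3600 IN SRV 0 5 5222 xmpp.domain.com.
#   _xmpp-server._tcp.domain.com. 3600 IN SRV 0 5 5269 xmpp.domain.com.
# Check what a domain currently publishes:
dig +short SRV _xmpp-client._tcp.domain.com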
I don't think it's worth the time to customize something that doesn't want to be customized - but rather use something that is meant to be configured and come up with sane defaults that fit Cloudron. That's what I've tried to do with my Prosody packaging.
-
Snikket Server - Your own messaging server in a box
Thanks for the upvote! I'm biased, but I think Prosody is the better choice. Snikket has too many things they do not (and possibly never will) support. The #1 reason I didn't package Snikket is not being able to use my root domain for accounts. It's a common convention to use user@domain.com for both e-mail and XMPP, but Snikket doesn't support this. The #2 reason is SSO - Snikket doesn't support it, but SSO is one of my favorite features of most Cloudron apps.
-
Retention policy - 1 day possible?
Yes - you're right, it's "storage". I have been meaning to try rsync instead of zip, but it does warn that it's not a good idea for lots of files. I'll give it a try. Thanks!
-
Retention policy - 1 day possible?
@nebulon Thanks for checking! The UI change did take effect, and I do see a reduction in backup size - but it's not down to 1x the size of the backup; it's still 2x.
Inside the remote storage folder, I see:
- storage
- 2026-01-03-150000-664
- all other backups before
It looks like the latest backup also has a copy in "storage", which is still resulting in 2x the backup space for each snapshot being taken - because each day "storage" will change, and there will be a new folder added.
-
Retention policy - 1 day possible?
@james Thanks! This was helpful. I made this change. Looking forward to the next sync so I can see if the data has been cut in half.
-
Retention policy - 1 day possible?
@girish Great! Can you give an example of how, or point me to the documentation?
-
Retention policy - 1 day possible?
Using Cloudron version 9, under Configure Backup Schedule & Retention, is it possible to get an option for 1 day?
I store my backups on a ZFS pool, so I automatically get daily, weekly, and monthly snapshots of my data. Having this set to 2 days causes me to spend 2x the storage space (and bandwidth) on backups.
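For context, here's roughly what the ZFS side of my setup looks like; the pool/dataset name and snapshot naming below are just illustrative, not anything Cloudron-specific:
# list the automatic snapshots of the backup dataset (example names)
zfs list -t snapshot -o name,used,creation tank/cloudron-backups
# a typical daily snapshot from an auto-snapshot tool looks something like:
#   tank/cloudron-backups@autosnap_2026-01-03_daily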
-
MiroTalk Update regularly fails after update
@james Thanks for the quick reply. I didn't change any ports manually. Could it be that this and another application are conflicting and it's not being detected?
I'll run this lsof command next time it happens to see if I can identify the culprit.
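In case it helps anyone else hitting the same error, this is the kind of check I have in mind (the port number is the one from my error above; adjust as needed):
# which process is holding the port the container failed to bind?
lsof -i :40014
ss -ltnp | grep 40014
# is another app's container already publishing it?
docker ps --format '{{.Names}}\t{{.Ports}}' | grep 40014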
-
MiroTalk Update regularly fails after update
Description
MiroTalk SFU regularly gets stuck during the automatic update process:
Docker Error: (HTTP code 500) server error - failed to set up container networking: driver failed programming external connectivity on endpoint a877975d-38be-4088-bc92-e0d7a486a818 (2e5adaa635a95bd65ca0f290712065d444528e3420c49f2f88323b40c62caaa5): failed to bind host port for 0.0.0.0:40014:172.18.16.130:40014/tcp: address already in use
Steps to reproduce
Not sure. It happens during updates though.
Troubleshooting Already Performed
I've stopped the app, retried the upgrade, and retried the configure task. Sometimes it works after a few tries; other times I have to restart the server first.
System Details
Hetzner
vServer
4 Core "AMD EPYC-Milan Processor"
16.37 GB RAM & 4.29 GB Swap
Cloudron Version
9.0.15
Ubuntu Version
24.04
Output of cloudron-support --troubleshoot:
Linux: 6.8.0-88-generic
Ubuntu: noble 24.04
Execution environment: kvm
Processor: AMD EPYC-Milan Processor BIOS NotSpecified CPU @ 2.0GHz x 4
RAM: 15989992KB
Disk: /dev/sda1 44G
[OK] node version is correct
[OK] IPv6 is enabled in kernel. No public IPv6 address
[OK] docker is running
[OK] docker version is correct
[OK] MySQL is running
[OK] nginx is running
[OK] dashboard cert is valid
[OK] dashboard is reachable via loopback
[OK] No pending database migrations
[OK] Service 'mysql' is running and healthy
[OK] Service 'postgresql' is running and healthy
[OK] Service 'mongodb' is running and healthy
[OK] Service 'mail' is running and healthy
[OK] Service 'graphite' is running and healthy
[OK] Service 'sftp' is running and healthy
[OK] box v9.0.15 is running
[OK] netplan is good
[OK] DNS is resolving via systemd-resolved
[OK] Dashboard is reachable via domain name
[WARN] Domain domain.com expiry check skipped because whois does not have this information
[OK] unbound is running
-
XMPP Server - Prosody
@nebulon - Congrats on the release of version 9! One of my servers just updated and the new UI looks slick. When will we get to see a slick new XMPP app in the store?

-
sshfs backup duplicates network traffic?
@james said in sshfs backup duplicates network traffic?:
So, sshfs tries to issue a remote copy command but falls back to sshfs based copy if it fails for some reason.
What is your provider for sshfs? Most people here use Hetzner Storage Boxes.
I'm my own provider
I'm just using a standard SSH install on Proxmox, and the files are stored on a ZFS cluster. I don't know of anything that would stop the copy command from working; what can I do to check / troubleshoot this?
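If it helps narrow things down, this is the kind of check I plan to run. My (possibly wrong) understanding is that the server-side copy depends on an SFTP extension ("copy-data") that only newer OpenSSH sftp-servers advertise, so the OpenSSH version on the Proxmox box is the first thing I'd rule out. Host names and paths are placeholders:
# 1) what OpenSSH version is the backup target running?
ssh -v user@proxmox-host exit 2>&1 | grep 'remote software version'
# 2) mount in the foreground with sshfs debugging and watch whether the copy
#    is handled remotely or falls back to read+write through the client
sshfs -f -o sshfs_debug user@proxmox-host:/backups /mnt/test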
-
sshfs backup duplicates network traffic?
I am configuring my backup to use sshfs and noticed this while it was running:
Copying /mnt/cloudronbackup/snapshot/mail.tar.gz to /mnt/cloudronbackup/2025-07-27-215345-102/mail_v8.3.2.tar.gz
On the remote server I can see the snapshot and timestamped directory (e.g. 2025-07-27-215345-102) while the backup is running.
Based on the network traffic, it seems that while it is moving each file from snapshot to the timestamped directory, it is literally using copy, which means the file has to make another round trip. If I'm not mistaken, this means the network usage for this backup will be 3x the size of the file.
It seems like this is happening:
- Cloudron makes an archive
- Cloudron sends this file to the snapshot folder
- Cloudron receives the file back again (part of copy)
- Cloudron sends the file again to the timestamped folder (part of copy)
Wouldn't it be much more efficient (and faster) to issue a mv command to move the file rather than have a round trip?
Also, it seems like another side effect is that the snapshot folder keeps the files there until the next run - requiring 2x the space for the backup.
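To illustrate the difference (the mount paths are from my log above; the remote host and path are placeholders, so treat the command form as a sketch): a move within the same remote filesystem is a metadata-only rename on the server, while a copy through the sshfs mount typically pulls the data back to the client and pushes it out again.
# server-side move: no file data crosses the network
ssh user@backup-host 'mv /backups/snapshot/mail.tar.gz /backups/2025-07-27-215345-102/mail_v8.3.2.tar.gz'
# copy via the sshfs mount: the client typically reads the whole file and writes it back
cp /mnt/cloudronbackup/snapshot/mail.tar.gz /mnt/cloudronbackup/2025-07-27-215345-102/mail_v8.3.2.tar.gz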
-
SSHFS read speed significantly slower than scp with the same target
@nebulon Can you tell me how / where to edit this so it uses the options -o direct_io,compression=no? And is it safe to do so, and how long will the change persist?
-
XMPP Server - Prosody
I'm happy to say that I've moved my XMPP server from NethServer to Cloudron. While this is probably not a common move, I am sharing some notes here in case it helps someone else. Also, perhaps this'll cause Cloudron to show up in a few more searches.

-
Install XMPP on Cloudron using the steps above. A bit manual for now!
-
Dump your ejabberd data (that's the XMPP server NethServer uses) with this command:
/opt/ejabberd-20.04/bin/ejabberdctl --config-dir /etc/ejabberd dump /etc/ejabberd/xmpp_dump.txt
-
Download this dump file locally
-
For ease, clone the Prosody source to your local computer so you can use the migration tools without installing needless packages on Cloudron. You'll need to run ./configure and make - but you don't need to actually install it.
-
Don't be a Lua noob. I spent a while struggling to get my Lua environment set up, and thought I needed to run the tools like lua ejabberd2prosody.lua, but got lots of errors about missing dependencies. Once I figured out you need to execute the script directly, like ./ejabberd2prosody.lua, things worked fine.
-
Run the ejabberd2prosody.lua script on your xmpp_dump.txt file:
./tools/ejabberd2prosody.lua ~/Desktop/xmpp_migrate/xmpp_dump.txt
-
Create a migrator configuration (or use the one I've pasted below). It basically takes everything from the file data format and puts it into the sqlite format, since that's how the Cloudron prosody is configured. Docs:
-
Run the migrator script:
./tools/migration/prosody-migrator.lua --config=./migrator.cfg.lua prosody_files database
-
Turn off your Cloudron XMPP app
-
Copy the resulting prosody.sqlite file into your Cloudron XMPP app's /app/data folder. It will be in the /data folder under your local prosody directory.
-
Turn on your Cloudron XMPP app
Your bookmarks, rosters, etc. will now be transferred to your new server! This doesn't appear to move archived messages (mod_mam), probably because most Prosody servers aren't configured to store these permanently, so the migration tools don't bother with them.
I only noticed one issue while migrating. When I first ran the migrator script it gave me errors about topics being empty on some MUCs. I thought I was being smart and edited the code to handle the blanks. This caused me to be unable to join the MUCs on Prosody on certain XMPP clients because Prosody expects there to be a Topic for every MUC.
Once I manually adjusted the MUC topics to be non-empty, the other clients started working fine.
Another almost-issue is that Gajim needed to be restarted a few times to start using OMEMO properly. I think the other MUC issues may have thrown it into an error state.
prosody_files {
    hosts = {
        -- each VirtualHost to be migrated must be represented
        ["domain.com"] = {
            "accounts"; "account_details"; "account_flags"; "account_roles";
            "accounts_cleanup"; "auth_tokens"; "invite_token"; "roster";
            "vcard"; "vcard_muc"; "private"; "blocklist"; "privacy";
            "archive"; "archive_cleanup"; "archive_prefs"; "muc_log";
            "muc_log_cleanup"; "persistent"; "config"; "state";
            "cloud_notify"; "cron"; "offline"; "pubsub_nodes"; "pubsub_data";
            "pep"; "pep_data"; "skeletons"; "smacks_h"; "tombstones";
            "upload_stats"; "uploads";
        };
        ["conference.domain.com"] = {
            "accounts"; "account_details"; "account_flags"; "account_roles";
            "accounts_cleanup"; "auth_tokens"; "invite_token"; "roster";
            "vcard"; "vcard_muc"; "private"; "blocklist"; "privacy";
            "archive"; "archive_cleanup"; "archive_prefs"; "muc_log";
            "muc_log_cleanup"; "persistent"; "config"; "state";
            "cloud_notify"; "cron"; "offline"; "pubsub_nodes"; "pubsub_data";
            "pep"; "pep_data"; "skeletons"; "smacks_h"; "tombstones";
            "upload_stats"; "uploads";
        };
    };
    type = "internal"; -- the default file based backend
    path = "/home/user/code/prosody-build/prosody-0.12.4/data/";
}
database {
    -- The migration target does not need 'hosts'
    type = "sql";
    driver = "SQLite3";
    database = "prosody.sqlite";
}
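One optional extra step I'd suggest before copying the file over (assuming sqlite3 is installed locally): peek inside the generated database to confirm the migrator actually wrote your hosts. If I remember the schema correctly, the main table is simply called prosody.
# quick sanity check on the migrated database (run from the prosody source directory)
sqlite3 ./data/prosody.sqlite ".tables"
sqlite3 ./data/prosody.sqlite "SELECT DISTINCT host, store FROM prosody;"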
-
Trying to add an sshfs mounted location as a regular file system volume type in Cloudron
I'm facing the same issue with Nextcloud and trying to tune the performance of SSHFS (https://forum.cloudron.io/topic/13852/sshfs-read-speed-significantly-slower-than-scp-with-the-same-target/9). The answer, at least for Nextcloud, is to adjust the configuration file. Instead of trying to "trick" Cloudron into accepting an SSHFS mount point as the primary storage, just adjust the application's config file to point to the mount point. Not sure if it will work for Immich, but it works for Nextcloud.
Also, I'm doing this with Nextcloud for the exact same reason - I want to manage my pictures. I'm trying the "Memories" plugin in Nextcloud, which has pretty good reviews. I'll probably move on to Immich next for testing.

-
SSHFS read speed significantly slower than scp with the same target
@robi - I wonder if it still does any buffering when writing to /dev/null? Since that's what the dd read command above does. In any case, this suggestion caused me to revisit the direct_io option. It says it disables the kernel page cache, which does seem to give the most consistent performance improvement.
Yet Another Data Point - I did a lot more testing today, and I think I'm as far as I can go. The good news: I can consistently get 16 - 25 MB/s read speeds.
TL;DR: using this command gives me the best read performance (2x-3x improvement):
nice -n -10 sshfs -s -o direct_io,compression=no
Why I'm using these options:
direct_io
direct_io disables caching, and had quite an interesting effect on reads.
Using the -f -d options I was able to watch the packets going through. I was wrong before about the writes being bigger than the reads; they're not. But the writes are issued with more parallelism than the reads.
Before direct_io:
[01315] READ
[01308] DATA 32781bytes (31ms)
[01309] DATA 32781bytes (31ms)
[01310] DATA 32781bytes (31ms)
[01311] DATA 32781bytes (31ms)
[01316] READ
[01317] READ
[01318] READ
[01319] READ
[01312] DATA 32781bytes (31ms)
[01313] DATA 32781bytes (31ms)
[01314] DATA 32781bytes (31ms)
[01315] DATA 32781bytes (31ms)
READ requests 4 chunks at a time, waits for them, and then requests 4 more.
[05895] WRITE
[05827] STATUS 28bytes (34ms)
[05828] STATUS 28bytes (34ms)
[05829] STATUS 28bytes (35ms)
[05830] STATUS 28bytes (35ms)
[05831] STATUS 28bytes (35ms)
[05832] STATUS 28bytes (34ms)
[05833] STATUS 28bytes (34ms)
[05834] STATUS 28bytes (34ms)
[05835] STATUS 28bytes (34ms)
[05896] WRITE
[05897] WRITE
WRITE requests at least 60 chunks at a time, and sometimes I saw over 100 chunks pending.
After turning on direct_io, the reads look more like the writes:
[06342] READ
[06343] READ
[06344] READ
[06313] DATA 32781bytes (31ms)
[06314] DATA 32781bytes (31ms)
[06315] DATA 32781bytes (31ms)
[06316] DATA 32781bytes (31ms)
[06317] DATA 32781bytes (32ms)
[06318] DATA 32781bytes (32ms)
[06319] DATA 32781bytes (32ms)
[06320] DATA 32781bytes (32ms)
[06321] DATA 32781bytes (33ms)
[06322] DATA 32781bytes (35ms)
[06323] DATA 32781bytes (35ms)
[06324] DATA 32781bytes (36ms)
[06325] DATA 32781bytes (36ms)
[06326] DATA 32781bytes (36ms)
[06327] DATA 32781bytes (37ms)
Note the difference in the chunk IDs and you can see it's allowing at most 31 chunks to be pending before requesting more.
I think this is the primary reason for the speed increase.
-s for single threading
I noticed that running it on a single thread made the degradation of repeated file reads less pronounced. Instead of dropping back to 8 MB/s after a few reads, it does 25 MB/s reads at least 5-6 times (500-600 MB) before dropping down to 16 MB/s. Also, it recovers back to 25 MB/s over time, whereas with multi-threading I needed to restart the SSHFS connection in order to get 25 MB/s speeds again.
nice
Since there seems to be an element of CPU bottleneck (as shown by running in a single process), I also wanted to give this process priority. It seems to help the session get more 25 MB/s reads before slowing down.
compression=no
Because we're now on one thread, and we're hogging lots of CPU time, I disabled compression. I didn't notice a difference in throughput with it on, but turning it off helps reduce CPU load.
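For completeness, here's roughly how the whole mount line looks with those options in place; the host and paths are placeholders, not my actual storage target:
# full mount command with the options discussed above (placeholder host/paths)
nice -n -10 sshfs -s -o direct_io,compression=no \
    user@storage-host:/backups /mnt/cloudronbackup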
Next Steps:
I will run this test a few more times, and probably even adjust my mount for the volume manually to see if it helps performance.
There is definitely some element of throttling / filling up, because repeated reads in the same session can get slower, and starting a new session can help the speed go back up. I'm not sure if this is on the client side or the server side. Any insights would be greatly appreciated.
Even though I wish there was a clearer answer, I'll be happy if the 2x boost to read speed works.
P.S. - I even tried a "high performance SSH" binary, hpnssh, and it did not make a noticeable difference in my tests.
-
XMPP Server - Prosody
@nebulon - as you may see from my other posts, I'm all in on Cloudron ;). I'm in the process of migrating my last server from NethServer to Cloudron; XMPP is one of those services I couldn't move without. I plan to move forward with my manual approach in this thread, but am still interested to see when this could become an official app. Does Cloudron 9 have an ETA?
-
SSHFS read speed significantly slower than scp with the same target
Another data point. I tried using sshfs on my LAN to the data server, and I got 112 MB/s write and 117 MB/s read - both of which are right at the theoretical limit of the gigabit connection. Not to mention it's taunting me with the read speed actually being faster than the write speed.

I decided to do another test with my laptop <-> Hetzner server. So now we're transferring between the SSD on the VPS and the SSD on my laptop. The speeds are the same:
writing from laptop to VPS: 50 MB/s
reading from VPS to laptop: 7 MB/s
I checked and both my laptop and Cloudron are using the same version of SSHFS and FUSE:
SSHFS version 3.7.3
FUSE library version 3.14.0
using FUSE kernel interface version 7.31
fusermount3 version: 3.14.0
To get the best picture possible of the traffic, I put a fast watch on the connection:
watch -n 0 ss -t state established dst <server_ip>/24
With this, I can see that the write sizes are ~10x bigger than the read sizes. I'm wondering if this is why the performance difference only shows up in WAN situations: 10x the round trips hurts a lot more on the WAN than on the LAN. And for those of us with storage boxes in Europe and servers outside of Europe, 10x the round trips really hurts.
I finally stumbled across this article (https://www.admin-magazine.com/HPC/Articles/Sharing-Data-with-SSHFS) which does some pretty detailed performance testing and tuning with SSHFS.
The options they suggest for the sshfs mount didn't help much. I did notice that using direct_io can sometimes make the read speeds go up to 20 MB/s, but it's not reliable.
I'm wondering if we're now into the realm of TCP configuration (which is the article's option #2) to increase the TCP buffer sizes. This would be a server-wide change, and is out of my depth. What are your thoughts, @nebulon?
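For reference, these are the kinds of knobs the article is talking about. The values below are only illustrative and untested on my setup, so treat this as a sketch rather than a recommendation:
# inspect the current TCP buffer limits
sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# temporarily raise the maximums (example values; not persistent across reboots)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'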