I'm happy to say that I've moved my XMPP server from NethServer to Cloudron. While this is probably not a common move, I am sharing some notes here in case it helps someone else. Also, perhaps this'll cause Cloudron to show up in a few more searches.
Install XMPP on Cloudron using the steps above. A bit manual for now!
Dump your ejabberd data (that's the XMPP server NethServer uses) with this command:
/opt/ejabberd-20.04/bin/ejabberdctl --config-dir /etc/ejabberd dump /etc/ejabberd/xmpp_dump.txt
Download this dump file locally
For ease, clone the source for prosody to your local computer so you can use the migration tools without installing needless packages on Cloudron. You'll need to run ./configure and make - but you don't need to actually install it.
Don't be a Lua noob. I spent a while struggling to get my Lua environment set up, and thought I needed to run the tools like lua ejabberd2prosody.lua, but got lots of errors about missing dependencies. Once I figured out you need to execute the script directly, like ./ejabberd2prosody.lua, things worked fine.
Run the ejabberd2prosody.lua script on your xmpp_dump.txt file:
./tools/ejabberd2prosody.lua ~/Desktop/xmpp_migrate/xmpp_dump.txt
Create a migrator configuration (or use the one I've pasted below). It takes everything from the internal file-based storage format and writes it into the SQLite format, since that's how the Cloudron prosody is configured. Docs:
https://prosody.im/doc/migrator
https://prosody.im/doc/storage
Run the migrator script:
./tools/migration/prosody-migrator.lua --config=./migrator.cfg.lua prosody_files database
Turn off your Cloudron XMPP app
Copy the resulting prosody.sqlite file into your Cloudron XMPP app's /app/data folder. You'll find it in the data folder under your local prosody directory.
Turn on your Cloudron XMPP app
Your bookmarks, rosters, etc. will now be transferred to your new server! This doesn't appear to move archived messages (mod_mam). That's probably because most prosody servers aren't configured to store these permanently, so the migrator doesn't bother with them.
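If you do want archives retained on the new server going forward, prosody's mod_mam has a retention setting. A config sketch - not something the Cloudron app necessarily exposes, so treat it as an assumption about your setup:

```lua
-- prosody.cfg.lua fragment: keep server-side message archives indefinitely.
-- "mam" must be in modules_enabled for archiving to happen at all.
modules_enabled = { "mam" }
archive_expires_after = "never" -- prosody's default is "1w"
```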
I only noticed one issue while migrating. When I first ran the migrator script, it gave me errors about topics being empty on some MUCs. I thought I was being smart and edited the code to handle the blanks. That left me unable to join those MUCs on Prosody from certain XMPP clients, because Prosody expects every MUC to have a topic.
Once I manually adjusted the MUC topics to be non-empty, the other clients started working fine.
Another almost-issue is that Gajim needed to be restarted a few times to start using OMEMO properly. I think the other MUC issues may have thrown it into an error state.
prosody_files {
    hosts = {
        -- each VirtualHost to be migrated must be represented
        ["domain.com"] = {
            "accounts";
            "account_details";
            "account_flags";
            "account_roles";
            "accounts_cleanup";
            "auth_tokens";
            "invite_token";
            "roster";
            "vcard";
            "vcard_muc";
            "private";
            "blocklist";
            "privacy";
            "archive";
            "archive_cleanup";
            "archive_prefs";
            "muc_log";
            "muc_log_cleanup";
            "persistent";
            "config";
            "state";
            "cloud_notify";
            "cron";
            "offline";
            "pubsub_nodes";
            "pubsub_data";
            "pep";
            "pep_data";
            "skeletons";
            "smacks_h";
            "tombstones";
            "upload_stats";
            "uploads";
        };
        ["conference.domain.com"] = {
            "accounts";
            "account_details";
            "account_flags";
            "account_roles";
            "accounts_cleanup";
            "auth_tokens";
            "invite_token";
            "roster";
            "vcard";
            "vcard_muc";
            "private";
            "blocklist";
            "privacy";
            "archive";
            "archive_cleanup";
            "archive_prefs";
            "muc_log";
            "muc_log_cleanup";
            "persistent";
            "config";
            "state";
            "cloud_notify";
            "cron";
            "offline";
            "pubsub_nodes";
            "pubsub_data";
            "pep";
            "pep_data";
            "skeletons";
            "smacks_h";
            "tombstones";
            "upload_stats";
            "uploads";
        };
    };
    type = "internal"; -- the default file-based backend
    path = "/home/user/code/prosody-build/prosody-0.12.4/data/";
}
database {
    -- The migration target does not need 'hosts'
    type = "sql";
    driver = "SQLite3";
    database = "prosody.sqlite"
}
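For completeness, the target server has to read that SQLite file with a matching storage configuration. The Cloudron app presumably ships something like this already, so this is only a sketch of what the other side looks like:

```lua
-- prosody.cfg.lua fragment: SQL storage matching the migrator's "database" target.
storage = "sql"
sql = {
    driver = "SQLite3";
    database = "prosody.sqlite"; -- resolved relative to prosody's data directory
}
```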