@girish - Given that the only blocker (that I know of so far) is the TLD certificate availability, would it be a decent workaround to package this entire thing as an application, and then run a one-time command to symlink the TLD certificate into the application directory? This would allow the sys admin to choose which application deserves access to the TLD certificate, ensure the app always has access to the latest certificate, and work around the current limitation of apps not being able to request the TLD cert.
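The one-time command could be as simple as a couple of symlinks. This is only a sketch of the idea - every path below is hypothetical (the demo uses scratch directories so it can run anywhere), not an actual Cloudron location:

```shell
# Sketch of the one-time symlink idea. All paths here are hypothetical;
# the demo uses scratch directories so it can run anywhere.
CERT_DIR=$(mktemp -d)   # stand-in for wherever the platform keeps the TLD cert
APP_DIR=$(mktemp -d)    # stand-in for the application's data directory
touch "$CERT_DIR/fullchain.pem" "$CERT_DIR/privkey.pem"

# The actual one-time command: link the live cert into the app directory,
# so the app always sees the latest renewal without copying files around.
ln -sf "$CERT_DIR/fullchain.pem" "$APP_DIR/fullchain.pem"
ln -sf "$CERT_DIR/privkey.pem" "$APP_DIR/privkey.pem"
ls -l "$APP_DIR"
```

Because symlinks always resolve to the current file, a cert renewal on the platform side would be picked up by the app automatically.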
djxx
Posts
-
XMPP Server - Prosody
-
Backup fails about 50-60% of the time
My main finding was that the backup job (which worked fine before) has started to require about 8x the backup size in memory to run successfully. This means if your backup size is 1 GB, the job won't have enough memory.
-
Backup fails about 50-60% of the time
@bazinga - what do you have your backup size set to?
-
Configure custom bounceback message
Can you tell me what triggers this to be sent? I tested blocking one of my own GMail addresses but didn't see this e-mail template bounced back to me.
And I don't expect the bouncebacks to cause my ticket to get attention. I expect people who want to work with me to see the bounce, reach out to me through another channel, and then I'll let them know their IT needs to put in a ticket with Google if they want to receive e-mail from me.
-
Configure custom bounceback message
I'm having an issue with GMail blocking my IP, even though I have a perfect reputation, am not on any blocklists, have a 10/10 rating on mail-tester, and have submitted a ticket asking them to fix it.
I've read on some forums that the next suggested action is to "block them back": by making my server bounce their e-mails, it raises visibility either with Google directly, or with people using their e-mail servers who are trying to contact me and might open a ticket on their side.
How can I achieve this with Cloudron?
-
XMPP Server - Prosody
The way both Snikket and Cloudron apps work, it would "technically" be a different domain to allow your XMPP usernames to look like user@domain.com instead of the uglier user@chat.domain.com. The SRV records are trivial; the real limitation as far as Cloudron is concerned is the app getting the cert for the TLD.
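For reference, the SRV records in question are just the standard XMPP ones pointing the bare domain at the host actually running the server (hypothetical zone fragment, assuming the app lives at chat.domain.com):

```
; Hypothetical zone fragment - assumes the XMPP app is installed at chat.domain.com
_xmpp-client._tcp.domain.com. 3600 IN SRV 0 5 5222 chat.domain.com.
_xmpp-server._tcp.domain.com. 3600 IN SRV 0 5 5269 chat.domain.com.
```

The catch is that even with these records, the server at chat.domain.com must present a certificate valid for domain.com itself.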
I guess there may be other apps in the future that would need the primary cert in order to verify trust between the TLD and the app itself - so perhaps this could be a new app option?
Also, Snikket intends to be a "one and done" installation, and it packages its own TURN/STUN server. For that reason, I think it's not ideal as the XMPP solution, since it would make the app heavier than it needs to be and require more ports.
-
XMPP Server - Prosody
Research done - it is indeed a limitation of XMPP. From https://snikket.org/faq/:
-
XMPP Server - Prosody
I was actually looking into Snikket again last night. Back when I did my first packaging of XMPP, Snikket was still a bit too limited in features. I'll take another look at Snikket, but I suspect it will have the same cert requirements as part of the XMPP protocol.
And @girish, I don't remember perfectly, but based on my notes above, the cert was the only thing I couldn't work around in the app-package approach. While checking out Snikket, I'll confirm whether it also needs it.
-
From NethServer to Cloudron
I'm not sure if this is allowed, and if not, feel free to delete this post.
I wrote a blog post to share my journey of migrating my self-hosted NethServer to Cloudron. Overall I've been quite happy with the experience and wanted to share it here in case anyone else is considering the same thing, or if I might be able to answer questions for people experiencing some of the same pains during migrations.
If at least one poor soul considering the move from NethServer to Cloudron finds this post and it helps them, it'll make me happy.
-
Keep Getting Backuptask crashed
@matix131997 - I noticed the same.
@joseph - in case it helps, I did a few backups (tarball) and noticed a pattern. The backup app configuration indicates:
Multi-part upload part size. Up to 3 parts are uploaded in parallel and requires as much memory.
But in practice I saw the backup taking about 8x the part size in memory. I watched it do a backup with 512 MB parts, and memory went all the way up to 4 GB before the multiple backup threads "backed off" and dropped down to 1.5 GB again.
With a 256 MB part size and a 2 GB memory limit, I saw usage go just above 2 GB (2050 MB) before backing off.
With a 128 MB part size and a 1 GB memory limit, I saw usage go just above 1 GB (1032 MB) before backing off.
I wonder if there was a change introduced recently that caused the memory usage to go from 3x to 8x, and servers that worked fine before are now undersized.
I've set my backup to 128 MB chunk size and 1 GB RAM and have run it twice now manually without issue. We'll see if the automatic job succeeds as well.
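The numbers above suggest a rough rule of thumb - this is just my observation from these tests, not an official sizing formula:

```shell
# Rough sizing observed above: memory limit ~= 8x the configured part size.
# (Observation from my own backups, not documented behavior.)
part_size_mb=128
suggested_limit_mb=$((part_size_mb * 8))
echo "Part size: ${part_size_mb} MB -> suggested memory limit: ${suggested_limit_mb} MB"
```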
-
How to run a GitLab container registry (2024)
I couldn't find a good way to do it just with GitLab so I went ahead and installed another app. I'd still like to hear if this is possible with GitLab alone.
Overall, it took about 10 minutes to set up this way, which is better than not having registry functionality at all. One note: because of the way the Docker Registry app identifies itself, the port is VERY important when you log in. Previously I was able to authenticate with my GitLab-hosted Docker repository with a URL like:
docker login registry.mydomain.com
but this didn't work at all with the Cloudron Docker Registry hosted at the same domain. I had to change to:
docker login registry.mydomain.com:443
yes - the port is apparently important!
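The same port-qualified name has to appear everywhere you reference an image, not just at login (hypothetical domain and image name; obviously not runnable outside that setup):

```shell
# The registry host in an image name must match the login host exactly,
# including the :443 - "mydomain.com" and "myimage" are placeholders.
docker login registry.mydomain.com:443
docker tag myimage:latest registry.mydomain.com:443/myimage:latest
docker push registry.mydomain.com:443/myimage:latest
```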
-
Mattermost import - boards broken
Who's ready for a book?
TL;DR - If you're trying to import boards from another server, especially with a different version, you're gonna have a bad time.
I'm nearly done moving one of my servers from NethServer to Cloudron. The Mattermost version on NethServer is very old, and it was before the forced migration to Postgres. This is already a bad thing, because there are known issues with doing a database restore from a MySQL instance into a Postgres instance.
I backed up my old Mattermost using the mmctl command, and after stepping on a few rakes I was finally able to get a file exported and imported. Tip: if you have any custom emoticons over their 512 KB limit, your import will fail with obscure size errors. It's easiest to fix this in your original system and re-export, but you can also replace the emoticon file with a smaller version with the same name. Oh, and you'll most likely need to increase the RAM given to Mattermost in Cloudron if you're doing an import.
After looking around for a bit at my imported instance, I noticed that my boards were gone. I'm not sure if they just weren't in the export file, if they were excluded because the plugin wasn't installed on the fresh instance before doing the import, or if the database change made it impossible to import them. In any case, I ended up with a Mattermost instance with no boards - even after re-installing the plugin.
A quick read through their documentation showed that I should be able to export a .boardarchive file from each board and import it into my new server. I resurrected the old server long enough to export these files and tried to import them; that's when I started having the problems that prompted me to write this post.
There were two primary problems:
#1 - {boards} not found
This error message is very unhelpful and doesn't give a good indication of what's happening. It turns out this was caused by app/cookie data stored in my browser. There are preferences stored called lastViewId and lastBoardId, which focalboard always uses to query when loading. Many requests to focalboard would succeed, but then this one would fail:
/boards/<board_id>/blocks?all=true
Since the domain name of my restored instance was the same, my browser was using these preferences to fetch the board ID from my old instance, which didn't exist in the new instance.
The solution here is to clear your site data and then refresh. It will detect that you don't have a preference and then properly fetch and load the boards.
You may also need to do this if you're quickly creating and deleting boards while troubleshooting the import. You create a board, it stores it as your preference, you delete the board, BOOM, your UI is broken because it starts trying to fetch data for the board you just deleted.
#2 - 429 "Something went wrong"
As I was troubleshooting the above, and quickly flipping between boards, I noticed that the entire app would become broken, with lots of these 429 errors in the console. It looks like the default rate limit (whether set by Mattermost or Cloudron) is WAY too low at 10 per second with a burst of 100. Each page load on a board takes at least 10-15 queries to fully load, so if you have a couple of tabs open, or the app and a tab, you'll immediately hit the rate limit. Thankfully, Cloudron makes this very easy to fix: just use the File Manager to edit config.json, update the RateLimitSettings section, then restart the app.
Now can I load my boards?
You wish.
When I went to import the .boardarchive files, I noticed that the import was setting all users to me and all timestamps to the time of import. After inspecting the file I can see that it does have all this data, but it's not making it through the import. One obvious thing is that the user IDs are not the same - of course they aren't, because I'm migrating to another server.
I came up with a script (shown below) to update the .boardarchive files with a find/replace of user IDs from old to new. This works, but only for the assignee. I tried many things to get "created by" and "comment by" to map properly, but nothing worked. Eventually I looked in the source code, and it's obvious that it's not even trying. Their docs say:
After importing a Focalboard backup from one Mattermost instance into another (such as during a migration from Mattermost Cloud to self-hosted), card timestamps will be updated based on the import date, and cards won't correctly identify users whose user IDs differ across Mattermost instances.
This is putting it gently. It doesn't even try:
```go
...
block.ModifiedBy = userID
block.UpdateAt = now
...
```
It makes zero attempt to find the user, or restore the timestamps.
So the simple truth is that if you're importing a .boardarchive, you're going to lose almost all user and timestamp data.
I can see multiple issues open in their bug tracker that are variations of items #1 and #2 above, with no response for a long time. Overall, I'm very, very disappointed with focalboard and am not sure how much longer it can hang on as a product with so many open bugs and poorly implemented features, such as imports.
If I could do it all over again, I would try harder to make the SQL import work to retain the data. Unfortunately, I've already wasted a lot of time on this, and my boards aren't that important (thankfully).
Next time I need to move these focalboards, I think I'll just switch to a different product. And next time I need to move chat system, I may look for something else that's self-hosted and has a good board integration.
Appendix - Fix your .boardarchive files to at least retain assignee
I wrote a script that will take two inputs:
- A file called ids.csv, with three columns: old_id, new_id, description
- .boardarchive files in the same directory.
So your directory structure would look like this:
- ids.csv
- some_board.boardarchive
- fix.sh
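For example, ids.csv might look like this (the IDs below are made up for illustration; Mattermost user IDs are 26-character strings):

```
old_id,new_id,description
k3j9qzp1mdy5hcaw8x2nb6t4re,u7s1vgq9plk3xm5zc8d2wf4yhn,alice
p8m2xtq4wjn6zr1t9b5kdy3hcv,e4n7vlq2quk9xs6w3a8mez1gtj,bob
```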
Running fix.sh will do the following for each .boardarchive file in the current directory:
- Unzip the file to a temporary directory
- Find the board.jsonl file that contains the board data
- Do a find/replace of every old ID to every new ID in the file
- Save the file
- Re-zip the archive
- Clean up the temporary directory
So as long as you export the IDs from your old and new instances, put them in the CSV, and then run this script, you can import a .boardarchive and retain the assigned users.
fix.sh
```shell
#!/bin/bash

# Loop through all files in the current directory with the extension "boardarchive"
for file in "$PWD"/*boardarchive; do
    echo "Processing $file"

    # Unzip the file and extract it to a temporary directory
    temp_dir=$(mktemp -d)
    unzip "$file" -d "$temp_dir"

    # Find the board.jsonl file in the unzipped archive
    jsonl_file=$(find "$temp_dir" -type f -name board.jsonl)
    if [ ! -f "$jsonl_file" ]; then
        echo "Error: Could not find board.jsonl file in $file"
        rm -rf "$temp_dir"
        continue
    fi

    # Read ids from ids.csv and perform replacements on board.jsonl
    while IFS=, read -r old_id new_id description; do
        echo "Replacing $old_id with $new_id for $description"
        sed -i "s/$old_id/$new_id/g" "$jsonl_file"
    done < ids.csv

    # Update the archive with the updated board.jsonl file
    pushd "$temp_dir"
    zip -qr "$file" ./
    popd
    echo "Updated $file"

    # Clean up temporary directory
    rm -rf "$temp_dir"
done
```
-
Keep Getting Backuptask crashed
I'm also getting this error message with a Backblaze backup.
-
Mattermost import - boards broken
Not sure why this ticket didn't show up in searches before, but it looks like it's a known issue upstream: https://github.com/mattermost/focalboard/issues/5019
My source server's database was MySQL. I imported using an mmctl export, so I didn't expect to have to deal with database issues, but it seems there are still problems. The issue above shows the exact same error messages as mine (except the "can't open log file" one), so I think the log error is just a red herring.
-
Mattermost import - boards broken
Just updated to 8.0.4 last night.
I touched the file and confirmed it exists. I got the same error, so I also tried to chmod 777 it, and it still gave the error. Maybe it's not writing where we think it is? Perhaps it's hard-coded to some path outside of the packaged directory?
In any case, it looks like it first has an error not finding {boards}, and then complains about not being able to write that error out.
```
{"timestamp":"2024-08-23 18:35:40.374 Z","level":"error","msg":"{boards} not found","caller":"app/plugin_api.go:1011","plugin_id":"focalboard"}
{"timestamp":"2024-08-23 18:35:40.375 Z","level":"info","msg":"can't open new logfile: open focalboard_errors.log: permission denied\n","caller":"io/io.go:432","plugin_id":"focalboard","source":"plugin_stderr"}
{"timestamp":"2024-08-23 18:35:40.421 Z","level":"info","msg":"can't open new logfile: open focalboard_errors.log: permission denied\n","caller":"io/io.go:432","plugin_id":"focalboard","source":"plugin_stderr"}
{"timestamp":"2024-08-23 18:35:40.422 Z","level":"error","msg":"{boards} not found","caller":"app/plugin_api.go:1011","plugin_id":"focalboard"}
```
-
Grist | The Evolution of Spreadsheets
@umnz - I'm new here, but one thing that helps is someone wanting it enough to follow the guide to package it up, confirm it works, and notify the dev team: https://docs.cloudron.io/packaging/tutorial/
But, as someone who has packaged up an app that is waiting for attention I'd be a bit (selfishly) upset to see yet another "no code database" package on Cloudron before mine was selected.
-
Mattermost import - boards broken
@girish Yes - same error:
```
root@systemid:/app/code# touch /app/code/focalboard_errors.log
touch: cannot touch '/app/code/focalboard_errors.log': Read-only file system
```
-
How to run a GitLab container registry (2024)
I can see some old posts from 2022 that have some complicated steps to install the Docker Registry app and link GitLab to it. Is it possible to use the built-in registry by making the configuration edits here?
It seems it would just need another open port, but as I understand it, Cloudron apps have to pre-define the ports they're allowed to listen on, so this option would need to be built into the Cloudron app definition, and I don't see it anywhere.
I'd much rather use this "set and forget" container registry approach, since it makes it easier to connect to other parts of GitLab.
-
Mattermost import - boards broken
@girish - Yes. I still see the read-only error, and the boards still do not function; the API call returns 404 errors.
-
Mattermost import - boards broken
@girish Thanks for the tip. Unfortunately the behavior is the same: I still get that message loop and still see the 404 error coming back from the API call.