infogulch
Posts
-
Add Borg as backup option
borg2 is in the works, which reorganizes the storage backend to support any kind of remote storage, recently including everything supported by rclone.
-
Element X, Call and Server Suite are production ready
From this release of ESS (24.09) onwards our server (Synapse) will come with Server-side synchronisation (SSS) as standard.
The latest version of Synapse, 1.115.0, is already packaged and available, and CHANGES.md has a lot of mentions of the Sliding Sync API, so maybe it just works now.
There's also Element Server Suite (ESS), which looks like an "enterprise ready" packaging of Synapse and related components for Kubernetes. Not sure how this relates to Cloudron deployments. Introduction to Element Server Suite
-
Chat logs lost after update from 1.19 -> 1.20
It looks like it's working now.
-
Chat logs lost after update from 1.19 -> 1.20
I updated to package version io.github.thelounge@1.20.1-1 and I still see that logs is a symlink to the ephemeral run directory:
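For anyone who wants to check their own instance, here's a minimal sketch using the Cloudron CLI from memory; the app location is a placeholder and /app/data/logs is my assumption about where the package keeps chat logs:

    # Show what the logs entry actually is (placeholder app location)
    cloudron exec --app thelounge.example.com -- ls -la /app/data/logs
    # If this resolves into /run, the data is ephemeral and won't be backed up
    cloudron exec --app thelounge.example.com -- readlink -f /app/data/logs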
-
Chat logs lost after update from 1.19 -> 1.20
Oh. So because the logs were assumed to be app logs, they were redirected to the ephemeral /run directory and therefore not backed up.
How did this issue persist when we had a whole discussion about how logs are chat logs, not app logs, nearly 6 months ago?
-
Chat logs lost after update from 1.19 -> 1.20
Backups cover a lot of sins. I still have the archived backup from the last time this happened.
Now we need a process for restoring the chat logs. I guess we can start by cloning the app from an old backup, but I'm not sure how to copy files between containers. Any ideas?
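One option, sketched with the Cloudron CLI's file-transfer commands; the app locations are placeholders (the clone restored from the old backup, and the live app), and /app/data/logs is an assumption about the package layout, so verify before running:

    # Pull the chat logs out of the clone restored from the old backup
    cloudron pull --app thelounge-restore.example.com /app/data/logs/ ./old-logs
    # Push them into the live app under a separate directory for manual merging
    cloudron push --app thelounge.example.com ./old-logs /app/data/logs-from-backup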
-
Chat logs lost after update from 1.19 -> 1.20
Ok well the new update loses chat logs again. I guess I can't use Cloudron for IRC.
-
Vaultwarden - Security Enhancement Tip
This doesn't sound right. The number of iterations has to be stored in the database, and it is very often stored right next to the password hash. Changing it to a "unique" number doesn't have any meaningful impact on security, aside from being big enough.
The iteration count is designed to be a flexible way to increase the computational effort required for each cracking attempt. This is helpful because Moore's Law is quite real, and instead of inventing a new hash every 2 years, users and operators can just bump the iteration count to maintain the same expected level of effort an attacker would have to expend with new hardware.
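To make the first point concrete: in Vaultwarden the per-user KDF settings live in the same table as the credentials, so a "unique" count isn't hidden from anyone who can already read the hashes. A rough illustration only; the column names are from my memory of the vaultwarden schema and the database path is a guess for a Cloudron install, so treat both as assumptions:

    # Per-user iteration counts sit right beside the password hashes
    sqlite3 /app/data/db.sqlite3 \
      "SELECT email, client_kdf_iter, password_iterations FROM users;"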
-
SSO / OIDC ?
doesn't say anything about being enterprise only
Oh. Nevermind then.
As for the trick for an app that needs multiple pg databases, I think creating multiple schemas might work, depending on whether the db library used by the app supports setting the schema in the connection URL or by some other means. Support for this feature seems patchy.
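For example, with an ORM like Prisma (which cal.com uses) the schema can ride along in the connection URL; whether other apps' libraries accept something similar is the patchy part. A sketch, assuming the URL exposed by Cloudron's postgresql addon (CLOUDRON_POSTGRESQL_URL) and Prisma's "?schema=" parameter:

    # Same database, two schemas; the "?schema=" parameter is Prisma-specific
    DATABASE_URL="${CLOUDRON_POSTGRESQL_URL}?schema=public"
    SAML_DATABASE_URL="${CLOUDRON_POSTGRESQL_URL}?schema=saml"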
-
SSO / OIDC ?
The SSO docs are in the Self Hosting Quickstart section of the docs and don't say anything about being enterprise only. The instructions look pretty simple; quoted with edits:
Setting up OIDC login
- Set SAML_DATABASE_URL to a Postgres database. Please use a different database than the main Cal instance since the migrations are separate for this database. (snip)
- Set SAML_ADMINS to a comma separated list of admin emails who can configure the OIDC.
- Keep the Client Secret, Client ID and Well Known URL handy for the next step.
- Spin up cal.com on your server and log in with the Admin user (the email ID of which was provided in step 2 for the SAML_ADMINS environment variable).
- Visit {BASE_URL}/settings/security/sso
- Click on Configure SSO with OIDC, then enter the Client Secret, Client ID and Well Known URL from step 3, and click save.
- That's it.
The only thing that gives me pause is that it's asking for separate Postgres database connection info, and I'm not sure if Cloudron is able to provide that. Maybe we can make the main app db and the saml db use different pg schemas?
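If that pans out, carving a second schema out of the single Cloudron-provided database might look roughly like this; a sketch only, where the "saml" name is arbitrary, the env var is the postgresql addon's standard one, and the "?schema=" parameter assumes the app's Prisma-based connection handling:

    # Create a dedicated schema for the SAML/OIDC tables
    psql "$CLOUDRON_POSTGRESQL_URL" -c 'CREATE SCHEMA IF NOT EXISTS saml;'
    # Point the separate connection string at it
    export SAML_DATABASE_URL="${CLOUDRON_POSTGRESQL_URL}?schema=saml"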
(Side note: the fact that the only way to configure OIDC and SAML is through the web ui is... insane. Their OIDC E2E test scenario literally scripts the settings page to enter credentials; there's no way to configure it automatically 🤯)
-
Repository archives ballooned to take up all space on disk
Release 1.20.0 included these PRs related to mirroring, maybe one of them caused the issue:
-
Repository archives ballooned to take up all space on disk
Yeah, that seems to be the issue:
It's currently at 30GB (was 30MB 19 hours ago), exactly:
31069204
Then I ran the "Update Mirrors" task, now it's:
31733536
-
Repository archives ballooned to take up all space on disk
Yeah, it probably has to do with the Update Mirrors cron task that runs every 10m, since that's the only activity on the server. I did notice that the Update Mirrors task run count was about 280 in both cases. Maybe a recent update now leaves junk behind when it updates mirrors, and that junk isn't being cleaned up.
-
Repository archives ballooned to take up all space on disk
So I don't know what "repository archives" are, what they're for, why they take up literally 12x the storage of the actual repository data, why clicking this button fixes it, or why that has no effect on the operation of the site.
-
Repository archives ballooned to take up all space on disk
This happened again today: 48+GB of actually useless "repository archives", whatever that means. ~~Even login fails when the disk is full so I can't even log in to fix it.~~ I was finally able to log in.
Is there a way to set disk quotas for apps so that when they misbehave the whole Cloudron doesn't go belly up? Sigh.
-
Chat logs lost after update from 1.19 -> 1.20
My guess is that they are destroyed when upgrading to version 1.20 of the app, due to a change in the message history storage location described in the Lounge backups regularly stalling topic linked in the opening post of this thread. Maybe there's some mishap when 1.20+ restores files from a backup made at 1.19 or earlier.
This indicates a potential reproduction strategy (a CLI sketch follows the list):
- Create a new The Lounge app at version 1.19
- Log in and create some chat history
- Stop the app, validate the logs are present on the fs, and create a backup
- Upgrade the app to 1.20, observe the chat history is missing
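Sketched with the Cloudron CLI from memory, so treat the exact flags (especially pinning an App Store version and the placeholder location) as assumptions to verify against the CLI docs:

    # 1. Install the old package version at a test location (replace 1.19.x with the real 1.19 package version)
    cloudron install --appstore-id io.github.thelounge@1.19.x --location lounge-test.example.com
    # 2. Log in via the web UI and generate some chat history, then:
    # 3. Confirm logs exist on disk and take a backup
    cloudron exec --app lounge-test.example.com -- ls -la /app/data/logs
    cloudron backup create --app lounge-test.example.com
    # 4. Update the app package and check whether the history survived
    cloudron update --app lounge-test.example.com
    cloudron exec --app lounge-test.example.com -- ls -la /app/data/logs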
@jdaviescoates In case you haven't already, I would strongly recommend that you pin the latest 1.19 backup so that it's not automatically cleaned up before this is sorted out.
-
Chat logs lost after update from 1.19 -> 1.20
Sorry I didn't see your message until now.
Yes, all chat logs were lost. I don't think it's a good idea to purge chat logs automatically. I have logs of dozens of channels over 10 years and they really don't take up that much space if you avoid channels with huge amounts of spam.
IRC has no mechanism for storing messages; the server only transmits them. If you're not present in the channel storing them yourself, then you just miss those messages, including direct messages. This is the benefit and primary purpose of using an IRC bouncer: it stays connected so you can see and store chat history yourself.
-
Repository archives ballooned to take up all space on disk
I don't know if that's what is intended or not. What are the repo archives for? The live repo data was still present, so it doesn't affect live data.
I wanted sorted output because there were thousands of files, so du -h wasn't very helpful.
-
Chat logs lost after update from 1.19 -> 1.20
All my TheLounge chat logs have been lost. After testing backups, I found that the first backup missing logs occurred with the update from 1.19 to 1.20 on 12/15. I have archived the last good backup so they're not completely destroyed.
Maybe related to the change mentioned in Lounge backups regularly stalling?
Cloudron should take these actions at minimum:
- Affected users (anyone running the TheLounge app) should be notified and instructed to archive their last 1.19 backup ASAP
- Identify the cause of the lost logs (just a mishap in the transition to symlinked logs?)
- Validate that logs are still being backed up at all
- Come up with a strategy to merge logs from the backup with the logs generated between 12/15 and today (a rough sketch follows below).
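For the last point, one possible starting point, assuming TheLounge's plain-text logs are laid out the same way in the restored clone and the live app. The paths are placeholders, and this only copies files that are missing entirely; files that exist on both sides still need to be merged by hand, by date:

    # Copy per-channel log files that exist only in the old backup; never overwrite newer files
    rsync -av --ignore-existing /path/to/restored-backup/logs/ /path/to/live-app/logs/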
-
Repository archives ballooned to take up all space on disk
I just ran into an issue where the entire disk was consumed by Gitea repository archives, roughly 48GB.
Here's the top several lines of output from du | sort -n -r:

    46659924 .
    44862768 ./appdata
    43136132 ./appdata/repo-archive
    33266924 ./appdata/repo-archive/19
    9538276 ./appdata/repo-archive/4
    1835424 ./appdata/repo-archive/19/4e
    1797076 ./repository
    1741536 ./repository/mirror
    1725744 ./appdata/packages
    1721100 ./appdata/repo-archive/19/75
    1544116 ./appdata/repo-archive/19/ad
    1217724 ./repository/mirror/erpnext.git
    1210888 ./repository/mirror/erpnext.git/objects
    1187404 ./repository/mirror/erpnext.git/objects/pack
    1005688 ./appdata/repo-archive/19/29
    981364 ./appdata/repo-archive/19/b1
    953240 ./appdata/repo-archive/19/25
    949260 ./appdata/repo-archive/19/b4
    943956 ./appdata/repo-archive/19/da
    932688 ./appdata/repo-archive/19/c9
    918308 ./appdata/repo-archive/19/9d
    916680 ./appdata/repo-archive/19/2b
    914320 ./appdata/repo-archive/19/6b
    911324 ./appdata/repo-archive/19/ac
    900540 ./appdata/repo-archive/19/df
    899540 ./appdata/repo-archive/19/ca
    898788 ./appdata/repo-archive/19/b8
    897392 ./appdata/repo-archive/19/15
    895184 ./appdata/repo-archive/19/2e
    890984 ./appdata/repo-archive/19/65
    890616 ./appdata/repo-archive/19/9b
    889036 ./appdata/repo-archive/19/f7
    838684 ./appdata/repo-archive/19/55
    816740 ./appdata/repo-archive/19/5e
    815840 ./appdata/repo-archive/19/89
My gitea instance is barely used and only has a few repos that I'm mirroring.
I was able to solve the issue by logging in as root (I hadn't changed the root password) and running the
Delete all repositories' archives (ZIP, TAR.GZ, etc..)
cron task from the Site Administration page. Now the app uses 3.3GB of disk.
I hope "repository archives" aren't important, but everything seems to be working ok again... I decided to report this in case there's an upstream issue.
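In case it helps anyone else hitting this: Gitea has a built-in cron job that prunes old repo archives on its own. The section and keys below come from Gitea's config cheat sheet, while the app.ini path is my guess at where the Cloudron package keeps its persistent config, so adjust (and merge by hand if the section already exists) before restarting the app:

    # Append an archive-cleanup cron to Gitea's config, then restart the app
    printf '%s\n' \
      '[cron.archive_cleanup]' \
      'ENABLED = true' \
      'RUN_AT_START = true' \
      'SCHEDULE = @midnight' \
      'OLDER_THAN = 24h' >> /app/data/app.ini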