What's coming in Cloudron 9
-
- Deprecate Ubuntu 20.04 support - we won't remove support yet, but you will get a notification that support is going away soon. I think Jul 2025 is when Ubuntu 20.04 reaches EOL.
- UI Redesign - this is a biggie and requires a lot of work. We already started migrating to Vue.js 3 some months ago. We will also take this opportunity to fix the navigation in our UI. Currently, the whole navigation is crammed under the profile menu.
- App Level (Disk) Storage Limit - This will let you set the maximum disk storage an app can use. Currently, the plan is to add support for XFS project quotas (supported on all the cloud block storage devices) and also a loopback-device-based backend. Maybe in the future, we will add an LVM-based backend as well.
- Backup integrity - store the size and checksum of backups. Also provide a way to "verify" backup integrity on the remote.
- Show backup/restore progress
- Multiple Backup Destinations
- Granular Backup schedule
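For the app-level storage limit, XFS project quotas are usually driven through `xfs_quota`. The commands below are a rough illustration only, not Cloudron's actual implementation: the paths and the project id are invented, and running them requires root and an XFS filesystem mounted with the `pquota` option.

```shell
# Hypothetical sketch of capping one app's data directory with an
# XFS project quota. Paths and the project id (42) are made up.

# 1. Assign the app's data directory to project 42
xfs_quota -x -c 'project -s -p /var/lib/app-data/myapp 42' /var/lib

# 2. Cap the project at a 5 GB hard block limit
xfs_quota -x -c 'limit -p bhard=5g 42' /var/lib

# 3. Inspect per-project usage against the limits
xfs_quota -x -c 'report -p' /var/lib
```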
@girish said in What's coming in Cloudron 9:
Show backup/restore progress
Can you expand on this a bit? The current backup UI shows the progress.
Granular Backup schedule / Multiple Backup Destinations
As in per application? That would be awesome. For example, I would want, say, Nextcloud/Gitea/Redmine backed up to multiple places daily, but MiroTalk/SearXNG monthly and to one place. If it could be done off of tags, that would be wonderful. I would tag apps as stateful or stateless myself (and perhaps add some kind of priority tag). If one could have some kind of Recovery Time Objective/Recovery Point Objective, that would be very "enterprise". Honestly, just a per-app/per-tag granular backup schedule and destination would let one achieve that.
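A tag-driven policy like this is easy to picture; here is a toy Python sketch of the idea (all app names, tags, schedules, and destination names are invented for illustration, not Cloudron APIs):

```python
# Toy model of tag-driven backup policies: each tag maps to a
# schedule and a set of destinations; an app inherits its tag's policy.
policies = {
    "stateful":  {"schedule": "daily",   "destinations": ["s3-main", "nas"]},
    "stateless": {"schedule": "monthly", "destinations": ["s3-main"]},
}

apps = {
    "nextcloud": "stateful",
    "gitea":     "stateful",
    "mirotalk":  "stateless",
}

def backup_plan(app: str) -> dict:
    """Resolve an app's backup schedule/destinations from its tag."""
    return policies[apps[app]]

print(backup_plan("nextcloud"))  # daily, two destinations
print(backup_plan("mirotalk"))   # monthly, one destination
```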
-
@charlesnw this feedback is very valuable!
I could bet girish will take it to heart
-
@charlesnw said in What's coming in Cloudron 9:
Can you expand on this a bit? The current backup UI shows the progress.
We want to show a proper percentage as well as the time remaining when uploading/downloading. The code does not know the size of the backups and thus cannot accurately estimate the remaining time from the upload/download speed. The end result will be "5 mins remaining..."-style progress.
As in per application?
yes, that's the plan!
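A "5 mins remaining..."-style estimate typically extrapolates from recently observed throughput. A minimal Python sketch of that idea (the byte counts and sampling approach here are invented for illustration, not Cloudron's actual code):

```python
# Minimal moving-average ETA estimator: remaining bytes divided by the
# average of recent throughput samples (bytes/second).
def eta_seconds(bytes_done: int, bytes_total: int, samples: list[float]) -> float:
    """samples: recent throughput measurements in bytes/second."""
    if not samples or bytes_done >= bytes_total:
        return 0.0
    avg_speed = sum(samples) / len(samples)  # simple moving average
    return (bytes_total - bytes_done) / avg_speed

# 600 MB left at a smoothed ~2 MB/s -> about 300 seconds
remaining = eta_seconds(400_000_000, 1_000_000_000, [2_000_000, 2_000_000])
print(f"{remaining / 60:.0f} mins remaining...")  # 5 mins remaining...
```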
-
Multiple backup destinations will be awesome. I have so much disk space reserved for resilience!
-
Is there any estimated timeframe for the release of 9.0?
-
I once posted in the forum that I hope the next version will have a feature to customize the reset-password link (and text, if possible) for the case where the Cloudron instance's user directory is linked to another Cloudron or to another user directory provider. The latest version of Cloudron lacks this feature, which I think is important to make things easy for regular users who don't remember the user directory URL (most don't).
-
Since many IPv6/PTR issues have been reported, I revisited the code to double-check.
I found two biggish bugs:
- There was a typo in the IPv4/IPv6 caching code. Because of this, IPv6 would sometimes be returned as undefined.
- On versions older than Ubuntu 24, unbound was configured to use IPv6. Spamhaus ZEN does not answer IPv6 queries for most public VPS providers. I made a fix to make unbound use IPv4 to query Spamhaus.
Finally, I also added IPv6 DNSBL checks and double-checked that the SPF record's "a:" mechanism includes AAAA records.
I am hoping this helps the situation. If not, we can add a flag in 9.1 to make the mail server not use IPv6 at all.
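Forcing unbound onto IPv4 is typically a one-line config change. A hedged sketch of what such a fix could look like (the file path is an assumption, and `do-ip6: no` disables IPv6 for unbound entirely, which is coarser than switching only the Spamhaus queries):

```
# /etc/unbound/unbound.conf.d/ipv4-only.conf (hypothetical path)
server:
    do-ip6: no    # resolve upstream (e.g. DNSBL) queries over IPv4 only
```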
-
I don't see much in terms of new networking features - is there anything on the roadmap, for example multiple NIC support?
https://forum.cloudron.io/topic/7839/more-than-1-network-nic-bind-container-to-networks
(would be a great way to separate local apps like Home Assistant or Immich from externally reachable apps, using just the GUI)
Looking forward to the new backup capabilities, thanks! :D
-
@girish said in What's coming in Cloudron 9:
Since many IPv6/PTR issues have been reported, I revisited the code to double-check.
With some much-needed mail improvements coming, I just wanted to get this one onto your radar in case it can make it in for 9.0 at all. I'm surprised this isn't causing bigger headaches for people, likely just a particular set of circumstances, but I think it is an important fix or improvement to include if we can.
-
Also, we still don't have an exact date for Cloudron 9, but the testing has been going very well. I am hoping we can publish the unstable release in mid-July.
@girish said in What's coming in Cloudron 9:
Also, we still don't have an exact date for Cloudron 9, but the testing has been going very well. I am hoping we can publish the unstable release in mid-July.
Hi Girish. How's it going?
-
@humptydumpty Getting there... Last week we landed support for multiple backup destinations with independent backup schedules. Backups are also now linked internally to their backup destination. This way, if you delete a backup destination, it's clear that Cloudron has lost track of the backups that were made in that destination, i.e. those backup entries are removed and no longer listed in the UI (unlike now, where they linger but cannot actually be restored).
Currently, we are working on adding integrity checks and better progress reporting. With that, we are done.
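The destination linking described above behaves like a cascade delete. A toy Python sketch of the idea (destination and backup names are invented for illustration, not Cloudron's data model):

```python
# Toy model of backups linked to destinations: deleting a destination
# drops the backup entries stored there, since they can no longer be
# restored anyway. All names here are invented.
destinations = {"s3-main", "hetzner-box"}
backups = [
    {"id": "b1", "destination": "s3-main"},
    {"id": "b2", "destination": "hetzner-box"},
]

def delete_destination(name: str) -> None:
    """Remove the destination and every backup entry linked to it."""
    destinations.discard(name)
    backups[:] = [b for b in backups if b["destination"] != name]

delete_destination("hetzner-box")
print([b["id"] for b in backups])  # ['b1']
```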
-
An update on backup integrity: integrity information is now stored in .backupinfo files on the remote, alongside the backups. The file contains the sha256 of the backup. For tgz, it's a single hash; for rsync, it contains individual hashes. Further, the backupinfo file is signed using a private key, and the signature is stored in the Cloudron database. Using this setup, we can verify the authenticity and integrity of the backupinfo file (i.e. that it was created by the Cloudron backup system and was not altered), and we can also check that the backups themselves are not corrupt using the sha256.
While implementing this, I have also added fileCount and size to each of the backup entries, so you can get an idea of how many files are in a backup and its total aggregated size.
Currently, we are working on the integrity verifier, i.e. you can click a "Check integrity" button and it will verify the integrity of the backup. This is a bit complicated because you have to download the backup to check the integrity...
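The scheme above (a per-backup sha256 plus a signed .backupinfo whose signature lives in the database) can be sketched in a few lines of Python. Note this is only an illustration of the shape of the design: it uses an HMAC with a server-side secret as a stand-in for the real private-key signature, and the file layout is invented.

```python
import hashlib
import hmac
import json

SECRET = b"server-side key"  # stand-in for Cloudron's actual private key

def sha256_file(path: str) -> str:
    """Stream a file through sha256 (backups can be large)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def make_backupinfo(path: str) -> tuple[bytes, str]:
    """Return the .backupinfo payload and its signature (stored separately)."""
    info = json.dumps({"sha256": sha256_file(path)}).encode()
    sig = hmac.new(SECRET, info, hashlib.sha256).hexdigest()
    return info, sig

def verify(path: str, info: bytes, sig: str) -> bool:
    """Check the info file wasn't altered, then check the backup itself."""
    expected = hmac.new(SECRET, info, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # .backupinfo is not authentic
    return json.loads(info)["sha256"] == sha256_file(path)
```

Verification checks the signature first, so a tampered .backupinfo is rejected before any hashes are compared; only then is the backup itself re-hashed against the recorded sha256.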
-
@girish said in What's coming in Cloudron 9:
This is a bit complicated because you have to download the backup to check the integrity...
Why not have a simpler level 1 light integrity check that is online-only, using the file sizes and whatever else doesn't need to be downloaded?
The level 2 deep check would then download and sift through all the files.
-
@girish said in What's coming in Cloudron 9:
This is a bit complicated because you have to download the backup to check the integrity...
@robi said in What's coming in Cloudron 9:
Why not have a simpler level 1 integrity light check that is online only, with the file sizes and any files one doesn't need to download for example.
Yeah, couldn't a Level 1 "Check integrity" just essentially be "do the hashes match"? (Although presumably they always will, otherwise the backup wouldn't be marked as having successfully completed?)
And a Level 2 "Full Integrity Check" ("this will take a long time as it requires downloading the full backup to verify...").
Either way, it all sounds like great progress, thanks!
-
The idea was to check for bitrot and give a good feeling about the backup. Just checking the sizes doesn't mean much. AFAIK, file metadata is stored in different sectors of the disk than the actual data, so the sizes being readable and matching says little about the data itself. (For tgz, there is also only one file size to check.)
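The point about sizes is easy to demonstrate: two payloads of identical size still hash differently, so only a checksum catches a flipped bit. A tiny Python illustration:

```python
import hashlib

# Two "backups" of identical size; one has a single corrupted byte (bitrot).
original = b"A" * 1024
rotted = b"A" * 1023 + b"B"

assert len(original) == len(rotted)  # a size-only check sees no problem
h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(rotted).hexdigest()
print(h1 == h2)  # False: the checksum catches the corruption
```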