Full Cloudron Backups are failing on Cloudron version 7.7.0
-
@nebulon I am getting all the same results as @ChristopherMag, and I have the following in my `mnt-cloudronbackup.mount`:
```
[Unit]
Description=cloudronbackup
Requires=unbound.service
After=unbound.service
Before=docker.service

[Mount]
What=//192.168.100.130/Backups/Backup/Cloudron Backup
Where=/mnt/cloudronbackup
Options=credentials=/home/yellowtent/platformdata/cifs/mnt-cloudronbackup.cred,rw,iocharset=utf8,file_mode=0666,dir_mode=0777,uid=yellowtent,gid=yellowtent
Type=cifs

[Install]
WantedBy=multi-user.target
```
That is the correct setup for my local TrueNAS file server. I went onto the TrueNAS box and made sure SMB1 (CIFS) was enabled on the Samba service to rule that out. I did a reboot of my Cloudron server and tried again, with the same issue. The weird thing is that if I go to Backups and click the Remount Storage button, it still remounts and the green indicator is there.
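In case it helps with debugging, the mount can also be inspected from the shell; these are just the standard commands, using the unit and mount point names from the file above:
```
# State of the systemd mount unit
systemctl status mnt-cloudronbackup.mount

# Confirm the share is actually mounted and see whether
# free-space queries against it respond
mount | grep cloudronbackup
df -h /mnt/cloudronbackup
```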
-
This seems to be related to a kernel bug in `5.15.0-102`.

I have run `sudo apt-get update` and then `sudo apt upgrade`, but didn't say yes, just to see what packages would be upgraded, and I didn't see anything that would upgrade the kernel:
```
cmagnuson@cloudron2:~$ sudo apt upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
Get more security updates through Ubuntu Pro with 'esm-apps' enabled:
  libjs-jquery-ui python3-scipy
Learn more about Ubuntu Pro at https://ubuntu.com/pro
The following NEW packages will be installed:
  ubuntu-pro-client
The following packages will be upgraded:
  apt apt-utils collectd collectd-core collectd-utils coreutils ethtool
  firmware-sof-signed libapt-pkg6.0 libcollectdclient1 libldap-2.5-0
  libldap-common libwbclient0 python3-update-manager snapd
  ubuntu-advantage-tools ubuntu-pro-client-l10n update-manager-core
  update-notifier-common
19 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 32.5 MB of archives.
After this operation, 1,268 kB disk space will be freed.
Do you want to continue? [Y/n] ^C
```
At this point it seems like a workaround is needed to bypass the df check, or to do something else to get backups while waiting for Ubuntu to release a kernel update.
I am going to try reconfiguring backups to go to local storage and then manually copy them over to the mountpoint, so that at least I can get a backup for today.
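Roughly like this, once the local backup finishes (a sketch; the timestamped snapshot directory name is a placeholder):
```
# Copy the finished local snapshot onto the CIFS mount
# (<snapshot-dir> stands in for the actual timestamped directory under /backups)
rsync -a --progress /backups/<snapshot-dir> /mnt/cloudronbackup/
```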
@skeats Can you run `uname -r` to confirm what kernel version you are on?
-
@ChristopherMag I am on kernel version `5.15.0-102`
-
I have made a `/backups` directory, run `sudo chown yellowtent:yellowtent /backups`, configured the system to back up to that directory, and am currently running a backup. After that completes I will work on copying it over to `/mnt/cloudronbackup`.
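(For reference, the setup was just these two commands; the `sudo mkdir` is implied since the directory sits at the filesystem root:)
```
sudo mkdir /backups
sudo chown yellowtent:yellowtent /backups
```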
@skeats PS: if you want to paste logs, I recommend putting ``` on a new line before the log output and ``` on the line after the log output; the system will then format it with a fixed-width font and make it easier to read. The same goes for commands inline: add ` at the start of the command and ` at the end, so that ps -a turns into `ps -a`.
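For example, typing this into the post editor:
````
```
<log output goes here>
```
````
renders the contents as a fixed-width block.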
You can try it out by editing your previous posts if you want.
-
If you’re on 22.04, you can safely update to the 6.x kernel: https://www.omgubuntu.co.uk/2023/08/ubuntu-22-04-linux-kernel-6-2
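(To see where a box currently stands before doing that, something like this should do; `apt-cache policy` just reports the candidate version of the 22.04 HWE kernel metapackage:)
```
# Current running kernel
uname -r

# Candidate version of the 22.04 HWE kernel metapackage
apt-cache policy linux-generic-hwe-22.04
```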
-
@nebulon I have added the previously used Cloudron backup directory as a volume in Cloudron and mounted it.
I am about to run:
```
cp -r /backups/2024-04-11-161325-365 /mnt/volumes/43cfcd99b751486ea8b2f56a194eb88b
```
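(To verify afterwards that the copy is intact, a recursive compare should work; no output means the trees match:)
```
diff -r /backups/2024-04-11-161325-365 \
  /mnt/volumes/43cfcd99b751486ea8b2f56a194eb88b/2024-04-11-161325-365
```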
Given that the snapshot directories are different in the local `/backups` folder vs. the remote `/mnt/volumes/43cfcd99b751486ea8b2f56a194eb88b`, is that going to cause an issue now, or later when I switch back to using the original CIFS share as the backup destination?
-
@necrevistonnezr The article made it sound like running `sudo apt update && sudo apt full-upgrade` would get the update, and this was back in 2023, so wouldn't we already have it, given that Cloudron runs apt to do updates periodically?

I also didn't see anything that would update the kernel in the output of `sudo apt upgrade` listed in my post above. The article also indicated that this would be included in new ISOs, but this server was installed from a freshly downloaded ISO about a month ago, so it doesn't seem to be included in recent ISOs either.
-
@necrevistonnezr If I am understanding this correctly, Ubuntu has this notion of a Hardware Enablement (HWE) kernel that it maintains for older releases, but it isn't the default. So in my case I believe I would need to run `sudo apt install linux-generic-hwe-22.04`
to get the HWE kernel, which would be a 6.x kernel and should resolve this issue.
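In full, that would presumably be (untested as of this writing):
```
sudo apt update
sudo apt install linux-generic-hwe-22.04
sudo reboot
```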
-
@skeats I have run `sudo apt install linux-generic-hwe-22.04`, rebooted, confirmed via `uname -r` that I am running `6.5.0-27-generic`, reconfigured my backup settings (you probably won't have to do this, as I was changing things to get a local backup), and now I can back up to the CIFS share again.

@necrevistonnezr Thank you for pointing me in the right direction to be able to use the HWE kernel to bypass this bug.
-
@ChristopherMag Thank you! I ran that command and it worked; my backups are working again with the `6.5.0-27-generic` kernel!
-
> @ChristopherMag said in Full Cloudron Backups are failing on Cloudron version 7.7.0:
>
> @necrevistonnezr The article made it sound like running `sudo apt update && sudo apt full-upgrade` would get the update, and this was back in 2023, so wouldn't we already have it, given that Cloudron runs apt to do updates periodically?

`apt full-upgrade` is non-standard and not run by Cloudron, I believe.
-
@girish I don't think it requires seal encryption.
I believe using seal encryption causes a different code path to be taken that avoids the underlying bug, but it is not the presence or absence of seal encryption in and of itself that fixes the issue.
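(For context, `seal` is a standard mount.cifs option that requests SMB3-level encryption; in a unit file like the one earlier in this thread it would presumably end up appended to `Options=`, though Cloudron manages that file itself, so treat this as a sketch only:)
```
# Hypothetical Options= line with seal added (requires SMB3 or above on the server)
Options=credentials=/home/yellowtent/platformdata/cifs/mnt-cloudronbackup.cred,rw,iocharset=utf8,seal,file_mode=0666,dir_mode=0777,uid=yellowtent,gid=yellowtent
```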
After a kernel upgrade, this works fine without seal encryption, though I have now enabled it since it seems like a good idea to have anyway. Making it the default is probably a good idea, but as long as Ubuntu 22.04's default kernel version is `5.15.0-102`, this issue will likely still occur.
-