Everything I'm about to say should be weighed against your actual needs and what you want to achieve. Are you hosting for others? Do you need to be able to restore within hours? Then some of what I say is not for you. If you're instead looking for an "I messed up, everything is gone, and I need a way to recover, even if it takes me a few days" plan, then more of what I say will be applicable.
I have generally built my backups in tiers, for redundancy and disaster recovery.
I would back up the first tier to something close and "hot." That is, use rsync or a directly mounted disk local to your instance.
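To make that concrete, here's a minimal sketch of the hot tier as a cron job; both paths (including Cloudron's backup directory) are assumptions you'd adjust for your setup:

```bash
# /etc/cron.d/mirror-backups -- mirror the Cloudron backup directory to a
# second disk three times a day. Paths are placeholders; adjust them.
30 */8 * * * root rsync -a --delete /var/backups/ /mnt/backupdisk/cloudron/
```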
I would consider an SSHFS-mounted disk a second option, if #1 does not have enough space.
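If you go the SSHFS route, the mount itself is short; user, host, and paths below are placeholders:

```bash
# Mount remote storage over SSHFS so backups can be written to it like a
# local directory.
sudo apt-get install -y sshfs
sudo mkdir -p /mnt/offsite
sudo sshfs -o reconnect,allow_other,IdentityFile=/root/.ssh/id_ed25519 \
    backupuser@storage.example.com:/srv/backups /mnt/offsite
```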
I would have a cron job that backs up your backup: if you back up 3x/day using Cloudron (with hardlinks, to preserve space), do a daily copy to "somewhere else." That could be via restic/duplicati/etc., as you prefer.
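A sketch of that second-tier copy using restic; the repo path and password file are assumptions, and you'd run the last two commands from a daily cron job:

```bash
# Second-tier copy: restic dedupes and encrypts, so daily runs stay small.
export RESTIC_REPOSITORY=/mnt/offsite/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic init                                          # first run only
restic backup /var/backups                           # daily, via cron
restic forget --keep-daily 7 --keep-weekly 4 --prune # trim old snapshots
```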
I would do a weekly dump to S3 (again, via cron if possible), either as a single large file (if feasible) or as multiple smaller files. Those could be straight tar files, or tar.gz if you think the data is compressible. Set up a lifecycle rule to move it quickly (after one day?) to Glacier if you're thinking about cost.
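Something like the following, assuming the AWS CLI and a bucket named my-backup-bucket (a placeholder):

```bash
# Weekly dump: stream one big tarball straight into S3, no local temp file.
WEEK=$(date +%G-W%V)
tar -czf - /var/backups | \
    aws s3 cp - "s3://my-backup-bucket/weekly/cloudron-$WEEK.tar.gz"

# Lifecycle rule: push weekly/ objects to Glacier after one day.
cat > lifecycle.json <<'EOF'
{"Rules": [{"ID": "weekly-to-glacier", "Status": "Enabled",
            "Filter": {"Prefix": "weekly/"},
            "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}]}]}
EOF
aws s3api put-bucket-lifecycle-configuration \
    --bucket my-backup-bucket --lifecycle-configuration file://lifecycle.json
```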
At the end of the month, keep only one monthly in Glacier. Note that Glacier classes have a minimum storage duration (90 days for Glacier Flexible Retrieval), so deleting objects sooner incurs a prorated early-deletion charge; some thought may need to be given here.
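One way to do the monthly pruning, again with placeholder names (and mind the minimum-duration fee just mentioned):

```bash
# Delete every object under monthly/ except the newest one.
aws s3api list-objects-v2 --bucket my-backup-bucket --prefix monthly/ \
    --query 'sort_by(Contents, &LastModified)[:-1].Key' --output text |
tr '\t' '\n' | while read -r key; do
    if [ -z "$key" ] || [ "$key" = "None" ]; then continue; fi
    aws s3 rm "s3://my-backup-bucket/$key"
done
```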
That's perhaps a lot, but it depends on your data/what you're doing.
You could also go the other way: if you expect your cloud backup costs to be too high, do the following:
1. Pick up a ~$300 NAS and a pair of 8-16 TB hard drives.
2. Install TrueNAS on it, and put the disks in a ZFS mirror.
3. Set up a cron job on the NAS to pull down your Cloudron backups on a periodic (daily/weekly) basis. restic or similar will be your friend here; a sketch follows below.
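Here's that sketch, with placeholder hosts and paths: it pulls the Cloudron backup directory to a staging area on the ZFS pool, then snapshots it into a local restic repo.

```bash
# On the NAS, run via cron: pull the server's backups, then snapshot them.
rsync -a --delete root@cloudron.example.com:/var/backups/ \
    /mnt/tank/staging/cloudron/

export RESTIC_REPOSITORY=/mnt/tank/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic backup /mnt/tank/staging/cloudron
restic forget --keep-daily 14 --keep-weekly 8 --keep-monthly 12 --prune
```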
That's the... $800 or so solution, but you would weigh that cost against how much you're going to be paying in cloud storage. (That is, if you decide you're going to be paying $200+/year for backups, the NAS starts to look attractive.) The incremental backups should also be much smaller to pull down once the initial sync is done.
A budget version of the NAS approach is to buy one external drive, run your backups to it locally, and pray the drive doesn't die underneath you, or worse, during a restore. I would personally chuck the single-drive idea out the window, but some people love to gamble.
Recovery from the offline backup will be annoying/painful: you'd have to upload it, and then configure your restore to point at it. However, it would be your "last ditch" recovery approach. This goes back to my opening point: your backups are dictated, in no small part, by your budget and needs.
If you have money to spare, use a direct- or SSHFS-mounted disk, and just back up to it. If you are looking for some savings, you can price out S3-compatible storage (B2 tends to be cheapest, I think, but don't forget to estimate how many operations your backup will need; those API calls can get expensive if you have enough small objects in your backup). If you use AWS, you can also move data into Glacier, which is significantly cheaper per TB stored. Having at least one disconnected backup (and a sequence of them) matters in the event of things like ransomware-style attacks (if that is a threat vector for you). Ultimately, each layer adds cost, complexity, and time to data recovery.
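For the pricing exercise, a quick back-of-envelope calculation helps. Every rate below is a placeholder; plug in your provider's current price sheet before trusting the output:

```bash
# Back-of-envelope monthly cost estimate for object storage.
STORED_GB=500        # total size of the backup set
API_OPS=200000       # list/read/write calls per month (small objects add up)
RATE_GB=0.005        # $/GB-month (placeholder)
RATE_10K_OPS=0.004   # $ per 10,000 operations (placeholder)

awk -v gb="$STORED_GB" -v ops="$API_OPS" -v rg="$RATE_GB" -v ro="$RATE_10K_OPS" \
    'BEGIN { s = gb * rg; o = ops / 10000 * ro;
             printf "storage $%.2f + ops $%.2f = $%.2f/month\n", s, o, s + o }'
```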
Finally, remember: your backups are only as good as your recovery procedures and testing. If you never test your backups, you might discover you did something wrong all along, and have been wasting time from the beginning. I find Cloudron's backups to be remarkably robust, and was surprised (pleasantly!) by a recent restore. But, if you mangle backups via cron, etc., then you're just spending a lot of money moving zeros and ones around...
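If you want to make that testing concrete, a minimal restore drill (same assumed repo as the earlier sketches) looks like:

```bash
# Periodic restore drill: verify the repo, restore the latest snapshot to
# scratch space, eyeball it, throw it away.
export RESTIC_REPOSITORY=/mnt/offsite/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass

restic check                                 # repo integrity
restic restore latest --target /tmp/restore-test
du -sh /tmp/restore-test                     # does the size look sane?
rm -rf /tmp/restore-test
```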