As already suggested, for large data it's best to use a hard disk / storage box. Just NFS- or CIFS-mount it. S3 storage is not ideal for large backups, especially with a large file count. Ideally, choose block storage in the same data center.
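For anyone who hasn't done it before, mounting the storage box is a one-liner. A rough sketch, where the hostname `backup.example.com`, the export/share names, and the mount point are all placeholders to substitute with your provider's actual values:

```shell
# NFS variant: mount the remote export at /mnt/backup
sudo mkdir -p /mnt/backup
sudo mount -t nfs backup.example.com:/export/backup /mnt/backup

# CIFS/SMB variant, with credentials kept in a root-only file
# (the file contains username=... and password=... lines)
sudo mount -t cifs //backup.example.com/backup /mnt/backup \
    -o credentials=/root/.smbcred

# To survive reboots, add an /etc/fstab line instead (NFS shown):
# backup.example.com:/export/backup  /mnt/backup  nfs  defaults,_netdev  0  0
```

The `_netdev` option just tells the boot sequence to wait for the network before trying the mount.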
In the future, we do plan to integrate something like restic or borg which will give us encrypted differential backups. I don't expect this in the next 3 months though.
I think I looked into this ages ago, but from what I remember it makes extensive use of plugins (like Node-RED). Is that still the case? If so, it's always a challenge to package such a platform as an app on Cloudron.
@humptydumpty I think low-to-medium traffic is the key there, since latency makes the biggest difference to perceived speed in web browsing & app interfaces (as opposed to things like downloading/streaming).
updown.io is my go-to for monitoring TTFB times from various locations and the cheapest way I've found of doing that.
I usually target <200ms total TTFB from the nearest location and <1s from the furthest for the result to feel satisfactory everywhere.
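updown.io reports TTFB per location, but you can spot-check the same number yourself with curl. A minimal sketch of the pass/fail rule above — the 200ms/1s thresholds are from this post, while the function name, example URL, and sample measurements are made up for illustration:

```shell
# Spot-check TTFB by hand (network-dependent, shown for reference only):
#   curl -o /dev/null -s -w '%{time_starttransfer}\n' https://example.com

# Check two measurements (in ms) against the targets above:
# nearest location under 200 ms, furthest under 1000 ms.
ttfb_ok() {
  nearest_ms=$1
  furthest_ms=$2
  [ "$nearest_ms" -lt 200 ] && [ "$furthest_ms" -lt 1000 ]
}

ttfb_ok 150 800 && echo "targets met"       # sample numbers, not real data
ttfb_ok 250 800 || echo "nearest too slow"  # fails the 200 ms target
```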
Much of the TTFB effect is from the app itself.
As an example: we have Windows servers on Contabo, accessed via Remote Desktop, and the screencasting is as responsive and quick as any.
If you wanted a higher port speed, you'd still probably find upgrading the package on Contabo cheaper than moving to a provider that offers higher port speeds on its lower-tier packages.
For me the decision is: Contabo when I need cost-efficient CPU & RAM, Hetzner when I want the VPS to be collaborated on or potentially moved between accounts without migrating the underlying VPS.
I've tried pretty much every other provider over the years. When you find one or two that cover everything you need, perform well, and have the lowest prices, they become a set-and-forget utility, and I'm grateful not to have to think about moving anything again for the foreseeable future.