You are not wrong in principle: distributing workloads reduces blast radius, and that approach makes perfect sense in many scenarios.
Our setup, however, is slightly different. We operate our own colocation facility (4K rack capacity) with direct, hands-on access to the hardware. For this particular server, there is a live mirror in place with continuous rsync-based synchronization, so the data is always effectively warm. Internally, the servers are connected over a dedicated 10 Gbps private network. In practical terms, this means restoring or promoting a mirror is not a lengthy operation.
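For anyone curious what that mirror loop looks like in practice, here is a minimal sketch. The paths, hostname, and interval are placeholders, and the real setup could just as well run rsync from cron or a systemd timer rather than a wrapper like this:

```python
#!/usr/bin/env python3
"""Minimal sketch of a continuous rsync mirror loop.

Paths and hostname are hypothetical -- adjust for your environment.
Assumes key-based SSH auth between the primary and the mirror.
"""
import subprocess
import time

SOURCE = "/var/backups/cloudron/"           # hypothetical source path
MIRROR = "mirror01:/var/backups/cloudron/"  # hypothetical mirror host:path
INTERVAL_SECONDS = 300                      # sync every 5 minutes

def sync_once() -> bool:
    """Run one rsync pass; return True on success."""
    result = subprocess.run(
        [
            "rsync",
            "-a",        # archive mode: preserve perms, times, symlinks
            "--delete",  # keep the mirror an exact copy of the source
            "--partial", # resume interrupted transfers on the private link
            SOURCE,
            MIRROR,
        ],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"rsync failed: {result.stderr.strip()}")
    return result.returncode == 0

if __name__ == "__main__":
    while True:
        sync_once()
        time.sleep(INTERVAL_SECONDS)
```

Over a dedicated 10 Gbps link, each pass only moves the delta, so the mirror stays within minutes of the primary.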
It is also worth noting that these instances are not customer-facing. They support our internal workflows, not production services for external clients, so the operational impact of a short outage is limited by design. Even in the worst case, an instance going down does not trigger customer downtime or a flood of support calls.
The main reason we run Cloudron on a single server is automation. We rely heavily on the Cloudron API for provisioning, backups, restores, and lifecycle management, and keeping everything under one Cloudron instance allows these processes to remain fully automated and consistent. Granted, this approach has its costs.
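To give a flavor of that automation, here is a rough sketch of the kind of loop we mean. The endpoint paths and response shapes are assumptions about the general form of the Cloudron API, so verify them against the API docs (https://docs.cloudron.io/api/) before relying on this; the domain and token are placeholders:

```python
#!/usr/bin/env python3
"""Illustrative sketch: backing up every app via the Cloudron REST API.

Endpoint paths here are assumptions -- check https://docs.cloudron.io/api/
before use. Domain and token are placeholders.
"""
import requests

CLOUDRON = "https://my.example.com"  # placeholder dashboard domain
TOKEN = "CLOUDRON_API_TOKEN"         # placeholder API token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def list_apps() -> list[dict]:
    """Fetch installed apps (assumed endpoint: GET /api/v1/apps)."""
    resp = requests.get(f"{CLOUDRON}/api/v1/apps", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("apps", [])

def backup_app(app_id: str) -> None:
    """Trigger a backup (assumed endpoint: POST /api/v1/apps/:id/backup)."""
    resp = requests.post(
        f"{CLOUDRON}/api/v1/apps/{app_id}/backup", headers=HEADERS, timeout=30
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # Because everything lives under one Cloudron instance, a single loop
    # covers every app -- no cross-server coordination required.
    for app in list_apps():
        backup_app(app["id"])
```

The point is less the specific calls and more that one instance means one API endpoint, one token, and one script; split the apps across several Cloudrons and every such job needs per-server credentials and coordination logic.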
Running multiple Cloudron instances on separate servers would significantly increase operational complexity and require manual coordination, which defeats the purpose of our setup. Given our mirrored data, high-speed internal networking, and automated recovery, a single Cloudron control panel is a deliberate and acceptable trade-off for our use case.