Scaling / High Availability Cloudron Setup
-
This is something I've been batting around implementation ideas on for a little while now. There's a ton of provider-to-provider variability to account for in automating some of it, so I was leaning toward more capable cluster managers that are already available off the shelf. Easily the most capable is Kubernetes, but it comes with a lot of added complexity. Based on the way Cloudron works, it's distinctly possible to entertain some of this stuff, and I've been sketching out a number of different ideas. Nothing is formally roadmapped afaik right now.
That said, in thinking about options, it would be helpful to know what changes, in your experience, these sorts of ideas would enable. Adding an additional node, which seems to be what 3 of your 5 ideas are focused on (load balancing, hot standby, and auto-scaling), may or may not be the best approach to minimize downtime, depending on how "normal" use would pan out. It's already not all that difficult to keep an alternate machine restored from backups as a standby, but given the way the system handles app-level failures, it's hard to say in which cases that would be useful; there's added difficulty in reversing that failover and keeping it real-time.
Ultimately, what probably makes the most sense to close the gap on those goals, without messing too much with the underlying architecture and existing packaging, is some sort of coordinated cluster manager that keeps the single-container approach but allows the system to reallocate those app containers across k different servers. Something short of Kubernetes could achieve this, but it will take a lot of work to pull off. For these reasons, I've started looking more at HashiCorp's Nomad as a potential solution to the cluster-management side of things, but I'm still in the very early stages of working out what a Cloudron implementation would look like. At its full potential, this could enable things like multi-region and even multi-provider deployments. Ideally, the details of managing this would be hidden away behind the Cloudron interface, but I've not yet begun spiking out an actual implementation.
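For concreteness, a rough sketch of what "keep the single-container model, let the scheduler pick the node" might look like as a Nomad job. This is purely hypothetical - the job name, image, datacenter, and resource numbers are placeholders I've made up, not anything Cloudron ships:

```hcl
# Hypothetical sketch: one Cloudron-style app as a Nomad service job.
# All names here (image, ports, datacenter) are illustrative placeholders.
job "app-example" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    count = 1  # one container per app, as today; Nomad decides *which* node runs it

    network {
      port "http" { to = 8080 }
    }

    task "app" {
      driver = "docker"
      config {
        image = "registry.example.com/app-example:latest"
        ports = ["http"]
      }
      resources {
        cpu    = 500  # MHz
        memory = 256  # MB
      }
    }
  }
}
```

The point is that nothing about the app packaging has to change; the scheduler just gains the freedom to place (and re-place) the container across the cluster.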
I'd love some more thoughts and feedback on the approach generally though!
-
I would also love to see an HA setup for larger installations (which, in my opinion, often also need some kind of identity provider solution, such as Shibboleth or FreeIPA, for external apps). The Nomad solution looks very promising and could possibly be implemented as a paid premium add-on for larger installations.
I was personally thinking about a very simple solution for an active-passive setup with just two instances using the snapshotted backups. The backups could be replicated (via incremental rsync) to a passive instance that would store them locally for a very quick restore. Incremental syncs would not require much bandwidth or downtime, and restoring locally stored backups would be fairly quick.
Switching back from the formerly passive instance to the previously failed or newly set up instance would most likely have to be done manually. A fully automated cluster with recovery would require at least three hosts (for quorum) and might be too much overhead for smaller instances.
Would this be something that could be considered in future developments?
-
@NCKNE Pieces of that definitely could be - I wonder about the appetite for a hot/cold HA standby setup in the community versus an active-active clustered sort of approach. I know for myself, I'm not a big fan of paying for servers to sit there "just in case" as much as I prefer to utilize a little less across more machines and have normal operating headroom with some ability to absorb failures. That's just me though, so the more input on this topic we can get to inform what everyone values, the better!
Insofar as external app SSO goes, I very much agree that it is an important addition for the future, and I have a somewhat simplified solution that I'm working on (as opposed to the beast that is Shibboleth, since I've looked at packaging it for Cloudron and been...put off by the effort). The drive to do so has also been the thought in my mind that a big part of an app that would allow you to leverage Cloudron as an IdP would be a similar sort of flexibility that makes the rest of the system so strong. I'm aiming for a multi-system IdP app, in essence, which would allow for SAML, OAuth2, and potentially CAS exchanges for a start. I think it would be great to get RADIUS into the mix as well, though that may be better served as its own app. There are some outstanding challenges with the way the Cloudron LDAP system is set up presently, especially with respect to groups, as well as some profile fields, that will need to be sorted out before that's at its full potential, but hopefully we can get a proof of concept available at some point in the near future.
-
@jimcavoli said in Scaling / High Availability Cloudron Setup:
@NCKNE Pieces of that definitely could be - I wonder about the appetite for a hot/cold HA standby setup in the community versus an active-active clustered sort of approach. I know for myself, I'm not a big fan of paying for servers to sit there "just in case" as much as I prefer to utilize a little less across more machines and have normal operating headroom with some ability to absorb failures. That's just me though, so the more input on this topic we can get to inform what everyone values, the better!
I am absolutely with you here - having passive servers just sitting there, bored and wasting energy, is nothing to aim for. I was just spinning ideas in my head to allow for a quick restore in case of a failure, and a simple solution could be to back up to a remote server's disk. Having lots of data (TBs) in apps like Nextcloud, and only having full backups, made the wish for incremental backups to a standby location come up.
Insofar as external app SSO goes, I very much agree that it is an important addition for the future, and I have a somewhat simplified solution that I'm working on (as opposed to the beast that is Shibboleth, since I've looked at packaging it for Cloudron and been...put off by the effort). The drive to do so has also been the thought in my mind that a big part of an app that would allow you to leverage Cloudron as an IdP would be a similar sort of flexibility that makes the rest of the system so strong. I'm aiming for a multi-system IdP app, in essence, which would allow for SAML, OAuth2, and potentially CAS exchanges for a start. I think it would be great to get RADIUS into the mix as well, though that may be better served as its own app. There are some outstanding challenges with the way the Cloudron LDAP system is set up presently, especially with respect to groups, as well as some profile fields, that will need to be sorted out before that's at its full potential, but hopefully we can get a proof of concept available at some point in the near future.
Wow! That would be awesome and, in my opinion, a HUGE step for Cloudron toward becoming enterprise-ready. Together with high-availability / load-balancing clustering, Cloudron could easily be used in larger environments as well.
-
@tkd said in Scaling / High Availability Cloudron Setup:
Ability to use floating IPs
Note that this is possible already. Get a floating IP, then go to the Network view and put the IP there. Cloudron will now use that IP for the DNS. Many users already use it this way with Elastic IPs as well.
Ability to scale based on the number of applications running / resources needed - adding additional Cloudron nodes?
This is on our radar and definitely doable, but the biggest challenge for us has been justifying the implementation of complex features like these, as we haven't found customers who would be willing to pay for them. If you are in the enterprise/medium-business bracket and willing to work with us here, please contact us at support@cloudron.io.
-
Certainly a fan of Cloudron and subscriber.
Also a fan of (the perhaps lesser known) D2C.io, which does make clustered HA setups pretty easy but doesn't have the same App ecosystem or community yet.
Maybe you guys could collaborate?
-
I also note that Hetzner Cloud have just added a load balancers feature which I think could be used to scale Cloudron too, see
-
@jdaviescoates I think most hosts offer load balancers - but keeping the LB within the app would keep it more portable.
I respect that D2C has overlap, but I still see them as distinct, with potential for cross-pollination. An example of their clustered app setups and minimum container counts can be seen here: https://docs.d2c.io/getting-started/stack-hub/
D2C is still a hosted, proprietary solution that includes support services, whereas Cloudron is open-source with less GUI to tinker with (or be intimidated by), so the target audiences are different. It might be that the way D2C works, not being open-source, is not compatible, and Kubernetes (a la https://kubeapps.com) would be a more compatible approach?
-
@robi From what I can tell, Portainer seems like a management interface for many orchestrators, which is a level removed from the actual cluster scheduler itself, and therefore a bit higher-level than the next step we'd need in the Cloudron journey to clustered operation. Frankly, even though I've advocated (and will likely continue to) for a largely HashiCorp-based approach, the first/easiest thing might be to experiment with Docker Swarm. I'm not a particular fan of Swarm, but to be fair, it's been a while since I seriously evaluated it. I'm still a big fan of Nomad specifically for this use case, and I think it is the best fit for the problem. I do have a bit of work done on the "HashiStack" approach already, but it's going to be a pretty seismic change if I get it finished, and I've not yet explored all the tendrils on the management/box side that will need to be updated. I hope to get some more serious progress laid down on that around the Christmas holidays.
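As a rough illustration of what the Swarm experiment might look like (the image, names, and constraints here are placeholders I've invented, not anything Cloudron actually ships), a stack file keeping the one-container-per-app model could be as small as:

```yaml
# Hypothetical Swarm stack file: one replica per app, placed by the scheduler.
version: "3.8"
services:
  app:
    image: registry.example.com/app-example:latest
    deploy:
      replicas: 1            # keep one container per app, as Cloudron does today
      placement:
        constraints:
          - node.role == worker
      restart_policy:
        condition: on-failure
```

This would be deployed with `docker stack deploy -c stack.yml app-example` against a swarm created via `docker swarm init`; Swarm then handles rescheduling the container if its node dies.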
-
I love the idea of HashiCorp's Nomad but never got around to testing it. Terraform, Vagrant, and Vault are their better-known products, which have become standards for those interested.
The other option is a Cloudron home-cooked solution with HAProxy, an Nginx cluster, Unison, and clustered versions of whatever databases are needed.
One caution: clustered DBs come with performance trade-offs - more so for mirrored multi-master setups, less so for master-slave failover.
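For the HAProxy piece of that home-cooked idea, a minimal active/backup sketch might look like the fragment below. Hostnames and ports are placeholders, and TCP passthrough is assumed so TLS still terminates on the Cloudron hosts themselves:

```
# Hypothetical haproxy.cfg fragment: active/backup failover between two hosts.
frontend https_in
    bind *:443
    mode tcp
    default_backend cloudron

backend cloudron
    mode tcp
    option tcp-check
    server primary   cloudron-a.example.com:443 check
    server secondary cloudron-b.example.com:443 check backup
```

The `backup` keyword gives the master-slave failover behaviour mentioned above: the secondary only receives traffic when the primary's health check fails.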
-
Starting from the data perspective:
- DRBD - https://www.linbit.com/drbd/
Distributed Replicated Storage System
DRBD software is a distributed replicated storage system for the Linux platform. It is implemented as a kernel driver, several userspace management applications, and some shell scripts.
DRBD is traditionally used in high availability (HA) computer clusters, but beginning with DRBD version 9, it can also be used to create larger software defined storage pools with a focus on cloud integration.
- LinStor - https://github.com/LINBIT/linstor-server
LINSTOR is open-source software designed to manage block storage devices for large Linux server clusters. It's used to provide persistent Linux block storage for cloud-native and hypervisor environments.
- OpenEBS - https://openebs.io/
OpenEBS enables Stateful applications to easily access Dynamic Local Persistent Volumes (PVs) or Replicated PVs. By using the Container Attached Storage pattern users report lower costs, easier management, and more control for their teams.
- Yugabyte DB - https://github.com/yugabyte/yugabyte-db
Developers get low latency reads, ACID transactions and globally consistent secondary indexes and full SQL. Develop both scale-out RDBMS and internet-scale OLTP apps with ease.
DBAs & Operations simplify operations with a single database that delivers linear scalability, automatic global data distribution, multi-TB density per node, and rebalancing without performance bottlenecks or downtime.
CEOs & Line of Business Owners rein in database sprawl and benefit from reduced infrastructure and software licensing costs. Deliver new features and enter new markets with more speed and agility.
Core Features
- Global Resilience
  - Geo-replicated
  - Strongly consistent across regions
  - Extreme resilience to failures
- High Performance
  - Single-digit millisecond latency
  - High throughput
  - Written in C/C++
- Internet Scale
  - Massive write scalability
  - App agility with flexible schemas
  - Multi-TB data density per node
- Cloud Native
  - AWS, GCP, Azure, Pivotal
  - Docker, Kubernetes
  - Private data centers
- Open Source
  - 100% Apache 2.0 license
  - PostgreSQL compatible
  - Built-in enterprise features
- Integrations
  - Spring microservices
  - Apache Kafka & KSQL
  - Apache Spark
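Starting from the first option on that list, a DRBD resource definition for mirroring one app-data volume between two hosts might look roughly like the sketch below. Hostnames, devices, disks, and addresses are all placeholders, and a real deployment would need matching configuration and initialization on both nodes:

```
# Hypothetical /etc/drbd.d/appdata.res: mirror one block device across two hosts.
resource appdata {
  protocol C;              # synchronous replication: writes confirmed on both nodes
  on node-a {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Protocol C trades write latency for safety, which is the usual choice for the HA failover scenarios being discussed here.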
-
I'm not sure I'd want fancy, distributed filesystems on by default for most apps. I feel like most apps would need custom changes to explicitly support distributed storage, and I'm skeptical that a blanket drop-in distributed-fs solution could meet the performance and reliability needs of the diversity of cloudron users.
I'd rather have multi-node app management than distributed app runtime. Manage all your cloudron nodes and assign apps between them, migrate them etc, but most apps can still only be deployed to one cloudron instance at a time. At least I think this would be a better scaling/ha goal for a v1 implementation.
-
@infogulch said in Scaling / High Availability Cloudron Setup:
I'd rather have multi-node app management than distributed app runtime. Manage all your cloudron nodes and assign apps between them, migrate them etc, but most apps can still only be deployed to one cloudron instance at a time. At least I think this would be a better scaling/ha goal for a v1 implementation.
I totally agree, and I think it's the way the cloudron team is headed for the V1
-
@infogulch Yes, I recall overthinking it that way (i.e. trying to scale and distribute, etc.), but @mehdi corrected my thoughts a while ago and suggested focusing on just managing nodes. I remember writing this somewhere, but I cannot find my notes.
-
Having been down the high-availability setup path with K8s, it isn't a small ask, nor is it without compromises. I prefer to think of HA at the server level - good servers with RAID10, or a VPS that does all that for you - coupled with a solid backup and restore setup, and you can get as close to HA as those more complex solutions.
I'd rather see focus on the multi-cloud control panel and granular backup policies first.
It's the same as encryption - everyone thinks they want it, until they realise how many people and policies are needed for key holders, because the vulnerability to loss moves from the technology to the people.
-
@marcusquinn More like common hypervisor HA features instead of full-blown K8s HA? Mainly the ability to migrate an app to a different node and further move/manage its backup and DNS.
-
K8s is not a great fit, imo, for Cloudron without introducing much bigger changes... there are roads to that runtime with some intermediary schedulers as well, though, which is why I like Nomad in this space the most. I've actually been working up a prototype using the HashiStack Consul/Nomad (plus or minus Vault) to provide a distributed runtime, but that's a reasonably long way off from seeing any sort of integration into the core of things. It's a big shift on its own and needs a lot of refinement. Obviously, so would a k8s approach. In the immediate term, managing across multiple full-on Cloudron instances is fairly clean and, if implemented correctly, could actually still be useful in that world as well. It's the first, easiest, smallest thing to do, and therefore, in my opinion, is valuable regardless of where the higher-powered distributed runtime ideas go.
-
@plusone-nick I mean as in disk hardware redundancy. Most racks have 2 of everything else. In my experience, a simple server setup on a good hardware rack will outperform K8s for uptime. I lost count of the times we were restarting one thing or another with Rancher to get something working that had no reason to fail other than K8s getting its knickers in a twist.
The biggest risk to data loss is always the simple minds of the users!
The biggest risk to availability is always the complex minds of the tools!
No one really needs high availability - online banking goes offline frequently for maintenance. If Google has a bad day, people make a beverage and talk to each other.
HA is snake oil in my experience.
-