• 0 Posts
  • 29 Comments
Joined 1 year ago
Cake day: July 8th, 2023

  • To actually answer your question, you need some kind of job scheduling service that manages the whole operation, whether that’s SSM or Ansible or something else. With Ansible, you can set a batching parameter so that you only update 3 or so hosts at a time until they are all done. If one of those upgrades fails, then it will abort the process. There’s a parameter to make it die if any host fails (`any_errors_fatal`, if I remember right).
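
    Roughly, the pattern looks like this in a play (the `web` group and the upgrade task are placeholders I made up; `serial` is the batching parameter):

    ```yaml
    # Rolling upgrade: 3 hosts per batch; one failed host aborts the whole run.
    - hosts: web                # hypothetical inventory group
      serial: 3                 # only touch 3 hosts at a time
      any_errors_fatal: true    # any single failure stops the entire play
      tasks:
        - name: Upgrade all packages (assumes RPM-based hosts)
          ansible.builtin.dnf:
            name: "*"
            state: latest
    ```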


  • There is a lot of complexity and overhead involved in either system. But the benefits of containerizing and using Kubernetes allow you to standardize a lot of other things across your applications: central logging, network monitoring, and much more. And from the developers’ perspective, they usually don’t even want to deal with VMs. You can run something like Docker Desktop or Rancher Desktop on the developer system, which allows them to dev against a real, compliant k8s distro. Kubernetes is also explicitly declarative, something that OpenStack was having trouble being.
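
    To make “declarative” concrete: you describe the end state in a manifest and the cluster converges on it, rather than scripting the steps. A minimal sketch (all names made up):

    ```yaml
    # Desired state: 3 replicas of this image, always.
    # You never say "start a container"; Kubernetes reconciles toward this.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                  # hypothetical app name
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25  # placeholder image
    ```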

    So there are two swim lanes, as I see it: places that need to use VMs because they are using commercial software, which may or may not explicitly support OpenStack, and companies trying to support developers, in which case the developers probably want a system that affords a faster path to production while meeting compliance requirements. OpenStack offered a path toward that latter case, but Kubernetes came in and created an even better path.

    PS: I didn’t really answer your “capable” question though. Technically, you can run a Kubernetes cluster on top of OpenStack, so by definition Kubernetes offers a subset of the capabilities of OpenStack. But it encapsulates the best subset for deploying and managing modern applications. Go look at some demos of ArgoCD, for example. Go look at Cilium and Tetragon for network and workload monitoring. Look at what Grafana and Loki are doing for logging/monitoring/instrumentation.
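
    To give a flavor of the ArgoCD demos: the whole thing boils down to one manifest (the repo and paths here are hypothetical) that points at a git repo, and ArgoCD keeps the cluster synced to whatever is committed there:

    ```yaml
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app               # hypothetical name
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/my-app.git  # hypothetical repo
        targetRevision: main
        path: deploy/
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app
      syncPolicy:
        automated:
          prune: true            # delete cluster objects removed from git
          selfHeal: true         # revert manual drift back to the git state
    ```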

    Because OpenStack lets you deploy nearly anything (and believe me, I was slinging OVAs for everything back in the day), you will never get to that level of standardization of workloads that allows you to do those kinds of things. By limiting what the platform can do, you can build really robust tooling around the things you actually need to do.


  • You’re on the right track here. Longhorn kind of makes RAID irrelevant, but only for data stored in Longhorn. So anything on the host disk and not in a PV is at risk. I tend to use MicroOS and k3s, so I’m okay with the risk, but it’s worth considering.

    For replicas, I wouldn’t jump straight to 3 and ignore 2. A lot of distributed storage systems use 3 so that they can resolve the “split brain” problem. Basically, if half the nodes can’t talk to each other, the side with quorum (2 of 3) knows that it can keep going, while the side with 1 of 3 knows to stop accepting writes it can’t replicate. But Longhorn already does this in a Kubernetes-native way, so it can get away with replica 2, because only one of the replicas will get the lease from the kube-api.
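
    If you want to try replica 2, it’s just a StorageClass parameter. A sketch modeled on Longhorn’s stock StorageClass (the name is made up):

    ```yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: longhorn-2r            # hypothetical name
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    parameters:
      numberOfReplicas: "2"        # two synchronous copies of each volume
      staleReplicaTimeout: "2880"  # minutes before a failed replica is discarded
    ```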


  • Longhorn is basically just acting like a fancy NFS mount in this configuration. It’s a really fancy NFS mount that works well with Kubernetes, for things like PVC resizing and snapshots, but Longhorn isn’t really stretching its legs in this scenario.

    I’d say leave it, because it’s already set up. And someday you might add more (non-RAID) disks to those other nodes, in which case you can set Longhorn to replicas=2 and get better availability.
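
    On the resizing point: assuming the StorageClass has allowVolumeExpansion turned on, growing a Longhorn volume is just bumping the request on the PVC (names here are hypothetical):

    ```yaml
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-my-app            # hypothetical PVC
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: longhorn
      resources:
        requests:
          storage: 20Gi            # was 10Gi; raising this triggers the expansion
    ```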


  • The biggest danger you’re going to run into is that those distros all lie downstream of the real changes, so non-gaming (and potentially security-related) fixes might be slow to arrive or incompatible.

    If you go with something like Fedora or Ubuntu, there is going to be full support for all the core things, and you can build the gaming experience you want on top. Any changes that Nobara or Drauger are making to their distros, you could probably make yourself.
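
    For example, most of what the gaming distros preinstall is a few packages away on stock Fedora. A sketch (RPM Fusion carries the bits Fedora won’t ship; package picks are just a typical starting point):

    ```sh
    # Enable RPM Fusion (free + nonfree), then pull in a typical gaming stack.
    sudo dnf install \
      https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
      https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
    sudo dnf install steam wine gamemode
    ```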

    (I’ve never used any of those distros, but I’ve found WineHQ and other tools on Fedora more than sufficient.)


  • Can you easily switch drives in your system? I’ll often do that on my computer because little M.2 SSDs are so darn cheap now. It’s easier and cheaper to pick up a little 64GB drive for one-off projects than it is to do a proper backup and restore.

    Also, I’d just go with Tumbleweed. I don’t distro hop like I used to, but that’s because, as everyone else is saying, most of the distros have gotten really good. Most of the time, my little projects are trying out specific features of different distros. So I’ll just pop a new drive in, test-drive it, then either switch back or not.