I set up LinkWarden about a month ago for the first time and have been enjoying it. Thank you!
I do have some feature requests – is GitHub the best place to submit those?
Here they’re pushing the “must be within 60 miles of the office” trope; I bet they’d say to drive in if it’s after hours.
I’m a big fan of Netdata; it’s part of my standard deployment. I put in custom configs depending on which services are running on which servers. If there’s an issue, it sends me an email and posts to a Slack channel.
Next step is an InfluxDB backend to keep more history.
I also use Monit to restart certain services in certain situations.
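For anyone curious what the Monit piece looks like, here’s a minimal sketch of a service-restart rule. The service name, pidfile path, and thresholds are hypothetical examples, not my actual config:

```
# /etc/monit/conf.d/nginx — hypothetical example
check process nginx with pidfile /var/run/nginx.pid
  start program = "/usr/sbin/service nginx start"
  stop program  = "/usr/sbin/service nginx stop"
  if failed port 80 protocol http for 3 cycles then restart
  if 5 restarts within 10 cycles then unmonitor
```

The “unmonitor after repeated restarts” line keeps Monit from flapping forever on a service that’s genuinely broken.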
I wish it were database agnostic. And I’m slightly concerned about the version three rewrite.
It does look awesome, and I’ll revisit it to see where things are in six months.
Yup! Since 1993… Started Linux on my desktop and haven’t looked back.
I thought you were going to say you liked lint (the source code checker).
Thank you. I hadn’t considered the payment part. The cloud system that I manage is in education, so everyone pays in advance.
This makes sense, and I’ll start with a lower number and ask it to go up later. It will take a couple of months to migrate everything from Linode anyhow, so I don’t need them all at once.
My identity infrastructure alone uses a whole bunch of servers.
There are the three Kerberos servers, the two clusters of multiple LDAP servers behind HAProxy, the RabbitMQ servers to pass requests around, the web servers also balanced/HA behind HAProxy… For me, service reliability and security are two of the biggest factors, so I isolate services and use HA when available.
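The LDAP-behind-HAProxy part is just plain TCP balancing with health checks. A minimal sketch (backend names and addresses are made up for illustration):

```
# haproxy.cfg fragment — hypothetical names and addresses
frontend ldap_in
    bind *:389
    mode tcp
    default_backend ldap_servers

backend ldap_servers
    mode tcp
    balance roundrobin
    option tcp-check
    server ldap1 10.0.0.11:389 check
    server ldap2 10.0.0.12:389 check
```

With `check` on each server line, HAProxy quietly drops a dead LDAP node out of rotation, which is most of what “HA” means here.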
I told them everything that I wrote here in my original request – I need 25 now, but would like a quota of 50 to maintain elasticity, testing, etc.
They followed up with the request for actual resources needed.
I haven’t answered since then.
I’ve been doing this for 30+ years and it seems like the push lately has been towards oversimplification on the user side, but at the cost of resources and hidden complexity on the backend.
As an assembly language programmer, I’m used to programming with an eye toward resource consumption. Did using that extra register just cost a couple of extra PUSH and POP instructions in the loop? What’s the overhead of that?
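To make the register question concrete, here’s a toy x86-64 sketch (illustrative only, not from any real program): borrowing a callee-saved register inside a loop means paying for a PUSH and a POP on every iteration.

```
; hypothetical x86-64 sketch: the cost of borrowing RBX mid-loop
loop_top:
    push rbx            ; spill the caller's RBX so we can borrow it...
    mov  rbx, [rsi]     ; use the extra register
    add  rax, rbx
    pop  rbx            ; ...and restore it, every single iteration
    add  rsi, 8
    dec  rcx
    jnz  loop_top
```

Hoisting the PUSH/POP pair outside the loop (or picking a caller-saved register) makes those two memory operations per iteration disappear, which is exactly the kind of accounting the comment is about.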
But now some people just throw in a JavaScript framework for a single feature and don’t even worry about how it works or the overhead as long as the frontend looks right.
The same is true with computing. We’re abstracting containers inside of VMs on top of base operating systems, which adds that much more resource utilization to the mix (what’s the carbon footprint of that?) behind an extremely complex but hidden backend. Everything’s great until you have to figure out why you’re suddenly losing packets that pass through a virtualized router to linuxbridge or OVS to a Kubernetes pod inside a virtual machine. And if one of those processes fails along the way, BOOM! It’s all gone. But that’s OK; we’ll just tear it down and rebuild it.
I get it. I understand the draw, and I see the benefits. IaC is awesome, and the speed with which things can be done is amazing. My concern is that I’ve seen a lot of people using these things who don’t know what’s going on under the hood, so they often make assumptions or mistakes that lead to surprises later.
I’m not sure what the answer is other than to understand what you’re doing at every step of the way, and always try to choose the simplest route (but future-proofed).