“ok, now add a metric shit ton of swearing and further belittle parsers who can’t deal with tabs.”
How does Okta not have systems like the support system that was breached with those credentials behind a VPN as well? A system like that really ought to be on a secured network. We have so many systems at work that are VPN-required, and it’s mostly those where sensitive data lives.
I would also second Hugo, which I use for my personal site and blog (which I haven’t updated in a long time). The nice thing is that it has a minimal footprint in terms of needing to watch out for updates, unlike something like WordPress, which was known for being vulnerable if left unmaintained. It’s mostly looking out for old themes with vulnerable JavaScript.
Another popular option is Jekyll, and I honestly can’t remember why I picked Hugo over it, but if you don’t need dynamic content, why make things more complex?
I would start by checking for any sort of errors in your system logs, such as /var/log/syslog, or by using dmesg -w. In my experience, Linux is almost universally faster than Windows.
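Concretely, I’d poke around with something like this (the journalctl bit assumes a systemd-based distro):

```
# Follow kernel messages live - handy for spotting driver or hardware errors
dmesg -w

# Errors from the current boot, if the system uses systemd
journalctl -p err -b

# Or just skim the classic syslog file
grep -i error /var/log/syslog | tail -n 50
```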
Maybe I don’t understand the problem, but the only time pinentry pops up for me is when I am signing something. In what sort of situations does it just randomly pop up, or with what specific apps/configuration would that happen at random?
The fields where you can’t paste a password or any other type of data like credit card info absolutely kill me. It’s doing the exact opposite of adding any level of security, and it’s just infuriating.
My recent favorite: my company has TOTP 2FA, but you can’t paste the 6 digits. You have to type them in one digit at a time, each in its own box. Paste fails in every browser I’ve tried. It’s just a shitty user interface.
Containers are such a game changer for how I manage my apps and their dependencies. I love how I can try things out in a container, nuke it, and start over, knowing I have a clean environment. I hate installing anything on my native host OS these days if I can help it.
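As a rough example of what I mean (the image is just an example):

```
# Throwaway environment: the container is deleted as soon as the shell exits
docker run --rm -it debian:bookworm bash
```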
Minor nit here - “docker containers” or just “containers” because “dockers” are pants.
To me, zfs is like the Gentoo of file systems. If you actually use the zfs features and do a lot of digging and experimentation before you go all in on it, it’s not bad; it really can be quite good. If someone wants a filesystem that they format and forget, ext4 and xfs are still solid options. I used to use ext4 for most of my filesystem needs and xfs for my long term storage on top of mdadm. I just really wanted zfs snapshots.
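The snapshot workflow is about as simple as it gets; the pool/dataset names here are just examples:

```
# Take a snapshot before doing anything risky
zfs snapshot tank/data@before-upgrade

# See what exists
zfs list -t snapshot

# Roll the whole dataset back if things go sideways
zfs rollback tank/data@before-upgrade
```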
I use Homer. Really simple, basic config, and it looks nice. The stats are pretty cool for certain integrations and are easy to add - I’ve added a few myself for services that didn’t have them. The only issue is slow PR review.
Sounds like the perfect recipe to become the next Google+, though.
I’m in a similar boat, except I just run everything in standard Docker containers, and I also use Telegraf, Influx, and Grafana for everything. I’ve gone mostly to Discord notifications for any alerts. If I run into a problem scenario, I figure out how to monitor it, add it via Telegraf, and add an alert. I’m still just using Grafana alerts, but it works fine for my home lab.
Even better if I can automate fixes to those problems. One of the best things I did was to monitor all of my network devices and all major hops. If I have internet or network issues, I know exactly where the problem is without having to troubleshoot. Lots of dpinger and shell scripts to feed data into Telegraf.
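Not my actual scripts, but the pattern is roughly this: a small script that prints InfluxDB line protocol, wired into Telegraf via an [[inputs.exec]] block with data_format = "influx" (plain ping here instead of dpinger; the hosts and measurement name are made up):

```
#!/bin/sh
# Emit one line of InfluxDB line protocol per target for Telegraf's exec input.
for host in 192.168.1.1 203.0.113.1 1.1.1.1; do
    # Average round-trip time of 3 pings; empty if all of them were lost
    rtt=$(ping -c 3 -q "$host" | awk -F/ '/^rtt|^round-trip/ {print $5}')
    if [ -n "$rtt" ]; then
        echo "net_latency,target=$host rtt_ms=$rtt"
    else
        echo "net_latency,target=$host loss=1i"
    fi
done
```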
You can do TCP proxying with nginx, but many of the features available in HAProxy are behind the NGINX Plus paywall. In nginx, layer 4 connections are handled through streams, and you can do both TCP and UDP. I stick with HAProxy for TCP streams with very few exceptions. HAProxy is most definitely more robust for situations where you have a pool of upstream servers; for a single upstream instance, nginx isn’t terrible. Most of the features I would use for finer control over failover and balancing aren’t available in open source nginx.
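For reference, the stream setup is a top-level block in nginx.conf, something along these lines (addresses and ports are made up):

```
stream {
    upstream db_backend {
        # Open source nginx only gets passive failure detection like this;
        # active health checks are an NGINX Plus feature.
        server 10.0.0.10:5432 max_fails=3 fail_timeout=10s;
        server 10.0.0.11:5432 max_fails=3 fail_timeout=10s;
    }

    server {
        listen 5432;            # TCP by default; add "udp" to the listen directive for UDP
        proxy_pass db_backend;
        proxy_connect_timeout 2s;
    }
}
```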
Well I didn’t have that on my bingo card.
This is similar to why I use Debian as my base operating system: just about every service I run on that host is containerized with Docker. It gives me the flexibility to choose, per piece of software, the best “operating system” that supports what I want to run at the release cadence that suits how I want to consume it, and the base host OS is just that and nothing more. Upgrades to new Debian releases are non-events, and I get no surprises with my apps in containers.
I can upgrade the underlying container base operating systems as needed, choosing Alpine, Debian, or Ubuntu based on which fits best. Alpine gets updates quickly, Debian is good for core services that I would normally run natively on my host, and Ubuntu works well for almost every other service I need. So I get a stable base with the option to move as quickly as I need when I want a newer package. It’s not always about having the newest software; it’s about stability where it counts.
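As a rough illustration of the mix-and-match (image tags are just examples, not my actual stack):

```
# docker-compose.yml
services:
  proxy:
    image: haproxy:alpine       # Alpine-based: small and updated quickly
    restart: unless-stopped
  metrics:
    image: telegraf:latest      # Debian-based official image
    restart: unless-stopped
  dashboard:
    image: b4bz/homer:latest    # whatever base upstream picked; works out of the box
    restart: unless-stopped
```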
Oh bother…