  • Why would you want another year of their software for free? This is their second screw-up (apparently they sent out a bad update that affected some Debian and RHEL machines a couple of years ago). I’d be transitioning to a competitor at the first opportunity. It seems they aren’t testing releases before pushing them out to customers, which is about as crazy to me as running alpha software on a production system.

    I’m sure you have reasons, and this isn’t really meant to be directed at you personally; it’s just mind-boggling to me that the IT sector as a whole hasn’t looked at this situation and collectively said “fuck that.”



  • I remember ordering some samples from them when they were a newer company, and how cool it was when they added metal as a material option. Sad to see them go. It seems like another company ruined by going public more than a failure of their business model. I guess the silver lining is that they simply went under rather than morphing into the worst possible version of themselves, trying to squeeze every penny in the pursuit of infinite growth (or maybe they tried that for a while and it failed too; I’ll admit I haven’t been paying attention to the scene for the last several years).



  • Not sure exactly how well this would work for your use case of all traffic, but I use autossh and ssh reverse tunneling to forward a few local ports/services from my local machine to my VPS, where I can then proxy those ports in nginx or apache on the VPS (rough sketch below). It might take a bit of extra configuration to go this route, but it’s been reliable for me for years. WireGuard is probably the “newer, right way” to do what I’m doing, but personally I find ssh tunnels a bit simpler to wrap my head around and manage.

    Technically WireGuard would have a touch less latency, but most of the latency comes from the round-trip distance between you and your VPS, so the difference between the protocols is comparatively negligible.
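
    A minimal sketch of the tunnel side, assuming a hypothetical service on local port 8096 and a made-up VPS hostname; nginx on the VPS can then proxy to the tunneled port:

      # Keep a reverse tunnel alive: VPS 127.0.0.1:8096 -> local 127.0.0.1:8096
      autossh -M 0 -N \
          -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
          -R 127.0.0.1:8096:127.0.0.1:8096 \
          user@vps.example.com

      # On the VPS, nginx proxies the tunneled port (inside a server block)
      location / {
          proxy_pass http://127.0.0.1:8096;
      }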





  • I think my skepticism and desire to have docker get out of my way has more to do with already knowing the underlying mechanics, being used to managing services before docker was a thing, and then docker coming along and saying “just learn docker instead.” Which would be fine if it didn’t mean not only an entire shift from what I already know, but a separation from it, with extra networking and docker configuration to fuss with. If I weren’t already used to managing servers pre-docker, then yeah, I’d totally get it.


  • I’ll probably make the jump when Plasma 6.1 releases with its “real, fake session restore” functionality; I was hoping that would make it into Plasma 6. I am daily-driving Wayland on my laptop now, but I kinda need my programs (or at least file managers and terminal windows) to re-open the way they were between reboots.

    Thanks to kscreen-doctor, I’ve been able to port most of the desktop scripts I use for managing my multiple monitors to work on Wayland, and krdc/krfb have been a decent enough replacement for x11vnc or x2go for remotely accessing the desktop on my home server/NAS (I know, desktops on servers are considered sacrilege, but for me it’s been useful too many times to get rid of at this point).
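
    For reference, the kind of thing those ported scripts boil down to; the output names here are examples from my setup, yours will differ (kscreen-doctor -o lists them):

      # List outputs, modes, and positions
      kscreen-doctor -o

      # Enable the external monitor, place it at the origin, turn the panel off
      kscreen-doctor output.HDMI-A-1.enable \
                     output.HDMI-A-1.position.0,0 \
                     output.eDP-1.disable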

    Where Wayland currently shines for me is VR: SteamVR works better and more consistently on Plasma Wayland than on X11 at this point, which is probably more of a Valve thing than a Wayland thing. When I first got my Index, X11 worked fine, but there have been times when SteamVR on Linux being “broken” made the news on Phoronix/Gaming on Linux while still working fine on Plasma Wayland (which seems to be where Valve is doing most of their SteamVR Linux testing as of late).

    As an end user, I do wish the Wayland specification process were organized better, because as an outsider it seems a lot of the bickering comes down to everyone having different end goals. I think splitting the different styles of window management into their own sub-specs or extensions, then figuring out what could move into the core once everyone has built what they need, would be better than the current approach of compromising through every little decision, which doesn’t always make sense for every use case. Work together when it makes sense, but understand that sometimes it doesn’t; you can’t please every stick in the mud, and sometimes you have to do your own thing without them. I do get the appeal of doing things right the first time, even if it takes longer, but it seems like usability is always the thing that gets sacrificed when compromises are made.


  • That’s a big reason I actively avoid docker on my servers: I don’t like running a dozen instances of my database software, and considering how much work it would take to go through and configure each docker container to use an external database, it’s just as easy to learn to configure each piece of software yourself and know what’s going on under the hood, rather than relying on a bunch of defaults made by whoever built the docker image.
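
    To be fair, some images do document knobs for this; a minimal sketch using the official nextcloud image’s MYSQL_* environment variables, with a made-up host and credentials:

      # Point the container at an external database instead of a bundled one
      docker run -d --name nextcloud \
          -p 8080:80 \
          -e MYSQL_HOST=db.example.lan \
          -e MYSQL_DATABASE=nextcloud \
          -e MYSQL_USER=nextcloud \
          -e MYSQL_PASSWORD=change-me \
          nextcloud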

    I hope a good amount of my issues with docker have been solved since I last seriously tried to use it (back when they were literally giving away free tee shirts to get people to try it). But the times I’ve peeked at it since, it still seems like docker gets in the way more often than it solves problems.

    I don’t mean to yuck other people’s yum, though; if you like docker and it works for you, don’t let me stop you from enjoying it. I just can’t justify the overhead for myself (both at the system-resource level and at the personal-time level of inserting an additional layer of configuration between me and my software).




  • I’ve dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long; I usually notice issues myself. I self-host a custom new-tab page that I use across all my devices, and between that, the Nextcloud clients, and my Home Assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.

    Other than that, I run fail2ban and have my VPS configured to send me a text message/notification whenever someone successfully logs in to a shell via ssh, just in case.
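
    One way to wire that up, sketched with pam_exec; notify.sh is a stand-in for whatever actually sends the text:

      # /etc/pam.d/sshd -- run a script on every ssh session event
      session optional pam_exec.so /usr/local/bin/ssh-login-notify.sh

      # /usr/local/bin/ssh-login-notify.sh
      #!/bin/sh
      # pam_exec exports PAM_TYPE, PAM_USER, and PAM_RHOST
      if [ "$PAM_TYPE" = "open_session" ]; then
          /usr/local/bin/notify.sh "ssh login: $PAM_USER from $PAM_RHOST"
      fi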

    Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for ssh, the one account that can be used over ssh has a non-obvious username that would have to be guessed before an attacker could even try passwords, and fail2ban does a good job of blocking IPs that fail after a few tries.
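
    The sshd_config side of that looks something like this (the username is a placeholder):

      # /etc/ssh/sshd_config
      PermitRootLogin no
      # Only one non-obvious account is allowed over ssh
      AllowUsers not-my-real-username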

    If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself; I’m not here to “yuck” anyone’s “yum”) and deliberately avoid them.


  • I’d even go further and say that a “high level” language that requires you to re-invent the wheel for simple things (for example, JS not having built-in functions to shuffle an array or clamp a number to a range) is a sign of poor language design, and it’s what led to the prevalence of bloated JS frameworks like jQuery. Obviously I don’t think every language should have a Python-tier standard library, but I’d really like to not have to download half a language from every site I visit because every site uses jQuery for a lot of things that come standard in better languages.
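
    For illustration, this is the kind of thing you end up hand-rolling in plain JS, since neither a shuffle nor a clamp is built in:

      // Clamp a number to [min, max]; there is no built-in Math.clamp
      function clamp(n, min, max) {
        return Math.min(Math.max(n, min), max);
      }

      // In-place Fisher-Yates shuffle; arrays have no built-in shuffle
      function shuffle(arr) {
        for (let i = arr.length - 1; i > 0; i--) {
          const j = Math.floor(Math.random() * (i + 1));
          [arr[i], arr[j]] = [arr[j], arr[i]];
        }
        return arr;
      }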



  • Not saying there aren’t any benefits to docker; migration to a different host distro and avoiding dependency conflicts are the big two. But for me they’re about the only two. For what I do, it’s just as easy to write a shell script that downloads and unpacks the software and copies my own config files into place as it is to do basically the same thing through docker. I could use ansible or something similar for that, but for me, shell scripts are easier to manage.
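
    A minimal sketch of that kind of script; the URL, version, and paths are all made up:

      #!/bin/sh
      set -eu

      VERSION="1.2.3"
      URL="https://example.com/myapp-${VERSION}.tar.gz"
      WEBROOT="/var/www/myapp"

      # Download and unpack the release into the web root
      curl -fsSL "$URL" | tar -xz -C "$WEBROOT" --strip-components=1

      # Copy my own config over the shipped defaults
      cp "$HOME/configs/myapp/config.php" "$WEBROOT/config/config.php"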

    Don’t get me wrong, docker has its place. I just find that it gets in my way with its own quirks almost as much as it helps in other areas, especially for web apps like Nextcloud that are already just a single folder under the web root and a database.

  • One additional benefit of not using docker is that I can do more with a lower-powered server, since I’m not running multiple instances of PHP and nginx across multiple containers.