This approach sounds good.
I think the correct approach is both, if you have the option.
Most devices accept two name servers. Redundancy is always good, especially for DNS.
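On a typical Linux box that just means two nameserver lines in /etc/resolv.conf (the addresses below are only placeholders; use your own resolvers):

```
# /etc/resolv.conf — two resolvers so lookups survive one going down
nameserver 192.168.1.1
nameserver 9.9.9.9
```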
If you weren’t at a university it was generally a challenge to get hold of disks. Downloading at home took forever on a 28.8 or even 56k modem (i.e. 56 kilobits per second).
Slackware and Red Hat disk sets were the thing, in my experience. But generally that only gave you the compiled code, not the source (although there was another set of disks with the source packages).
If you wanted to recompile stuff you had to download the right set of packages, and be prepared to handle version conflicts on your own (with mailing list and Usenet support).
Recompiling the kernel with specific patches for graphics cards, sound cards, modems and other devices (I remember scanners in particular), or for specific combinations of hardware, was relatively common. “Use the source, Luke!” was a common admonition. Oftentimes specific FAQ pages or howtos would be made available for software packages, including games.
XFree86 was very powerful on hardware it supported, but very finicky. See the other posts about the level of detail that had to be supplied to get combinations of graphics cards and monitors working without the appearance of magic smoke.
Running Linux was mostly an enthusiast/hobbyist/geek thing: for those who wanted to see what was possible, those who wanted to tinker with something approaching Unix, and those who wanted to stretch the limits of what their hardware could do.
Many of those enthusiasts and hobbyists and geeks discovered that Linux could do far more than anyone previously had been prepared to admit or realise. They, and others like them, took it with them into progressively more significant and valuable projects, and it began to take over the world.
SSH, along with the extra tools it comes with like scp, is the way forward.
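For plain file transfer that can be as simple as this (user, host and paths below are placeholders; it assumes an SSH server is running on the other machine):

```
# Copy a single file to the other machine
scp photo.jpg user@192.168.1.20:/home/user/pictures/

# Or pull a whole directory back, recursively
scp -r user@192.168.1.20:/home/user/documents ./documents
```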
The two following suggestions make use of secure shell.
Termux and then pkg install mc
(MC is Midnight Commander; there’s a quick sketch after these two suggestions.)
Alternatively, if you are feeling brave and want a GUI, there’s Total Commander.
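A minimal sketch of the Termux route (the package names are correct as far as I know; user, host and path are placeholders):

```
# Inside Termux on the phone
pkg update
pkg install mc openssh

# Start mc with a shell-link panel onto the other machine
# (mc's sh:// VFS runs over SSH)
mc . sh://user@192.168.1.20/home/user
```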
Consider using tar to create an archive of your home directory, and then unpacking that on the new machine. This will help to capture all the links as well as regular files, and their permissions.
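A minimal sketch, assuming your username is user and you have SSH access to the new machine (the hostname is a placeholder):

```
# On the old machine: archive the home directory; tar keeps
# symlinks, ownership and permissions inside the archive
cd /home
tar czf /tmp/user-home.tar.gz user

# Copy it across and unpack it on the new machine
# (-p restores the recorded permissions)
scp /tmp/user-home.tar.gz user@newbox:/tmp/
ssh user@newbox 'cd /home && sudo tar xzpf /tmp/user-home.tar.gz'
```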
Take a minute to think what else you have changed on the old machine, and then take another minute to think how tricky it would be to replicate on a new machine. Downloading the apps again is gloriously easy. Replacing configs, or keys and certificates, is not!
I normally archive /etc as well, and then I can copy out the specific files I need.
Did you install databases? You’ll want to follow specific instructions for those.
Have you set up web sites? You’ll want to archive /var/www as well.
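The same tar pattern covers those too; a sketch (run as root, since /etc and web roots usually contain files your user can’t read, and the database line is only an example for MySQL/MariaDB):

```
sudo tar czf /tmp/etc-backup.tar.gz /etc
sudo tar czf /tmp/www-backup.tar.gz /var/www

# Databases want their own dump tools rather than raw file copies
sudo mysqldump --all-databases > /tmp/all-databases.sql
```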
I have never knowingly used Arch. Am I allowed to like this song?
Also, Taylor Swift, is that you?
Awesome. Now stick with it!
And remember, different isn’t wrong, it’s different.
Check that it works with Klipper!
The convenience and control Klipper provides is phenomenal. You don’t have to use it if it turns out you don’t like it, but I feel like ruling it out as an option now would be a shame.
I would also point out that you should not be put off by the “official” supported printers list for Klipper: a bit of Googling will often turn up mini projects where people are actively working on Klipper support for a printer before the main project gets round to adding it officially.
Thanks so much for this survey. So much useful detail. Great stuff!
You mention Blender in passing. Any thoughts on using it for CAD work for 3D printing? “Keep Making” on YouTube seems to love it for that, once some plugins are installed.
I think such a dataset would be very useful. I’m just getting into 3D printing and have spent a little bit of time hunting for this type of information already. I’ve had to stick to star ratings on vendor sites so far.
Procmail for the old school win.
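For anyone who hasn’t met it, a minimal ~/.procmailrc sketch that files one mailing list into its own folder (the list name and folder are hypothetical):

```
# ~/.procmailrc — anything matching the recipe goes to the folder,
# everything else falls through to the normal inbox
MAILDIR=$HOME/Mail

:0:
* ^List-Id:.*example-list
lists/example-list
```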
Yes.
Not tried the app version. Been using FairEmail for a while now, since K-9 was unmaintained.
FairEmail is well maintained. Quick. Supports multiple accounts very well. Loads of features (which could be a downside for those who like things simple). Designed with security and privacy as top priorities right from the start. Open source development. For a long time it’s been the best email client on Android, IMHO.
I cut my teeth on an early version of The Linux Networking Howto, still available at tldp.org. That’s a little bit out of date now :-) but the basic IPv4 networking concepts are still good.
These days so much is implementation or distribution dependent. There has been so much very rapid development in this field during the internet era that the age of documentation matters significantly.
A mitigating, but also confusing, factor is that different generations of networking tools have backwards compatibility built in: for example, it has been possible to build firewalls using the iptables utilities in userspace on kernels where nftables is doing the actual filtering.
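You can see that compatibility layer in action on most current distributions (a sketch; the rule and port are only examples):

```
# Check which backend your iptables command is using
iptables --version    # e.g. "iptables v1.8.9 (nf_tables)"

# Add a rule with the classic syntax...
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# ...then watch it appear as an nftables rule underneath
sudo nft list ruleset

# iptables-translate prints the nft equivalent without applying it
iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
```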
I think you could do worse than starting with the Debian wikis and then drilling down into other documentation for the specific distributions or applications you want to use.
I seem to remember that openwrt.org and shorewall.org (though that product is EOL) also have some good overarching network stuff. I think Hurricane Electric (he.net) may still do their free basic IPv6 certification programme?
Wikipedia is also your friend in this, especially the references.
I’ve enjoyed onemarcfifty.com’s videos too, but that format isn’t what you are looking for, and the transcripts I have seen are not formatted.
I like this idea so much. The problem is quality control.
Uber Eats here in the UK really struggles to deliver an accurate order. And where there is a problem, the driver blames the restaurant, the restaurant blames the driver, and Uber or the restaurant (it’s frequently not clear where to begin) may or may not issue a refund and perhaps an apology. But that doesn’t solve the real problem, which is that you don’t have the food you were promised and paid for. No one takes responsibility for that.
Who in a decentralised system can or should take responsibility?
Amazon, for all their many faults, claim to be trying to build the most customer-centric company on Earth. A lot of their early success came from a stellar returns policy, shouldering responsibility for products they dispatched, as well as excellent prices. Not so much now, but certainly during their incredible retail growth period.
How do you code for that in a federated system? And, if you can, how do you compete in a wider marketplace with an Amazon monolith?
And will be cancelled in 18 months with 2 weeks’ notice.