I’ve never had issues with TERM=xterm
Kitty if you have a GPU and run programs that have a lot of output (build scripts and emerge). It uses the GPU for better performance.
If it will be used by non-tech savvy people, why do you care about snap and IBM? Do the people care about that?
When you start getting super specific about which distro you want, I think you should start looking towards a DIY distro.
I wanted to use fio to benchmark my root drive. I had seen a tutorial saying that the filename= parameter should point to the device file, so I pointed it at /dev/sda. As you might expect, the write test didn’t go so well.
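For anyone else trying this: point fio’s filename= at a throwaway file on the filesystem instead of the raw block device. A rough sketch (the path, size, and runtime are just placeholders):

# write benchmark against a scratch file, not /dev/sda
fio --name=randwrite-test --filename=/home/user/fio-testfile \
    --size=1G --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
rm /home/user/fio-testfile   # clean up the scratch file afterwards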
Before installing Arch on a USB flash drive, I disabled ext4 journaling in order to reduce disk reads and writes, being fully aware of the implications (file corruption after unexpected power loss). I was confident that I would never have to pull the plug or the drive without issuing a normal shutdown first. Unfortunately, there was one possibility I hadn’t considered: sometimes, there’s that one service preventing your PC from turning off, and at that stage there’s no way to kill it (besides waiting for systemd to time out, but I was impatient).
So I pulled the plug. The system booted fine, but was missing some binaries. Unfortunately, I couldn’t use pacman to restore them because some of the files it relied on were also destroyed.
This was not the last time I went through this. Luckily I’ve learned my lesson by now
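For context, the journaling tweak mentioned above is a one-liner with tune2fs; something like this, run with the filesystem unmounted (the device name is just an example):

tune2fs -O "^has_journal" /dev/sdX1    # strip the journal from an existing ext4 filesystem
mkfs.ext4 -O "^has_journal" /dev/sdX1  # or create it without one in the first place (destroys existing data)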
Before you can fix a bootloader, you first need to learn how to install and set up a bootloader. I think most people learn that part when they try Arch
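For anyone staring at a broken bootloader right now, the usual recovery path is to chroot in from a live USB and reinstall it. A rough GRUB-on-UEFI sketch for Arch (the partition names and ESP mountpoint are assumptions, adjust for your layout):

# from the live environment
mount /dev/sdX2 /mnt        # root partition (example name)
mount /dev/sdX1 /mnt/boot   # EFI system partition (example name)
arch-chroot /mnt
grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
grub-mkconfig -o /boot/grub/grub.cfg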
Why do you advocate for keeping /home separate?
I personally don’t do it because the more partitions you have, the more often you need to fiddle around in GParted when one partition gets full. This is also why I use swap files instead of swap partitions
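For reference, a swap file is only a few commands; something like this (the size and path are just an example):

fallocate -l 8G /swapfile    # or: dd if=/dev/zero of=/swapfile bs=1M count=8192
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
echo '/swapfile none swap defaults 0 0' >> /etc/fstab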
As far as I can tell, unless you distro-hop, separating /home doesn’t have any advantages. Even then, sharing one /home directory between multiple desktop environments can cause some problems
I agree with making and testing backups, though. My current strategy is to back everything up to a 4.2 TiB ZFS pool with daily snapshots on my LAN, and back up the most important data on that to the cloud
borked my bootloader and had to do a fresh install
That’s where you’re wrong :)
OP did not take this picture. Their story is made up. Here’s the original: https://lkml.org/lkml/2011/11/3/110
Neat, thanks for sharing
Here’s the above pseudocode in bash:
find /home/ -mindepth 1 -maxdepth 1 -type d -exec mount -t tmpfs -o size=16G none {}/.cache/ \;
A for loop over the directory names doesn’t work here because the shell word-splits them on spaces, which could cause issues with filenames that contain spaces
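If you do want a loop, a null-delimited read avoids the word splitting (same illustrative 16G size as above):

find /home/ -mindepth 1 -maxdepth 1 -type d -print0 |
    while IFS= read -r -d '' dir; do
        mount -t tmpfs -o size=16G none "$dir/.cache/"
    done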
You can also create a systemd user service, which is useful if you don’t have root access. The above mount command requires root, but the following doesn’t and is more robust than symlinking to /tmp/:
ln -s "$(mktemp -dp /var/tmp/)" ~/.config
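If you go the user-service route, a minimal sketch could look like this, saved as ~/.config/systemd/user/throwaway-cache.service (the unit name is made up, and I’m using ~/.cache as the target for illustration; it assumes ~/.cache isn’t already a real directory). Enable it with systemctl --user enable --now throwaway-cache.service:

[Unit]
Description=Point ~/.cache at a throwaway directory under /var/tmp

[Service]
Type=oneshot
RemainAfterExit=yes
# $$ is systemd's escape for a literal $, so the shell sees $(mktemp ...) and $HOME;
# note that old temp directories will accumulate in /var/tmp over time
ExecStart=/bin/sh -c 'ln -sfn "$$(mktemp -dp /var/tmp)" "$$HOME/.cache"'

[Install]
WantedBy=default.target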
You: It’s a single user system
Also you: Tmpfs would have to be done for every user
And a /tmp/ symlink would have to be created for every user too, so I don’t get your point
Tmpfs is just as easy as making a symlink, but without the filename conflicts between files in ~/.config/ and /tmp/. You just need to add a line to /etc/fstab
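For example (the username, uid/gid, and size are placeholders, and I’m using ~/.cache as the mountpoint for illustration):

tmpfs  /home/alice/.cache  tmpfs  size=16G,mode=700,uid=1000,gid=1000,noatime  0  0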
This seems like a filename conflict waiting to happen. Why not just mount a tmpfs there?
/run/ contains such a directory
It’s likely. mkdir fails to create a subdirectory such as ~/.cache/mozilla/ if ~/.cache/ doesn’t exist, unless -p
is explicitly passed to mkdir
Of course, not everything is a shell script, but I imagine the directory creation functions in many languages work similarly
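Quick shell illustration:

mkdir ~/.cache/mozilla      # fails with "No such file or directory" if ~/.cache is missing
mkdir -p ~/.cache/mozilla   # creates ~/.cache first, then ~/.cache/mozilla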
That’s an interesting way to spell proprietary