• 6 Posts
  • 33 Comments
Joined 1 year ago
Cake day: July 22nd, 2023

  • You probably already know, but colorscheme authors avoid pure #000000 out of contrast concerns (e.g. Dark Reader can give some headaches if the background is straight black with off-white text). That makes a true #000000 scheme very rare – manual intervention required :P

    If you still wanna get crackin’, just tweak a preexisting dark theme and change navies/greys to black. And if you’re talking about the palette instead of actual themes to install, this still works – just check the source for whatever colors they’re using and tweak those. (grep for hexes then sort uniq? shell exercise is left to the reader)
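
    (Something like this, if you’d rather not write it yourself – a rough sketch, run from the theme’s source tree:)

    # list every hex color used in the source, most-used first
    grep -rhoE '#[0-9a-fA-F]{6}' . | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn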

    I’d recommend taking one of vinceliuice’s themes and just turning navy blues into blacks. For example, Graphite-gtk (has a matching qt theme) is pretty grey even with a --black tweak, but you could blacken it with some effort. Same with Colloid-gtk (also has a --black tweak).

    You could probably even blackify the KDE theme’s greys if you so fancied, but then you’d need to tone down the contrast on the other colors in the set. And this and that…

    If this is too inexact an answer, then ouch. I wish you luck!



  • So instead of commenting inside your nix files, you put the nix files into .org documents and tangle them, so the same source becomes an OS, a website, and a zettelkasten-looking set of linked annotated nodes.

    That puts a stupid grin on my face (ᐖ )

    Dammit I was sure I was just going to stick with Arch until I saw this

    Questions:

    • You have home on tmpfs. Isn’t that volatile? Where do you put your data/pictures/random git projects? Build outputs? How’s your RAM? (Sorry if I’m missing something obv)
    • What’s your bootup like?
    • Another commenter mentioned difficulties in setting up specialized tools w/o containerizing, and another mentioned that containers still have issues. Have you run into a sitch where you needed to work around such a problem? (e.g. something in wine, or something that needs FHS-wrangling)

  • The “stable unstable” setup is a beautiful concept. Thanks for the dotfiles mention – I keep hearing “you need to rebuild if you edit a dotfile” but I guess that’s a myth encountered by people trying to nix too nixily, falling into said archetypal rabbit hole

    Questions:

    1. Does mixing streams “infect” other packages? I remember an old Gentoo thing where ~amd64 unstable packages would want to spread on their own. Since it’s nix, I assume an unstable package will pull in a bunch of unstable dependencies, but they’d be installed alongside the respective stable versions – i.e. taking up disk space but not “spreading” per se (see the sketch at the end of this comment)

    “For packages it’s basically 0 time.”

    Is that really true for you? I assume you refer to the time it takes to copy-paste a flake from online, but how reliable is that really? And the other commenters mention there’s still wrestling to be had for certain tools
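
    (The coexistence I’m assuming looks roughly like this in a channel-based setup – a sketch, not something I’ve run; a flake would pin a second nixpkgs input instead:)

    # add unstable alongside stable
    sudo nix-channel --add https://nixos.org/channels/nixos-unstable nixos-unstable
    sudo nix-channel --update
    # pull one package from unstable – its unstable deps land in /nix/store
    # next to the stable ones instead of replacing them
    nix-shell -p '(import <nixos-unstable> {}).neovim' --run 'nvim --version'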


  • Thanks for the input!

    I’m nervous about faking FHS as well, especially for specialized stuff. I don’t know much about steam-run or its caveats – so I can’t debug it (Maybe it turns out to be really simple and solid? Who knows…)

    Thanks for mentioning the GPU accel issues in distrobox – I was considering containerization to fight off any FHS issues, but it seems I shouldn’t jump the gun. I’ll probably just tighten up dev envs by trickling in nix-shell usage; having multiple versions of a package at once is an issue I’d def love to solve (in a way that’s more elegant than just a Dockerfile)
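
    (Something like this is what I mean by side-by-side versions – a sketch, assuming the nodejs_18/nodejs_20 attribute names are still in nixpkgs:)

    # two shells, two Node majors, nothing installed globally
    nix-shell -p nodejs_18 --run 'node --version'
    nix-shell -p nodejs_20 --run 'node --version'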

    Interesting that this is the third comment suggesting plain btrfs snapshots to survive Arch update mishaps. I have root and home on two flat btrfs subvols, so it shouldn’t be that hard to implement. (yeah yeah, “What backup?” is bad)
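
    The snapshot half really is small – before an update it’d be roughly this (a sketch assuming my flat-subvol layout and an existing /.snapshots directory):

    # read-only, datestamped snapshots of the root and home subvolumes
    sudo btrfs subvolume snapshot -r /     /.snapshots/root-$(date +%F)
    sudo btrfs subvolume snapshot -r /home /.snapshots/home-$(date +%F)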

    Seems like the simplest way out is those two smallish changes. Wish I could transcend into declarativity but the thread’s nix survivor ratio is grim




  • Building on this, I recommend zoxide instead of only fzfing or regexping.

    If you like to keep everything you ever create (college-student style), you can use z 18.04/1 to get to a directory like ~/hw/random-school/fresh-1/analysis-18.04/pset1.

    Lets you nest without fear.
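
    (Setup is just the shell hook – sketch assumes zsh; zoxide also does bash/fish:)

    # add to ~/.zshrc: zoxide then remembers every directory you cd into
    eval "$(zoxide init zsh)"
    # after one visit, that deep path is a fuzzy jump away
    z 18.04/1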

    (Also, about your question: I’ve personally used ~/git/<projname>/ and ~/git/<org>/<projname> at the same time – e.g. ~/git/aur/fuzzel-git)




  • Yeah I was considering using one of these two, out of curiosity.

    I’ve heard complaints about CMake… on pre-2015 forums, so I don’t know where it’s at now.

    I’ve done very little from the developer side of Meson but I do recall having tried a sound theme that, inexplicably, had a Meson-based installer. (It was just .ogg files iirc.) That’s probably a good sign if someone picked it over an install.sh
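
    (From the consumer side the Meson flow is pleasantly uniform – roughly this, from memory:)

    # configure into a build dir, then install (sudo only for system-wide prefixes)
    meson setup build
    sudo meson install -C build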

    Though you’re right, there’s probably little advantage in me using anything other than a Makefile here, except, again, curiosity





  • Not fishy at all! It’s like a lockpicking fan asking about locksport.

    If you’re looking for examples, GitHub has a lot of CVE proof-of-concepts and there are lots of payload git repos across git hosts in general, but if you’re looking for a one-stop-shop “Steal all credentials,” or “Work on all OSes/architectures just by switching the compile target,” then you’ll have a harder time. (A do-one-thing-well approach is more maintainable after all.)

    If you want to make something yourself that still tries to pull off a take-as-much-as-you-can approach, you should just search up how different apps store data and whether it’s easy to grab. Like, where browsers store their cookies, or the implications of X11’s security model (Linux-specific), or where Windows/Windows apps’ credentials and hashes are stored. Of course, there’s only so much a payload can do without a vulnerability exploit to partner with (e.g. Is privilege escalated? Are we still in userland? Is this just a run-of-the-mill Trojan?).
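
    To make the “where apps keep their data” part concrete: on Linux the browser cookie stores are just ordinary files you can point at (default Firefox/Chromium paths assumed; profile names vary):

    # cookie databases sit in each browser's profile directory
    ls -l ~/.mozilla/firefox/*/cookies.sqlite ~/.config/chromium/Default/Cookies 2>/dev/null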

    Apologies if my answer is too general.


  • Lots of good answers here but I’ll toss in my own “figure out what you need” experience from my first firewall funtime. (Disclaimer: I used nftables – it should be similar to ufw in terms of defaults though).

    • Right off the bat, everything unneeded was blocked. I “needed” no configuration, except for maybe…
    • Whatever CUPS listens on (port 631) – when I use it
    • Sometimes I ran python -m http.server – I unblocked port 8000 for personal use.
    • I chose to unblock port 53 (DNS). I wanted to connect to another computer via hostname IIRC (e.g. connecting to raspberry-pi.local. I might be misremembering this though).
    • At one point I played with NGINX – that’s port 80 (HTTP) and port 443 (HTTPS).
    • SSH was already permitted (port 22 – and since you need root to listen on ports below 1024 anyway, the low ports weren’t an issue for running typical apps)
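
    In ufw terms, the list above boils down to roughly this (a sketch – my actual rules were nftables, and your ports will differ):

    sudo ufw default deny incoming   # the "everything unneeded was blocked" baseline
    sudo ufw allow 22/tcp            # SSH
    sudo ufw allow 631               # CUPS, only while printing
    sudo ufw allow 8000/tcp          # python -m http.server
    sudo ufw allow 53                # the hostname thing (may really have been mDNS/5353)
    sudo ufw allow 80,443/tcp        # NGINX experiments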

    I didn’t use Wireshark back then, really. I think I just ran something like

    sudo lsof -nP -iTCP -sTCP:LISTEN
    

    which showed me everything actually listening on a port (mostly just harmless language servers).

    You don’t have to dive too deep into all the “egress” and “ingress” and whatnot unless you’re doing something special. Or your software uses a weird port. (LocalSend lol)


  • Obligatory Linux comment (Lemmy moment):

    Windows is often used for its compatibility and defaultness, but Linux is interesting in the sense that everything is patchable, tinkerable, and configurable. The low resistance to tinkering turns lots of Linux users into tinkerers – including tinkering via code.

    I’m not saying wipe your hard drive or even dual-boot. Maybe an older computer or VM could help, depending on what you have. But just in the past week I’ve screwed around with low-to-medium-difficulty Linux projects – configuring my lockscreen in C, building mildly usable desktop GUIs in TypeScript, and so on – not-too-committal stuff with a return value I literally see every time I lock my computer.

    Equivalent Windows projects can be harsher on the beginner-to-intermediate curve (back when I first tried Linux Mint, I’d been struggling to make a bookmark inspector in Visual Studio – I ended up Pythoning it instead) – not to say that Windows fun is by any means out of reach.


  • My friends Leetcoded and Codeforced quite a lot. Advent of Code is up there too, with the interesting caveat that Advent of Code also teaches you refactoring (due to the two-part nature of every problem).

    When I was younger, though, I had contempt for the whiteboard-problem-esque feel of these – but everyone is different.

    If you look hard enough, there is always a project at medium difficulty – not way too hard, like a huge project you feel won’t give you returns, and not way too easy, like some cowsay clone. Ever tried making a blog? You can host it for free on most Git pages implementations (codeberg, github, gitlab…).

    As for programming books, consider trying security books like Art of Exploitation – in the same vein, CTFs can use a decent amount of code, and they’re fun in terms of raw problem-solving. I started with the Bandit wargame, which does Linux problem solving from any machine that has SSH.
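
    Bandit’s level 0 is literally one ssh command, which is part of the charm (connection details as published on the OverTheWire site – they may change):

    # the password for the first level is listed on the Bandit page
    ssh bandit0@bandit.labs.overthewire.org -p 2220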

    I’m not by any means a l33t hax3r but I found them pretty fun in my learning journey.



  • According to tab autocomplete…

    $ git
    zsh: do you wish to see all 141 possibilities (141 lines)?
    

    But what about the sub options?

    $ git clone https://github.com/git/git
    $ cd git/builtin
    # looking through source, options seem to be declared by OPT
    # except for if statements, OPT_END, bug checks, etc.
    $ grep -R OPT_ | grep --invert-match --count -E \
    "OPT_END|BUG_ON_OPT|if |PARSE_OPT|;$|struct|#define"
    1517
    

    Maybe 1500 or so?

    edit: Indeed, maybe this number is too low. git show has a huge amount of possibilities on its own, though some may be duplicates and rewords of others.

    $ git show --
    zsh: do you wish to see all 489 possibilities (163 lines)?
    $ man git-show | col -b | grep -E "^       -" --count
    98
    

    An attempt at naively parsing the manpages gives a larger number.

    $ man $(find /usr/share/man -name "git*") \
    | col -b | grep -E "^       -" -c 
    1849
    

    Numbers all over the place. I dunno.


  • Huh, TIL.

    To be fair, git switch was also split out of git checkout’s features back in 2.23, but like git restore, its manual page warns that the behavior may change, and neither is in my muscle memory (lmao).

    I’ll probably keep using checkout since it takes less kb in my head. Besides, we still have to use checkout for checking out a previous commit, even if I learn the more ergonomically appropriate switch and restore. No deprecation here so…

    edit: maybe I got that java 8 mindset

    edit 2: Correction – git switch --detach checks out previous commits. Git checkout may only be there for old scripts’ sake, since all of its features have been split off into those two new commands… so there’s nothing really keeping me from switch.
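
    For my own notes, the rough translations (assuming the branch/commit names exist; switch and restore still carry that experimental warning in their man pages):

    git checkout main           # -> git switch main
    git checkout -b topic       # -> git switch -c topic
    git checkout -- some-file   # -> git restore some-file
    git checkout deadbeef       # -> git switch --detach deadbeef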


  • It probably is, but I think their main point is a protest against the age-old delineation into “GUI vs CLI” camps. I’m not saying you’re elitist, even if your statement could be read that way (tone is hard to communicate online, and the quotation marks around “their workflow” could come across as mocking). But structurally, I had a “Windows users are all button-presser noobs” phase myself, and with the clock decently rewound I would’ve typed something similar about the Git CLI (minus the kindness of a “use what you like” statement). They could be interpreting your statement as a propagation of the anti-GUI stereotyping.

    Evidently they prefer GUI but can effectively use the CLI – no one disagrees that the CLI is more functional.


  • Click to view diffs is super ergonomic; on the other hand, I actually have a story about the Git CLI trumping the GUI (spoiler: reflog).

    In high school we had gotten the funding to build a robot, and one of the adults in charge – guy was brilliant – was using GitHub Desktop to conduct a feature merge with the student who served as team lead. The thing was, he was used to older codebases, so all of his experience was with CVS rather than Git. So when the two slightly messed up the git merge, they discussed recloning everything instead of wasting time plumbing the error (relevant xkcd).

    That was one of the earliest times I had the cojones to walk up to a superior and say “No, you’re doing this totally wrong. You don’t have to do that.”

    He looked at me and nodded. “What would you do instead?”

    “Reflog.”

    “Reflog? I’ve never heard of it before. Can you show us?”

    I hopped onto the laptop and clicked around GitHub Desktop, but couldn’t manage to find any buttons related to reflog… so I went straight to cmd.exe instead.

    git reflog
    git reset --hard "HEAD@{7}"
    

    “Done. We can continue rebasing.”

    And after that, the advisor complimented me for using the command line tool!

    “Lots of GUI apps are just limited frontends to the real meat and potatoes, the command line. Nice job!”

    I felt like a wizard! And so I became the team’s Git-inator.

    edit: pruned story