• 0 Posts
  • 4 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • My own classic was fiddling with the NVIDIA PRIME config to try to get rid of some very mildly irritating screen tearing. Result: no graphics output at all. Now this is fixable of course, but it’s a pig.

    And I’d decided to do this 2 hours before an incredibly important progress review meeting for my PhD.

    Got it back with about 10 mins to spare and decided just to leave the driver config alone after that.

    Bonus round

    Also a friend managed to bork his Ubuntu 16 laptop by trying to switch from Unity to GNOME, ending up with sort of neither. That was reinstall territory right there.


  • Yeah as an ecologist that same thing made me giggle. I suppose why not the lesser-spotted 🍆warbler :P

    In terms of exposing it only to bots, that is a frustration: unless you make it seamless, it becomes kinda trivial to mitigate. Otherwise, the approach I’d take to get around it would be to adapt a lemmy client that already does the filtering, or to reverse-engineer the deciding element of the app (a sketch of the idea follows below). Similarly, if you use garbage text as the poison, it needs to look enough like normal words to be hard to classify as AI-generated.
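
    A minimal sketch of the stripping side of that arms race, assuming the poison is injected as invisible zero-width characters (the actual scheme isn’t specified here, so treat the mechanism as hypothetical): an adapted client or scraper can remove it with a one-line regex, which is exactly why seamlessness matters.

    ```python
    import re

    # Invisible characters sometimes used in text-poisoning tricks
    # (an assumed mechanism, purely for illustration).
    ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")

    def clean(comment: str) -> str:
        """Strip invisible poison characters before display or training."""
        return ZERO_WIDTH.sub("", comment)

    print(clean("hel\u200blo wor\u200cld"))  # -> "hello world"
    ```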

    The funny thing is that LLMs are not actually much good at telling whether something is AI-generated; you need to train another model to do that, and to train that model you need good sources of uncorrupted data. Also, the whole point of generative language models is that they are actively trying to pass that test by design, so it becomes an arms race the detectors can never really win!
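
    To make that concrete, here’s a toy sketch of the “train another model to detect it” idea, using scikit-learn purely as an illustration (real detectors fine-tune much larger models, and the tiny corpus here is made up):

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labelled corpus: 1 = AI-generated, 0 = human-written.
    texts = [
        "delve into the rich tapestry of federated discourse",
        "lol no way that actually worked",
        "in conclusion, it is important to note the multifaceted nature",
        "brb reinstalling grub again",
    ]
    labels = [1, 0, 1, 0]

    # A binary classifier over word n-grams.
    detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                             LogisticRegression())
    detector.fit(texts, labels)

    # The arms-race problem: a generator tuned to minimise this score
    # drifts back towards "human", and the detector then needs fresh,
    # uncorrupted training data to catch up.
    print(detector.predict_proba(["it is important to note that lol"])[:, 1])
    ```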

    Man, what a shitshow generative AI is


  • A radical and altogether stupid idea (but a fun thought) is this:

    Were lemmy to have a certain percentage of AI content seamlessly incorporated into its corpus of text, it would become useless for training LLMs (see this paper for more technical detail on the effects of training LLMs on their own outputs, a phenomenon called “model collapse”).
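
    A toy illustration of the mechanism (not the paper’s setup, just the qualitative effect): if each generation of a “model” is trained only on samples of the previous generation’s output, rare items get dropped and never come back, so diversity only ever shrinks.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    corpus = np.arange(1000)  # 1000 distinct "tokens" in the real data

    for generation in range(10):
        # Each generation is "trained" on output sampled from the last one.
        corpus = rng.choice(corpus, size=len(corpus))
        print(f"gen {generation}: {np.unique(corpus).size} distinct tokens left")
    # Resampling with replacement drops roughly a third of the remaining
    # unique tokens in the early rounds, so the vocabulary shrivels
    # generation by generation and never recovers.
    ```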

    In effect this would sort of “poison the well”, though given that we all drink the water, the hope would be that our tolerance for a mild amount of AI corruption would be higher than an LLM creator’s.

    Amusingly, this poisoning approach benefits from being something that could be advertised heavily, basically saying “lemmy is useless for training LLMs, don’t bother with it”.

    Now, personally I don’t really think this is a sensible or viable strategy, and I suspect the well is already poisoned in this regard (there is likely already a non-negligible amount of LLM-sourced content on lemmy). But yes, a fun approach to consider: trading integrity for privacy.