maybe this will work
linting and unit tests
I’m mostly talking about dry ingredients, which I can mash down, level off, leave heaping…
Most of the issues I had with cooking are a result of how recipes are written. Recipe says dice a thing? How small? A teaspoon of something? The hell does that mean? I can fit a ton of stuff in there if I mash it down. Salt to taste? Forget about it. Pretty soon I’m operating in panic mode, and maybe the recipe turns out, but I’m too stressed out to enjoy it.
Enter Sohla’s cookbook, which explains everything. It’s part cookbook, part autobiography, and part reference manual. Her YouTube videos are tremendous fun, too.
“You love the boats. I do not, but I love what they mean.” sweeping gesture toward the window
Imagine being the ruler of that city and letting him get “cured” instead of having him infodump / give daily reports about this.
A thing that hallucinates uncompilable code but somehow convinces your boss it’s a necessary tool.
Better than my method:
I’m fine with that. They’ve called us weird forever because they think it should bother us. IMO it doesn’t, but saying they’re weird bothers them even more.
A stool sample? Well I guess I could break the back off one of the kitchen chairs but I don’t see how that’s going to help.
As someone whose employer is strongly pushing them to use AI assistants in coding: no. At best, it’s like being tied to a shitty intern that copies code off Stack Overflow and then blows me up on Slack when it magically doesn’t work. I still don’t understand why everyone is so excited about them. The only tasks they can handle competently are tasks I can easily do on my own (and with a lot less re-typing).
Sure, they’ll grow over the years, but Altman et al. are complaining that they’re running out of training data. And even with an unlimited body of training data for future models, we’ll still end up with something about as intelligent as a kid who’s been locked in a windowless room with books their whole life and can either parrot opinions they’ve read or make shit up and hope you believe it. I think we’ll get a series of incompetent products with increasing ability to make wrong shit up on the fly until the C-suite moves on to the next shiny bullshit.
That’s not to say we’re incapable of creating a generally intelligent system on par with or exceeding human intelligence, but I really don’t think LLMs will get us there.
tl;dr: there’s a lot of woo in the tech community that the Linux community isn’t as on board with
Also, telling me to relax is like telling a tree to do pushups.
Steps to test: “Idk try some shit”
Yep. This is the way. Also, you’d be surprised how many devs don’t run through their own QA steps before asking other people to verify.
Yuck.
Given the deeply adversarial relationship I have with any GPT I’ve used, I doubt this.
Yeah, the answer to “can you do me a big favor?” is “what’s up?”
Yeah my wife calls it Resting Warlock Face.
Authority is a privilege and a responsibility, not a virtue or a right. If you are in a place of authority your life should be harder, not full of fawning sycophants that give you an ego boost.
I agree with you. Even if you never touch it, it’s nice to know what the libraries you’re calling are doing under the hood.