- cross-posted to:
- [email protected]
cross-posted from: https://programming.dev/post/8121843
~n (@[email protected]) writes:
This is fine…
“We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.”
[Do Users Write More Insecure Code with AI Assistants?](https://arxiv.org/abs/2211.03622)
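For a concrete sense of what “introduce security vulnerabilities” looks like in practice, here’s a hypothetical Python sketch (my own illustration, not code from the paper; the function names and the use of the `cryptography` library are assumptions): an “encrypt this string” answer that runs without errors, and therefore looks secure, next to a safer authenticated version.

```python
# Hypothetical illustration (not from the paper): two answers to
# "encrypt this plaintext with a symmetric key".
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_insecure(key: bytes, plaintext: bytes) -> bytes:
    # ECB mode is deterministic and unauthenticated: identical blocks leak
    # patterns, and ciphertexts can be tampered with undetected. It still
    # round-trips cleanly, which is why it can "rate as secure" in a quick check.
    padded = plaintext + b"\x00" * (-len(plaintext) % 16)  # naive zero padding
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return encryptor.update(padded) + encryptor.finalize()

def encrypt_secure(key: bytes, plaintext: bytes) -> bytes:
    # AES-GCM with a fresh random nonce per message: confidentiality plus
    # built-in integrity checking.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)
```

Both functions run fine on a 16/24/32-byte key, which is exactly the trap: nothing about the insecure version fails at runtime.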
I think this is extremely important:
Bad programmers + AI = bad code
Good programmers + AI = good code
LLMs amplify biases by design, so this tracks.
What do you mean? Sounds to me like it’s the same as any other tool: it takes skill to use well. Same as Stack Overflow, built-in code suggestions, or IDE-generated code.
Not to detract from its usefulness; I’m just saying it takes knowledge to use well.
As someone currently studying machine learning theory and how these models are built, I’m explaining that the models have, at their core, functions that amplify the bias of the training data: they identify mathematical associations within that data and use them to generate output. Because of that design, a naive approach to using the tool will amplify the bias of not only the training data but also the person using it.
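A toy sketch of that amplification (my own illustration with made-up data, not anything from a real model): fit a frequency model to a mildly skewed corpus, then decode greedily, and the majority pattern crowds out everything else.

```python
# Toy sketch (illustrative only): greedy decoding over learned frequencies
# turns a 60/40 skew in the training data into 100/0 output.
from collections import Counter

training_data = ["tabs"] * 60 + ["spaces"] * 40  # mild bias in the corpus
counts = Counter(training_data)

def greedy_sample() -> str:
    # Always pick the single most frequent option, as greedy decoding does;
    # the 60% majority becomes 100% of what the model ever emits.
    return counts.most_common(1)[0][0]

print([greedy_sample() for _ in range(3)])  # ['tabs', 'tabs', 'tabs']
```

Real LLMs sample rather than always taking the argmax, but low-temperature decoding pushes in the same direction: whatever dominates the training data gets over-represented in the output.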
This. As an experienced developer I’ve released enough bugs to mistrust my own work, so I spend as much time as the budget allows on my own personal QA process. It’s no burden at all to do the same with AI code. And of course, a well-structured company has further QA on top of that.
If anything, I find it easier to do that with code I didn’t write myself. Just yesterday I merged a commit with a ridiculous mistake that I should have seen. A colleague noticed it instantly when I was stuck and frustrated enough to reach out for a second opinion. I probably would’ve noticed if an AI had written it.
Also, in hindsight, an AI code audit would have picked it up.
The quote above covers exactly what you just said. That’s “yet were also more likely to rate their insecure answers as secure compared to those in our control group” at work :-)
I find that the people who complain the most about AI code aren’t professional programmers. Everyone at my company, and my friends who are in the industry, are all very positive towards it.
I’m still of the opinion that…
Good programmers = best code
eh, I’ve known lots of good programmers who are super stuck in their ways. Teaching them to use an LLM effectively can help break them out of the mindset that there’s only one way to do things.
I find it’s useful when writing new code because it can give you a quick first draft of each function, but most of the time I’m modifying existing applications and it’s less useful for that. And you still need to be able to judge for yourself whether the code it offers is any good.
I find it’s great for explaining convoluted legacy code; it’s all about using it effectively.
It really depends
If you want to avoid these issues, I’d suggest first reading the docs, then searching Stack Overflow or looking up the likely name of the function you need on grep.app, and only then turning to an LLM as a last resort. It’s usually good for prototyping, less so for more specific things.
I think that’s one of the best use cases for AI in programming: exploring other approaches.
It’s very time-consuming to play out what your codebase would look like if you had decided differently at the beginning of the project, so actually comparing different implementations is very expensive. This incentivizes people to stick with what they know works well, maybe even more so when they have more experience: they really know that this works very well, and they know what can go wrong otherwise.
Being able to generate code instantly helps a lot in this regard, although it still has to be checked for errors.
Good programmers + AI = extra, unnecessary work just to end up with equal quality code
Not even close to true but ok