In the article they proudly report that less experienced devs are “getting much better productivity gains” (by accepting more copilot suggestions).
As a natural-born cynic, I would instead say that maybe "less experienced devs lack the experience to know why some copilot suggestions have unintended consequences or are a bad design choice".
They are also probably being given significantly easier challenges. I would say I'm not necessarily faster than a junior developer; I just spend my time wrestling with harder questions.
What, your company doesn’t throw the hardest jobs at the newest devs to give them a proper trial by fire? Smh 😂
This is really dangerous and exposes a fatal flaw in capitalism. People will probably adopt AI for programming even if it means using proprietary software, meaning the industry will come under the control of the people with the massive resources to build those AI tools. Rejecting them is a prisoner's dilemma: you have to sacrifice short-term dollar gains for the long-term survival of programming as an open, accessible industry. The people who use them will contribute to damaging the ecosystem, and get more money while doing it.
And of course, remember that GitHub/Microsoft are building these automation tools on top of your own code. They are selling open source back to us and putting us out of a job while doing it. People really, really, really need to stop engaging with Microsoft.
To play the devil's advocate: most professional development happens on proprietary IDEs, but software development isn't under their control. AI-assisted IntelliSense isn't much different. The nature of our work will become more abstract as our tools improve. There are at least some open LLMs; one of them might surface as the alternative.
most professional development happens on proprietary IDEs
I don’t think this was always true. If it is now, that’s a bad thing.
Given how unstable and user-unfriendly computers are now, just imagine a future where programmers know even less about what they're doing.
I once thought that it might turn into a “one-eyed man is king” situation, but now I’m not even that sure.
Alright, guess I'll reiterate my usual beats here. AI code assistance is interesting, and I'm not against it. However, every current solution is inadequate until it does the following:
- Runs locally, or in an on-prem instance. I'm not going to ask legal or security whether I'm allowed to send our proprietary code off to be analyzed on a foreign server, and I'm not doing it without asking. It just isn't happening. (See the sketch after this list for what running locally could look like.)
- It has to be free, or paid for by my company. It's cool, and it might help me work, but paying a subscription fee for something that only benefits me at work is essentially a pay cut. Not interested.
- It has to analyze the entire repo. In my tests of ChatGPT so far, in most cases I've spent long enough giving it context that I could've just… solved the problem myself. It needs to have that context already.
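For what it's worth, the "runs locally" part is already technically feasible with small open models. Here's a rough sketch, assuming the Hugging Face transformers library and the open model Salesforce/codegen-350M-mono (a real model, picked purely for illustration; the prompt and setup are my own, not any vendor's product):

```python
# Rough sketch of local code completion, assuming the Hugging Face
# `transformers` library and Salesforce/codegen-350M-mono -- a small
# open code model, chosen here only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# In a real tool, whole-repo context would be retrieved and prepended here.
prompt = "def fizzbuzz(n):\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the demo deterministic; nothing leaves the machine.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Obviously a 350M-parameter model isn't Copilot-quality, but it shows the local-inference requirement isn't a fantasy.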
- I agree
- GitHub Copilot is free for students, and that’s the way I use it
- GitHub Copilot takes some context from recent files, but iirc they are working on “copilot for your codebase” or something like that
Ah, it makes way more sense for students, absolutely. None of your code is proprietary, so that's not a concern, and student pricing makes things easier.
Plus, your tech stacks are much simpler. Usually just… Java, or Python, or something. Not a Python web server using X framework for templating, Y framework for typing, and Z framework for API calls to some undocumented internal API.