I am the lead programmer for a local robotics team. As lead programmer, it’s my responsibility to train some of the rookies this upcoming offseason so that they can start individually contributing core robot code. While I was working with one of said rookies, I was asking them questions meant to provoke some thought about how to solve the issue I was helping them with. However, as I’m speaking, this young rookie begins typing something, sees GitHub Copilot pop up a suggestion, hits tab, and I have to quickly pivot into figuring out a nice way to say “that was wrong, and it’s wrong to use a tool like that while learning.”
This student is otherwise pretty good. He shows up, he pays attention (or at least, as much as a high school freshman can pay attention), and that’s all I can really ask of these students for now. But what struck me about this interaction was that this young student, who hardly knew the language, was trying to use AI to help him code. For some reason or another, he felt compelled to install tools to write code for him, despite his lack of experience and knowledge.
Now, how am I supposed to trust that this student knows what he’s doing? We work with 150-pound robots that can cross the field faster than most people can run. We tune big, heavy mechanical arms and elevators that can crush your fingers in less than a second. It takes a serious amount of attention to detail and experience to safely engineer a high-performing machine, particularly within the six-week build period of our competition.
My concerns with the use of AI aren’t limited to this niche interest; they extend to learning and productivity as a whole. As I mentioned earlier, this student is not experienced with programming, and was having difficulty writing basic function calls. But because the AI handled all the hard syntax for him, he didn’t have to worry about it. He didn’t have to worry about the logic or how ergonomic his code was either, and that is terrifying.
If inexperienced programmers feel tempted to rely on AI, how will they ever learn anything? Almost all of my learning has come from fucking around and finding out. AI removes the fucking around aspect, thus diminishing the amount of finding out.
To revisit my niche example of robotics (though this likely applies to many other fields that involve programming): say our team is at a competition after I’ve graduated, and the students have to fix a bug in the code. If these students continue learning with AI by their side, I imagine the process would go something like this:
- Something goes wrong during a match
- “Fix that please”
- “I don’t know how it works”
- “What do you mean you don’t know how it works? Didn’t you engineer the code for this robot?”
- “Uh… I sorta did… using my, uh… tools”
But AI is not only a problem for programmers still learning the basic constructs of the language they work in. Even senior developers seem to be falling for this trap. I recently read an article from a senior developer detailing their experience with AI, and it quite frankly shocked me. This developer admits to generating swaths of code with AI, delegating their mentorship duties to ChatGPT, and blowing off code reviews. It’s stunning to see this behavior from a supposed veteran of the industry, particularly when they criticize junior developers for doing the same.
What shocks me most about this article are the supposed “solutions” the author details. Such solutions include:
- “Explain It Back”, where you pollute your code with a comment on every AI-generated line explaining what it does, consuming twice the time it would have taken to just write the code normally (see the sketch after this list)
- “AI-Free Fridays”, where the author encourages the reader to take off the training wheels and learn something, as if being frustrated while learning is atypical or dangerous
- My favorite of all, “Actually Mentor”, where the author literally just tells you not to use AI to blow off your junior developers.
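For a sense of what “Explain It Back” produces, here’s a minimal, hypothetical Java sketch (the class and method are my own invention, not from the article): every generated line gets a comment that merely restates what the line already says.

```java
// A hypothetical sketch of "Explain It Back" taken literally: every
// AI-generated line is annotated with a comment restating what it does.
public class DriveMath {
    // Clamps a motor output to the safe range [-1.0, 1.0].
    public static double clamp(double value) {
        // If the value is greater than 1.0, return 1.0.
        if (value > 1.0) {
            return 1.0;
        }
        // If the value is less than -1.0, return -1.0.
        if (value < -1.0) {
            return -1.0;
        }
        // Otherwise, return the value unchanged.
        return value;
    }

    public static void main(String[] args) {
        // Prints 1.0, because 2.5 is clamped to the upper bound.
        System.out.println(clamp(2.5));
    }
}
```

None of those comments tell you anything the code doesn’t; you’ve simply paid twice for the same information.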
Here’s a solution that works for every problem brought up in both my article and his: DON’T USE AI! It baffles me that people act as though AI is simply inevitable, and that if you don’t use it, you’ll get left in the dust. This is simply not true. Who will develop AI tooling? Who will fix the terrible architecture and bugs introduced by shitty AI code? Who will adapt the codebase to new requirements? Who will develop something new, for which AI has no training data to rely on? Who will stand up and call out the bullshit developers are submitting and pretending to have written themselves?
A story I was told when I took a course on HTML and CSS was about a Reddit user who forgot how to code after automating his job for six years. He managed to coast by at the company he worked for, making $95,000 a year, but his entire foundation was built on lies. He pretended to be a hard-working, trustworthy software developer, but was in fact “using his tools” to do nothing at all (sounds somewhat familiar, huh?).
I understand that the above example is an extreme, and you’re not going to completely forget all programming knowledge if you start using AI. However, consider what you’re doing to your brain. In using AI, you are creating a dependency upon it. You are offloading your thinking to the machine, and you are thereby diminishing your cognitive capabilities in turn.
To properly learn and maintain your skills as a developer, it is essential to understand that AI is a children’s toy used for cheating on essays and making Daniel Stenberg’s life harder. By using AI in any significant fashion, you are tossing your belief in your own abilities, and the skills required to maintain your knowledge, into a dumpster and setting them on fire. Don’t listen to the tech bros using AI to make bots that dump AI videos onto Instagram. Listen to what’s right, and stop using AI for coding, period.