People seem to think that if they can code, they can simply drop AI into the process and instantly get flawless results. That’s like knowing how to ride a bike and assuming you can skip driving school. There’s a learning curve, and you actually have to climb it.
“AI made huge changes to my code and did things I didn’t ask for.”
What did you actually ask it to do? Prompting well is a skill, and vague instructions get vague results: “clean this up” invites sweeping changes, while “extract the duplicated validation into one helper and touch nothing else” does not.
“I don’t like the coding style AI uses.”
Did you tell it what style you want? You can set personal or project-wide instructions so it produces code and documentation in your preferred style.
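The exact mechanism depends on your tool (GitHub Copilot, for instance, reads a repository-level `.github/copilot-instructions.md`; other assistants have their own file names), but the idea is the same everywhere: write the style rules down once and let every prompt inherit them. A sketch of what such a file might contain:

```markdown
# Project instructions (illustrative example; adapt to your tool's file name)

- Write TypeScript, not JavaScript; prefer named exports.
- Follow the existing ESLint/Prettier config; never reformat files you didn't change.
- Keep functions small; extract helpers instead of nesting callbacks.
- Every public function gets a short doc comment explaining why, not just what.
- Do not add a new dependency without asking first.
```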
“AI makes a lot of bugs.”
And you never do? That’s why we test. Some bugs happen because the AI misunderstood your prompt, others because it missed a corner case. Test properly and automate those tests so regressions get caught.
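For the corner-case bugs, the remedy hasn’t changed: once you find one, pin it down in an automated test so it can’t come back. A minimal sketch using Python and pytest, where `parse_price` is a made-up stand-in for whatever the AI wrote for you:

```python
import pytest


def parse_price(text: str) -> float:
    """Hypothetical helper: turn a price string like '$1,299.50' into a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    if not cleaned:
        raise ValueError("empty price string")
    return float(cleaned)


def test_handles_thousands_separator():
    # The corner case an earlier revision missed: thousands separators.
    assert parse_price("$1,299.50") == 1299.50


def test_rejects_empty_input():
    # Pin the error behaviour so a future rewrite can't change it silently.
    with pytest.raises(ValueError):
        parse_price("   ")
```

Run it in CI and the same mistake can’t sneak back in, no matter who or what edits the code next.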
“AI makes the codebase unmaintainable.”
Humans do that all the time as well. You can ask AI to refactor when you spot duplication or technical debt. Don’t just complain; have it fix the mess.
“AI doesn’t understand our codebase.”
If it takes a human months to understand your system, why would AI instantly get it? Either give it the same onboarding you’d give a new dev (docs, context) or make your codebase more approachable for both humans and AI.
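In practice that onboarding can be lightweight: a small, current docs folder that both a new hire and an AI assistant can load as context. The layout below is only an illustration, not a standard:

```text
docs/
  architecture.md   # the main components and how a request flows through them
  conventions.md    # naming, error handling, logging and testing rules
  glossary.md       # the domain terms the code leans on
```

Point the AI at those files (or reference them from your project instructions) before asking it to touch anything non-trivial.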