For most companies and projects, the answer is that humans are still very much needed in the loop. Without strong processes and automation to maintain code quality, the productivity gains from automated development are short-lived. Architecture drifts, bugs pile up, and knowledge of the codebase slowly evaporates.
Automated testing helps catch regression bugs, which is where a large chunk of testing time typically goes. If we can trust that existing functionality won’t break, we can focus human attention on the new stuff.
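A regression test suite in this spirit simply pins down current behavior so future changes can't silently break it. A minimal sketch, using a hypothetical `slugify` helper invented for illustration (not anything from this article):

```python
# Hypothetical example: a small utility plus a pytest-style regression test.

def slugify(title: str) -> str:
    """Turn a post title into a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())


def test_slugify_existing_behavior():
    # These assertions encode today's behavior; if a refactor changes the
    # output, the suite fails and a human gets pulled back into the loop.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Extra   Spaces  ") == "extra-spaces"
```

The point is less the function than the contract: once behavior like this is locked in by tests, reviewers can skim the old code and spend their attention on the new.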
We can also lean on code scanners and agentic QA to enforce style and naming conventions, and to check for common bugs or vulnerabilities. While these tools don’t guarantee functional correctness, they can filter out the obviously broken code before it ever reaches a human. Let the AI clean up the basics, then hand it over for review.
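To make the scanner idea concrete, here is a toy naming-convention check built on Python's standard `ast` module. It is a sketch of the kind of rule a real linter ships hundreds of, not a production tool, and the snake_case rule here is just an illustrative choice:

```python
import ast

SNAKE_CASE_HINT = "function names should be lowercase snake_case"


def check_function_names(source: str) -> list[str]:
    """Return a warning for each function whose name contains uppercase letters."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag definitions like `def BadName():` that violate the convention.
        if isinstance(node, ast.FunctionDef) and node.name != node.name.lower():
            warnings.append(f"line {node.lineno}: {node.name!r}: {SNAKE_CASE_HINT}")
    return warnings


# check_function_names("def BadName(): pass") -> one warning
# check_function_names("def good_name(): pass") -> []
```

Wired into CI, even trivial checks like this catch a class of sloppiness mechanically, so the human reviewer never has to comment on it.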
In an ideal future, humans define business-level requirements and sign off on feature-level testing. Architects would occasionally review the big picture, steering the design and queuing up refactoring to prevent architectural decay.
But getting there takes serious investment. Documentation and instructions must be good enough for AI to generate consistent, high-quality code. Test coverage must be close to 100 percent. Specs must be detailed enough to leave no room for interpretation.
This isn’t meant to discourage; it’s just a bit of realism about where we are now and what could eventually be possible. Building this kind of infrastructure takes time. But if we keep investing and improving the process step by step, we can gradually reduce the number of human fingerprints needed throughout the software development lifecycle.
So yes, maybe someday we won't have to read all the code. Just... most of it. For now.