I’ve been using AI-assisted coding for a few years now, but I like to stay in control. I read the code and want to understand what the AI is doing, so I can be sure the output meets my standards for security and quality. More often than not, I still have to step in and ask the AI to fix things before the result reaches an acceptable level.
But what about applications that don’t have to be perfect?
Internal tools, for example: used within a limited scope, where failure doesn’t mean catastrophe. Could we build these in a fraction of the time, and at a fraction of the cost, of doing them “properly”?
Last week, I needed a tool for managing a project roadmap. Just a simple week-level view of themes, responsibilities, and timelines. I couldn’t find any existing app that didn’t make me cringe, so I gave myself permission to vibe code it.
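To give a sense of the scope, the whole thing boils down to a tiny data model and a grid of weeks. A rough TypeScript sketch of the kind of shape involved might look like this (the names and fields are illustrative, not the code the AI generated):

```typescript
// Illustrative data model for a week-level roadmap view.
// This is a sketch of the concept, not the generated code.

interface RoadmapItem {
  id: string;
  theme: string;     // e.g. "Billing revamp"
  owner: string;     // person or team responsible
  startWeek: string; // ISO week, e.g. "2025-W18"
  endWeek: string;   // ISO week, e.g. "2025-W23"
  notes?: string;
}

interface Roadmap {
  title: string;
  items: RoadmapItem[];
}

// Group items by theme so the UI can render one row per theme,
// with each item spanning its start-to-end week range.
function groupByTheme(roadmap: Roadmap): Map<string, RoadmapItem[]> {
  const rows = new Map<string, RoadmapItem[]>();
  for (const item of roadmap.items) {
    const row = rows.get(item.theme) ?? [];
    row.push(item);
    rows.set(item.theme, row);
  }
  return rows;
}
```

Nothing exotic, in other words. Exactly the kind of app that feels like a good candidate for handing over the keyboard.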
Letting go of control was the hardest part. I didn’t dive into the code. I just accepted what the AI gave me and focused on how it looked and functioned in the UI.
In 1.5 hours, I had a barely working version. Another hour got me to something usable. After some hands-on use, I spent another hour refining the UI and logic. So, in total: 3.5 hours to build a functional internal tool.
From a user’s perspective? It works well and looks decent. Definitely good enough for limited internal use. But when I finally looked under the hood, I found several security issues and bugs that the AI couldn’t have fixed without my guidance.
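To give a flavour of the kind of issue I mean, here’s a hypothetical example of a class of problem that generated CRUD code often ships with: an update endpoint with no authentication and no limits on what the client can write. This is an illustration of the pattern, not code from my tool, and the Express setup, routes, and `store` calls are all assumptions made for the sketch:

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();
app.use(express.json());

// Placeholder auth middleware for the sketch: reject requests without a token.
function requireAuth(req: Request, res: Response, next: NextFunction) {
  if (!req.headers.authorization) return res.sendStatus(401);
  next();
}

// Risky pattern: no auth check, and the whole request body is written as-is.
app.put("/items/:id", (req, res) => {
  // store.update(req.params.id, req.body); // blindly trusts the client
  res.sendStatus(204);
});

// Safer shape: require authentication and whitelist the updatable fields.
app.put("/v2/items/:id", requireAuth, (req, res) => {
  const { theme, owner, startWeek, endWeek, notes } = req.body ?? {};
  // store.update(req.params.id, { theme, owner, startWeek, endWeek, notes });
  res.sendStatus(204);
});

app.listen(3000);
```

The fix is trivial once a human spots it. The point is that nobody spots it if nobody looks.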
This isn’t to trash the vibe coding approach. Far from it.
I think it’s great for quick prototypes, PoCs, and internal tools that don’t need to be bulletproof. But when it comes to production-grade apps, we’re still not at the point where we can skip human oversight. AI needs us to guide and validate – for now.
That said, things are moving fast. Models are getting better at writing secure and maintainable code. Tooling is improving. And with time, vibe coding could evolve into a real option for building more complex and robust software.