This poses a challenge when implementing Retrieval-Augmented Generation (RAG) models to power AI agents that assist employees with questions about processes and best practices. The irony? Most of these questions (and their answers) are repetitive, yet they remain scattered across various platforms, leading to inefficiencies and lost productivity.
In today's remote work culture, these questions aren't asked in person but in digital spaces: chat platforms, forums, and email. While these interactions aren't traditionally considered documentation, they are searchable and valuable knowledge assets. So why not harness this existing Q&A data to enhance both documentation and AI capabilities?
Here's how:
✅ Deploy a specialised AI observer to monitor internal communication channels for Q&A discussions.
✅ When the AI detects new insights not found in existing documentation, it suggests adding them.
✅ If contradictions arise, the AI flags them and recommends updates.
✅ Human reviewers validate and approve suggestions, continuously refining documentation and improving AI accuracy.
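To make the loop concrete, here is a minimal sketch of the triage step the observer performs for each detected Q&A pair. Everything in it is a simplifying assumption: the function names, the thresholds, and the toy token-overlap similarity (a production system would use an embedding model and an LLM judge instead). It shows only the decision logic: covered, flag a conflict, or suggest a new doc.

```python
def _tokens(text: str) -> set[str]:
    """Naive tokenizer: lowercase words with trailing punctuation stripped."""
    return {w.lower().strip(".,?!") for w in text.split()}

def similarity(a: str, b: str) -> float:
    """Jaccard token overlap as a stand-in for embedding similarity."""
    ta, tb = _tokens(a), _tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def triage(question: str, answer: str, docs: list[str],
           covered_at: float = 0.2, agree_at: float = 0.15) -> str:
    """Classify a detected Q&A pair against existing documentation.

    Returns one of:
      "covered"       - a doc already addresses it; no action needed
      "flag_conflict" - a related doc exists but the observed answer
                        diverges; recommend an update for human review
      "suggest_new"   - nothing related found; suggest adding a doc
    Thresholds are illustrative, not tuned values.
    """
    best = max((similarity(question, d) for d in docs), default=0.0)
    if best >= covered_at:
        # A related doc exists; check whether the observed answer agrees.
        closest = max(docs, key=lambda d: similarity(question, d))
        if similarity(answer, closest) < agree_at:
            return "flag_conflict"
        return "covered"
    return "suggest_new"
```

The key design point is that the AI never edits documentation directly: every "flag_conflict" and "suggest_new" result becomes a suggestion queued for a human reviewer, which is what keeps the loop trustworthy.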
We've implemented this approach with our customers, and it has proven to be a game-changer in transforming semi-tacit knowledge into structured, reusable documentation. The best part? There's no need for a massive documentation overhaul project before rolling out AI assistants.