CTO AI Corner: How do you manage data security while using tools like Cursor or GitHub Copilot?

As part of my AI coaching work, I’ve encountered a wide range of policies and concerns around protecting company data, personal information, and sensitive assets when using AI-assisted development tools. I’ve also gathered some best practices along the way.

Here are a few foundational practices:

Review your AI vendor agreements. Ensure your contract explicitly states that data passed to the model won't be used for training, shared with third parties, or otherwise exposed. This is your first line of defense, and it will protect you if sensitive data is ever submitted unintentionally.

Check your tooling configurations. Some platforms default to using your data to improve their models. Disable model training and data sharing wherever possible, and double-check privacy and security settings in your AI tools.

Control the AI’s access. In tools like Cursor and GitHub Copilot, use .gitignore, .cursorignore, or equivalent exclusion mechanisms so that files you don’t want indexed never enter the model’s context.
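As an illustration, a `.cursorignore` file uses the same pattern syntax as `.gitignore`; the paths below are examples, not a recommended baseline:

```gitignore
# Secrets and credentials -- never index these
.env
*.pem
secrets/

# Proprietary data and exports
data/customers/
exports/*.sql
```

The same patterns can usually be mirrored in whatever exclusion mechanism your other AI tools support, so one list of sensitive paths covers the whole toolchain.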

Keep secrets out of your IDE. Use template configuration files to guide the AI while keeping actual secrets and credentials outside the project context. This helps maintain security without sacrificing developer productivity.
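One common way to apply this, sketched here with illustrative variable names, is to commit a placeholder template such as `.env.example` while the real `.env` stays untracked and excluded from the AI’s context:

```shell
# .env.example -- committed to the repo; safe for the AI to read as a guide
DATABASE_URL=postgres://USER:PASSWORD@localhost:5432/appdb
PAYMENT_API_KEY=replace-me
```

The real `.env` holds the actual credentials and is listed in `.gitignore` (and any AI-specific ignore file), so the assistant learns the shape of the configuration without ever seeing a secret.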

Sanitize your logs. Developers routinely paste logs and stack traces into AI chats, so make sure your logging never captures personal or otherwise sensitive user data, and that anything you do paste is safe to share.
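A minimal sketch of a last-line-of-defense redactor, assuming just two illustrative patterns (emails and key/token assignments); a real one would cover your own identifiers, IPs, and token formats:

```python
import re

# Illustrative patterns only -- extend for your own data (IPs, user IDs, token formats).
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<email>"),
    (re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"), r"\1=<redacted>"),
]

def sanitize(text: str) -> str:
    """Replace personal or secret-looking values before pasting a log into an AI chat."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(sanitize("login failed for alice@example.com, api_key=sk-12345"))
# -> login failed for <email>, api_key=<redacted>
```

Better still is to keep such data out of the logs in the first place; a filter like this only reduces the blast radius when something slips through.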

Keep sensitive IP offline. If you're working with proprietary algorithms or patentable code, keep it separate. Avoid using AI tooling on these parts to prevent intellectual property risks and simplify legal ownership.

April 3, 2025
Authors
Tomi Leppälahti

Leave us a message and together we’ll map out how and where AI can help you.
