The recent DeepSeek news has set off a flurry of hot takes and opinion pieces, but for most organizations, the immediate effect is a need to understand whether and how employees are using the DeepSeek app. The terms of service may, for example, allow the company to use the data your employees share; does that fit your AI acceptable use policy?
We are already hearing concern from enterprise security and privacy teams about DeepSeek use. If this applies to your organization, there are three steps you might consider immediately:
- Get basic visibility. Can you answer how many employees are using DeepSeek at work right now? Your firewall or proxy logs might be able to tell you, though some can't (see the log-analysis sketch after this list).
- Gain visibility into the actual conversations and data being shared with DeepSeek. Your firewall almost certainly can't do this, but it matters: what data is being shared, and is that acceptable?
- Apply policy to this use. For example, department A can use it but department B cannot, and no one may share customer data or corporate IP. Do you have a way to enforce this type of policy? A sketch of such a rule follows the detection example below.
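For the first step, even basic log analysis can give you a rough head count. Below is a minimal sketch assuming a Squid-style proxy access log, where the client IP and requested URL sit in fixed whitespace-separated fields; the field positions and the `access.log` path are assumptions you would adjust for your own proxy or firewall export.

```python
from collections import Counter

def is_deepseek(host: str) -> bool:
    # Match the apex domain and any subdomain (e.g. chat.deepseek.com).
    return host == "deepseek.com" or host.endswith(".deepseek.com")

def count_deepseek_clients(log_path: str) -> Counter:
    """Count requests to DeepSeek domains per client IP."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue  # skip malformed lines
            client_ip, url = fields[2], fields[6]
            # Reduce a full URL or a CONNECT target (host:port) to a bare hostname.
            host = url.split("://")[-1].split("/")[0].split(":")[0]
            if is_deepseek(host):
                hits[client_ip] += 1
    return hits

if __name__ == "__main__":
    for client, count in count_deepseek_clients("access.log").most_common():
        print(f"{client}\t{count} requests")
```

This only tells you who is connecting, not what they are sending, which is why the second and third steps still matter.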
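For the third step, here is a minimal sketch of what the department-plus-data-sharing policy from the example above could look like as code. The department names, the regex patterns, and the `Decision` type are all illustrative assumptions, not any specific product's API; a real deployment would enforce this inline at the network or browser layer.

```python
import re
from dataclasses import dataclass

# Hypothetical allowlist: "department A" from the example above.
ALLOWED_DEPARTMENTS = {"engineering"}

# Hypothetical patterns for data no one may share with DeepSeek.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like identifier
    re.compile(r"\bCUST-\d{6}\b"),         # hypothetical customer record ID
]

@dataclass
class Decision:
    allowed: bool
    reason: str

def evaluate(department: str, prompt: str) -> Decision:
    """Apply the two rules: department allowlist, then data-sharing check."""
    if department not in ALLOWED_DEPARTMENTS:
        return Decision(False, f"department '{department}' is not approved for DeepSeek")
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return Decision(False, f"prompt matches blocked pattern {pattern.pattern}")
    return Decision(True, "permitted")

print(evaluate("engineering", "Summarize this public press release"))
print(evaluate("finance", "Draft an email about account CUST-123456"))
```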
The broader AI industry is evolving rapidly: expect new tools to emerge, and expect your people to want to try them. At the same time, solutions already exist for governing and managing the risk of corporate AI use.
Related blog post: Enabling the Secure Use of AI: Why NGFWs Aren’t Enough