Today most organizations have limited ability to restrict and secure their employees’ use of generative AI. For every LLM that an organization might deploy internally, employees access dozens of external gen AI bots and apps. New chatbots appear on the Web every week, making it difficult to block access by IP address or domain name. Moreover, a user’s actual intent (e.g., using a bot to help draft a contract) may not be obvious from network traffic alone. This makes it difficult to ensure compliance with an organization’s AI acceptable use policies.
In this webinar, we address some of the most pressing use cases in user activity monitoring and governance for generative AI, present an architecture for analyzing that activity, and discuss approaches to integrating it with existing enterprise tools. If you are a security or privacy professional trying to better understand and protect employees’ use of both internal and external gen AI models, bots, and apps, join us to learn new approaches for doing so.