Today, most organizations have limited ability to restrict and secure their employees' use of generative AI. For every LLM an organization deploys internally, employees access dozens of external gen AI bots and apps. New chatbots appear on the web every week, making them difficult to block by IP address or domain name. Moreover, a user's actual intent (e.g., using a bot to help draft a contract) may not be obvious. All of this makes it difficult to ensure compliance with an organization's AI acceptable use policies.

In this webinar, we address some of the most pressing use cases in user activity monitoring and governance for generative AI, present an architecture for analyzing that activity, and discuss approaches to integrating it with existing enterprise tools. If you are a security or privacy professional trying to better understand and protect employees' use of both internal and external gen AI models, bots, and apps, join us to learn new approaches for doing so.

Speaker:

Rick Caccia is CEO and co-founder of WitnessAI, a software company offering solutions for AI risk management, security, and governance. He has a long history of bringing successful security and compliance products to market and has held leadership roles at Palo Alto Networks, Google, ArcSight, and Symantec. Most recently, he was SVP of Marketing for security operations and threat intelligence at Palo Alto Networks.

