Understanding user intent is crucial when integrating AI technologies. For the first time, we can iteratively interact with machines, refining their responses and turning them into valuable assistants capable of addressing meaningful challenges. However, with this new form of human-machine interface comes the critical responsibility of ensuring secure and ethical usage.

From an IT practitioner’s perspective, how can we secure this evolving technology while enabling its potential? As organizations adopt AI tools, it’s essential to safeguard confidential information and prevent unintended data sharing with external platforms such as OpenAI’s ChatGPT, Google’s Gemini, or Microsoft’s Copilot. For example, when users interact with AI, are they inadvertently training these systems to benefit competitors? Or worse, are they enabling AI tools to become competitors themselves? To navigate these risks, organizations must implement robust guardrails that balance usability with security.

Understanding Usage Through Intent-Based Observability

The first step in managing AI securely is understanding how it is used within the organization. AI observability is essential for identifying the intents behind user interactions. Unlike emails or documents, AI prompts lack conventional summaries such as subject lines or titles. The most effective way to monitor these interactions is to identify the user’s intent: whether they aim to write code, draft contracts, create financial reports, or design spreadsheets.

We call this approach intent-based observability. It focuses on fine-grained intents, such as “write a Python function” or “draft an employment contract.” These detailed intents provide insight into user activity without requiring exhaustive manual review of every interaction. Once identified, they can be analyzed in real time or retrospectively.
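As a rough illustration, the sketch below shows how a single prompt might be mapped to a fine-grained intent. The classify_intent helper and its keyword taxonomy are hypothetical stand-ins for whatever classifier an observability pipeline would actually use (typically an ML model or an LLM call), not a reference to any specific product API.

```python
# Hypothetical sketch: map a raw prompt to a fine-grained intent label.
# A keyword heuristic stands in for a real classifier so the flow is easy
# to follow; intent names and keywords are illustrative only.

FINE_GRAINED_INTENTS = {
    "write a Python function": ["python", "function", "def "],
    "draft an employment contract": ["employment", "contract", "agreement"],
    "create a financial report": ["revenue", "quarterly", "financial report"],
    "design a spreadsheet": ["spreadsheet", "excel", "formula"],
}

def classify_intent(prompt: str) -> str:
    """Return the best-matching fine-grained intent, or 'unknown'."""
    text = prompt.lower()
    scores = {
        intent: sum(keyword in text for keyword in keywords)
        for intent, keywords in FINE_GRAINED_INTENTS.items()
    }
    best_intent, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_intent if best_score > 0 else "unknown"

# Example: log the intent alongside the interaction for later analysis.
print(classify_intent("Write a Python function that parses CSV files"))
# -> "write a Python function"
```

The intent label, rather than the full prompt, becomes the unit that is stored, counted, and reviewed, which is what makes analysis tractable at scale.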

Moving from Observability to Policy

After understanding usage patterns, the next step is to create and enforce policies. Fine-grained intents provide detailed insights, but for efficient runtime decisions, coarse-grained intents are more practical. For example, coarse intents might include broader categories such as coding, contract drafting, or email editing. Administrators can define these categories and use them to determine whether a prompt should be allowed, blocked, or routed to a specific large language model (LLM).
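A minimal sketch of that runtime decision might look like the following. The coarse category names, policy actions, and model identifier are illustrative assumptions, not any particular product’s schema.

```python
# Hypothetical sketch: roll fine-grained intents up into coarse categories
# and make a runtime decision (allow, block, warn, or route to a specific LLM).

COARSE_CATEGORY = {
    "write a Python function": "coding",
    "draft an employment contract": "contract drafting",
    "create a financial report": "financial reporting",
    "design a spreadsheet": "spreadsheet design",
}

# Administrator-defined policy per coarse category (placeholder values).
POLICY = {
    "coding": {"action": "route", "target": "approved-internal-llm"},
    "contract drafting": {"action": "block"},
    "financial reporting": {"action": "warn"},
}

def decide(fine_grained_intent: str) -> dict:
    """Map an intent to a coarse category, then look up its policy."""
    category = COARSE_CATEGORY.get(fine_grained_intent, "uncategorized")
    return POLICY.get(category, {"action": "allow"})  # default shown: allow

print(decide("draft an employment contract"))  # -> {'action': 'block'}
```

The point of the rollup is that runtime decisions are made against a small, administrator-defined set of coarse categories rather than against thousands of fine-grained intents.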

Policy creation should be flexible and context-aware, enabling organizations to:

  1. Define which user groups (e.g., through Microsoft Entra ID or similar identity platforms) can perform specific intents.
  2. Tailor policies based on user location, applications accessed, and the nature of the interaction.
  3. Apply appropriate guardrails, such as warnings, blocks, API calls, or redirection to approved LLMs.

This hierarchical policy structure ensures that AI usage aligns with organizational priorities while mitigating risks.
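To make the hierarchy concrete, here is one possible shape such a policy could take. The group, location, application, and guardrail names are hypothetical placeholders; a real deployment would resolve groups through an identity platform such as Microsoft Entra ID.

```python
# Hypothetical sketch of a hierarchical, context-aware policy. All names
# below are illustrative assumptions, not a product schema.

POLICIES = [
    {
        "category": "coding",
        "allowed_groups": ["engineering"],       # e.g., resolved via Entra groups
        "allowed_locations": ["office", "vpn"],
        "applications": ["chatgpt-enterprise"],
        "guardrail": {"action": "route", "target": "approved-internal-llm"},
    },
    {
        "category": "contract drafting",
        "allowed_groups": ["legal"],
        "allowed_locations": ["office"],
        "applications": ["*"],
        "guardrail": {"action": "warn"},
    },
]

def evaluate(category: str, group: str, location: str, app: str) -> dict:
    """Return the guardrail of the first matching policy, else block."""
    for policy in POLICIES:
        if (policy["category"] == category
                and group in policy["allowed_groups"]
                and location in policy["allowed_locations"]
                and (app in policy["applications"] or "*" in policy["applications"])):
            return policy["guardrail"]
    return {"action": "block"}  # default-deny when nothing matches

print(evaluate("coding", "engineering", "vpn", "chatgpt-enterprise"))
# -> {'action': 'route', 'target': 'approved-internal-llm'}
```

Falling back to a block rather than an allow when no policy matches is one possible default; the right choice depends on an organization’s risk tolerance.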

Benefits of an Intent-Based Framework

Adopting intent-based observability and policy enforcement enables organizations to:

  • Summarize and understand the most critical AI use cases.
  • Securely integrate AI by enforcing targeted policies.
  • Empower users with cutting-edge tools while protecting sensitive data.

By leveraging this framework, organizations can embrace AI confidently and securely. We invite you to explore our solution and share your thoughts on how it can further support your organization’s goals. Your feedback is invaluable as we refine this approach to meet the evolving needs of modern IT environments.