In part two of our blog series, we’ll explore a few ways AI usage will become more complex in 2025. IT security leaders will see AI apps begin to interact with one another directly, the usage model will expand beyond text-based chat in a browser, AI apps will return problematic responses, and cost concerns will become more prominent.

AI-AI Decision Making – Increasingly, purpose-built and trained AI models evaluate data and pass the results to another AI model that has itself been purpose-built and trained to create a certain type of output. This AI-AI decision making is appearing in more and more fields. In drug research, for example, one AI might suggest molecular structures based on desired properties, and a second AI then simulates how those molecules interact with biological targets. Because no human sits between the two, a model issue in the first AI can have significant downstream effects without oversight, as the sketch below illustrates.
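A minimal sketch of that kind of pipeline, using stand-in stub classes for the two purpose-built models; the class names and methods are hypothetical illustrations, not any specific research system.

```python
# Hypothetical AI-to-AI pipeline: one model proposes, another evaluates,
# and nothing reviews the hand-off in between.

class ProposerModel:
    """Stands in for a model trained to suggest molecular structures."""
    def generate(self, design_goal: str) -> list[str]:
        # In practice this would be a call to a hosted model endpoint.
        return [f"candidate-{i} for {design_goal}" for i in range(3)]

class SimulatorModel:
    """Stands in for a model trained to simulate interactions with biological targets."""
    def evaluate(self, structure: str) -> dict:
        return {"structure": structure, "predicted_affinity": 0.72}

def run_pipeline(design_goal: str) -> list[dict]:
    proposer, simulator = ProposerModel(), SimulatorModel()
    candidates = proposer.generate(design_goal)
    # No human checkpoint between the two models: a flaw in the proposer's
    # output feeds directly into the simulator's evaluation.
    return [simulator.evaluate(c) for c in candidates]

if __name__ == "__main__":
    print(run_pipeline("high binding affinity, low toxicity"))
```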

Multimodal AI – We are already seeing multimodal AI transform business, from OpenAI’s Advanced Voice and DALL-E image generation to Google’s NotebookLM generating hosted podcasts. Every day, the market asks AI providers to accept more types of inputs and produce the corresponding outputs, and AI will stretch the idea of generated content as quickly as we can think it up. Multimodal AI is already creating and executing entire processes. For CISOs, this means a larger attack surface that goes beyond text analysis, or even OCR, and it demands a new way of thinking about detecting anomalous behavior and risk to the business’s operations. The sketch below shows why text-only inspection falls short.
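A minimal sketch of that idea, assuming a request to a multimodal app can carry several content types that each need their own check; the inspector functions are hypothetical placeholders, not any particular product’s API.

```python
# Hypothetical per-modality inspection: a single multimodal request is only
# allowed through if every content part passes a type-specific check.

from dataclasses import dataclass

@dataclass
class ContentPart:
    kind: str       # "text", "image", "audio", ...
    payload: bytes

def inspect_text(payload: bytes) -> bool:
    # Placeholder text check; a real control would be far richer.
    return b"confidential" not in payload.lower()

def inspect_image(payload: bytes) -> bool:
    # Placeholder image check; a real control might OCR or classify the image.
    return len(payload) < 10_000_000

INSPECTORS = {"text": inspect_text, "image": inspect_image}

def allow_request(parts: list[ContentPart]) -> bool:
    """Allow the request only if every part passes an inspector for its type."""
    for part in parts:
        inspector = INSPECTORS.get(part.kind)
        if inspector is None or not inspector(part.payload):
            return False  # unknown or failing content types are blocked
    return True

print(allow_request([ContentPart("text", b"Summarize this slide"),
                     ContentPart("image", b"<binary image bytes>")]))
```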

AI Models Gone Wild (The Response Matters Too) – A great deal of emphasis has been placed on protecting models from abuse, and companies like WitnessAI have invested in protecting not just the models themselves but also businesses’ safe usage of AI technologies of all kinds. One part of AI security that demands more attention is the responses AI provides to various prompts. AI is by nature non-deterministic, which means responses can contain suggestions that are dangerous, illegal, or unethical. A recent news article detailed an AI model designed to provide companionship that encouraged the user to harm themselves, resulting in a tragic outcome. A simple sketch of a response-side check follows.
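A minimal sketch of a response-side guardrail, in which the model’s answer is screened before it reaches the user; the model call and the safety check are hypothetical stand-ins, not a specific vendor’s API.

```python
# Hypothetical response-side guardrail: screen the model's answer, not just
# the user's prompt, before returning it.

UNSAFE_MARKERS = ("harm yourself", "build a weapon")  # illustrative list only

def call_model(prompt: str) -> str:
    # Placeholder for a real, non-deterministic model call.
    return "Here is an answer to: " + prompt

def response_is_safe(text: str) -> bool:
    # Real deployments would use a trained safety classifier; a keyword
    # list keeps this example self-contained.
    lowered = text.lower()
    return not any(marker in lowered for marker in UNSAFE_MARKERS)

def guarded_chat(prompt: str) -> str:
    response = call_model(prompt)
    if not response_is_safe(response):
        return "This response was withheld by policy."
    return response

print(guarded_chat("Help me plan a study schedule"))
```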

Choosing Models Based on Cost and Risk – There are many models to choose from, and the options are unlikely to shrink in 2025. Different models, however, provide different capabilities at different costs. As companies build new AI applications, they will develop a better understanding of which models are best suited to a particular app. Routing across models based on the prompt will become more prevalent in 2025, with specialized tools available to provide pre-prompt routing, along the lines of the sketch below.
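A minimal sketch of pre-prompt routing, picking a model per request from a rough cost and risk estimate of the prompt; the model names, prices, and routing rules are hypothetical illustrations.

```python
# Hypothetical pre-prompt router: cheap model by default, more capable
# (and more expensive) model for risky or lengthy prompts.

MODELS = {
    "small": {"cost_per_1k_tokens": 0.0005, "good_for": "short, low-risk prompts"},
    "large": {"cost_per_1k_tokens": 0.01,   "good_for": "long or complex prompts"},
}

SENSITIVE_TERMS = ("legal advice", "medical", "source code")  # illustrative only

def route(prompt: str) -> str:
    """Return the name of the model this prompt should be sent to."""
    risky = any(term in prompt.lower() for term in SENSITIVE_TERMS)
    long_prompt = len(prompt.split()) > 200
    return "large" if (risky or long_prompt) else "small"

for p in ("Summarize this paragraph",
          "Review this source code for vulnerabilities"):
    print(p, "->", route(p))
```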

Part 1: AI Gets Aggressive

Part 3: AI Gets Compliant