News: WitnessAI Raises $58 Million for Global Expansion Read More
We're here to talk about what's going on in the world of artificial intelligence and security challenges around that. Super fast moving space. You don't really know what's gonna happen in January, let alone by the end of twenty twenty six, but we're gonna try to figure out what you guys are hearing, what you're thinking. Rick is the CEO of Witness. Previous to that, very successful operator in a bunch of security companies, Chronicle and Google, ArcSight, Symantec. I'm curious what you're hearing when you're talking to these large enterprise CISO these days. The way I would put it is in twenty four, twenty twenty four, CISOs were trying to figure out what the heck to do with AI. This year, twenty twenty five, it was much more, I would describe it, as compliance focused. How do we start to proactively invest and spend to prepare for what's coming? I think what we've seen near the end of the year and definitely going to next year is a shift from compliance stance to security stance. If I go back a year or two, you didn't have that many models and apps in production. The ones you had were sort of small. If you look going forward, these apps are gonna get scaled up. You have companies standing up apps to sell tickets, cars, financial transactions. There's money there. And when there's money in apps, they start to get attacked. So I think with AI, with the enterprise, you're gonna see CISOs shifting from, okay, year was about putting things in place to prepare. In the year ahead, we're expecting attacks. We're gonna start seeing bots and models get hit. You're gonna see people put agents into production and those are gonna get knocked over. And so you're gonna see a very big shift to security stance versus compliance stance. I've been hearing similar things, especially around AgenTic. The security risks around previous gens of AI, they're there for sure, but people are a lot more concerned about agents and security. There's a lot more attack surface there. Absolutely. So let's look at the other ways before agents. So you have a model, you train it, you stand up some sort of app, and maybe your employees use that third party app or you've stood up an app to your customers. There are security risks to that, but they're relatively constrained. It might get jailbroken. It might have a prompt injection. It might cough up data. The agents are a little different, right? Those can take action on a user's behalf. They can use identity of their their human. They can do lots of things and so the attack surface sort of opens up considerably. And so the security risk that goes along with that's gonna be pretty large. And I think in twenty twenty six, we're gonna start reading about more and more of these. You're gonna start seeing customer databases getting knocked over, money being stolen. And that's gonna rapidly shift security spend to defense as opposed to compliance. So Barmak, you're the founding partner here at Ballistic. But mostly you've been an operator in your career. CEO of AlienVault, you started at Fortify, lots of experience. And one of the reasons that I always enjoy chatting with you is unlike most investors that have been on my boards, you really understand the product world and the landscape of of CISOs and cyber security, and and so I wanna pull on some of that background as well. When you think about companies using AI as part of their applications, where do you think that's going this year? I often get asked this question from our investors and the fund and from fellow CISOs and other folks in the security industry, when are we gonna solve the cyber problem? And the real answer is you don't solve the cyber problem, you fight the cyber problem. And there's two reasons for it. You have the adversarial angle, which is the adversary is constantly inventing new threat vectors and approaches to break into systems. But to your question, the attack surface is expanding exponentially. So for every new innovation inflection that happens, you went from mainframes to mini computers to open systems to the web. And now with the advent of the Internet, really all that's doing is expanding the attack surface. In this specific case, AI has given birth to this exponential expansion of the attack surface. And so AI based applications are gonna be a really interesting attack surface for exploits by hackers. To give you a concrete example, as annoying as it was three years ago to interact with an AI agent or an agent you guys will remember when you called an airline, often, you know, they would basically ask you questions automatically around what your what your objective was, and they wouldn't get it right until you would get to a live human being agent. Now it's gotten to a point that the vast majority of the interactions are happening by AI agents and to a large degree of fidelity and accuracy. And I predict in twenty twenty six, the chatbot agents are gonna be the main drivers of interaction with external customers, with employees, and a lot of the work that human beings were doing essentially are gonna be done by chatbots. And the inadvertent consequence is it gives the attackers an ability to now attack the chatbots. So this brand new attack surface, prompt injection, model poisoning are great ways to get in these chat bots, and they'll do inadvertent things that would not only put the company at risk, but actually do the consumer a lot of injustice. AI based applications are gonna exponentially be on the rise, and that attack surface is gonna be great grounds for hackers to go after. One of these new attack services you're describing is human facing chatbots, where a person, potentially an adversary, potentially a bot, is talking to that and trying to crack in. There's another one which is MCP. It's an interface specifically designed for an AI agent to have a conversation, to call an API with an organization. How are how are companies thinking about MCP and whatever its successors are in terms of their posture? MCP stands for model context protocols. It's kind of the first real open system that's been introduced, to your point Dan, for agents and AI applications to be able to interact with external systems. Those systems primarily include data sources because kind of the key for AI to be accurate in its inference is the data that is being fed. And that data is sitting in silos across the enterprise. As the question becomes, how do you have AI based application of agents access that data but do it in a consistent way so everybody can essentially talk to the same API? And MCP introduced a set of APIs for you to be able to access that. It's now been expanded so that you can do agent to agent communication, you can do service to service communication. It was the adoption of MCP as kind of a gateway protocol or a communication protocol for agents and AI systems. And to your point, Dan, unto itself is a prime example of an attack surface that can be basically breached. And a breach of of an MCP server is very sinister because you don't know that a breach has occurred and yet the communication might be thrown off altogether. And that's gonna have inadvertent consequences. We're gonna probably think about the security ramifications as an afterthought, and this is the one time in the industry that we can actually grab the security and risk elements of MCP on the ascendancy and stay ahead of the eight ball. But it's a big attack surface. Let's go to the cost of all of this for a second. So all this AI defense, it's consuming GPUs, looking at models that are running trying to detect these attacks. You can think about the cost of defense and all of the GPU challenges that are out there. Let's go back to the world of CPUs for a second before we get to GPUs. We had this problem with CPU compute in the nineties that gave birth to elastic computing, grid computing, which eventually became the underpinning of cloud computing. And the idea behind that was that in the old days, you would essentially allocate compute statically to an application and you couldn't dynamically expand compute or contort compute depending on the needs of the application. You probably remember Dan from the old days that grid computing was an academic research based on how can you allocate CPU compute to an application resource when it needs it at the point of time that it needs it. Without which cloud computing today wouldn't be possible. So if you didn't have elastic computing, you really couldn't have the hyperscalers that you have today and and be able to run-in AWS, Google Cloud, Oracle, Azure, etcetera. We're facing the same problem with GPUs. So GPUs are the new compute unit for AI based applications. And it's really interesting because it sort of happened happenstance a little bit. GPUs were built initially for high compute graphical interfaces for gaming systems. It just so happened that they found out that the application of it for AI is pretty interesting. You see the huge success of Nvidia and all the GPU vendors essentially as a result of that tailwind that happened with AI. Part of the problem right now is we're going through the same phase where GPU compute is allocated statically to AI inference and agents. And we're gonna run out of that compute fairly quickly no matter how many GPU farms are gonna get built. And so the cost is gonna be fairly prohibitive unless the industry comes up with a brand new way much the same way we did with elastic computing where we can sort of extend the concepts of elastic and grid computing, cloud computing to the GPU world. And there's a whole debate going on around scale up versus scale out. It's a big problem enterprises are gonna face. So Rick, let's shift gears for a second. We've been talking a lot about AI security challenges. If you are the CIO at these organizations, the CEO at these organizations, you're trying to adopt AI as a company. You're trying to adopt it to gain competitive advantage, to accelerate your time to market for new products and services, to automate a bunch of, you know, tasks to be more efficient. So there's this top down drive in organizations to purposely adopt AI as quickly as possible. But because of all the risks we're talking about, companies are hesitant. They're going slow. They're blocking things. What's your perspective on what companies are really looking for in order to be able to use AI the way they'd like to. Most of the time, when you talk to companies and they wanna bring some new initiative on and they struggle with it, the barrier is tech debt. Oh, we wanna move into a new business but our systems can't support it. It's gonna take too long to do. With AI, the problem's not tech debt. It's tech doubt. We doubt that my employees are using these random third party apps safely. I have doubt that when we stand up a bot to do business that it's gonna work safely and securely. My developers wanna build all these cool new agents and I doubt that they have the right permissions. At the top level, what companies are looking for is a way to have confidence instead of doubt so they can do more. They're looking for a way to build some unified way for confidence. So why is that hard? The challenges in AI in the enterprise have evolved over time. Quickly but over time. So you might say, well, my employees are using ChatGPT. I don't want them to leak customer data. We're gonna build a little browser plugin to protect that. Oops, Microsoft Copilot came along and it's built into the operating system and my browser plugin can't see it. Then at the same time companies are building their own, there's a whole different set of challenges around that. Now we have AgenTic. So the notion is if I'm a CISO or CIO, and my CEO is laying the law down and saying, adopt AI quickly. Well, I already bought some tools, they work on pieces of it, they don't work together, I've got new things coming. How do I work when workflows cover agents and apps and models and employees and customers? You need some coherent confidence building layer to let you see all the stuff across there, let you put policies that control it and protect it. And I think CIOs and CISOs and then sort of second order effect CEOs are struggling with how do we put that in place so we can actually do the things we wanna do. Because if we don't do it, our competitors surely will and we'll be left behind. I think that for a lot of these companies, their adoption of AI is existential. They'll be able to do it and get ahead in their industry, or they won't and their competitors will swoop past them. It's probably the most existential change that's happened maybe since the industrial revolution. I'll give you an example with one company we talked to. The security and IT departments were looking at AI controls for employee use. The security and IT department were looking at this over time, being very thoughtful about how they were gonna adopt it. And after months, the board and the CEO said, enough. Time to start letting employees use AI. Time to move. You have thirty days. And that challenge of like, hey, how do we put these things in place? That ability to have that confidence to move forward quickly is what companies are looking for. And they're looking for technology that can actually solve that problem, and very few can't. Barmak, what's your take on AI confidence? Rick pointed out something really interesting. For as long as I've been in the cyber industry, the primary marketing message of cyber has been fear, uncertainty, and doubt. If you don't do something, something bad's gonna happen. And I draw this analogy for a lot of people that are frustrated that boards or CEOs don't pay attention as much to information security and risk as they should. And it's really interesting because the frontal lobe of the human brain is a very optimistic part of the brain. So, you know, especially if you're talking to developers and engineers and people who build things, you need that optimism to go and build stuff. It's very antithetical to a cyber thought process, which is a very obsessive compulsive thought process around what could happen when things don't go as intended. And so a lot of humanity, whether it comes to physical security or cybersecurity, is oriented towards bad things don't happen because they haven't happened yet. So typically, what you see in the cyber world, compliance and regulatory requirements force the need for cyber on one threat or, God forbid, a breach happens and that elevates the importance of cybersecurity. That's how companies typically respond and buy cyber controls. This is the one time, because we're catching market timing right on the ascendancy that we can be in front of the eight ball and actually makes a cyber an enablement factor rather than a FUD factor. So instead of us going to the boards, to Rick's point, and taking an extreme position saying nobody can use AI or agentic, or people should use AI and agentic without any unfettered guardrails in place, We could actually use the enablement message and say, look, we can enable and accelerate the use of AI in agentic as long as you take the appropriate observability, guardrails, and risk policies when it comes to safe and compliant use of agents in AI. And it's a very fresh and enlightening way to approach cyber. I bet you CISOs and chief risk officers are gonna welcome it because they're they're finally aligned with the board and the objectives of the business rather than creating this FUD all the time over and over again. I had a recent opportunity to talk to a few different CISOs at banks about this, and it was really interesting to see how different their posture was. There was one CISO who was talking about how they'd enabled all their employees to use AI to take advantage of AI with appropriate observability and guardrails. And the guy across the table was looking at him stunned and responding that they've just blocked everything. And they felt that they were more secure by taking all this block position. And immediately, it went back and forth of, you think you've blocked them, but they're going around those blocks, so they're using it anyway in this unsupervised, ungoverned way. But I think that there's a lot of companies, call it their risk posture, their risk tolerance, that are looking at AI in different ways and and are trying to find a way to kind of grapple with their culture of risk and challenge as they adopt AI. Precisely. Lots of things are moving quickly in this world. I don't think I've ever experienced technology change faster than right now. Probably if we do this conversation in a month, you'll have a whole new range of predictions and things to talk about.
See how WitnessAI empowers secure, responsible AI adoption—book a personalized demo with our security experts.