Securing AI-Driven Enterprise: Challenges and Strategies

As AI becomes foundational to enterprise operations, it brings a unique set of security, governance, and compliance challenges. At VULNCON 2025, a panel of industry experts moderated by Shivakumar Dhakshinamoorthy, and featuring Krishna Pandey from Xerox, Muslim Koser from Fortinet, and Shikhil Sharma from Astra Security, convened to discuss how Chief Information Security Officers (CISOs) can navigate this rapidly evolving landscape.

The conversation shifted from the theoretical to the practical, addressing how organisations can harness AI's speed while defending against its inherent risks.

The Rise of Shadow AI and Visibility Gaps

One of the most pressing hurdles for the modern CISO is the emergence of Shadow AI. Similar to the Shadow IT issues of previous decades, business units are now independently procuring AI licenses—from ChatGPT to Llama—without central oversight.

  • Unprecedented Scale: Experts predict there will be roughly 12,000 AI applications in the enterprise ecosystem by 2030.
  • Data Governance: AI requires massive amounts of data to function, yet many organisations lack a solid data strategy or classification foundation.
  • DLP Limitations: Traditional Data Loss Prevention (DLP) tools are struggling to keep up; research cited on the panel suggests they miss roughly 80% of AI-bound sensitive data while generating false-positive rates near 95%. The sketch below illustrates why pattern matching falls short.
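
To see why, here is a minimal sketch of the pattern-based check that legacy DLP relies on. The patterns and the scan_prompt helper are illustrative assumptions, not any particular product's engine:

```python
import re

# Minimal sketch of a pattern-based DLP check for outbound AI prompts.
# The patterns and scan_prompt helper are illustrative assumptions, not
# a production engine.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Name the patterns found in a prompt before it leaves the organisation."""
    return [name for name, rx in PATTERNS.items() if rx.search(prompt)]

# Caught: a literal key pasted into a chat prompt.
print(scan_prompt("Debug this, my key is sk-abcdef1234567890abcd"))  # ['api_key']
# Missed: sensitive strategy described in prose fires no pattern at all.
print(scan_prompt("Our unannounced Q3 acquisition target is the firm we met last week"))  # []
```

A pasted API key trips the scanner, but the same class of secret described in plain prose sails through untouched, which is precisely the gap the panel highlighted.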

The Productivity Paradox: Faster Code, More Vulnerabilities

AI is hailed as a "great equaliser" that allows developers to write code at twice the speed. However, this increased productivity often expands the attack surface at an unprecedented pace.

Panelists noted that AI-generated web apps tend to have 50% to 60% more vulnerabilities than those written by humans. Because many generative AI models were trained on code that did not necessarily follow secure coding practices, they frequently reproduce insecure patterns. This gives engineers a "vibe coding" workflow but leaves security teams struggling to keep up with the mess, as the hypothetical snippet below illustrates.
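
A classic instance of this flaw class (a hypothetical example, not one discussed on the panel) is string-built SQL, which assistants often emit because it dominates their training data, shown alongside the parameterised form that closes the hole:

```python
import sqlite3

# Hypothetical illustration of assistant-generated code that interpolates
# user input straight into SQL.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: passing "admin' OR '1'='1" as username returns every row.
    query = f"SELECT id, role FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver escapes the input, closing the injection.
    return conn.execute(
        "SELECT id, role FROM users WHERE name = ?", (username,)
    ).fetchall()
```

At twice the coding speed, flaws like the first form accumulate faster than manual review can catch them.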

Safeguarding the Models: Red Teaming and Zero Trust

Protecting the AI model itself is as critical as protecting the data it processes. The panelists argued that the principles of security remain the same; only the variables have changed.

  1. Red Teaming AI: Experts suggest thinking of AI as a "pre-teen kid": very sharp, but easily swayed to the "dark side". It can leak information through simple conversation or via prompts hidden inside uploaded PDFs (indirect prompt injection).
  2. Adversarial Training: Models must be hardened against "inference attacks", in which adversaries submit large numbers of inputs to reconstruct the essence of a machine learning model.
  3. Zero Trust & Micro-segmentation: Organisations should adopt a "never trust, always verify" posture. This includes using robust Role-Based Access Control (RBAC) to ensure that tools like Microsoft Copilot only access documents the user is authorised to see (a minimal sketch of such a check follows this list).
  4. Defence in Depth: Recent exploits, such as the zero-click "EchoLeak" flaw in Microsoft 365 Copilot (CVE-2025-32711) that allowed data exfiltration via a crafted email, highlight the need for layered security.
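
Here is a minimal sketch of that RBAC check as a pre-retrieval filter for a Copilot-style assistant. The Document class, role names, and keyword matching are illustrative assumptions; a real deployment would query the identity provider and an embedding index instead:

```python
from dataclasses import dataclass

# Minimal sketch of RBAC-filtered retrieval for a Copilot-style assistant.
@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    allowed_roles: frozenset[str]

def retrieve_for_user(index: list[Document], user_roles: set[str],
                      query: str) -> list[Document]:
    """Filter by authorisation first, so unauthorised text never reaches the model."""
    visible = [d for d in index if user_roles & d.allowed_roles]
    # Naive keyword relevance stands in for semantic search.
    terms = set(query.lower().split())
    return [d for d in visible if terms & set(d.text.lower().split())]

index = [
    Document("hr-001", "salary bands for engineering staff", frozenset({"hr"})),
    Document("eng-042", "deployment runbook for the payments service",
             frozenset({"eng", "sre"})),
]
# An engineer's query surfaces the runbook but can never pull the HR file,
# no matter how the prompt is phrased.
print(retrieve_for_user(index, {"eng"}, "payments deployment"))
```

Filtering before retrieval matters: once unauthorised text is in the model's context, no prompt-level guardrail can reliably keep it from leaking.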

The Vendor Dilemma and the Shift to Local AI

Enterprises must vet their third-party AI vendors with extreme care. Many existing vendors are "asynchronously" pushing AI features into their applications, shipping them ahead of any updated contracts governing how customer data is used.

There is also a growing trend toward on-device or "hybrid" AI models. By hosting models locally (similar to Apple's recent on-device AI efforts), organisations keep sensitive data on the device rather than sending it to the cloud, significantly reducing geopolitical and privacy risks; a minimal sketch of the local-hosting pattern follows.
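
As a concrete illustration, the snippet below sends a prompt to a locally hosted model. The endpoint, model name, and Ollama-style API are assumptions standing in for whatever runtime an organisation actually deploys; the point is that the request never leaves the machine:

```python
import requests

# Sketch of the local-hosting pattern: prompts go to a model served on the
# device or inside the network perimeter, so sensitive text never leaves it.
# Assumption: an Ollama-style server on localhost:11434 exposing a "llama3"
# model; substitute whatever local runtime your organisation deploys.
def ask_local_model(prompt: str) -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# The review below runs entirely on local hardware; nothing is sent to a
# third-party cloud API.
print(ask_local_model("Summarise the indemnity clause in this contract: ..."))
```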

The Human Element: Upskilling for an AI World

The panel concluded that while AI may not take away jobs, it will fundamentally change them. For students and professionals, the key is to be a creator rather than just a consumer.

  • Adaptability: The pace of change is so high that today's LLMs may soon be superseded by Artificial General Intelligence (AGI).
  • Original Thinking: AI lacks original creativity; it is trained on existing data. Humans must maintain their "thinking power" and use AI as a tool to augment, rather than replace, their critical decision-making.

Conclusion

Think of integrating AI into an enterprise like installing a high-speed jet engine on an old wooden ship. While the engine provides incredible speed (productivity), the increased force will quickly expose every rot-spot and loose nail in the hull (security vulnerabilities). To stay afloat, you cannot just focus on the engine; you must reinforce the entire structure of the ship (governance and data security) to handle the new velocity.