AI-Powered Threats vs. AI-Powered Defenses: Who Wins the Cyber Arms Race?

The intersection of Artificial Intelligence (AI) and cybersecurity is rapidly changing the threat landscape. At VULNCON 2025, a panel of CXOs explored the tension inherent in this shift: AI-Powered Threats versus AI-Powered Defenses, delving into topics like autonomous malware, deepfake phishing, and the associated ethical challenges. Here are the critical insights shared by industry leaders Sanil Nadkarni, Praveen Kumar Motupalli, Ankit Agarwal, and Vishal Kalro on navigating this intensifying cyber arms race.

The AI Offensive: Weaponized Threats are Already Here

The panel highlighted that the offensive use of AI is becoming dangerously accessible. The dark web is already populated with "bad GPTs," specialized AI models designed for malicious use.

  • Weaponization on the Dark Web: Tools such as Wolf GPT, Dark Bird GPT, and WormGPT are available, some free of charge and others via subscription models. These models are purpose-built to streamline attacks against companies.
  • Automating Financial Crime: These tools are being used to automate previously manual tactics, such as sophisticated Business Email Compromise (BEC) schemes, including "whaling" attacks that target C-suite executives.

AI in Defense: Augmentation Over Automation

While the threats are escalating, AI provides critical advantages for defenders, particularly in improving operational efficiency and scalability. However, leaders emphasized that AI must augment human capabilities, not replace them.

  • Combating SOC Burnout: AI is critical for the Security Operations Center (SOC), an area notorious for analyst burnout. By pairing AI systems with human analysts, companies can leverage AI for enhanced threat detection, significantly improving speed and scalability.
  • Focus on Efficiency and ROI: From a management and board perspective, the primary questions about AI adoption are: How is AI being used to drive operational efficiency? What value and ROI does it add, and how will it improve security? The goal is to repurpose human time for more strategic tasks.
  • The Power of UEBA: AI has particularly benefited User and Entity Behavior Analytics (UEBA), an area where tooling previously fell short. AI can correlate data and build digital user profiles (e.g., tracking how frequently a user accesses a VPN or changes network locations), thereby elevating the security posture of endpoints and users; a minimal sketch of this idea follows this list.
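To make the UEBA idea concrete, here is a minimal sketch of baseline-and-deviation scoring in Python. Everything in it is hypothetical and for illustration only: the event records, the `anomaly_score` helper, and the z-score threshold are invented for this example, and production UEBA platforms use far richer features and models.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical event records: (user, day_index, vpn_login_count).
events = [
    ("alice", d, c) for d, c in enumerate([3, 4, 3, 5, 4, 3, 4])
] + [("alice", 7, 41)]  # sudden spike on day 7

def build_baselines(events):
    """Aggregate per-user daily VPN login counts into a simple profile."""
    counts = defaultdict(list)
    for user, _day, count in events:
        counts[user].append(count)
    return counts

def anomaly_score(history, observed):
    """Z-score of the newest observation against the user's own history."""
    base = history[:-1]            # everything before the latest day
    if len(base) < 2:
        return 0.0                 # not enough history to judge
    sigma = stdev(base) or 1.0     # avoid division by zero on flat history
    return abs(observed - mean(base)) / sigma

profiles = build_baselines(events)
for user, history in profiles.items():
    score = anomaly_score(history, history[-1])
    if score > 3.0:                # illustrative threshold, not a standard
        print(f"ALERT: {user} deviates from baseline (z={score:.1f})")
```

The design point is the one the panel made: the model profiles each user against their own history and flags deviations from it, rather than applying a single global rule to everyone.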

The Imperative of Responsible AI and Data Guardrails

A significant challenge raised was the leakage of proprietary data when employees use public Large Language Models (LLMs). This risk demands rigorous internal controls.

  • The Risk of Public LLMs: Cases exist where employees have used public LLMs, such as ChatGPT, resulting in proprietary code becoming publicly available. This risk is heightened when individuals use free trial versions of these tools or integrate them carelessly with source code repositories like GitLab.
  • Classifying Data: Teams must be trained not only on how an AI tool works but also on data classification: understanding what kinds of data the organization holds and which of them can safely be shared.
  • Mitigation Strategies: To balance AI usage with security, organizations must adopt strategies such as implementing private LLMs (hosting them internally on platforms like Azure), using traditional controls like Data Loss Prevention (DLP) and Cloud Access Security Brokers (CASB), and setting up strong "guardrails". One panelist noted that their digital engineering company blocks all code uploads to public GitHub repositories and monitors and tags developer prompts used in public AI tools; a simple sketch of such a prompt guardrail follows this list.
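As an illustration of the monitor-and-tag guardrail idea, below is a minimal Python sketch that screens developer prompts for obvious secrets or classification markers before they are forwarded to a public AI tool. The patterns, the `screen_prompt` function, and the print-based audit log are hypothetical stand-ins; a real deployment would rely on an enterprise DLP/CASB engine and a proper audit pipeline.

```python
import re

# Illustrative patterns a DLP-style guardrail might flag; real deployments
# would use a vendor DLP engine or a tuned classifier instead.
BLOCKLIST = [
    (re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"), "private key"),
    (re.compile(r"AKIA[0-9A-Z]{16}"), "AWS access key ID"),
    (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"), "credential assignment"),
    (re.compile(r"(?i)proprietary|confidential|internal[- ]only"), "classification marker"),
]

def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to a public LLM.

    Every prompt is tagged with the submitting user for audit, mirroring
    the monitor-and-tag approach described by the panel.
    """
    hits = [label for pattern, label in BLOCKLIST if pattern.search(prompt)]
    print(f"[audit] user={user} hits={hits or 'none'}")  # stand-in for a SIEM log
    return not hits

# Example: a prompt pasting a credential is blocked before it leaves the network.
ok = screen_prompt("dev42", "Debug this: api_key = 'sk-12345'")
print("forwarded" if ok else "blocked")
```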

Shaping the Next Generation of Cyber Talent

For students and early career professionals, the adoption of AI is fundamentally changing required skillsets and career trajectories.

  • Essential Skills for the AI Era: While technical skills like Python and programming experience are valued because they help candidates understand how toolsets work and how data correlation occurs, the non-negotiable attributes are responsibility, patience, and human integrity. Students should also learn the basic AI terminology and how language models work.
  • The Danger of Outsourcing Intelligence: A critical warning was issued about relying too heavily on AI tools. By outsourcing tasks like advanced writing or complex calculations, employees risk letting their own skills and thought processes atrophy. If AI tools suddenly become unavailable (e.g., due to company policy), productivity can drop significantly, a phenomenon one panelist likened to outsourcing one's IQ.
  • Evaluating Potential Employers: Students looking to work in AI security should treat interviews as a two-way street and ask strategic questions. To evaluate if a company is serious about AI, look at the overall company culture, observe whether leaders and board members are discussing AI capabilities, and determine if the team you are joining aligns with your career vision.

Conclusion

The cyber arms race powered by AI demands a smarter security strategy. Defenders must leverage AI to enhance their capabilities while ensuring that fundamental security principles, such as proper patching and sound infrastructure setup, remain non-negotiable. Ultimate success in this landscape rests not just on sophisticated tools, but on augmented human intelligence, ethical deployment, and unwavering patience.