Systematic Analysis of AI Code Security Landscape

Surveillance Data: Incidents, Failures & Research (2024-2025)

A comprehensive investigation of documented security vulnerabilities, operational failures, and research findings in AI-assisted code development. This is an ongoing longitudinal study with inherent reporting biases and coverage limitations.

RESEARCH STATUS: Active Data Collection • n=169 • Coverage Est. ~5-10%

Total Dataset

169

Entries (incidents + reports)

Security & Operational Incidents

149

88.2% of dataset

Research Reports

20

11.8% of dataset

Public Sources

56

33.1% publicly verifiable

Vulnerability Rate

29.5-72%

Range across studies

Direct Losses

13 cases with documented financial impact

Dataset Composition

Analysis of 169 entries categorized by type and impact

Security Incidents

40

CVEs, vulnerabilities, breaches, exploits, and malicious attacks

  • 8 registered CVEs
  • Remote code execution attacks
  • Data breaches and exposures
  • Supply chain compromises

Operational Failures

109

Production outages, data loss, corruption, and system failures

  • Database deletions
  • Code corruption incidents
  • Memory leaks and crashes
  • Service outages

Research Findings

20

Academic studies, industry reports, surveys, and statistical analyses

  • Vulnerability rate studies
  • Industry adoption statistics
  • Developer sentiment surveys
  • Predictive analyses

Research Methodology

Study Population

Mixed dataset of AI-assisted code security incidents, operational failures, and research findings collected through systematic monitoring of public sources, industry reports, and private disclosures.

Data Collection

Multi-source intelligence: CVE databases, security advisories, developer forums, academic research, industry case studies, and private disclosures (2024-2025).

Categorization Process

Automated classification supplemented by manual review. Entries categorized as security incidents, operational failures, or research findings based on content analysis.
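The automated step of this process can be illustrated with a minimal keyword-triage pass. This is a hedged sketch, not the study's actual pipeline: the category keywords and the fallback to manual review are assumptions made for illustration.

```python
# Minimal keyword-based triage for dataset entries (illustrative sketch only;
# the study's real pipeline combines automated classification with manual review).

CATEGORY_KEYWORDS = {
    "security_incident": ["cve", "exploit", "breach", "injection", "rce"],
    "operational_failure": ["outage", "delet", "corrupt", "crash", "leak"],
    "research_finding": ["study", "survey", "report", "analysis", "statistics"],
}

def classify(summary: str) -> str:
    """Assign the category whose keywords match the summary most often."""
    text = summary.lower()
    scores = {
        category: sum(text.count(kw) for kw in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    # Entries with no keyword hits are routed to manual review, not guessed.
    return best if scores[best] > 0 else "needs_manual_review"

print(classify("Prompt injection exploit led to RCE via CVE-2025-53773"))
print(classify("Assistant deleted production database during migration"))
```

A keyword pass like this only handles the easy cases; anything ambiguous falls through to the manual-review path described above.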

Severity Distribution Analysis

Combined severity classification with 95% confidence intervals (n=169)

Critical: n=33 (19.5%; 95% CI 19.7-20.7%)
High: n=118 (69.8%; 95% CI 68.9-69.9%)
Medium: n=17 (10.1%; 95% CI 10.6-11.3%)
Low: n=1 (0.6%; 95% CI 1.6-1.8%)
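Confidence intervals for proportions like these are commonly computed with the Wilson score method. The study does not state which interval method it uses, so the sketch below is illustrative: it applies the standard Wilson formula to the severity counts above (n=169).

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """Wilson score interval for k successes in n trials (z=1.96 for 95%)."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Severity counts from the distribution above (n=169 total).
for label, k in [("critical", 33), ("high", 118), ("medium", 17), ("low", 1)]:
    lo, hi = wilson_ci(k, 169)
    print(f"{label}: {k/169:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

At n=169 the Wilson intervals are several percentage points wide, which is a useful sanity check when reading small-sample severity breakdowns.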

High-Impact Public CVEs

NIST NVD registered vulnerabilities from security incidents only

  • CVE-2025-32711 (CVSS 9.3, Critical): Microsoft 365 Copilot vulnerability exposed chat logs, OneDrive files, SharePoin...
  • CVE-2025-54135 (CVSS 8.6, High): Vulnerability allowed remote attackers to modify sensitive MCP files through i...
  • CVE-2025-54136 (CVSS 7.2, High): Cursor AI vulnerability allowing silent swap of approved MCP configurations for ...
  • CVE-2024-38206 (CVSS 7, High): Critical SSRF vulnerability in Copilot Studio allowing unauthorized server-side ...
  • CVE-2025-53773 (CVSS 7, High): Remote code execution via prompt injection by modifying settings
  • CVE-2025-55284 (CVSS 7, High): Claude Code permissive default allowlist enables unauthorized file read and netw...
  • CVE-2025-3248 (CVSS 7, High): Langflow Python AI framework contains unauthenticated remote code execution vuln...
  • CVE-2025-50050 (CVSS 6.3, Medium): High-severity flaw in Meta's Llama LLM framework allowing arbitrary code executi...
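Each entry above can be cross-checked against the NIST NVD REST API (v2.0) via its `cveId` query parameter. The sketch below builds the lookup URL and parses the NVD 2.0 response shape from a truncated inline sample rather than a live request; real use requires an HTTP GET and should respect NVD rate limits.

```python
import json
from typing import Optional

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_url(cve_id: str) -> str:
    """Build the NVD 2.0 single-CVE lookup URL."""
    return f"{NVD_API}?cveId={cve_id}"

def base_score(response_json: dict) -> Optional[float]:
    """Extract the first CVSS v3.1 base score from an NVD 2.0 response, if any."""
    for vuln in response_json.get("vulnerabilities", []):
        metrics = vuln.get("cve", {}).get("metrics", {})
        for metric in metrics.get("cvssMetricV31", []):
            return metric["cvssData"]["baseScore"]
    return None

# Truncated sample in the NVD 2.0 response shape, using the score listed above.
sample = json.loads("""
{"vulnerabilities": [{"cve": {"id": "CVE-2025-32711",
  "metrics": {"cvssMetricV31": [{"cvssData": {"baseScore": 9.3}}]}}}]}
""")

print(nvd_url("CVE-2025-32711"))
print(base_score(sample))
```

Note that NVD entries may carry CVSS v2, v3.0, v3.1, or v4.0 metrics; a robust verifier would check each metric list rather than only `cvssMetricV31`.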

Study Limitations & Bias Assessment

Coverage Estimate

~5-10%

Of actual incidents (reporting bias)

Public Source Rate

33.1%

Publicly verifiable sources

Mixed Data Types

12%

Research findings as a share of all entries

Contribute to Research

This is an ongoing longitudinal study combining incident surveillance with research synthesis. Help improve data quality by reporting incidents and validating existing cases.

Research Objectives

  • Quantify AI code security patterns with mixed-methods analysis
  • Identify systematic weaknesses through incident analysis
  • Track temporal trends and emerging threat vectors
  • Synthesize research for evidence-based recommendations

Research Ethics: All data collection follows responsible disclosure principles. No private or confidential information is published without consent.
Last Updated: August 2025 • Status: WIP