Surveillance Data: Incidents, Failures & Research (2024-2025)
A comprehensive investigation of documented security vulnerabilities, operational failures, and research findings in AI-assisted code development. This is an ongoing longitudinal study with inherent reporting biases and coverage limitations.
Entries (incidents + reports): 169
Incidents: 88.2% of dataset
Research findings: 11.8% of dataset
Publicly verifiable: 33.1% of entries
Range: 29.5-72%
Cases with documented financial impact: 13
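The per-category counts implied by these percentages can be recomputed from the 169-entry total. A minimal sketch, assuming the 88.2%/11.8% split corresponds to incidents versus research findings (consistent with the ~12% research-findings share noted under limitations) and rounding to whole entries:

```python
# Recompute the category counts implied by the published percentages.
# The 169-entry total comes from the dataset summary; rounding to whole
# entries is an assumption made for illustration.
TOTAL_ENTRIES = 169

shares = {
    "incidents (security + operational)": 0.882,  # 88.2% of dataset
    "research findings / reports": 0.118,         # 11.8% of dataset
    "publicly verifiable": 0.331,                 # 33.1% of entries
}

for label, share in shares.items():
    print(f"{label}: ~{round(TOTAL_ENTRIES * share)} of {TOTAL_ENTRIES}")
```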
Analysis of 169 entries categorized by type and impact:
Security incidents: CVEs, vulnerabilities, breaches, exploits, and malicious attacks
Operational failures: production outages, data loss, corruption, and system failures
Research findings: academic studies, industry reports, surveys, and statistical analyses
Dataset: a mixed dataset of AI-assisted code security incidents, operational failures, and research findings, collected through systematic monitoring of public sources, industry reports, and private disclosures.
Sources: multi-source intelligence drawing on CVE databases, security advisories, developer forums, academic research, industry case studies, and private disclosures (2024-2025).
Classification: automated classification supplemented by manual review; entries are categorized as security incidents, operational failures, or research findings based on content analysis.
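The automated classification pass is not specified beyond "content analysis"; the sketch below shows one plausible keyword-based triage into the three categories, with ambiguous entries deferred to manual review. The keyword lists and the classify_entry helper are illustrative assumptions, not the project's actual pipeline:

```python
# Hypothetical sketch of the automated triage pass described above.
# Keyword lists and the classify_entry helper are illustrative assumptions,
# not the project's actual pipeline; entries with no keyword hits fall
# through to manual review.
CATEGORY_KEYWORDS = {
    "security_incident": ["cve", "vulnerability", "breach", "exploit", "malicious"],
    "operational_failure": ["outage", "data loss", "corruption", "system failure"],
    "research_finding": ["study", "survey", "report", "statistical analysis"],
}

def classify_entry(text: str) -> str:
    """Return the category whose keywords match most often, else flag for review."""
    text = text.lower()
    scores = {
        category: sum(text.count(keyword) for keyword in keywords)
        for category, keywords in CATEGORY_KEYWORDS.items()
    }
    best_category, best_score = max(scores.items(), key=lambda item: item[1])
    return best_category if best_score > 0 else "needs_manual_review"

print(classify_entry("CVE-2024-1234: prompt-injection exploit in an AI coding assistant"))
```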
Combined severity classification with 95% confidence intervals (n=169)
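The construction of the 95% confidence intervals is not given; the worked example below assumes a normal-approximation (Wald) interval on a single proportion with n=169, using the 33.1% publicly verifiable share purely for illustration:

```python
# Worked example of a 95% confidence interval for a category proportion.
# Assumes a normal-approximation (Wald) interval; the dashboard may use a
# different method. n = 169 entries, p = 0.331 (publicly verifiable share).
from math import sqrt

n = 169
p = 0.331
z = 1.96  # two-sided 95% critical value

margin = z * sqrt(p * (1 - p) / n)
print(f"95% CI: {p - margin:.3f} to {p + margin:.3f}")  # ~0.260 to 0.402
```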
NIST NVD registered vulnerabilities from security incidents only
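How CVE identifiers are confirmed as NVD-registered is not described; one way to cross-check an entry is NIST's public NVD REST API 2.0, sketched below. The CVE ID shown is a placeholder, not one of the dataset's entries:

```python
# Hypothetical cross-check of a CVE identifier against the public NVD
# REST API 2.0. The CVE ID below is a placeholder; production use should
# add an API key and rate limiting per NVD guidance.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def is_registered_in_nvd(cve_id: str) -> bool:
    """Return True if the NVD has at least one record for the given CVE ID."""
    response = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    response.raise_for_status()
    return response.json().get("totalResults", 0) > 0

print(is_registered_in_nvd("CVE-2024-0001"))
```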
Estimated coverage: ~5-10% of actual incidents (reporting bias)
Publicly verifiable sources: 33.1%
Research findings vs incidents: 12%
This is an ongoing longitudinal study combining incident surveillance with research synthesis. Help improve data quality by reporting incidents and validating existing cases.
Research Ethics: All data collection follows responsible disclosure principles. No private or confidential information is published without consent.
Last Updated: August 2025 • Status: WIP