
One in three GenAI errors involves unsafe data handling

AGASI Team


Not all errors are created equal

When people make mistakes with GenAI, the instinct is to focus on output quality — bad prompts, irrelevant responses, hallucinations. But the most consequential errors often happen on the input side: what people share with AI tools in the first place.

What the data shows

Across 533 categorized errors from 160 respondents, Data Leaker errors — sharing sensitive information with AI tools — are the single largest category.

[Chart: Distribution of GenAI judgement errors — % of all errors by type (533 total errors)]

Data Leaker errors account for 30% of all mistakes, followed by Tangential (27%), Oracle Truster (23%), and Passive Prompter (20%). The distribution is broad — no single error type dominates completely — but unsafe data handling leads the field.

Why it matters

Data handling errors carry disproportionate risk because they are often irreversible. Once sensitive information — client data, internal financials, proprietary strategy — enters an AI tool, it cannot be recalled. Unlike a poorly worded prompt or an unchecked output, which can be caught and corrected after the fact, a data leak carries compliance, reputational, and legal consequences.

The challenge is that many users do not perceive pasting internal data into an AI tool as a risk. It feels like using a search engine. Without clear, concrete guidance on what can and cannot be shared, accidental exposure is inevitable. The GenAI Capability Pulse can identify which teams and roles are most prone to these errors before they escalate.

What to do about it

  • Treat safe data handling as a baseline requirement: Every GenAI user needs clear, non-negotiable rules on what can and cannot be shared — before they get tool access.
  • Add guardrails, not just training: Redaction guidance, approved tool lists, and input checkpoints for sensitive workflows reduce reliance on individual judgement.
  • Provide concrete examples: Role-specific do/don't lists are more effective than abstract policy documents. Show people exactly what safe looks like in their context.
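The "input checkpoint" idea above can be sketched as a simple pre-send filter that scans a prompt for sensitive patterns before it reaches an AI tool. This is a minimal illustration, not a production data-loss-prevention system: the pattern names, regexes, and function names below are hypothetical placeholders, and a real deployment would use organization-specific detectors and approved tooling.

```python
import re

# Hypothetical patterns for illustration only — real guardrails would use
# org-specific DLP rules or entity detectors, not this short regex list.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk[-_]\w{16,}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def redact(text: str) -> str:
    """Replace each matched sensitive span with a [REDACTED:<type>] marker."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{name}]", text)
    return text

# A checkpoint could block or rewrite a prompt before it is sent:
prompt = "Summarise this thread from jane.doe@example.com"
if check_prompt(prompt):
    prompt = redact(prompt)
```

The point of the sketch is the placement, not the patterns: the check runs before tool access, so safe handling does not depend on each user's individual judgement.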

Safe-input rules and redaction guidance need to be non-negotiable for every GenAI user, not an afterthought.

These findings are drawn from the GenAI Capability Pulse — a scenario-based assessment that measures what non-technical teams actually do with GenAI, not what they think they can do. If your organization is scaling GenAI adoption, start with a baseline.

Source: AGASI GenAI Capability Pulse. Error categorization based on 533 errors across 160 respondents. Percentages reflect share of all errors, not share of respondents.
