Where GenAI breaks down
GenAI tools can draft, summarize, and analyze at speed. But the step between receiving an output and acting on it — verification — is where most people fall short. The step before the prompt — deciding what gets shared with the tool — is where data handling failures accumulate.
Together, these two dimensions account for the largest share of capability gaps and error volume, and one of them involves a blind spot that won't self-correct.
What the data shows
When respondents are classified by their lowest-performing dimension, verification dominates.
Nearly half (48%) of all respondents are weakest in verification — more than data handling (31%) and far more than prompting (3%). This is the single most common capability gap across the entire sample.
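The classification itself is simple: each respondent has a score per dimension, and the weakest dimension is the minimum. A minimal sketch of that tally (dimension names from the report; the scores, field names, and scale are hypothetical):

```python
from collections import Counter

# Hypothetical per-respondent scores on the three dimensions (0-100 scale assumed).
respondents = [
    {"verification": 42, "data_handling": 70, "prompting": 85},
    {"verification": 60, "data_handling": 35, "prompting": 78},
    {"verification": 30, "data_handling": 55, "prompting": 90},
]

def weakest_dimension(scores: dict) -> str:
    """Return the dimension with the lowest score for one respondent."""
    return min(scores, key=scores.get)

# Tally how often each dimension is the weakest across the sample.
gaps = Counter(weakest_dimension(r) for r in respondents)
print(gaps)  # Counter({'verification': 2, 'data_handling': 1})
```

With real data, dividing each count by the sample size yields the percentages reported above.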
When errors are categorized by type, data handling failures lead.
Data Leaker errors — sharing sensitive information with AI tools — account for 30% of all mistakes, followed by Tangential (27%), Oracle Truster (23%), and Passive Prompter (20%). Verification failures (Oracle Truster) and data handling failures (Data Leaker) together account for more than half of all errors.
The third finding is the blind spot: among respondents whose weakest area is Data Handling, 93% prefer training in other topics.
They request Prompting (37%) and Workflow (30%) instead. Data Handling ranks last at 7%, despite being their most critical gap. The people who most need data handling skills are unlikely to self-select that training.
Why it matters
Verification failures mean unvetted AI outputs reach decisions, clients, or public-facing work. Data handling failures mean sensitive information enters AI tools that may not be secure — and once shared, cannot be recalled.
The blind spot compounds the problem. If the people who most need data handling skills never select that training, the gap persists indefinitely. Meanwhile, they continue using GenAI tools daily — pasting sensitive information and operating without a mental model for what is and is not safe to input.
What to do about it
- Make verification the baseline capability: "Verify first" should be the non-negotiable habit before any GenAI output enters a decision or reaches a client. Embed verification checkpoints in the workflow.
- Treat safe data handling as a baseline requirement: Every GenAI user needs clear, non-negotiable rules on what can and cannot be shared — before they get tool access.
- Assign Data Handling training via diagnostics: The people who need it most will never choose it. Use scenario-based assessment to identify who is weak on data handling and route them to the right module automatically.
- Add guardrails alongside training: Redaction guidance, approved tool lists, and input review steps reduce reliance on awareness alone.
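The last two recommendations can be sketched as a routing rule plus a guardrail: score the assessment, route anyone weak on data handling to that module regardless of stated preference, and apply redaction before input reaches a tool. All names, thresholds, and patterns below are hypothetical, since the report does not specify an implementation:

```python
import re

PASS_THRESHOLD = 60  # assumed cutoff; the assessment's real cutoff is not stated

def assign_training(scores: dict, preference: str) -> str:
    """Route to the module for the weakest failing dimension,
    overriding stated preference; otherwise honor the request."""
    weakest = min(scores, key=scores.get)
    if scores[weakest] < PASS_THRESHOLD:
        return weakest      # diagnostic override: assign the critical gap
    return preference       # no critical gap: self-selection is fine

# A minimal redaction guardrail: mask email addresses before any prompt
# is sent to an external tool (illustrative pattern only, not exhaustive).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    return EMAIL.sub("[REDACTED]", text)

print(assign_training(
    {"verification": 72, "data_handling": 41, "prompting": 80},
    preference="prompting",
))  # -> data_handling, even though the respondent asked for prompting
print(redact("Contact jane.doe@example.com about the contract"))
```

The point of the override is exactly the blind-spot finding: the routing decision comes from the diagnostic score, not from what the respondent asks for.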
Verification and data handling are the two dimensions where gaps are largest and errors most consequential — and one of them won't self-correct without objective targeting.
These findings are drawn from the GenAI Capability Pulse — a scenario-based assessment that measures what non-technical teams actually do with GenAI, not what they think they can do. If your organization is scaling GenAI adoption, start with a baseline.
Source: AGASI GenAI Capability Pulse. Weakest dimension: N=153. Error distribution: 533 errors across 160 respondents. Data Handling blind spot: n=46 (Data Handling-weak subgroup).