Same scores, different failure modes
The previous finding showed that GenAI capability is broadly consistent across roles. So why would enablement need to differ?
Because people who score the same on average fail in different ways. And the type of error determines the type of intervention.
What the data shows
When incorrect responses are categorized by error type, clear patterns emerge across roles. Some functions over-trust AI outputs. Others leak sensitive data. Others default to vague, low-effort prompts.
Error type distribution by role:

| Role | Oracle Truster | Tangential | Passive Prompter | Data Leaker | No Error |
|---|---|---|---|---|---|
| Customer Success (n=17) | 47% | 12% | 18% | 12% | 12% |
| Finance (n=11) | 9% | 27% | 36% | 18% | 9% |
| HR / People (n=42) | 31% | 14% | 24% | 10% | 21% |
| Marketing (n=19) | 37% | 16% | 21% | — | 26% |
| Operations (n=8) | 75% | 13% | — | 13% | — |
| Product / Design (n=6) | 17% | 50% | 33% | — | — |
| Sales (n=7) | 57% | 14% | 14% | — | 14% |
| Strategy / PMO (n=10) | 50% | 10% | — | 20% | 20% |
Three enablement streams surface from the data:
- Verification: Oracle Truster errors dominate in Customer Success, Operations, Sales, and Strategy.
- Prompting & Task Framing: Passive Prompter errors concentrate in Finance and Product/Design.
- Data Handling: Data Leaker errors appear in both Finance and Strategy/PMO.
Why it matters
Generic training treats all roles the same. But a Customer Success team that blindly trusts AI-generated responses needs a fundamentally different intervention than a Finance team that pastes sensitive data into public models. One needs verification habits. The other needs safe-input rules.
If enablement ignores these patterns, training hours get spent on skills that are already adequate while the actual failure mode goes unaddressed. A diagnostic like the GenAI Capability Pulse can segment roles into the right stream before training begins.
What to do about it
- Segment roles by enablement stream: After a shared baseline, route each function into the stream where its error patterns concentrate — Data Handling, Verification, or Prompting & Task Framing.
- Match the intervention to the stream: Data Handling needs safe-input rules and redaction checkpoints. Verification needs output-checking habits and escalation protocols. Prompting needs scoping and context-setting practice.
- Validate with your own data: These cross-organization patterns are directional. Run your own diagnostic to confirm which streams apply to your specific teams.
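The routing step above can be sketched in a few lines: pick each role's most frequent error type and map it to a stream. The rates below are taken from the table; the error-to-stream mapping (including folding Tangential into Prompting & Task Framing) is an illustrative assumption, not a fixed rule from the Pulse.

```python
# Illustrative sketch: route roles into enablement streams by dominant
# error type. Stream mapping is an assumption for demonstration.
ERROR_TO_STREAM = {
    "oracle_truster": "Verification",
    "tangential": "Prompting & Task Framing",   # assumed grouping
    "passive_prompter": "Prompting & Task Framing",
    "data_leaker": "Data Handling",
}

# Error rates copied from the table above (three roles shown).
error_rates = {
    "Customer Success": {"oracle_truster": 0.47, "tangential": 0.12,
                         "passive_prompter": 0.18, "data_leaker": 0.12},
    "Finance": {"oracle_truster": 0.09, "tangential": 0.27,
                "passive_prompter": 0.36, "data_leaker": 0.18},
    "Operations": {"oracle_truster": 0.75, "tangential": 0.13,
                   "passive_prompter": 0.0, "data_leaker": 0.13},
}

def route(role_errors: dict) -> str:
    """Return the stream matching the role's most frequent error type."""
    dominant = max(role_errors, key=role_errors.get)
    return ERROR_TO_STREAM[dominant]

for role, errors in error_rates.items():
    print(f"{role}: {route(errors)}")
# Customer Success and Operations route to Verification;
# Finance routes to Prompting & Task Framing.
```

In practice the input would come from your own diagnostic results rather than cross-organization averages, and ties or near-ties between error types would warrant assigning a role to more than one stream.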
Same baseline, different failure modes. Segment enablement by error type, not seniority or department.
These findings are drawn from the GenAI Capability Pulse — a scenario-based assessment that measures what non-technical teams actually do with GenAI, not what they think they can do. If your organization is scaling GenAI adoption, start with a baseline.
Source: AGASI GenAI Capability Pulse. Error type segmentation based on n=120 respondents (excluding outliers/speeders). Sample includes respondents from multiple organizations; results reflect cross-organization patterns by role (directional only).