The same baseline, different failure modes
When organizations plan GenAI enablement, they often assume some functions are further ahead than others. Marketing adopted tools early. Finance is cautious. Customer Success is somewhere in between. The natural move seems to be department-by-department training, starting with the "advanced" teams.
The data tells a different story — and then a more nuanced one.
What the data shows
Across 153 respondents spanning six functional groups, GenAI capability scores fall within a narrow 12-point band. No role is meaningfully ahead. The differences are not statistically significant.
The first implication is clear: start with a shared baseline of enterprise-wide fundamentals on safe use, verification, and prompting, rather than sequencing training department by department.
But capability is only half the picture. When incorrect responses are categorized by error type, clear patterns emerge across roles.
Share of incorrect responses by error type, per role:

| Role | Oracle Truster | Tangential | Passive Prompter | Data Leaker | No Error |
|---|---|---|---|---|---|
| Customer Success (n=17) | 47% | 12% | 18% | 12% | 12% |
| Finance (n=11) | 9% | 27% | 36% | 18% | 9% |
| HR / People (n=42) | 31% | 14% | 24% | 10% | 21% |
| Marketing (n=19) | 37% | 16% | 21% | — | 26% |
| Operations (n=8) | 75% | 13% | — | 13% | — |
| Product / Design (n=6) | 17% | 50% | 33% | — | — |
| Sales (n=7) | 57% | 14% | 14% | — | 14% |
| Strategy / PMO (n=10) | 50% | 10% | — | 20% | 20% |
Three enablement streams surface: Verification (Oracle Truster errors dominate in Customer Success, Operations, Sales, and Strategy), Prompting & Task Framing (Passive Prompter errors concentrate in Finance and Product/Design), and Data Handling (Data Leaker errors appear across Finance and Strategy/PMO). Same average scores — different failure modes.
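The routing logic behind these streams can be sketched in a few lines of Python: take each role's error shares from the table and map its dominant error type to a stream. The error-type-to-stream mapping follows the three streams named above; assigning Tangential errors to Prompting & Task Framing is my assumption (the text does not place them explicitly), and a real enablement plan would route some roles, such as Finance and Strategy/PMO, into more than one stream.

```python
# Error shares (%) per role, copied from the table above; "No Error"
# responses are omitted since they don't drive stream assignment.
ERROR_RATES = {
    "Customer Success": {"Oracle Truster": 47, "Tangential": 12, "Passive Prompter": 18, "Data Leaker": 12},
    "Finance":          {"Oracle Truster": 9,  "Tangential": 27, "Passive Prompter": 36, "Data Leaker": 18},
    "HR / People":      {"Oracle Truster": 31, "Tangential": 14, "Passive Prompter": 24, "Data Leaker": 10},
    "Marketing":        {"Oracle Truster": 37, "Tangential": 16, "Passive Prompter": 21, "Data Leaker": 0},
    "Operations":       {"Oracle Truster": 75, "Tangential": 13, "Passive Prompter": 0,  "Data Leaker": 13},
    "Product / Design": {"Oracle Truster": 17, "Tangential": 50, "Passive Prompter": 33, "Data Leaker": 0},
    "Sales":            {"Oracle Truster": 57, "Tangential": 14, "Passive Prompter": 14, "Data Leaker": 0},
    "Strategy / PMO":   {"Oracle Truster": 50, "Tangential": 10, "Passive Prompter": 0,  "Data Leaker": 20},
}

# Error type -> enablement stream. Mapping Tangential errors to
# Prompting & Task Framing is an assumption, not stated in the source.
STREAM = {
    "Oracle Truster": "Verification",
    "Passive Prompter": "Prompting & Task Framing",
    "Tangential": "Prompting & Task Framing",
    "Data Leaker": "Data Handling",
}

def dominant_stream(rates: dict[str, int]) -> str:
    """Route a role to the stream of its single largest error share."""
    dominant_error = max(rates, key=rates.get)
    return STREAM[dominant_error]

for role, rates in ERROR_RATES.items():
    print(f"{role}: {dominant_stream(rates)}")
```

Run as-is, this routes Customer Success, HR, Marketing, Operations, Sales, and Strategy/PMO to Verification, and Finance and Product/Design to Prompting & Task Framing, matching the groupings in the paragraph above; a secondary-stream pass (e.g. flagging any error share above a threshold) would also surface Data Handling for Finance and Strategy/PMO.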
Why it matters
Generic training treats all roles the same. But a Customer Success team that blindly trusts AI-generated responses needs a fundamentally different intervention than a Finance team that pastes sensitive data into public models. One needs verification habits. The other needs safe-input rules.
If enablement ignores these patterns, training hours get spent on skills that are already adequate while the actual failure mode goes unaddressed. The right sequence: shared baseline first, then segment by error type.
What to do about it
- Enterprise-wide fundamentals first: Build a shared baseline on safe use, verification, relevance, and prompting before investing in role-specific modules.
- Segment roles by enablement stream: After the baseline, route each function into the stream where its error patterns concentrate — Data Handling, Verification, or Prompting & Task Framing.
- Match the intervention to the stream: Data Handling needs safe-input rules and redaction checkpoints. Verification needs output-checking habits and escalation protocols. Prompting needs scoping and context-setting practice.
- Set an objective proficiency standard: Use scenario-based assessment to define "ready," not self-reported confidence. A diagnostic like the GenAI Capability Pulse can establish the baseline and segment roles in one pass.
Same baseline, different failure modes. Build the shared floor first — then segment enablement by error type, not department.
These findings are drawn from the GenAI Capability Pulse — a scenario-based assessment that measures what non-technical teams actually do with GenAI, not what they think they can do. If your organization is scaling GenAI adoption, start with a baseline.
Source: AGASI GenAI Capability Pulse (N=153). Capability by role: ANOVA F=0.85, p=0.516. Enablement profile: error type segmentation n=120 (excluding outliers/speeders).