The assumption that use builds capability
A common belief in enterprise AI adoption is that capability develops through use. Give people tools, let them experiment, and competence will follow naturally. It is an appealing idea — and it is wrong.
The data shows a different picture: frequent use without training amplifies errors, training is the strongest predictor of capability, organizational programmes outperform self-directed learning, and governance — when done right — meaningfully reduces mistakes.
What the data shows
Among daily and weekly GenAI users — the people generating the most outputs and making the most AI-informed decisions — roughly two-thirds have never received any formal training.
63% of daily users and 64% of weekly users report no GenAI training at all. The most frequent users are no more likely to have been trained than occasional ones.
But when training exposure and usage frequency are both tested as predictors of capability, only one is statistically significant: training.
Respondents with any formal training scored 80.4 on the SJT assessment, compared to 69.9 for untrained respondents — a 10.5-point gap (p=0.008). In the multivariate model, training was the only significant predictor. Usage frequency was not.
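The multivariate claim — that training, not usage frequency, carries the effect — can be illustrated with a minimal sketch. The data and coefficients below are simulated for illustration; this is not the study's actual model or dataset, only a demonstration of how fitting both predictors at once separates their contributions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 153  # matches the survey's N, but the data here is simulated

# Simulated predictors: a training dummy (0/1) and usage frequency
# (sessions per week) — both hypothetical.
trained = rng.integers(0, 2, n)
usage = rng.integers(1, 8, n)

# Simulated outcome: a ~10.5-point training effect, zero usage effect,
# plus noise — mirroring the pattern the article reports.
score = 70 + 10.5 * trained + 0.0 * usage + rng.normal(0, 8, n)

# Design matrix with an intercept column; fit by ordinary least squares.
X = np.column_stack([np.ones(n), trained, usage])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

print({"intercept": round(beta[0], 1),
       "training_effect": round(beta[1], 1),
       "usage_effect": round(beta[2], 1)})
```

The fitted training coefficient lands near the simulated 10.5 points while the usage coefficient stays near zero — the same qualitative pattern the survey's model found: once both are in the regression, only training predicts capability.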
Not all training is equal. When capability is broken down by training type, organizational training stands out.
Organizational training (employer-provided programmes) is associated with a mean score of 86.9 — a 12-point lift over the untrained baseline of 74.9. Self-directed learning alone scores 74.2, statistically indistinguishable from no training at all.
Finally, governance works — when tools and policy align.
Respondents with both approved tool access and a completed policy review score 9.2 SJT points higher (p=0.006) and make 34% fewer errors than those missing one or both. Access without awareness still leaves gaps.
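Note that the 34% figure is a relative reduction, not a percentage-point difference. The report above gives only the relative figure, so the per-group error rates below are hypothetical, chosen purely to show how the two readings of the number differ:

```python
# Hypothetical per-group error rates (illustrative only — the summary
# publishes the 34% relative reduction, not the underlying rates).
rate_missing = 0.25    # error rate: missing tool access or policy review
rate_both = 0.165      # error rate: both governance conditions in place

relative = (rate_missing - rate_both) / rate_missing   # relative reduction
absolute = rate_missing - rate_both                     # percentage-point drop

print(f"relative reduction: {relative:.0%}")
print(f"absolute reduction: {absolute * 100:.1f} points")
```

With these assumed rates, the same gap reads as a 34% relative reduction but only an 8.5-point absolute one — worth keeping in mind when comparing the governance effect across groups with different baseline error rates.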
Why it matters
Untrained frequent users are a compounding risk. Every day without training is another day of unverified outputs, unchecked data handling, and uninformed decisions repeated across dozens of workflows. "Learning by doing" reinforces habits — both good and bad — and the data shows it does not build capability on its own.
Organizational training works because it is structured, scenario-based, and aligned to the specific risks and workflows that employees encounter. Self-directed learning feels productive but often does not transfer to the judgement calls that matter. And governance only works when the person using the tool also understands the rules.
What to do about it
- Prioritize frequent users first: They drive the most AI-informed decisions, so capability gaps scale fastest with them. Require a short Verification + Data Handling baseline before broader tool rollout.
- Invest in training, not just adoption: Training delivers a measurable 10-point capability uplift. Adoption tools and licenses are necessary but not sufficient.
- Make organizational training the default: Self-directed resources are useful for reinforcement but should not be the primary strategy. Pair usage with formal training and workflow reinforcement.
- Gate provisioning: Make policy review a prerequisite for tool access. Pair access with targeted enablement on Verification and Data Handling. Embed guardrails where AI outputs enter decisions.
"Learn by doing" is not enough. Pair structured training with governance gates — and prioritize the people who use GenAI most.
These findings are drawn from the GenAI Capability Pulse — a scenario-based assessment that measures what non-technical teams actually do with GenAI, not what they think they can do. If your organization is scaling GenAI adoption, start with a baseline.
Source: AGASI GenAI Capability Pulse (N=153). Training by usage: n=133 (daily+weekly). Training impact: +10.5 pts (p=0.008). Governance: both (n=61) vs not both (n=92), p=0.006.