And what to measure instead
Every compliance manager knows the feeling. The training campaign closes. The dashboard turns green. One hundred percent completion. You send the screenshot to the CISO, who sends it to the board, who nods approvingly and moves on to the next agenda item.
Three months later, someone clicks a phishing link. A USB drive goes home in someone's bag. A client's data gets sent to the wrong email address because someone was in a hurry and didn't stop to check.
The training was completed. The behaviour didn't change.
This is the central fraud of most compliance training programmes: not that organisations don't run them, but that they've mistaken a process metric for an outcome metric. Completion tells you that someone opened the course. It tells you almost nothing about whether they'll make better decisions under pressure.
Why passive content doesn't change behaviour
There's a substantial body of research on how adults learn, and very little of it supports the lecture-then-test format that most compliance training still uses. The core problem is cognitive passivity. When someone watches a video or clicks through slides, the brain isn't doing the kind of active processing that forms durable memories. It's receiving. Cataloguing. Moving on.
This isn't a technology problem — it's a learning design problem. The format itself is wrong for the goal.
Behavioural change requires a different kind of engagement. It requires the learner to make decisions, not just absorb information. To face the consequence of a wrong choice in a context that feels real enough to be meaningful. To retrieve and apply knowledge rather than simply receive it.
This is why experienced professionals who know the theory still make bad decisions under pressure. Knowing that phishing emails often create urgency doesn't help if you've never practised what it feels like to be targeted by one. Knowledge and instinct are different things. Training that only produces the first is not doing the job.
The measurement problem
When organisations measure completion, they're measuring the training, not the learner. It's the equivalent of measuring whether students showed up to class rather than whether they can apply what was taught.
The metrics that actually matter are harder to collect, but they're not impossible. A sketch of how some of them might be computed follows this list:
Knowledge retention over time. Not whether someone passed an end-of-module quiz immediately after watching the content — they'll pass that regardless, because the information is still in short-term memory. What matters is whether they retain it weeks later. Spaced repetition assessments, revisiting key concepts at intervals, give you a much more accurate picture of actual retention.
Decision quality in simulated scenarios. Phishing simulations are the most widely used version of this, but the principle extends further. When a learner faces a realistic scenario — a suspicious invoice, a data request from an unknown party, a social engineering attempt — do they make the right call? Scenario performance is a far better proxy for real-world behaviour than quiz scores.
Competency scores, not completion rates. The distinction matters. Completion is binary — done or not done. Competency is a spectrum that can be tracked, improved, and evidenced over time. An organisation that can show improving competency scores across its workforce has a demonstrably stronger security posture than one that can show 100% completion of a video course.
Behavioural indicators post-training. Phishing click rates. Policy violation rates. Incident reports that trace back to human error. These are lagging indicators, but they're the ones that actually tell you whether training is working.
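As a rough illustration of how the first and third of these could be computed, here is a minimal sketch in Python. Everything in it is hypothetical: the field names, the day-weighting, and the data are illustrative assumptions, not any particular platform's model.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Assessment:
    learner: str
    topic: str
    days_after_training: int  # when the check ran, relative to the original course
    score: float              # 0.0-1.0: fraction of decisions the learner got right

def retention_score(assessments: list[Assessment]) -> float:
    """Weight later checks more heavily: a pass at day 30 says more about
    durable retention than a pass at day 1 (illustrative weighting)."""
    weighted = [(a.score * a.days_after_training, a.days_after_training)
                for a in assessments if a.days_after_training > 0]
    if not weighted:
        return float("nan")
    return sum(s for s, _ in weighted) / sum(w for _, w in weighted)

def competency_trend(history: list[float]) -> float:
    """Direction of travel across campaign cycles: positive means improving.
    A crude first-half vs second-half comparison; a real system might fit a slope."""
    mid = len(history) // 2
    if mid == 0:
        return 0.0
    return mean(history[mid:]) - mean(history[:mid])

# Invented data: one learner re-assessed at days 1, 7, and 30.
records = [
    Assessment("a.khan", "phishing", 1, 0.9),
    Assessment("a.khan", "phishing", 7, 0.8),
    Assessment("a.khan", "phishing", 30, 0.6),
]
print(f"retention-weighted score: {retention_score(records):.2f}")   # 0.64
print(f"trend: {competency_trend([0.55, 0.60, 0.70, 0.75]):+.2f}")  # +0.15
```

The exact weighting matters less than the shape: later checks count for more, and the trend gets reported alongside the point-in-time score.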
What good training looks like
The shift is from passive consumption to active engagement. From information delivery to decision practice.
In practice this means scenario-based learning — presenting realistic situations that require the learner to respond, not just read. It means consequences that feel meaningful, even in a training context: making the wrong call in a scenario should feel different from making the right one. It means regular reinforcement rather than annual campaigns, because one-off training produces one-off retention.
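To make "scenario-based" concrete, here is one possible shape for a single scenario item, again sketched in Python. The structure and wording are invented for illustration; no platform's actual format is implied.

```python
# A decision, plausible options, and a consequence attached to each option,
# so that a wrong call in training feels different from a right one.
scenario = {
    "prompt": ("An 'urgent' invoice arrives from a supplier you recognise, "
               "but the bank details have changed. What do you do?"),
    "options": [
        {"action": "Pay it; the supplier is known.",
         "correct": False,
         "consequence": "Funds sent to an attacker-controlled account."},
        {"action": "Verify the change via a phone number you already hold.",
         "correct": True,
         "consequence": "The change turns out to be fraudulent; payment blocked."},
        {"action": "Reply to the email asking for confirmation.",
         "correct": False,
         "consequence": "The attacker happily 'confirms' their own details."},
    ],
    "competency": "payment fraud / out-of-band verification",
}

# Each response is scored against the competency it evidences,
# which is what feeds the metrics above.
chosen = scenario["options"][1]
print(chosen["correct"], "-", chosen["consequence"])
```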
It also means collecting the right data. Not "did they finish?" but "how did they perform?" and "are they improving?"
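In data terms the gap between those questions is small but decisive. A toy example, with invented names and scores:

```python
# Hypothetical campaign export: one row per learner, scores from successive
# scenario assessments (0.0-1.0). "completed" answers the first question;
# the score history answers the other two.
campaign = [
    {"learner": "a.khan",  "completed": True, "scores": [0.55, 0.70, 0.80]},
    {"learner": "b.osei",  "completed": True, "scores": [0.90, 0.85, 0.85]},
    {"learner": "c.wolfe", "completed": True, "scores": [0.40, 0.35, 0.45]},
]

completion_rate = sum(r["completed"] for r in campaign) / len(campaign)
improving = [r["learner"] for r in campaign if r["scores"][-1] > r["scores"][0]]
at_risk = [r["learner"] for r in campaign if r["scores"][-1] < 0.6]  # illustrative threshold

print(f"completion: {completion_rate:.0%}")  # 100% -- the green dashboard
print(f"improving:  {improving}")            # ['a.khan', 'c.wolfe']
print(f"at risk:    {at_risk}")              # ['c.wolfe']
```

Completion says the campaign succeeded. The score history says one learner needs intervention. Same export, very different conclusions.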
The audit problem
There's an irony in how compliance training is typically evidenced. Organisations show completion records to regulators and auditors because completion records are easy to produce. But completion records don't evidence competency — they evidence process. A regulator looking at them can confirm that the training happened; they cannot confirm that it worked.
As regulators become more sophisticated, this distinction is starting to matter. The question is no longer just "do you have a training programme?" It's "can you demonstrate that your people understand their obligations and apply them?" Completion data answers the first question. It says nothing about the second.
Organisations that can evidence improving competency scores, scenario performance data, and certification outcomes are in a materially stronger position — not just operationally, but in their relationship with regulators and auditors.
A different question to ask
The next time your training campaign closes and the dashboard turns green, ask a different question before you send the screenshot upstairs.
Not: did everyone complete it?
But: do our people make better decisions today than they did before?
If you can't answer the second question, the first one isn't worth very much.
StaySecure LEARN™ is built around this principle — scenario-based, conversational learning that measures competency, not completion. Learn more →