Decisions as the Atomic Unit of Learning
Why learning should be measured at the level where understanding actually happens.
Most learning systems treat all questions as equal. One question attempted. One question answered correctly or incorrectly. One unit of progress recorded.
This approach is simple — but it ignores how learning is assessed in almost every formal examination system.
Why Questions Are the Wrong Unit
In exams, questions are not treated equally. A short question worth five marks is typically all-or-nothing. The student either demonstrates the required understanding and receives full credit, or they do not.
A longer question, worth forty or sixty marks, is treated very differently. It contains multiple components, each testing a different aspect of understanding. Marks are awarded incrementally, reflecting partial correctness, sound method, or correct reasoning even when the final answer is incomplete or wrong.
Educators instinctively understand this distinction. Phlow Academy applies the same principle to learning itself.
Decisions and Exam Marking: A Direct Parallel
In Phlow, a decision is analogous to a marking point in an exam question.
A simple question may contain a single decision — much like a short exam question with a single marking point. A more complex question may contain several decisions, each representing a meaningful commitment of understanding.
Just as examiners do not judge a sixty-mark question as a single yes-or-no outcome, Phlow does not judge complex learning tasks as a single binary event. Instead, it observes how understanding unfolds across decisions.
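As a rough sketch of what this structure might look like in code (the Python data model and field names below are illustrative assumptions, not Phlow's actual schema), a question is simply a container whose decisions carry the detail:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Decision:
    """One meaningful commitment of understanding within a question,
    analogous to a single marking point on an exam mark scheme."""
    skill: str                      # the concept or step this decision exercises
    correct: Optional[bool] = None  # unset until the learner commits

@dataclass
class Question:
    """A question is a container; its decisions carry the learning signal."""
    prompt: str
    decisions: list[Decision] = field(default_factory=list)

# A short question with a single decision, like a short all-or-nothing exam question.
short_q = Question(
    prompt="Differentiate f(x) = x**3.",
    decisions=[Decision(skill="apply the power rule")],
)

# A longer question whose credit is spread across several decisions.
long_q = Question(
    prompt="Find and classify the stationary points of f(x) = x**3 - 3x.",
    decisions=[
        Decision(skill="differentiate f"),
        Decision(skill="solve f'(x) = 0"),
        Decision(skill="classify with the second derivative"),
    ],
)
```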
Why This Matters for Fair Assessment
If two exam questions are treated identically simply because they are both “one question”, assessment becomes unfair. The same is true for learning systems.
A one-step question and a multi-step question cannot be judged using the same rules without distorting progress. Counting questions alone ignores partial understanding, correct reasoning with incorrect execution, improvement within a task, and consistency across components.
Decision-based analysis avoids this by aligning learning measurement with how understanding is already assessed in high-stakes exams. It judges quality of reasoning, not just final outcomes.
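To make the distortion concrete, the sketch below contrasts the two approaches on a single three-decision question. The scoring functions are illustrative assumptions, not Phlow's scoring model:

```python
# Two learners attempt the same three-decision question.
learner_a = [True, True, False]    # sound method, slip at the final step
learner_b = [False, False, False]  # no part of the reasoning demonstrated

def question_level(decisions: list[bool]) -> int:
    """All-or-nothing: credit only if every decision is correct."""
    return int(all(decisions))

def decision_level(decisions: list[bool]) -> float:
    """Incremental credit: the fraction of decisions demonstrated."""
    return sum(decisions) / len(decisions)

print(question_level(learner_a), question_level(learner_b))  # 0 0   (indistinguishable)
print(decision_level(learner_a), decision_level(learner_b))  # ~0.67 0.0
```

Counted by questions, both learners look identical. Counted by decisions, the first learner's largely sound reasoning is visible.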
Normalising Complexity Without Penalising Learners
Decision-level analysis naturally normalises complexity. A question with multiple decisions contributes multiple learning signals, just as a multi-mark exam question provides multiple opportunities to demonstrate understanding.
Learners working on complex tasks are not unfairly slowed down by having to complete many full questions before progress is recognised. Learners struggling within a task reveal where the difficulty lies, rather than being reduced to a single wrong answer.
Progress becomes proportional to demonstrated understanding, not to surface structure.
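One way to picture this normalisation, purely as an illustrative sketch rather than the production formula, is to compute progress over decisions instead of over questions:

```python
def progress(questions: list[list[bool]]) -> float:
    """Progress as the share of decisions demonstrated, so a five-decision
    question contributes five learning signals rather than one."""
    decisions = [d for q in questions for d in q]
    return sum(decisions) / len(decisions) if decisions else 0.0

# Five one-decision questions, all correct...
print(progress([[True]] * 5))                       # 1.0
# ...versus one five-decision question with a single slip.
print(progress([[True, True, True, True, False]]))  # 0.8
```

Under this view, a learner tackling one complex task is not penalised for attempting fewer questions, and a single slip does not erase the understanding shown elsewhere in the task.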
From Exam Intuition to Learning Analytics
What examiners do by judgement and experience, Phlow does systematically.
By treating decisions as the atomic unit of learning, the system mirrors how educators already think about assessment: breaking complex understanding into meaningful components, recognising partial correctness, and distinguishing conceptual errors from execution mistakes.
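A brief sketch of what that last distinction might look like as data (the outcome labels here are assumptions, not Phlow's taxonomy):

```python
from enum import Enum

class Outcome(Enum):
    DEMONSTRATED = "demonstrated"
    EXECUTION_ERROR = "execution error"    # right idea, slip in carrying it out
    CONCEPTUAL_ERROR = "conceptual error"  # the underlying idea is missing

# Feedback can target the kind of mistake rather than the final answer.
attempt = {
    "differentiate f": Outcome.DEMONSTRATED,
    "solve f'(x) = 0": Outcome.EXECUTION_ERROR,
    "classify with the second derivative": Outcome.CONCEPTUAL_ERROR,
}
for skill, outcome in attempt.items():
    print(f"{skill}: {outcome.value}")
```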
This makes progression fairer, feedback more precise, and learning data far more informative.
Decisions as the True Measure of Understanding
Questions are containers. Marks reveal structure. Decisions reveal thinking.
By aligning learning analytics with the logic of exam marking, Phlow Academy ensures that learners are judged in a way that is both rigorous and humane — recognising effort, understanding, and growth, rather than reducing learning to right-or-wrong outcomes.
