Benchmarked into oblivion
Spectator Flat White
The week’s newspapers have been full of stories about Australia’s latest NAPLAN results. Politicians and education officials took turns reassuring the public that, despite an occasional hiccup, all is broadly well.
Student performance was ‘stable’. A few green shoots were detected. The new reporting scale was invoked like a conjuring trick. There was even the suggestion that higher standards were now in place. Crisis averted.
But before we all start dancing around the maypole, perhaps we ought to ask: stable compared to what?
As ever with NAPLAN, the answer is: compared to ourselves. Year after year, we test Australian students using an assessment designed by Australians, aligned with the Australian Curriculum, marked against past Australian cohorts, and then conclude how marvellously — or miserably — we’re progressing.
NAPLAN is not so much a mirror as a house of mirrors, and we are its distorted reflections.
I say this not as a cynic but as a former Chair of the Board of ACARA, the agency responsible for NAPLAN. During my tenure, there were attempts to raise the standards of the test to reflect the kinds of skills young people need in the modern world — deep understanding, problem-solving, the ability to apply knowledge in unfamiliar settings. In short, the things that international tests like PISA attempt to measure. But inertia is a formidable foe, and our efforts to lift the bar came to little.
This matters. Because while NAPLAN flatters us with stability, PISA — which compares us not to ourselves but to our international peers — tells a far grimmer story.
In the 2022 PISA results, Australia ranked 9th in reading, 9th in science, and 10th in mathematics among OECD countries. Respectable, maybe, until you remember that we once sat in the top handful globally. Our scores have fallen steadily over the past two decades.
The decline is starkest in mathematics: since PISA first tested it in 2003, achievement has fallen by the equivalent of two years of schooling, so the average 15-year-old today performs well behind earlier cohorts. Performance has fallen by one year in science (since 2006) and a year and a half in reading (since 2000).
Meanwhile, the gap between advantaged and disadvantaged students is yawning wider. And First Nations students continue to lag far behind on every measure. But NAPLAN, ever the polite houseguest, mostly looks the other way.
In 2023, ACARA launched a new NAPLAN reporting system. The familiar ten bands and national minimum standards are gone. Instead, there are four broad categories: Needs additional support; Developing; Strong; and Exceeding.
We are told that the new scale raises the bar. Maybe. But the core content of the test appears unchanged in any meaningful way.
This is a little like rearranging the deckchairs on the Titanic, and then announcing that the ship is now more comfortable. It may soothe the passengers, but it won’t stop the iceberg.
Let’s be clear: NAPLAN is not a bad tool. When used for its intended purpose — tracking broad trends over time, identifying struggling students, and supporting curriculum delivery — it has value. But it was never meant to measure how we stack up against the rest of the world.
And when politicians wave around NAPLAN results as evidence of improvement while ignoring the far more sobering international comparisons, they are — to put this delicately — confusing reassurance with reality.
There is a larger problem here: our national habit of self-benchmarking. We measure ourselves by our intentions, not by our outcomes. We celebrate effort over excellence. We set our own standards, congratulate ourselves for meeting them, and then act surprised when international tests reveal that we’re falling behind.
If Australia is serious about educational improvement, we need more than cosmetic changes to reporting scales. We need to ask harder questions. Why are our students losing ground on tests that assess higher-order thinking? Why has performance declined despite increased spending? And what can we learn from countries that are doing better?
Raising standards is hard. It demands political courage, professional resolve, and public honesty. But without it, we risk deceiving ourselves into complacency — benchmarking our way into oblivion.
Emeritus Professor Steven Schwartz is a Senior Fellow at the Centre for Independent Studies and the former Chair of the ACARA Board