We need to reform NAPLAN to make it more useful
The National Assessment Program — Literacy and Numeracy (NAPLAN) has now been in place for a decade. Some 4.5 million young Australians, now aged between nine and 24, have taken NAPLAN tests at some point during their schooling.
But NAPLAN has its critics and, as with all testing programs, would benefit from ongoing review and refinement.
Here are two suggestions that might make these tests more useful to classroom teaching and learning.
Abandon public comparisons of school results
In common with the state-based tests (for example, the NSW Basic Skills Test) it replaced, NAPLAN was introduced to provide parents, teachers and schools with objective information about students' foundational literacy and numeracy skills.
What is NAPLAN and is it important?
- The National Assessment Program tests the literacy and numeracy skills of students in years 3, 5, 7 and 9
- Students cannot pass or fail the assessment
- The annual testing is designed to help governments and schools gauge whether students are meeting key educational outcomes
- The results help identify strengths and address areas that need to be improved
- Schools and parents can see how an individual student's learning is tracking compared to their classmates and the national average
The tests were introduced in response to concerns that unacceptably low levels of reading and numeracy were going undetected and unaddressed in Australian schools.
Since the introduction of NAPLAN, there has been a marked increase in the stakes attached to these tests. School results have been made available for public comparison on the My School website. Some schools even use NAPLAN in their marketing and student selection processes.
Other schools and school systems use NAPLAN to hold teachers and school leaders accountable for improvement, including making test results part of performance reviews. And there have been proposals to make NAPLAN results the basis of teacher performance pay and financial rewards for school improvement.
As a result, parents, teachers and schools now place greater importance on NAPLAN results than they did on the earlier state-based tests. This has led to reports of inappropriate levels of practice testing and increased student test anxiety. It has also narrowed teaching to the test, and led to occasional cheating.
The decision to make all schools' NAPLAN results public was based on a belief this would provide parents with better information when choosing schools.
Underpinning this was a market-driven belief that the risk of losing students would give schools a powerful incentive to improve. But test-based incentives have proven largely ineffective in driving school improvement.
Parents have sometimes drawn incorrect conclusions about the quality of a school from publicly reported test results. And public comparisons of schools have resulted in a range of unanticipated negative consequences such as narrowing teaching and increasing levels of teacher and student stress.
An obvious strategy is to stop reporting school results publicly and to restrict access to school-level NAPLAN data to individual schools and school systems. The primary focus of literacy and numeracy testing might then return to its original purpose of informing teaching and learning.
Enhance the instructional value of NAPLAN
NAPLAN is a set of paper tests given to students in years three, five, seven and nine, although this year some schools will administer the tests online for the first time. The instructional value of these tests appears to be limited in several ways.
- Because common year-level tests are too difficult for some students and too easy for others, NAPLAN provides little information to guide the teaching of those students.
- Because the marking of paper tests is a time-consuming process, results are provided many weeks after testing, limiting their usefulness for teaching.
- NAPLAN tests include only a few items on each literacy and numeracy skill and so are of limited diagnostic value for individual students.
The delivery of NAPLAN online, being gradually implemented from this year, offers an opportunity to address these limitations.
The first step is to phase out fixed, year-level tests and replace them with online "adaptive" tests. In adaptive testing, each student is given questions targeted to their own skill level. This provides more precise information about the point individuals have reached in their learning, regardless of their year level.
The benefits of adaptive testing are best realised when the purpose of testing is to establish and understand where individuals are in their skill development. This requires the substantive interpretation of test results by reference to a well-constructed map of long-term skill development.
NAPLAN scores are expressed on a numerical scale that extends across years three, five, seven and nine. These scores are interpreted with reference to a hierarchy of skill "bands" (or proficiency levels).
NAPLAN's instructional usefulness would be enhanced by describing and illustrating these skill bands in ways that better guide teaching and learning, and by making them the direct reference for interpreting students' performances.
Online delivery and scoring will also provide more immediate feedback to teachers and students, improving NAPLAN's instructional value.
Delivery in an online environment also introduces the possibility of changing NAPLAN itself. For example, the wrong answers a student gives in numeracy could be analysed automatically to diagnose the mistakes they are making. These diagnoses could then be checked by presenting further questions of the same type, with the results fed back to the teacher.
Geoff Masters is CEO of the Australian Council for Educational Research. This article was originally published on The Conversation.