Authored by Professor Pauline Taylor-Guy, Dr Jarrod Hingston and Dr Pina Tarricone, Australian Council for Educational Research
The research findings are clear: data are crucial to school improvement and school leaders play a critical role in identifying reliable and meaningful data, write ACER’s Pauline Taylor-Guy, Jarrod Hingston and Pina Tarricone.
The Australian Council for Educational Research (ACER) has worked with schools and education systems around the world on school and system improvement for more than a decade. We have developed a suite of practical and effective evidence-based tools to use in this work, including the National School Improvement Tool (NSIT, © ACER), the Principal Performance Improvement Tool (PPIT), the Education System Improvement Tool (ESIT, © ACER), and a range of psychometrically valid and reliable assessments such as the International Schools Assessment (ISA). These tools are underpinned by evidence from research into the practices of highly effective schools, school leaders and education systems internationally, and this work has given us the opportunity to collect evidence from hundreds of schools, and thousands of school leaders and teachers, in Australia and globally.
The evidence we have gathered has enabled us to identify commonalities across incredibly diverse country contexts and, in turn, to make recommendations that are effective in a range of educational environments. It is the kind of critical evidence school leaders need when making decisions about improvement initiatives, and when measuring their impact.
Leading data-based change
The international literature is consistent in identifying the importance of leadership and evidence in driving school improvement. Critical to improvement is that ‘the school leadership team and/or governing body have established and are driving a strong improvement agenda for the school, grounded in evidence from research and practice, and expressed in terms of improvements in measurable student outcomes’ (Domain 1, NSIT, p3). Similarly, Domain 5 of the PPIT, ‘Driving data-informed practice’, identifies that ‘Highly effective principals… expect school initiatives and classroom teaching to be guided by, and to respond to, evidence of existing student needs and performance. They promote the school-wide collection and use of quality data to identify starting points for action, to set goals for improvement, monitor changes over time, and to evaluate the impact of actions and decisions to improve student outcomes and wellbeing’ (p19).
All change should be carefully considered, well planned and monitored, and data are fundamental to the process of identifying a starting point and a desired destination. While every school collects data about learning through formal and informal student assessment activities, the school leader plays a key role in discriminating between data that are reliable and meaningful and data that are not.
Historically, school leaders predominantly referred to summative assessments for evidence-based decision-making; while this is still a common practice, the value of information derived from summative assessments is increasingly up for debate. Are the data a valid measure of learning in a particular area, especially in a group of learners with diverse abilities? Are they capable of revealing something meaningful that can be used to inform teaching? Data from summative assessments provide valuable norm comparisons within the student cohort but give little information about where each student is in their learning or how much progress he or she has made over time.
Additionally, the increased use of school information systems and data reporting tools has resulted in a deluge of data. In many cases the information appears to provide valuable evidence about student outcomes and may even try to identify factors that influence these outcomes. As useful as the data appear, however, school leaders still need to be ‘data literate’ enough to interpret what they actually mean in the classroom.
Measuring learning progress
Access to reliable and meaningful data that school leaders can use in decision-making and in monitoring the impact of initiatives is critical. To identify whether assessment data are meaningful, school leaders and teachers alike must first consider what the data indicate about where students have reached in their learning, whether at the individual student or cohort level.
In recent years, ACER has made the communication of student achievement a research priority, and in particular the development of learning progressions in mathematics and reading. ACER’s learning progressions describe a continuum of knowledge, skills and understandings that students attain in order to progress in a learning domain. The progressions are based on evidence collected through ACER’s research and are underpinned by a reliable measurement scale. In addition to providing a progression map, ACER learning progressions also describe levels of learning and the types of knowledge, skills and understandings that students typically demonstrate within the domain.
Our research shows that assessments become more meaningful when they are aligned to a learning progression. The learning progression enables the assessment to establish a student’s level of attainment and substantively describe learning, which then helps teachers identify likely next steps in a student’s or a group of students’ learning. Administering subsequent assessments aligned to the same learning progression allows leaders and teachers to measure growth in learning over time. From the perspective of the school leader, this is particularly critical for reflecting on the impact of programs and initiatives in the school. Moreover, the learning progression, as a qualitative as well as quantitative tool, enables school leaders to develop a greater understanding of what that impact actually means for improving student outcomes.
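The idea above can be illustrated with a minimal sketch. The level names, scale cut-offs and scores below are hypothetical, purely for illustration; they do not represent ACER’s actual progressions or measurement scales. The sketch simply shows how two assessments reported on a common scale can describe both growth and a change in attainment level:

```python
# Hypothetical learning progression: each level has a lower bound on a
# common measurement scale (illustrative values only).
LEVEL_CUTOFFS = [
    ("Level 1", 0),
    ("Level 2", 300),
    ("Level 3", 400),
    ("Level 4", 500),
    ("Level 5", 600),
]

def level_for(score):
    """Return the highest progression level whose cut-off the score meets."""
    attained = "Below Level 1"
    for name, cutoff in LEVEL_CUTOFFS:
        if score >= cutoff:
            attained = name
    return attained

def growth(score_then, score_now):
    """Describe a student's growth between two assessments on the same scale."""
    return {
        "scale_growth": score_now - score_then,
        "level_then": level_for(score_then),
        "level_now": level_for(score_now),
    }

print(growth(420, 510))
# e.g. {'scale_growth': 90, 'level_then': 'Level 3', 'level_now': 'Level 4'}
```

Because both scores sit on the same scale, the difference is a measure of growth, and the level descriptions give it substantive meaning for teaching.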
Leading a culture of improvement
ACER’s highly adaptable tools, underpinned as they are by a broad evidence base from international research and practice from hundreds of schools around the world, allow those schools seeking to improve to learn from the practices of high performing schools operating in highly effective systems. What does this look like in practice?
The research reveals many factors that contribute to and influence student achievement, at both the individual student and student cohort level. For example, ACER has been heavily involved with the Program for International Student Assessment (PISA) and Trends in International Mathematics and Science Study (TIMSS), renowned international studies that collect evidence of factors that influence student achievement, and which provide important system-level insights.
By contrast, the NSIT gives us access to school-level insights. Schools that rate highly against the NSIT framework do not necessarily have higher-achieving students than schools that rate lower against the framework. What we do see in schools with a culture of continuous improvement, however, is that every student improves in their learning over time. Factors observable in a culture of improvement include: the use of valid, reliable and direct measures of student outcomes to pinpoint where students are in their learning; the use of these measures in planning next steps in teaching and learning; and the monitoring of progress over time. In addition, teachers in schools with highly effective practices are highly skilled in using assessment data to ensure all students are progressing, and in developing quality classroom-based assessments.
About the authors:
Professor Pauline Taylor-Guy is Director of the ACER Institute and Centre for School and System Improvement. She has spent many decades in school, education system and teacher education leadership in the UK, Africa, Asia and Australia. Email: Pauline.firstname.lastname@example.org
Dr Jarrod Hingston manages ACER’s School Assessment Services, which delivers the International Schools Assessment (ISA) program to over 400 schools around the world each year. Jarrod was previously head of student assessment and reporting at the Abu Dhabi Education Council and has worked in various assessment policy-related roles for Australian federal and state departments of education. Email: email@example.com
Dr Pina Tarricone is Principal Research Fellow at ACER and manages its office in Perth, Western Australia. She is a published author and has a PhD in Educational/Cognitive Psychology.
The Association of International Schools in Africa is dedicated to serving its members throughout Africa. Please let us know if you require additional or specific information, resources and or support, and we will endeavour to assist you as soon as possible.