The School Education Bunfight, or how Populism, Ideology and Political Cowardice distort Policy
In December every year since 2008 the results from NAPLAN have been reported. NAPLAN requires every school student in every state and territory, in alternate years from year 3 through year 9 (years 3, 5, 7 and 9), to sit a standardised test intended to assess proficiency in literacy, across several dimensions including reading and writing, and in numeracy. NAPLAN results are posted for each school on the MySchool website, which also contains information about the socio-economic profile of each school, based on self-reports in respect of the parents, and about financial support, both operational and capital, by source. Reports compare states, schools and trends. Test scores are averaged for schools and for states, and variations noted. NAPLAN and MySchool are managed by ACARA, the Australian Curriculum, Assessment and Reporting Authority, established by the Gillard Government.
The issues taken up in this essay follow earlier essays on the subjects of standardised testing, public vs private schooling and school choice. Fuller treatments are to be found in the Springer publication “Education: The Unwinding of Intelligence and Creativity”.
The results for 2016 in general showed, with the exception of Year 9 writing, that student achievement has stayed the same for the eight years the tests have been run. Year 5 students performed significantly better in reading and numeracy in 2016 than in 2008. Otherwise there were small improvements in some areas and in some states and declines in others. ACARA provides detailed results and analysis. Indigenous students showed improvements in some areas greater than did non-Indigenous students.
NAPLAN is grounded in the belief that enhanced accountability provides opportunity for choice, in this case by parents about which school to send their children to. The assumptions are that the MySchool data, including NAPLAN results, are critical to parents who want to make choices, that parents will be able to evaluate the information, and that they will make valid judgements on the basis of it. Every one of these assumptions is problematic. In particular, no matter how the data are presented, comparisons between schools are made. Such comparisons are of no significant value, as shown by numerous studies and as explained further below.
In 2016, the release of NAPLAN results followed close behind the release of results of two large international tests TIMSS and PISA.
TIMSS (Trends in International Mathematics and Science Study), as explained by ACER, the Australian Council for Educational Research, which compiles the results, is “designed, broadly, to align with the mathematics and science curricula in the participating education systems and countries, and focuses on assessment at Year 4 and Year 8. It also provides comparative perspectives on trends in achievement in the context of different education systems, school organisational approaches and instructional practices; and to enable this, TIMSS collects a rich array of background data from students (in years 4 and 8), schools and teachers, and also collects data about the education systems themselves.”
PISA (Program for International Student Assessment) assesses the knowledge or literacy in the domains of science, mathematics and reading on the basis of a sample of 15 year old students from countries in the OECD member states and “partner” countries; it does not assess knowledge of any specific curriculum. PISA assesses skills in analysis, reasoning and communication. Each country is responsible for analysis of the data; in Australia the analyses are done by ACER. Results from PISA are analysed by numerous other researchers world-wide.
The international tests were greeted with the usual lamentations and criticisms about Australian students falling further behind students of other countries. That the scores for PISA were higher than those of students from the UK, most of Europe and many other countries is seldom mentioned.
Some of the commentary is silly as well as inflammatory: one of the interesting observations was that Australian students’ scores were lower than those from Kazakhstan: eight years ago, Justine Ferrari in The Australian of 10 December 2008 screamed, “Doesn’t add up: Borat kids beat Aussies in maths and science” referring to the TIMSS results for 2007. Predictably commentator Kevin Donnelly in 2008 labelled the results a “wake-up call to education chiefs”. Considering the latest results it would seem they did not listen to Mr Donnelly.
In all these tests, a link is asserted between educational attainment as measured by test scores and economic growth, as if all that is needed is to improve educational achievement. The assertions of people like Eric Hanushek and Ludger Woessmann of Stanford University’s Hoover Institution about how much increase in economic growth will follow from test score increases are accepted at face value.
In other words, the social determinants of educational achievement, like the social determinants of health advanced by the 2016 Boyer Lecturer Sir Michael Marmot, are ignored. The fact that gains are very much more likely at the lower end of the continuum than at the top, and that the low average is due mainly to the large number of low scores, tends to be forgotten in the rush to look at rank based on averages. In the best performing systems the number of low scores is relatively small: the range is smaller. Behavioural economics tells us that at the top, variation is to a significant degree a function of chance.
The results for TIMSS showed Australia around the middle of the pack, the scores for both year 4 and year 8 students in mathematics at about the same level as in 1995 and the scores for science were similar. Results for some countries in TIMSS, such as the US and UK, are significantly better relatively than they are in PISA.
In the case of PISA, the further drift down in the rankings for Australian students in the 2015 results was highlighted again. The superior performance of students from Singapore was especially noted. Little or no commentary was offered as to what it is that happens in the Singapore system and why its students did so well. (Information is fairly readily available!) Little or no comment was made about the relative performance of other countries’ students.
Sydney Morning Herald Education Editor Kelsey Munro pointed out that, comparing students in the top and bottom quarters according to their parents’ socio-economic background, the bottom 25 per cent are on average three years of schooling behind the top 25 per cent. She quotes Dr Sue Thomson of the Australian Council for Educational Research (who is in charge of compiling the country reports from PISA and TIMSS) as pointing out that we’re just not dealing with the equity gap. “I was quite saddened to look at that data,” she said. “There’s no difference over 16 years of reading, 13 years of maths – no changes. We are still not attending to those gaps.”
Federal Minister Birmingham asserted that the international tests showed that increased funding for schools was not the answer. It was especially disappointing, the Minister observed, considering the substantial financial resources which had been allocated. Attempting yet again to fend off criticism of the government’s refusal to fund the last years of what has become known as the Gonski reforms, the Minister talked up the substantial additional money that had been and was going to be allocated: no mention was made of the lack of equity in the system. What was mentioned was that the quality of teaching was important but what precisely that means was not explored.
Commenting on the NAPLAN results Minister Birmingham said, “Our performance as a country is not meeting the high standards we should expect with the growth in investment we have had in our schools in recent years. In fact in some areas we have seen, at best, a plateauing and elsewhere measures are showing a decline in our performance.” He said the report showed sector reform was required, including a focus on evidence-based strategies to boost student results. He also said “politicians needed to work together to address the standards”.
Of all the comments this last is the most irresponsible. The fact is that politicians cannot have any direct influence on what happens in schools. They may vary financial support, change standards, approve curriculum changes and requirements for teacher qualifications. And fund capital developments, or not. But learning happens in the school class, and elsewhere, not in the meeting rooms of politicians! This is a point taken up below in the commentary by former teacher Gabrielle Stroud.
There are serious questions here, and important issues are treated superficially. More than that, the features which we can say on the basis of research contribute most significantly to achievement in school education are largely ignored in favour of ideology, personal experience and a general lack of analytical and reasoning skills. The Minister’s assertion that substantial money had been spent on education was generally accepted. It is wrong!
The increase in funding has been at most modest, and most of it has gone to independent schools, most of which are already well funded; in some cases states have decreased their funding of public schools. This is dealt with in more detail below. It is de rigueur not to highlight the differences in the socio-economic background of students attending independent schools, which select which students to enrol, compared with those attending public schools, which must take whatever students turn up to enrol. Nor is the critical influence of early childhood experience examined.
In this post the conclusions which can be validly drawn from the three sets of tests are reviewed and their reliability considered.
The validity of the conclusions drawn by media and politicians are then examined.
And last, some of the most important recent research is reviewed.
This essay started out as a commentary on the reportage by one media outlet, the ABC’s 7.30 Report of 13 December 2016. But the contrast between the conclusions of recent research and what has passed for commentary is far too great for that program to merit more than a few words. Unfortunately, a claim was made on the 7.30 Report of “massive funding”. Jennifer Buckingham of the Centre for Independent Studies talked of little progress being made in the last 10 years, growing investment, and gaps between the lowest and highest students in terms of socioeconomic status. Professor Geoff Masters of ACER talked of a lot of resources being “thrown at” the issue and concluded by focusing on improving the teaching that occurs in the classroom.
Dylan Wiliam is Emeritus Professor of Educational Assessment at University College London. With Paul Black, Wiliam showed the importance of formative assessment in improving learning and identified self-assessment by students as critical to that.
Wiliam recently pointed out that the PISA results can be a veritable feast for anyone wanting to draw whatever conclusions they like in their recommendations as to what should be done to improve results: for instance, that teachers should be drawn from the highest-achieving college students, as in Singapore and Finland (forgetting the Republic of Ireland, which also does that but does not achieve impressive results); that teacher autonomy is important (the Netherlands) or that central control is (Singapore); or that standardised testing is important (as in Alberta, but not in Finland).
More importantly, Wiliam points out that the results in the PISA tests are the product of 10 years of schooling, not just the last three years. And what was happening 10 years ago was likely the result of policy decisions taken 10 years before that, i.e. 20 years before the present results were obtained. Whilst countries try to get a representative sample, that can be difficult, as in China where parents who move to Shanghai for work leave their children with grandparents in the country.
“Third, and perhaps most important, educational systems have to be understood in the context of the societies in which they operate. Solutions that work in countries where there is faith in the wisdom of appointed civil servants are unlikely to work as well in systems where the chief officer of a local education authority is elected by the local citizens every four years.
“The truth is, despite all the heat that will be generated by discussions of the PISA results, we will learn little. Our world is becoming more and more complex, and so higher and higher levels of educational achievement will be needed to be in control of one’s own life, to understand one’s culture, to participate meaningfully in democracy, and to find fulfilling work.”
The Funding of Education: Money Makes a Difference and It Has Gone to Those Who Don’t Need It
Before proceeding to consider recent analyses of what does in fact make a difference and what ought to be the focus of resource and policy it is necessary to address the actual financial resources. In this respect, again, the argument advanced by the Abbott and Turnbull governments amounts to gross untruths.
The allocations over the years have been analysed by several researchers, especially Trevor Cobbold of Save Our Schools (SOS), a former Productivity Commission economist. In several posts on the SOS website, Cobbold has pointed to the way the government distorts the funding argument. Within the last few months the Sydney Morning Herald reported the distortions revealed by Cobbold in a severe criticism of the Productivity Commission. The facts are these, as detailed in just one of Cobbold’s numerous careful posts, “Australia’s Unfair School Funding System Must Be Overhauled” of February 15, 2016.
Over the past 15 years, total Commonwealth and state government funding for private schools has grown at more than twice the rate of funding for public schools, and in more recent years funding for public schools has been cut while private school funding still increased. Between 1998-99 and 2013-14, government funding per private school student, adjusted for inflation, increased by 39% compared with only 17% for public schools. And between 2009-10 and 2013-14, public school funding per student fell by 3% while private school funding increased by 10%.
Worse, since 2009, total government funding per student for many high fee, exclusive private schools in Victoria and NSW increased by several times more than for many highly disadvantaged schools.
Cobbold concluded, “It is apparent that Australia has an incoherent and unfair school funding system that favours advantaged students and discriminates against disadvantaged students. Funding increases over the past 15 years have been woefully misdirected.” He referred to David Gonski’s assertion that funds were not allocated according to a needs-based, aspirational system.
“There can be little wonder that Australia has failed to improve the results of disadvantaged students or to reduce the large achievement gaps between advantaged and disadvantaged students over the past 15 years. Public schools bear the very large burden of disadvantage but received less than half the funding increase provided to private schools. This incoherent and unfair funding system is set to continue.”
State and territory governments are the main source of funding for public schools, but nearly all cut public school funding in real terms between 2009 and 2013 and most refuse to commit the final two years of the Gonski plan, Cobbold pointed out.
To argue that the results of student achievement are particularly disappointing given that considerable financial support has been allocated is not simply a distortion. The public has been sold the line that choice of school is important and that the substantial funding from the Federal government is based on that and helps parents get the best education for their children. It does no such thing! Nor does it save taxpayer money, as careful analyses by researchers such as Lyndsay Connors and Jim McMorrow, reported elsewhere on this site, have shown.
Abbott and Turnbull government ministers, Christopher Pyne and Simon Birmingham, have consistently claimed that money is not important in improving education outcomes. That is also wrong, again as Cobbold and others have shown. For instance, in “Another Study Shows That Money Matters in Education” (February 4, 2016), Cobbold quotes a review of empirical studies of the relationship between funding and student outcomes: “On average, aggregate measures of per-pupil spending are positively associated with improved or higher student outcomes. The size of this effect is larger in some studies than in others, and, in some cases, additional funding appears to matter more for some students than for others. Clearly, there are other factors that may moderate the influence of funding on student outcomes, such as how that money is spent. In other words, money must be spent wisely to yield benefits. But, on balance, in direct tests of the relationship between financial resources and student outcomes, money matters.”
Cobbold further points to a study showing that teacher salaries do affect the quality of the teaching workforce, which in turn affects student outcomes. “The review also found ample research indicating that children in smaller classes achieve better outcomes, both academic and otherwise, than in large classes and that class size reduction can be an effective strategy for closing racial and socioeconomic achievement gaps… The available evidence leaves little doubt: Sufficient financial resources are a necessary underlying condition for providing quality education.”
There is a final deceit in the campaign by the Coalition government to convince the populace that the magnitude of money is not that important. At the same time as repeating the view that how the money is spent is the important issue, it is refusing to allocate the extra funds, promised as part of the agreement between the Gillard-Rudd governments and some states, that were to flow in the last two years of the six-year plan. And meanwhile substantial funds continue to flow to already well-funded private schools. I return to this in summarising a recent contribution from Dr Bronwyn Hinz on why Simon Birmingham is wrong.
What is the Value of NAPLAN?
There are serious questions about standardised tests in general: they increase stress on students and parents, they force a narrowing of the curriculum and they make no contribution to improvement of achievement. There is a further major issue which applies to all externally imposed and monitored evaluations, both measures and assessments of performance: they remove control from the person subject to the evaluation. The inherent assumption is that the persons concerned are unlikely to evaluate their own performance reliably but will more likely exaggerate positive achievement and conceal the negative. This conflicts with the notion that the most powerful motivator is intrinsic motivation, noticeable especially for sportspersons.
The externally monitored and interpreted evaluation increases the power of the external evaluator whose bona fides cannot be regarded as intrinsically dispassionate. What the external evaluator chooses to disclose and not disclose affects the way the results are interpreted: thus, if there is a 10% decline from one year to the next, the ultimate judgement will be a great deal different from the response to a gain of over 5%. And in neither case is remedial action likely to be undertaken unless the factors contributing to the outcome are clearly specified and agreed upon, that is the reasons for the difference are understood. The problem with the reports of these standardised tests is that the reasons are not explained.
Summative evaluation, which is what these tests are, deals with following and not leading indicators: by the time the results are out it is too late. Very likely what should be looked at are the processes which contribute to the results. But those with most influence don’t have the knowledge about that: what they have is microeconomic data about costs and employment statistics and teacher qualifications, numbers of children, class sizes, school enrolments and socioeconomic levels (or proxies for them). We know, or most of us know, there are relationships between the last and achievement but the commentary almost never explains that: it is fundamental to the issue.
External evaluation of teacher performance is characterised by disempowerment of the teacher: performance is judged by outsiders. Those persons are usually ones with a particular ideological approach to school education as the driver of student performance which marginalises all other factors. The effective teacher on the other hand has a good idea of what it is they contribute, to an extent distinct from what parents and peers and other factors, most especially the student’s early years, contribute. Those exercising most influence poorly understand the dynamics of the classroom and the influence of other factors. This is a major point made by Professor John Hattie in his writings (see separate essay).
Criticism of NAPLAN has been a major issue but the government ploughs on regardless. Detailed consideration of the validity of NAPLAN was undertaken in July 2010 at a conference organised by a number of organisations including the Australian Education Union: researchers from the disciplines of education and statistics drew attention to the way the tests narrowed the focus and in essence privileged the later benefit of education to the economy over the benefit to the individual’s own life. Professor Allan Luke pointed to the US situation where “testing had led to a virtual industry of student measures … tests have become the de facto curriculum which inflated attention to the end-of-year results and exaggerated basic skills”.
To an extent the issues about standardised tests apply to PISA but there are gains in that exercise from the substantial analysis of the data and the exploration of the factors contributing to educational achievement.
So to the 2016 NAPLAN exercise. Minister Birmingham complained that the results showed that achievement had “flatlined”. Some of the more expansive media commentary noted that in some cases achievements of Indigenous students showed higher increases than those of non-Indigenous students. And there were similar observations for students from a language background other than English.
In essence, considering the detailed and relatively minor differences between the scores in each year and between the students of each state and territory, most commentary on NAPLAN adds virtually nothing to our understanding of what is going on in schools and how to improve the outcomes. For the most part it is a waste of time.
The main conclusions to be drawn are that in those states where the economic conditions are reasonable and where education facilities are at least moderately satisfactory, the results are quite good, with variations from year to year of little if any significance. The achievement of students in areas which are remote or very remote is poor, and that is especially so in the case of Indigenous students. Thus the results for the Northern Territory are consistently very much lower than for everywhere else, and the results for Western Australia and for Queensland, where there are more remote schools with Indigenous students, are also lower.
Data have been collected on parents’ education and on family socio-economic level. As every single study has without fail shown, those are very significant influences, most especially in early childhood. The expense of NAPLAN is hardly needed in order to show that. The gains which would come from teaching Indigenous children in their own language, which is fundamental to identity, at least in their earliest years, are not discussed. Arguments between different approaches to the school education of Indigenous children in northern Queensland have become intense, with little intelligent attention to what might be contributing to gains and losses in each. As in some other areas of concern, such as climate change, there are of course dissenters from all this whose arguments ignore all available evidence.
What the NAPLAN program does do is unfortunate. Together with the MySchool website it encourages the view that there are some schools where performance is significantly better than in others, and that the difference is due significantly to the school: if only students could attend the school with better results they themselves would achieve better results, and go on to do better in later life because of their educational achievement. Considering the small range of skills examined, notwithstanding their general relationship with other skills, this is of doubtful validity.
The outcome, as noted in previous essays, is the well-publicised drift of students from public schools to independent schools, driven very much by marketing by those schools in order to attract students and so make more money – notwithstanding that they are nonprofits – which of course has nothing to do with improving education.
Studies of the achievement of students at university reveal that public school students do better than students from private schools, and studies of salary data show little relationship with school test scores, which led James Heckman of the University of Chicago to point out that rewarding teachers on the basis of test scores distorted policy. The principal variation is within schools – due mainly to teachers – and not between them: fortunately, as Hattie notes, choice does not extend to parents choosing which teachers their children have.
The government is spending more money to bring the tests online. Senator Birmingham is talking of having year one students join the NAPLAN exercise. Some are advocating introducing standardised tests for pre-school children. Companies producing textbooks of various kinds and booklets for practising how to answer tests are making a lot of money. So are individuals and small businesses offering coaching. How is any of that likely to lead to increased knowledge and understanding by students? The answer is that it won’t make any difference and is a complete waste of money!
The importance of PISA
There is an extremely important point to make about PISA, a point lost in the turmoil of arguments about standardised testing and the ability of countries to control their education systems in the face of those demanding accountability and advocating parent choice. PISA is a project of an economic organisation, the OECD. PISA not only reports student scores. It provides a comprehensive analysis of the data relating features relevant to the results, including especially differences between types of school and relationships between student achievement and the socio-economic background of the student. Its conclusions on both these issues are unequivocal!
Equity is revealed as a major issue in the PISA commentary and once SES is considered there is no gain from attendance at independent as opposed to public (government) schools. Choice is of no consequence. Anyone knowing anything of recent advances in behavioural economics would be unsurprised by that conclusion. Government ministers do not study behavioural economics and neither do the ideologues who influence them!
The high level of inequity in the US education system, due significantly to the funding of school districts based on rates charged to residents, was the subject of a special PISA report in 2009. The response was to brand PISA as ideological and to highlight all sorts of features of some countries’ systems, such as excessive out-of-school coaching, in order to render PISA’s conclusions irrelevant. That the US system was not following the practices of countries like Finland, whose students consistently perform well, was highlighted by many education researchers such as Diane Ravitch and David Berliner. But ignored. Charter schools continue to receive support, especially from the super-rich who have most influence. Campaigns for election to school boards involve trying to get rid of teacher union representatives. A very recent contribution on the Forbes website claims the PISA results are gamed through non-random selection of students by country and city authorities in Asia so that the best performing students sit the tests. That hardly explains the situation in Finland and some European countries, or Canada.
The standard response by the media and politicians to the publication of the PISA results (and every other test) is to focus on rank: where the scores of students from our country come in the table of all countries’ scores. This is nonsensical and unhelpful! The PISA reports are careful to group the scores for the students from each country, pointing out those that differ significantly from one another and those that do not. That is done in the tables AND in the commentary. Thus the rank as reported by the media is meaningless: country A, whose student scores do not differ significantly from those of countries B, C and F, may rank 10th, below three other countries, two of which are not significantly different from each other; properly judged, country A is equal 3rd and not 10th!
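The arithmetic behind those significance groupings is simple and worth sketching. The scores and standard errors below are invented for illustration (they are not actual PISA figures): two country means are treated as statistically indistinguishable when the gap between them is smaller than roughly 1.96 standard errors of the difference.

```python
import math

def significantly_different(mean1, se1, mean2, se2, z=1.96):
    """Two mean scores differ significantly if the gap exceeds
    z standard errors of the difference (independent samples assumed)."""
    se_diff = math.sqrt(se1**2 + se2**2)
    return abs(mean1 - mean2) > z * se_diff

# Hypothetical country means as (score, standard error) -- NOT real PISA data
country_a = (503, 3.0)
country_b = (505, 3.1)

# A two-point gap with standard errors of about 3 is well inside sampling noise:
print(significantly_different(*country_a, *country_b))   # False: a "rank" gap, not a real one

# A thirty-point gap with the same standard errors is unmistakably real:
print(significantly_different(503, 3.0, 533, 3.0))       # True
```

On these invented numbers, country A would sit places below country B in a league table even though the difference is statistically meaningless, which is exactly why the PISA reports group countries rather than simply ranking them.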
For instance, Australian students’ scores for reading literacy were surpassed by 11 other countries in 2015 whereas in 2012 students from 9 other countries exceeded Australian students. In science literacy the scores of Australian students in 2015 were not as good as those of nine other countries of which four were OECD countries but in 2012 10 countries were better, three of which were OECD countries. In mathematical literacy in 2015 Australia joined four other countries, notably Finland, Korea and New Zealand whose students have high average scores, and the Slovak Republic in having a significantly higher proportion of low performers than in 2012.
The countries from which students came whose scores were higher than Australian students’ are from Asia and include Singapore, whose students topped the three domains of literacy in 2015, and Shanghai China. Much fuss was made in earlier years of the lower place of Australian students after a number of partner countries in Southeast Asia became part of the study for the first time. Such superficiality is out of place.
Much is made of the decline in Australian student scores: indeed this is the headline in the media reports and the comments of the minister. And others. The decline is important, but is it critical? In mathematical literacy the decline was 10 points (standard error SE for both years 1.6). But we should note that the decline for nine other countries was greater; that for Korea was 30 score points, three times Australia’s. Whilst that is a large decline, Korea was still 34 score points higher than Australia. Simply pointing to the decline does not help.
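A rough check of the figures quoted above shows why the 10-point decline, though modest, is statistically real. This is a simplified back-of-envelope calculation that assumes the two samples are independent; actual PISA trend comparisons add a linking error across cycles, so the true uncertainty is somewhat larger than this sketch suggests.

```python
import math

# Australia, mathematical literacy, 2012 vs 2015 (figures quoted in the text)
decline = 10.0               # score points
se_2012, se_2015 = 1.6, 1.6  # standard errors of the two national means

# Standard error of the difference between the two means
se_diff = math.sqrt(se_2012**2 + se_2015**2)   # about 2.26 score points

z = decline / se_diff                          # about 4.4
print(f"z = {z:.1f}")   # far beyond 1.96, so the decline is statistically significant
```

So the decline is real; the point stands, though, that its magnitude, its causes and its distribution across the student population, not the bare fact of decline, are what need attention.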
The declines and advances need to be considered in relation to other countries, the features of those countries’ systems, and the difference between the high and low scores, which is the measure of equity. And it is these last two aspects which are missed.
It is the inequity which is dragging Australian average scores lower. The fundamental issue of education policy in respect of resource allocation should be the students who perform poorly. That is the thrust of the Gonski reforms. Yet the commentary from media such as The Australian and from the politically right-leaning think tanks focuses on the top performing schools, and does so in the context of neoclassical economics: competition, individual achievement, teacher evaluation, choice and the notion that anything funded or run by government will be inferior. Teachers’ knowledge and the content of the curriculum are given a lot of attention. These most especially have been the focus of the Howard, Abbott and Turnbull governments and their Ministers: Vanstone, Kemp, Nelson and Bishop and, in the last five years, Pyne and Birmingham. And they are not the most important!
There are numerous reports on the link between inequality and educational attainment, amongst the most recent being the work of former school principals Chris Bonner and Bernie Shepherd. Indeed, inequality is increasingly a matter of intense research by economists such as Thomas Piketty and a host of others. It seems not to have reached the minds of some government ministers!
Neoclassical theory is of no value here: poverty will not be eradicated by education alone, and educational achievement will not be lifted solely by a focus on schools. Professor John Hattie makes cogent points as to whether parents and ordinary citizens actually understand what does make a difference. Governments playing to popular opinion should be condemned wherever they ignore evidence simply to gain political points. Democracy needs to attend to the views of the populace, but not when doing so flagrantly disregards any sensible analysis of the facts!
Reactions to the commentary on the TIMSS, PISA and NAPLAN results
Chris Bonner and Bernie Shepherd, past principals and now commentators on education, responded to the PISA results by pointing to the importance of equity! Again. They referred to the view of Amanda Ripley in the New York Times and to John Hattie’s Keating Lecture.
Ripley claimed, “the smartest countries tend to be those that have acted to make teaching more prestigious and selective; directed more resources to their neediest children; enrolled most children in high-quality preschools; helped schools establish cultures of constant improvement; and applied rigorous, consistent standards across all classrooms.”
She also talked up the Common Core Standards adopted by some US states and supported by state governors.
Gabrielle Stroud, a former teacher who resigned in frustration at the emphasis given to the requirement that schools forward performance results to "Canberra", talks of the trust in teachers being eroded. She points to the Melbourne Declaration, formulated by government ministers of education, as a "manifesto written by politicians in 2008 that promised every child a world-class education". The Declaration was the progenitor of NAPLAN and of MySchool, whose website "led to parents regarding schools like insurance companies" where they could "shop on line and compare". (The Declaration was heavily criticised by the National Secondary Principals Council for its blurred vision and lack of commitment.)
She lamented that "the endless paperwork associated with Professional Teaching Standards means teachers are required to 'prove themselves' over and over, which takes them away from the valuable work they want to do: teaching students. Classrooms have gradually become centres of accountability, where learning is prescriptive and curriculums are imposed. Teachers can no longer meet learners at their point of need, teach them and then celebrate their progress."
Stroud observes, “Senator Birmingham has suggested introducing reforms that will identify struggling students earlier and allow for more targeted interventions. This translates to testing students when they’re younger and then attempting to “fill the gaps” with prescriptive remedial programs. But student learning doesn’t work that way — we need our youngest and most vulnerable students to fall in love with learning. We need them to discover that school is a place where they are valued, where there are opportunities, and that coming to understand new and interesting things about the world is a wonderful and important skill to have.”
Stroud joins the many condemning standardised testing. “We need to stop holding up only standardised test results to show how students, teachers and schools performed. Instead, we should start having conversations with teachers about our children’s development, effort and interests. Our children should be celebrated for having a go, for shifting out of their comfort zone, for creating, for having an opinion.”
Lifting the standard of teaching requires, in Stroud's view, handing trust back to teachers. There are those who will label such a view a wish to avoid accountability. They should note that Finland, whose students consistently perform near the top in PISA, does not have standardised tests but trusts its teachers.
Similar claims about the obsession with accountability were made by Ned Manning: every spare moment of a teacher's life is spent not preparing lessons or finding resources but satisfying bureaucratic demands, including filling out paperwork that could be done by anyone.
There is an obsession with testing, turning the gaining of a particular score into a gateway to the future: it may not be possible in future to sit for the Higher School Certificate in New South Wales, and hence to take the path to university, unless Band 8 is achieved in Year 9! How many outstanding writers, musicians, dancers, even scientists, would have passed these kinds of tests?
The recently appointed Chief Justice of the High Court, Susan Kiefel AC, dropped out of school at age 15 upon completing year 10 and worked as a secretary, later completing secondary school and then undertaking her law degree at night school.
Christofer Toumazou of Imperial College London failed his 11-plus exams and left school at age 16, finding that it failed to inspire him, yet went on to be appointed Professor at Imperial College at age 33 and to become a leading inventor in the biomedical world and an entrepreneur. He was one of the youngest academics ever to be appointed a Regius Professor, a post under Royal patronage.
Sir John Gurdon, joint winner of the Nobel Prize for Physiology or Medicine in 2012, was last in his class in high school science and was told by his teacher he should give up his ambition to be a scientist. Would these three persons have passed Band 8 in Year 9?
Funding Australian Schools
Substantial commentary on the unresolved issue of school funding continues unabated, with little reference by the Federal Government to student outcomes: interesting, considering the political influence of the independent schools sector, which did for a previous Labor Party leader.
Bronwyn Hinz at the Mitchell Institute simply asserted that Minister Birmingham was wrong about school funding: “axing Gonski now cannot be justified”!
“While educational performance as measured by national and international standardized tests (a limited measure) have stagnated or fallen in most states as government funding for schools has increased, this is largely explained by the fact that this government funding has not been allocated to the schools that most need it, and it has not always been allocated to the most effective programs, such as investing in teachers and investing in high quality early education (such as preschool) for all kids, so that they are better able to amplify their development and learning at school.”
Julie Sonneman and Peter Goss at the Grattan Institute pointed out that the slip in international rankings was simplistic and unproductive: “First, these results only show student achievement, not student progress (growth over time), which is a far more important measure of the value-add of an education system, as well as student resilience for later life. Second, they should not be whipped up into a frenzy for political ends, as further evidence that “money doesn’t matter” (given results have not improved as funding has increased).”
Goss, with Kate Griffiths, entered the debate about the funding of private schools and the claim of overfunding of some schools which Minister Birmingham has also talked of. They refer to the Grattan Institute Report and advocate redistribution of existing funding away from schools in some states and territories, notably those in the ACT and also Western Australia, to other jurisdictions.
“Lifting all schools to their target funding levels is extremely costly under the current model … more than $3.5 billion each and every year to fund all schools even at 95 per cent of their target… We propose a new deal that aligns funding to need for the same amount of money. We create big savings by reducing the automatic annual growth on school funding (indexation), affecting all schools. We then reallocate these funds to the most under-funded, getting all schools to their target by 2023.”
This would work by “funding arrangements to set all schools on a course to their target within six years. In parallel, we recommend reviewing the formula for determining needs-based targets to ensure we are aiming for the right target, and adjusting targets if required. The second step is to introduce transparency in funding arrangements through an independent body, to ensure funding goes where it is needed most… current arrangements ensure that the winners stay winners and losers stay losers because school funding grows according to what you got last year, not what you need this year.”
Such an independent body – to administer the student resource standard – was recommended by the Gonski Panel: the Gillard Government did not adopt it.
Goss and Griffiths point out, “School costs are mostly wages, so school funding indexation should be linked to wage growth in order to maintain its real value over time. But the current (fixed) indexation rates were designed when wages growth was higher and are now over-generous. Changing indexation arrangements will affect all schools – it slows the growth of every school’s funding target, as well as the actual funding they receive, in line with real cost growth. The budgetary savings these changes generate are significant and should be redistributed to closing the needs-based funding gap. The proposals would be politically challenging because some independent schools will likely lose.”
Laura Perry and Emma Rowe, of Murdoch and Deakin Universities respectively, pointed out in The Conversation that the overfunding of some 150 private schools (as reported by the Sydney Morning Herald) amounts to more than $215 million a year. They say, "funding should be based on the needs of students and communities, not on an illogical basis of entitlement."
In a concise summary of the situation, the Sydney Morning Herald's Kelsey Munro points out that when German students performed poorly in PISA 2000, in fact below the OECD average (a quarter were functionally illiterate and innumerate), education authorities responded. By 2012 Germany's student results were above the OECD average. Inequality related to the ghettoisation of migrants was the main driver of the poor results in 2000. "Germany turned things around by tackling structural inequalities in the high school system; investing in early childhood education; and closing the equity gap by focusing resources on the most disadvantaged students… In Germany, after the 2000 PISA shock, state and federal leaders got together and agreed on a set of common standards and reforms. They stuck with it. It took a decade, but it worked."
Progress in Finland followed a similar path in the late twentieth century! In Ontario, a one-time Premier took steps immediately upon his election to introduce special funding for disadvantaged students, especially the children of migrants.
Despite the major political parties' professed support for needs-based school funding, it seems they are unable to join together in the way that political leaders in Germany, Finland and Ontario did.