In-School Variation and how schools can respond to it

In the UK, the performance gaps within schools (often called in-school variation, or ISV) overwhelm the differences between schools as a factor in national outcomes. At KS4, ISV is 14 times greater than between-school variation. If the subjects performing below a school’s average became average, there would be a 5% improvement in pupil attainment nationally. Reducing in-school variation has been described as “the greatest educational challenge of our time”. ISV was a focus for the National College for School Leadership (NCSL), and an outcome of that work was a suggested school approach to ISV. Today, ISV remains a suitable focus for school-improvement initiatives that investigate it, quantify its effects, and design strategies to reduce it.

Should school leaders be concerned by In-School Variation?
Consider this question:
Do summer-born pupils in your school perform differently to other pupils?
– Do we know the answer?
– Does it matter if we don’t know the answer?
– If a school leader didn’t know the answer, would this tell us anything about their leadership?

The question of the performance of summer-born pupils is one which crops up frequently. Differences between summer-born and other pupils are often observable in the early years, but does the effect continue into secondary education?
A school leader might be expected to know what the position was in their school. If they don’t know the answer, it could indicate that they don’t think it is important to find out about this – and perhaps also other school performance issues. Yet we might expect a switched-on school leadership team to know about any under-performing groups in their school and have a strategy in place to support the progress made by such groups.
It makes sense for school leaders, subject leaders and teachers to be specialists in what happens in their schools, and to understand the effect of those factors which are unique to their situation and which influence the success of their pupils.

How easy is it to find the answer to this question?
To investigate the summer-born question we could tag all birthdays that fall in June, July or August and compare this group’s GCSE results with those of other pupils. We could refine this further by looking at just those pupils with August birthdays, who in some cases could be almost a year younger than their contemporaries. How many summer-born pupils in our school had overcome any age disadvantage they experienced as early learners by the time they undertook their GCSE studies?
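The tagging step above is straightforward. A minimal sketch, using made-up pupil records and field names (all hypothetical, purely for illustration), might look like this:

```python
# A minimal sketch (hypothetical data) of tagging summer-born pupils and
# comparing their mean GCSE points with the rest of the cohort.
from statistics import mean

# Each record: (pupil, birth_month, mean_gcse_points) - illustrative only
pupils = [
    ("A", 9, 5.8), ("B", 6, 5.1), ("C", 7, 4.9),
    ("D", 1, 5.6), ("E", 8, 5.3), ("F", 3, 6.0),
]

SUMMER_MONTHS = {6, 7, 8}  # June, July, August

summer = [pts for _, month, pts in pupils if month in SUMMER_MONTHS]
others = [pts for _, month, pts in pupils if month not in SUMMER_MONTHS]

gap = mean(summer) - mean(others)  # negative => summer-born behind
print(f"Summer-born mean: {mean(summer):.2f}, others: {mean(others):.2f}, gap: {gap:+.2f}")
```

With a real cohort the same few lines run against exported results data, and the August-only refinement is just a change to the month filter.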

Because this is so easy to find out, it would be reasonable to expect any leadership team to be curious enough to know the position for their school, should anyone ask.
Summer-born pupils are just one interesting category of pupil that sits outside the usual groups – like Pupil Premium pupils – when we are looking for differences in performance.
There are a great many more interesting groups that could similarly be investigated; and if our assessment systems were reliable enough, we could get an early indication of potential gaps in performance based on current progress, and put in place strategies to reduce any gaps that we found.

What is In-School Variation?
In-School Variation is the variation in provision as experienced by different groups of learners. ISV occurs when a characteristic shared by a group of learners influences the outcomes to learning for these pupils. ISV can have a positive or negative influence upon learning effectiveness.
Some examples may be related to pupil disposition or home circumstances. Others may be due to school effects, i.e. differences in provision as experienced by different pupils, which may become observable as a ‘group effect’ once they reach statistical significance. This might include a subject that performs better or worse than the same subject nationally, or particular classes that are more or less effective than other classes in the same subject. As a quality issue, school leaders should understand where school provision is having an inconsistent impact. Investigating ‘local contextual circumstances’ is part of a process by which schools can develop knowledge about themselves, and by which a school can aim to specialise in effective teaching and learning for the pupils who attend it, in the particular context in which it operates.


In-School Variation (ISV) concerns differences in performance within a school, relative to the differences between schools (Between-School Variation, or BSV). It remains the case that at secondary school level, ISV often swamps BSV in its effect.

PISA studies first reported that the ratio of within-school variation (ISV) to between-school variation averaged 2:1 across OECD countries, but it was highest in the UK at around 4:1.
A DfES study based on 2003 data reported that in-school variation in England was even greater than the OECD had found. In value-added terms, in-school variation during Key Stage 2 (KS2) appeared to be five times greater than between-school variation; during KS3 the ratio was 11 times, and during KS4 it was 14 times greater (Ref: NCSL, Closing the Gap). In the secondary school this is a huge effect, and one which increases from KS3 to KS4. Put starkly, it means that the class a pupil is in is much more significant than the school they are in.

Many school improvement initiatives are based on the idea that pupils can be helped to get better at what they do. So Pupil Premium funding in a school might be focused on remediation activities, rather than on teacher development, improvements to the climate of the school, or the ability of parents to support their children’s learning.

If we consider just the example of the under-performance of some subjects in a school, the Fischer Family Trust have suggested that if subjects below the average in a school became average, nationally we would see an improvement in attainment at GCSE of around 5%.

Studies into In-School Variation
The work on ISV by Professor David Reynolds and Professor David Hopkins in 2003 provides a point of reference to how schools might understand and respond to ISV:
http://www.highreliabilityschools.co.uk/_resources/files/downloads/within-school-variation/wsvreport2004.pdf
In-School Variation was also the subject of a two-phase project by the National College for School Leadership (NCSL) in 2007.

At the heart of a response to ISV is the need for a school to achieve consistent teaching within subjects. The NCSL project built on the work of Professors Reynolds and Hopkins by setting up school projects to investigate the issue and find ways to address their findings.
The document arising from this work was ‘Schools learning from their best‘:
http://dera.ioe.ac.uk/7381/1/download%3Fid%3D17377%26filename%3Dschools-learning-from-their-best.pdf

It noted the difficulties of tackling ISV, and outlined the particular problem of quantifying ISV and measuring any changes to it over time. It also noted that subject leaders were the key agents of change if departments were to achieve teaching that was as consistent and effective as the best subjects in the school. The title of the document – ‘Schools learning from their best‘ – also indicated an improvement model whereby teachers known to be particularly effective could lead CPD sessions on ‘How I teach my subject‘.


How to create the climate for projects to address ISV
In 2009 the Teacher Development Agency published a document on reducing in-school variation entitled ‘Making effective practice standard practice’:
http://dera.ioe.ac.uk/1276/1/isv_guide.pdf
This document made practical suggestions about how schools can respond to ISV through a positive and optimistic approach based on three tenets: openness, trust and collegiality.

The NCSL found in its work that a ‘top down’ approach was less effective than a ‘bottom up’ one. In a top-down approach, school leaders might hand out tables showing which classes were under-performing and ask for improvements to be made. With a bottom-up approach, teachers could be invited to use data tools to explore how well their subject was performing, to discuss improvement projects, and to report back to SLT on their progress over time.
A clear finding was that schools which encourage an action-research approach to school improvement are more likely to experience success.


The Systemic Influences of ISV
Discussion point: “School senior leaders don’t improve achievement. They set the climate where teachers can be successful”. This statement usually gets a lively discussion going, but it also serves to emphasise that it is subject leaders and their colleagues who have the means to manage improvements in their subject area. The key role of school leaders is to set a climate where teachers can be successful in this task. Senior leaders can, for example, make sure that behaviour-management strategies are working and that other procedures are in place to ensure that teachers are empowered to be successful in their work.
Factors which influence learning effectiveness include teaching effectiveness, learner disposition and the context within which learning takes place. The most important influence on learning effectiveness is the teacher – but the context in which pupils learn, and other factors, can influence how they learn.

Making Action Research Effective
The NCSL project pointed out the importance of trying to quantify the effects of ISV.  In the first phase of their project they found that without measures, reporting became descriptive or anecdotal.  In the second phase, spreadsheets were used to try to quantify the effects and track the changes over time. It was at this time that we shared with project schools a technique we were using which calculated the residuals of selected groups of pupils.

Today, ISV can be investigated much more readily because a greater range of software tools is available to schools. Many schools’ use of data has come in for criticism of late for contributing to excessive workload. This issue arises where schools have insisted on collecting data too frequently and doing too little with it. Like many things, it is a question of balance and clarity of purpose, and it would be an unwise school which didn’t explore the data available to it to see what could be learned about how well the school was doing.


But there is a need for change in the way that schools use data. The 2016 document ‘Eliminating unnecessary workload associated with data management‘ offered good advice and some useful principles to guide schools, and there is still a need for practical guidance about how schools should be using data effectively.

Data is the currency of a diagnostic profession
In medical practice, doctors take readings to gather information which they then analyse, before using their professional judgement to diagnose the condition and decide on an appropriate treatment. There are similarities here with teaching. A teacher will similarly refer to a range of information before providing a diagnosis of how well a pupil is learning and what they need to do to improve. If we expand the scope of this to include teaching effectiveness, how well the school is doing, and the performance of groups of learners, then a range of sensible data techniques will need to be deployed to help schools find the answers.

What are suitable questions for us to explore in our school?
I started this article by posing the question of whether summer-born pupils in the secondary school perform differently to other pupils. This is just one of many questions we could ask. What else currently lies hidden in a school’s data?
In particular, which factors influencing learning may be unique to our school and having a significant effect? Are we so incurious that we will sit on our school’s data and not seek to find out what it can reveal?
To answer these questions we need easy-to-use data tools which have been designed to reduce workload – rather than add to it.

Wherever there is a group of pupils in a school that has had a different experience to other pupils, there is scope for checking whether this experience has had a positive or negative effect upon their learning. We might find there is no effect, and this is evidence in itself. In our work with schools we have asked for examples of where schools have created a research group and tracked their progress and attainment. By comparing the residual for the group with that of other pupils, it is possible to see if there are any ‘group effects’ associated with the subject of the research group.
Schools have reported a wide variety of such research groups.

A healthy situation to arrive at is one where bottom-up research is encouraged and teachers have the tools to explore such factors. This is a big step away from data analysis being something done by the few and passed down to the many, towards all teachers finding out about the impact of their teaching on different groups of learners.

In-School Variation may be “the greatest educational challenge of our time” but a leadership team could start to tackle it by seeking to create a climate of openness, trust and collegiality in their school. They could further encourage CPD run by individuals and departments using the theme of ‘Learning from the Best’.  They could use a bottom-up approach to invite their subject leaders and their colleagues to undertake research-led school improvement using easy-to-use data tools.
The process of finding where variation exists and addressing it more specifically offers a more systemic way to look at school improvement, with the aim of achieving greater consistency of provision, and the possibility of fairer outcomes for pupils.

How can we explore and quantify In-School Variation?
The software tools available to the schools in the NCSL ISV project were simple spreadsheet applications which let teachers compare pupils’ grades in particular subjects with their average grades in all subjects, and look for patterns (residual values) which were of statistical significance. We could do this for groups of pupils with a common experience or intervention (like pupils who attended a revision class) to see if there was a ‘group effect’ of that intervention on the grades attained by pupils. If a school invests time and money in an intervention it needs to know that it is having a measurable impact – else why do it? Use of data tools can provide this information.
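The residual technique is easy to sketch. In the example below, all pupil names, grades and the ‘revision class’ membership are invented for illustration: each pupil’s residual is their grade in the subject minus their average grade across all subjects, and the group effect is the mean residual for pupils who had the intervention.

```python
# A minimal sketch (hypothetical data) of the residual technique:
# subject grade minus the pupil's average grade across all subjects,
# then the mean residual for a chosen research group.
from statistics import mean

# pupil -> (grade in this subject, average grade across all subjects)
grades = {
    "P1": (7, 6.2), "P2": (5, 5.4), "P3": (6, 5.1),
    "P4": (4, 4.8), "P5": (8, 6.9),
}
revision_class = {"P1", "P3", "P5"}  # pupils who attended the intervention

residuals = {p: subj - avg for p, (subj, avg) in grades.items()}
group_effect = mean(residuals[p] for p in revision_class)
rest_effect = mean(r for p, r in residuals.items() if p not in revision_class)
print(f"Group residual: {group_effect:+.2f}, others: {rest_effect:+.2f}")
```

A positive group residual relative to the rest of the cohort suggests the intervention may be associated with better-than-expected grades, though with small groups the difference would still need checking for statistical significance.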

These software tools have continued to be developed and today are very much more sophisticated as we can see in the picture below.

Here we can see, for the chosen subject (English Language), the grades of pupils who take this subject compared to their average grades, arranged in rank order from pupils with the highest average grades to those with the lowest. A pupil is highlighted and we can compare their grade in this subject to their average grade (their position on the green line). We can also explore how well groups of pupils perform in this subject, including research groups selected from the right-hand column. This shows six pupils highlighted and a high residual of 1.37 grades for this group’s performance. The second column shows the auto-generated commentary for the relative performance of the subject itself. We can use tools like this to identify effects early in a key stage based on estimated grades, and follow their progress through to GCSE results.

In an ideal situation, subject leaders and their colleagues would use tools like these to explore the effectiveness of the subject, examine group effects, and design interventions aimed at raising the attainment of any under-performing groups of pupils. This will help to identify effects which might otherwise remain unseen, and the ‘local contextual circumstances’ which influence learning in a particular school. An effective use of data tools will support an action-research approach to school improvement, and contribute to school self-evaluation as a more quantitative, less descriptive process.

Exploring differences in subject performance
In every school, it is subject leaders who are the ‘agents of change’. Whilst senior school leaders can set the climate in a school for teachers to be successful, it is subject leaders working with colleagues in their department who will bring about improvements in how pupils learn in their subject.
This starts with quantifying how well the subject is doing relative to other subjects in the school. We can start by looking at school ‘subject residuals’ – the differences in the average performance of pupils in each subject in the school. However, for a fair comparison, we need to take account of the ‘national subject residual’, i.e. the average performance of all pupils nationally who took the subject at GCSE. By subtracting the national subject residual from the school residual we are then able to compare ‘easy’ subjects with ‘hard’ subjects, as indicated by differences in overall pass rates at national level.
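The adjustment described above is a single subtraction per subject. A minimal sketch, with made-up residual figures for three illustrative subjects, assuming both school and national residuals are on the same grade scale:

```python
# A minimal sketch (fabricated figures) of adjusting school subject
# residuals by national subject residuals, so 'hard' and 'easy'
# subjects can be compared fairly.
school_residuals = {"French": -0.45, "Media": 0.30, "Maths": 0.05}
national_residuals = {"French": -0.35, "Media": 0.25, "Maths": 0.00}

# adjusted residual = school residual - national residual for that subject
adjusted = {s: school_residuals[s] - national_residuals[s] for s in school_residuals}

for subject, value in sorted(adjusted.items(), key=lambda kv: kv[1]):
    print(f"{subject:>7}: {value:+.2f}")
```

In this illustration French looks weak on the raw school residual, but much of that gap disappears once the national difficulty of the subject is taken into account.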
A more direct way is to look at the Subject Value Added (SVA) for each subject to compare each subject with the subject at a national level. Software tools like 4Matrix can calculate SVA and compare each subject in a school with the subject nationally. Duncan Baldwin of ASCL highlights the importance of this school measure because it compares “Apples with apples and pears with pears”.

Collaboration can provide timely comparative data
National subject residual data is not always available early enough after Results Day to perform this comparison, but a useful proxy can be generated within the software itself by averaging the anonymised data from all users of the software using a collaborative accumulation mechanism called ‘Share and Compare‘. The proxy ‘national subject residual’ values can then be subtracted from the school residuals to allow subject leaders to see how well their subject compares with the subject at national level.
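In the spirit of the ‘Share and Compare‘ mechanism described above, the proxy value is just the mean of the residuals submitted by participating schools. A minimal sketch, with entirely fabricated figures and subject names:

```python
# A minimal sketch (fabricated figures) of forming a proxy 'national
# subject residual' by averaging anonymised residuals from many schools,
# then subtracting it from one school's own residual.
from statistics import mean

# One subject's residuals as submitted anonymously by participating schools
submitted = {"GCSE French": [-0.40, -0.28, -0.37, -0.31]}

proxy_national = {subj: mean(vals) for subj, vals in submitted.items()}

school_residual = -0.45  # this school's residual for GCSE French
adjusted = school_residual - proxy_national["GCSE French"]
print(f"Proxy national: {proxy_national['GCSE French']:+.2f}, adjusted: {adjusted:+.2f}")
```

The quality of the proxy depends on how many schools contribute and how representative they are of the national cohort, so it serves as a timely stand-in rather than a replacement for the published national figures.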

An important ISV question for SLT and subject leaders is whether subjects are performing as well as the same subject nationally. This measure – Subject Value Added (SVA) – can be calculated from the results of all schools taking each subject. If there is a significant difference between a subject in the school and the subject nationally, then support will be needed to improve the impact of teaching in that subject for groups of learners who may be under-performing.
In each subject we can also use data tools to find out whether there are groups of pupils which perform better or worse than other pupils in the subject. Interventions can then be aimed at these groups of pupils.
We can establish the performance profile for each subject by analysing summer examination results – the most important point in the year for providing standardised measures. This will tell us which groups of pupils in each subject are performing differently to other pupils, and which interventions are having the greatest impact. We can then track these categories for the new Year 11 cohort through the year, using estimated grades to check how our interventions are affecting pupil progress.


Learning from the Best
In every school there will be particular teachers whose teaching is consistently effective in giving rise to positive outcomes for the pupils in their classes. This may be due to an experience crafted over time of using techniques which are particularly effective in that subject, or how the teacher organises the teaching, or how they interact with pupils, or their expectations of pupils, or the nature of the support which they provide for learners.
Some of the most effective CPD can evolve from how a school makes use of the expertise it already has in-house. Good CPD doesn’t always need to involve an external speaker. The main tasks are organising the session and overcoming people’s natural modesty about talking about their own practice. Hearing from an experienced practitioner about how they organise and deliver teaching in their subject will usually be a valuable treat for teachers of all subjects. And making this a regular slot could serve to normalise the pattern of sharing techniques that work across a teaching community.

Diagnostic data tools can help schools to investigate In-School Variation and quantify differences in the performance of subjects and of different groups of pupils. We should expect a switched-on, self-evaluating school to be using such tools to support an action-research approach to school improvement, within a climate designed to make it possible for teachers to be successful.

Questions for reflection:

How suitable is In-School Variation as a focus for raising school standards?
How important is it to find out what the data we have tells us?
To what extent do subject leaders exercise a QA role for their subject?
How well-developed are the diagnostic data tools that teachers could be using?

What are the common obstacles to making smarter use of performance data?
Do school leaders know how well each subject performs compared with subjects nationally?
Does the school know which groups need support to improve their performance?
Does the school’s development plan include a strategy for ‘closing the gaps’?



Mike Bostock is an educationalist specialising in the use of performance data tools for school improvement. Mike’s company produces the 4Matrix school performance system. Mike is currently working with the GwE School Effectiveness and Improvement Service in North Wales with a research project to investigate In-School Variation.
