Our Perspectives - Dr Linda Bendikson

The UACEL Perspectives are opinion pieces that highlight current and topical educational leadership matters. Linda Bendikson writes candidly about her views and offers a fresh perspective on today's educational leadership challenges. Occasionally, other faculty and team members contribute. This newsletter is published four times a year.

Vision without focus is worthless – empty words of aspiration

It is term two. Your school improvement (annual) plans are completed and have been handed in to the Ministry. But do your 'plans' really help you to focus? Do you even refer to them? I suspect that for most people they still serve only as an artefact of compliance.

So do plans really help?  Should we care?  I argue that they do help and we should care.  Plans don’t mean you avoid problems, but they tend to help you keep focus while you deal with the inevitable distractions and problems that occur (Louis & Miles, 1991).  A good plan helps schools to narrow their focus – and this narrowing down is critical if you wish to improve.   In an ideal world, plans should articulate a few narrow, achievable steps towards a school’s ultimate vision – but few do.

It is only after several years of working closely with schools and groups of schools to get improvement in outcomes both in Australia and New Zealand that I have come to realise that we have made a lot of incorrect assumptions about leaders’ capability in planning for improvement – some would call it ‘evaluative capability’. In short, we at UACEL adhere to a school of thought that, as the old adage suggests, if you can’t measure it, you can’t manage it.  We also believe that you can measure anything if you have the will to do so –  and that anyone can learn the skill of doing this well enough to help them improve outcomes.  I know some of you will be preparing your email response as you read these words, suggesting that ‘the pig doesn’t get fatter by constantly weighing it’.  I agree that there is a culture of collecting data and not using it well.  But I don’t agree that you can improve outcomes for students without measuring them – and measuring them in short cycles such as every six weeks or a term – not at the end of the year. This is the essence of improvement science  (Bryk, Gomez, Grunow, & LeMahieu, 2015; Timperley, Kaser, & Halbert, 2014) and, in my view, we (collectively) have paid too little attention to it.

For some reason, educationalists often hold to the theory that education is not a business, nor anything like one, and thus that measuring progress is anathema. I argue education is very much like a business: instead of measuring profits, you measure the value added to student learning. Just like a business, you need to be very purposeful about that focus or you will not make that 'profit' or add the value that you would like. So while weighing the pig does not make it fatter, you need to know whether it is getting fatter – and you cannot wait until the end of the year to find out. We already know that some teachers add a lot more value than others, and some schools add a lot more value than others – so it does matter what class you are in and what school you are in. It is therefore important for leaders to know whether their students are improving and whether their school is getting better at improving results – and to know it throughout the year, and quite precisely. The end of the year is too late.

But how do you do this measuring without creating an unreasonable burden and, often, without standardised tools that allow for interim measures? At the system level and at the school level, in my view, we should only measure what we really value and will use. Further, less is better. So we need to narrow our focus and measure progress on small, containable 'problems' (gaps between the current and desired state), preferably with one core focus at a time. There is so much data floating around in school systems, but so little of it is put to any use. We would all be better off if we collected less, but with more purpose.

It has been exciting to see some schools do this successfully. Not only have they learnt about their school's tightly defined initial problem area (e.g., lateness to class; ability to write paragraphs; ability to use wide and rich vocabulary), but more importantly they are learning the science of improvement and will be able to use it independently of our help in the future. In other words, we haven't just taught them how to solve the lateness problem or the vocabulary problem; we have taught them how to improve and how to know they are improving.

But some schools have struggled to see the point of narrowing the focus and gaining improvement in one defined area before moving on to another; or they have struggled to understand how to measure progress on tightly defined problems; or they have struggled to actually do anything different from what they have always done. I have tried to analyse what is getting in the way of their learning, and the answers have surprised me. One obvious problem is the predilection of educationalists to see any reference to measurement as 'weighing the pig' or as 'compliance activity'. Some leaders, then, have a mind-set that prevents them from systematically engaging in focused improvement efforts. Other leaders have the right mind-set but lack the skill set. In short, they:

  1. struggle with how to measure the problem. They cannot articulate the key outcomes they want and translate them into small, measurable, graduated bites; instead, they work with large global outcome statements (which are unmeasurable as they stand) and jump to solutions they hope will work.
  2. struggle to organise data once they have it, and thus to make any meaning of it. Examples are schools that show you 20 graphs but cannot say what they show or why anyone should care, or schools that hold a lot of raw data that is neither organised nor analysed.
  3. don't know how to create tables and graphs that help them, along with their teachers and learners, to see both the problem and the progress being made, or do not know how different kinds of graphs tell different stories about the data.
  4. don't know how to enlist the help of all teachers in the school or to deal with the resistance of some.

The first three of these points come under what I call 'evaluative capability' – a capability that is central to improvement or, if you like, to the 'change management' that the fourth point alludes to. So, what does this have to do with school improvement plans, and do plans really help?

We improve because we are motivated towards some better state, and we are motivated by the progress we make. Measuring progress is therefore motivating, but you can't measure and improve on big global outcomes such as 'Improve Literacy' or 'Improve Behaviour' in the short term and thereby get the motivation to do more. You can measure and see improvement when you concentrate on a target associated with those big outcomes, such as 'improve knowledge and use of rich vocabulary' or 'decrease the number of students who come late to school'. And when you create achievable but challenging targets in these tightly defined areas and check progress on them (which is not time-consuming because they are so tightly defined), teachers and students themselves are motivated to improve more and more. Carried out in this way, improvement can be addictive. Further, measuring these small intermediate outcomes contributes to your big global goals, such as 'Improve Behaviour' or 'Improve Literacy' or 'Improve NCEA results' – and you know you are making progress towards those goals.

And what does improvement have to do with school improvement plans? Your annual plan should articulate your priority ‘intermediate outcomes’ – and be where you record the real things you need to do to ensure a shift in those outcomes – that is all.  It should guide you during the year but be a living document that you vary and add to as the results come in term by term and you learn from each cycle of improvement.

Our research on plans has looked at their quality, but we cannot really comment on how useful they have been in practice. What we can say is that most schools (whose plans we reviewed) do have clear goals and targets – but then, they have to nowadays, so this could be no more than an act of compliance. Similarly, most have some baseline data to justify their focus – but again, schools are expected to, and in New Zealand would possibly have their plans 'rejected' by the Ministry of Education if they did not include it.

Still, I feel we can draw some conclusions from the plans we analyse. First of all, the number of targets within these plans is, in my view, very high: in the latest batch of plans we reviewed, the median number was seven. So many targets suggest that leaders are probably not focused and clear on the outcomes they are seeking. The sheer number suggests a dispersal of energy, and makes it unlikely that each of these outcomes is being monitored during the year and fed back to students and teachers – it would be virtually impossible. Yet this feedback is critical if targets are to be useful, because everyone needs to be able to answer the questions: where are we going, how are we going, and where to next? (Hattie & Timperley, 2007)

In short, vision is great. But vision without focus, and without regular feedback loops on how you are going, is just empty words. Effective plans have few targets, allowing everyone in the school to be clear about their priorities. Once one problem area is mastered, the school can move on to another – but trying to focus on many targets at once, often across different curriculum areas, is ineffective: just empty 'vision words' on paper.

Effort without focus is unlikely to produce results. None of this is new, but people find it so hard to do. Leadership "is the result of the 'activities' that one does" (Fullan & Newton, 1988, p. 190). The ability of a leader to understand the cycle of improvement and to narrow their focus, and that of their staff, onto well-defined 'problems' of the learners is a critical skill – but it also takes a mind-set.
