An Improvement-by-Design challenge is very like a Sudoku puzzle. The rules are deceptively simple, but solving the puzzle is not.

For those who have never tried a Sudoku puzzle, the objective is to fill in all the empty boxes with a number between 1 and 9. The constraint is that each row, column and 3×3 box (outlined in bold) must include all the numbers from 1 to 9, i.e. no duplicates.

What you will find when you try is that, at each point in the puzzle-solving process, there is more than one choice for most empty cells.

The trick is to find the empty cells that have only one option and fill those in. That changes the puzzle and makes it ‘easier’.

And if you keep following this strategy, and so long as you do not make any mistakes, you will solve the puzzle.  It just takes concentration, attention to detail, and discipline.
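The ‘cells with only one option’ strategy can be sketched in a few lines of code. This is a minimal illustration, not a full solver: the grid is represented as a 9×9 list of lists, with 0 marking an empty cell.

```python
def candidates(grid, r, c):
    """Numbers that could legally go in empty cell (r, c)."""
    used = set(grid[r])                            # the row
    used |= {grid[i][c] for i in range(9)}         # the column
    br, bc = 3 * (r // 3), 3 * (c // 3)            # top-left of the 3x3 box
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return {n for n in range(1, 10)} - used

def fill_singles(grid):
    """Repeatedly fill every empty cell that has exactly one candidate."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    opts = candidates(grid, r, c)
                    if len(opts) == 1:
                        grid[r][c] = opts.pop()
                        progress = True
    return grid
```

Each pass changes the puzzle and makes the remaining cells ‘easier’, exactly as described above. Easy puzzles yield completely to this one rule; harder ones need further techniques, or a guess-and-backtrack step.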

In the example above, the top-right cell in the left-box on the middle-row can only hold a 6; and the top-middle cell in the middle-box on the bottom-row must be a 3.

So we can see already there are three ways ‘into’ the solution – put the 6 in and see where that takes us; put the 3 in and see where that takes us; or put both in and see where that takes us.

The final solution will be the same – so there are multiple paths from where we are to our objective.  Some may involve more mental work than others but all will involve completing the same number of empty cells.

What is also clear is that the order in which we complete the empty cells is not arbitrary. Usually the boxes and rows with the fewest empty cells get completed earlier, and those with the most empty cells at the start get completed later.

And even if the final configuration is the same, if we start with a different set of missing cells the solution path will be different. It may be very easy, very hard or even impossible without some ‘guessing’ and hoping for the best.


Exactly the same is true of improvement-by-design challenges.

The rules of flow science are rather simple; but when we have a system of parallel streams (the rows) interacting with parallel stages (the columns), and when we have safety, delivery, and economy constraints to comply with at every part of the system … then finding an ‘improvement plan’ that will deliver our objective is a tough challenge.

But it is possible with concentration, attention-to-detail and discipline; and that requires some flow science training and some improvement science practice.

OK – I am off for lunch and then maybe indulge in a Sudoku puzzle or two – just for fun – and then maybe design an improvement plan or two – just for fun!

 

<Lesley> Hi Bob, how are you today?

<Bob> I’m OK thanks Lesley. Having a bit of a break from the daily grind.

<Lesley> Oh! I am sorry, I had no idea you were on holiday. I will call when you are back at work.

<Bob> No need Lesley. Our chats are always a welcome opportunity to reflect and learn.

<Lesley> OK, if you are sure.  The top niggle on my list at the moment is that I do not feel my organisation values what I do.

<Bob> OK. Have you done the right-to-left map from that top niggle?

<Lesley> Yes. The most recent event is that I was asked to justify my improvement role.

<Bob> OK, and before that?

<Lesley> There have been some changes in the senior management team.

<Bob> OK. This sounds like the ‘New Brush Sweeps Clean’ effect.

<Lesley> I have heard that phrase before. What does it mean in this context?

<Bob> Senior management changes are very disruptive events, and the more senior the change and the more visible the role, the more disruptive it is.  Let us call it a form of ‘Disruptive Innovation’.  The trigger for the change is important.  One trigger might be a well-respected and effective leader retiring or moving to an even more senior role.  This leaves a leadership gap and is an opportunity for someone to grow and develop.  Another might be a less effective and less respected leader moving on and leaving a trail of rather visible failures. It is the latter that tends to be associated with the New Broom effect.

<Lesley> How is that?

<Bob> Well, put yourself in the shoes of the New Leader who has inherited a ‘Trail of Disappointment’ – you need to establish your authority and expectations quickly and decisively. Ambiguity and lack of clarity will only contribute to further disappointment.  So you have to ask everyone to justify what they do.  And if they cannot then you need to know that.  And if they can then you need to decide if what they do is aligned with your purpose.  This is the New Brush.

<Lesley> So what if I can justify what I do and that does not fit with the ‘New Leader’s Plan’?

<Bob> If what you do is aligned to your Life Purpose but not with the New Brush then you have to choose.  And experience shows that the road to long-term personal happiness is the one that aligns with your individual purpose.  And often it is just a matter of timing. The New Brush is indiscriminate and impatient – anything that does not fit neatly into the New Plan has to go.

<Lesley> OK, my purpose is to improve the safety, flow, quality and productivity of healthcare processes – for the benefit of all. That is not negotiable. It is what fires my passion and fuels my day.  So does it really matter where or how I do that?

<Bob> Not really.  You do need to be mindful of the pragmatic constraints though … your life circumstances.  There are many paths to your Purpose, so it is wise to choose one that is low enough risk to both you and those you love.

<Lesley> Ah! Now I see why you say that timing is important. You need to prepare to be able to make the decision.  You do not want to be caught by surprise and off balance.

<Bob> Yes. That is why as an ISP you always start with your own Purpose and your own Right-to-Left Map.  Then you will know what to prepare and in what order so that you have the maximum number of options when you have to make a choice.  Sometimes the Universe will create the trigger and sometimes you have to initiate it yourself.

<Lesley> So this is just another facet of Improvement Science?

<Bob> Yes.

Fires are destructive, indifferent, and they can grow and spread very fast.

The picture is of the Buncefield explosion and conflagration that occurred on 11th December 2005 near Hemel Hempstead in the UK.  The root cause was a faulty switch that failed to prevent tank number 912 from being overfilled. This resulted in an initial 300-gallon petrol spill which created the perfect conditions for an air-fuel explosion.  The explosion was triggered by a spark and devastated the facility. Over 2000 local residents needed to be evacuated and the massive fuel fire took days to bring under control. The financial cost of the accident has been estimated to run into tens of millions of pounds.

The Great Fire of London in September 1666 led directly to the adoption of new building standards – notably brick and stone instead of wood because they are more effective barriers to fire.

A common design to limit the spread of a fire is called a firewall.

And we use the same principle in computer systems to limit the spread of damage when a computer system goes out of control.


Money is the fuel that keeps the wheels of healthcare systems turning.  And healthcare is an expensive business so every drop of cash-fuel is precious.  Healthcare is also a risky business – from both a professional and a financial perspective. Mistakes can quickly lead to loss of livelihood, expensive recovery plans and huge compensation claims. The social and financial equivalent of a conflagration.

Financial fires spread just like real ones – quickly. So it makes good sense not to have all the cash-fuel in one big pot.  It makes sense to distribute it to smaller pots – in each department – and to distribute the cash-fuel intermittently. These cash-fuel silos are separated by robust financial firewalls and they are called Budgets.

The social sparks that ignite financial fires are called ‘Niggles’.  They are very numerous but we have effective mechanisms for containing them. The problem happens when multiple sparks happen at the same time and place and together create a small chain reaction. Then we get a complaint. A ‘Not Again’.  And we are required to spend some of our precious cash-fuel investigating and apologizing.  We do not deal with the root cause, we just scrape the burned toast.

And then one day the chain reaction goes a bit further and we get a ‘Near Miss’.  That has a different reporting mechanism so it stimulates a bigger investigation and it usually culminates in some recommendations that involve more expensive checking, documenting and auditing of the checking and documentation.  The root causes, the Niggles, go untreated – because there are too many of them.

But this check-and-correct reaction is also expensive and we need even more cash-fuel to keep the organizational engine running – but we do not have any more. Our budgets are capped. So we start cutting corners. A bit here and a bit there. And that increases the risk of more Niggles, Not Agains, and Near Misses.

Then the ‘Never Event’ happens … a Safety and Quality catastrophe that triggers the financial conflagration and toasts the whole organization.


So although our financial firewalls, the Budgets, are partially effective they also have downsides:

1. Paradoxically they can create the perfect condition for a financial conflagration when too small a budget leads to corner-cutting on safety.

2. They lead to ‘off-loading’ which means that too-expensive-to-solve problems are chucked over the financial firewalls into the next department.  The cost is felt downstream of the source – in a different department – and is often much larger. The sparks are blown downwind.

For example: a waiting list management department is under financial pressure and is running short-staffed because a recruitment freeze has been imposed. The overburdening of the remaining staff leads to errors in booking patients for operations. The knock-on effect is that patients are cancelled on the day and the allocated operating theatre time is wasted.  The additional cost of wasted theatre time is orders of magnitude greater than the cost-saving achieved in the upstream stage.  The result is a lower quality service, a greater cost to the whole system, and the risk that safety corners will be cut leading to a Near Miss or a Never Event.

The nature of real systems is that small perturbations can be rapidly amplified by a ‘tight’ financial design to create a very large and expensive perturbation called a ‘catastrophe’.  A silo-based financial budget design with a cost-improvement thumbscrew feature increases the likelihood of this universally unwanted outcome.

So if we cannot use one big fuel tank or multiple, smaller, independent fuel tanks then what is the solution?

We want to ensure smooth responsiveness of our healthcare engine, we want healthcare  cash-fuel-efficiency and we want low levels of toxic emissions (i.e. complaints) at the same time. How can we do that?

Fuel-injection.

Electronic Fuel Injection (EFI) designs have now replaced the old-fashioned, inefficient, high-emission carburettor-based engines of the 1970s and 1980s.

The safer, more effective and more efficient cash-flow design is to inject the cash-fuel where and when it is needed and in just the right amount.

And to do that we need to have a robust, reliable and rapid feedback system that controls the cash-injectors.

But we do not have such a feedback system in healthcare so that is where we need to start our design work.
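To make the feedback idea concrete, here is a deliberately toy sketch of such a loop. All names, numbers and units are invented for illustration; this is not a description of any real healthcare finance mechanism. Each cycle, a stage reports the shortfall between what it needs and what it has, and the injector meters out a proportion of that shortfall, never more than the central reserve holds:

```python
def inject(shortfall, reserve, gain=0.5):
    """Proportional feedback: meter out a fraction of the measured
    shortfall, limited by what the reserve holds. Units are arbitrary."""
    amount = min(gain * max(shortfall, 0.0), reserve)
    return amount, reserve - amount

# One illustrative cycle: a stage reports a shortfall of 40 units
# against a central reserve of 100 units.
amount, reserve = inject(40.0, 100.0)
```

The point is not the arithmetic but the structure: the injection is driven by a measured signal (the Data Flow) rather than by a fixed annual allocation.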

Designing an automated cash-injection system requires understanding how the Seven Flows of any system work together, and the two critical flows are Data Flow and Cash Flow.

And that is possible.

The image of a tornado is what many associate with improvement.  An unpredictable, powerful force that sweeps away the wood in its path. It certainly transforms – but it leaves a trail of destruction and disappointment in its wake. It does not discriminate between the green wood and the dead wood.

A whirlwind is created by a combination of powerful forces – but the trigger that unleashes the beast is innocuous. The classic ‘butterfly wing effect’. A spark that creates an inferno.

This is not the safest way to achieve significant and sustained improvement. A transformation tornado is a blunt and destructive tool.  All it can hope to achieve is to clear the way for something more elegant. Improvement Science.

We need to build the capability for improvement progressively, and to build it to be effective, efficient, strong, reliable, and resilient. In a word – trustworthy. We need a durable structure.

But what sort of structure?  A tower from whose lofty penthouse we can peer far into the distance?  A bridge between the past and the future? A house with foundations, walls and a roof? Do these man-made edifices meet our criteria?  Well partly.

Let us see what nature suggests. What are the naturally durable designs?

Suppose we have a bag of dry sand – an unstructured mix of individual grains – and that each grain represents an improvement idea.

Suppose we have a specific issue that we would like to improve – a Niggle.

Let us try dropping the Improvement Sand on the Niggle – not in a great big reactive dollop – but in a proactive, exploratory bit-at-a-time way.  What shape emerges?

What we see is illustrated by the hourglass.  We get a pyramid.

The shape of the pyramid is determined by two factors: how sticky the sand is and how fast we pour it.

What we want is a tall pyramid – one whose sturdy pinnacle gives us the capability to see far and to do much.

The stickier the sand the steeper the sides of our pyramid.  The faster we pour the quicker we get the height we need. But there is a limit. If we pour too quickly we create instability – we create avalanches.

So we need to give the sand time to settle into its stable configuration; time for it to trickle to where it feels most comfortable.

And, in translating this metaphor to building improvement capability in a system, we could suggest that the ‘stickiness’ factor is how well ideas hang together, how well individuals get on with each other and how well they share ideas and learning. How cohesive our people are.  Distrust and conflict represent repulsive forces.  Repulsion creates a large, wide, flat structure – stable maybe, but incapable of vision and improvement. That is not what we need.

So when developing a strategy for building improvement capability we build small pyramids where the niggles point. Over time they will merge and bigger pyramids will appear and merge – until we achieve the height we need. Then we have a stable and capable improvement structure. One that we can use and that we can trust.

Just from sprinkling Improvement Science Sand on our Niggles.

[Dring Dring] The telephone soundbite announced the start of the mentoring session.

<Bob> Good morning Leslie. How are you today?

<Leslie> I have been better.

<Bob> You seem upset. Do you want to talk about it?

<Leslie> Yes, please. The trigger for my unhappiness is that last week I received an email demanding that I justify the time I spend doing improvement work, and a summons to a meeting to ‘discuss some issues that have been raised’.

<Bob> OK. I take it that you do not know what or who has triggered this inquiry.

<Leslie> You are correct. My working hypothesis is that it is the end of the financial year and budget holders are looking for opportunities to do some pruning – to meet their cost improvement program targets!

<Bob> So what is the problem? You have shared the output of your work. You have demonstrated significant improvements in safety, flow, quality and productivity and you have described both them and the methodology clearly.

<Leslie> I know. That is why I was so upset to get this email. It is as if everything that we have achieved has been ignored. It is almost as if it is resented.

<Bob> Ah! You may well be correct.  This is the nature of paradigm shifts. Those who have the greatest vested interest in the current paradigm get spooked when they feel it start to wobble. Each time you share the outcome of your improvement work you create emotional shock-waves. The effects are cumulative and eventually there will be a ‘crisis of confidence’ in those who feel most challenged by the changes that you are demonstrating are possible.  The whole process is well described in Thomas Kuhn’s The Structure of Scientific Revolutions. That is not a book for an impatient reader though – for those who prefer something lighter I recommend “Our Iceberg is Melting” by John Kotter.

<Leslie> Thanks Bob. I will get a copy of Kotter’s book – that sounds more my cup of tea. Will that tell me what to do?

<Bob> It is a parable – a fictional story of a colony of penguins who discover that their iceberg is melting and are suddenly faced with a new and urgent potential risk of not surviving the storms of the approaching winter. It is not a factual account of a real crisis or a step-by-step recipe book for solving all problems  – it describes some effective engagement strategies in general terms.

<Leslie> I will still read it. What I need is something more specific to my actual context.

<Bob> This is an improvement-by-design challenge. The only difference from the challenges you have done already is that this time the outcome you are looking for is a smooth transition from the ‘old’ paradigm to the ‘new’ one.  Kuhn showed that this transition will not start to happen until there is a new paradigm because individuals choose to take the step from the old to the new and they do not all do that at the same time.  Your work is demonstrating that there is a new paradigm. Some will love that message, some will hate it. Rather like Marmite.

<Leslie> Yes, that makes sense.  But how do I deal with an unseen enemy who is stirring up trouble behind my back?

<Bob> Are you referring to those who have ‘raised some issues’?

<Leslie> Yes.

<Bob> They will be the ones who have most invested in the current status quo and they will not be in senior enough positions to challenge you directly so they are going around spooking the inner Chimps of those who can. This is expected behaviour when the relentlessly changing reality starts to wobble the concrete current paradigm.

<Leslie> Yes! That is exactly how it feels.

<Bob> The danger lurking here is that your inner Chimp is getting spooked too and is conjuring up Gremlins and Goblins from the Computer! Left to itself your inner Chimp will steer you straight into the Victim Vortex.  So you need to take it for a long walk, let it scream and wave its hairy arms about, listen to it, and give it lots of bananas to calm it down. Then put your calmed-down Chimp into its cage and your ‘paradigm transition design’ into the Computer. Only then will you be ready for the ‘so-justify-yourself’ meeting.  At the meeting your Chimp will be out of its cage like a shot, interpreting everything as a threat. It will disable you and go straight to the Computer for what to do – and it will read your design and follow the ‘wise’ instructions that you have put in there.

<Leslie> Wow! I see how you are using the Chimp Paradox metaphor to describe an incredibly complex emotional process in really simple language. My inner Chimp is feeling happier already!

<Bob> And remember that you are all in the same race. Your collective goal is to cross the finish line as quickly as possible with the least chaos, pain and cost.  You are not in a battle – that is lose-lose inner Chimp thinking.  The only message that your interrogators must get from you is ‘Win-win is possible and here is how we can do it’. That will be the best way to soothe their inner Chimps – the ones who fear that you are going to sink their boat by rocking it.

<Leslie> That is really helpful. Thank you again Bob. My inner Chimp is now snoring gently in its cage and while it is asleep I have some Improvement-by-Design work to do and then some Computer programming.

“Primum non nocere” is Latin for “First do no harm”.

It is a warning mantra that has been repeated by doctors for thousands of years, and for good reason.

Doctors can be bad for your health.

I am not referring to the rare case where the doctor deliberately causes harm.  Such people are criminals and deserve to be in prison.

I am referring to the much more frequent situation where the doctor has no intention to cause harm – but harm is the outcome anyway.

Very often the risk of harm is unavoidable. Healthcare is a high risk business. Seriously unwell patients can be very unstable and very unpredictable.  Heroic efforts to do whatever can be done can result in unintended harm and we have to accept those risks. It is the nature of the work.  Much of the judgement in healthcare is balancing benefit with risk on a patient by patient basis. It is not an exact science. It requires wisdom, judgement, training and experience. It feels more like an art than a science.

The focus of this essay is not the above. It is on unintentionally causing avoidable harm.

Or rather unintentionally not preventing avoidable harm which is not quite the same thing.

Safety means prevention of avoidable harm. A safe system is one that does that. There is no evidence of harm to collect. A safe system does not cause harm. Never events never happen.

Safe systems are designed to be safe.  The root causes of harm are deliberately designed out one way or another.  But it is not always easy because to do that we need to understand the cause-and-effect relationships that lead to unintended harm.  Very often we do not.


In 1847 a doctor called Ignaz Semmelweis made a very important discovery. He discovered that if the doctors and medical students washed their hands in disinfectant when they entered the labour ward, then the number of mothers and babies who died from infection was reduced.

And the number dropped a lot.

It fell from an annual average of 10% to less than 2%!  In really bad months the rate was 30%.

The chart below shows the actual data plotted as a time-series chart. The yellow flag in 1848 is just after Semmelweis enforced a standard practice of hand-washing.

Vienna_Maternal_Mortality_1785-1848

Semmelweis did not know the mechanism though. This was not a carefully designed randomised controlled trial (RCT). He was desperate. And he was desperate because this horrendous waste of young lives was only happening on the doctors’ ward.  On the nurses’ ward, which was just across the corridor, the maternal mortality was less than 2%.

The hospital authorities explained it away as ‘bad air’ from outside. That was the prevailing belief at the time. Unavoidable. A risk that had to be just accepted.

Semmelweis could not do a randomized controlled trial because they were not invented until a century later.

And Semmelweis suspected that the difference between the mortality on the nurses’ and the doctors’ wards was something to do with the Mortuary. Only the doctors performed the post-mortems, and the practice of teaching anatomy to medical students using post-mortem dissection was an innovation pioneered in Vienna in 1823 (the first yellow flag on the chart above). But Semmelweis did not have this data in 1847.  He collated it later and did not publish it until 1861.

What Semmelweis demonstrated was that the unintended and avoidable deaths were caused by ignorance of the mechanism by which microorganisms cause disease. We know that now. He did not.

It would be another 20 years before Louis Pasteur demonstrated the mechanism using the famous experiment with the swan-neck flask. Pasteur did not discover microorganisms; he proved that they did not appear spontaneously in decaying matter, as was believed. He showed that, after killing the bugs by boiling, the broth in the flask stayed fresh even though it was exposed to the air. That was a big shock but it was a simple and repeatable experiment. He had a mechanism. He was believed. Germ theory was born. A Scottish surgeon called Joseph Lister read of this discovery and surgical antisepsis was born.

Semmelweis suspected that some ‘agent’ may have been unwittingly transported from the dead bodies to the live mothers and babies on the hands of the doctors.  It was a deeply shocking suggestion that the doctors were unwittingly killing their patients.

The other doctors did not take this suggestion well. Not well at all. They went into denial. They discounted the message and they discharged the messenger. Semmelweis never worked in Vienna again. He went back to Hungary and repeated the experiment. It worked.


Even today the message that healthcare practitioners can unwittingly bring avoidable harm to their patients is disturbing. We still seek solace in denial.

Hospital acquired infections (HAI) are a common cause of harm and many are avoidable using simple, cheap and effective measures such as hand-washing.

The harm does not come from what we do. It comes from what we do not do. It happens when we omit to follow the simple safety measures that have been proven to work. Scientifically. Statistically Significantly. Understood and avoidable errors of omission.


So how is this “statistically significant scientific proof” acquired?

By doing experiments. Just like the one Ignaz Semmelweis conducted. But the improvement he showed was so large that it did not need statistical analysis to validate it.  And anyway such analysis tools were not available in 1847. If they had been he might have had more success influencing his peers. And if he had achieved that goal then thousands, if not millions, of deaths from hospital acquired infections may have been prevented.  With the clarity of hindsight we now know this harm was avoidable.

No. The problem we have now is because the improvement that follows a single intervention is not very large. And when the causal mechanisms are multi-factorial we need more than one intervention to achieve the improvement we want. The big reduction in avoidable harm. How do we do that scientifically and safely?


About 20% of hospital acquired infections occur after surgical operations.

We have learned much since 1847 and we have designed much safer surgical systems and processes. Joseph Lister ushered in the era of safe surgery, and much has happened since.

We routinely use carefully designed, ultra-clean operating theatres, sterilized surgical instruments, gloves and gowns, and aseptic techniques – all to reduce bacterial contamination from outside.

But surgical site infections (SSIs) are still commonplace. Studies show that 5% of patients on average will suffer this complication. Some procedures carry a much higher risk than others, despite the precautions we take.  And many surgeons assume that this risk must just be accepted.

Others have tried to understand the mechanism of SSI and their research shows that the source of the infections is the patients themselves. We all carry a ‘bacterial flora’ and normally that is no problem. Our natural defense – our skin – is enough.  But when that biological barrier is deliberately breached during a surgical operation then we have a problem. The bugs get in and cause mischief. They cause surgical site infections.

So we have done more research to test interventions to prevent this harm. Each intervention has been subject to well-designed, carefully-conducted, statistically-valid and very expensive randomized controlled trials.  And the results are often equivocal. So we repeat the trials – bigger, better controlled trials. But the effects of the individual interventions are small and they easily get lost in the noise. So we pool the results of many RCTs in what is called a ‘meta-analysis’ and the answer from that is very often ‘not proven’ – either way.  So individual surgeons are left to make the judgement call and not surprisingly there is wide variation in practice.  So is this the best that medical science can do?

No. There is another way. What we can do is pool all the learning from all the trials and design a multi-faceted intervention. A bundle of care. The idea of a bundle is that the separate small effects will add or even synergise to create one big effect.  We are not so much interested in the mechanism as the outcome. Just like Ignaz Semmelweis.

And we can now do something else. We can test our bundle of care using statistically robust tools that do not require an RCT.  They are just as statistically valid as an RCT, but a different design.

And the appropriate tool for this is to measure the time interval between the adverse events, and then to plot this continuous metric as a time-series chart.

But we must be disciplined. First we must establish the baseline average interval and then we introduce our bundle and then we just keep measuring the intervals.

If our bundle works then the interval between the adverse events gets longer – and we can easily prove that using our time-series chart. The longer the interval the more ‘proof’ we have.  In fact we can even predict how long we need to observe to prove that ‘no events’ is a statistically significant improvement. That is an elegant and efficient design.
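One simple way to set the ‘surprisingly long interval’ threshold is sketched below, under the assumption that baseline inter-event times behave roughly like an exponential (memoryless) process; real rare-event chart rules may refine this. If the baseline mean interval is m, an interval longer than t occurs by chance with probability exp(−t/m), so the interval that would occur by chance only 1% of the time is m × ln(100).

```python
import math

def surprise_threshold(baseline_intervals, alpha=0.01):
    """Interval so long it would occur with probability < alpha if the
    baseline process were unchanged (exponential inter-event assumption)."""
    mean = sum(baseline_intervals) / len(baseline_intervals)
    return mean * math.log(1.0 / alpha)
```

For example, a baseline mean of 14 days gives a threshold of about 14 × ln(100) ≈ 64 days, the same order of magnitude as the ‘about 60 days’ red line in the hernia example that follows.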


Here is a real and recent example.

The time-series chart below shows the interval in days between surgical site infections following routine hernia surgery. These are not life-threatening complications. They rarely require re-admission or re-operation. But they are disruptive for patients. They cause pain, require treatment with antibiotics, and delay recovery and return to normal activities. So we would like to avoid them if possible.

Hernia_SSI_CareBundle

The green and red lines show the baseline period. The green line says that the average interval between SSIs is 14 days.  The red line says that an interval of more than about 60 days would be surprisingly long: valid statistical evidence of an improvement.  The end of the green and red lines indicates when the intervention was made: when the evidence-based designer care bundle was adopted, together with the discipline of applying it to every patient. No judgement. No variation.

The chart tells the story. No complicated statistical analysis is required. It shows a statistically significant improvement.  And the SSI rate fell by over 80%. That is a big improvement.

We still do not know how the care bundle works. We do not know which of the seven simultaneous simple and low-cost interventions we chose are the most important or even if they work independently or in synergy.  Knowledge of the mechanism was not our goal.

Our goal was to improve outcomes for our patients – to reduce avoidable harm – and that has been achieved. The evidence is clear.

That is Improvement Science in action.

And to read the full account of this example of the Science of Improvement please go to:

http://www.journalofimprovementscience.net

It is essay number 18.

And avoid another error of omission. Do not omit to share this message – it is important.

Improvement implies change.
Change implies action.
Action implies decision.

So how is the decision made?
With Urgency?
With Understanding?

Bitter experience teaches us that often there is an argument about what to do and when to do it.  An argument between two factions. Both are motivated by a combination of anger and fear. One side is motivated more by anger than fear. They vote for action because of the urgency of the present problem. The other side is motivated more by fear than anger. They vote for inaction because of their fear of future failure.

The outcome is unhappiness for everyone.

If the ‘action’ party wins the vote and a failure results then there is blame and recrimination. If the ‘inaction’ party wins the vote and a failure results then there is blame and recrimination. If either party achieves a success then there is both gloating and resentment. Lose Lose.

The issue is not the decision nor how it is made. The problem is the battle.

Dr Steve Peters is a psychiatrist with 30 years of clinical experience.  He knows how to help people succeed in life through understanding how the caveman wetware between their ears actually works.

In the run up to the 2012 Olympic Games he was the sports psychologist for the multiple-gold-medal-winning UK Cycling Team. The World Champions. And what he taught them is described in his book – “The Chimp Paradox”.

Steve brilliantly boils the current scientific understanding of the complexity of the human mind down into a simple metaphor.

One that is accessible to everyone.

The metaphor goes like this:

There are actually two ‘beings’ inside our heads. The Chimp and the Human. The Chimp is the older, stronger, more emotional and more irrational part of our psyche. The Human is the newer, weaker, logical and rational part.  Also inside there is the Computer. It is just a memory where both the Chimp and the Human store information for reference later. Beliefs, values, experience. Stuff like that. Stuff they use to help them make decisions.

And when some new information arrives through our senses – sight and sound for example – the Chimp gets first dibs and uses the Computer to look up what to do.  Long before the Human has had time to analyse the new information logically and rationally. By the time the Human has even started on solving the problem the Chimp has come to a decision and signaled it to the Human and associated it with a strong emotion. Anger, Fear, Excitement and so on. The Chimp operates on basic drives like survival-of-the-self and survival-of-the-species. So if the Chimp gets spooked or seduced then it takes control – and it is the stronger so it always wins the internal argument.

But the Human is responsible for the actions of the Chimp. As Steve Peters says ‘If your dog bites someone you cannot blame the dog – you are responsible for the dog‘.  So it is with our inner Chimps. Very often we end up apologising for the bad behaviour of our inner Chimp.

Because our inner Chimp is the stronger we cannot ‘control’ it by force. We have to learn how to manage the animal. We need to learn how to soothe it and to nurture it. And we need to learn how to remove the Gremlins that it has programmed into the Computer. Our inner Chimp is not ‘bad’ or ‘mad’ it is just a Chimp and it is an essential part of us.

Real chimpanzees are social, tribal and territorial.  They live in family groups and the strongest male is the boss. And it is now well known that a troop of chimpanzees in the wild can plan and wage battles to acquire territory from neighbouring troops. With casualties on both sides.  And so it is with people when their inner Chimps are in control.

Which is most of the time.

Scenario:
A hospital is failing one of its performance targets – the 18 week referral-to-treatment one – and is being threatened with fines and potential loss of its autonomy. The fear at the top drives the threat downwards. Operational managers are forced into action and do so using strategies that have not worked in the past. But they do not have time to learn how to design and test new ones. They are bullied into Plan-Do mode. The hospital is also required to provide safe care and the Plan-Do knee-jerk triggers fear-of-failure in the minds of the clinicians who then angrily oppose the diktat or quietly sabotage it.

This lose-lose scenario is being played out in 100s if not 1000s of hospitals across the globe as we speak. The evidence is there for everyone to see.

The inner Chimps are in charge and the outcome is a turf war with casualties on all sides.

So how does The Chimp Paradox help dissolve this seemingly impossible challenge?

First it is necessary to appreciate that both sides are being controlled by their inner Chimps who are reacting from a position of irrational fear and anger. This means that everyone’s behaviour is irrational and their actions likely to be counter-productive.

What is needed is for everyone to be managing their inner Chimps so that the Humans are back in control of the decision making. That way we get wise decisions that lead to effective actions and win-win outcomes. Without chaos and casualties.

To do this we all need to learn how to manage our own inner Chimps … and that is what “The Chimp Paradox” is all about. That is what helped the UK cyclists to become gold medalists.

In the scenario painted above we might observe that the managers are more comfortable in the Pragmatist-Activist (PA) half of the learning cycle. The Plan-Do part of PDSA  – to translate into the language of improvement. The clinicians appear more comfortable in the Reflector-Theorist (RT) half. The Study-Act part of PDSA.  And that difference of preference is fueling the firestorm.

Improvement Science tells us that to achieve and sustain improvement we need all four parts of the learning cycle working  smoothly and in sequence.

So what at first sight looks like a pitched battle that must produce two losers could in reality be a three-legged race in which everyone wins. But only if synergy between the PA and the RT halves can be achieved.

And that synergy is achieved by learning to respect, understand and manage our inner Chimps.

[Beep Beep] The alarm on Bob’s smartphone was the reminder that in a few minutes his e-mentoring session with Lesley was due. Bob had just finished the e-mail he was composing so he sent it and then fired-up the Webex session. Lesley was already logged in and on line.

<Bob> Hi Lesley. What aspect of Improvement Science shall we talk about today? What is next on your map?

<Lesley> Hi Bob. Let me see. It looks like ‘Employee Engagement’ is the one we have explored least so far – and it links to lots of other things.

<Bob> OK. What would you say the average level of Employee Engagement is in your organisation at the moment? On a scale of zero to ten where zero is defined as ‘complete apathy’.

<Lesley> Good question. I see a wide range of engagement and I would say the average is about four out of ten.  There are some very visible, fully-engaged, energetic, action-focused  movers-and-shakers.  There are many more nearer the apathy end of the spectrum. Most employees seem to turn up, do their jobs well enough to avoid being disciplined, and then go home.

<Bob> OK. And do you feel that is a problem?

<Lesley> You betcha!  Improvement means change and change means action.  Disengaged employees are a deadweight. They do not actively block change – they will go along with it if pushed – but they do not contribute to making it happen. And that creates a different problem. The movers-and-shakers get frustrated, eventually tire of dragging the deadweight uphill, and give up. Then they can become increasingly critical and then cynical. And after they give up in despair they actively block any new ideas, saying “Do not try, you will fail.”

<Bob> So how would you describe the emotional state of those you describe as “disengaged”?

<Lesley> Miserable.

<Bob> And who is making them feel miserable?

<Lesley> That is another good question. They appear to be making themselves feel miserable. And it is not what is happening that triggers this emotion. It is what is not happening. Apathy seems to be self-sustaining.

<Bob> Can you explain in a bit more about what you mean by that and maybe share an example?

<Lesley> An example is easier.  I have reflected on this a bit and I have used one of the 6M Design® techniques to help me understand it better.  I used a Right-2-Left® map to compare a personal example of when I felt really motivated and delivered a significant and measurable improvement; with one where I felt miserable and no change happened.

<Bob> Excellent. What did you discover?

<Lesley> I discovered that there were four classes of  difference between the two examples. And I then understood what you mean by ‘Acts and Errors of  Omission and Commission’.

<Bob> OK. And which was the commonest of the four combinations in your example?

<Lesley> The Errors of Omission. And within just that group there were three different types that were most obvious.

<Bob> Can you list them for me?

<Lesley> For sure. The first is the miserableness I felt when what I was doing seemed irrelevant. When what I was being asked to do had no worthwhile purpose that I was aware of.

<Bob> So which was it? No worth or not being aware of the worth?

<Lesley> Me not being aware of the worth. I hoped it was of value to someone higher up the corporate food chain, otherwise I would not have been asked to do it! But I was never sure. And that uncertainty generated some questions. What if what I am doing is of no worth to anyone? What if I am just wasting my lifetime doing it? That fearful thought left me feeling more miserable than motivated.

<Bob> OK. What was the second Error of Omission?

<Lesley> It is linked to the first one. I had no objective way of knowing if I was doing a worthwhile job.  And the word objective is important.  I am not asking for subjective feedback – there is too much expectation, variation, assumption, prejudgement and politics mixed up in opinions of what I achieve.  I needed specific, objective and timely feedback. I associated my feeling of miserableness with not getting objective feedback that told me what I was doing was making a worthwhile difference to someone else. Anyone else!

<Bob> I thought that you get a lot of objective feedback on a whole raft of organisational performance metrics?

<Lesley> Oh yes! We do!! The problem is that it is high-level, aggregated, anonymous, and delayed. To get a copy of a report that says as an organisation we did or did not meet last quarter’s arbitrary performance target for x, y or z usually generates a ‘So what? How does that relate to what I do?’ reaction. I need objective, specific and timely feedback about the effects of my work. Good or bad.

<Bob> OK.  And Error of Omission Three?

<Lesley> This was the trickiest one to nail down. What it came down to was being treated as a process and not as a person.  I felt anonymous.  I was just  a headcount, a number on a payroll ledger, an overhead cost. That feeling was actually the most demotivating of all.

<Bob> And did it require all Three Errors of Omission to be present for the ‘miserableness’ to become manifest?

<Lesley> Alas no! Any one of them was enough. The more of them present at the same time, the deeper the misery and the less motivated I felt.

<Bob> Thank you for being so frank and open. So what have you ‘abstracted’ from your ‘reflection’?

<Lesley> That employee engagement requires that these Three Errors of Omission must be deliberately checked for and proactively addressed if discovered.

<Bob> And who would, could or should do this check-and-correct work?

<Lesley> H’mm. Another very good question. The employee could do it but it is difficult for them because a lot of the purpose-setting and feedback comes from outside their circle of control and from higher up. Approaching a line-manager with a list of their Errors of Omission will be too much of a challenge!

<Bob> So?

<Lesley> The manager should do it.  They should ask themselves these questions.  Only they can correct their  own Errors of Omission.  I doubt if that would happen spontaneously though! Humility seems a bit of a rare commodity.

<Bob> I agree. So what can the employee do to help their boss?

<Lesley> They could ask how they can be of most value to their boss and they could ask for objective and timely feedback on how well they are performing as an individual on those measures of worth. It sounds so simple and obvious when said out loud. So why does no one do it?

<Bob> A very good question. Some do, and they are often described as ‘motivating leaders’. So does this insight suggest to you any strategies for grasping the ‘Employee Engagement’ nettle without getting stung?

<Lesley> Yes indeed! I am already planning my next action. A chat with my line-manager about what I could do. Thanks Bob.

<Bob> My pleasure. And remember that the same principle works for everyone that we work directly with – especially those immediately ‘upstream’ and ‘downstream’ of us in our daily work.

This is a picture of Chris Hadfield. He is an astronaut and, to prove it, here he is in the ‘cupola’ of the International Space Station (ISS). Through the windows is a spectacular view of the Earth from space.

Our home seen from space.

What is remarkable about this image is that it even exists.

This image is tangible evidence of a successful outcome of a very long path of collaborative effort by 100s of 1000s of people who share a common dream.

A dream that says: if we can learn to overcome the challenge of establishing a permanent manned presence in space, then just imagine what else we might achieve.

Chris is unusual for many reasons.  One is that he is Canadian and there are not many Canadian astronauts. He is also the first Canadian astronaut to command the ISS.  Another claim to fame is that when he recently lived in space for 5 months on the ISS, he recorded a version of David Bowie’s classic song – for real – in space. To date this has clocked up 21 million YouTube hits and has helped to bring the inspiring story of space exploration back to the public consciousness.

Especially the next generation of explorers – our children.

Chris has also written a book – ‘An Astronaut’s Guide to Life on Earth’ – that tells his story. It describes how he was inspired at a young age by seeing the first man step onto the Moon in 1969.  He overcame seemingly impossible obstacles to become an astronaut, to go into space, and to command the ISS.  The image is tangible evidence.

We all know that space is a VERY dangerous place.  I clearly remember the two space shuttle disasters. There have been many other much less public accidents.  Those tragic events have shocked us all out of complacency and have created a deep sense of humility in those who face up to the task of learning to overcome the enormous technical and cultural barriers.

Getting six people into space safely, staying there long enough to conduct experiments on the long-term effects of weightlessness, and getting them back again safely is a VERY difficult challenge.  And it has been overcome. We have the proof.

Many of the seemingly impossible day-to-day problems that we face seem puny in comparison.

For example: getting every patient into hospital, staying there just long enough to benefit from cutting edge high-technology healthcare, and getting them back home again safely.

And doing it repeatedly and consistently so that the system can be trusted and we are not greeted with tragic stories every time we open a newspaper. Stories that erode our trust in the ability of groups of well-intended people to do anything more constructive than bully, bicker and complain.

So when the exasperated healthcare executive exclaims ‘Getting 95% of emergency admissions into hospital in less than 4 hours is not rocket science!‘ – then perhaps a bit more humility is in order. It is rocket science.

Rocket science is Improvement science.

And reading the story of a real-life rocket-scientist might be just the medicine our exasperated executives need.

Because Chris explains exactly how it is done.

And he is credible because he has walked-the-talk so he has earned the right to talk-the-walk.

The least we can do is listen and learn.

Here is Chris answering the question ‘How to achieve an impossible dream?’

The emotional roller-coaster ride that is associated with change, learning and improvement is called the Nerve Curve.

We are all very familiar with the first stages – Shock, Denial, Anger, Bargaining, Depression and Despair.  We are less familiar with the stages associated with the long climb out to Resolution: because most improvement initiatives fail for one reason or another.

The critical first step is to “Disprove Impossibility” and this is the first injection of innovation. Someone (the ‘innovator’) discovers that what was believed to be impossible is not. They only need one example. One Black Swan.

The tougher task is to influence those languishing in the ‘Depths of Despair’ that there is hope and that there is a ‘how’. This is not easy because cynicism is toxic to innovation.  So an experienced Improvement Science Practitioner (ISP) bypasses the cynics and engages with the depressed-but-still-healthy-skeptics.

The challenge now is how to get a shed load of them up the hill.

When we first learn to drive we start on the flat, not on hills,  for a very good reason. Safety.

We need to learn to become confident with the controls first. The brake, the accelerator, the clutch and the steering wheel.  This takes practice until it is comfortable, unconscious and almost second nature. We want to achieve a smooth transition from depression to delight, not chaotic kangaroo jumps!

Only when we can do that on the flat do we attempt a hill-start. And the key to a successful hill start is the sequence.  Hand brake on  for safety, out of gear, engine running, pointing at the goal. Then we depress the clutch and select a low gear – we do not want to stall. Speed is not the goal. Safety comes first. Then we rev the engine to give us the power we need to draw on. Then we ease the clutch until the force of the engine has overcome the force of gravity and we feel the car wanting to move forward. And only then do we ease the handbrake off, let the clutch out more and hit the gas to keep the engine revs in the green.

So when we are planning to navigate a group of healthy skeptics up the final climb of the Nerve Curve we need to plan and prepare carefully.

What is least likely to be successful?

Well, if all we have is our own set of wheels,  a cheap and cheerful mini-motor, then it is not going to be a good idea to shackle a trailer to it; fill the trailer with skeptics and attempt a hill start. We will either stall completely or burn out our clutch. We may even be dragged backwards into the Cynic Infested Toxic Swamp.

So what if we hire a bus, load up our skeptical passengers, and have a go?  We may be lucky – but if we have no practice doing hill starts with a full bus then we could be heading for disappointment; or disaster.

So what is a safer plan:
1) First we need to go up the mountain ourselves to demonstrate it is possible.
2) Then we take one or two of the least skeptical up in our car to show it is safe.
3) We then invite those skeptics with cars to learn how to do safe hill starts.
4) Finally we ask the ex-skeptics to teach the fresh-skeptics how to do it.

Brmmmm Brmmmm. Off we go.

[Dring] Bob’s laptop signaled the arrival of Leslie for their regular ISP remote mentoring session.

<Bob> Hi Leslie. Thanks for emailing me with a long list of things to choose from. It looks like you have been having some challenging conversations.

<Leslie> Hi Bob. Yes indeed! The deepening gloom and the last few blog topics seem to be polarising opinion. Some are claiming it is all hopeless and others, perhaps out of desperation, are trying the FISH stuff for themselves and discovering that it works.  The ‘What Ifs’ are engaged in a war of words with the ‘Yes Buts’.

<Bob> I like your metaphor! Where would you like to start on the long list of topics?

<Leslie> That is my problem. I do not know where to start. They all look equally important.

<Bob> So, first we need a way to prioritise the topics to get the horse-before-the-cart.

<Leslie> Sounds like a good plan to me!

<Bob> One of the problems with the traditional improvement approaches is that they seem to start at the most difficult point. They focus on ‘quality’ first – and to be fair that has been the mantra from the gurus like W.E.Deming. ‘Quality Improvement’ is the Holy Grail.

<Leslie> But quality IS important … are you saying they are wrong?

<Bob> Not at all. I am saying that it is not the place to start … it is actually the third step.

<Leslie> So what is the first step?

<Bob> Safety.  Eliminating avoidable harm. Primum Non Nocere. The NooNoos. The stuff that generates the most fear for everyone.

<Leslie> You mean having a service that we can trust not to harm us unnecessarily?

<Bob> Yes. It is not a good idea to make an unsafe design more efficient – it will deliver even more harm!

<Leslie> OK. That makes perfect sense to me. So how do we do that?

<Bob> It does not actually matter how.  Well-designed and thoroughly field-tested checklists have been proven to be very effective in the ‘ultra-safe’ industries like aerospace and nuclear.

<Leslie> OK. Something like the WHO Safe Surgery Checklist?

<Bob> Yes, that is a good example – and it is well worth reading Atul Gawande’s book about how that happened – “The Checklist Manifesto“.   Gawande is a surgeon who had published a lot on improvement and even so was quite skeptical that something as simple as a checklist could possibly work in the complex world of surgery. In his book he describes a number of personal ‘Ah Ha!’ moments that illustrate a phenomenon that I call Jiggling.

<Leslie> OK. I have made a note to read Checklist Manifesto and I am curious to learn more about Jiggling – but can we stick to the point? Does quality come after safety?

<Bob> Yes, but not immediately after. As I said, Quality is the third step.

<Leslie> So what is the second one?

<Bob> Flow.

There was a long pause – and just as Bob was about to check that the connection had not been lost – Leslie spoke.

<Leslie> But none of the Improvement Schools teach basic flow science.  They all focus on quality, waste and variation!

<Bob> I know. And attempting to improve quality before improving flow is like papering the walls before doing the plastering.  Quality cannot grow in a chaotic context. The flow must be smooth before that. And the fear of harm must be removed first.

<Leslie> So the ‘Improving Quality through Leadership‘ bandwagon that everyone is jumping on will not work?

<Bob> Well that depends on what the ‘Leaders’ are doing. If they are leading the way to learning how to design-for-safety and then design-for-flow then the bandwagon might be a wise choice. If they are only facilitating collaborative agreement and group-think then they may be making an ineffective system more efficient which will steer it over the edge into faster decline.

<Leslie> So, if we can stabilize safety using checklists, do we focus on flow next?

<Bob> Yup.

<Leslie> OK. That makes a lot of sense to me. So what is Jiggling?

<Bob> This is Jiggling. This conversation.

<Leslie> Ah, I see. I am jiggling my understanding through a series of ‘nudges’ by you.

<Bob> Yes. And when the learning cogs are a bit rusty, some Improvement Science Oil and a bit of Jiggling are more effective and much safer than whacking the caveman wetware with a big emotional hammer.

<Leslie> Well, the conversation has certainly jiggled Safety-Flow-Quality-and-Productivity into a sensible order for me. That helped a lot. I will sort my to-do list into that order and start at the beginning. Let me see. I have a plan for safety, now I can focus on flow. Here is my top flow niggle. How do I design the resource capacity I need to ensure the flow is smooth and the waiting times are short enough to avoid ‘persecution’ by the Target Time Police?

<Bob> An excellent question! I will send you the first ISP Brainteaser that will nudge us towards an answer to that question.

<Leslie> I am ready and waiting to have my brain-teased and my niggles-nudged!

Systems are built from intersecting streams of work called processes.

This iconic image of the London Underground shows a system map – a set of intersecting transport streams.

Each stream links a sequence of independent steps – in this case the individual stations.  Each step is a system in itself – it has a set of inner streams.

For a system to exhibit stable and acceptable behaviour the steps must be in synergy – literally ‘together work’. The steps also need to be in synchrony – literally ‘same time’. And to do that they need to be aligned to a common purpose.  In the case of a transport system the design purpose is to get from A to B safely, quickly, in comfort and at an affordable cost.

In large socioeconomic systems called ‘organisations’ the steps represent groups of people with special knowledge and skills that collectively create the desired product or service.  This creates an inevitable need for ‘handoffs’ as partially completed work flows through the system along streams from one step to another. Each step contributes to the output. It is like a series of baton passes in a relay race.

This creates the requirement for a critical design ingredient: trust.

Each step needs to be able to trust the others to do their part:  right-first-time and on-time.  All the steps are directly or indirectly interdependent.  If any one of them is ‘untrustworthy’ then the whole system will suffer to some degree. If too many generate distrust then the system may fail and can literally fall apart. Trust is like social glue.

So a critical part of people-system design is the development and the maintenance of trust-bonds.

And it does not happen by accident. It takes active effort. It requires design.

We are social animals. Our default behaviour is to trust. We learn distrust by experiencing repeated disappointments. We are not born cynical – we learn that behaviour.

The default behaviour for inanimate systems is disorder – and it has a fancy name – it is called ‘entropy’. There is a Law of Physics that says that ‘the average entropy of an isolated system will increase over time‘. The critical word is ‘average’.

So, if we are not aware of this and we omit to pay attention to the hand-offs between the steps, we will observe increasing disorder, which leads to repeated disappointments and erosion of trust. Our natural reaction then is to ‘self-protect’, which implies ‘check-and-reject’ and ‘check-and-correct’. This adds complexity and bureaucracy and may prevent further decline – which is good – but it comes at a cost – quite literally.

Eventually an equilibrium will be achieved where our system performance is limited by the amount of check-and-correct bureaucracy we can afford.  This is called a ‘mediocrity trap’ and it is very resilient – which means resistant to change in any direction.


To escape from the mediocrity trap we need to break into the self-reinforcing check-and-reject loop and we do that by developing a design that challenges ‘trust eroding behaviour’.  The strategy is to develop a skill called  ‘smart trust’.

To appreciate what smart trust is we need to view trust as a spectrum: not as a yes/no option.

At one end is ‘nonspecific distrust’ – otherwise known as ‘cynical behaviour’. At the other end is ‘blind trust’ – otherwise known as ‘gullible behaviour’.  Neither of these is what we need.

In the middle is the zone of smart trust that spans healthy scepticism through to healthy optimism.  What we need is to maintain a balance between the two – not to eliminate them. This is because some people are ‘glass-half-empty’ types and some are ‘glass-half-full’. And both views have value.

The action required to develop smart trust is to respectfully challenge every part of the organisation to demonstrate ‘trustworthiness’ using evidence.  Rhetoric is not enough. Politicians always score very low on ‘most trusted people’ surveys.

The first phase of this smart trust development is for steps to demonstrate trustworthiness to themselves using their own evidence, and then to share this with the steps immediately upstream and downstream of them.

So what evidence is needed?

Safety comes first. If a step cannot be trusted to be safe then that is the first priority. Safe systems need to be designed to be safe.

Flow comes second. If the streams do not flow smoothly then we experience turbulence and chaos which increases stress,  the risk of harm and creates disappointment for everyone. Smooth flow is the result of careful  flow design.

Third is Quality which means ‘setting and meeting realistic expectations‘.  This cannot happen in an unsafe, chaotic system.  Quality builds on Flow which builds on Safety. Quality is a design goal – an output – a purpose.

Fourth is Productivity (or profitability) and that does not automatically follow from the other three as some QI Zealots might have us believe. It is possible to have a safe, smooth, high quality design that is unaffordable.  Productivity needs to be designed too.  An unsafe, chaotic, low quality design is always more expensive.  Always. Safe, smooth and reliable can be highly productive and profitable – if designed to be.

So whatever the driver for improvement the sequence of questions is the same for every step in the system: “How can I demonstrate evidence of trustworthiness for Safety, then Flow, then Quality and then Productivity?”

And when that happens improvement will take off like a rocket. That is the Speed of Trust.  That is Improvement Science in Action.

The modern era in science started about 500 years ago when an increasing number of people started to challenge the dogma that our future is decided by Fates and Gods. That we had no influence. And to appease the ‘Gods’ we had to do as we were told. That was our only hope of Salvation.

This paradigm came under increasing pressure as the evidence presented by Reality did not match the Rhetoric.  Many early innovators paid for their impertinence with their fortunes, their freedom and often their future. They were burned as heretics.

When the old paradigm finally gave way and the Age of Enlightenment dawned the pendulum swung the other way – and the new paradigm became the ‘mechanical universe’. Isaac Newton showed that it was possible to predict, with very high accuracy, the motion of the planets just by adopting some simple rules and a new form of mathematics called calculus. This opened a door into a more hopeful world – if Nature follows strict rules and we know what they are then we can learn to control Nature and get what we need without having to appease any Gods (or priests).

This was the door to the Industrial Revolutions – there have been more than one – each lasting about 100 years (18th, 19th and 20th centuries). Each was associated with massive population growth as we systematically eliminated the causes of early mortality – starvation and infectious disease.

But not everything behaved like the orderly clockwork of the planets and the pendulums. There was still the capricious and unpredictable behaviour that we call Lady Luck.  Had the Gods retreated but were still playing dice?

Progress was made here too – and the history of the ‘understanding of chance’ is peppered with precocious and prickly mathematical savants who discovered that chance follows rules too. Probability theory was born and that spawned a troublesome child called Statistics. This was a trickier one to understand. To most people statistics is just mathematical gobbledygook.

But from that emerged a concept called the Rational Man – which underpinned the whole of Economic Theory for 250 years. Until very recently.  The RM hypothesis stated that we make unconscious but rational judgements when presented with uncertain win/lose choices.  And from that seed sprouted concepts such as the Law of Supply and Demand – when the supply of something we demand is limited we (rationally) value it more and will pay more; prices rise, fewer can afford it, and demand drops. Foxes and Rabbits. A negative feedback loop. The economic system becomes self-adjusting and self-stabilising.  The outcome of this assumption is a belief that ‘because people are collectively rational the economic system will be self-stabilising and it will correct the adverse short-term effects of any policy blunders we make‘.  The ‘let-the-market-decide’ belief that experimental economic meddling is harmless over the long term and that what is learned from ‘laissez-faire’ may even be helpful. It is a no-lose long-term improvement strategy. Losers are just unlucky, stupid or both.
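That negative feedback loop can be sketched as a toy simulation (the numbers and the linear demand curve are illustrative assumptions, not real economic data): price rises while demand exceeds the fixed supply, falls when it does not, and the system settles at the equilibrium price where the two balance.

```python
# Toy negative-feedback market: illustrative numbers only.
supply = 100.0  # fixed quantity available per period

def demand(price):
    # Hypothetical linear demand curve: the higher the price, the fewer buyers.
    return max(0.0, 200.0 - 10.0 * price)

price = 2.0
for _ in range(200):
    # Price adjusts in proportion to the demand-supply gap.
    price += 0.01 * (demand(price) - supply)

print(round(price, 2))  # settles near 10.0, where demand equals supply
```

The point of the Rational Man assumption is exactly this self-stabilising behaviour: perturb the price in either direction and the loop pulls it back towards equilibrium.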

In 2002 the Nobel Prize for Economics was not awarded to an economist. It was awarded to a psychologist – Daniel Kahneman – who showed that the model of the Rational Man did not stand up to rigorous psychological experiment.  Reality demonstrated we are Irrational Chimps. The economists had omitted to test their hypothesis. Oops!


This lesson has many implications for the Science of Improvement.  One of which is a deeper understanding of the nemesis of improvement – resistance to change.

One of the surprising findings is that our judgements are biased – and our bias operates at an unconscious level – what Kahneman describes as the System One level. Chimp level. We are not aware we are making biased decisions.

For example. Many assume that we prefer certainty to uncertainty. We fear the unpredictable. We avoid it. We seek the predictable and the stable. And we will put up with just about anything so long as it is predictable. We do not like surprises.  And when presented with that assertion most people nod and say ‘Yes’ – that feels right.

We also prefer gain to loss.  We love winning. We hate losing. This ‘competitive spirit’ is socially reinforced from day one by our ‘pushy parents’ – we all know the ones – but we all do it to some degree. Do better! Work harder! Be a success! Optimize! Be the best! Be perfect! Be Perfect! BE PERFECT.

So which is more important to us? Losing or uncertainty? This is one question that Kahneman asked. And the answer he discovered was surprising – because it effectively disproved the Rational Man hypothesis.  And this is how a psychologist earned a Nobel Prize for Economics.

Kahneman discovered that loss is more important to us than uncertainty.

To demonstrate this he presented subjects with a choice between two win/lose options; and he presented the choice in two ways. To a statistician and a Rational Man the outcomes were exactly the same in terms of gain or loss.  He designed the experiment to ensure that it was the unconscious judgement that was being measured – the intuitive gut reaction. So if our gut reactions are Rational then the choice and the way the choice was presented would have no significant effect.

There was an effect. The hypothesis was disproved.

The evidence showed that our gut reactions are biased … and in an interesting way.

If we are presented with the choice between a certain gain and an uncertain gain/loss (so the average gain is the same) then we choose the certain gain much more often.  We avoid uncertainty. Uncertainty 1, Loss 0.

BUT …

If we are presented with a choice between certain loss and an uncertain loss/gain (so the average outcome is again the same) then we choose the uncertain option much more often. This is exactly the opposite of what was expected.
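The ‘same average outcome’ point can be made concrete with a tiny worked example. The payoff numbers below are illustrative assumptions, not Kahneman’s actual stimuli:

```python
# Two framings of a win/lose choice. In each framing both options have
# the same expected value, so a 'Rational Man' would be indifferent.
certain_gain = 50                        # receive 50 for sure
gamble_gain = 0.5 * 100 + 0.5 * 0        # 50% chance of 100, else nothing

certain_loss = -50                       # lose 50 for sure
gamble_loss = 0.5 * -100 + 0.5 * 0       # 50% chance of losing 100, else nothing

print(certain_gain, gamble_gain)         # equal expected gain
print(certain_loss, gamble_loss)         # equal expected loss
```

What Kahneman measured was that, despite the equal averages, most subjects take the certain gain in the first framing yet take the gamble in the second.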

And it did not make any difference if the subject knew the results of the experiment before doing it. The judgement is made out of awareness and communicated to our consciousness via an emotion – a feeling – that biases our slower, logical, conscious decision process.

This means that the sense of loss has more influence on our judgement than the sense of uncertainty.

This behaviour is hard-wired. It is part of our Chimp brain design. And once we know this we can see the effect of it everywhere.

1. We will avoid the pain of uncertainty and resist any change that might deliver a gain when we believe that future loss is uncertain. We are conservative and over-optimistic.

2. We will accept the pain of uncertainty and only try something new (and risky) when we believe that to do otherwise will result in certain loss. The Backs Against The Wall scenario.  The Cornered Rat is Unpredictable and Dangerous scenario.

This explains why we resist any change right up until the point when we see Reality clearly enough to believe that we are definitely going to lose something important if we do nothing. Lose our reputation, lose our job, lose our security, lose our freedom or lose our lives. That is a transformational event.  A Road to Damascus moment.

Understanding that we behave like curious, playful, social but irrational chimps is one key to unlocking significant and sustained improvement.

We need to celebrate our inner chimp – it is key to innovation.

And we need to learn how to team up with our inner chimp rather than be hijacked by it.

If we do not we will fail – the Laws of Physics, Probability and Psychology decree it.

[Beep Beep] Bob’s laptop signaled the arrival of Leslie to their regular Webex mentoring session. Bob picked up the phone and connected to the conference call.

<Bob> Hi Leslie, how are you today?

<Leslie> Great thanks Bob. I am sorry but I do not have a red-hot burning issue to talk about today.

<Bob> OK – so your world is completely calm and orderly now. Excellent.

<Leslie> I wish! The reason is that I have been busy preparing for the monthly 1-2-1 with my boss.

<Bob> OK. So do you have a few minutes to talk about that?

<Leslie> What can I tell you about it?

<Bob> Can you just describe the purpose and the process for me?

<Leslie> OK. The purpose is improvement – for both the department and the individual. The process is that all departmental managers have an annual appraisal based on their monthly 1-2-1 chats and the performance scores for their departments are used to reward the top 15% and to ‘performance manage’ the bottom 15%.

<Bob> H’mmm.  What is the commonest emotion that is associated with this process?

<Leslie> I would say somewhere between severe anxiety and abject terror. No one looks forward to it. The annual appraisal feels like a lottery where the odds are stacked against you.

<Bob> Can you explain that a bit more for me?

<Leslie> Well, the most fear comes from being in the bottom 15% – the fear of being ‘handed your hat’ so to speak. Fortunately that fear motivates us to try harder and that usually saves us from the chopper because our performance improves.  The cost is the extra stress, working late and taking ‘stuff’ home.

<Bob> OK. And the anxiety?

<Leslie> Paradoxically that mostly comes from the top 15%. They are anxious to sustain their performance. Most do not and the Boss’s Golden Manager can crash spectacularly! We have seen it so often. It is almost as if being the Best carries a curse! So most of us try to stay in the middle of the pack where we do not stick out – a sort of safety in the herd strategy.  It is illogical I know because there is always a ‘top’ 15% and a ‘bottom’ 15%.

<Bob> You mentioned before that it feels like a lottery. How come?

<Leslie> Yes – it feels like a lottery but I know it has a rational scientific basis. Someone once showed me the ‘statistically significant evidence’ that proves it works.

<Bob> That what works exactly?

<Leslie> That sticks are more effective than carrots!

<Bob> Really! And what do the performance run charts look like – over the long term – say monthly over 2-3 years?

<Leslie> That is a really good question. They are surprisingly stable – well, completely stable in fact. They wobble up and down of course but there is no sign of improvement over the long term – no trend. If anything it is the other way.

<Bob> So what is the rationale for maintaining the stick-is-better-than-the-carrot policy?

<Leslie> Ah! The message we are getting  is ‘as performance is not improving and sticks have been scientifically proven to be more effective than carrots then we will be using a bigger stick in future‘.

<Bob> Hence the atmosphere of fear and anxiety?

<Leslie> Exactly. But that is the way it must be I suppose.

<Bob> Actually it is not. This is an invalid design based on rubbish intuitive assumptions and statistical smoke-and-mirrors that creates unmeasurable emotional pain and destroys both people and organisations!

<Leslie> Wow! Bob! I have never heard you use language like that. You are usually so calm and reasonable. This must be really important!

<Bob> It is – and for that reason I need to shock you out of your apathy – and I can best do that by letting you prove it to yourself – scientifically – with a simple experiment. Are you up for that?

<Leslie> You betcha! This sounds like it is going to be interesting. I had better fasten my safety belt! The Nerve Curve awaits.


 The Stick-or-Carrot Experiment

<Bob> Here we go. You will need five coins, some squared-paper and a pencil. Coloured ones are even better.

<Leslie> OK. Does it matter what sort of coins?

<Bob> No. Any will do. Imagine you have four managers called A, B, C and D respectively.  Each month the performance of their department is measured as the number of organisational targets that they are above average on. Above average is like throwing a ‘head’, below average is like throwing a ‘tail’. There are five targets – hence the five coins.

<Leslie> OK. That makes sense – and it feels better to use the measured average – we have demonstrated that arbitrary performance targets are dangerous – especially when imposed blindly across all departments.

<Bob> Indeed. So can you design a score sheet to track the data for the experiment?

<Leslie>Give me a minute.  Will this suffice?

[Figure: Stick_and_Carrot_Fig1]

<Bob> Perfect! Now simulate a month by tossing all five coins – once for each manager – and record the outcome of each as H or T, then tot up the number of heads for each manager.

<Leslie>  OK … here is what I got.

[Figure: Stick_and_Carrot_Fig2]

<Bob> Good. Now repeat this 11 more times to give you the results for a whole year.  In the n(Heads) column colour the boxes that have scores of zero or one red – these are the Losers. Then colour the boxes that have 4 or 5 green – these are the Winners.

<Leslie> OK, that will take me a few minutes – do you want to get a coffee or something?

[Five minutes later]

Here you go. That gives 48 opportunities to win or lose and I counted 9 Losers and 9 Winners so just under 20% for each. The majority were in the unexceptional middle. The herd.
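For anyone who prefers silicon to coins, the same year of scores can be simulated in a few lines of Python. This is only a sketch – the seed is arbitrary, so the counts will differ from Leslie’s score sheet:

```python
import random

random.seed(2024)                      # arbitrary seed, for reproducibility
MANAGERS, COINS, MONTHS = 4, 5, 12

# year[m][k] = number of heads for manager k in month m
year = [[sum(random.randint(0, 1) for _ in range(COINS))
         for _ in range(MANAGERS)]
        for _ in range(MONTHS)]

losers = sum(score <= 1 for month in year for score in month)   # 0 or 1 heads
winners = sum(score >= 4 for month in year for score in month)  # 4 or 5 heads
print(f"{losers} Losers, {winners} Winners out of {MANAGERS * MONTHS}")
```

On average about 19% of manager-months land in each extreme group – pure chance masquerading as performance.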

[Figure: Stick_and_Carrot_Fig3]

<Bob> Excellent.  A useful way to visualise this is using a Tally chart. Just run down the column of n(Heads) and create the Tally chart as you go. This is one of the oldest forms of counting in existence. There are fossil records that show Tally charts being used thousands of years ago.

<Leslie> I think I understand what you mean. We do not wait until all the data is in then draw the chart, we update it as we go along – as the data comes in.

<Bob> Spot on!

<Leslie> Let me see. Wow! That is so cool!  I can see the pattern appearing almost magically – and the more data I have the clearer the pattern is.

 <Bob>Can you show me?

<Leslie> Here we go.

[Figure: Stick_and_Carrot_Fig4]

<Bob> Good.  This is the expected picture. If you repeated this many times you would get the same general pattern with more 2 and 3 scores.
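The ‘expected picture’ can be computed exactly: the number of heads from five fair coins follows a binomial distribution, with the 2 and 3 scores the most likely.

```python
from math import comb

# P(k heads from 5 fair coins) = C(5, k) / 2^5  - the binomial distribution
probs = {k: comb(5, k) / 2**5 for k in range(6)}
for k, p in probs.items():
    print(f"{k} heads: {p:.4f} {'#' * round(p * 32)}")  # crude tally-style bar
```

The probabilities run 1/32, 5/32, 10/32, 10/32, 5/32, 1/32 – the symmetrical hump Leslie sees emerging on her Tally chart.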

Now I want you to do an experiment.

Assume each manager that is classed as a Winner in one month is given a reward – a ‘pat on the back’ from their Boss. And each manager that is classed as a Loser is given a ‘written warning’. Now look for  the effect that this has.

<Leslie> But we are using coins – which means the outcome is just a matter of chance! It is a lottery.

<Bob> I know that and you know that but let us assume that the Boss believes that the monthly feedback has an effect. The experiment we are doing is to compare the effect of the carrot with the stick. The Boss wants to know which results in more improvement and to know that with scientific and statistical confidence!

<Leslie> OK. So what I will do is look at the score the following month for each manager that was either a Winner or a  Loser; work out the difference, and then calculate the average of those differences and compare them with each other. That feels suitably scientific!

<Bob> OK. What do you get?

<Leslie> Just a minute, I need to do this carefully. OK – here it is.

[Figure: Stick_and_Carrot_Fig5]

<Bob> Excellent.  Just eye-balling the ‘Measured improvement after feedback’ columns I would say the Losers have improved and the Winners have deteriorated!

<Leslie> Yes! And the Losers have improved by 1.29 on average and the Winners have deteriorated by 1.78 – and that is a big difference for such a small sample. I am sure that with enough data this would be a statistically significant difference! So it is true, sticks work better than carrots!

<Bob> Not so fast. What you are seeing is a completely expected behaviour called ‘Regression to the Mean’. Remember we know that the score for each manager each month is the result of a game of chance, a coin toss, a lottery. So no amount of stick or carrot feedback is going to influence that.
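Regression to the mean is easy to demonstrate in code. In this sketch the scores are pure coin tosses with no feedback effect at all, yet the ‘Losers’ appear to improve and the ‘Winners’ appear to deteriorate the following month:

```python
import random

random.seed(42)

def month_score():
    """One manager-month: number of heads from five fair coin tosses."""
    return sum(random.randint(0, 1) for _ in range(5))

loser_changes, winner_changes = [], []
for _ in range(100_000):                 # many simulated manager-months
    this_month = month_score()
    next_month = month_score()           # chance again - feedback has no effect
    if this_month <= 1:                  # classed a 'Loser', given the stick
        loser_changes.append(next_month - this_month)
    elif this_month >= 4:                # classed a 'Winner', given the carrot
        winner_changes.append(next_month - this_month)

avg_loser = sum(loser_changes) / len(loser_changes)
avg_winner = sum(winner_changes) / len(winner_changes)
print(f"Losers 'improve' by {avg_loser:+.2f} on average")    # about +1.67
print(f"Winners 'deteriorate' by {avg_winner:+.2f}")         # about -1.67
```

Both groups simply drift back towards the average score of 2.5 – and the stick and the carrot collect the credit and the blame for what chance was going to do anyway.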

<Leslie>But the data is saying there is a difference! And that feels like the experience we have – and why fear stalks the management corridors. This is really confusing!

<Bob> Remember that confusion arises from invalid or conflicting unconscious assumptions. There is a flaw in the statistical design of this experiment. The ‘obvious’ conclusion is invalid because of this flaw. And do not be too hard on yourself. The flaw eluded mathematicians for centuries. But now you know there is one, can you find it?

<Leslie>OMG!  The use of the average to classify the managers into Winners or Losers is the flaw!  That is just a lottery. Who the managers are is irrelevant. This is just a demonstration of how chance works.

But that means … OMG!  If the conclusion is invalid then sticks are not better than carrots and we have been brain-washed for decades into accepting a performance management system that is invalid – and worse still is used to ‘scientifically’ justify systematic persecution! I can see now why you get so angry!

<Bob>Bravo Leslie.  We  need to check your understanding. Does that mean carrots are better than sticks?

<Leslie>No!  The conclusion is invalid because the assumptions are invalid and the design is fatally flawed. It does not matter what the conclusion actually is.

<Bob>Excellent. So what conclusion can you draw?

<Leslie> That this short-term carrot-or-stick feedback design for achieving improvement in a stable system is both ineffective and emotionally damaging. In fact it could well be achieving precisely the opposite of the effect it is intended to. It may be preventing improvement! But the story feels so plausible and the data appears to back it up. What is happening here is that we are using statistical smoke-and-mirrors to justify what we have already decided – and only a true expert would spot the flaw! Once again our intuition has tricked us!

<Bob>Well done! And with this new insight – how would you do it differently?  What would be a better design?

<Leslie>That is a very good question. I am going to have to think about that – before my 1-2-1 tomorrow. I wonder what might happen if I show this demonstration to my Boss? Thanks Bob, as always … lots of food for thought.


Improvement implies change.

Change follows action. Action follows planning. Effective planning follows from an understanding of the system, because that understanding is needed to make the wise decisions that achieve the purpose. The purpose is the intended outcome.

Learning follows from observing the effect of change – whatever it is. Understanding follows from learning to predict the effect of both actions and inactions.

All these pieces of the change jigsaw are different and they are inter-dependent. They fit together. They are a system.

And we can pick out four pieces: the Plan piece, the Action piece, the Observation piece and the Learning piece – and they seem to follow that sequence – it looks like a learning cycle.

This is not a new idea.

It is the same sequence as the Scientific Method: hypothesis, experiment, analysis, conclusion. The preferred tool of  Academics – the Thinkers.

It is also the same sequence as the Shewhart Cycle: plan, do, check, act. The preferred tool of the Pragmatists – the Doers.

So where does all the change conflict come from? What is the reason for the perpetual debate between theorists and activists? The incessant game of “Yes … but!”

One possible cause was highlighted by David Kolb in his work on ‘experiential learning’, which showed that individuals demonstrate a learning style preference.

We tend to be thinkers or doers and only a small proportion of us say that we are equally comfortable with both.

The effect of this natural preference is that real problems bounce back-and-forth between the Tribe of Thinkers and the Tribe of Doers.  Together we are providing separate parts of the big picture – but as two tribes we appear to be unaware of the synergistic power of the two parts. We are blocked by a power struggle.

The Experiential Learning Model (ELM) was promoted and developed by Peter Honey and Alan Mumford (see learning styles) and their work forms the evidence behind the Learning Style Questionnaire that anyone can use to get their ‘score’ on the four dimensions:

  • Pragmatist – the designer and planner
  • Activist – the action person
  • Reflector – the observer and analyst
  • Theorist – the abstracter and hypothesis generator

The evidence from population studies showed that individuals have a preference for one of these styles, sometimes two, occasionally three and rarely all four.

That observation, together with the fact that learning from experience requires moving around the whole cycle, leads to an awareness that both individuals and groups can get ‘stuck’ in their learning preference comfort zone. If the learning wheel is unbalanced it will deliver an emotionally bumpy ride when it turns! So it may be more comfortable just to remain stationary and not to learn.

Which means not to change. Which means not to improve.


So if we are embarking on an improvement exercise – be it individual or collective – then we are committed to learning. So where do we start on the learning cycle?

The first step is action. To do something – and the easiest and safest thing to do is just look. Observe what is actually happening out there in the real world – outside the office – outside our comfort zone. We need to look outside our rhetorical inner world of assumptions, intuition and prejudgements.

The next step is to reflect on what we see – we look in the mirror – and we compare what we are actually seeing with what we expected to see. That is not as easy as it sounds – and a useful tool to help is to draw charts. To make it visual. All sorts of charts.

The result is often a shock. There is often a big gap between what we see and what we perceive; between what we expect and what we experience; between what we want and what we get.

That emotional shock is actually what we need to power us through the next phase – the Realm of the Theorist – where we ask three simple questions:
Q1: What could be causing the reality that I am seeing?
Q2: How would I know which of the plausible causes is the actual cause?
Q3: What experiment can I do to answer my question and clarify my understanding of Reality?

This is the world of the Academic.

The third step is to design an experiment to test our new hypothesis.  The real world is messy and complicated and we need to be comfortable with ‘good enough’ and ‘reasonable uncertainty’.  Design is about practicalities – making something that works well enough in practice – in the real world. Something that is fit-for-purpose. We are not expecting perfection; not looking for optimum; not striving for best – just significantly better than what we have now. And the more we can test our design before we implement it the better, because we want to know what to expect before we make the change and we want to avoid unintended negative consequences – the NooNoos.

Then we do it … and the cycle of learning has come one revolution … but we are not back at the start – we have moved forward. Our understanding is already different from when we were at this stage before: it is deeper and wider.  We are following the trajectory of a spiral – our capability for improvement is expanding over time.

So we need to balance our learning wheel before we start the journey or we will have a slow, bumpy and painful ride!


One plausible approach is to stay inside our comfort zones, play to our strengths and to say “What we need is a team made of people with complementary strengths. We need a Department of Action for the Activists; a Department of Analysis for the Reflectors; a Department of Research for the Theorists and a Department of Planning for the Pragmatists.”

But that is what we have now and what is happening? The Four Departments have become super-specialised and more polarised.  There is little common ground or shared language.  There is no common direction, no co-ordination, no oil on the axle of the wheel of change. We have ground to a halt. We have chaos. Each part is working but independently of the others in an unsynchronised mess.

We have vehicular fibrillation. Change output has dropped to zero.


A better design is for everyone to focus first on balancing their own learning wheel by actively redirecting emotional energy from their comfort zone, their strength,  into developing the next step in their learning cycle.

Pragmatists develop their capability for Action.
Activists develop their capability for Reflection.
Reflectors develop their capability for Hypothesis.
Theorists develop their capability for Design.

The first step in the improvement spiral is Action – so if you are committed to improvement then investing £10 and 20 minutes in the 80-question Learning Style Questionnaire will demonstrate your commitment – not only to others – more importantly to yourself.

 

It is the time of year when our minds turn to self-improvement.

New Year.

We re-affirm our Resolutions from last year and we vow to try harder this year. As we did last year. And the year before that. And we usually fail.

So why do we fail to keep our New Year Resolutions?

One reason is because we do not let go of the past. We get pulled back into old habits too easily. To get a new future we have to do some tidying up. We need to get The Shredder. We need to make the act of letting go irreversible.

Bzzzzzzz …. Aaaaah. That feels better.

Why does this work?

First, because it feels good to be taking definitive action.  We know that resolutions are just good intentions. It is not until we take action that change happens.  Many of us are weak on the Activist dimension. We talk a lot about what we should do but we do not walk as much as we could do.

Second, because  we can see the evidence of the improvement immediately. We get immediate, visual, positive feedback. That heap of old bills and emails and reports that we kept ‘just in case’ is no longer cluttering up our desks, our eyes, our minds and our lives.  And we have ‘recycled’ it which feels even better.

Third, because we have challenged our own Prevarication Policy. And if we can do that for ourselves we can, with some credibility, do the same for others. We feel more competent and more confident.

Fourth, because we have freed up valuable capacity to invest.  More space. More time (our prevarication before kept us busy but wasted our limited time). More motivation (trying to work around a pile of rubbish day-in and day-out is emotionally draining).

So all we need to do in the New Year is stay inside our circle of control and shred some years of accumulated rubbish.

And it is not just tangible rubbish we can dispose of.  We can shred some emotional garbage too. The list of “Yes … But” excuses that we cling on to.  The sack of guilt for past failures that weighs us down. The flag of fear that we wave when we surrender our independence and adopt the Victim role.  The righteous indignation that we use to hide our own self-betrayal.

And just by putting that lot through The Shredder we release the opportunity for improvement.

The rest just happens – as if by magic.

[Hmmmmmm] The desk amplified the vibration of Bob’s smartphone as it signaled the time for his planned e-mentoring session with Leslie.

[Dring Dring]

<Bob> Hi Leslie, right-on-time, how are you today?

<Leslie> Good thanks Bob. I have a specific topic to explore if that is OK. Can we talk about time-traps?

<Bob> OK – do you have a specific reason for choosing that topic?

<Leslie> Yes. The blog last week about ‘Recipe for Chaos‘ set me thinking and I remembered that time-traps were mentioned in the FISH course but I confess, at the time, I did not understand them. I still do not.

<Bob> Can you describe how the ‘Recipe for Chaos‘ blog triggered this renewed interest in time-traps?

<Leslie> Yes – the question that occurred to me was: ‘Is a time-trap a recipe for chaos?’

<Bob> A very good question! What do you feel the answer is?

<Leslie>I feel that time-traps can and do trigger chaos but I cannot explain how. I feel confused.

<Bob>Your intuition is spot on – so can you localize the source of your confusion?

<Leslie>OK. I will try. I confess I got the answer to the MCQ correct by guessing – and I wrote down the answer when I eventually guessed correctly – but I did not understand it.

<Bob>What did you write down?

<Leslie>“The lead time is independent of the flow”.

<Bob> OK. That is accurate – though I agree it is perhaps a bit abstract. One source of confusion may be that there are different causes of time-traps and there is a lot of overlap with other chaos-creating policies. Do you have a specific example we can use to connect theory with reality?

<Leslie> OK – that might explain my confusion.  The example that jumped to mind is the RTT target.

<Bob> RTT?

<Leslie> Oops – sorry – I know I should not use undefined abbreviations. Referral to Treatment Time.

<Bob> OK – can you describe what you have mapped and measured already?

<Leslie> Yes.  When I plot the lead-time for patients in date-of-treatment order the process looks stable but the histogram is multi-modal with a big spike just underneath the RTT target of 18 weeks. What you describe as the ‘Horned Gaussian’ – the sign that the performance target is distorting the behaviour of the system and the design of the system is not capable on its own.

<Bob> OK and have you investigated why there is not just one spike?

<Leslie> Yes – the factor that best explains that is the ‘priority’ of the referral.  The  ‘urgents’ jump in front of the ‘soons’ and both jump in front of the ‘routines’. The chart has three overlapping spikes.

<Bob> That sounds like a reasonable policy for mixed-priority demand. So what is the problem?

<Leslie> The ‘Routine’ group is the one that clusters just underneath the target. The lead time for routines is almost constant but most of the time those patients sit in one queue or another being leap-frogged by other higher-priority patients. Until they become high-priority – then they do the leap frogging.

<Bob> OK – and what is the condition for a time trap again?

<Leslie> That the lead time is independent of flow.

<Bob>Which implies?

<Leslie> Um. let me think. That the flow can be varying but the lead time stays the same?

<Bob> Yup. So is the flow of routine referrals varying?

<Leslie> Not over the long term. The chart is stable.

<Bob> What about over the short term? Is demand constant?

<Leslie>No of course not – it varies – but that is expected for all systems. Constant means ‘over-smoothed data’ – the Flaw of Averages trap!

<Bob>OK. And how close is the average lead time for routines to the RTT maximum allowable target?

<Leslie> Ah! I see what you mean. The average is about 17 weeks and the target is 18 weeks.

<Bob>So what is the flow variation on a week-to-week time scale?

<Leslie>Demand or Activity?

<Bob>Both.

<Leslie> H’mm – give me a minute to re-plot flow as a weekly-aggregated chart. Oh! I see what you mean – the weekly activity and demand are both varying widely and they are not in sync with each other. Work in progress must be wobbling up and down a lot! So how can the lead time variation be so low?
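Leslie’s puzzle – wildly varying flow but almost constant lead time – can be reproduced with a toy model of the policy under discussion. All the numbers here are assumptions for illustration: every routine referral is simply held until week 17 and then treated, whatever the demand.

```python
import random

random.seed(7)
WEEKS, DELAY = 60, 17                    # every job treated 17 weeks after referral

demand = [random.randint(10, 40) for _ in range(WEEKS)]  # varying weekly referrals
activity = [0] * DELAY + demand[:-DELAY] # this week's activity = demand 17 weeks ago

wip, total = [], 0
for week in range(WEEKS):
    total += demand[week] - activity[week]  # queue grows by arrivals minus departures
    wip.append(total)

print("lead time: constant at", DELAY, "weeks for every job")
print("weekly demand ranges from", min(demand), "to", max(demand))
print("work-in-progress wobbles from", min(wip), "to", max(wip))
```

The lead time is fixed by policy, so it carries no information about flow; all the variation is absorbed by the invisible, wobbling queue – the signature of a time-trap.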

<Bob>What do the flow histograms look like?

<Leslie> Um. Just a second. That is weird! They are both bi-modal with peaks at the extremes and not much in the middle – the exact opposite of what I expected to see! I expected a centered peak.

<Bob>What you are looking at is the characteristic flow fingerprint of a chaotic system – it is called ‘thrashing’.

<Leslie> So I was right!

<Bob> Yes. And now you know the characteristic pattern to look for. So what is the policy design flaw here?

<Leslie>The DRAT – the delusional ratio and arbitrary target?

<Bob> That is part of it – that is the external driver policy. The one you cannot change easily. What is the internally driven policy? The reaction to the DRAT?

<Leslie> The policy of leaving routine patients until they are about to breach then re-classifying them as ‘urgent’.

<Bob>Yes! It is called a ‘Prevarication Policy’ and it is surprisingly and uncomfortably common. Ask yourself – do you ever prevaricate? Do you ever put off ‘lower priority’ tasks until later and then not fill the time freed up with ‘higher priority tasks’?

<Leslie> OMG! I do that all the time! I put low priority and unexciting jobs on a ‘to do later’ heap but I do not sit idle – I do then focus on the high priority ones.

<Bob> High priority for whom?

<Leslie> Ah! I see what you mean. High priority for me. The ones that give me the biggest reward! The fun stuff or the stuff that I get a pat on the back for doing or that I feel good about.

<Bob> And what happens?

<Leslie> The heap of ‘no-fun-for-me-to-do’ jobs gets bigger and I await the ‘reminders’ and then have to rush round in a mad panic to avoid disappointment, criticism and blame. It feels chaotic. I get grumpy. I make more mistakes and I deliver lower-quality work. If I do not get a reminder I assume that the job was not that urgent after all and if I am challenged I claim I am too busy doing the other stuff.

<Bob> Have you avoided disappointment?

<Leslie> Ah! No – the fact that I needed to be reminded meant that I had already disappointed. And not getting a reminder does not prove I have not disappointed either. Most people blame rather than complain. I have just managed to erode other people’s trust in my reliability. I have disappointed myself. I have achieved exactly the opposite of what I intended. Drat!

<Bob> So what is the reason that you work this way? There will be a reason.  A good reason.

<Leslie> That is a very good question! I will reflect on that because I believe it will help me understand why others behave this way too.

<Bob> OK – I will be interested to hear your conclusion.  Let us return to the question. What is the  downside of a ‘Prevarication Policy’?

<Leslie> It creates stress, chaos, fire-fighting, last minute changes, increased risk of errors, more work and it erodes quality, confidence and trust.

<Bob>Indeed so – and the impact on productivity?

<Leslie> The activity falls, the system productivity falls, revenue falls, queues increase, waiting times increase and the chaos increases!

<Bob> And?

<Leslie> We treat the symptoms by throwing resources at the problem – waiting list initiatives – and that pushes our costs up. Either way we are heading into a spiral of decline and disappointment. We do not address the root cause.

<Bob> So what is the way out of chaos?

<Leslie> Reduce the volume on the destabilizing feedback loop? Stop the managers meddling!

<Bob> Or?

<Leslie> Eh? I do not understand what you mean. The blog last week said management meddling was the problem.

<Bob> It is a problem. How many feedback loops are there?

<Leslie> Two – that need to be balanced.

<Bob> So what is another option?

<Leslie> OMG! I see. Turn UP the volume of the stabilizing feedback loop!

<Bob> Yup. And that is a lot easier to do in reality. So that is your other challenge to reflect on this week. And I am delighted to hear you using the terms ‘stabilizing feedback loop’ and ‘destabilizing feedback loop’.

<Leslie> Thank you. That was a lesson for me after last week – when I used the terms ‘positive and negative feedback’ it was interpreted in the emotional context – positive feedback as encouragement and negative feedback as criticism.  So ‘reducing positive feedback’ in that sense is the exact opposite of what I was intending. So I switched my language to using ‘stabilizing and destabilizing’ feedback loops that are much less ambiguous and the confusion and conflict disappeared.

<Bob> That is very useful learning Leslie … I think I need to emphasize that distinction more in the blog. That is one advantage of online media – it can be updated!

 <Leslie> Thanks again Bob!  And I have the perfect opportunity to test a new no-prevarication-policy design – in part of the system that I have complete control over – me!

There are only four ingredients required to create Chaos.

The first is Time.

All processes and systems are time-dependent.

The second ingredient is a Metric of Interest (MoI).

That means a system performance metric that is important to all – such as Safety or Quality or Cost; and usually all three.

The third ingredient is a feedback loop of a specific type – it is called a Negative Feedback Loop.  The NFL is one that tends to adjust, correct and stabilize the behaviour of the system.

Negative feedback loops are very useful – but they have a drawback. They resist change and they reduce agility. The name is also a disadvantage – the term ‘negative feedback’ is often associated with criticism.

The fourth and final ingredient in our Recipe for Chaos is also a feedback loop, but one of a different design – a Positive Feedback Loop (PFL) – one that amplifies variation and change.

Positive feedback loops are also very useful – they are required for agility – quick reactions to unexpected events. Fast reflexes.

The downside of a positive feedback loop is that it increases instability.

The name is also confusing – ‘positive feedback’ is associated with encouragement and praise.

So in this context it is better to use the terms ‘stabilizing feedback’ and ‘destabilizing feedback’  loops.

When we mix these four ingredients in just the right amounts we get a system that may behave chaotically. That is surprising. It is counter-intuitive. It is also how the Universe works.

For example:

Suppose our Metric of Interest is the amount of time that patients spend in an Accident and Emergency Department. We know that the longer this time is the less happy they are and the higher the risk of avoidable harm – so it is a reasonable goal to reduce it.

Longer-than-necessary waiting times have many root causes – it is a non-specific metric.  That means there are many things that could be done to reduce waiting time, and the most effective actions will vary from case-to-case, day-to-day and even minute-to-minute.  There is no one-size-fits-all solution.

This implies that those best placed to correct the causes of the delays are the people who know the specific system well – because they work in it. Those who actually deliver urgent care. They are the stabilizing agent in our Recipe for Chaos.

The destabilizing feedback loop is the beat-the-arbitrary-target policy.

This policy typically involves:
(1) Setting a target that is impossible for the current design to achieve reliably;
(2) inspecting how close to the target we are; then
(3) using the data to justify threats of dire consequences for failure.

Now we have a Recipe for Chaos.

The higher the failure rate the more inspection, reports, meetings, exhortation, threats, interruptions, and interventions that are generated.  Fear-fuelled management meddling. This behaviour consumes valuable time – so leaves less time to do the worthwhile work. The pressure increases and makes the system even more sensitive to small changes. Delays multiply and errors occur more often.  Tempers become frayed and molehills become magnified into mountains. Irritations become arguments.  And all of this makes the problem worse rather than better. Less stable. More variable. More chaotic.

It is actually possible to write a simple equation that describes this characteristic behaviour of real systems – and that was a very surprising finding when it was described in 1976 by the theoretical biologist Robert May.

This equation is called the logistic equation.

Here is the abstract of his seminal paper.

Nature 261, 459-467 (10 June 1976)

Simple mathematical models with very complicated dynamics

First-order difference equations arise in many contexts in the biological, economic and social sciences. Such equations, even though simple and deterministic, can exhibit a surprising array of dynamical behaviour, from stable points, to a bifurcating hierarchy of stable cycles, to apparently random fluctuations. There are consequently many fascinating problems, some concerned with delicate mathematical aspects of the fine structure of the trajectories, and some concerned with the practical implications and applications. This is an interpretive review of them.

The fact that this chaotic behaviour is completely predictable and does not need any ‘random’ element was a big surprise. Chaotic is not the same as random. The observed chaos in the urgent healthcare system is the result of the design of the system – or more specifically of the current healthcare system management policies.
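May’s “simple mathematical model” is easy to reproduce for ourselves. Here is a minimal sketch in Python (the parameter values are illustrative choices, not from May’s paper) that iterates the logistic equation x → r·x·(1 − x) and shows how turning up the ‘gain’ r drives the same deterministic rule from a stable point, through a repeating cycle, into chaos:

```python
def logistic_orbit(r, x0=0.2, warmup=500, keep=8):
    """Iterate x -> r * x * (1 - x); discard the transient, return the tail."""
    x = x0
    for _ in range(warmup):
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

for r in (2.8, 3.2, 3.9):  # low, medium and high 'gain'
    print(r, logistic_orbit(r))
# r = 2.8 settles on one value; r = 3.2 flips between two;
# r = 3.9 never repeats - chaotic, yet with no random element at all.
```

The only thing that changes between the three runs is the gain r – the volume control on the variation amplifier.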

This has a number of profound implications – the most important of which is this:

If the chaos we observe in our health care systems is the predictable and inevitable result of the management policies we ourselves have created and adopted – then eliminating the chaos will only require us to re-design these policies.

In fact we only need to tweak one of the ingredients of the Recipe for Chaos – the strength of the destabilizing feedback loop. The gain. The volume control on the variation amplifier!

This is called the MM factor – otherwise known as ‘Management Meddling‘.

We need to keep all four ingredients though – because we need our system to have both agility and stability.

The flaw is not the Managers themselves – it is their learned behaviour – the Meddling.  It is learned so it can be unlearned. We need to keep the Managers and to change their role slightly. As they unlearn their old habits they move from being Policy-Enforcers and Fire-Fighters to becoming Policy-Engineers and Chaos-Calmers. They focus on learning to understand the root causes of variation that come from outside the circle of influence of the Workers.   They learn how to rationally and radically redesign system policies to achieve both agility and stability.

And doing that requires developing systemic-thinking and learning Improvement Science skills – because chaos is counter-intuitive. If it were intuitively-obvious we would have discovered the nature of chaos thousands of years ago. The fact that it was not discovered until 1976 demonstrates this.

It is our homo sapiens intuition that got us into this mess!  The inherent flaws of the caveman wetware between our ears.  Our current management policies are intuitively-obvious, collectively-agreed, rubber-stamped and wrong! They are a Recipe for Chaos.

And when we learn to re-design our system policies and upload the new system software then the chaos evaporates as if a magic wand had been waved.

That comes as a big surprise!

And what also comes as a big surprise is just how small the counter-intuitive policy design tweaks often are.

Smooth, effective, efficient, safe and productive flow is restored. Calm confidence reigns. Safety, Quality and Productivity all increase – at the same time.  The emotional storm clouds dissipate and the cultural sun shines again.

Everyone feels better. Everyone. Patients, workers and managers.

This is Win-Win-Win improvement by design. Improvement Science.

If we were exploring the corridors in an unfamiliar building and our way forward was blocked by a door that looked like this … we would suspect that something of value lay beyond.

We know there is an unknown.

The puzzle we have to solve to release the chain tells us this. This is called an “affordance” – the design of the lock tells us what we need to do.

More often, however, what we need to move forward is unknown to us and the problems we face afford no clues as to how to solve them.  Worse than that – the clues they do offer are misleading. Our intuition is tricked. We do the ‘intuitively obvious’ thing and the problem gets worse.

It is easy to lose confidence, become despondent and even start to believe there is no solution. To assume the problem is impossible for us to solve.

Then one day someone shows us how to solve an “impossible” problem. And with the benefit of our new perspective the solution looks simple and how it works is obvious. But only in retrospect.

Our unknown was known all along. But not by us. We were ignorant.

And our intuitions are flaky, forgetful and fickle. Not to be trusted. And our egos are fragile too – we do not like to feel flaky, forgetful and fickle. So we lie to ourselves and we confuse obvious-in-hindsight with obvious-in-foresight. They are not the same.

Suppose we now want to demonstrate our new understanding to someone else – to help them solve their “impossible” problem. How do we do that?

Do we say “But it is obvious – if you cannot see it you must be blind or stupid!”? How can we say that when it was not obvious to us only a short time ago? Is our ego getting in the way again? Can our intuition or ego be trusted at all?

To help others gain insight and deepen their understanding we must put ourselves back into the shoes we used to be in – or rather their shoes now – and look at the problem again from their perspective. With the benefit of three views of the problem – our old one, their current one and our new one – we may then be able to see where the Unknown-Known is for them.

Only then can we help them discover it for themselves … and then they can help others discover their Unknown-Knowns.  That is how understanding spreads.

And understanding is the bridge between Knowledge and Wisdom.

And it is a wonderful thing to see someone move from confusion to clarity by asking them just the right question at just the right time in just the right way.

No more than that.

Socrates knew how to do this a long time ago – which is why it is called the Socratic Method.

 

A healthcare system has two inter-dependent parts. Let us call them the ‘hardware’ and the ‘software’ – terms we are more familiar with when referring to computer systems.

In a computer the critical-to-success software is called the ‘operating system’ – and we know that by the brand labels such as Windows, Linux, MacOS, or Android. There are many.

It is the O/S that makes the hardware fit-for-purpose. Without the O/S the computer is just a box of hot chips. A rather expensive room heater.

All the programs and apps that we use to deliver our particular information service require the O/S to manage the actual hardware. Without a coordinator there would be chaos.

In a healthcare system the ‘hardware’ is the buildings, the equipment, and the people.  They are all necessary – but they are not sufficient on their own.

The ‘operating system’ in a healthcare system are the management policies: the ‘instructions’ that guide the ‘hardware’ to do what is required, when it is required and sometimes how it is required.  These policies are created by managers – they are the healthcare operating system design engineers so-to-speak.

Change the O/S and you change the behaviour of the whole system – it may look exactly the same – but it will deliver a different performance. For better or for worse.


The invention of the transistor in 1947 led, within a few years, to the first commercially viable transistorized computers. They were faster, smaller, more reliable, cheaper to buy and cheaper to maintain than their predecessors. They were also programmable.  And with many separate customer programs demanding hardware resources, an effective and efficient operating system was needed. So the understanding of “good” O/S design developed quickly.

In the 1960s the first integrated circuits appeared and the computer world became dominated by mainframe computers. They filled air-conditioned rooms with gleaming cabinets tended lovingly by white-coated technicians carrying clipboards. Mainframes were, and still are, very expensive to build and to run! The valuable resource that was purchased by the customers was ‘CPU time’.  So the operating systems of these machines were designed to squeeze every microsecond of value out of the expensive-to-maintain CPU: for very good commercial reasons. Delivering the “data processing jobs” right, on-time and every-time was paramount.

The design of the operating system software was critical to the performance and to the profit.  So a lot of brain power was invested in learning how to schedule jobs; how to orchestrate the parts of the hardware system so that they worked in harmony; how to manage data buffers to smooth out flow and priority variation; how to design efficient algorithms for number crunching, sorting and searching; and how to switch from one task to the next quickly and without wasting time or making errors.

Every modern digital computer has inherited this legacy of learning.

In the 1970s the first commercial microprocessors appeared – which reduced the size and cost of computers by orders of magnitude again – and increased their speed and reliability even further. Silicon Valley blossomed and although the first micro-chips were rather feeble in comparison with their mainframe equivalents they ushered in the modern era of the desktop-sized personal computer.

In the late 1970s and 1980s players such as Microsoft and Apple appeared to exploit this vast new market. Microsoft offered just the operating system for the new IBM-PC hardware (called MS-DOS), while Apple created both the hardware and the software as a tightly integrated system.

The ergonomic-seamless-design philosophy at Apple led to the Apple Mac which revolutionised personal computing. It made computers usable by people who had no interest in the innards or in programming. The Apple Macs were the “designer” computers and were reassuringly more expensive. The innovations that Apple designed into the Mac are now expected in all personal computers as well as the latest generations of smartphones and tablets.

Today we carry more computing power in our top pocket than a mainframe of the 1970s could deliver! The design of the operating system has hardly changed though.

It was the O/S design that leveraged the maximum potential of the very expensive hardware.  And that is still the case – but we take it completely for granted.


Exactly the same principle applies to our healthcare systems.

The only difference is that the flow is not 1s and 0s – it is patients and all the things needed to deliver patient care. The ‘hardware’ is the expensive part to assemble and run – and the largest cost is the people.  Healthcare is a service delivered by people to people. Highly-trained nurses, doctors and allied healthcare professionals are expensive.

So the key to healthcare system performance is high quality management policy design – the healthcare operating system (HOS).

And here we hit a snag.

Our healthcare management policies have not been designed with the same rigor as the operating systems for our computers. They have not been designed using the well-understood principles of flow physics. The various parts of our healthcare system do not work well together. The flows are fractured. The silos work independently. And the ubiquitous symptom of this dysfunction is confusion, chaos and conflict.  The managers and the doctors are at each other’s throats. And this is because the management policies have evolved through a largely ineffective and very inefficient strategy called “burn-and-scrape”. Firefighting.

The root cause of the poor design is that neither healthcare managers nor the healthcare workers are trained in operational policy design. Design for Safety. Design for Quality. Design for Delivery. Design for Productivity.

And we are all left with a lose-lose-lose legacy: a system that is no longer fit-for-purpose and a generation of managers and clinicians who have never learned how to design the operational and clinical policies that ensure the system actually delivers what the ‘hardware’ is capable of delivering.


For example:

Suppose we have a simple healthcare system with three stages called A, B and C.  All the patients flow through A, then to B and then to C.  Let us assume these three parts are managed separately as departments with separate budgets, and that they are free to use whatever policies they choose so long as they achieve their performance targets – which are (a) to do all the work, (b) to stay in budget and (c) to deliver on time.  So far so good.

Now suppose that the work that arrives at Department B from Department A is not all the same, and that different tasks require different pathways and different resources. A Radiology, Pathology or Pharmacy Department for example.

Sorting the work into separate streams and having expensive special-purpose resources sitting idle waiting for work to arrive is inefficient and expensive. It will push up the unit cost – the total cost divided by the total activity. This is called ‘carve-out’.

Switching resources from one pathway to another takes time and that change-over time implies some resources are not able to do the work for a while.  These inefficiencies will contribute to the total cost and therefore push up the “unit-cost”. The total cost for the department divided by the total activity for the department.

So Department B decides to improve its “unit cost” by deploying a policy called ‘batching’.  It starts to sort the incoming work into different types of task and when a big enough batch has accumulated it then initiates the change-over. The cost of the change-over is shared by the whole batch. The “unit cost” falls because Department B is now able to deliver the same activity with fewer resources because they spend less time doing the change-overs. That is good. Isn’t it?

But what is the impact on Departments A and C and what effect does it have on delivery times and work in progress and the cost of storing the queues?

Department A notices that it can no longer pass work to B when it wants because B will only start the work when it has a full batch of requests. The queue of waiting work sits inside Department A.  That queue takes up space and that space costs money but the queue cost is incurred by Department A – not Department B.

What Department C sees is the order of the work changed by Department B to create a bigger variation in lead times for consecutive tasks. So if the whole system is required to achieve a delivery time specification – then Department C has to expedite the longest waiters and delay the shortest waiters – and that takes work,  time, space and money. That cost is incurred by Department C not by Department B.

The unit costs for Department B go down – and those for A and C both go up. The system is less productive as a whole.  The queues and delays caused by the policy change means that work can not be completed reliably on time. The blame for the failure falls on Department C.  Conflict between the parts of the system is inevitable. Lose-Lose-Lose.
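The batching trade-off can be put into numbers. A toy sketch in Python (all the figures are invented for illustration – task time, changeover time and arrival interval are assumptions, not real data) shows how sharing one changeover across a bigger batch cuts Department B’s resource-time per task, while the average wait for a batch to fill – a queue cost paid elsewhere – grows:

```python
def dept_b(batch_size, task_time=8.0, changeover=30.0, arrival_interval=10.0):
    """Return (resource-minutes per task, average minutes a task queues
    waiting for its batch to fill). All parameter values are illustrative."""
    # The changeover cost is shared across the whole batch ...
    busy_per_task = task_time + changeover / batch_size
    # ... but on average a task waits half a batch-fill time before B starts.
    avg_wait = (batch_size - 1) / 2 * arrival_interval
    return busy_per_task, avg_wait

for b in (1, 5, 20):
    busy, wait = dept_b(b)
    print(f"batch={b:2d}  resource-min/task={busy:5.1f}  avg wait={wait:5.1f} min")
# batch= 1  resource-min/task= 38.0  avg wait=  0.0 min
# batch= 5  resource-min/task= 14.0  avg wait= 20.0 min
# batch=20  resource-min/task=  9.5  avg wait= 95.0 min
```

Department B’s unit cost falls as the batch grows – but the queue sits in Department A and the lead-time variation lands on Department C. The “saving” is real only if we ignore where the waiting goes.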

And conflict is always expensive – on all dimensions – emotional, temporal and financial.


The policy design flaw here looks like it is ‘batching’ – but that policy is just a reaction to a deeper design flaw. It is a symptom.  The deeper flaw is not even the use of ‘unit costing’. That is a useful enough tool. The deeper flaw is the incorrect assumption that improving the unit costs of the stages independently will always deliver an improvement in whole-system productivity.

This is incorrect. This error is the result of ‘linear thinking’.

The Laws of Flow Physics do not work like this. Real systems are non-linear.

To design the management policies for a non-linear system using linear-thinking is guaranteed to fail. Disappointment and conflict is inevitable. And that is what we have. As system designers we need to use ‘systems-thinking’.

This discovery comes as a bit of a shock to management accountants. They feel rather challenged by the assertion that some of their cherished “cost improvement policies” are actually making the system less productive. Precisely the opposite of what they are trying to achieve.

And it is the senior management that decide the system-wide financial policies so that is where the linear-thinking needs to be challenged and the ‘software patch’ applied first.

It is not a major management software re-write. Just a minor tweak is all that is required.

And the numbers speak for themselves. It is not a difficult experiment to do.


So that is where we need to start.

We need to learn Healthcare Operating System design and we need to learn it at all levels in healthcare organisations.

And that system-thinking skill has another name – it is called Improvement Science.

The good news is that it is a lot easier to learn than most people believe.

And that is a big shock too – because how to do this has been known for 50 years.

So if you would like to see a real and current example of how poor policy design leads to falling productivity and then how to re-design the policies to reverse this effect have a look at Journal Of Improvement Science 2013:8;1-20.

And if you would like to learn how to design healthcare operating policies that deliver higher productivity with the same resources then the first step is FISH.

Improvement Science requires the effective, efficient and coordinated use of diagnosis, design and delivery tools.

Experience has also taught us that it is not just about the tools – each must be used as it was designed.

The craftsman knows his tools and knows what instrument to use, where and when the context dictates; and how to use it with skill.

Some tools are simple and effective – easy to understand and to use. The kitchen knife is a good example. It does not require an instruction manual to use it.

Other tools are more complex. Very often because they have a specific purpose. They are not generic. And it may not be intuitively obvious how to use them.  Many labour-saving household appliances have specific purposes – the microwave oven, the dish-washer and so on – but they have complex controls and settings that we need to manipulate to direct the “domestic robot” to deliver what we actually want.  Very often these controls are not intuitively obvious – we are dealing with a black box – and our understanding of what is happening inside is vague.

Very often we do not understand how the buttons and dials that we can see and touch – the inputs – actually influence the innards of the box to determine the outputs. We do not have a mental model of what is inside the Black Box. We do not know – we are ignorant.

In this situation we may resort to just blindly following the instructions;  or blindly copying what someone else does; or blindly trying random combinations of inputs until we get close enough to what we want. No wiser at the end than we were at the start.  The common thread here is “blind”. The box is black. We cannot see inside.

And the complex black box is deliberately made so – because the supplier of the super-tool does not want their “secret recipe” to be known to all – least of all their competitors.

This is a perfect recipe for confusion and for conflict. Lose-Lose-Lose.

Improvement Science is dedicated to eliminating confusion and conflict – so Black Box Tools are NOT on the menu.

Improvement Scientists need to understand how their tools work – and the best way to achieve that level of understanding is to design and build their own.

This may sound like re-inventing the wheel but it is not about building novel tools – it is about re-creating the tried and tested tools – for the purpose of understanding how they work. And understanding their strengths, their weaknesses, their opportunities and their risks or threats.

And doing that requires guidance from a mentor who has been through this same learning journey. Starting with simple, intuitive tools, and working step-by-step to design, build and understand the more complex ones.

So where do we start?

In the FISH course the first tool we learn to use is a Gantt Chart.

It was invented by Henry Laurence Gantt about 100 years ago and requires nothing more than pencil and paper. Coloured pencils and squared paper are even better.

This is an example of a Gantt Chart for a Day Surgery Unit.

At the top are the “tasks” – patients 1 and 2; and at the bottom are the “resources”.

Time runs left to right.

Each coloured bar appears twice: once on each chart.

The power of a Gantt Chart is that it presents a lot of information in a very compact and easy-to-interpret format. That is what Henry Gantt intended.

A Gantt Chart is like the surgeon’s scalpel. It is a simple, generic easy-to-create tool that has a wide range of uses. The skill is knowing where, when and how to use it: and just as importantly where-not, when-not and how-not.
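A pencil-and-paper Gantt chart can even be sketched in a few lines of code. This toy example in Python (the schedule is invented, not the Day Surgery example above) prints the same four bars twice – once grouped by task and once by resource – which is exactly the two-view structure described above:

```python
# Each bar is (task, resource, start, duration); every bar appears in both views.
bars = [("Patient 1", "Theatre",  0, 3), ("Patient 1", "Recovery", 3, 2),
        ("Patient 2", "Theatre",  3, 3), ("Patient 2", "Recovery", 6, 2)]

def gantt(view, width=10):
    """Render one view ('task' or 'resource') as rows of text."""
    key = 0 if view == "task" else 1
    rows = []
    for name in dict.fromkeys(bar[key] for bar in bars):  # preserve order
        line = ["."] * width
        for bar in bars:
            if bar[key] == name:
                for t in range(bar[2], bar[2] + bar[3]):
                    line[t] = "#"
        rows.append(f"{name:<10}{''.join(line)}")
    return rows

print("\n".join(gantt("task")))      # one row per patient
print("\n".join(gantt("resource")))  # the same bars, one row per resource
```

Reading the two views together immediately shows both the flow of each task and the utilisation of each resource – the compactness that Henry Gantt intended.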

The second tool that an Improvement Scientist learns to use is the Shewhart or time-series chart.

It was invented about 90 years ago.

This is a more complex tool and as such there is a BIG danger that it is used as a Black Box with no understanding of the innards.  The SPC and Six-Sigma Zealots sell it as a Magic Box. It is not.

We could paste any old time-series data into a bit of SPC software; twiddle with the controls until we get the output we want; and copy the chart into our report. We could do that and hope that no-one will ask us to explain what we have done and how we have done it. Most do not because they do not want to appear ‘ignorant’. The elephant is in the room though.  There is a conspiracy of silence.

The elephant-in-the-room is the risk we take when use Black Box tools – the risk of GIGO. Garbage In Garbage Out.

And unfortunately we have a tendency to blindly trust what comes out of the Black Box that a plausible Zealot tells us is “magic”. This is the Emperor’s New Clothes problem.  Another conspiracy of silence follows.

The problem here is not the tool – it is the desperate person blindly wielding it. The Zealots know this and they warn the Desperados of the risk and offer their expensive Magician services. They are not interested in showing how the magic trick is done though! They prefer the Box to stay Black.

So to avoid this cat-and-mouse scenario and to understand both the simpler and the more complex tools, and to be able to use them effectively and safely, we need to be able to build one for ourselves.

And the know-how to do that is not obvious – if it were we would have already done it – so we need guidance.

And once we have built our first one – a rough-and-ready working prototype – then we can use the existing ones that have been polished with long use. And we can appreciate the wisdom that has gone into their design. The Black Box becomes Transparent.
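The innards of the simplest Shewhart chart – the XmR or ‘individuals’ chart – really are small enough to build ourselves. A minimal sketch in Python (the waiting-time figures are invented) computes the centre line and the natural process limits from the mean moving range, using the standard XmR constant of 2.66:

```python
def xmr_limits(data):
    """Individuals (XmR) chart: lower limit, centre line, upper limit.
    Limits = mean +/- 2.66 * average moving range (the standard XmR constant)."""
    centre = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return centre - 2.66 * avg_mr, centre, centre + 2.66 * avg_mr

# Invented weekly waiting-time figures (minutes)
data = [32, 35, 31, 38, 34, 33, 36, 30, 37, 34]
lcl, cl, ucl = xmr_limits(data)
print(f"LCL={lcl:.1f}  CL={cl:.1f}  UCL={ucl:.1f}")
# LCL=22.8  CL=34.0  UCL=45.2
```

A point outside the limits signals a special cause worth investigating. Once we can compute the limits ourselves the Box is no longer Black.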

So learning how to build the essential tools is the first part of the Improvement Science Practitioner (ISP) training – because without that knowledge it is difficult to progress very far. And without that understanding it is impossible to teach anyone anything other than to blindly follow a Black Box recipe.

Of course Magic Black Box Solutions Inc will not warm to this idea – they may not want to reveal what is inside their magic product. They are fearful that their customers may discover that it is much simpler than they are being told.  And we can test that hypothesis by asking them to explain how it works in language that we can understand. If they cannot (or will not) then we may want to keep looking for someone who can and will.

<Lesley>Hi Bob! How are you today?

<Bob>OK thanks Lesley. And you?

<Lesley>I am looking forward to our conversation. I have two questions this week.

<Bob>OK. What is the first one?

<Lesley>You have taught me that improvement-by-design starts with the “purpose” question and that makes sense to me. But when I ask that question in a session I get an “eh?” reaction and I get nowhere.

<Bob>Quod facere bonum opus et quomodo te cognovi unum?

<Lesley>Eh?

<Bob>I asked you a purpose question.

<Lesley>Did you? What language is that? Latin? I do not understand Latin.

<Bob>So although you recognize the language you do not understand what I asked, the words have no meaning. So you are unable to answer my question and your reaction is “eh?”. I suspect the same is happening with your audience. Who are they?

<Lesley>Front-line clinicians and managers who have come to me to ask how to solve their problems. Their Niggles. They want a how-to-recipe and they want it yesterday!

<Bob>OK. Remember the Temperament Treacle conversation last week. What is the commonest Myers-Briggs Type preference in your audience?

<Lesley>It is xSTJ – tough minded Guardians.  We did that exercise. It was good fun! Lots of OMG moments!

<Bob>OK – is your “purpose” question framed in a language that the xSTJ preference will understand naturally?

<Lesley>Ah! Probably not! The “purpose” question is future-focused, conceptual, strategic, value-loaded and subjective.

<Bob>Indeed – it is an iNtuitor question. xNTx or xNFx. Pose that question to a roomful of academics or executives and they will debate it ad infinitum.

<Lesley>More Latin – but that phrase I understand. You are right.  And my own preference is xNTP so I need to translate my xNTP “purpose” question into their xSTJ language?

<Bob>Yes. And what language do they use?

<Lesley>The language of facts, figures, jobs-to-do, work-schedules, targets, budgets, rational, logical, problem-solving, tough-decisions, and action-plans. Objective, pragmatic, necessary stuff that keeps the operational-wheels-turning.

<Bob>OK – so what would “purpose” look like in xSTJ language?

<Lesley>Um. Good question. Let me start at the beginning. They came to me in desperation because they are now scared enough to ask for help.

<Bob>Scared of what?

<Lesley>Unintentionally failing. They do not want to fail and they do not need beating with sticks. They are tough enough on themselves and each other.

<Bob>OK that is part of their purpose. The “Avoid” part. The bit they do not want. What do they want? What is the “Achieve” part? What is their “Nice If”?

<Lesley>To do a good job.

<Bob>Yes. And that is what I asked you – but in an unfamiliar language. Translated into English I asked “What is a good job and how do you know you are doing one?”

<Lesley>Ah ha! That is it! That is the question I need to ask. And that links in the first map – The 4N Chart®. And it links in measurement, time-series charts and BaseLine© too. Wow!

<Bob>OK. So what is your second question?

<Lesley>Oh yes! I keep getting asked “How do we work out how much extra capacity we need?” and I answer “I doubt that you need any more capacity.”

<Bob>And their response is?

<Lesley>Anger and frustration! They say “That is obvious rubbish! We have a constant stream of complaints from patients about waiting too long and we are all maxed out so of course we need more capacity! We just need to know the minimum we can get away with – the what, where and when – so we can work out how much it will cost for the business case.”

<Bob>OK. So what do they mean by the word “capacity”. And what do you mean?

<Lesley>Capacity to do a good job?

<Bob>Very quick! Ho ho! That is a bit imprecise and subjective for a process designer though. The Laws of Physics need the terms “capacity”, “good” and “job” clearly defined – with units of measurement that are meaningful.

<Lesley>OK. Let us define “good” as “delivered on time” and “job” as “a patient with a health problem”.

<Bob>OK. So how do we define and measure capacity? What are the units of measurement?

<Lesley>Ah yes – I see what you mean. We touched on that in FISH but did not go into much depth.

<Bob>Now we dig deeper.

<Lesley>OK. FISH talks about three interdependent forms of capacity: flow-capacity, resource-capacity, and space-capacity.

<Bob>Yes. They are the space-and-time capacities. If we are too loose with our use of these and treat them as interchangeable then we will create the confusion and conflict that you have experienced. What are the units of measurement of each?

<Lesley>Um. Flow-capacity will be in the same units as flow, the same units as demand and activity – tasks per unit time.

<Bob>Yes. Good. And space-capacity?

<Lesley>That will be in the same units as work in progress or inventory – tasks.

<Bob>Good! And what about resource-capacity?

<Lesley>Um – Will that be resource-time – so time?

<Bob>Actually it is resource-time per unit time. So they have different units of measurement. It is invalid to mix them up any-old-way. It would be meaningless to add them for example.

<Lesley>OK. So I cannot see how to create a valid combination from these three! I cannot get the units of measurement to work.

<Bob>This is a critical insight. So what does that mean?

<Lesley>There is something missing?

<Bob>Yes. Excellent! Your homework this week is to work out what the missing pieces of the capacity-jigsaw are.

<Lesley>You are not going to tell me the answer?

<Bob>Nope. You are doing ISP training now. You already know enough to work it out.

<Lesley>OK. Now you have got me thinking. I like it. Until next week then.

<Bob>Have a good week.
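The three capacities in this dialogue can be made concrete by attaching explicit units to each. Here is a minimal illustrative sketch (the numbers and the `Quantity` helper are invented for the example, not part of the original conversation); it simply demonstrates Bob's point that quantities with different units of measurement cannot be added.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    units: str  # e.g. "tasks/hour", "tasks", "resource-hours/hour"

    def __add__(self, other):
        # Adding two quantities is only valid when their units match.
        if self.units != other.units:
            raise ValueError(f"cannot add {self.units} to {other.units}")
        return Quantity(self.value + other.value, self.units)

flow_capacity     = Quantity(12.0, "tasks/hour")           # same units as demand and activity
space_capacity    = Quantity(30.0, "tasks")                # same units as work-in-progress
resource_capacity = Quantity(4.0, "resource-hours/hour")   # resource-time per unit time

try:
    flow_capacity + space_capacity
except ValueError as err:
    print(err)  # cannot add tasks/hour to tasks
```

Each of the three capacities carries a different unit, so any valid combination of them must involve something more than simple addition – which is exactly Lesley's homework.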

If the headlines in the newspapers are a measure of social anxiety then healthcare in the UK is in a state of panic: “Hospitals Fear The Winter Crisis Is Here Early“.

The Panic Button is being pressed and the Patient Safety Alarms are sounding.

Closer examination of the statement suggests that the winter crisis is not unexpected – it is just here early.  So we are assuming it will be worse than last year – which was bad enough.

The evidence shows this fear is well founded.  Last year was the worst of the last five years and this year is shaping up to be worse still.

So if it is a predictable annual crisis and we have a lot of very intelligent, very committed, very passionate people working on the problem – then why is it getting worse rather than better?

One possible factor is Temperament Treacle.

This is the glacially slow pace of effective change in healthcare – often labelled as “resistance to change” and implying deliberate scuppering of the change boat by powerful forces within the healthcare system.

Resistance to the flow of change is probably a better term. We could call that cultural viscosity.  Treacle has a very high viscosity – it resists flow.  Wading through treacle is very hard work. So pushing change through cultural treacle is hard work. Many give up in exhaustion after a while.

So why the term “Temperament Treacle“?

Improvement Science has three parts – Processes, Politics and Systems.

Process Science is applied physics. It is an objective, logical, rational science. The Laws of Physics are not negotiable. They are absolute.

Political Science is applied psychology. It is a subjective, illogical, irrational science. The Laws of People are totally negotiable.  They are arbitrary.

Systems Science is a combination of Physics and Psychology. A synthesis. A synergy. A greater-than-the-sum-of-the-parts combination.

The Swiss psychiatrist Carl Gustav Jung studied psychology – and in 1921 published “Psychological Types“.  When this ground-breaking work was translated into English in 1923 it was picked up by Katharine Cook Briggs and made popular by her daughter Isabel.  Isabel Briggs married Clarence Myers and in 1942 Isabel Myers learned about the Humm-Wadsworth Scale, a tool for matching people with jobs. Using her knowledge of psychological type differences she set out to develop her own “personality sorting tool”. The first prototype appeared in 1943; in the 1950s she tested the third iteration and measured the personality types of 5,355 medical students and over 10,000 nurses.   The Myers-Briggs Type Indicator was published in 1962 and since then the MBTI® has been widely tested and validated and is the most extensively used personality type instrument. In 1980 Isabel Myers finished writing Gifts Differing just before she died, at the age of 82, after a twenty-year battle with cancer.

The essence of Jung’s model is that an individual’s temperament is largely innate and the result of a combination of three dimensions:

1. The input or perceiving process (P). The poles are Intuitor (N) or Sensor (S).
2. The decision or judging process (J). The poles are Thinker (T) or Feeler (F).
3. The output or doing process. The poles are Extraversion (E) or Introversion (I).

Each of Jung’s dimensions had two “opposite” poles, so when combined they gave eight types.  Isabel Myers, as a result of her extensive empirical testing, added a fourth dimension – which gives the four we see in the modern MBTI®.  The fourth dimension links the other three together – it describes whether the J or the P process is the one shown to the outside world. So the MBTI® has sixteen broad personality types.  In his 1998 book “Please Understand Me II”, David Keirsey put the MBTI® into historical context and concluded that there are four broad Temperaments – ones that have been described since ancient times.

When Isabel Myers measured different populations using her new tool she discovered a consistent pattern: that the proportions of the sixteen MBTI® types were consistent across a wide range of societies. Personality type is, as Jung had suggested, an innate part of the “human condition”. She also saw that different types clustered in different occupations. Finding the “right job” appeared to be a process of natural selection: certain types fitted certain roles better than others and people self-selected at an early age.  If their choice was poor then the person would be unhappy and would not achieve their potential.

Isabel’s work also showed that each type had both strengths and weaknesses – and that people performed better and felt happier when their role played to their temperament strengths.  It also revealed that considerable conflict could be attributed to type-mismatch.  Polar opposite types have the least psychological “common ground” – so when they attempt to solve a common problem they do so by different routes and using different methods and language. This generates confusion and conflict.  This is why Isabel Myers gave her book the title of “Gifts Differing” and her message was that just having awareness of and respect for the innate type differences was a big step towards reducing the confusion and conflict.

So what relevance does this have to change and improvement?

Well it turns out that certain types are much more open to change than others and certain types are much more resistant.  If an organisation, by the very nature of its work, attracts the more change resistant types then that organisation will be culturally more viscous to the flow of change. It will exhibit the cultural characteristics of temperament treacle.

The key to understanding Temperament and the MBTI® is to ask a series of questions:

Q1. Does the person have the N or S preference on their perceiving function?

A1=N then Q2: Does the person have a T or F preference on their judging function?
A2=T gives the xNTx combination which is called the Rational or phlegmatic temperament.
A2=F gives the xNFx combination which is called the Idealist or choleric temperament.

A1=S then Q3: Does the person show a J or P preference to the outside world?
A3=J gives the xSxJ combination which is called the Guardian or melancholic temperament.
A3=P gives the xSxP combination which is called the Artisan or sanguine temperament.
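The three sorting questions above amount to a small decision tree. Here is a sketch of it as a function (a hypothetical helper written for illustration, not part of the MBTI® instrument itself):

```python
def temperament(mbti: str) -> str:
    """Map a four-letter MBTI® code (e.g. 'ISTJ') to a Keirsey temperament."""
    mbti = mbti.upper()
    # Q1: N or S on the perceiving function (second letter)?
    if mbti[1] == "N":
        # Q2: T or F on the judging function (third letter)?
        return "Rational (phlegmatic)" if mbti[2] == "T" else "Idealist (choleric)"
    # Q3: J or P shown to the outside world (fourth letter)?
    return "Guardian (melancholic)" if mbti[3] == "J" else "Artisan (sanguine)"

print(temperament("ISTJ"))  # Guardian (melancholic)
print(temperament("ENTP"))  # Rational (phlegmatic)
```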

So which is the most change resistant temperament?  The answer may not be a big surprise. It is the Guardians. The melancholics. The SJ’s.

Bureaucracies characteristically attract SJ types. The upside is that they ensure stability – the downside is that they prevent agility.  Bureaucracies block change.

The NF Idealists are the advocates and the mentors: they love initiating and facilitating transformations with the dream of making the world a better place for everyone. They light the emotional bonfire and upset the apple cart. The NT Rationals are the engineers and the architects. They love designing and building new concepts and things – so once the Idealists have cracked the bureaucratic carapace they can swing into action. The SP Sanguines are the improvisers and expeditors – they love getting the new “concept” designs to actually work in the messy real world.

Unfortunately the grand designs dreamed up by the ‘N’s often do not work in practice – and the scene is set for the we-told-you-so game, and the name-shame-blame game.

So if initiating and facilitating change is the Achilles Heel of the SJ’s then what is their strength?

Let us approach this from a different perspective:

Let us put ourselves in the shoes of patients and ask ourselves: “What do we want from a System of Healthcare and from those who deliver that care – the doctors?”

1. Safe?
2. Reliable?
3. Predictable?
4. Decisive?
5. Dependable?
6. All the above?

These are the strengths of the SJ temperament. So how do doctors measure up?

In a recent observational study, 168 doctors who attended a leadership training course completed their MBTI® self-assessments as part of developing insight into temperament from the perspective of a clinical leader.  From the collective data we can answer our question: “Are there more SJ types in the medical profession than we would expect from the general population?”

The table shows the results – 60% of doctors were SJ compared with 35% expected for the general population.

Statistically this is a highly significant difference (p&lt;0.0001). Doctors are different.

It is of enormous practical importance as well.

We are reassured that the majority of doctors have a preference for the very traits that patients want from them. That may explain why the Medical Profession always ranks highest in the league table of “trusted professionals”. We need to be able to trust them – it could literally be a matter of life or death.

The table also shows where the doctors were thin on the ground: in the mediating, improvising, developing, constructing temperaments. The very set of skills needed to initiate and facilitate effective and sustained change.

So when the healthcare system is lurching from one predictable crisis to another – the very people we trust to deliver our health care are, by innate temperament, the least comfortable with changing the system of care itself.

That is a problem. A big problem.

Studies have shown that when we get over-stressed, fearful and start to panic, then in a desperate act of survival we tend to resort to the aspects of our temperament that are least well developed.  An SJ who is in panic-mode may resort to NP tactics: opinion-led, purposeless, conceptual discussion and collective decision paralysis. This is called the “headless chicken and rabbit in the headlights” mode. We have all experienced it.

A system that is no longer delivering fit-for-purpose performance because its purpose has shifted requires redesign.  The temperament treacle inhibits the flow of change so the crisis is not averted. The crisis happens, invokes panic and triggers ineffective and counter-productive behaviour. The crisis deepens and performance can drop catastrophically when the red tape is cut. It was the only thing holding the system together!

But while the bureaucracy is in disarray then innovation can start to flourish. And the next cycle starts.

It is a painful, slow, wasteful process called “reactionary evolution by natural selection“.

Improvement Science is different. It operates from a “proactive revolution through collective design” that is enjoyable, quick and efficient but it requires mastery of synergistic political science and process science. We do not have that capability – yet.

The table offers some hope.  It shows the majority of doctors are xSTJ.  They are Logical Guardians. That means that they solve problems using tried-tested-and-trustworthy logic. So they have no problem with the physics. Show them how to diagnose and design processes and they are inside their comfort zone.

Their collective weak spot is managing the politics – the critical cultural dimension of change. Often the result is manipulation rather than motivation. It does not work. The improvement stalls. Cynicism increases. The treacle gets thicker.

System-redesign requires synergistic support, development, improvisation and mediation. These strengths do exist in the medical profession – but they appear to be in short supply – so they need to be identified, and nurtured.  And change teams need to assemble and respect the different gifts.

One further point about temperament.  It is not immutable. We can all develop a broader set of MBTI® capabilities with guidance and practice – especially the ones that fill the gaps between xSTJ and xNFP.  Those whose comfort zone naturally falls nearer the middle of the four dimensions find this easier. And that is one of the goals of Improvement Science training.

And if you are in a hurry then you might start today by identifying the xSFJ “supporters” and the xNFJ “mentors” in your organisation and linking them together to build a temporary bridge over the change culture chasm.

So to find your Temperament just click here to download the Temperament Sorter.

[Dring Dring]

The phone announced the arrival of Leslie for the weekly ISP mentoring conversation with Bob.

<Leslie> Hi Bob.

<Bob> Hi Leslie. What would you like to talk about today?

<Leslie> A new challenge – one that I have not encountered before.

<Bob>Excellent. As ever you have piqued my curiosity. Tell me more.

<Leslie> OK. Up until very recently whenever I have demonstrated the results of our improvement work to individuals or groups the usual response has been “Yes, but“. The habitual discount as you call it. “Yes, but your service is simpler; Yes, but your budget is bigger; Yes, but your staff are less militant.” I have learned to expect it so I do not get angry any more.

<Bob> OK. The mantra of the skeptics is to be expected and you have learned to stay calm and maintain respect. So what is the new challenge?

<Leslie>There are two parts to it.  Firstly, because the habitual discounting is such an effective barrier to the diffusion of learning, our system has not changed; the performance is steadily deteriorating; the chaos is worsening; and everything that is ‘obvious’ has been tried and has not worked. More red lights are flashing on the patient-harm dashboard and the Inspectors are on their way. There is an increasing turnover of staff at all levels – including Executive.  There is an anguished call for “A return to compassion first” and “A search for new leaders” and “A cultural transformation“.

<Bob> OK. It sounds like the tipping point of awareness has been reached, enough people now appreciate that their platform is burning and radical change of strategy is required to avoid the ship sinking and them all drowning. What is the second part?

<Leslie> I am getting more emails along the line of “What would you do?”

<Bob> And your reply?

<Leslie> I say that I do not know because I do not have a diagnosis of the cause of the problem. I do know a lot of possible causes but I do not know which plausible ones are the actual ones.

<Bob> That is a good answer.  What was the response?

<Leslie>The commonest one is “Yes, but you have shown us that Plan-Do-Study-Act is the way to improve – and we have tried that and it does not work for us. So we think that improvement science is just more snake oil!”

<Bob>Ah ha. And how do you feel about that?

<Leslie>I have learned the hard way to respect the opinion of skeptics. PDSA does work for me but not for them. And I do not understand why that is. I would like to conclude that they are not doing it right but that is just discounting them and I am wary of doing that.

<Bob>OK. You are wise to be wary. We have reached what I call the Mirror-on-the-Wall moment.  Let me ask what your understanding of the history of PDSA is?

<Leslie>It was called Plan-Do-Check-Act by Walter Shewhart in the 1930s and was presented as a form of the scientific method that could be applied on the factory floor to improving the quality of manufactured products.  W Edwards Deming modified it to PDSA where the “Check” was changed to “Study”.  Since then it has been the key tool in the improvement toolbox.

<Bob>Good. That is an excellent summary.  What the Zealots do not talk about are the limitations of their wonder-tool.  Perhaps that is because they believe it has no limitations.  Your experience would seem to suggest otherwise though.

<Leslie>Spot on Bob. I have a nagging doubt that I am missing something here. And not just me.

<Bob>The reason PDSA works for you is because you are using it for the purpose it was designed for: incremental improvement of small bits of the big system; the steps; the points where the streams cross the stages.  You are using your FISH training to come up with change plans that will work because you understand the Physics of Flow better. You make wise improvement decisions.  In fact you are using PDSA in two separate modes: discovery mode and delivery mode.  In discovery mode we use the Study phase to build our competence – and we learn most when what happens is not what we expected.  In delivery mode we use the Study phase to build our confidence – and that grows most when what happens is what we predicted.

<Leslie>Yes, that makes sense. I see the two modes clearly now you have framed it that way – and I see that I am doing both at the same time, almost by second nature.

<Bob>Yes – so when you demonstrate it you describe PDSA generically – not as two complementary but contrasting modes. And by demonstrating success you omit to show that there are some design challenges that cannot be solved with either mode.  That hidden gap attracts some of the “Yes, but” reactions.

<Leslie>Do you mean the challenges that others are trying to solve and failing?

<Bob>Yes. The commonest error is to discount the value of improvement science in general; so nothing is done and the inevitable crisis happens because the system design is increasingly unfit for the evolving needs.  The toast is not just burned, it is on fire, and it is now too late to use the discovery mode of PDSA because prompt and effective action is needed.  So the delivery mode of PDSA is applied to an emergent, ill-understood crisis. The Plan is created using invalid assumptions and guesswork so it is fundamentally flawed, and the Do then just makes the chaos worse.  In the ensuing panic the Study and Act steps are skipped so all hope of learning is lost and a vicious and damaging spiral of knee-jerk Plan-Do-Plan-Do follows. The chaos worsens; quality falls; safety falls; confidence falls; trust falls; expectation falls; and depression and despair increase.

<Leslie>That is exactly what is happening and why I feel powerless to help. What do I do?

<Bob>The toughest bit is past. You have looked squarely in the mirror and can now see harsh reality rather than hasty rhetoric. Now you can look out of the window with different eyes.  And you are now looking for a real-world example of where complex problems are solved effectively and efficiently. Can you think of one?

<Leslie>Well medicine is one that jumps to mind.  Solving complex, emergent clinical problems requires a clear diagnosis and prompt, effective action to stabilise the patient and then to cure the underlying cause: the disease.

<Bob>An excellent example. Can you describe what happens as a PDSA sequence?

<Leslie>That is a really interesting question.  I can say for starters that it does not start with P – we have learned not to have a preconceived idea of what to do at the start because it badly distorts our clinical judgement.  The first thing we do is assess the patient to see how sick and unstable they are – we use the Vital Signs. So that means that we decide to Act first, and our first action is to Study the patient.

<Bob>OK – what happens next?

<Leslie>Then we will do whatever is needed to stabilise the patient based on what we have observed – it is called resuscitation – and only then can we plan how we will establish the diagnosis; the root cause of the crisis.

<Bob> So what does that spell?

<Leslie> A-S-D-P.  It is the exact opposite of P-D-S-A … the mirror image!

<Bob>Yes. Now consider the treatment that addresses the root cause and that cures the patient. What happens then?

<Leslie>We use the diagnosis to create a treatment Plan for the specific patient; we then Do that, and we Study the effect of the treatment in that specific patient, using our various charts to compare what actually happens with what we predicted would happen. Then we decide what to do next: the final Act.  We may stop because we have achieved our goal, or repeat the whole cycle to achieve further improvement. So that is our old friend P-D-S-A.

<Bob>Yes. And what links the two bits together … what is the bit in the middle?

<Leslie>Once we have a diagnosis we look up the appropriate treatment options that have been proven to work through research trials and experience; and we tailor the treatment to the specific patient. Oh I see! The missing link is design. We design a specific treatment plan using generic principles.

<Bob>Yup.  The design step is the jam in the improvement sandwich and it acts like a mirror: A-S-D-P is reflected back as P-D-S-A

<Leslie>So I need to teach this backwards: P-D-S-A and then Design and then A-S-D-P!

<Bob>Yup – and you know that by another name.

<Leslie> 6M Design®! That is what my Improvement Science Practitioner course is all about.

<Bob> Yup.

<Leslie> If you had told me that at the start it would not have made much sense – it would just have confused me.

<Bob>I know. That is the reason I did not. The Mirror needs to be discovered in order for the true value to be appreciated. At the start we look in the mirror and perceive what we want to see. We have to learn to see what is actually there. Us. Now you can see clearly where P-D-S-A and Design fit together, and the missing A-S-D-P component that is needed to assemble a 6M Design® engine. That is Improvement-by-Design in a nine-letter nutshell.

<Leslie> Wow! I can’t wait to share this.

<Bob> And what do you expect the response to be?

<Leslie>”Yes, but”?

<Bob> From the die hard skeptics – yes. It is the ones who do not say “Yes, but” that you want to engage with. The ones who are quiet. It is always the quiet ones that hold the key.

There are three necessary parts before ANY improvement-by-design effort will gain traction. Omit any one of them and nothing happens.


1. A clear purpose and an outline strategic plan.

2. Tactical measurement of performance-over-time.

3. A generic Improvement-by-Design framework.

These are the necessary minimum requirements to be able to safely delegate the day-to-day and week-to-week tactical stuff that delivers the “what is needed”.

These are necessary minimum requirements to build a self-regulating, self-sustaining, self-healing, self-learning win-win-win system.

And this is not a new idea.  It was described by Joseph Juran in the 1960s and that description was based on 20 years of hands-on experience of actually doing it in a wide range of manufacturing and service organisations.

That is 20 years before  the terms “Lean” or “Six Sigma” or “Theory of Constraints” were coined.  And the roots of Juran’s journey were 20 years before that – when he started work at the famous Hawthorne Works in Chicago – home of the Hawthorne Effect – and where he learned of the pioneering work of  Walter Shewhart.

And the roots of Shewhart’s innovations were 20 years before that – in the first decade of the 20th Century when innovators like Henry Ford and Henry Gantt were developing the methods of how to design and build highly productive processes.

Ford gave us the one-piece-flow high-quality at low-cost production paradigm. Toyota learned it from Ford.  Gantt gave us simple yet powerful visual charts that give us an understanding-at-a-glance of the progress of the work.  And Shewhart gave us the deceptively simple time-series chart that signals when we need to take more notice.

These nuggets of pragmatic golden knowledge have been buried for decades under a deluge of academic mud.  It is high time to clear away the detritus and get back to the bedrock of pragmatism. The “how-to-do-it” of improvement. Just reading Juran’s 1964 “Managerial Breakthrough” illustrates how much we now take for granted. And how ignorant we have allowed ourselves to become.

Acquired Arrogance is a creeping, silent disease – we slip from second nature to blissful ignorance without noticing when we divorce painful reality and settle down with our own comfortable collective rhetoric.

The wake-up call is all the more painful as a consequence: because it is all the more shocking for each one of us; and because it affects more of us.

The pain is temporary – so long as we treat the cause and not just the symptom.

The first step is to acknowledge the gap – and to start filling it in. It is not technically difficult, time-consuming or expensive.  Whatever our starting point we need to put in place the three foundation stones above:

1. Common purpose.
2. Measurement-over-time.
3. Method for Improvement.

Then the rubber meets the road (rather than the sky) and things start to improve – for real. Lots of little things in lots of places at the same time – facilitated by the Junior Managers. The cumulative effect is dramatic. Chaos is tamed; calm is restored; capability builds; and confidence builds. The cynics have to look elsewhere for their sport and the skeptics are able to remain healthy.

Then the Middle Managers feel the new firmness under their feet – where before there were shifting sands. They are able to exert their influence again – to where it makes a difference. They stop chasing Scotch Mist and start reporting real and tangible improvement – with hard evidence. And they rightly claim a slice of the credit.

And the upwelling of win-win-win feedback frees the Senior Managers from getting sucked into reactive fire-fighting and the Victim Vortex; and that releases the emotional and temporal space to start learning and applying System-level Design.  That is what is needed to deliver a significant and sustained improvement.

And that creates the stable platform for the Executive Team to do Strategy from. Which is their job.

It all starts with the Three Essentials:

1. A Clear and Common Constancy of Purpose.
2. Measurement-over-time of the Vital Metrics.
3. A Generic Method for Improvement-by-Design.

A couple of weeks ago an important event happened.  A Masterclass in Demand and Capacity for NHS service managers was run by an internationally renowned and very experienced practitioner of Improvement Science.

The purpose was to assist the service managers to develop their capability for designing quality, flow and cost improvement using tried and tested operations management (OM) theory, techniques and tools.

It was assumed that, as experienced NHS service managers, they already knew the basic principles of OM and the foundation concepts, terminology, techniques and tools.

It was advertised as a Masterclass and designed accordingly.

On the day it was discovered that none of the twenty delegates had heard of two fundamental OM concepts: Little’s Law and Takt Time.

These relate to how processes are designed-to-flow. It was a Demand and Capacity Master Class; not a safety, quality or cost one.  The focus was flow.
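For readers meeting these two concepts for the first time, here is a minimal sketch with made-up figures. Takt time is the available work time divided by the demand arising in that time; Little's Law says that, on average, work-in-progress equals flow rate multiplied by lead time.

```python
# Illustrative sketch of Takt Time and Little's Law (figures invented for the example).

# Takt time: available work time / demand in that time –
# the rate at which work must flow out to keep pace with demand.
work_minutes_per_day = 8 * 60        # one 8-hour working day
demand_per_day = 96                  # tasks arriving per day
takt_time = work_minutes_per_day / demand_per_day
print(takt_time)                     # 5.0 minutes per task

# Little's Law: average work-in-progress = average flow rate x average lead time.
flow_rate = demand_per_day / work_minutes_per_day   # tasks per minute
lead_time = 45                                      # minutes a task spends in the process
wip = flow_rate * lead_time
print(wip)                                          # 9.0 tasks in progress on average
```

So a process that must complete a task every five minutes, and in which each task spends 45 minutes, will hold about nine tasks at any moment: the three quantities are locked together by design, not by negotiation.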

And it became clear that none of the twenty delegates were aware before the day that there is a well-known and robust science to designing systems to flow.

So learning this fact came as a bit of a shock.

The implications of this observation are profound and worrying:

if a significant % of senior NHS operational managers are unaware of the foundations of operations management then the NHS may have a problem it was not aware of …

because …

“if transformational change of the NHS into a stable system that is fit-for-purpose (now and into the future) requires the ability to design processes and systems that deliver both high effectiveness and high efficiency ...”

then …

it raises the question of whether the current generation of NHS managers is fit-for-this-future-purpose.

No wonder that discovering a Science of  Improvement actually exists came as a bit of a shock!

And saying “Yes, but clinicians do not know this science either!” is a defensive reaction and not a constructive response. They may not but they do not call themselves “operational managers”.

[PS. If you are reading this and are employed by the NHS and do not know what Little's Law and Takt Time are, then it would be worth looking them up first. Wikipedia is a good place to start].

And now we have another question:

“Given there are thousands of operational managers in the NHS; what does one sample of 20 managers tell us about the whole population?”

Now that is a good question.

It is also a question of statistics. More specifically quite advanced statistics.

And most people who work in the NHS have not studied statistics to that level. So now we have another do-not-know-how problem.

But it is still an important question that we need to understand the answer to – so we need to learn how and that means taking this learning path one step at a time using what we do know, rather than what we do not.

Step 1:

What do we know? We have one sample of 20 NHS service managers. We know something about our sample because our unintended experiment has measured it: that none of them had heard of Little’s Law or Takt Time. That is 0/20 or 0%.

This is called a “sample statistic“.

What we want to know is “What does this information tell us about the proportion of the whole population of all NHS managers who do have this foundation OM knowledge?”

This proportion of interest is called  the unknown “population parameter“.

And we need to estimate this population parameter from our sample statistic because it is impractical to measure a population parameter directly: That would require every NHS manager completing an independent and accurate assessment of their basic OM knowledge. Which seems unlikely to happen.

The good news is that we can get an estimate of a population parameter from measurements made from small samples of that population. That is one purpose of statistics.

Step 2:

But we need to check some assumptions before we attempt this statistical estimation trick.

Q1: How representative is our small sample of the whole population?

If we chose the delegates for the masterclass by putting the names of all NHS managers in a hat and drawing twenty names out at random, as in a tombola or lottery, then we have what is called a “random sample” and we can trust our estimate of the wanted population parameter.  This is called “random sampling”.

That was not the case here. Our sample was self-selecting. We were not conducting a research study. This was the real world … so there is a chance of “bias”. Our sample may not be representative and we cannot say what the most likely bias is.

It is possible that the managers who selected themselves were the ones struggling most and therefore more likely than average to have a gap in their foundation OM knowledge. It is also possible that the managers who selected themselves are the most capable in their generation and are very well aware that there is something else that they need to know.

We may have a biased sample and we need to proceed with some caution.

Step 3:

So given the fact that none of our possibly biased sample of managers were aware of the Foundation OM Knowledge, it is possible that no NHS service managers know this core knowledge. In other words the actual population parameter is 0%. It is also possible that the managers in our sample were the only ones in the NHS who do not know it. So, in theory, the sought-for population parameter could be anywhere between 0% and very nearly 100%. Does that mean it is impossible to estimate the true value?

It is not impossible. In fact we can get an estimate that we can be very confident is accurate. Here is how it is done.

Statistical estimates of population parameters are always presented as ranges with a lower and an upper limit called a “confidence interval” because the sample is not the population. And even if we have an unbiased random sample we can never be 100% confident of our estimate.  The only way to be 100% confident is to measure the whole population. And that is not practical.

So, we know the theoretical limits from consideration of the extreme cases … but what happens when we are more real-world-reasonable and say – “let us assume our sample is actually a representative sample, albeit not a randomly selected one”. How does that affect the range of our estimate of the elusive number – the proportion of NHS service managers who know basic operations management theory?

Step 4:

To answer that we need to consider two further questions:

Q2. What is the effect of the size of the sample? What if only 5 managers had come and none of them knew; what if it had been 50 or 500 and none of them knew?

Q3. What if we repeated the experiment more times? With the same or different sample sizes? What could we learn from that?

Our intuition tells us that the larger the sample size and the more often we do the experiment, the more confident we will be of the result. In other words, the narrower the range of the confidence interval around our sample statistic.

Our intuition is correct because if our sample was 100% of the population we could be 100% confident.

So given we have not yet found an NHS service manager who has the OM Knowledge then we cannot exclude 0%. Our challenge narrows to finding a reasonable estimate of the upper limit of our confidence interval.

Step 5:

Before we move on let us review where we have got to already and our purpose for starting this conversation: We want enough NHS service managers who are knowledgeable enough of design-for-flow methods to catalyse a transition to a fit-for-purpose and self-sustaining NHS.

One path to this purpose is to have a large enough pool of service managers who do understand this Science well enough to act as advocates and to spread both the know-of and the know-how.  This is called the “tipping point“.

There is strong evidence that when about 20% of a population knows about something that is useful for the whole population – then that knowledge  will start to spread through the grapevine. Deeper understanding will follow. Wiser decisions will emerge. More effective actions will be taken. The system will start to self-transform.

And in the Brave New World of social media this message may spread further and faster than in the past. This is good.

So if the NHS needs 20% of its operational managers aware of the Foundations of Operations Management then what value is our morsel of data from one sample of 20 managers who, by chance, were all unaware of the Knowledge? How can we use that data to say how close to the magic 20% tipping point we are?

Step 6:

To do that we need to ask the question in a slightly different way.

Q4. What is the chance of an NHS manager NOT knowing?

We assume that they either know or do not know; so if 20% know then 80% do not.

This is just like saying: if the chance of rolling a “six” is 1-in-6 then the chance of rolling a “not-a-six” is 5-in-6.

Next we ask:

Q5. What is the likelihood that we, just by chance, selected a group of managers where none of them know – and there are 20 in the group?

This is rather like asking: what is the likelihood of rolling twenty “not-a-sixes” in a row?

Our intuition says “an unlikely thing to happen!”

And again our intuition is sort of correct. How unlikely though? Our intuition is a bit vague on that.

If the actual proportion of NHS managers who have the OM Knowledge is about the same chance of rolling a six (about 16%) then we sense that the likelihood of getting a random sample of 20 where not one knows is small. But how small? Exactly?

We sense that 20% is too high an estimate of a reasonable upper limit. But how much too high?

The answer to these questions is not intuitively obvious.

We need to work it out logically and rationally. And to work this out we need to ask:

Q6. As the % of Managers-who-Know is reduced from 20% towards 0% – what is the effect on the chance of randomly selecting 20 all of whom are not in the Know?  We need to be able to see a picture of that relationship in our minds.

The good news is that we can work that out with a bit of O-level maths. And all NHS service managers, nurses and doctors have done O-level maths. It is a mandatory requirement.

The chance of rolling a “not-a-six” is 5/6 on one throw – about 83%;
and the chance of rolling only “not-a-sixes” in two throws is 5/6 x 5/6 = 25/36 – about 69%
and the chance of rolling only “not-a-sixes” in three throws is 5/6 x 5/6 x 5/6 – about 58%… and so on.

[This is called the "chain rule" and it requires that the throws are independent of each other - i.e. a random, unbiased sample]

If we do this 20 times we find that the chance of rolling no sixes at all in 20 throws is about 2.6% – unlikely but far from impossible.
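This arithmetic is easy to check. Here is a minimal Python sketch (Excel would do just as well) that reproduces the chain of probabilities:

```python
# Chance of rolling "not-a-six" on every one of n independent throws
# (the chain rule: multiply the per-throw probabilities together).
p_not_six = 5 / 6

for n in (1, 2, 3, 20):
    print(f"{n:2d} throws: {p_not_six ** n:.1%}")
# 1 throw ~ 83.3%, 2 throws ~ 69.4%, 3 throws ~ 57.9%, 20 throws ~ 2.6%
```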

We need to introduce a bit of O-level algebra now.

Let us call the proportion of NHS service managers who understand basic OM – our unknown population parameter – “p”.

So if p is the chance of a “six” then (1-p) is the chance of a “not-a-six”.

Then the chance of no sixes in one throw is (1-p)

and no sixes after 2 throws is (1-p)(1-p) = (1-p)^2 (where ^ means raise to the power)

and no sixes after three throws is (1-p)(1-p)(1-p) = (1-p)^3 and so on.

So the likelihood of  “no sixes in n throws” is (1-p)^n

Let us call this “t”

So the equation we need to solve to estimate the upper limit of our estimate of “p” is

t=(1-p)^20

Where “t” is a measure of how likely we are to choose 20 managers all of whom do not know – just by chance.  And we want that to be a small number. We want to feel confident that our estimate is reasonable and not just a quirk of chance.

So what threshold do we set for “t” that we feel is “reasonable”? 1 in a million? 1 in 1000? 1 in 100? 1 in 10?

By convention we use 1 in 20 (t=0.05) – but that is arbitrary. If we are more risk-averse we might choose 1:100 or 1:1000. It depends on the context.

Let us be reasonable – let us say we want to be 95% confident of our estimated upper limit for “p” – which means we are calculating the 95% confidence interval. This means that we will accept a 1-in-20 risk of our calculated confidence interval for “p” being wrong: 19:1 odds that the true value of “p” falls inside our calculated range. Pretty good odds! So we will be reasonable and set the likelihood threshold for being “wrong” at 5%.

So now we need to solve:

0.05= (1-p)^20

And we want a picture of this relationship in our minds so let us draw a graph of t for a range of values of p.

We know the value of p must be between 0 and 1.0 so we have all we need and we can generate this graph easily using Excel.  And every senior NHS operational manager knows how to use Excel. It is a requirement. Isn’t it?
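For anyone who prefers a script to a spreadsheet, here is a minimal Python sketch of the same calculation. It tabulates the curve and also solves the equation directly, since rearranging t = (1-p)^n gives p = 1 - t^(1/n):

```python
n = 20               # sample size
t_threshold = 0.05   # the 1-in-20 "reasonable" risk threshold

# The data behind the chart: t = (1 - p)^20 for p from 0% to 100%
for p_pct in range(0, 101, 5):
    p = p_pct / 100
    print(f"p = {p_pct:3d}%  t = {(1 - p) ** n:.4f}")

# Solving 0.05 = (1 - p)^20 directly, rather than reading off the chart
p_upper = 1 - t_threshold ** (1 / n)
print(f"Upper 95% confidence limit for p: {p_upper:.1%}")  # about 13.9%
```

Reading the chart by eye gives "about 14%"; the closed-form answer is 13.9%.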

[Chart: the Black Curtain curve – t = (1-p)^20 for p between 0 and 1]

The Excel-generated chart shows the relationship between p (horizontal axis) and t (vertical axis) using our equation:

t=(1-p)^20.

Step 7:

Let us first do a “sanity check” on what we have drawn. Let us “check the extreme values”.

If 0% of managers know then a sample of 20 will always reveal none – i.e. the leftmost point of the chart. Check!

If 100% of managers know then a sample of 20 will never reveal none – i.e. way off to the right. Check!

What is clear from the chart is that the relationship between p and t  is not a straight line; it is non-linear. That explains why we find it difficult to estimate intuitively. Our brains are not very good at doing non-linear analysis. Not very good at all.

So we need a tool to help us. Our Excel graph.  We read down the vertical “t” axis from 100% to the 5% point, then trace across to the right until we hit the line we have drawn, then read down to the corresponding value for “p”. It says about 14%.

So that is the upper limit of our 95% confidence interval of the estimate of the true proportion of NHS service managers who know the Foundations of Operations Management.  The lower limit is 0%.

And we cannot say better than somewhere between  0%-14% with the data we have and the assumptions we have made.

To get a more precise estimate,  a narrower 95% confidence interval, we need to gather some more data.

[Another way we can use our chart is to ask “If the actual % of Managers who know is x% then what is the chance that none of our sample of 20 will know?” Solving this manually means marking the x% point on the horizontal axis, tracing a line vertically up until it crosses the drawn curve, then tracing a horizontal line to the left until it crosses the vertical axis and reading off the likelihood.]

So if in reality 5% of all managers do Know then the chance of no one knowing in an unbiased sample of 20 is about 35% – really quite likely.
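That figure comes straight from the same formula – a one-line check (a sketch, not part of the original Excel workbook):

```python
# If 5% of managers actually know, the chance that an unbiased sample of 20
# contains none of them is (1 - 0.05)^20
chance_none_know = (1 - 0.05) ** 20
print(f"{chance_none_know:.1%}")  # 35.8% - really quite likely
```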

Now we are getting a feel for the likely reality. Much more useful than just dry numbers!

But we are 95% sure that at least 86% of NHS managers do NOT know the basic language of flow-improvement-science.

And what this chart also tells us is that we can be VERY confident that the true value of p is less than 20% – the proportion we believe we need to reach the transformation tipping point.

Now we need to repeat the experiment and draw a new graph to get a more accurate estimate of just how much less – but stepping back from the statistical nuances – the message is already clear: we do have a Black Curtain problem.

A Black Curtain of Ignorance problem.

Many will now proclaim angrily “This cannot be true! It is just statistical smoke and mirrors. Surely our managers do know this by a different name – how could they not! It is unthinkable to suggest the majority of NHS managers are ignorant of the basic science of what they are employed to do!”

If that were the case though then we would already have an NHS that is fit-for-purpose. That is not what reality is telling us.

And it quickly became apparent at the masterclass that our sample of 20 did not know-this-by-a-different-name.

The good news is that this knowledge gap could be hiding the opportunity we are all looking for – a door to a path that leads to a radical yet achievable transformation of the NHS into a system that is fit-for-purpose. Now and into the future.

A system that delivers safe, high quality care for those who need it, in full, when they need it and at a cost the country can afford. Now and for the foreseeable future.

And the really good news is that this IS knowledge gap may be deep and extensive but it is not wide … the Foundations are easy to learn, and to start applying immediately. The basics can be learned in less than a week – the more advanced skills take a bit longer. And this is not untested academic theory – it is proven pragmatic real-world problem solving know-how. It has been known for over 50 years outside healthcare.

Our goal is not acquisition of theoretical knowledge – it is a deep enough understanding to make wise enough decisions to achieve good enough outcomes. For everyone. Starting tomorrow.

And that is the design purpose of FISH. To provide those who want to learn a quick and easy way to do so.

Stop Press: Further feedback from the masterclass is that some of the managers are grasping the nettle, drawing back their own black curtains, opening the door that was always there behind it, and taking a peek through into a magical garden of opportunity. One that was always there but was hidden from view.

Sat 5th October

It started with a tweet.

08:17 [JG] The NHS is its people. If you lose them, you lose the NHS.

09:15 [DO] We are in a PEOPLE business – educating people and creating value.

Sun 6th October

08:32 [SD] Who isn’t in people business? It is only people who buy stuff. Plants, animals, rocks and machines don’t.

09:42 [DO] Very true – it is people who use a service and people who deliver a service and we ALL know what good service is.

09:47 [SD] So onus is on us to walk our own talk. If we don’t all improve our small bits of the NHS then who can do it for us?

Then we were off … the debate was on …

10:04 [DO] True – I can prove I am saving over £160 000.00 a year – roll on PBR !?

10:15 [SD] Bravo David. I recently changed my surgery process: productivity up by 35%. Cost? Zero. How? Process design methods.

11:54 [DO] Exactly – cost neutral because we were thinking differently – so how to persuade the rest?

12:10 [SD] First demonstrate it is possible then show those who want to learn how to do it themselves. http://www.saasoft.com/fish/course

We had hard evidence it was possible … and now MC joined the debate …

12:48 [MC] Simon why are there different FISH courses for safety, quality and efficiency? Shouldn’t good design do all of that?

12:52 [SD] Yes – goal of good design is all three. It just depends where you are starting from: Governance, Operations or Finance.

A number of parallel threads then took off and we all had lots of fun exploring each other’s knowledge and understanding.

17:28 MC registers on the FISH course.

And that gave me an idea. I emailed an offer – that he could have a complimentary pass for the whole FISH course in return for sharing what he learns as he learns it.  He thought it over for a couple of days then said “OK”.

Weds 9th October

06:38 [MC] Over the last 4 years or so, I’ve been involved in incrementally improving systems in hospitals. Today I’m going to start an experiment.

06:40 [MC] I’m going to see if we can do less of the incremental change and more system redesign. To do this I’ve enrolled in FISH

Fri 11th October

06:47 [MC] So as part of my exploration into system design, I’ve done some studies in my clinic this week. Will share data shortly.

21:21 [MC] Here’s a chart showing cycle time of patients in my clinic. Median cycle time 14 mins, but much longer in 2 pic.twitter.com/wu5MsAKk80

[Chart: clinic cycle times]

21:22 [MC] Here’s the same clinic from patients’ point of view, wait time. Much longer than I thought or would like

[Chart: clinic waiting times]

21:24 [MC] Two patients needed to discuss surgery or significant news, that takes time and can’t be rushed.

21:25 [MC] So, although I started on time, worked hard and finished on time, people were waiting ages to see me. Template is wrong!

21:27 [MC] By the time I had seen the 3rd patient, people were waiting 45 mins to see me. That’s poor.

21:28 [MC] The wait got progressively worse until the end of the clinic.

Sunday 13th October

16:02 [MC] As part of my homework on systems, I’ve put my clinic study data into a Gantt chart. Red = waiting, green = seeing me pic.twitter.com/iep2PDoruN

[Chart: clinic Gantt chart – red = waiting, green = being seen]

16:34 [SD] Hurrah! The visual power of the Gantt Chart. Worth adding the booked time too – there are Seven Sins of Scheduling to find.

16:36 [SD] Excellent – good idea to sort into booked time order – it makes the planned rate of demand easier to see.

16:42 [SD] Best chart is Work In Progress – count the number of patients at each time step and plot as a run chart.

17:23 [SD] Yes – just count how many lines you cross vertically at each time interval. It can be automated in Excel

17:38 [MC] Like this? pic.twitter.com/fTnTK7MdOp

 

[Chart: clinic work-in-progress run chart]

This is the work-in-progress chart. The most useful process monitoring chart of all. It shows the changing size of the queue over time.  Good flow design is associated with small, steady queues.
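The counting rule described in the tweets – at each time interval, count how many Gantt-chart lines a vertical cut would cross – is easy to automate. Here is a minimal Python sketch (the patient intervals are invented for illustration; they are not MC’s clinic data):

```python
# Each patient is a (start, end) interval: arrival and departure times,
# in minutes from clinic start. Example values only.
patients = [(0, 20), (5, 40), (10, 55), (15, 70), (30, 85)]

def wip_at(t, intervals):
    """Number of patients 'in progress' at time t - the number of
    Gantt-chart lines a vertical line at t would cross."""
    return sum(1 for start, end in intervals if start <= t < end)

# Sample the queue size every 10 minutes to build the WIP run chart
wip_series = [wip_at(t, patients) for t in range(0, 90, 10)]
print(wip_series)  # [1, 3, 3, 4, 3, 3, 2, 1, 1]
```

Plotting `wip_series` as a run chart gives the same picture as the Excel version: the queue grows, peaks, then drains as the clinic ends.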

18:22 [SD] Perfect! You’re right not to plot as XmR – this is a cusum metric. Not a healthy WIP chart this!

There was more to follow but the “ah ha” moment had been seen and shared.

Weds 16th October

MC completes the Online FISH course and receives his well-earned Certificate of Achievement.

This was his with-the-benefit-of-hindsight conclusion:

I wish I had known some of this before. I will have a totally different approach to improvement projects now. The key is to measure and model well before doing anything radical.

Improvement Science works.
Improvement-by-Design is a skill that can be learned quickly.
FISH is just a first step.

This week I heard an inspiring story of applied Improvement Science that has delivered a win-win-win result. Not in a hospital. Not in a factory. In the red-in-tooth-and-claw reality of rural Kenya.

Africa has vast herds of four-hoofed herbivores called zebra and wildebeest, accompanied by clever and powerful carnivores – called lions. The sun and rain make the grass grow; the herbivores eat the grass and the carnivores eat the herbivores. It is the way of Nature – and has been so for millions of years.

Enter Man a few thousand years ago with his domesticated cattle and the scene is set for conflict. Domestic cattle are easy pickings for a hungry lion. Why spend a lot of energy chasing a lively zebra or wildebeest and run the risk of injury that would spell death-by-starvation? Lions are strong and smart but they do not have a social security system to look after the injured and sick. So why not go for the easier option?

So Man protects his valuable cattle from hungry lions. And Man is inventive. The cattle need to eat and sleep like the rest of us – so during the day the cattle are guarded by brave Maasai warriors armed with spears; and at night the cattle are herded into acacia thorn-ringed kraals and watched over by the boys of the tribe.

The lions come at night. Their sense of smell and sight is much better developed than Man’s.

The boys’ job is to deter the lions from killing the cattle.

And this conflict has been going on for thousands of years.

So when a hungry lion kills a poorly guarded cow or bull – then Man will get revenge and kill the lion.  Everyone loses.

But the application of Improvement Science is changing that ancient conflict. And it was not done by a scientist or an animal welfare evangelist or a trained Improvementologist. It was done by a young Maasai boy called Richard Turere.

He describes the why, the what and the how  … HERE.

So what was his breakthrough?

It was noticing that walking about with a torch  was a more effective lion deterrent than a fire or a scarecrow.

That was the chance discovery.  Chance favours the prepared mind.

So how do we create a prepared mind that is receptive to the hints that chance throws at us?

That is one purpose of learning Improvement Science.

What came after the discovery was not luck … it was design.

Richard used what was to hand to design a solution that achieved the required purpose – an effective lion deterrent – in a way that was also an efficient use of his lifetime.

He had bigger dreams than just protecting his tribe’s cattle. His dream was to fly in one of those silver things that he saw passing high over the savannah every day.

And sitting up every night waving a torch to deter hungry lions from eating his father’s cattle was not going to deliver that dream.

So he had to nail that Niggle before he could achieve his Nice If.

Like many budding inventors and engineers Richard is curious about how things work – and he learned a lot about electronics by dismantling his mother’s radio! It got him into a lot of trouble – but the knowledge and understanding that he gained was put to good use when he designed his “lion lights”.

This true story captures the essence of Improvement Science better than any blog, talk, lecture, course or book could.

That is why it was shared by those who learned of his improvement; then to TED; then to the World; then passed to me and I am passing it on too.  It is an inspiring story. It says that anyone can do this sort of thing if they choose to.

And it shows how Improvement Science spreads.  Through the grapevine.  And understanding how that works is part of the Science.

One of the biggest challenges in Improvement Science is diffusion of an improvement outside the circle of control of the innovator.

It is difficult enough to make a significant improvement in one small area – it is an order of magnitude more difficult to spread the word and to influence others to adopt the new idea!

One strategy is to shame others into change by demonstrating that their attitude and behaviour are blocking the diffusion of innovation.

This strategy does not work.  It generates more resistance and amplifies the differences of opinion.

Another approach is to bully others into change by discounting their opinion and just rolling out the “obvious solution” by top-down diktat.

This strategy does not work either.  It generates resentment – even if the solution is fit-for-purpose – which it usually is not!

So what does work?

The key to it is to convert some skeptics because a converted skeptic is a powerful force for change.

But doesn’t that fly in the face of established change management theory?  Innovation diffuses from innovators to early-adopters, then to the silent majority, then the laggards and dinosaurs doesn’t it?

Yes – but that style of diffusion is incremental, slow and has a very high failure rate.  What is very often required is something more radical, much faster and more reliable.  For that it needs both push from the Confident Optimists and pull from some Converted Pessimists.  The tipping point does not happen until the silent majority start to come off the fence in droves: and they do that when the noisy optimists and pessimists start to agree.  The fence-sitters jump when the tug-o-war stalemate stops and the force for change becomes aligned in the direction of progress.

So how is a skeptic converted?

Simple. By another Converted Skeptic.

Here is a real example.

We are all skeptical about many things that we would actually like to improve.

Personal health for instance. Something like weight. Yawn! Not that Old Chestnut!

We are bombarded with shroud-waver stories that we are facing an epidemic of obesity, rapidly rising  rates of diabetes, and all the nasty and life-shortening consequences of that. We are exhorted to eat “five portions of fruit and veg a day” …  or else! We are told that we must all exercise our flab away. We are warned of the Evils of Cholesterol and told that fat children are symptomatic of bad parenting.

The more gullible and fearful are herded en-masse in the direction of the Get-Thin-Quick sharks who have a veritable feeding frenzy. Their goal is their short-term financial health not the long-term health of their customers.

The more insightful, skeptical and frustrated seek solace in the chocolate Hob Nob jar.

For their part the healthcare professionals are rewarded for providing ineffective healthcare by being paid-for-activity not for outcome. They dutifully measure the decline and hand out ineffective advice. Their goal is survival too.

The outcome is predictable and seemingly unavoidable.

So when an innovation comes along that challenges the current dogma and status quo the skeptics inevitably line up and proclaim that it will not work. Not that it does not work. They do not know that because they never try it. They are skeptics. Someone else has to prove it to them.

I am a skeptic about many things.

I am very skeptical about diets – the evidence suggests that their proclaimed benefit is difficult to achieve and even more difficult to sustain: and that is the hall-mark of either a poor design or a deliberate profit-driven and perfectly legal scam.

So I decided to put an innovative approach to weight loss to the test.  It is not a diet – it is a design to achieve and sustain a healthier weight to height ratio.  And for it to work it must work for me because I am a diet skeptic.

The start of the story is  HERE

I am now a Converted Skeptic.

I call the innovative design a “2 out of 7 Lo-CHO” policy and what that means is for two days a week I just cut out as much carbohydrate (CHO) as feasible.  Stuff like bread, potatoes, rice, pasta and sugar. The rest of the time I do what I normally do.  There is no need for me to exercise and no need for me to fill up on Five Fruit and Veg.

[Chart: daily weight over 140 days of the 2-out-of-7 Lo-CHO design]

The chart above is the evidence of what happened. It shows a 7 kg reduction in weight over 140 days – and that is impressive given that it has required no extra exercise and no need to give up tasty treats completely and definitely no need to boost the bottom-line of a Get-Thin-Quick shark!

It also shows what to expect.  The weight loss starts steeper then tails off as it approaches a new equilibrium weight. This is the classic picture of what happens to a “system” when one of its “operational policies” is wisely re-designed.  Patience, persistence and a time-series chart are all that is needed. It takes less than a minute per day to monitor the improvement.

I can afford to invest a minute per day.

The BaseLine© chart clearly shows that the day-to-day variation is quite high: and that is expected – it is inherent in the 2-out-of-7 Lo-CHO design. It is not the short-term change that is the measure of success – it is the long-term improvement that is important.

It is important to measure daily – because it is the daily habit that keeps me mindful, aligned, and  on-goal.  It is not the measurement itself that is the most important thing – it is the conscious act of measuring and then plotting the dot in the context of the previous dots. The picture tells the story. No further “statistical” analysis is required.

The power of this chart is that it provides hard evidence that is very effective for nudging other skeptics like me into giving the innovative idea a try.  I know because I have done that many times now.  I have converted other skeptics.  It is an innovation infection.

And the same principle appears to apply to other areas.  What is critical to success is tangible and visible proof of progress. That is what skeptics need. Then a rational and logical method and explanation that respects their individual opinion and requirements. The design has to work for them. And it must make sense.

They will come out with a string of “Yes … buts” and that is OK because that is how skeptics work.  Just answer their questions with evidence and explanations. It can get a bit wearing I admit but it is worth the effort.

An effective Improvement Scientist needs to be a healthy skeptic too.

[Beep Beep]

Bob tapped the “Answer” button on his smartphone – it was Lesley calling in for their regular mentoring session.

<Bob>Hi Lesley. How are you today? And which tunnel in the ISP Learning Labyrinth shall we explore today?

<Lesley>Hi Bob. I am OK thank you. Can we invest some time in the Engagement Maze?

<Bob>OK. Do you have a specific example?

<Lesley>Sort of. This week I had a conversation with our Chief Executive about the potential of Improvement Science and the reply I got was “I am convinced by what you say but it is your colleagues who need to engage. If you have not succeeded in convincing them then how can I?” I was surprised by that response and slightly niggled because it had an uncomfortable nugget of truth in it.

<Bob>That sounds like a wise leader who understands that the “power” to make things happen does not sit wholly in the lap of those charged with accountability.

<Lesley> I agree. And at the same time everything that the “Top Team” suggest gets shot down in flames by a small and very vocal group of my more skeptical colleagues.

<Bob>Ah ha!  It sounds like the Victim Vortex is causing trouble here.

<Lesley>The Victim Vortex?

<Bob>Yes. Let me give you an example. One of the common initiators of the Victim Vortex is the data flow part of a complex system design. The Sixth Flow. So can I ask you: “How are new information systems developed in your organization?”

<Lesley>Wow! You hit the nail on the head first time!  Just this week there has been another firestorm of angry emails triggered by yet another silver-bullet IT system being foisted on us!

<Bob>Interesting use of language Lesley. You sound quite “niggled”.

<Lesley>I am. Not so much by the constant “drizzle of IT magic” – that is irritating enough – but more by the cynical reaction of my peers.

<Bob>OK. This sounds like a good enough example of the Victim Vortex. What do you expect the outcome will be?

<Lesley>Well if past experience is a predictor for future performance – an expensive failure, more frustration and a deeper well of cynicism.

<Bob>Frustrating for whom?

<Lesley>Everyone. The IT department as well. It feels like we are all being sucked into a lose-lose-lose black hole of depression and despair!

<Bob>A very good description of the Victim Vortex.

<Lesley>So the Victim Vortex is an example of the Drama Triangle acting on an organizational level?

<Bob>Yes. Visualize a cultural tornado. The energy that drives it is the emotional currency spent in playing the OK – Not OK Games. It is a self-fueling system, a stable design, very destructive and very resistant to change.

<Lesley>That metaphor works really well for me!

<Bob>A similar one is a whirlpool – a water vortex. If you were out swimming and were caught up in a whirlpool what are your exit strategy options?

<Lesley>An interesting question.  I have never had that experience and would not want it – it sounds rather hazardous. Let me think.  If I do nothing I will just get swept around in the chaos and I am at risk of  getting bashed, bruised and then sucked under.

<Bob>Yes – you would probably spend all your time and energy just treading water and dodging the flotsam and jetsam that has been sucked into the Vortex. That is what most people do. It is called the Hamster Wheel effect.

<Lesley>So another option is to actively swim towards the middle of the Vortex – the end would at least be quick! But that is giving up and adopting the Hopelessness attitude of burned out Victim.  That would be the equivalent of taking voluntary redundancy or early retirement. It is not my style!

<Bob>Yes. It does not solve the problem either. The Vortex is always hoovering up new Victims. It is insatiable.

<Lesley> And another option would be to swim with the flow to avoid being “got” from behind. That would seem sensible and is possible; and at least I would feel better for doing something. I might even escape if I swim fast enough!

<Bob>That is indeed what some try. The movers and shakers. The pace setters. The optimists. The extrovert leaders. The problem is that it makes the Vortex spin even faster.  It actually makes the Vortex bigger,  more chaotic and more dangerous than before.

<Lesley>Yes – I can see that.  So my other option is to swim against the flow in an attempt to slow the Vortex down. Would that work?

<Bob>If everyone did that at the same time it might but that is unlikely to happen spontaneously. If you could achieve that degree of action alignment you would not have a Victim Vortex in the first place. Trying to do it alone is ineffective – you tire very quickly, the other Victims bash into you, you slow them down, and then you all get sucked down the Plughole of Despair.

<Lesley>And I suppose a small group of like-minded champions who try to swim-against the flow might last longer if they stick together but even then eventually they would get bashed up and broken up too. I have seen that happen.  And that is probably where our team are heading at the moment. I am out of options. Is it impossible to escape the Victim Vortex?

<Bob>There is one more direction you can swim.

<Lesley>Um? You mean across the flow heading directly away from the center?

<Bob>Exactly. Consider that option.

<Lesley>Well, it would still be hard work and I would still be going around with the Vortex and I would still need to watch out for flotsam but every stroke I make would take me further from the center. The chaos would get gradually less and eventually I would be in clear water and out of danger.  I could escape the Victim Vortex!

<Bob>Yes. And what would happen if others saw you do that and did the same?

<Lesley>The Victim Vortex would dissipate!

<Bob>Yes. So that is your best strategy. It is a win-win-win strategy too. You can lead others out of the Victim Vortex.

<Lesley>Wow! That is so cool!  So how would I apply that metaphor to the Information System niggle?

<Bob>I will leave you to ponder on that.  Think about it as a design assignment. The design of the system that generates IT solutions that are fit-for-purpose.

<Lesley> Somehow I knew you were going to say that! I have my squared-paper and sharpened pencil at the ready.  Yes – an improvement-by-design assignment. Thank you once again Bob. This ISP course is the business!

One of the best things about improvement is the delight that we feel when someone else acknowledges it.

Particularly someone whose opinion we respect.

We feel a warm glow of pride when they notice the difference and take the time to say “Well done!”

We need this affirmative feedback to fuel our improvement engine.

And we need to learn how to give ourselves affirmative feedback because usually there is a LOT of improvement work to do behind the scenes before any externally visible improvement appears.

It is like an iceberg – most of it is hidden from view.

And improvement is tough. We have to wade through Bureaucracy Treacle that is laced with Cynicide and policed by Skeptics.  We know this.

So we need to learn to celebrate the milestones we achieve and to keep reminding ourselves of what we have already done.  Even if no one else notices or cares.

Like the certificates, cups, and medals that we earned at school – still proudly displayed on our mantlepieces and shelves decades later. They are important. Especially to us.

So it is always a joy to celebrate the achievement of others and to say “Well Done” for reaching a significant milestone on the path of learning Improvement Science.

And that has been my great pleasure this week – to prepare and send the Certificates of Achievement to those who have recently completed the FISH course.

The best part of all has been to hear how many times the word “treasured” is used in the “Thank You” replies.

We display our Certificates with pride – not so much that others can see – more to remind ourselves every day to Celebrate Achievement.

 

Improvement implies change.

Change requires motivation.

And there are two flavours of motivation juice – Fear and Food.

Fear is the emotion that results from anticipated loss in the future.  Loss means some form of damage. Physical, psychological or political harm.  We fear loss of peer-esteem and we fear loss of self-esteem almost more than we fear physical harm.

Our fear of anticipated loss may be based on reality. Our experience of actual loss in the past.  We remember the emotional pain and we learn from past pain to fear future loss.

Our fear of anticipated loss may also be fueled by rhetoric.  The doom-mongering of the Shroud-Wavers, the Nay-Sayers, the Skeptics and the Cynics.

And there are examples where the rhetorical fear is deliberately generated to drive the fearful to “the solution” – which of course they have to pay dearly for. This is Machiavellian mass manipulation for commercial gain.

“Fear of germs, fear of fatness, fear of the invisible enemies outside and inside”. Spreading and Ameliorating Fear is big business. It is a Burn-and-Scrape design.

What we are seeing is the Drama Triangle operating on a massive scale. The Persecutors create the fear, the Victims run away and the Persecutors then switch role to Rescuers and offer to sell the Terrified-and-now-compliant Victims “the solution” to their fear.  The Victims do not learn.  That is not the purpose – because that would end the Game and derail the Gravy Train.

Fear is not an effective way to motivate for sustained improvement.  We have ample evidence to support that statement!

The Burn-and-Scrape design that we see everywhere is a fear-based-design.  Any improvements are transitory and usually only achieved at the emotional expense of a passionate idealist. When they get too tired to push any more the toast gets burnt again because the toaster is perfectly designed to burn toast.  Not intentionally designed to but perfectly designed to nevertheless.

The use of Delusional Ratios and Arbitrary Targets (DRATs) is a fear-based-design-strategy. It ensures the Game and Gravy Train continue.

BUT fear has a frightening cost. The cost of checking-and-correcting. The cost of the defensive-bureaucracy that may catch errors before too much local harm results but which itself creates unmeasurable global harm in a different way – by hoovering up the priceless human resource of life-time – like an emotional black hole.

The cost of errors. The cost of queues. The list of fear-based-design costs is long.

A fear-based-design for delivering improvement is a poor design.

So we need a better design.

And a better one is based on a positive-attractive-emotional force pulling us forwards into the future. The anticipation of gains for all. A win-win-win design.

Win-win-win design starts with the Common Purpose: the outcomes that everyone wants; and the outcomes that no-one wants.  We need both.  This balance creates alignment of effort on getting the NiceIfs (the wants) while avoiding the NooNoos (the do not wants).

Then we ask the simple question: “What is preventing us having our win-win-win outcome now?

The blockers are the parts of our current design that we need to change: our errors of omission and our errors of commission.  Our gaps and our gaffes.

And to change them we need to be clear what they are; where they are and how they came to be there … and that requires a diagnostic skill that is one of our errors of omission. We have never learned how to diagnose our process design flaws.

Another common blocker is that we believe that a win-win-win outcome is impossible. This is a learned belief. And it is a self-fulfilling prophecy.

We may also believe that all swans are white because we have never seen a black swan – even though we know, in principle, that a black swan could be possible.

Rhetoric and Reality are not the same thing.  Feeling it could be possible and knowing that it actually is possible are different emotions. We need real evidence to challenge our life-limiting rhetoric.

Weary and wary life-worn skeptics crave real evidence not rhetorical exhortation.

So when that evidence is presented – and the Impossibility Hypothesis is disproved – then an emotional shock is inevitable.  We are now on the emotional roller-coaster called the Nerve Curve.  And the deeper our skepticism the bigger the shock.

After the shock we characteristically do one of three things:

1. We discount the evidence and go into denial.  We refuse to challenge our own rhetoric. Blissful ignorance is attractive.

2. We go quiet because we are now stuck in the painful awareness of the transition zone between the past and the future. The feelings associated with the transition are anxiety and depression.

3. We sit up, we take notice, we listen harder, we rub our chins, our minds race as we become more and more excited. The feelings associated with this stage of resolution are curiosity, excitement and hope.

It is actually a sequence not a choice. This is normal.

And those who reach Stage 3 of the Nerve Curve say things like “We have food for thought;  we feel inspired; our passion is re-ignited; we now have a beacon of hope for the future.”

That is the flavour of motivation-juice that is needed to fuel the improvement-by-design engine and to deliver win-win-win designs that are both surprising and self-sustaining.

And what actually changes our belief of what is possible is learning to do it for ourselves. For real.

That is Improvement Science in Action.

[Bing Bong]  The sound bite heralded Leslie joining the regular Improvement Science mentoring session with Bob.  They were now using web-technology to run virtual meetings because it allows a richer conversation and saves a lot of time. It is a big improvement.

<Bob> Hi Leslie, how are you today?

<Leslie> OK thank you Bob.  I have a thorny issue to ask you about today. It has been niggling me even since we started to share the experience we are gaining from our current improvement-by-design project.

<Bob> OK. That sounds interesting. Can you paint the picture for me?

<Leslie> Better than that – I can show you the picture, I will share my screen with you.

<Bob> OK. I can see that RAG table. Can you give me a bit more context?

<Leslie> Yes. This is how our performance management team have been asked to produce their 4-weekly reports for the monthly performance committee meetings.

<Bob> OK. I assume the “Period” means sequential four week periods … so what is Count, Fail and Fail%?

<Leslie> Count is the number of discharges in that 4 week period, Fail is the number whose length of stay is longer than the target, and Fail% is the ratio of Fail/Count for each 4 week period.

<Bob> It looks odd that the counts are all 28.  Is there some form of admission slot carve-out policy?

<Leslie> Yes. There is one admission slot per day for this particular stream – that has been worked out from the average historical activity.

<Bob> Ah! And the Red, Amber, Green indicates what?

<Leslie> That depends on where the Fail% falls in a set of predefined target ranges; less than 5% is green, 5-10% is Amber and more than 10% is red.
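As an aside, the RAG logic Leslie describes can be sketched in a few lines of Python. This is a hypothetical illustration – the function name is invented; only the thresholds (green below 5%, amber 5–10%, red above 10%) and the fixed count of 28 come from the dialogue:

```python
# A hypothetical sketch of the RAG logic described in the dialogue;
# thresholds follow Leslie: green < 5%, amber 5-10%, red > 10%.

def rag_status(fail: int, count: int) -> str:
    """Classify one 4-week period by its Fail% against the arbitrary thresholds."""
    fail_pct = 100.0 * fail / count
    if fail_pct < 5.0:
        return "Green"
    if fail_pct <= 10.0:
        return "Amber"
    return "Red"

# With a fixed count of 28 discharges per period, Fail% can only take
# 29 discrete values - so one-decimal-place precision is illusory.
for fails in (1, 2, 3):
    print(fails, round(100.0 * fails / 28, 1), rag_status(fails, 28))
```

Note that one, two or three failures in a period are enough to step through all three colours – the classification is very coarse.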

<Bob> OK. So what is the niggle?

<Leslie>Each month when we are in the green we get no feedback – a deafening silence. Each month we are in amber we get a warning email.  Each month we are in the red we have to “go and explain ourselves” and provide a “back-on-track” plan.

<Bob> Let me guess – this feedback design is not helping much.

<Leslie> It is worse than that – it creates a perpetual sense of fear. The risk of breaching the target is distorting people’s priorities and their behaviour.

<Bob> Do you have any evidence of that?

<Leslie> Yes – but it is anecdotal.  There is a daily operational meeting and the highest priority topic is “Which patients are closest to the target length of stay and therefore need to have their discharge expedited?”.

<Bob> Ah yes.  The “target tail wagging the quality dog” problem. So what is your question?

<Leslie> How do we focus on the cause of the problem rather than the symptoms?  We want to be rid of the “fear of the stick”.

<Bob> OK. What you have here is a very common system design flaw. It is called a DRAT.

<Leslie> DRAT?

<Bob> “Delusional Ratio and Arbitrary Target”.

<Leslie> Ha! That sounds spot on!  “DRAT” is what we say every time we miss the target!

<Bob> Indeed.  So first plot this yield data as a time series chart.

<Leslie> Here we go.

<Bob>Good. I see you have added the cut-off thresholds for the RAG chart. These 5% and 10% thresholds are arbitrary and the data shows your current system is unable to meet them. Your design looks incapable.

<Leslie>Yes – and it also shows that the % expressed to one decimal place is meaningless because there are limited possibilities for the value.

<Bob> Yes. These are two reasons that this is a Delusional Ratio; there are quite a few more.

<Leslie> OK and if I plot this as an Individuals chart I can see that this variation is not exceptional.

<Bob> Careful Leslie. It can be dangerous to do this: an Individuals chart of aggregate yield becomes quite insensitive with aggregated counts of relatively rare events, a small number of levels that go down to zero, and a limited number of points.  The SPC zealots compound the problem: plotting this data as a C-chart or a P-chart makes no difference.

This is all the effect of the common practice of applying an arbitrary performance target, then counting the failures and using that as a means of control.

It is poor feedback loop design – but a depressingly common one.
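For reference, the natural process limits on an Individuals (XmR) chart are conventionally calculated as the mean plus or minus 2.66 times the average moving range. A minimal sketch, using a short made-up series of failure counts rather than Leslie's real data:

```python
# Illustrative only: XmR natural process limits computed from a short,
# invented series of failure counts (not the real data in the dialogue).

def xmr_limits(values):
    """Natural process limits for an Individuals (XmR) chart:
    mean +/- 2.66 x average moving range (2.66 is the standard constant)."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

lo, hi = xmr_limits([3, 2, 4, 1, 3, 2, 2, 3])
print(round(lo, 2), round(hi, 2))
```

Note how the lower limit comes out below zero – impossible for a count of failures – which is one symptom of the insensitivity Bob warns about when the levels go down to zero.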

<Leslie> So what do we do? What is a better design?

<Bob> First ask what the purpose of the feedback is?

<Leslie> To reduce the number of beds and save money by forcing down the length of stay so that the bed-day load is reduced and so we can do the same activity with fewer beds and at the same time avoid cancellations.

<Bob> OK. That sounds reasonable from the perspective of a tax-payer and a patient. It would also be a more productive design.

<Leslie> I agree but it seems to be having the opposite effect.  We are focusing on avoiding breaches so much that other patients get delayed who could have gone home sooner and we end up with more patients to expedite. It is like a vicious circle.  And every time we fail we get whacked with the RAG stick again. It is very demoralizing and it generates a lot of resentment and conflict. That is not good for anyone – least of all the patients.

<Bob>Yes.  That is the usual effect of a DRAT design. Remember that senior managers have not been trained in process improvement-by-design either so blaming them is also counter-productive.  We need to go back to the raw data. Can you plot actual LOS by patient in order of discharge as a run chart?

DRAT_04

<Bob> OK – is the maximum LOS target 8 days?

<Leslie> Yes – and this shows  we are meeting it most of the time.  But it is only with a huge amount of effort.

<Bob> Do you know where 8 days came from?

<Leslie> I think it was the historical average divided by 85% – someone read in a book somewhere that 85%  average occupancy was optimum and put 2 and 2 together.

<Bob> Oh dear! The “85% Occupancy is Best” myth combined with the “Flaw of Averages” trap. Never mind – let me explain the reasons why it is invalid to do this.

<Leslie> Yes please!

<Bob> First plot the data as a run chart and  as a histogram – do not plot the natural process limits yet as you have done. We need to do some validity checks first.

DRAT_05

<Leslie> Here you go.

<Bob> What do you see?

<Leslie> The histogram  has more than one peak – and there is a big one sitting just under the target.

<Bob>Yes. This is called the “Horned Gaussian” and is the characteristic pattern of an arbitrary lead-time target that is distorting the behaviour of the system.  Just as you have described subjectively. There is a smaller peak with a mode of 4 days and there are a few very long length-of-stay outliers.  This multi-modal pattern means that the mean and standard deviation of this data are meaningless numbers, as are any numbers derived from them. It is like having a bag of mixed fruit and then setting a maximum allowable size for an unspecified piece of fruit. Meaningless.
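The “meaningless mean” point can be demonstrated with a toy simulation. All numbers below are invented for illustration – they are not the real LOS data – but the shape matches Bob's description: a short-stay group plus a “horn” piled up just under the 8-day target:

```python
import random

# A toy simulation (all numbers invented) of the "Horned Gaussian":
# a short-stay group and a group piled up just under the 8-day target.
random.seed(42)
short_stay = [random.gauss(4, 1) for _ in range(80)]       # mode around 4 days
near_target = [random.gauss(7.5, 0.5) for _ in range(80)]  # horn under the target
mixed = short_stay + near_target

mean = sum(mixed) / len(mixed)
sd = (sum((x - mean) ** 2 for x in mixed) / (len(mixed) - 1)) ** 0.5
# The overall mean sits between the two modes and describes neither group.
print(f"mean={mean:.1f}  sd={sd:.1f}")
```

The summary statistics land between the two peaks, where almost no patients actually are – which is why any target or capacity plan derived from them is delusional.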

<Leslie> And the cases causing the breaches are completely different and could never realistically achieve that target! So we are effectively being randomly beaten with a stick. That is certainly how it feels.

<Bob> They are certainly different but you cannot yet assume that their longer LOS is inevitable. This chart just says – “go and have a look at these specific cases for a possible cause for the difference“.

<Leslie> OK … so if they are from a different system and I exclude them from the analysis what happens?

<Bob> It will not change reality.  The current design of  this process may not be capable of delivering an 8 day upper limit for the LOS.  Imposing  a DRAT does not help – it actually makes the design worse! As you can see. Only removing the DRAT will remove the distortion and reveal the underlying process behaviour.

<Leslie> So what do we do? There is no way that will happen in the current chaos!

<Bob> Apply the 6M Design® method. Map, Measure and Model it. Understand how it is behaving as it is then design out all the causes of longer LOS and that way deliver with a shorter and less variable LOS. Your chart shows that your process is stable.  That means you have enough flow capacity – so look at the policies. Draw on all your FISH training. That way you achieve your common purpose, and the big nasty stick goes away, and everyone feels better. And in the process you will demonstrate that there is a better feedback design than DRATs and RAGs. A win-win-win design.

<Leslie> OK. That makes complete sense. Thanks Bob!  But what you have described is not part of the FISH course.

<Bob> You are right. It is part of the ISP training that comes after FISH. Improvement Science Practitioner.

<Leslie> I think we will need to get a few more people trained in the theory, techniques and tools of Improvement Science.

<Bob> That would appear to be the case. They will need a real example to see what is possible.

<Leslie> OK. I am on the case!

It is surprising how competitive most people are. We are constantly comparing ourselves with others and using what we find to decide what to do next. Groan or Gloat.  Chase or Cruise.

This is because we are social animals.  Comparing with others is hard-wired into us. We have little choice.

But our natural competitive behaviour can become counter-productive when we learn that we can look better-by-comparison if we block or trip-up our competitors.  In a vainglorious attempt to make ourselves look better-by-comparison we spike the wheels of our competitors’ chariots.  We fight dirty.

It is not usually openly aggressive fighting.  Most of our spiking is done passively. Often by deliberately not doing something.  A deliberate act of omission.  And if we are challenged we often justify our act of omission by claiming we were too busy.

This habitual passive-aggressive learned behaviour is not only toxic to improvement, it creates a toxic culture too. It is toxic to everything.

And it ensures that we stay stuck in The Miserable Job Swamp.  It is a bad design.

So we need a better one.

One idea is to eliminate competition.  This sounds plausible but it does not work. We are hard-wired to compete because it has proven to be a very effective long term survival strategy. The non-competitive have not survived.  To be deliberately non-competitive will guarantee mediocrity and future failure.

A better design is to leverage our competitive nature and this is surprisingly easy to do.

We flip the “battle” into a “race”.

To do that we need:

1) A clear destination – a shared common purpose – that can be measured. We need to be able to plot our progress using objective evidence.

2) A proven, safe, effective and efficient route plan to get us to our destination.

3) A required arrival time that is realistic.  Open-ended time-scales do not work.

4) Regular feedback to measure our individual progress and to compare ourselves with others.  Selective feedback is ineffective.  Secrecy or anonymous feedback is counter-productive at best and toxic at worst.

5) The ability to re-invest our savings on all three win-win-win dimensions: emotional, temporal and financial.  This fuels the engine of improvement. Us.

The rest just happens – but not by magic – it happens because this is a better Improvement-by-Design.

Many barriers to improvement are invisible.

This is because they are caused by what is not present rather than what is.  They are gaps or omissions.

Some gaps are blindingly obvious.  This is because we expect to see something there so we notice when it is missing. We would notice if a rope bridge across a chasm were missing because only the end posts are visible.

Many gaps are not obvious. This is because we have no experience or expectation.  The gap is invisible.  We are blind to the omission.

These are the gaps that we accidentally stumble into. Such as a gap in our knowledge and understanding that we cannot see. These are the gaps that create the fear of failure. And the fear is especially real because the gap is invisible and we only know when it is too late.

It is like walking across an emotional minefield.  At any moment we could step on an ignorance mine and our confidence would be blasted into fragments.

So our natural and reasonable reaction is to stay outside the emotional minefield and inside our comfort zones – where we feel safe.  We give up trying to learn and trying to improve. Every-one hopes that Some-one or Any-one will do it for us.  No-one does.

The path to Improvement is always across an emotional minefield because improvement implies unlearning. So we need a better design than blundering about hoping not to fall into an invisible gap.  We need a safer design.

There are a number of options:

Option 1. Ask someone who knows the way across the minefield and can demonstrate it. Someone who knows where the mines are and knows how to avoid them. Someone to tell us where to step and where not to.

Option 2. Clear a new path and mark it clearly so others can trust that it is safe.  Remove the ignorance mines. Find and fill the gaps in the knowledge map.

Option 1 is quicker but it leaves the ignorance mines in place.  So sooner or later someone will step on one. Boom!

We need to be able to do Option 2.

The obvious  strategy for Option 2 is to clear the ignorance mines.  We could do this by deliberately blundering about setting off the mines. We could adopt the burn-and-scrape or learn-from-mistakes approach.

Or we could detect, defuse and remove them.

The former requires people willing to take emotional risks; the latter does not require such a sacrifice.

And “learn-by-mistakes” only works if people are able to make mistakes visibly so everyone can learn. In an adversarial, competitive, distrustful context this can not happen: and the result is usually for the unwilling troops to be forced into the minefield with the threat of a firing-squad if they do not!

And where a mistake implies irreversible harm it is not acceptable to learn that way. Mistakes are covered up. The ignorance mines are re-set for the next hapless victim to step on. The emotional carnage continues. Any chance of sustained, system-wide improvement is blocked.

So in a low-trust cultural context the detect-defuse-and-remove strategy is the safer option.

And this requires a proactive approach to finding the gaps in understanding; a proactive approach to filling the knowledge holes; and a proactive approach to sharing what was learned.

Or we could ask someone who knows where the ignorance mines are and work our way through finding and filling our knowledge gaps. By that means any of us can build a safe, effective and efficient path to sustainable improvement.

And the person to ask is someone who can demonstrate a portfolio of improvement in practice – an experienced Improvement Science Practitioner.

And we can all learn to become an ISP and then guide others across their own emotional minefields.

All we need to do is take the first step on a well-trodden path to sustained improvement.

It is almost autumn again.  The new school year brings anticipation and excitement. The evenings are drawing in and there is a refreshing chill in the early morning air.

This is the time of year for fudge.

Alas not the yummy sweet sort that Grandma cooked up and gave out as treats.

In healthcare we are already preparing the Winter Fudge – the annual guessing game of attempting to survive the Winter Pressures. By fudging the issues.

This year with three landmark Safety and Quality reports under our belts we have more at stake than ever … yet we seem as ill prepared as usual. Mr Francis, Prof Keogh and Dr Berwick have collectively exhorted us to pull up our socks.

So let us explore how and why we resort to fudging the issues.

Watch the animation of a highly simplified emergency department and follow the thoughts of the manager. You can pause, rewind, and replay as much as you like.  Follow the apparently flawless logic – it is very compelling. The exercise is deliberately simplified to eliminate wriggle room. But it is valid because the behaviour is defined by the Laws of Physics – and they are not negotiable.

The problem was a combination of several planning flaws – two in particular.

First is the “Flaw of Averages” which is where the past performance-over-time is boiled down to one number. An average. And that is then used to predict precise future behaviour. This is a very big mistake.

The second is the “Flaw of Fudge Factors” which is an attempt to mitigate the effects of the first error by fudging the answer – by adding an arbitrary “safety margin”.

This pseudo-scientific sleight-of-hand may polish the planning rhetoric and render it more plausible to an unsuspecting Board – but it does not fool Reality.

In reality the flawed design failed – as the animation dramatically demonstrated.  The simulated patients came to harm. Unintended harm to be sure – but harm nevertheless.

So what is the alternative?

The alternative is to learn how to avoid Sir Flaw of Averages and his slippery friend Mr Fudge Factor.

And learning how to do that is possible … it is called Improvement Science.

And you can start right now … click HERE.

“Take the bull by the horns” is a phrase that is often heard in Improvement circles.

The metaphor implies that the system – the bull – is an unpredictable, aggressive, wicked, wild animal with dangerous sharp horns.

“Unpredictable” and “dangerous” are certainly what the newspapers tell us the NHS system is – and this generates fear.  Fear for our safety drives us to avoid the bad-tempered beast.

It creates fear in the hearts of the very people the NHS is there to serve – the public.  It is not the intended outcome.

Bullish” is a phrase we use for “aggressive behaviour” and it is disappointing to see those accountable behave in a bullish manner – aggressive, unpredictable and dangerous.

We are taught that bulls are to be avoided and we are told not to wave red flags at them! For our own safety.

But that is exactly what must happen for Improvement to flourish.  We all need regular glimpses of the Red Flag of Reality.  It is called constructive feedback – but it still feels uncomfortable.  Our natural reaction to being shocked out of our complacency is to get angry and to swat the red-flag waver.  And the more powerful we are, the sharper our horns are, the more swatting we can do and the more fear we can generate.  Often intentionally.

So inexperienced improvement zealots are prodded into “taking the executive bull by the horns” – but it is poor advice.

Improvement Scientists are not bull-fighters. They are not fearless champions who put themselves at personal risk for personal glory and the entertainment of others.  That is what Rescuers do. The fire-fighters; the quick-fixers; the burned-toast-scrapers; the progress-chasers; and the self-appointed-experts. And they all get gored by an angry bull sooner or later.  Which is what the crowd came to see – Bull Fighter Blood and Guts!

So attempting to slay the wicked bullish system is not a realistic option.

What about taming it?

This is the game of Bucking Bronco.  You attach yourself to the bronco like glue and wear it down as it tries to throw you off and trample you under hoof. You need strength, agility, resilience and persistence. All admirable qualities. Eventually the exhausted beast gives in and does what it is told. It is now tamed. You have broken its spirit.  The stallion is no longer a passionate leader; it is just a passive follower. It has become a Victim.

Improvement requires spirit – lots of it.

Improvement requires the spirit-of-courage to challenge dogma and complacency.
Improvement requires the spirit-of-curiosity to seek out the unknown unknowns.
Improvement requires the spirit-of-bravery to take calculated risks.
Improvement requires the spirit-of-action to make  the changes needed to deliver the improvements.
Improvement requires the spirit-of-generosity to share new knowledge, understanding and wisdom.

So taming the wicked bull is not going to deliver sustained improvement.  It will only achieve stable mediocrity.

So what next?

What about asking someone who has actually done it – actually improved something?

Good idea! Who?

What about someone like Don Berwick – founder of the Institute of Healthcare Improvement in the USA?

Excellent idea! We will ask him to come and diagnose the disease in our system – the one that led to the Mid-Staffordshire septic safety carbuncle, and the nasty quality rash in 14 Trusts that Professor Sir Bruce Keogh KBE uncovered when he lifted the bed sheet.

[Click HERE to see Dr Bruce's investigation].

We need a second opinion because the disease goes much deeper – and we need it from a credible, affable, independent, experienced expert. Like Dr Don B.

So Dr Don has popped over the pond,  examined the patient, formulated his diagnosis and delivered his prescription.

[Click HERE to read Dr Don's prescription].

Of course if you ask two experts the same question you get two slightly different answers.  If you ask ten you get ten.  This is because if there was only one answer that everyone agreed on then there would be no problem, no confusion, and no need for experts. The experts know this of course. It is not in their interest to agree completely.

One bit of good news is that the reports are getting shorter.  Mr Robert’s report on the failings of one hospital is huge and has 290 recommendations.  A bit of a bucketful.  Dr Bruce’s report is specific to the Naughty Fourteen who have strayed outside the statistical white lines of acceptable mediocrity.

Dr Don’s is even shorter and it has just 10 recommendations. One for each finger – so easy to remember.

1. The NHS should continually and forever reduce patient harm by embracing wholeheartedly an ethic of learning.

2. All leaders concerned with NHS healthcare – political, regulatory, governance, executive, clinical and advocacy – should place quality of care in general, and patient safety in particular, at the top of their priorities for investment, inquiry, improvement, regular reporting, encouragement and support.

3. Patients and their carers should be present, powerful and involved at all levels of healthcare organisations from wards to the boards of Trusts.

4. Government, Health Education England and NHS England should assure that sufficient staff are available to meet the NHS’s needs now and in the future. Healthcare organisations should ensure that staff are present in appropriate numbers to provide safe care at all times and are well-supported.

5. Mastery of quality and patient safety sciences and practices should be part of initial preparation and lifelong education of all health care professionals, including managers and executives.

6. The NHS should become a learning organisation. Its leaders should create and support the capability for learning, and therefore change, at scale, within the NHS.

7. Transparency should be complete, timely and unequivocal. All data on quality and safety, whether assembled by government, organisations, or professional societies, should be shared in a timely fashion with all parties who want it, including, in accessible form, with the public.

8. All organisations should seek out the patient and carer voice as an essential asset in monitoring the safety and quality of care.

9. Supervisory and regulatory systems should be simple and clear. They should avoid diffusion of responsibility. They should be respectful of the goodwill and sound intention of the vast majority of staff. All incentives should point in the same direction.

10. We support responsive regulation of organisations, with a hierarchy of responses. Recourse to criminal sanctions should be extremely rare, and should function primarily as a deterrent to wilful or reckless neglect or mistreatment.

The meat in the sandwich is recommendations 5 and 6, which together say “Learn Improvement Science”.

And what happens when we commit and engage in that learning journey?

Steve Peak has described what happens in this very blog. It is called the OH effect.

OH stands for “Obvious-in-Hindsight”.

Obvious means “understandable” which implies visible, sensible, rational, doable and teachable.

Hindsight means “reflection” which implies having done something and learning from reality.

So if you would like to have a sip of Dr Don’s medicine and want to get started on the path to helping to create a healthier healthcare system you can do so right now by learning how to FISH – the first step to becoming an Improvement Science Practitioner.

The good news is that this medicine is neither dangerous nor nasty tasting – it is actually fun!

And that means it is OK for everyone – clinicians, managers, patients, carers and politicians.  All of us.

 

Fish!

One of the reasons that many people find improvement difficult is because they are told that they will undergo a “transformational” change and they will have a “Road-To-Damascus Moment” when the “penny drops” and the “light bulb goes on”.

This is rubbish advice.

The unstated implication is that “and if you do not then there is something wrong with you“.

There is no Improvementologist I know who ever had a massive “ah ha” moment – the insight was gained gradually, bit-by-bit, over a long period of time.

And that is for a good reason.

We are all very weak-willed.

We all very easily slip back into Victim role, and I’m Not OK or They’re not OK thinking.  Especially when bad news is so plentiful and so cheap.

The “Eureka Mantra” does not work when trying to improve physical health by losing weight, so why should it work for anything else?  Diets do not work – if they did we would all be a healthy weight.

A few months ago I ran an experiment – to see if I could lose a significant amount of weight without much effort – certainly without doing any extra exercise.  How?  By “not burning the toast” in the first place. By ingesting fewer carbs.

That experiment has shown it is possible – I have the evidence – hard facts not just fuzzy feelings.

The most surprising lesson was that all I had to do was to reduce carb intake for two days a week. I just skipped the sugar, biscuits, bread, potatoes, crisps etc for two days a week. It was not difficult. In fact it was so easy I am not surprised that the Five-and-Two weight reduction plan is going viral.

So I wonder what would happen if we try the same experiment for other areas of improvement – psychological.  What if we just change the “diet” from “carbs” to “can’ts”.  What if for two days a week we just restrict our “can’t” intake.  What if we turn down the volume of our inner voice that tells us what we can’t do?  What if we just ignore the people whose response to every improvement suggestion is “yes …but”?  What if we just do this and measure what happens?

For only two days a week though.

I’m not interested in being suddenly transformed – a gradual metamorphosis is OK by me.  My intuition is that it will be important to maintain a normal diet of whining and denying for the other five days – because I need variation and I do seem to get pleasure from wallowing in my own toxic emotional swamp.

That sounds doable.

I could probably maintain a “negative thought filter” for two days a week – and then return to my curmudgeonly comfort zone for the other five.

I’ll need to choose which days wisely though …  and I had better wear a special hat, tie or badge that indicates which mode I am in – a pessimistic Black Hat five days a week and an optimistic Yellow Hat for the other two perhaps.

I wonder if anyone will notice?

And the idea of choosing your attitude for a day reminds me of a little book called FISH!

It’s another sunny day and the laptop continues to perform well in the garden!

Yippee! I have completed my Foundations in Improvement Science for Healthcare (FISH©) course. The final stages of the course have taken me through visual presentation of system data, some worked examples (very useful) and of course the final assessment.  The key elements of the course came back to me easily for the assessment test which I always think is an indication of both enjoyment and how well the material has been presented.

My mentor says I have done more than enough to progress to the next stage of my improvement science journey.  Practitioner level now awaits. It is when it really gets serious and you take the learning so far and start applying it in very practical ways.  My goal is to become ‘safe’ in the use of the tools and techniques, which will give me the confidence to help others learn these fantastic skills.  All very satisfying indeed.

The other day I was at Keele University doing a session on change management to a group of specialist registrars.  We were exploring the key steps to follow if you are going to improve your approach to change management.  It struck me at the time that we need to make our approaches to potentially complex scenarios habit-forming.  In other words, lots and lots of research on change management has been conducted, so let’s use it rather than stumbling through.  Similarly, improvement science gives you a set of disciplines and tools to support and deliver changes in the design of our healthcare systems.  What we have to do is get to the point where it is a widespread habit to approach our healthcare systems and processes using this knowledge.  I am absolutely convinced patients will feel the difference and the ‘Groundhog Day’ operational struggles can be approached with renewed vigour and produce different outcomes: improved quality, motivation and productivity.

So bring on the next stage of my journey as a mentor to other FISH participants, learning to be a practitioner and being able to apply this knowledge habitually.

The sun is still out!

labyrinth

The mind is a labyrinth of knowledge – a maze with many twists, turns, joins, splits, tunnels, bridges, crevasses and caverns.

Some paths lead to dead ends; others take a long way around but get to the destination in the end.

The shortest path is not obvious – even in hindsight.

And there is another challenge … no two individuals share the same knowledge labyrinth.  An obvious path between problem and solution for one person may be invisible or incomprehensible to another.

But the greatest challenge, and the greatest opportunity, is that our labyrinth of knowledge can change and does change continuously … through learning.

So if one person can see a path of improvement between current problem and future solution, then how can they guide another who cannot?

This is a challenge that an Improvement Scientist faces every day.

It is not effective to just give a list of instructions – “To get from problem to solution follow this path”.  The path may not exist in the recipient’s knowledge labyrinth. If they just follow the instructions they will come up against a wall or fall into a hole.

It is not realistic to expect the learner to replace their labyrinth of knowledge with that of the teacher – to clone the teacher’s way of thinking. Just reciting the Words of the Guru is not improvement – it is Zealotry.

One way is for a guide to describe their own labyrinth of knowledge.  To lay it out in a way that any other can explore.  A way that is fully signposted, with explanations and maps that the explorer can refer to as they go.  A template against which they can compare their own knowledge labyrinth to reveal the similarities and the differences.

No two people will explore a knowledge labyrinth in the same way … but that does not matter. So long as they are able to uncover any assumptions that misguide them and any gaps in their knowledge that block their progress.  With that feedback they can update their own mental signposts and create safe, effective and efficient paths that they can follow in future at will.

And that is how the online FISH training is designed.  It is the knowledge labyrinth of an experienced Improvement Scientist that can be explored online.

And it keeps changing  …

growing_blue_vine_scrolling_down_150_wht_247

Improvement Science is a collaborative community activity.

And word about what is possible spreads through The Grape Vine.

And it spreads in a particular way – through stories – personal accounts of “ah ha” moments.

Those “ah ha” moments are generated by a process – a process designed to generate them.

And that process is called the Nerve Curve.  It is rather like an emotional roller-coaster ride.

The Nerve Curve starts comfortably enough with a few gentle ups, downs, twists and turns – just to settle everyone in their seats.

Then it picks up pace and you have to hold on a bit tighter.

Then comes the Challenge – an interactive group-led improvement activity.  Something like the “Save the NHS Game“.

Then comes the Shock!  When the “intuitively obvious” and “collectively agreed” decisions and actions make the problem worse rather than better. The shock is magnified by learning that there is a solution – and that it was hidden from us. We did not know what we did not know. We were blissfully ignorant.

Now we are not. We are painfully aware of what we did not know.

Impossibility_Hypothesis

Then we head for Denial like a scared rabbit – but the cars are moving fast now and there is no stopping or going back.  We cannot get off – we cannot go back – so we cover our eyes and ears to block out the New Reality.

It does not work very well.  We quickly realize that it is safer to be able to see where we are heading so we can prepare for what is coming.  An emotional brick wall looms up in front of us – and written on it are the words “Impossibility Hypothesis”.  And we are heading right at it. A new emotion bubbles to the surface.

Anger.

Whose ****** idea was it to get on this infernal contraption?  Why weren’t we warned?  Who is in charge? Who is to blame?

That does not work very well either. So we try a different strategy.

Bargaining.

We desperately want to limit the damage to our comfort zone and confidence so we try negotiating a compromise, finding an exit option, and looking for the emergency stop cord.  There isn’t one. Reality is relentless and ruthless. Uncompromising.

Now we are really scared and with no viable options for staying where we were and no credible options for avoiding a catastrophe we are emotionally stuck – and we start to sink into Depression which is the path to Hopelessness, Apathy and Despair (HAD). We have run out of options. And we cannot stay in the past.

But the seed of innovation has been sown.  A hidden problem has been uncovered and an unknown option has been demonstrated. The “Way Over The Impossible-for-Me barrier” is clearly signposted. The light at the end of the tunnel has been switched on. We have a choice.

And at the last second we sweep over the Can’t Do Barrier and when we look back it has disappeared – it was a mirage – a perceptual trick our Intuition was playing on us. It only existed in our minds.

That is the “Ah ha”.

And now we can see a way forward – and how with support, guidance, encouragement and effort we can climb up Acceptance Mountain to Resolution Peak. It will not be quick.  It will not be comfortable.  We have some unlearning to do. A few old assumptions and habits that need to be challenged, dismantled and re-designed.

It is hard work but it is surprisingly invigorating as a previously unrecognized inner well of hope, enthusiasm and confidence is tapped. We surprise ourselves with what we can do already.  We realize that the only thing that was actually blocking us before was our belief it was too difficult. And the lack of a guide.

And then we share our “ah ha” with others through The Grape Vine.

Here is a shared “ah ha” from this week:

The Post It® Note exercise was my biggest “Aha” moment on a combination of levels. The aspect that particularly resonated was the range of behaviours and responses from the different pairings, an aspect that would have been hidden had I done the exercise on my own. I’m still smiling at the simple elegance of this particular exercise and the depth of learning I am getting from it. [PD, Consultant Paediatrician. 15th July 2013].

The Post It® Note exercise is part of the FISH course … you can try it yourself here

This blog is part of The Grape Vine.

The Nerve Curve is ready and waiting to take you on an exciting ride through Improvement Science!

figure_juggling_balls_150_wht_4301

Improvement Science is like three-ball juggling.

And there are different sets of three things that an Improvementologist needs to juggle:

the Quality-Flow-Cost set and
the Governance-Operations-Finance set and
the Customer-Staff-Organization set.

But the problem with juggling is that it looks very difficult to do – so almost impossible to learn – so we do not try.  We give up before we start. And if we are foolhardy enough to try (by teaching ourselves using the suck-it-and-see or trial-and-error method) then we drop all the balls very quickly. We succeed in reinforcing our impossible-for-me belief with evidence.  It is a self-fulfilling prophecy. Only the most tenacious, self-motivated and confident people succeed – which further reinforces the I-Can’t-Do belief of everyone else.

The problem here is that we are making an Error of Omission.

We are omitting to ask ourselves two basic questions: “How does a juggler learn their art?” and “How long does it take?”

The answer is surprising.

It is possible for just about anyone to learn to juggle in about 10 minutes. Yes – TEN MINUTES.


Skeptical?  Sure you are – if it was that easy we would all be jugglers.  That is the “I Can’t Do” belief talking. Let us silence that confidence-sapping voice once and for all.

Here is how …

You do need to have at least one working arm and one working eyeball and something connecting them … and it is a bit easier with two working arms and two working eyeballs and something connecting them.

And you need something to juggle – fruit is quite good – oranges and apples are about the right size, shape, weight and consistency (and you can eat the evidence later too).

And you need something else.

You need someone to teach you.

And that someone must be able to juggle and more importantly they must be able to teach someone else how to juggle which is a completely different skill.

juggling_at_Keele_June_2013

Those are the necessary-and-sufficient requirements to learn to juggle in 10 minutes.

The recent picture shows an apprentice Improvement Scientist at the “two orange” stage – just about ready to move to the “three orange” stage.

Exactly the same is true of learning the Improvement Science juggling trick.

The ability to improve Quality, Flow and Cost at the same time.

The ability to align Governance, Operations and Finance into a win-win-win synergistic system.

The ability to delight customers, motivate staff and support leaders at the same time.


And the trick to learning to juggle is called step-by-step unlearning. It is counter-intuitive.

To learn to juggle you just “unlearn” what is stopping you from juggling. You unlearn the unconscious assumptions and habits that are getting in the way.

And that is why you need a teacher who knows what needs to be unlearned and how to help you do it.

fish
And for an apprentice Improvement Scientist the first step on the Unlearning Journey is FISH.

It’s a sunny day and I realize that my laptop screen is viewable whilst sitting in the garden!

I am now three quarters of the way through my Foundations in Improvement Science for Healthcare (FISH©) course.  It has been a revelation to say the least.  The last time I blogged on my progress I remarked that memories of operational struggles whilst working within my various senior leadership roles have become clearer as to why we had some success and plenty of failure in terms of sustainable difference around the three key wins.  These are improved quality, productivity and motivation.  This feeling has most definitely continued!

The course so far has taken me through the general concepts using the Three Wins Design®, plenty of the people stuff that is fundamental to success and, on the last few ‘study’ occasions, the more technical stuff of what it takes to understand how a system is functioning. In other words: how to build up a picture of the root causes of the outcomes from the system; how to analyse the data and present it so that it becomes information; and finally how potential design changes can be tested to reveal how the root causes can be reduced to achieve a balancing act around the three wins.  So I am becoming more confident in the use of value stream maps that set out how work is done and how resources are used, and in presenting them on a process template.  What this does is remove the rhetoric, intuition and, frankly, guesswork that is all too common when tackling operational challenges.  The notion of cycle times that can help to explain why outpatient clinics, day case units etc can be a less than positive experience for patients – by simply setting out the process on a Gantt chart – is wonderful to see, as it changes perceived complexity into a simple picture.

I am feeling more motivated than ever to complete the course as the power to resolve challenges becomes more and more obvious.  This is despite the fact I am being tested to grasp the concepts of schedules, standard work, hand offs, Pareto analysis, the 80:20 heuristic and how to present demand, workloads and resources in a consistent manner.  This is not easy for somebody who does not naturally occupy this type of space!

So why the Chimp in me?  Whilst completing the course I am reading an interesting book called The Chimp Paradox by Dr Steve Peters.  He sets out his thoughts on how the brain functions and how to manage your chimp.  Your chimp is the emotional part of the brain that will tell your human, or logical, part that you can’t do something, or ask why you would want to learn something new that could make you look daft.   Well, my chimp is feeling settled and untroubled at the moment because of the combination of the achievement and the huge potential I see in using improvement science.  All this adds up to: I want to learn some more of this stuff.  Oh, and the sun is still shining!

Steve Peak

Improvement-by-Design is not the same as Improvement-by-Desire.

Improvement-by-Design has a clear destination and a design that we know can get us there because we have tested it before we implement it.

Improvement-by-Desire has a vague direction and no design – we do not know if the path we choose will take us in the direction we desire to go. We cannot see the twists and turns, the unknown decisions, the forks, the loops, and the dead-ends. We expect to discover those along the way. It is an exercise in hope.

So where pessimists and skeptics dominate the debate then Improvement-by-Design is a safer strategy.

Just over seven weeks ago I started an Improvement-by-Design project – a personal one. The destination was clear: to get my BMI (body mass index) into a “healthy” range by reducing weight by about 5 kg.  The design was clear too – to reduce energy input rather than increase energy output. It is a tried-and-tested method – “avoid burning the toast”.  The physical and physiological model predicted that the goal was achievable in 6 to 8 weeks.

So what has happened?

To answer that question requires two time-series charts. The input chart of calories ingested and the output chart of weight. This is Step 5 of the 6M Design® sequence.

Energy_Weight_Model

Remember that there was another parameter in this personal Energy-Weight system: the daily energy expended.

But that is very difficult to measure accurately – so I could not do that.

What I could do was to estimate the actual energy expended from the model of the system using the measured effect of the change. But that is straying into the Department of Improvement Science Nerds. Let us stay in the real world a bit longer.

Here is the energy input chart …

SRD_EnergyIn_XmR

It shows an average calorie intake of 1500 kcal – the estimated required value to achieve the weight loss given the assumptions of the physiological model. It also shows a wide day-to-day variation.  It does not show any signal flags (red dots) so an inexperienced Improvementologist might conclude that this is just random noise.

It is not.  The data is not homogeneous. There is a signal in the system – a deliberate design change – and without that context it is impossible to correctly interpret the chart.

Remember Rule #1: Data without context is meaningless.

The deliberate process design change was to reduce calorie intake for just two days per week by omitting unnecessary Hi-Cal treats – like those nice-but-naughty Chocolate Hobnobs. But which two days varied – so there is no obvious repeating pattern in the chart. And the intake on all days varied – there were a few meals out and some BBQ action.

To separate out these two parts of the voice-of-the-process we need to rationally group the data into the Lo-cal days (F) and the OK-cal days (N).
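The rational grouping idea can be sketched in a few lines of Python. This is a minimal illustration with made-up numbers, not the real data behind the charts; the 2.66 constant is the standard XmR factor that converts the average moving range into natural process limits.

```python
def xmr_limits(values):
    """Centre line and natural process limits for an XmR (individuals)
    chart: mean +/- 2.66 * average moving range."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * avg_mr, mean, mean + 2.66 * avg_mr

# Illustrative daily intakes tagged as Lo-cal (F) or OK-cal (N) days.
intake = [(1800, "N"), (900, "F"), (1750, "N"), (1900, "N"),
          (850, "F"), (1700, "N"), (1850, "N")]

# Rational grouping: compute limits separately for each group
# instead of mixing two different signals into one meaningless chart.
for group in ("F", "N"):
    values = [kcal for kcal, day in intake if day == group]
    lo, mean, hi = xmr_limits(values)
    print(f"{group}: mean={mean:.0f} kcal, limits=({lo:.0f}, {hi:.0f})")
```

Computing the limits per group is what separates the two voices of the process; a point outside its own group's limits (like the Thursday beer-and-peanuts flag) is then a genuine signal, not noise.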

SRD_EnergyIn_Grouped_XmR

The grouped BaseLine© chart tells a different story.  The two groups clearly have a different average and both have a lower variation-over-time than the meaningless mixed-up chart.

And we can now see a flag – on the second F day. That is a prompt for an “investigation” which revealed: will-power failure.  Thursday evening beer and peanuts! The counter measure was to avoid Lo-cal on a Thursday!

What we are seeing here is the fifth step of 6M Design® exercise  – the Monitor step.

And as well as monitoring the factor we are changing – the cause;  we also monitor the factor we want to influence – the effect.

The effect here is weight. And our design includes a way of monitoring that – the daily weighing.

SRD_WeightOut_XmR

The output metric BaseLine© chart – weight – shows a very different pattern. It is described as “unstable” because there are clusters of flags (red dots) – some at the start and some at the end. The direction of the instability is “falling” – which is the intended outcome.

So we have robust, statistically valid evidence that our modified design is working.

The weight is falling so the energy going in must be less than the energy being put out. I am burning off the excess lard and without doing any extra exercise.  The physics of the system mandate that this is the only explanation. And that was my design specification.

So that is good. Our design is working – but is it working as we designed?  Does observation match prediction? This is Improvement-by-Design.

Remember that we had to estimate the other parameter to our model – the average daily energy output – and we guessed a value of 2400 kcal per day using generic published data.  Now I can refine the model using my specific measured change in weight – and I can work backwards to calculate the third parameter.  And when I did that the number came out at 2300 kcal per day.  Not a huge difference – the equivalent of one yummy Chocolate Hobnob a day – but the effect is cumulative.  Over the 53 days of the 6M Design® project so far that would be a 5300 kcal difference – about 0.6kg of useless blubber.
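The back-calculation can be shown with beer-mat arithmetic. A hedged sketch: the 7700 kcal-per-kg figure is the usual approximation for the energy content of body fat, and the 5.5 kg weight change is an illustrative value chosen to make the numbers line up with the story (the text does not state the exact measured loss).

```python
KCAL_PER_KG = 7700  # approximate energy content of 1 kg of body fat (assumption)

def implied_daily_output(mean_intake_kcal, weight_change_kg, days):
    """Work backwards from the measured weight change to the average
    daily energy output implied by the simple linear model."""
    total_deficit_kcal = -weight_change_kg * KCAL_PER_KG
    return mean_intake_kcal + total_deficit_kcal / days

# Illustrative: a 5.5 kg loss over 53 days on an average 1500 kcal/day
print(round(implied_daily_output(1500, -5.5, 53)))  # roughly 2300 kcal/day
```

The same function calibrates the model for any input data: measure the intake, measure the weight change, and the third parameter falls out.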

So now I have refined my personal energy-weight model using the new data and I can update my prediction and create a new chart – a Deviation from Aim chart.

SRD_WeightOut_DFA
This is the chart I need to watch to see if I am on the predicted track – and it too is unstable, and not in a good direction.  It shows that the deviation-from-aim is increasing over time and this is because my original guesstimate of an unmeasurable model parameter was too high.
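A Deviation-from-Aim series is simply the observed value minus the model's prediction at each point in time. A minimal sketch, using a hypothetical 77 kg starting weight and the parameters from the story, shows how a too-high guess for energy output makes the DFA drift steadily away from zero:

```python
KCAL_PER_KG = 7700  # approximate kcal per kg of body fat (assumption)

def predicted_weight(start_kg, intake_kcal, output_kcal, day):
    """Weight predicted by the linear energy-weight model after `day` days."""
    return start_kg + (intake_kcal - output_kcal) * day / KCAL_PER_KG

def deviation_from_aim(actuals, start_kg, intake_kcal, output_kcal):
    """DFA series: observed weight minus predicted weight, day by day."""
    return [w - predicted_weight(start_kg, intake_kcal, output_kcal, d)
            for d, w in enumerate(actuals)]

# Hypothetical data: the prediction used the guessed 2400 kcal/day output,
# but the 'real' system only puts out 2300 -- so the DFA climbs 100/7700 kg
# per day, a small but relentless drift away from the aim.
actual = [predicted_weight(77.0, 1500, 2300, d) for d in range(8)]
dfa = deviation_from_aim(actual, 77.0, 1500, 2400)
```

A stable DFA hugging zero means the design is delivering as predicted; a drifting DFA, as here, means a model parameter needs refining or the design needs tweaking.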

This means that my current design will not get me to where I want to be, when I want to be there. This tells me I need to tweak my design.  And I have a list of options.

1) I could adjust the target average calories per day down from 1500 to 1400 and cut out a few more calories; or

2) I could just keep doing what I am doing and accept that it will take me longer to get to the destination; or

3) I could do a bit of extra exercise to burn the extra 100 kcals a day off, or

4) I could do a bit of any or all of the three.

And because I am comparing experience with expectation using a DFA chart I will know very quickly if the design tweak is delivering.

And because some nice weather has finally arrived so the BBQ will be busy I have chosen to take longer to get there. I will enjoy the weather, have a few beers and some burgers. And that is OK. It is a perfectly reasonable design option – it is a rational and justifiable choice.

And I need to set my next destination – a weight of about 72 kg according to the BMI chart – and with my calibrated Energy-Weight model I will know exactly how to achieve that weight and how long it will take me. And I also know how to maintain it – by increasing my calorie intake. More beer and peanuts – maybe – or the occasional Chocolate Hobnob even. Hurrah! Win-win-win!


6MDesign

This real-life example illustrates 6M Design® in action and demonstrates that it is a generic framework.

The energy-weight model in this case is a very simple one that can be worked out on the back of a beer mat (which is what I did).

It is called a linear model because the relationship between calories-in and weight-out is approximately a straight line.

Most real-world systems are not like this. Inputs are not linearly related to outputs.  They are called non-linear systems: and that makes a BIG difference.

A very common error is to impose a “linear model” on a “non-linear system” and it is a recipe for disappointment and disaster.  We do that when we commit the Flaw of Averages error. We do it when we plot linear regression lines through time-series data. We do it when we extrapolate beyond the limits of our evidence.  We do it when we equate time with money.
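To see why, consider a textbook flow-science example (my choice of illustration, not one from the text): the time spent in a simple single-server queue grows non-linearly with utilisation, so a straight line fitted through low-load measurements badly under-predicts the delay at high load.

```python
def mm1_time_in_system(utilisation, service_time=1.0):
    """Mean time in a simple M/M/1 queue -- a textbook non-linear system.
    Valid only for 0 <= utilisation < 1."""
    return service_time / (1.0 - utilisation)

# Fit a straight line through two low-load measurements...
t40, t50 = mm1_time_in_system(0.40), mm1_time_in_system(0.50)
slope = (t50 - t40) / 0.10
linear_guess_at_90 = t50 + slope * 0.40  # extrapolate to 90% utilisation

# ...and compare with what the non-linear system actually does.
actual_at_90 = mm1_time_in_system(0.90)
print(linear_guess_at_90, actual_at_90)  # the straight line under-predicts badly
```

The linear extrapolation suggests a modest delay at 90% utilisation; the system actually delivers roughly three times that. This is the Flaw of Averages error in miniature.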

The danger of this error is that our linear model leads us to make unwise decisions and we actually make the problem worse – not better.  We then give up in frustration and label the problem as “impossible” or “wicked” or get sucked into to various forms of Snake Oil Sorcery.

The safer approach is to assume the system is non-linear and just let the voice of the system talk to us through our BaseLine© charts. The challenge for us is to learn to understand what the system is saying.

That is why the time-series charts are called System Behaviour Charts and that is why they are an essential component of Improvement-by-Design.

However – there is a step that must happen before this – and that is to get the Foundations in place. The foundation of knowledge on which we can build our new learning. That gap must be filled first.

And anyone who wants to invest in learning the foundations of improvement science can now do so at their own convenience and at their own pace because it is on-line …. and it is here.

fish

Anyone with much experience of change will testify that one of the hardest parts is sustaining the hard-won improvement.

The typical story is all too familiar – a big push for improvement, a dramatic improvement, congratulations and presentations then six months later it is back where it was before but worse. The cynics are feeding on the corpse of the dead change effort.

The cause of this recurrent nightmare is a simple error of omission.

Failure to complete the change sequence. Missing out the last and most important step. Step 6 – Maintain.

Regular readers may remember the story of the pharmacy project – where a skeptical department were surprised and delighted to discover that zero-cost improvement was achievable and that a win-win-win outcome was not an impossible dream.

Enough time has now passed to ask the question: “Was the improvement sustained?”

TTO_Yield_Nov12_Jun13

The BaseLine© chart above shows their daily performance data on their 2-hour turnaround target for to-take-out prescriptions. The weekends are excluded because the weekend system is different from the weekday system. The first split in the data in Jan 2013 is when the improvement-by-design change was made. Step 4 of the 6M Design® sequence – Modify.

There was an immediate and dramatic improvement in performance that was sustained for about six weeks – then it started to drift back. Bit by Bit.  The time-series chart flags it clearly.


So what happened next?

The 12-week review happened next – and it was done by the change leader – in this case the Inspector/Designer/Educator.  The review data plotted as a time-series chart revealed instability and that justified an investigation of the root cause: which was that the final and critical step had not been completed as recommended. The inner feedback loop was missing. Step 6 – Maintain was not in place.

The outer feedback loop had not been omitted. That was the responsibility of the experienced change leader.

And the effect of closing the outer-loop is clearly shown by the third segment: a restoration of stability and improved capability. The system is again delivering the improvement it was designed to deliver.


What does this lesson teach us?

The message here is that the sponsors of improvement have essential parts to play in the initiation and the maintenance of change and improvement. If they fail in their responsibility then the outcome is inevitable and predictable. Mediocrity and cynicism.

Part 1: Setting the clarity and constancy of common purpose.

Without a clear purpose then alignment, focus and effectiveness are thwarted. Purpose that changes frequently is not a purpose – it is reactive knee-jerk politics.  Constancy of purpose is required because improvement takes time. There is always a lag so moving the target while the arrow is in flight is both dangerous and leads to disengagement. Establishing common ground is essential to avoiding the time-wasting discussion and negotiation that is inevitable when opinions differ – which they always do.

Part 2: Respectful challenge.

Effective change leadership requires an ability to challenge from a position of mutual respect.  Telling people what to do is not leadership – it is dictatorship.  Dodging the difficult conversations and passing the buck to others is not leadership – it is ineffective delegation. Asking people what they want to do is not leadership – it is abdication of responsibility.  People need their leaders to challenge them and to respect them at the same time.  It is not a contradiction.  It is possible to do both.

And one way that a leader of change can challenge with respect is to expose the need for change; to create the context for change; and then to commit to holding those charged with change to account. And to make it clear at the start what their expectation is as a leader – and what the consequences of disappointment are.

It is a delight to see individuals, teams, departments and organisations blossom and grow when the context of change is conducive; and it is disappointing to see them wither and shrink when the context of change is laced with cynicide – the toxic product of cynicism.


So what is the next step?

What could an aspirant change leader do to get this for themselves and their organisations?

One option is to become a Student of Improvementology® – and they can do that here.

There seems to be a natural cycle to change and improvement.

A pace that feels right and that works well. Try to push faster and resistance increases. Relax and pull slower and interest wanders.

The pace that feels about right is a six week cycle.

So why six weeks? Is it the 42 days that is important or is there something about a seven-day week and the number six?

The daily and the weekly cycles are dictated by the Celestial Clockwork.  The day is the Earth’s rotation and the week is one quarter of the 28-day Lunar cycle. These are not arbitrary policies – they are celestial physics. Not negotiable.

So where does the Six come from? That does seem to be something to do with people and psychology.

Remember the Nerve Curve?

The predictable sequence of emotional states that accompanies significant change? The sequence of Shock-Denial-Anger-Bargaining-Depression-Resolution?  It has six stages.  Is that just a co-incidence?

Remember 6M Design®?

The required sequence of steps that structure any improvement-by-design challenge? It has six stages.

Is that just a co-incidence too?

And is seven days a convenient size? It was originally six-days-of-work and one-day-of-rest. The modern 5-and-2 design is a recent invention.

And if each stage requires at least one week to complete and we require six stages then we get a Six Week cycle.

It sounds like a plausible hypothesis but is that what happens in reality?

There is a lot of empirical evidence to suggest that it does. It seems we feel comfortable working with six-week chunks of time.  We plan about six weeks ahead.  School terms are divided into about six week chunks. A financial “quarter” is about two chunks. We can fit four of those into a Year with a bit left over.  Action learning seems to work well in six week cycles. Courses are very often carved up into six week modules. It feels OK.

So what does this mean for the Improvement Scientist?

First it suggests that doing something every week makes sense. Leaving it all to the last minute does not.
Second it suggests that each week the step required and the emotional reaction are predictable.
Third it suggests that five weeks of facilitative investment are required.
Fourth it suggests that if something throws a spanner into the sequence then we need to add extra weeks.

And it suggests that in the Seventh Week we can rest, reflect, share and prepare for the next Six Week change cycle.

So maybe Douglas Adams was correct – the Answer to Life, the Universe and Everything is Forty-Two.

Patience is a virtue for an advocate of Improvementology®.

This week Mike Davidge (Head of Measurement for the former NHS Institute for Innovation and Improvement) posted some feedback on the Journal of Improvement Science site.

His feedback is reproduced here in full with Mike’s permission. The rationale for reproducing it is that the activity data show that more people read the Blog than the Journal.

Feedback posted on 15/06/2013 at 07:35:05 for paper entitled:

Dodds S. A Case Study of a Successful One-Stop Clinic Schedule Design using Formal Methods. Journal of Improvement Science 2012:6; 1-13.

“It’s only taken me a year to get round to reading this, an improvement on your 9 years to write it! It was well worth the read. You should make a serious attempt to publish this where it gets a wider audience. Rank = 5/5”

Mike is a world expert in healthcare system measurement and improvement so this is a huge compliment. Thank you Mike. He is right too – 1 year is a big improvement on 9 years. So why did it take 9 years to write up?

One reason is that publication was not the purpose. Improvement was the purpose. Another reason was that this was a step in a bigger improvement project – one that is described in Three Wins.  There is a third reason: the design flaws of the traditional academic peer review process. This is radical stuff and upsets a lot of people so we need to be careful.

The two primary design flaws of conventional peer-reviewed academic journals are:

1) that it has a long lead time and
2) that it has a low yield.

So it is very expensive in author-lifetime.  Improvement is not the same as research.  Perfection is not the goal. Author lifetime is a very valuable resource. If it is wasted with an inefficient publication process design then the result is less output and less dissemination of valuable Improvement Science.

So if any visitors would like to benefit from Mike’s recommendation then you can download the full text of the essay here. It has not been peer-reviewed so you will have to make your own minds up about the value. And if you have any questions then you are free to ask the author.

PS. The visitor who points out the most spelling and grammar errors will earn themselves a copy of BaseLine©, the time-series analysis software used to create the charts.

[Bing-Bong]

The email from Leslie was unexpected.

“Hi Bob, can I change the planned topic of our session today to talk about resistance? We got off to a great start with our improvement project but now I am hitting brick walls and we are losing momentum. I am getting scared we will stall. Leslie”

Bob replied immediately – it was only a few minutes until their regular teleconference call.

“Hi Leslie, no problem. Just firing up the Webex session now. Bob”

[Whoop-Whoop]

The sound bite announced Leslie joining in the teleconference.

<Leslie> Hi Bob. Sorry about the last minute change of plan. Can I describe the scenario?

<Bob> Hi Leslie. Please do.

<Leslie> Well we are at stage five of the 6M Design® sequence and we are monitoring the effect of the first set of design changes that we have made. We started by eliminating design flaws that were generating errors and impairing quality.   The information coming in confirms what we predicted at stage 3.  The problem is that a bunch of “fence-sitters” that said nothing at the start are now saying that the data is a load of rubbish and implying we are cooking the books to make it look better than it is! I am pulling my hair out trying to convince them that it is working.

<Bob> OK. What is your measure for improvement?

<Leslie> The percentage yield from the new quality-by-design process. It is improving. The BaseLine© chart says so.

<Bob> And how is that improvement being reported?

<Leslie> As the average yield per week.  I know we should not aggregate for a month because we need to see the impact of the change as it happens and I know there is a seven-day cycle in the system so we set the report interval at one week.

<Bob> Yes. Those are all valid reasons. What is the essence of the argument against your data?

<Leslie> There is no specific argument – it is just being discounted as “rubbish”.

<Bob> So you are feeling resistance?

<Leslie> You betcha!

<Bob> OK. Let us take a different tack on this. How often do you measure the yield?

<Leslie> Daily.

<Bob> And what is the reason you are using the percentage yield as your metric?

<Leslie> So we can compare one day with the next more easily and plot it on a time-series chart. The denominator is different every day so we cannot use just the count of errors.

<Bob> OK. And how do you calculate the weekly average?

<Leslie> From the daily percentage yields. It is not a difficult calculation!

There was a definite hint of irritation and sarcasm in Leslie’s voice.

<Bob> And how confident are you in your answer?

<Leslie> Completely confident. The team are fantastic. They see the value of this and are collecting the data assiduously. They can feel the improvement. They do not need the data to prove it. The feedback is to convince the fence-sitters and skeptics and they are discounting it.

<Bob> OK so you are confident in the quality of the data going in to your calculation – how confident are you in the data coming out?

<Leslie> What do you mean?! It is a simple calculation – a 12 year old could do it.

<Bob> How are you feeling Leslie?

<Leslie> Irritated!

<Bob> Does it feel as if I am resisting too?

<Leslie> Yes!!

<Bob> Irritation is anger – the sense of loss in the present. What do you feel you are losing?

<Leslie> My patience and my self-confidence.

<Bob> So what might be my reasons for resisting?

<Leslie> You could be playing games or you could have a good reason.

<Bob> Do I play games?

<Leslie> Not so far! Sorry … no. You do not do that.

<Bob> So what could be my good reason?

<Leslie> Um. You can feel or see something that I cannot. An error?

<Bob> Yes. If I just feel something is not right I cannot do much else but say “That does not feel right”.  If I can see what is not right I can explain my rationale for resisting.  Can I try to illuminate?

<Leslie> Yes please!

<Bob> OK – have you got a spreadsheet handy?

<Leslie> Yes.

<Bob> OK – create a column of twenty random numbers in the range 20-80 and label them “daily successes”. Next to them create a second column of random numbers in the range 20-100 and label them “daily activity”.

<Leslie> OK – done that.

<Bob> OK – calculate the % yield by day then the average of the column of daily % yield.

<Leslie> OK – that is exactly how I do it.

<Bob> OK – now sum the columns of successes and activities and calculate the average % yield from those two totals.

<Leslie> Yes – I could do that and it will give the same final answer but I do not do that because I cannot use that data on my run chart – for the reasons I said before.

<Bob> Does it give the same answer?

<Leslie> Um – no. Wait. I must have made an error. Let me check. No. I have done it correctly. They are not the same. Uh?

<Bob> What are you feeling?

<Leslie> Confused!  But the evidence is right there in front of me.

<Bob> An assumption you have been making has just been exposed to be invalid. Your rhetoric does not match reality.

<Leslie> But everyone does this … it is standard practice.

<Bob> And that makes it valid?

<Leslie> No .. of course not. That is one of the fundamental principles of Improvement Science. Just doing what everyone else does is not necessarily correct.

<Bob> So now we must understand what is happening. Can you now change the Daily Activity column so it is the same every day – say 60.

<Leslie> OK. Now my method works. The yield answers are the same.

<Bob> Yes.

<Leslie> Why is that?

<Bob> The story goes back to 1948 when Claude Shannon described “Information Theory”.  When you create a ratio you start with two numbers and end up with only one which implies that information is lost in the conversion.  Two numbers can only give one ratio, but that same ratio can be created by an infinite set of two numbers.  The relationship is asymmetric. It is not an equality. And it has nothing to do with the precision of the data. When we throw data away we create ambiguity.

<Leslie> And in my data the activity by day does vary. There is a regular weekly cycle and some random noise. So the way I am calculating the average yield is incorrect, and the message I am sharing is distorted, so others can quite reasonably challenge the data, and because I was 100% confident I was correct I have been assuming that their resistance was just due to cussedness!

<Bob> There may be some cussedness too. It is sometimes difficult to separate skepticism and cynicism.

<Leslie> So what is the lesson here? There must be more to your example than just exposing a basic arithmetic error.

<Bob> The message is that when you feel resistance you must accept the possibility that you are making an error that you cannot see.  The person demonstrating resistance can feel the emotional pain of a rhetoric-reality mismatch but can not explain the cause. You need to strive to see the problem through their eyes. It is OK to say “With respect I do not see it that way because …”.

<Leslie> So feeling “resistance” signals an opportunity for learning?

<Bob> Yes. Always.

<Leslie> So the better response is to pull back and to check assumptions, not to push forward and make the resistance greater – or, worse still, to break through the barrier of resistance, celebrate the victory, commit an inevitable and avoidable blunder, and then add insult to injury by blaming someone else, creating even more cynicism in the future.

<Bob> Yes. Well put.

<Leslie> Wow!  And that is why patience and persistence are necessary.  Not persistently pushing but persistently searching for the unconscious assumptions that underpin resistance; consistently using Reality as the arbiter;  and having enough patience to let Reality tell its own story.

<Bob> Yes. And having the patience and persistence to keep learning from our confusion and to keep learning how to explain what we have discovered better and better.

<Leslie> Thanks Bob. Once again you have opened a new door for me.

<Bob> A door that was always there and yet hidden from view until it was illuminated with an example.
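For anyone who wants to replay Bob’s spreadsheet exercise without a spreadsheet, here is a minimal sketch in Python. The numbers are invented for illustration – they are not from Leslie’s project:

```python
# Average-of-daily-yields versus yield-from-totals: the two methods
# disagree when the daily activity (the denominator) varies by day.
daily_success = [30, 60, 45]    # invented "daily successes" column
daily_activity = [60, 80, 50]   # invented "daily activity" column

daily_yield = [100 * s / a for s, a in zip(daily_success, daily_activity)]
avg_of_ratios = sum(daily_yield) / len(daily_yield)               # Leslie's method
ratio_of_totals = 100 * sum(daily_success) / sum(daily_activity)  # Bob's method

print(round(avg_of_ratios, 1), round(ratio_of_totals, 1))  # 71.7 71.1 - not equal

# With a constant daily activity (60 every day) the two methods agree exactly
flat_yield = [100 * s / 60 for s in daily_success]
assert abs(sum(flat_yield) / 3 - 100 * sum(daily_success) / 180) < 1e-9
```

The average of the daily ratios and the ratio of the totals only agree when the denominator is constant – which is exactly what Leslie discovered.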

My head is buzzing this morning with poems by John Godfrey Saxe, Theory of Constraints, Six Thinking Hats®, managing transitions and discrete event simulations!

It is not because of the rather lovely bottle of red yesterday evening, nor the result of an episode of The Hitchhiker’s Guide to the Galaxy, but rather my start on the Foundations of Improvement Science in Healthcare course.

The Three Wins book that kicks off the course should be offered to all those folks who are trying to bring about improvements for patients but who are finding it frustrating and are about to give up. You know who you are and I have been there on a few occasions myself. The book plots the journey of the vascular team at Good Hope Hospital who deliver some fantastic changes to improve the service to patients and in doing so achieve the Three Wins: quality, performance and motivation. John’s story fills your heart with joy!

So it is Saturday morning and sporting events are happening around me. I am delighted to have started my course and have an end in mind. My G-R-O-W outline is done and I have my Niggles that I will convert to NooNoos and my Nice Ifs that I want to end up as Nuggets. I have played the Post It® Note and Six Dice games and begun ‘learning’ the concepts behind improvement science that I know will complement any people skills I might possess. The human side of change, the key goals of quality and performance are all wrapped up together as we all know well and here it is becoming clearer how these things can and must be pulled off simultaneously.

I am excited about all this and, having chatted to a cracking CEO yesterday, I can see more and more clearly how his goals can be delivered: deeper engagement and involvement with the hospital’s teams, improving the patients’ view of the services offered and also – sorry to say this – making the money work harder.

I have programmed some further time next week to hit the next stage of the course where the more technical bits get explained and illustrated using the exercises, examples and language that thus far are making this fun.

Next Friday sees the arrival of a friend from Australia who has not been seen in 10 years. The next blog might be interesting!

Steve Peak

[Dring Dring]

<Bob> Hi Leslie. How are you today?

<Leslie> Really good thanks. We are making progress and it is really exciting to see tangible and measurable improvement in safety, quality, delivery and financial stability.

<Bob> That is good to hear. So what topic shall we explore today?

<Leslie> I would like to return to the topic of engagement.

<Bob> OK. I am sensing that you have a specific Niggle that you would like to share.

<Leslie> Yes.  Specifically it is engaging the Board.

<Bob> Ah ha. I wondered when we would get to that. Can you describe your Niggle?

<Leslie> Well, the feeling is fear and that follows from the risk of being identified as a trouble-maker which follows from exposing gaps in knowledge and understanding of seniors.

<Bob> Well put.  This is an expected hurdle that all Improvement Scientists have to learn to leap reliably. What is the barrier that you see?

<Leslie> That I do not know how to do it and I have seen a lot of people try and commit career-suicide.

<Bob> OK – so it is a real fear based on real evidence. What methods did the “toasted moths” try?

<Leslie> Some got angry and blasted off angry send-to-all emails.  They just succeeded in identifying themselves as “terrorists” and were dismissed – politically and actually. Others channeled their passion more effectively by heroic acts that held the system together for a while – and they succeeded in burning themselves out. The end result was the same: toasted!

<Bob> So with your understanding of design principles what does that say?

<Leslie> That the design of their engagement process is wrong.

<Bob> Wrong?

<Leslie> I mean “not fit for purpose”.

<Bob> And the difference is?

<Leslie> “Wrong” is a subjective judgement, “not fit for purpose” is an objective assessment.

<Bob> Yes. Good. We need to be careful with words. So what is the “purpose”?

<Leslie> An organisation that is capable of consistently delivering win-win-win improvement.

<Bob> Which requires?

<Leslie> All the parts working in synergy to a common purpose.

<Bob> So what are the parts?

<Leslie> The departments.

<Bob> They are the stages that the streams cross – they are parts of system structure. I am thinking more broadly.

<Leslie> The workers, the managers and the executives?

<Bob> Yes.  And how is that usually perceived?

<Leslie> As a power hierarchy.

<Bob> And do physical systems have power hierarchies?

<Leslie> No … they have components with different and complementary roles.

<Bob> So does that help?

<Leslie> Yes! To achieve synergy each component has to know its complementary role and be competent to do it.

<Bob> And each must understand the roles of the others,  respect the difference, and trust in their competence.

<Leslie> And the concepts of understanding, respect and trust appear again.

<Bob> Indeed.  They are always there in one form or another.

<Leslie> So as learning and improvement is a challenge then engagement is respectful challenge …

<Bob> … uh huh …

<Leslie> … and each part is different so requires a different form of respectful challenge?

<Bob> Yes. And with three parts there are six relationships between them – so six different ways of one part respectfully challenging another. Six different designs that have the same purpose but a different context.

<Leslie> Ah ha!  And if we do not use the context-dependent-fit-for-purpose-respectful-challenge-design we do not achieve our purpose?

<Bob> Correct. The principles of design are generic.

<Leslie> So what are the six designs?

<Bob> Let us explore three of them. First the context of a manager respectfully challenging a worker to learn and improve.

<Leslie> That would require some form of training. Either the manager trains the worker or employs someone else to.

<Bob> Yes – and when might a manager delegate training?

<Leslie> When they do not have time to or do not know how to.

<Bob> Yes. So how would the flaw in that design be avoided?

<Leslie> By the manager maintaining their own know-how by doing enough training themselves and delegating the rest.

<Bob> Yup. Well done. OK let us consider a manager respectfully challenging other managers to learn and improve.

<Leslie> I see what you mean. That is a completely different dynamic. The closest I can think of is a mentoring arrangement.

<Bob> Yes. Mentoring is quite different from training. It is more of a two-way relationship and I prefer to refer to it as “informal co-mentoring” because both respectfully challenge each other in different ways; both share knowledge; and both learn and develop.

<Leslie> And that is what you are doing now?

<Bob> Yes. The only difference is that we have agreed a formal mentoring contract. So what about a worker respectfully challenging a manager or a manager respectfully challenging an executive?

<Leslie> That is a very different dynamic. It is not training and it is not mentoring.

<Bob> What other options are there?

<Leslie> Coaching?  But an executive is not going to ask a manager to coach them!

<Bob> You are right on both counts – so what is the essence of coaching?

<Leslie> A coach provides a different perspective and will say what they see if asked and will ask questions that help to illustrate alternative perspectives and offer evidence of alternative options.

<Bob> Yes. A coach does not need to know anything about the specific area of expertise – unlike a trainer or a mentor. Anyone can coach anyone else. We do it informally all the time. And we are often coached by those much younger than ourselves who have a more modern perspective. Our children for instance.

<Leslie> So an informal coaching metaphor is the one that a manager can use to engage an executive.

<Bob> Yes. And look at it from the perspective of the executive – they want feedback that can help them make wiser strategic decisions. That is their role. Boards are always asking for customer feedback, staff feedback and performance feedback.  They want to know the Nuggets, the Niggles, the Nice Ifs and the NooNoos.  They just do not ask for it like that.

<Leslie> So they are no different from the rest of us?

<Bob> Not in respect of an insatiable appetite for unfiltered and undistorted feedback. What is different is their role. They are responsible for the strategic decisions – the ones that affect us all – so we can help ourselves by helping them make those decisions. An informal coaching model is fit-for-that-purpose.

<Leslie> And an Improvement Scientist needs to be able to do all three – training, mentoring and coaching in a collaborative informal style. Is that leadership?

<Bob> I call it co-men-t-oaching.

<Leslie> It makes complete sense to me. There is a lot of new stuff here and I will need to reflect on it. Thank you once again for showing me a different perspective on the problem.

<Bob> I enjoyed it too – talking it through helps me to learn to explain it better – and I look forward to hearing the conclusions from your reflections because I know I will learn from that too.

Over the past few weeks I have been conducting an Improvement Science Experiment (ISE).  I do that a lot.  This one is a health improvement experiment. I do that a lot too.  Specifically – improving my own health. Ah! Not so diligent with that one.

The domain of health that I am focusing on is weight – for several reasons:
(1) because a stable weight that is within “healthy” limits is a good idea for many reasons and
(2) because weight is very easy to measure objectively and accurately.

But like most people I have constraints: motivation constraints, time constraints and money constraints.  What I need is a weight reduction design that requires no motivation, no time, and no money.  That sounds like a tough design challenge – so some consideration is needed.

Design starts with a specific purpose and a way of monitoring progress.  And I have a purpose – weight within acceptable limits; a method for monitoring progress – a dusty set of digital scales. What I need is a design for delivering the improvement and a method for maintaining it. That is the challenge.

So I need a tested design that will deliver the purpose.  I could invent something here but it is usually quicker to learn from others who have done it, or something very similar.  And there is lots of knowledge and experience out there.  And they fall into two broad schools – Eat Healthier or Exercise More and usually Both.

Eat Healthier is sold as Eat Less of the Yummy Bad Stuff and more of the Yukky Good Stuff. It sounds like a Puritanical Policy and is not very motivating. So with zero motivation as a constraint this is a problem.  And Yukky Good Stuff seems to come with a high price tag. So with zero budget as a constraint this is a problem too.

Exercise More is sold as Get off Your Bottom and Go for a Walk. It sounds like a Macho Man Mantra. Not very motivating either. It takes time to build up a “healthy” sweat and I have no desire to expose myself as a health-desperado by jogging around my locality in my moth-eaten track suit.  So with zero time as a constraint this is a problem. Gym subscriptions and the necessary hi-tech designer garb do not come cheap.  So with a zero budget constraint this is another problem.

So far all the conventional wisdom is failing to meet any of my design constraints. On all dimensions.

Oh dear!

The rhetoric is not working.  That packet of Chocolate Hob Nobs is calling to me from the cupboard. And I know I will feel better if I put them out of their misery. Just one will not do any harm. Yum Yum.  Arrrgh!!!  The Guilt. The Guilt.

OK – get a grip – time for Improvement Scientist to step in – we need some Science.

[Improvement Science hat on]

The physics and physiology are easy on this one:

(a) What we eat provides us with energy to do necessary stuff (keep warm, move about, think, etc). Food energy is measured in “Cals”; work energy is measured in “Ergs”.
(b) If we eat more Cals than we burn as Ergs then the difference is stored for later – ultimately as blubber (=fat).
(c) There are four contributors to our weight: dry (bones and stuff), lean (muscles and glands of various sorts), fluid (blood, wee etc), and blubber (fat).
(d) The sum of the dry, lean, and fluids should be constant – we need them – we do not store energy there.
(e) The fat component varies. It is stored energy. Work-in-progress so to speak.
(f) One kilogram of blubber is equivalent to about 9000 Cals.
(g) An adult of average weight, composition, and activity uses between 2000 and 2500 Cals per day – just to stay at a stable weight.

These facts are all we need to build an energy flow model.

Food Cals = Energy In.
Work Ergs = Energy Out.
Difference between Energy In and Energy Out is converted to-and-from blubber at a rate of 1 gram per 9 Cal.
Some of our weight is the accumulated blubber – the accumulated difference between Cals-In and Ergs-Out
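That little energy-flow model is simple enough to sketch in code. A minimal sketch in Python – the starting weight and the daily Cal figures are illustrative assumptions, not data from the post:

```python
CAL_PER_KG_BLUBBER = 9000  # from the conversion above: 1 gram per 9 Cal

def weight_after(days, start_kg, cals_in_per_day, ergs_out_per_day):
    """Accumulate the daily Cal surplus (or deficit) as blubber."""
    surplus_cals = (cals_in_per_day - ergs_out_per_day) * days
    return start_kg + surplus_cals / CAL_PER_KG_BLUBBER

# Illustrative only: a steady 500 Cal/day deficit held for six weeks (42 days)
print(round(weight_after(42, 85.0, 2000, 2500), 2))  # 82.67 - about 2.3 kg lost
```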

The Laws Of Physics are 100% Absolute and 0% Negotiable. The Behaviours of People are 100% Relative and 100% Negotiable.  Weight loss is more about behaviour. Habits. Lifestyle.

Bit more Science needed now:

Which foods have the Cals?

(1) Fat (9 Cal per gram)
(2) Carbs (4 Cal per gram)
(3) Protein (4 Cal per gram)
(4) Water, Vitamins, Minerals, Fibre, Air, Sunshine, Fags, Motivation (0 Cal per gram).
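With those per-gram rates, the Cals in any food are just a weighted sum – which is all a calorie-counting app is doing under the hood. A minimal sketch; the biscuit’s macronutrient grams are invented for illustration, not from a real label:

```python
def cals(fat_g=0.0, carbs_g=0.0, protein_g=0.0):
    """Cals from the per-gram rates above: fat 9, carbs 4, protein 4."""
    return 9 * fat_g + 4 * carbs_g + 4 * protein_g

# An invented biscuit: 4.5 g fat, 12 g carbs, 1 g protein
print(cals(fat_g=4.5, carbs_g=12, protein_g=1))  # 92.5 Cals
```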

So how much of each do we get from the stuff we nosh?

It is easy enough to work out – but it is very tedious to do so.  This is how calorie counting weight loss diets work. You weigh everything that goes in, look up the Cal conversions per gram in a big book, do some maths and come up with a number.  That takes lots of time. Then you convert to points and engage in a pseudo-accounting game where you save points up and cash them in as an occasional cream cake.  Time is a constraint and Saving-the-Yummies-for-Later is not changing a habit – it is feeding it!

So it is just easier for me to know what a big bowl of tortilla chips translates to as Cals. Then I can make an informed choice. But I do not know that.

Why not?

Because I never invested time in learning.  Like everyone else I gossip, I guess, and I generalise.  I say “Yummy stuff is bad because it is Hi-Cal; Yukky stuff is good because it is Lo-Cal“.  And from this generalisation I conclude “Cutting Cals feels bad“. Which is a problem because my motivation is already rock bottom.  So I do nothing,  and my weight stays the same, and I still feel bad.

The Get-Thin-Quick industry knows this … so they use Shock Tactics to motivate us.  They scare us with stories of fat young people having heart attacks and dying wracked with regret. Those they leave behind are the real victims. The industry bludgeons us into fearful submission and into coughing up cash for their Get Thin Quick Panaceas.  Their real goal is the repeat work – the loyal customers. And using scare-mongering and a few whale-to-waif conversions, the rabble-rousing zealots cook up the ideal design to achieve that.  They know that, for most of us, as soon as the fear subsides, the will weakens, the chips are down (the neck), the blubber builds, and we are back with our heads hung low and our wallets open.

I have no motivation – that is a constraint.  So flogging an over-weight and under-motivated middle-aged curmudgeon will only get a more over-weight, ego-bruised-and-depressed, middle-aged cynic. I may even seek solace in the Chocolate Hob Nob jar.

Nah! I need a better design.

[Improvement Scientist hat back on]

First Rule of Improvement – Check the Assumptions.

Assumption 1:
Yummy => Hi-Cal => Bad for Health
Yukky => Lo-Cal => Good for Health

It turns out this is a gross over-simplification.  Lots of Yummy things are Lo-Cal; lots of Yukky things are Hi-Cal. Yummy and Yukky are subjective. Cals are not.

OK – that knowledge is really useful because if I know which-is-which then I can make wiser decisions. I can do swaps so that the Yummy Score goes higher and the Cals Score goes lower.  That sounds more like it! My Motiv-o-Meter twitches.

Assumption 2:
Hi-Cal => Cheap => Good for Wealth
Lo-Cal => Expensive => Bad for Wealth

This is a gross over-simplification too. Lots of Expensive things are Hi-Cal; lots of Cheap things are Lo-Cal.

OK so what about the combination?

Bingo!  There are lots of Yummy+Cheap+Lo-Cal things out there !  So my process is to swap the Lose-Lose-Lose for the Win-Win-Win. I feel a motivation surge. The needle on my Motiv-o-Meter definitely moved this time.

But how much? And for how long? And how will I know if it is working?

[Improvement Science hat back on]

Second Rule of Improvement Science – Work from the Purpose

We need an output  specification.  What weight reduction in what time-scale?

OK – I work out my target weight – using something called the BMI (body mass index) which uses my height and a recommended healthy BMI range to give a target weight range. I plump for 75 kg – not just “10% reduction” – I need an absolute goal. (PS. The BMI chart I used is at the end of the blog).
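For those without the chart to hand, the BMI arithmetic is: BMI = weight (kg) ÷ height (m)². A minimal sketch that inverts it to get a target weight band – the height and the 18.5–25 “healthy” band here are illustrative assumptions, not figures from the post:

```python
def target_weight_range(height_m, bmi_lo=18.5, bmi_hi=25.0):
    """Invert BMI = kg / m^2 to turn a healthy BMI band into a weight band."""
    return bmi_lo * height_m ** 2, bmi_hi * height_m ** 2

lo, hi = target_weight_range(1.78)  # height assumed for illustration
print(round(lo, 1), round(hi, 1))   # 58.6 79.2 - a 75 kg target sits inside
```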

OK – now I need a time-scale – and I know that motivation theory shows that if significant improvement is not seen within 15 repetitions of a behaviour change then it does not stick. It will not become a new habit. I need immediate feedback. I need to see a significant weight reduction within two weeks. I need a quick win to avoid eroding my fragile motivation.  And so long as I get that I will keep going. And how long to get to target weight?  One or two lunar cycles feels about right. Let us compromise on six weeks.

And what is a “significant improvement”?

Ah ha! Now I am on familiar ground – I have a tool for answering that question – a system behaviour chart (SBC).  I need to measure my weight and plot it on a time-series chart using BaseLine.  And I know that I need 9 points to show a significant shift, and I know I must not introduce variation into my measurements. So I do four things – I ensure my scales have high enough precision (+/- 0.1 kg); I do the weighing under standard conditions (same time of day and same state of dress);  I weigh myself every day or every other day; and I plot-the-dots.
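The “9 points” rule can be sketched as a run-chart test: one common formulation is that nine consecutive points all on the same side of the baseline median signal a non-random shift. A minimal sketch – the daily weights are made up for illustration:

```python
def shift_detected(values, baseline_median, run_length=9):
    """True if run_length consecutive points fall on one side of the median."""
    run, side = 0, 0
    for v in values:
        s = (v > baseline_median) - (v < baseline_median)  # +1 above, -1 below
        run = run + 1 if (s != 0 and s == side) else (1 if s != 0 else 0)
        side = s
        if run >= run_length:
            return True
    return False

# Nine invented daily weights, all below an 85.0 kg baseline median
daily_kg = [84.8, 84.6, 84.7, 84.4, 84.2, 84.3, 84.0, 83.8, 83.6]
print(shift_detected(daily_kg, 85.0))  # True - a significant shift
```

Points exactly on the median break the run, which is why measurement precision (+/- 0.1 kg) matters.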

OK – how am I doing on my design checklist?
1. Purpose – check
2. Process – check
3. Progress – check

Anything missing?

Yes – I need to measure the energy input – the Cals per day going in – but I need an easy, quick and low-cost way of doing it.

Time for some brainstorming. What about an App? That fancy new smartphone can earn its living for a change. Yup – lots of free ones for tracking Cals.  Choose one. Works OK. Another flick on the Motiv-o-Meter needle.

OK – next bit of the jigsaw. What is my internal process metric (IPM)?  How many fewer Cals per day on average do I need to achieve … quick bit of beer-mat maths … that many kg reduction times Cal per kg of blubber divided by 6 weeks gives  … 1300 Cals per day less than now (on average).  So what is my daily Cals input now?  I dunno. I do not have a baseline.  And I do not fancy measuring it for a couple of weeks to get one. My feeble motivation will not last that long. I need action. I need a quick win.
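The beer-mat maths can be checked directly. A sketch – the 6 kg reduction is my assumption, back-calculated to be consistent with the 1300 Cals per day answer, since the post does not state the kg figure:

```python
CAL_PER_KG_BLUBBER = 9000          # 9 Cal per gram, as before
weeks = 6
kg_to_lose = 6.0                   # assumption, not stated in the post

deficit_per_day = kg_to_lose * CAL_PER_KG_BLUBBER / (weeks * 7)
print(round(deficit_per_day))      # 1286 - roughly the 1300 Cals per day quoted
```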

OK – I need to approach this a different way.  What if I just change the input to more Yummy+Cheap+Lo-Cal stuff and less Yummy+Cheap+Hi-Cal stuff and just measure what happens?  What if I just do what I feel able to? I can measure the input Cals accurately enough and also the output weight. My curiosity is now pricked too and my Inner Nerd starts to take notice and chips in “You can work out the rest from that. It is a simple S&F (stock-and-flow) model”. Thanks Inner Nerd – you do come in handy occasionally. My Motiv-o-Meter is now in the green – enough emotional fuel for a decision and some action.
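The Inner Nerd's S&F model can be sketched too: weight is the stock, and the daily energy imbalance (Cals in minus Cals burned) is the flow that changes it. The maintenance figure, starting weight and intake below are illustrative assumptions, not data from the post:

```python
KCAL_PER_KG = 7700  # assumed energy content per kg of body fat

def simulate_weight(start_kg, daily_intake_kcal, maintenance_kcal, days):
    """Stock-and-flow sketch: each day the stock (weight) changes by the
    net energy flow (intake minus maintenance burn) divided by the
    energy per kg."""
    weight = start_kg
    trajectory = [weight]
    for _ in range(days):
        weight += (daily_intake_kcal - maintenance_kcal) / KCAL_PER_KG
        trajectory.append(weight)
    return trajectory

# e.g. eating 1300 Cals/day below an assumed 2500 Cal maintenance
# for six weeks, starting from an assumed 90 kg:
traj = simulate_weight(90.0, 1200, 2500, 42)
```

With a steady deficit the model predicts a straight-line fall on the weight chart – which is exactly what plotting-the-dots lets us check against reality.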

I have all the bits of the design jigsaw – Purpose, Process, Progress and Pieces.  Studying, and Planning over – time for Doing.

So what happened?

It is an ongoing experiment – but so far it has gone exactly as the design dictated (and the nerdy S&F model predicted).

And the experience has helped me move some Get-Thin-Quick mantras to the rubbish bin.

I have counted nine so far:

Mantra 1. Do not weigh yourself every day – rubbish – weigh yourself every day using a consistent method and plot the dots.
Mantra 2. Focus on the fat – rubbish – it is Cals that count whatever the source – fat, carbs, protein (and alcohol).
Mantra 3. Five fresh fruit and veg a day – rubbish – they are just Hi-Cost+Lo-Cal stocking fillers.
Mantra 4. Only eat balanced meals – rubbish – it is OK to increase protein and reduce both carbs and fat.
Mantra 5. It costs money to get healthy – rubbish – it is possible to reduce cost by switching to Yummy+Cheap+Lo-Cal stuff.
Mantra 6. Cholesterol is bad – rubbish – we make more cholesterol than we eat – just stay inside the recommended range.
Mantra 7. Give up all alcohol – rubbish – just be sensible – just stay inside the recommended range.
Mantra 8. Burn the fat with exercise – rubbish – this is scraping-the-burnt-toast thinking – reduce the Cals in first.
Mantra 9. Eat less every day – rubbish – it is OK to have Lo-Cal days and OK-Cal days – it is the average Cals that count.

And the thing that has made the biggest difference is the App.  Just being able to quickly look up the Cals in a “Waitrose Potato Croquette” whenever and wherever I want to is what I really needed. I have quickly learned what-is-in-what and that helps me make “Do I need that Chocolate Hob-Nob or not?” decisions on the fly. One tiny, insignificant Chocolate Hob-Nob = 95 Cals. Ouch! Maybe not.

I have been surprised by what I have learned. I now know that before I was making lots of unwise decisions based on completely wrong assumptions. Doh!

The other thing that has helped me build motivation is seeing the effect of those wiser design decisions translated into a tangible improvement – and quickly!  With a low-variation and high-precision weight measurement protocol I can actually see the effect of the Cals ingested yesterday on the Weight recorded today.  Our bodies obey the Laws of Physics. We are what we eat.

So what is the lesson to take away?

That there are two feedback loops that need to be included in every Improvement Science challenge – and both loops need to be closed, so that information flows, if the improvement exercise is to succeed and to sustain.

First the Rhetoric Feedback loop – where new, specific knowledge replaces old, generic gossip. We want to expose the myths and mantras and reveal novel options.  Challenge assumptions with scientifically valid evidence. If you do not know then look it up.

Second the Reality Feedback loop – where measured outcomes verify the wisdom of the decision – the intended purpose was achieved.  Measure the input, internal and output metrics and plot all as time-series charts. Seeing is believing.

So the design challenge has been achieved and with no motivation, no time and no budget.

Now where is that packet of Chocolate Hob Nobs. I think I have earned one. Yum yum.

[PS. This is not a new idea – it is called “double loop learning”.  Not come across it? It is worth looking up.]



[Ring Ring]

<Bob> Hi Leslie, how are you today?

<Leslie> I am good thanks Bob and looking forward to today’s session. What is the topic?

<Bob> We will use your Niggle-o-Gram® to choose something. What is top of the list?

<Leslie> Let me see.  We have done “Engagement” and “Productivity” so it looks like “Near-Misses” is next.

<Bob> OK. That is an excellent topic. What is the specific Niggle?

<Leslie> “We feel scared when we have a safety near-miss because we know that there is a catastrophe waiting to happen.”

<Bob> OK so the Purpose is to have a system that we can trust not to generate avoidable harm. Is that OK?

<Leslie> Yes – well put. When I asked myself the purpose question I got a “do” answer rather than a “have” one. The word trust is key too.

<Bob> OK – what is the current safety design used in your organisation?

<Leslie> We have a computer system for reporting near misses – but it does not deliver the purpose above. If the issue is ranked as low harm it is just counted, if medium harm then it may be mentioned in a report, and if serious harm then all hell breaks loose and there is a root cause investigation conducted by a committee that usually results in a new “you must do this extra check” policy.

<Bob> Ah! The Burn-and-Scrape model.

<Leslie> Pardon? What was that? Our Governance Department call it the Swiss Cheese model.

<Bob> Burn-and-Scrape is where we wait for something to go wrong – we burn the toast – and then we attempt to fix it – we scrape the burnt toast to make it look better. It still tastes burnt though and badly burnt toast is not salvageable.

<Leslie> Yes! That is exactly what happens all the time – most issues never get reported – we just “scrape the burnt toast” at all levels.

<Bob> One flaw with the Burn-and-Scrape design is that harm has to happen for the design to work. It is reactive.  Another design flaw is that it focuses attention on the serious harm first – avoidable mortality for example.  Counting the extra body bags completely misses the purpose.  Avoidable death means avoidably shortened lifetime.  Avoidable non-fatal harm also shortens lifetime – and it is even harder to measure.  Just consider the cumulative effect of all that non-fatal, life-shortening, avoidable-but-ignored harm.  Much of the reason we live longer today is that we have removed a lot of lifetime-shortening hazards – like infectious disease and severe malnutrition.

Take healthcare as an example – accurately measuring avoidable mortality in an inherently high-risk system is rather difficult.  And to conclude “no action needed” from “no statistically significant difference in mortality between Us and the Average” is invalid and it leads to a complacent delusion that all is good enough.  When it comes to harm it is never “good enough”.

<Leslie> But we do not have the resources to investigate the thousands of cases of minor harm – we have to concentrate on the biggies.

<Bob> And do the near misses keep happening?

<Leslie> Yes – that is why they are top-ranked on the Niggle-o-Gram®.

<Bob> So the Burn-and-Scrape design is not fit-for-purpose.

<Leslie> Hmm. So it seems. But what is the alternative? If there was one we would be using it – surely?

<Bob> Look back Leslie. How many of the Improvement Science methods that you have already learned are business-as-usual?

<Leslie> Good point. Almost none.

<Bob> And do they work?

<Leslie> You betcha!

<Bob> This is another example.  It is possible to design systems to be safe – so the frequent near misses become rare events.

<Leslie> Is it?  Wow! That know-how would be really useful to have. Can you teach me?

<Bob> Yes. First we need to explore what the benefits would be.

<Leslie> OK – well first there would be no avoidable serious harm and we could trust in the safety of our system – which is the purpose.

<Bob> Yes …. and?

<Leslie> And … all the effort, time and cost spent “scraping the burnt toast” would be released.

<Bob> Yes …. and?

<Leslie> The safer-by-design processes would be quicker and smoother and would be a more enjoyable experience for both customers and suppliers!

<Bob> Yes. So what does that all add up to?

<Leslie> A Three Wins® outcome!

<Bob> Indeed. So a one-off investment of effort, time and money in learning Safety-by-Design methods would appear to be a wise business decision.

<Leslie> Yes indeed!  When do we start?

<Bob> We have already started.