Some animals undergo a remarkable transformation on their journey to becoming an adult.

This metamorphosis is most obvious with a butterfly: the caterpillar enters the stage and a butterfly emerges.

The capabilities and behaviours of the stages are very different.  A caterpillar crawls and feeds on leaves. A butterfly flies and feeds on nectar.


The transformation of an organisation from chaotic to calm; from depressed to enthused; and from struggling to growing shares many similarities.

And the metamorphosis of individuals within organisations is what drives the change – the transformation from the sceptic to the advocate.


The process starts with the tiny, hungry caterpillar emerging from the egg …

This is like the curious new sceptic who is tentatively engaging in the process of learning. Usually triggered by the seed of an idea that they have seen: a success that disproves their ‘impossibility hypothesis’.


A caterpillar is an eating machine. And as it grows it sheds its skin and becomes bigger. It also changes its appearance.

Our curious improvement sceptic is devouring new information and is visibly growing in knowledge, understanding and confidence. 


When the caterpillar sheds the last skin a new form emerges. A pupa. It has a different appearance and behaviour. It is stationary and does not eat.

This is the contemplative sceptic who appears to have become dormant but is not at all.


Inside the pupa the solid body of the caterpillar is converted to ‘cellular soup’ and then cells are reassembled into a completely new structure called an adult butterfly.

Our healthy sceptic dissolves their self-limiting beliefs and restructures their mental model.


And suddenly the adult butterfly emerges: fully formed but not yet able to fly. Its wings are not yet ready – they need to be inflated and tested.

So it is with our newly hatched improvement practitioner. They need to pause, prepare, and practice before they feel safe to fly solo.


After a short rest the new wings are fully expanded and able to lift the butterfly aloft to explore the new opportunities that await. A whole new and exciting world full of nectar.

Our improvement practitioner can feel when they are ready to explore the unknown.



And an active improvement practitioner will lay many eggs, and many of those will hatch into improvement caterpillars who will busily munch on the knowledge and grow. Then it goes quiet. And as if by magic a new generation of improvement butterflies appears. And they continue to spread the word and the knowledge.  And the next generation of caterpillars is much bigger.

That is how Improvement Science grows and spreads – by metamorphosis.

[Beep] It was time again for the weekly Webex coaching session. Bob dialled into the teleconference to find Leslie already there … and very excited.

<Leslie> Hi Bob, I am so excited. I cannot wait to tell you about what has happened this week.

<Bob> Hi Leslie. You really do sound excited. I cannot wait to hear.

<Leslie> Well, let us go back a bit in the story.  You remember that I was really struggling to convince the teams I am working with to actually make changes.  I kept getting the ‘Yes … but‘ reaction from the sceptics.  It was as if they were more comfortable with complaining.

<Bob> That is the normal situation. We are all very able to delude ourselves that what we have is all we can expect.

<Leslie> Well, I listened to what you said and I asked them to work through what they predicted could happen if they did nothing.  Their healthy scepticism then worked to build their conviction that doing nothing was a very dangerous choice.

<Bob> OK. And I am guessing that insight was not enough.

<Leslie> Correct.  So then I shared some examples of what others had achieved and how they had done it, and I started to see some curiosity building, but no engagement still.  So I kept going, sharing stories of ‘what’, and ‘how’.  And eventually I got an email saying “We have thought about what you said about a one day experiment and we are prepared to give that a try“.

<Bob> Excellent. How long ago was that?

<Leslie> Three months. And I confess that I was part of the delay.  I was so surprised that they said ‘OK‘ that I was not ready to follow on.

<Bob> OK. It sounds like you did not really believe it was possible either. So what did you do next?

<Leslie> Well I knew for sure that we would only get one chance.  If the experiment failed then it would be Game Over. So I needed to know before the change what the effect would be.  I needed to be able to predict it accurately. I also needed to feel reassured enough to take the leap of faith.

<Bob> Very good, so did you use some of your ISP-2 skills?

<Leslie> Yes! And it was a bit of a struggle because doing it in theory is one thing; doing it in reality is a lot messier.

<Bob> So what did you focus on?

<Leslie> The top niggle of course!  At St Elsewhere® we have a call-centre that provides out-of-office-hours telephone advice and guidance – and it is especially busy at weekends.  We are required to answer all calls quickly, which we do, and then we categorise them into ‘urgent’  and ‘non-urgent’ and pass them on to the specialists.  They call the clients back and provide expert advice and guidance for their specific problem.

<Bob> So you do not use standard scripts?

<Leslie> No, that does not work. The variety of the problems we have to solve is too wide. And the specialist has to come to a decision quite quickly … solve the problem over the phone, arrange a visit to an out-of-hours clinic, or dispatch a mobile specialist to the client immediately.

<Bob> OK. So what was the top niggle?

<Leslie> We have contractual performance specifications we have to meet for the maximum waiting time for our specialists to call clients back; and we were not meeting them.  That implied that we were at risk of losing the contract and that meant loss of revenue and jobs.

<Bob> So doing nothing was not an option.

<Leslie> Correct. And asking for more resources was not either … the contract was a fixed price one. We got it because we offered the lowest price. If we employed more staff we would go out of business.  It was a rock-and-a-hard-place problem.

<Bob> OK.  So if this was ranked as your top niggle then you must have had a solution in mind.

<Leslie> I had a diagnosis.  The Vitals Chart© showed that we already had enough resources to do the work. The performance failure was caused by a scheduling policy – one that we created – our intuitively-obvious policy.

<Bob> Ah ha! So you suggested doing something that felt counter-intuitive.

<Leslie> Yes. And that generated all the ‘Yes .. but‘  discussion.

<Bob> OK. Do you have the Vitals Chart© to hand? Can you send me the Wait-Time run chart?

<Leslie> Yes, I expected you would ask for that … here it is.

[St Elsewhere® call centre wait-time run chart – before]

<Bob> OK. So I am looking at the run chart of waiting time for the call backs for one Saturday, plotted in call arrival order, and the blue line is the maximum allowed waiting time. Is that correct?

<Leslie> Yup. Can you see the diagnosis?

<Bob> Yes. This chart shows the classic pattern of ‘prioritycarveoutosis’.  The upper group is the ‘non-urgents’ and the lower group is the ‘urgents’ … the queue jumpers.

<Leslie> Spot on.  It is the rising tide of non-urgent calls that spills over the specification limit.  And when I shared this chart the immediate reaction was ‘Well that proves we need more capacity!’

<Bob> And the WIP chart did not support that assertion.

<Leslie> Correct. It showed we had enough total flow-capacity already.

<Bob> So you suggested a change in the scheduling policy would solve the problem without costing any money.

<Leslie> Yes. And the reaction to that was ‘That is impossible. We are already working flat out. We need more capacity because to work quicker will mean cutting corners and it is unsafe to cut-corners‘.

<Bob> So how did you get around that invalid but widely held belief?

<Leslie> I used one of the FISH techniques. I got a few of them to play a table top game where we simulated a much simpler process and demonstrated the same waiting time pattern on a hand-drawn run chart.

<Bob> Excellent.  Did that get you to the ‘OK, we will give it a go for one day‘ decision?

<Leslie> Yes. But then I had to come up with a new design and I had to test it so I knew it would work.

<Bob> Because that was a step too far for them. And it sounds like you achieved that.

<Leslie> Yes.  It was tough though because I knew I had to prove to myself I could do it. If I had asked you I know what you would have said – ‘I know you can do this‘.  And last Saturday we ran the ‘experiment’. I was pacing up and down like an expectant parent!

<Bob> I expect rather like the ESA team who have just landed Rosetta’s little probe-child on a comet travelling at 38,000 miles per hour, hundreds of millions of miles from Earth, after a 10 year journey through deep space!  Totally inspiring stuff!

<Leslie> Yes. And that is why I am so excited because OUR DESIGN WORKED!  Exactly as predicted.

<Bob> Three cheers for you!  You have experienced that wonderful feeling when you see the effect of improvement-by-design with your own eyes. When that happens then you really believe what opportunities become possible.

<Leslie> So I want to show you the ‘after’ chart …

[St Elsewhere® call centre wait-time run chart – after]

<Bob> Wow!  That is a spectacular result! The activity looks very similar, and other than a ‘blip’ between 15:00 and 19:00 the prioritycarveoutosis has gone. The spikes have assignable causes I assume?

<Leslie> Spot on again!  The activity was actually well above average for a Saturday.  The subjective feedback was that the new design felt calm and under-control. The chaos had evaporated.  The performance was easily achieved and everyone was very positive about the whole experience.  The sceptics were generous enough to say it had gone better than they expected.  And yes, I am now working through the ‘spikes’ and excluding them … but only once I have a root cause that explains them.

<Bob> Well done Leslie! I sense that you now believe what is possible whereas before you just hoped it would be.

<Leslie> Yes! And the most important thing to me is that we did it ourselves. Which means improvement-by-design can be learned. It is not obvious, it feels counter-intuitive, so it is not easy … but it works.

<Bob> Yes. That is the most important message. And you have now earned your ISP Certificate of Competency.

Improvement Science is exactly like a sport: it requires training and practice to do well.

Elite athletes do not just turn up and try hard … they have invested thousands of hours of blood, sweat and tears to even be eligible to turn up.

And their preparation is not random or haphazard … it is structured and scientific.  Sport is a science.

So it is well worth using this sporting metaphor to outline some critical-to-success factors … because the statistics on improvement projects are not good.

It is said that over 70% of improvement projects fail to achieve their goals.

That is a shocking statistic. It is like saying 70% of runners who start a race do not finish!

And in sport if you try something that you are not ready for then you can seriously damage your health. So just turning up and trying hard is not enough. It can actually be counter-productive!

Common sense tells us that those who fail to complete the course were not well enough prepared to undertake the challenge.  We know that only one person can win a race … but everyone else could finish it.  And to start and finish a tough race is a major achievement for each participant.

It is actually their primary goal.

Being good enough when we need to be is the actual objective; being the best-on-the-day is a bonus. Not winning is not a failure. Not finishing is.


So how does an Improvement Scientist prepare for the improvement challenge?

First, we need enough intrinsic motivation to get out of bed and to invest the required time and effort.  We must have enough passion to get started and to keep going.  We must be disappointed enough with past failures to commit to preventing future ones.  We must be angry enough with the present problems to take action … not on the people … but on the problem. We must be fearful enough of the future consequences of inaction to force us to act. And we need to be excited enough by the prospect of success to reach out for it.

Second, we need some technical training.  How to improve the behaviour and performance of  a complex adaptive system is not obvious. If it were we would all know how to do it. Many of the most effective designs appear counter-intuitive at first sight.  Many of our present assumptions and beliefs are actually a barrier to change.  So we need help and guidance in identifying what assumptions we need to unlearn.

Third, we need to practice what we have learned until it becomes second-nature, and almost effortless. Deceptively easy to the untrained eye.  And we develop our capability incrementally by taking on challenges of graded difficulty. Each new challenge is a bit of a stretch, and we build on what we have achieved already.  There are no short cuts or quick fixes if we want to be capable and confident at taking on BIG improvement challenges.


And we need a coach as well as a trainer.

The role of a trainer is to teach us technical skills and to develop our physical strength, stamina and resilience.

The role of the coach is to help us develop our emotional stamina and resilience.  We need to learn to manage our minds as much as our muscles. We all harbour self-defeating attitudes, beliefs and behaviours. Bad habits that trip us up and cause us to slip, fall and bruise our egos and confidence.

The psychological development is actually more important than the physical … because it is our self-defeating “can’t do” and “yes but” inner voices that sap our intrinsic motivation and prevent us crawling out of bed and getting started.

The UK Cycling Team that won multiple gold medals in the 2012 Olympics did not just train hard and have the latest and best equipment. They also had the support of a very special type of coach, Dr Steve Peters … who showed them how to manage their inner Chimp … and how to develop their mental strength in synergy with their technical ability. The result was a multi-gold medal winning engine.

And we can all benefit from this wisdom just by reading The Chimp Paradox by Dr Steve Peters.


So when we take on a difficult improvement challenge, one that many have tried and failed to overcome, and if we want world class performance as the outcome … then we need to learn the hard-won lessons of the extreme athletes … and we need to model their behaviour.

Because that is what it takes to become an Improvement Science Practitioner.

Our goal is to finish each improvement race that we start … to deliver a significant and sustained improvement.  We do not need to be perfect or the best … we just need to start and finish the race.

[Beeeeeep] It was time for the weekly coaching Webex. Bob, a seasoned practitioner of flow science, dialled into the teleconference with Lesley.

<Bob> Good afternoon Lesley, can I suggest a topic today?

<Lesley> Hi Bob. That would be great … and I am sure you have a good reason for suggesting it.

<Bob> I would like to explore the concept of time-traps again because it is something that many find confusing. Which is a shame because it is often the key to delivering surprisingly dramatic and rapid improvements at no cost.

<Lesley> Well doing exactly that is what everyone seems to be clamouring for so it sounds like a good topic to me. I confess that I am still not confident to teach others about time-traps.

<Bob> OK. Let us start there. Can you describe what happens when you try to teach it?

<Lesley> Well, it seems to be when I say that the essence of a time-trap is that the lead time and the flow are independent … for example the lead time stays the same even though the flow is changing.  That really seems to confuse people … and me too if I am brutally honest.

<Bob> OK. Can you share the example that you use?

<Lesley> Well it depends on who I am talking to. I prefer to use an example that they are familiar with.  If it is a doctor I might use the example of the ward round. If it is a manager I might use the example of emails or meetings.

<Bob> Assume I am a doctor then – an urgent care physician.

<Lesley> OK.  Let us take it that I have done the 4N Chart® and the  top niggle is ‘Frustration because the post-take ward round takes so long that it delays the discharge of patients who then often have to stay an extra night which then fills up the unit with waiting patients and we get blamed for blocking flow from A&E and causing A&E breaches‘.

<Bob> That sounds like a good example. What is the time-trap in that design?

<Lesley> The  post-take ward round.

<Bob> And what justification is usually offered for using that design?

<Lesley> That it is a more efficient use of the expensive doctor’s time if the whole team congregate once a day and work through all the patients admitted over the previous 24 hours. They review the presentation, results of tests, diagnosis, management plans, response to treatment, decide the next steps and do the paperwork.

<Bob> And why is that a time-trap design?

<Lesley> Because  it does not matter if one patient is admitted or ten … the average lead time from the perspective of the patient is the same – about one day.
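An aside: the ‘about one day regardless of numbers’ claim can be checked with a few lines of code. In a once-daily batch design each patient waits for the next ward round, not for the doctor’s capacity, so the average lead time is fixed by the schedule. The arrival pattern and round time below are invented for illustration.

```python
# Sketch: in a once-daily batch design the average lead time is set by the
# schedule, not by how many patients arrive. Illustrative numbers only.
import random

def average_wait(n_patients, round_hour=8.0, seed=1):
    """Patients arrive uniformly through a 24h day; every patient is
    reviewed at the next post-take ward round, which starts at round_hour."""
    random.seed(seed)
    waits = []
    for _ in range(n_patients):
        arrival = random.uniform(0.0, 24.0)          # hour of arrival
        waits.append((round_hour - arrival) % 24.0)  # hours until the round
    return sum(waits) / len(waits)

for n in (10, 100, 1000):
    print(n, round(average_wait(n), 1))  # the average barely changes with flow
```

Whether two patients arrive or two hundred, each one waits, on average, about half a day for the batch – which is exactly what makes it a time-trap.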

<Bob> Correct. So why is the doctor complaining that there are always lots of patients to see?

<Lesley> Because there are. The emergency short stay ward is usually full by the time the post take ward round happens.

<Bob> And how do you present the data that shows the lead time is independent of the flow?

<Lesley> I use a Gantt chart, but the problem I find is that there is so much variation and queue jumping it is not blindingly obvious from the Gantt chart that there is a time-trap. There is so much else clouding the picture.

<Bob> Is that where the ‘but I do not understand‘ conversation starts?

<Lesley> Yes. And that is where I get stuck too.

<Bob> OK.  The issue here is that a Gantt chart is not the ideal visualisation tool when there are lots of crossed-streams, frequently changing priorities, and many other sources of variation.  The Gantt chart gets ‘messy’.   The trick here is to use a Vitals Chart – and you can derive that from the same data you used for the Gantt chart.

<Lesley> You are right about the Gantt chart getting messy. I have seen massive wall-sized Gantt charts that are veritable works-of-art and that have taken hours to create … and everyone standing looking at it and saying ‘Wow! That is an impressive piece of work. So what does it tell us? How does it help?’

<Bob> Yes, I have experienced that too. I think what happens is that those who do the foundation training and discover the Gantt chart then try to use it to solve every flow problem – and in their enthusiasm they discount any warning advice. Desperation drives over-inflated expectation which is often the pre-cursor to disappointment, and then disillusionment. The Nerve Curve again.

<Lesley> But a Vitals Chart is an ISP level technique and you said that we do not need to put everyone through ISP training.

<Bob> That is correct. I am advocating an ISP-in-training using a Vitals Chart to explain the concept of a time-trap so that everyone understands it well enough to see the flaw in the design.

<Lesley> Ah ha!  Yes, I see.  So what is my next step?

<Bob> I will let you answer that.

<Lesley> Um, let me think.

The outcome I want is everyone understands the concept of a time-trap well enough to feel comfortable with trying a different no-trap design because they can see the benefits for them.

And to get that depth of understanding I need to design a table top exercise that starts with a time-trap design and generates raw data that we can use to build both a Gantt chart and the Vitals Chart; so I can point out and explain the characteristic finger-print of a time trap.

And then we ‘test’ an alternative time-trap-free design and generate the prognostic Gantt and Vitals Charts and compare with the baseline diagnostic charts to reveal the improvement.
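Something like the following sketch could be the computational core of that table top exercise: one set of start/finish timestamps drives both a text Gantt view and the work-in-progress curve that a Vitals-style chart is built from. The task times are made up for illustration.

```python
# One set of raw timestamps, two views: a Gantt chart (one bar per task)
# and a WIP curve (tasks in progress at each time step). Invented data.
tasks = [("A", 0, 9), ("B", 2, 11), ("C", 4, 13), ("D", 9, 15)]

# Gantt view: each row shows the task's start-to-finish span
for name, start, finish in tasks:
    print(f"{name} |" + " " * start + "#" * (finish - start))

# Vitals ingredient: sample WIP at each time unit from the same data
horizon = max(finish for _, _, finish in tasks)
wip = [sum(start <= t < finish for _, start, finish in tasks)
       for t in range(horizon)]
print("WIP:", wip)
```

Running the same two views on the baseline design and on the time-trap-free redesign is what reveals the characteristic finger-print and the improvement.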

<Bob> That sounds like a good plan to me.  And if you do that, and your team apply it to a real improvement exercise, and you see the improvement and you share the story … then that will earn you a coveted ISP Certificate of Competency.

<Lesley> Ah ha! Now I understand the reason you suggested this topic!  I am on the case!

We all want a healthcare system that is fit for purpose.

One which can deliver diagnosis, treatment and prognosis where it is needed, when it is needed, with empathy and at an affordable cost.

One that achieves intended outcomes without unintended harm – either physical or psychological.

We want safety, delivery, quality and affordability … all at the same time.

And we know that there are always constraints we need to work within.

There are constraints set by the Laws of the Universe – physical constraints.

These are absolute,  eternal and are not negotiable.

Doctor Who’s fantastical TARDIS is fictional. We cannot distort space, or travel in time, or go faster than light – well not with our current knowledge.

There are also constraints set by the Laws of the Land – legal constraints.

Legal constraints are rigid but they are also adjustable.  Laws evolve over time, and they are arbitrary. We design them. We choose them. And we change them when they are no longer fit for purpose.

The third limit is often seen as the financial constraint. We are required to live within our means. There is no eternal fount of limitless funds to draw from.  We all share a planet that has finite natural resources – and ‘grow’ in one part implies ‘shrink’ in another.  The Laws of the Universe are not negotiable. Mass, momentum and energy are conserved.

The fourth constraint is perceived to be the most difficult yet, paradoxically, is the one that we have most influence over.

It is the cultural constraint.

The collective, continuously evolving, unwritten rules of socially acceptable behaviour.


Improvement requires challenging our unconscious assumptions, our beliefs and our habits – and selectively updating those that are no longer fit-4-purpose.

To learn we first need to expose the gaps in our knowledge and then to fill them.

We need to test our hot rhetoric against cold reality – and when the fog of disillusionment forms we must rip up and rewrite what we have exposed to be old rubbish.

We need to examine our habits with forensic detachment and we need to ‘unlearn’ the ones that are limiting our effectiveness, and replace them with new habits that better leverage our capabilities.

And all of that is tough to do. Life is tough. Living is tough. Learning is tough. Leading is tough. But it is energising too.

Having a model-of-effective-leadership to aspire to and a peer-group for mutual respect and support is a critical piece of the jigsaw.

It is not possible to improve a system alone. No matter how smart we are, how committed we are, or how hard we work.  A system can only be improved by the system itself. It is a collective and a collaborative challenge.


So with all that in mind let us sketch a blueprint for a leader of systemic cultural improvement.

What values, beliefs, attitudes, knowledge, skills and behaviours would be on our ‘must have’ list?

What hard evidence of effectiveness would we ask for? What facts, figures and feedback?

And with our check-list in hand would we feel confident to spot an ‘effective leader of systemic cultural improvement’ if we came across one?


This is a tough design assignment because it requires the benefit of  hindsight to identify the critical-to-success factors: our ‘must have and must do’ and ‘must not have and must not do’ lists.

H’mmmm ….

So let us take a more pragmatic and empirical approach. Let us ask …

“Are there any real examples of significant and sustained healthcare system improvement that are relevant to our specific context?”

And if we can find even just one Black Swan then we can ask …

Q1. What specifically was the significant and sustained improvement?
Q2. How specifically was the improvement achieved?
Q3. When exactly did the process start?
Q4. Who specifically led the system improvement?

And if we do this exercise for the NHS we discover some interesting things.

First let us look for exemplars … and let us start using some official material – the Monitor website (http://www.monitor.gov.uk) for example … and let us pick out ‘Foundation Trusts’ because they are the ones who are entrusted to run their systems with a greater degree of capability and autonomy.

And what we discover is a league table where those FTs that are OK are called ‘green’ and those that are Not OK are coloured ‘red’.  And there are some that are ‘under review’ so we will call them ‘amber’.

The criteria for deciding this RAG rating are embedded in a large balanced scorecard of objective performance metrics linked to a robust legal contract that provides the framework for enforcement.  Safety metrics like standardised mortality ratios, flow metrics like 18-week and 4-hour target yields, quality metrics like the friends-and-family test, and productivity metrics like financial viability.

A quick tally revealed 106 FTs in the green, 10 in the amber and 27 in the red.

But this is not much help with our quest for exemplars because it is not designed to point us to who has improved the most, it only points to who is failing the most!  The league table is a name-and-shame motivation-destroying cultural-missile fuelled by DRATs (delusional ratios and arbitrary targets) and armed with legal teeth.  A projection of the current top-down, Theory-X, burn-the-toast-then-scrape-it management-of-mediocrity paradigm. Oh dear!

However,  despite these drawbacks we could make better use of this data.  We could look at the ‘reds’ and specifically at their styles of cultural leadership and compare with a random sample of all the ‘greens’ and their models for success. We could draw out the differences and correlate with outcomes: red, amber or green.

That could offer us some insight and could give us the head start with our blueprint and check-list.


It would be a time-consuming and expensive piece of work and we do not want to wait that long. So what other avenues are there we can explore now and at no cost?

Well there are unofficial sources of information … the ‘grapevine’ … the stuff that people actually talk about.

What examples of effective improvement leadership in the NHS are people talking about?

Well a little blue bird tweeted one in my ear this week …

And specifically they are talking about a leader who has learned to walk-the-improvement-walk and is now talking-the-improvement-walk: and that is Sir David Dalton, the CEO of Salford Royal.

Here is a copy of the slides from Sir David’s recent lecture at the King’s Fund … and it is interesting to compare and contrast it with the style of NHS Leadership that led up to the Mid Staffordshire Failure, and to the Francis Report, and to the Keogh Report and to the Berwick Report.

Chalk and cheese!


So if you are an NHS employee would you rather work as part of an NHS Trust where the leaders walk-DD’s-walk and talk-DD’s-talk?

And if you are an NHS customer would you prefer that the leaders of your local NHS Trust walked Sir David’s walk too?


We are the system … we get the leaders that we deserve … we make the  choice … so we need to choose wisely … and we need to make our collective voice heard.

Actions speak louder than words.  Walk works better than talk.  We must be the change we want to see.

[Bing bong]. The sound heralded Lesley logging on to the weekly Webex coaching session with Bob, an experienced Improvement Science Practitioner.

<Bob> Good afternoon Lesley.  How has your week been and what topic shall we explore today?

<Lesley> Hi Bob. Well in a nutshell, the bit of the system that I have control over feels like a fragile oasis of calm in a perpetual desert of chaos.  It is hard work keeping the oasis clear of the toxic sand that blows in!

<Bob> A compelling metaphor. I can just picture it.  Maintaining order amidst chaos requires energy. So what would you like to talk about?

<Lesley> Well, I have a small shoal of FISHees who I am guiding  through the foundation shallows and they are getting stuck on Little’s Law.  I confess I am not very good at explaining it and that suggests to me that I do not really understand it well enough either.

<Bob> OK. So shall we link those two themes – chaos and Little’s Law?

<Lesley> That sounds like an excellent plan!

<Bob> OK. So let us refresh the foundation knowledge. What is Little’s Law?

<Lesley> It is a fundamental Law of process physics that relates flow, lead time and work in progress.

<Bob> Good. And specifically?

<Lesley> Average lead time is equal to the average flow multiplied by the average work in progress.

<Bob> Yes. And what are the units of flow in your equation?

<Lesley> Ah yes! That is a trap for the unwary. We need to be clear how we express flow. The usual way is to state it as the number of tasks in a defined period of time, such as patients admitted per day.  In Little’s Law the convention is to use the inverse of that, which is the average interval between consecutive flow events. This is an unfamiliar way to present flow to most people.
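A quick numeric check of the interval form may help here. In the deterministic sketch below (invented numbers: one task arriving every 5 minutes, each taking 4 minutes), the time-averaged WIP multiplied by the average interval between departures reproduces the average lead time.

```python
# Deterministic single-stream check of Little's Law in interval form:
# average lead time = average WIP x average interval between flow events.
takt, cycle, n = 5.0, 4.0, 1000     # minutes; cycle < takt so no queue grows

finish = 0.0
lead_times = []
for i in range(n):
    arrival = i * takt
    start = max(arrival, finish)    # wait only if the server is still busy
    finish = start + cycle
    lead_times.append(finish - arrival)

avg_lead = sum(lead_times) / n              # minutes per task
avg_interval = takt                         # steady state: one departure per takt
avg_wip = sum(lead_times) / (n * takt)      # time-averaged tasks in progress
print(avg_lead, avg_wip * avg_interval)     # the two agree
```

The agreement is exact because the time-averaged WIP is the area under the same lead-time data divided by elapsed time – which is the essence of the standard proof of the law.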

<Bob> Good. And what is the reason that we use the ‘interval between events’ form?

<Lesley> Because it is easier to compare it with two critically important flow metrics … the takt time and the cycle time.

<Bob> And what is the takt time?

<Lesley> It is the average interval between new tasks arriving … the average demand interval.

<Bob> And the cycle time?

<Lesley> It is the shortest average interval between tasks departing … and is determined by the design of the flow constraint step.

<Bob> Excellent. And what is the essence of a stable flow design?

<Lesley> That the cycle time is less than the takt time.

<Bob> Why less than? Why not equal to?

<Lesley> Because all realistic systems need some flow resilience to exhibit stable and predictable-within-limits behaviour.
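The need for that safety margin can be sketched with a toy simulation (arbitrary numbers): arrivals are regular, service times are random, and we compare a design whose mean cycle time equals the takt time with one that keeps a 20% margin.

```python
# Why cycle time must be strictly less than takt time: with variation,
# a design loaded to 100% lets waits drift ever longer, while a modest
# margin keeps them stable. Numbers are arbitrary.
import random

def mean_wait(mean_cycle, takt=10.0, n=20000, seed=42):
    """Regular arrivals every `takt` minutes; service times are random
    with mean `mean_cycle`. Returns the average wait before service."""
    random.seed(seed)
    finish = 0.0
    total_wait = 0.0
    for i in range(n):
        arrival = i * takt
        start = max(arrival, finish)
        total_wait += start - arrival
        finish = start + random.expovariate(1.0 / mean_cycle)  # variable work
    return total_wait / n

print(mean_wait(10.0))   # mean cycle == takt: waits drift long
print(mean_wait(8.0))    # 20% margin: waits stay short and stable
```

With zero margin the queue behaves like a random walk with no restoring force, so any burst of slow tasks leaves a backlog that never reliably clears.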

<Bob> Excellent. Now describe the design requirements for creating chronically chaotic system behaviour?

<Lesley> This is a bit trickier to explain. The essence is that for chronically chaotic behaviour to happen there must be two feedback loops – a destabilising loop and a stabilising loop.  The destabilising loop creates the chaos, the stabilising loop ensures it is chronic.

<Bob> Good … so can you give me an example of a destabilising feedback loop?

<Lesley> A common one that I see is when there is a long delay between detecting a safety risk and the diagnosis, decision and corrective action.  The risks are often transitory, so if the corrective action arrives long after the root cause has gone away then it can actually destabilise the process and paradoxically increase the risk of harm.

<Bob> Can you give me an example?

<Lesley> Yes. Suppose a safety risk is exposed by a near miss.  A delay in communicating the niggle and doing a root cause analysis means that the specific combination of factors that led to the near miss has gone. The holes in the Swiss cheese are not static … they move about in the chaos.  So the action that follows the accumulation of many undiagnosed near misses is usually the non-specific mantra of adding yet another safety-check to the already burgeoning check-list. The longer check-list takes more time to do, and is often repeated many times, so the whole flow slows down, queues grow bigger, waiting times get longer; and as pressure comes from the delivery targets corners start being cut, and new near misses start to occur on top of the old ones. So more checks are added, and so on.
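The destabilising effect of delay described above can be caricatured in a few lines: the same corrective action that calms a process when applied promptly destabilises it when it reacts to stale information. The gain and delay constants are arbitrary.

```python
# Caricature of delayed corrective feedback: a correction that stabilises
# a process when applied promptly destabilises it when applied to old
# information. Constants are arbitrary.
def simulate(delay, gain=0.8, steps=60):
    level = [1.0] * (delay + 1)                     # history of measured level
    for _ in range(steps):
        correction = gain * level[-1 - delay]       # reacts to stale data
        level.append(level[-1] - correction + 0.5)  # fresh risk keeps arriving
    return level

def spread(xs):
    """Size of the residual oscillation over the last ten steps."""
    tail = xs[-10:]
    return max(tail) - min(tail)

print(spread(simulate(0)))   # prompt feedback settles to a steady level
print(spread(simulate(5)))   # delayed feedback oscillates ever more wildly
```

The prompt loop converges; the delayed loop over-corrects for problems that have already moved on, which is the signature of the destabilising loop in the check-list story.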

<Bob> An excellent example! And what is the outcome?

<Leslie> Chronic chaos which is more dangerous, more disordered and more expensive. Lose lose lose.

<Bob> And how do the people feel who work in the system?

<Leslie> Chronically naffed off! Angry. Demotivated. Cynical.

<Bob> And those feelings are the key symptoms. Niggles are not only symptoms of poor process design, they are also symptoms of a much deeper problem: a violation of values.

<Leslie> I get the first bit about poor design; but what is that second bit about values?

<Bob> We all have a set of values that we learned when we were very young and that have been shaped by life experience. They are our source of emotional energy, and our guiding lights in an uncertain world. Our internal unconscious check-list. So when one of our values is violated we know, because we feel angry. How that anger is directed varies from person to person … some internalise it and some externalise it.

<Leslie> OK. That explains the commonest emotion that people report when they feel a niggle … frustration, which is the same as anger.

<Bob> Yes. And we reveal our values by uncovering the specific root causes of our niggles. For example, if I value ‘Hard Work’ then I will be niggled by laziness. If you value ‘Experimentation’ then you may be niggled by ‘Rigid Rules’. If someone else values ‘Safety’ then they may value ‘Rigid Rules’ and be niggled by ‘Innovation’, which they interpret as risky.

<Leslie> Ahhhh! Yes, I see.  This explains why there is so much impassioned discussion when we do a 4N Chart! But if this behaviour is so innate then it must be impossible to resolve!

<Bob> Understanding how our values motivate us actually helps a lot, because we are naturally attracted to others who share the same values – we have learned that it reduces conflict and stress and improves our chance of survival. We are tribal, and tribes share the same values.

<Leslie> Is that why different departments appear to have different cultures and behaviours, and why they fight each other?

<Bob> It is one factor in the Silo Wars that are a characteristic of some large organisations.  But Silo Wars are not inevitable.

<Leslie> So how are they avoided?

<Bob> By everyone knowing what the common purpose of the organisation is, and by being clear about which values are aligned with that purpose.

<Leslie> So in the healthcare context one purpose is avoidance of harm … primum non nocere … so ‘safety’ is a core value.  Which implies anything that is felt to be unsafe generates niggles and well-intended but potentially self-destructive negative behaviour.

<Bob> Indeed so, as you described very well.

<Leslie> So how does all this link to Little’s Law?

<Bob> Let us go back to the foundation knowledge. What are the four interdependent dimensions of system improvement?

<Leslie> Safety, Flow, Quality and Productivity.

<Bob> And one measure of productivity is profit. So organisations that have only short-term profit as their primary goal are at risk of making poor long-term safety, flow and quality decisions.

<Leslie> And flow is the key dimension – because profit is just the difference between two cash flows: income and expenses.

<Bob> Exactly. One way or another it all comes down to flow … and Little’s Law is a fundamental Law of flow physics. So if you want all the other outcomes … without the emotionally painful disorder and chaos … then you cannot avoid learning to use Little’s Law.

<Leslie> Wow!  That is a profound insight.  I will need to lie down in a darkened room and meditate on that!

<Bob> An oasis of calm is the perfect place to pause, rest and reflect.
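[Aside: Little’s Law, which the dialogue above appeals to, can be illustrated with a few lines of Python. The patient numbers are invented.]

```python
def average_wip(flow_rate, lead_time):
    """Little's Law: average work-in-progress = flow-rate x lead-time."""
    return flow_rate * lead_time

def average_lead_time(wip, flow_rate):
    """Rearranged: average lead-time = work-in-progress / flow-rate."""
    return wip / flow_rate

# A department with 4 patients per hour flowing through, each spending
# 2.5 hours inside, holds 10 patients at any moment on average:
print(average_wip(flow_rate=4.0, lead_time=2.5))   # 10.0
# And 10 in progress at a flow of 4 per hour implies a 2.5 hour lead time:
print(average_lead_time(wip=10.0, flow_rate=4.0))  # 2.5
```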

We spend a lot of time in a state of anxiety and fear. It is part and parcel of life because there are many real threats that we need to detect and avoid.

For our own safety and survival.

Unfortunately there are also many imagined threats that feel just as real and just as terrifying.

In these cases it is our fear that does the damage because it paralyses our decision making and triggers our ‘fright’ then ‘fight’ or ‘flight’ reaction.

Fear is not bad … the emotional energy it releases can be channelled into change and improvement. Just as anger can.


So we need to be able to distinguish the real fears from the imaginary ones. And we need effective strategies to defuse the imaginary ones.  Because until we do that we will find it very difficult to listen, learn, experiment, change and improve.

So let us grasp the nettle and talk about a dozen universal fears …

Fear of dying before one’s time.
Fear of having one’s basic identity questioned.
Fear of poverty or loss of one’s livelihood.
Fear of being denied one’s fundamental rights and liberties.

Fear of being unjustly accused of wrongdoing.
Fear of public humiliation.
Fear of being unjustly seen as lacking character.
Fear of being discovered as inauthentic – a fraud.

Fear of radical change.
Fear of feedback.
Fear of failure.
Fear of the unknown.

Notice that some of these fears are much ‘deeper’ than others … this list is approximately in depth order. Some relate to ‘self’; some relate to ‘others’; and all are inter-related to some degree. Fear of failure links to fear of humiliation and to fear of loss-of-livelihood.


Of these the four that are closest to the surface are the easiest to tackle … fear of radical change, fear of feedback, fear of failure, and fear of the unknown.  These are the Four Fears that block personal improvement.


Fear of the unknown is the easiest to defuse. We just open the door and look … from an emotionally safe distance so that we can run away if our worst fears are realised … which does not happen when the fear is imagined.

This is an effective strategy for defusing the emotionally and socially damaging effects of self-generated phobias.

And we find overcoming fear-of-the-unknown exhilarating … that is how theme parks and roller-coaster rides work.

First we open our eyes, we look, we see, we observe, we reflect, we learn and we convert the unknown to the unfamiliar and then to the familiar. We may not conquer our fear completely … there may be some reasonable residual anxiety … but we have learned to contain it and to control it. We have made friends with our inner Chimp. We climb aboard the roller coaster that is called ‘life’.


Fear of failure is next.  We defuse this by learning how to fail safely so that we can learn-by-doing and by that means we reduce the risk of future failures. We make frequent small safe failures in order to learn how to avoid the rare big unsafe ones!

Many people approach improvement from an academic angle. They sit on the fence. They are the reflector-theorists. And this may be because they are too fearful-of-failing to learn the how-by-doing. So they are unable to demonstrate the how, and their fear becomes the fear-of-fraud and the fear-of-humiliation. They are blocked from developing their pragmatist/activist capability by their self-generated fear-of-failure.

So we start small, we stay focussed, we stay inside our circle of control, and we create a safe zone where we can learn how to fail safely – first in private and later in public.

One of the most inspiring behaviours of an effective leader is the courage to learn in public and to make small failures that demonstrate their humility and humanity.

Those who insist on ‘perfect’ leaders are guaranteed to be disappointed.


And one thing that we all repeatedly fail to do is to ask for, to give and to receive effective feedback. This links to the deeper fear-of-humiliation.

And it is relatively easy to defuse this fear-of-feedback too … we just need a framework to support us until we find our feet and our confidence.

The key to effective feedback is to make it non-judgemental.

And that can only be done by developing our ability to step back and out of the Drama Triangle and to cultivate an I’m OK – You’re OK mindset.

The mindset of mutual respect. Self-respect and Other-respect.

And remember that Other-respect does not imply trust, alignment, agreement, or even liking.

Sworn enemies can respect each other while at the same time not trusting, liking or agreeing with each other.

Judgement-free feedback (JFF) is a very effective technique … both for defusing fear and for developing mutual respect.

And from that foundation radical change becomes possible, even inevitable.

All innovative ideas are inevitably associated with new language.

Familiar words used in an unfamiliar context so that the language sounds ‘wacky’ to those in the current paradigm.

Improvement science is no different.

A problem arises when familiar words are used in a new context and therefore with a different meaning. Confusion.

So we try to avoid this cognitive confusion by inventing new words, or by using foreign words that are ‘correct’ but unfamiliar.

This use of novel and foreign language exposes us to another danger: the evolution of a clique of self-appointed experts who speak the new and ‘wacky’ language.

This self-appointed expert clique can actually hinder change because it can result in yet another us-and-them division. Another tribe. More discussion. More confusion. Less improvement.


So it is important for an effective facilitator-of-improvement to define any new language using the language of the current paradigm.  This can be achieved by sharing examples of new concepts and their language in familiar contexts and with familiar words, because we learn what words mean from their use-in-context.

For example:

The word ‘capacity’ is familiar and we all know what we think it means.  So when we link it to another familiar word, ‘demand’, then we feel comfortable that we understand what the phrase ‘demand-and-capacity’ means.

But do we?

The act of recognising a word is a use of memory or knowledge. Understanding what a word means requires more … it requires knowing the context in which the word is used.  It means understanding the concept that the word is a label for.

To a practitioner of flow science the word ‘capacity’ is confusing – because it is too fuzzy.  There are many different forms of capacity: flow-capacity, space-capacity, time-capacity, and so on.  Each has a different unit and they are not interchangeable. So the unqualified term ‘capacity’ will trigger the question:

What sort of capacity are you referring to?

[And if that is not the reaction then you may be talking to someone who has little understanding of flow science].
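The ambiguity of the word ‘capacity’ can be made concrete in code: give each form of capacity its own type and the units can no longer be muddled. (A minimal sketch; the class names are illustrative, not part of any standard vocabulary.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowCapacity:
    tasks_per_hour: float   # a rate

@dataclass(frozen=True)
class SpaceCapacity:
    slots: int              # a count: beds, chairs, cubicles

@dataclass(frozen=True)
class TimeCapacity:
    staffed_hours: float    # a duration

def utilisation(demand_per_hour, flow):
    # Only a flow-capacity (a rate) can meaningfully be compared with
    # demand; passing a SpaceCapacity here would be a category error.
    return demand_per_hour / flow.tasks_per_hour

print(utilisation(8.0, FlowCapacity(tasks_per_hour=10.0)))  # 0.8
```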


Then there are the foreign words that are used as new labels for old concepts.

Lean zealots seem particularly fond of peppering their monologues with Japanese words that are meaningless to anyone but other Lean zealots. Words like muda and muri and mura, which are labels for important and useful flow science concepts … but the foreign name gives no clue as to what that essential concept is!

[And for a bit of harmless sport ask a Lean zealot to explain what these three words actually mean, but only using language that you understand. If they cannot do so to your satisfaction then you have exposed the niggle. And if they can then it is worth asking 'What is the added value of the foreign language?']

And for those who are curious to know the essential concepts that these four-letter M words refer to:

muda means ‘waste’ and refers to the effects of poor process design in terms of the extra time (and cost) required for the process to achieve its intended purpose.  A linked concept is a ‘niggle’ which is the negative emotional effect of a poor process design.

muri means ‘overburdening’ and can be illustrated with an example. Suppose you work in a system where there is always a big backlog of work waiting to be done … a large queue of patients in the waiting room … a big heap of notes on the trolley. That ‘burden’ generates stress and leads to other risky behaviours such as rushing, corner-cutting, deflection and overspill. It is also an outcome of poor process design, so is avoidable.

mura means variation or uncertainty. Again an example helps. Suppose we are running an emergency service: then, by definition, we have no idea what medical problem the next patient through the door will present us with. It could be trivial or life-threatening. That is unplanned but expected variation, and it is part of what we need our service to be designed to handle. Suppose that when we arrive for our shift we have no idea how many staff will be available to do the work, because people phone in sick at the last minute and there is no resilience in the staffing capacity. Our day could be calm-and-capable (and rewarding) or chaotic-and-incapable (and unrewarding). It is the stress of not knowing that creates the emotional and cultural damage, and that is the expected outcome of incompetent process design. And it is avoidable.
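The effect of mura can be shown with a toy queue model: two arrival streams with the same average demand, served at the same rate, but only the variable one builds a queue. (All numbers are invented.)

```python
def max_queue(arrivals, service_rate=1):
    """Track the worst queue length when 'arrivals' tasks turn up each
    period and up to 'service_rate' tasks are completed each period."""
    queue, worst = 0, 0
    for arriving in arrivals:
        queue = max(0, queue + arriving - service_rate)
        worst = max(worst, queue)
    return worst

smooth = [1, 1, 1, 1, 1, 1]   # no variation: average demand 1 per period
lumpy  = [3, 0, 0, 3, 0, 0]   # same average, but it arrives in bursts

print(max_queue(smooth))  # 0 -> capacity matches demand, no queue
print(max_queue(lumpy))   # 2 -> same capacity, but variation creates a queue
```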


And finally we come to words that are not foreign but are not very familiar either.

Words like praxis.

This sounds like ‘practice’ but is not spelt the same. So is it the same?

And it sounds like a medical condition called dyspraxia which means:  poor coordination of movement.

And when we look up praxis in an English dictionary we discover that one definition is:

the practice and practical side of a profession or field of study, as opposed to theory.

Ah ha! So praxis is a label for the concept of ‘how to’ … and someone who has this ‘know how’ is called a practitioner. That makes sense.

On deeper reflection we might then describe our poor collective process design capability as dyspraxic or uncoordinated. That feels about right too.


An improvement science practitioner (ISP) is someone who knows the science of improvement; and can demonstrate their know-how in practice; and can explain the principles that underpin their praxis using the language of the learner. Without any wacky language.

So if we want to diagnose and treat our organisational dyspraxia;

… and if we want smooth and efficient services (i.e. elimination of chaos and reduction of cost);

… and if we want to learn this know-how,  practice or praxis;

… then we could study the Foundations of Improvement Science in Healthcare (FISH);

… and we could seek the wisdom of  the growing Community of Healthcare Improvement Practitioners (CHIPs).


FISH & CHIPs … a new use for a familiar phrase?

The dictionary definition of resilience is “something that is capable of returning to its original shape after being stretched, bent or otherwise deformed”.

The term is applied to inanimate objects, to people and to systems.

A rubber ball is resilient … it is that physical property that gives it bounce.

A person is described as resilient if they are able to cope with stress without being psychologically deformed in the process.  Emotional resilience is regarded as an asset.

Systems are described as resilient when they are able to cope with variation without failing. And this use of the term is associated with another concept: strength.

Strong things can withstand a lot of force before they break. Strength is not the same as resilience.

Engineers use another term – strain – which means the amount of deformation that happens when a force is applied.

Stress is the force applied, strain is the deformation that results.

So someone who is strong and resilient will not buckle under high pressure and will absorb variation – like the suspension of your car.

But is strength-and-resilience always an asset?


Suppose some strong and resilient people find themselves in a relentlessly changing context … one in which they actually need to adapt and evolve to survive in the long term.

How well does their highly valued strength-and-resilience asset serve them?  Not very well.

They will resist the change – they are resilient – and they will resist it for a long time – they are strong.

But the change is relentless and eventually the limit of their strength will be reached … and they snap!

And when that happens all the stored energy is suddenly released. So they do not just snap – they explode!

Just like the wall in the animation above.

The final straw that triggers the sudden failure may appear insignificant … and at any other time it would be.

But when the pressure is really on and the system is at the limit then it can be just enough to trigger the catastrophic failure from which there is no return.


Social systems behave in exactly the same way.

Those that have demonstrated durability are both strong and resilient – but in a relentlessly changing context even they will fail eventually, and when they do the collapse is sudden and catastrophic.

Structural engineers know that catastrophic failure usually starts at a localised failure and spreads rapidly through the hyper-stressed structure; each part failing in sequence as it becomes exposed and exceeds its limit of strength.  That is how the strong and resilient Twin Towers failed and fell on Sept 11th 2001. They were not knocked over. They were weakened to the point of catastrophic failure.

When systems are exposed to variable strains then these localised micro-fractures only occur at the peaks of stress and may not have time to spread very far. The damage is done though. The system is a bit weaker than it was before. And catastrophic failure is more likely in the future.

That is what caused the sudden loss of some of the first jet airliners which inexplicably just fell out of the sky on otherwise uneventful flights.  It took a long time for the root cause to be uncovered … the square windows.

Jet airliners fly at high altitude because it allows higher speeds and requires less fuel, and so allows long-distance flight over wide oceans, steppes, deserts and icecaps. But the air pressure is low at high altitude and passengers could not tolerate that: so the air pressure inside an airliner at high altitude is much higher than outside. It is a huge flying pressurised metal canister.

And as it goes up and down the thin metal skin is exposed to high variations in stress, which a metal tube can actually handle rather well … until we punch holes in it to fit windows to allow our passengers a nice view of the clouds outside.

We are used to square windows in our houses (because they are easier to make) so the aircraft engineers naturally put square windows in the early airliners. And that is where the problem arose … the corners of the windows concentrate the stress and over time, with enough take-offs and landings, the metal skin at the corners of the windows accumulates invisible micro-fractures. The metal actually fatigues. Then one day – pop – a single rivet at the corner of a square window fails and triggers the catastrophic failure of the whole structure. But the aircraft designers did not understand that.

The solution? A more resilient design – use round-cornered windows. It was that simple!


So what is the equivalent resilient design for a social system? Adaptability.

But how is it possible for a system to be strong, resilient and adaptable?

The trick is to install “emotional strain gauges” that indicate when and where the internal cultural stress is being concentrated and where the emotional strain shows first.

These niggleometers will alert us to where the stresses and strains are being felt strongest and most often – rather like pain detectors. We use the patterns of information from our network of niggleometers to help us focus our re-design attention to continuously adapt parts of our system to relieve the strain and to reduce the system wide risk of catastrophic failure.

And by installing niggleometers across our system we will move towards a design that is strong, resilient and that continuously adapts to a changing environment.

It really is that simple.

[Beep] It was time for the weekly e-mentoring session so Bob switched on his laptop, logged in to the virtual meeting site and found that Lesley was already there.

<Bob> Hi Lesley. What shall we talk about today?

<Lesley> Hello Bob. Another old chestnut I am afraid. Queues. I keep hitting the same barrier where people who are fed up with the perpetual queue chaos have only one mantra: “If you want to avoid long waiting times then we need more capacity.”

<Bob> So what is the problem? You know that is not the cause of chronic queues.

<Lesley> Yes, I know that mantra is incorrect – but I do not yet understand how to respectfully challenge it and how to demonstrate why it is incorrect and what the alternative is.

<Bob> OK. I understand. So could you outline a real example that we can work with?

<Lesley> Yes. Another old chestnut: the Emergency Department 4-hour breaches.

<Bob> Do you remember the Myth of Sisyphus?

<Lesley> No, I do not remember that being mentioned in the FISH course.

<Bob> Ho ho! No indeed, it is much older. In Greek mythology Sisyphus was a king of Ephyra who was punished by the Gods for chronic deceitfulness by being compelled to roll an immense boulder up a hill, only to watch it roll back down, and then to repeat this action forever.


<Lesley> Ah! I see the link. Yes, that is exactly how people in the ED feel. Every day it feels like they are pushing a heavy boulder uphill – only to have to repeat the same labour the next day. And they do not believe it can ever be any better with the resources they have.

<Bob> A rather depressing conclusion! Perhaps a better metaphor is the story in the film “Groundhog Day” where Bill Murray plays the part of a rather arrogant newsreader who enters a recurring nightmare where the same day is repeated, over and over. He seems powerless to prevent it. He does eventually escape when he learns the power of humility and learns how to behave differently.

<Lesley> So the message is that there is a way out of this daily torture – if we are humble enough to learn the ‘how’.

<Bob> Well put. So shall we start?

<Lesley> Yes please!

<Bob> OK. As you know very well it is important not to use the unqualified term ‘capacity’.  We must always state if we are referring to flow-capacity or space-capacity.

<Lesley> Because they have different units and because they are intimately related to lead time by Little’s Law.

<Bob> Yes. Little’s Law is a mathematically proven law of flow physics – it is not negotiable.

<Lesley> OK. I know that, but how does it solve the problem we started with?

<Bob> Little’s Law is necessary but it is not sufficient. Little’s Law relates to averages – and is therefore just the foundation. We now need to build the next level of understanding.

<Lesley> So you mean we need to introduce variation?

<Bob> Yes. And the tool we need for this is a particular form of time-series chart called a Vitals Chart.

<Lesley> And I am assuming that will show the relationship between flow, lead time and work in progress … over time ?

<Bob> Exactly. It is the temporal patterns on the Vitals Chart that point to the root causes of the Sisyphean Chaos. The flow design flaws.

<Lesley> Which are not lack of flow-capacity or space-capacity.

<Bob> Correct. If the chaos is chronic then there must already be enough space-capacity and flow-capacity. Little’s Law shows that, because if there were not the system would have failed completely a long time ago. The usual design flaw in a chronically chaotic system is one or more misaligned policies.  It is as if the system hardware is OK but the operating software is not.

<Lesley> So to escape from the Sisyphean Recurring ED 4-Hour Breach Nightmare we just need enough humility and enough time to learn how to diagnose and redesign some of our ED system operating software? Some of our own policies? Some of our own mantras?

<Bob> Yup.  And not very much actually. Most of the software is OK. We need to focus on the flaws.

<Lesley> So where do I start?

<Bob> You need to do the ISP-1 challenge that is called Brainteaser 104.  That is where you learn how to create a Vitals Chart.

<Lesley> OK. Now I see what I need to do and the reason: understanding how to do that will help me explain it to others. And you are not going to just give me the answer.

<Bob> Correct. I am not going to just give you the answer. You will not fully understand unless you are able to build your own Vitals Chart generator. You will not be able to explain the how to others unless you demonstrate it to yourself first.

<Lesley> And what else do I need to do that?

<Bob> A spreadsheet and your raw start and finish event data.

<Lesley> But we have tried that before and neither I nor the database experts in our Performance Department could work out how to get the real time work in progress from the events – so we assumed we would have to do a head count or a bed count every hour which is impractical.

<Bob> It is indeed possible as you are about to discover for yourself. The fact that we do not know how to do something does not prove that it is impossible … humility means accepting our inevitable ignorance and being open to learning. Those who lack humility will continue to live the Sisyphean Nightmare of ED Ground Hog Day. The choice to escape is ours.

<Lesley> I choose to learn. Please send me BT104.

<Bob> It is on its way …
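The calculation Lesley’s Performance Department struggled with – deriving real-time work-in-progress from raw start and finish events – can be sketched in a few lines. (This is an illustration of the general idea, not the content of BT104; the times are invented.)

```python
def wip_over_time(events):
    """events: list of (start, finish) times, one pair per task.
    Returns a time-ordered list of (time, wip) steps: each arrival
    raises the work-in-progress count by one, each departure lowers it."""
    steps = []
    for start, finish in events:
        steps.append((start, +1))
        steps.append((finish, -1))
    steps.sort()
    wip, series = 0, []
    for time, delta in steps:
        wip += delta
        series.append((time, wip))
    return series

# Three tasks with overlapping stays (hours since midnight):
events = [(9.0, 12.0), (10.0, 11.0), (10.5, 13.0)]
print(wip_over_time(events))
# WIP steps up to 3 at 10.5, then falls back to 0 by 13.0
```

No hourly head-count is needed: the event data alone reconstructs the whole WIP time-series.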

In a recent blog we explored the subject of learning styles and how a balance of complementary learning styles is needed to get the wheel-of-change turning.

Experience shows that many of us have a relative weakness in the ‘Activist’ quadrant of the cycle.

That implies we are less comfortable with learning-by-doing. Experimenting.

This behaviour is driven by a learned fear.  The fear-of-failure.

So when did we learn this fear?

Typically it is learned during childhood and is reinforced throughout adulthood.

The fear comes not from the failure though … it comes from the emotional reaction of others to our supposed failure. The emotional backlash of significant others. Parents and parent-like figures such as school teachers.

Children are naturally curious, experimental and fearless. That is how they learn. They make lots of mistakes – but they learn from them. Walking, talking, tying a shoelace, and so on. Small mistakes do not create fear. We learn fear from others.

Full-of-fear others.

To an adult who has learned how to do many things it becomes easy to be impatient with the trial-and-error approach of a child … and typically we react in three ways:

1) We say “Don’t do that” when we see our child attempt something in a way we believe will not work or we believe could cause an accident. We teach them our fears.

2) We say “No” when we disagree with an idea or an answer that a child has offered. We discount them by discounting their ideas.

3) We say “I’ll do it” when we see a child try and fail. We discount their ability to learn how to solve problems and we discount our ability to let them.

Our emotional reaction is negative in all three cases and that is what teaches our child the fear of failure.

So they stop trying as hard.

And bit-by-bit they lose their curiosity and their courage.

We have now put them on the path to scepticism and cynicism.  Which is how we were taught.


This fear-of-failure brainwashing continues at school.

But now it is more than just fear of disappointing our parents; now it is fear of failing tests and exams … fear of the negative emotional backlash from peers, teachers and parents.

Some give up: they flee.  Others become competitive: they fight.

Neither strategy dissolves the source of the fear though … both just exacerbate it.


So it is rather too common to see very accomplished people paralysed with fear when circumstances dictate that they need to change in some way … to learn a new skill for example … to self-improve maybe.

Their deeply ingrained fear-of-failure surfaces and takes over control – and the fright/flight/fight behaviour is manifest.


So to get to the elusive win-win-win outcomes we want we have to weaken the fear-of-failure reflex … we need to develop a new habit … learning-by-doing.

The trick to this is to focus on things that fall 100% inside our circle of control … the Niggles that rank highest on our Niggle-o-Gram®.

And when we Study the top niggle; and then Plan the change; and then Do what we planned; and then Study the effect of our action … then we learn-by-doing.

But not just by doing … by Studying, Planning, Doing and Studying again.

Actions Speak not just to us but to everyone else too.

Systems are made of interdependent parts that link together – rather like a jigsaw.

If pieces are distorted, missing, or in the wrong place then the picture is distorted and the system does not work as well as it could.

And if pieces of one jigsaw are mixed up with those of another then it is even more difficult to see any clear picture.

A system of improvement is just the same.

There are many improvement jigsaws, each of which has pieces that fit well together and form a synergistic whole. Lean, Six Sigma, and Theory of Constraints are three well known ones.

Each improvement jigsaw evolved in a different context so naturally the picture that emerges is from a particular perspective: such as manufacturing.

So when the improvement context changes then the familiar jigsaws may not work as well: such as when we shift context from products to services, and from commercial to public.

A public service such as healthcare requires a modified improvement jigsaw … so how do we go about getting that?


One way is to ‘evolve’ an old jigsaw into a new context. That is tricky because it means adding new pieces and changing old pieces and the ‘zealots’ do not like changing their familiar jigsaw so they resist.

Another way is to ‘combine’ several old jigsaws in the hope that together they will provide enough perspectives. That is even more tricky because now you have several tribes of zealots who resist having their familiar jigsaws modified.

What about starting with a blank canvas and painting a new picture from scratch? Well it is actually very difficult to create a blank canvas for learning because we cannot erase what we already know. Our current mental model is the context we need for learning new knowledge.


So what about using a combination of the above?

What about first learning a new creative approach called design? And within that framework we can then create a new improvement jigsaw that better suits our specific context using some of the pieces of the existing ones. We may need to modify the pieces a bit to allow them to fit better together, and we may need to fashion new pieces to fill the gaps that we expose. But that is part of the fun.


The improvement jigsaw shown here is a new hybrid.

It has been created from a combination of existing improvement knowledge and some innovative stuff.

Pareto analysis was described by Vilfredo Pareto over 100 years ago.  So that is tried and tested!

Time-series charts were invented by Walter Shewhart almost 100 years ago. So they are tried and tested too!

The combination of Pareto and Shewhart tools has been used very effectively for over 50 years. The combination is well proven.
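As a flavour of how the Pareto piece focuses effort, here is a minimal sketch that picks out the ‘vital few’ niggle categories accounting for 80% of the pain. (The categories and counts are invented for illustration.)

```python
def vital_few(counts, threshold=0.8):
    """Return the smallest set of categories, largest first, that
    together account for at least 'threshold' of the total count."""
    total = sum(counts.values())
    ranked = sorted(counts.items(), key=lambda item: item[1], reverse=True)
    chosen, running = [], 0
    for name, count in ranked:
        chosen.append(name)
        running += count
        if running / total >= threshold:
            break
    return chosen

niggles = {"lost notes": 45, "phone tag": 30, "late clinics": 15,
           "printer jams": 7, "parking": 3}
print(vital_few(niggles))  # ['lost notes', 'phone tag', 'late clinics']
```

Three of the five categories account for 90% of the niggles – so that is where the improvement effort pays off first.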

The other two pieces are innovative. They have different parents and different pedigrees. And different purposes.

The Niggle-o-Gram® is related to 2-by-2, FMEA and EIQ and the 4N Chart®.  It is the synthesis of them that creates a powerful lens for focussing our improvement efforts on where the greatest return-on-investment will be.

The Right-2-Left Map® is a descendant of the Design family and has been crossed with Graph Theory and Causal Network exemplars to introduce their best features. Its purpose is to expose errors of omission.

The emergent system is synergistic … much more effective than each part individually … and more even than their linear sum.


So when learning this new Science of Improvement we have to focus first on learning about the individual pieces and we do that by seeing examples of them used in practice.  That in itself is illuminating!

As we learn about more pieces a fog of confusion starts to form and we run the risk of mutating into a ‘tool-head’.  We know about the pieces in detail but we still do not see the bigger picture.

To avoid the tool-head trap we must balance our learning wheel and ensure that we invest enough time in learning-by-doing.

Then one day something apparently random will happen that triggers a ‘click’.  Familiar pieces start to fit together in an unfamiliar way and as we see the relationships, the sequences, and the synergy – then a bigger picture will start to emerge. Slowly at first and then more quickly as more pieces aggregate.

Suddenly we feel a big CLICK as the final pieces fall into place.  The fog of confusion evaporates in the bright sunlight of a paradigm shift in our thinking.

The way forward that was previously obscured becomes clearly visible.

Ah ha!

And we are off on the next stage  of our purposeful journey of improvement.

Common sense tells us that to achieve system-wide improvement we need to grasp the “culture nettle”.

Most of us believe that culture drives attitudes; and attitudes drive behaviour; and behaviour drives improvement.

Therefore to get improvement we must start with culture.

And that requires effective leadership.

So our unspoken assumptions about how leaders motivate our behaviour seem rather important to understand.

In 1960 a book was published with the  title “The Human Side of Enterprise” which went right to the heart of this issue.   The author was Doug McGregor who was a social scientist and his explanation of why improvement appears to be so difficult in large organisations was a paradigm shift in thinking.  His book inspired many leaders to try a different approach – and they discovered that it worked and that enterprise-wide transformation followed.  The organisations that these early-adopters led evolved into commercial successes and more enjoyable places to work.

The new leaders learned to create the context for change – not to dictate the content.

Since then social scientists have disproved many other ‘common sense’ beliefs by applying a rigorous scientific approach and using robust evidence.

They have busted the culture-drives-change myth … the evidence shows that it is the other way around … change drives culture.

And what changes first is behaviour.

We are social animals … most of us are much more likely to change our behaviour if we see other people doing the same.  We do not like being too different.

As we speak there is a new behaviour spreading – having a bucket of cold water tipped over your head as part of a challenge to raise money for charity.

This craze has a positive purpose … feeling good about helping others through donating money to a worthwhile cause … but most of us need a nudge to get us to do it.

Seeing well-known public figures having iced-water dumped on them in pictures and videos shared through multiple, parallel social media channels is a powerful cultural signal that says “This new behaviour is OK”.

Exhortation and threats are largely ineffective – fear will move people – it will scatter them, not align them. Shaming-and-blaming into behaving differently is largely ineffective too – it generates short-term anger and long-term resentment.

This is what Doug McGregor highlighted over half a century ago … and his message is timeless.

“… the research evidence indicates quite clearly that skillful and sensitive membership behaviour is the real clue to effective group operation”.

Appreciating this critical piece of evidence opens a new door to system-wide improvement … one that we can all walk through:  Sharing improvement stories.

Sharing stories of actions that others have done and the benefits they achieved as a result; and also sharing stories of things that we ourselves have done and achieved.

Stories of small changes that delivered big benefits for others and for ourselves.  Win-win-wins. Stories of things that took little time and little effort to do because they fell inside our circles of control.

See-and-Share is an example of skillful and sensitive membership behaviour.

Effective leaders are necessary … yes … they are needed to create the context for change. It is we members who create and share the content.

Improvement implies learning – new experiences, new insights, new models and new ways of doing things.

So understanding the process of learning is core to the science of improvement.

What many people do not fully appreciate is that we differ in the way we prefer to learn.  These are habitual behaviours that we have acquired.

The diagram shows one model – the Honey and Mumford model that evolved from an earlier model described by Kolb.

One interesting feature of this diagram is the two dimensions – Perception and Processing – which are essentially the same as the two core dimensions in the Myers-Briggs Type Indicator.

What the diagram above does not show so well is that the process of learning is a cycle – the clockwise direction in this diagram – Pragmatist then Activist then Reflector then Theorist and back to Pragmatist.

This is the PART sequence.  And it can start at any point … ARTP, RTPA, TPAR.

We all use all of these learning styles – but we have a preference for some more than others – our preferred learning styles are our learning comfort zones.

The large observational studies conducted in the 1980s using the PART model revealed that most people have moderate to strong preferences for only one or two of these styles. Fewer than 20% have a preference for three and very few feel equally comfortable with all four.

The commonest patterns are illustrated by the left and right sides of the diagram: the Pragmatist-Activist combination and the Reflector-Theorist combination.

It is not that one is better than the other … all four are synergistic and an effective and efficient learning process requires being comfortable with using all four in a continuous sequence.

Imagine this as a wheel – an imbalance between the four parts represents a distorted wheel. So when this learning wheel ‘turns’  it delivers an emotionally bumpy ‘ride’.  Past experience of being pushed through this pain-and-gain process will tend to inhibit or even block learning completely.

So to get a more comfortable learning journey we first need to balance our PART wheel – and that implies knowing what our preferred styles are and then developing the learning styles that we use least to build our competence and confidence with them.  And that is possible because these are learned habits. With guidance, focus and practice we can all strengthen our less favoured learning ‘muscles’.

Those with a preference for planning-and-doing would focus on developing their reflection and then their abstraction skills. For example by monitoring the effects of their actions in reality and using that evidence to challenge their underlying assumptions and to generate new ‘theories’ for pragmatic experimentation. Actively seeking balanced feedback and reflecting on it is one way to do that.

Those with a preference for studying-and-abstracting would focus on developing their design and then their delivery skills and become more comfortable with experimenting to test their rhetoric against reality. Actively seeking opportunities to learn-by-doing is one way.

And by creating the context for individuals to become more productive self-learners we can see how learning organisations will follow naturally. And that is what we need to deliver system-wide improvement at scale and pace.

There seems to be a belief among some people that the “optimum” average bed occupancy for a hospital is around 85%.

More than that risks running out of beds and admissions being blocked, 4 hour breaches appearing and patients being put at risk. Less than that is inefficient use of expensive resources. They claim there is a ‘magic sweet spot’ that we should aim for.

Unfortunately, this 85% optimum occupancy belief is a myth.

So, first we need to dispel it, then we need to understand where it came from, and then we are ready to learn how to actually prevent queues, delays, disappointment, avoidable harm and financial non-viability.


Disproving this myth is surprisingly easy.   A simple thought experiment is enough.

Suppose we have a policy where we keep patients in hospital until someone needs their bed; then we discharge the patient with the longest length of stay and admit the new one into the still-warm bed – like a baton pass.  There would be no patients turned away – 0% breaches.  And all the beds would always be full – 100% occupancy. Perfection!

And it does not matter if the number of admissions arriving per day is varying – as it will.

And it does not matter if the length of stay is varying from patient to patient – as it will.

We have disproved the hypothesis that a maximum 85% average occupancy is required to achieve 0% breaches.


The source of this specific myth appears to be a paper published in the British Medical Journal in 1999 called “Dynamics of bed use in accommodating emergency admissions: stochastic simulation model”.

So it appears that this myth was cooked up by academic health economists using a computer model.

And then amateur queueing theory zealots jump on the bandwagon to defend this meaningless mantra and create a smoke-screen by bamboozling the mathematical muggles with tales of Poisson processes and Erlang-C equations.

And they are sort-of correct … the theoretical behaviour of the stochastic demand process was described by Poisson and the equation that describes the theoretical queue behaviour was described by Erlang – over 100 years ago, before we had computers.
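For the curious, the Erlang-C equation itself is not that scary. Here is a minimal sketch; the admission rate, average stay and bed counts are invented purely for illustration:

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Erlang-C: probability that an arrival must queue in an M/M/c
    system. The two rates must share the same time unit."""
    a = arrival_rate / service_rate          # offered load in erlangs
    if a >= servers:
        return 1.0                           # overloaded: everyone waits
    top = (a ** servers / factorial(servers)) * servers / (servers - a)
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

# Invented figures: 20 admissions/day, average stay of 1 day.
print(round(erlang_c(20, 1, 24), 3))         # chance of having to wait
print(round(erlang_c(20, 1, 30), 3))         # more beds, smaller chance
```

Note what the model assumes: memoryless arrivals and stays, no adaptive behaviour, no policy meddling. That is exactly where it parts company with reality.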

BUT …

The academics and amateurs conveniently omit one minor, but annoying,  fact … that real world systems have people in them … and people are irrational … and people cook up policies that ride roughshod over the mathematics, the statistics and the simplistic, stochastic mathematical and computer models.

And when people start meddling then just about anything can happen!


So what went wrong here?

One problem is that the academic heffalumps unwittingly stumbled into a whole minefield of pragmatic process design traps.

Here are just some of them …

1. Occupancy is a ratio – it is a meaningless number without its context – the flow parameters.

2. Using linear, stochastic models is dangerous – they ignore the non-linear complex system behaviours – chaos to you and me.

3. Occupancy relates to space-capacity and says nothing about the flow-capacity or the space and flow capacity scheduling.

4. Space capacity utilisation (i.e. occupancy) and system operational efficiency are not equivalent.

5. Queueing theory is a gross simplification of reality – a simplification that is needed to make the mathematics manageable.

6. Ignoring the fact that our real systems are both complex and adaptive makes the resulting rhetoric dangerous.

And if we recognise and avoid these traps and re-examine the problem a little more pragmatically then we discover something very  useful:

That the maximum space capacity requirement (the number of beds needed to avoid breaches) is actually easily predictable.

It does not need a black-magic-box full of scary equations or rather complicated stochastic simulation models to do this … all we need is our tried-and-trusted tool … a spreadsheet.
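As a sketch of what that spreadsheet-style calculation might look like in code (the daily admission numbers and lengths of stay below are invented for illustration):

```python
import random

def beds_needed(daily_admissions, lengths_of_stay):
    """Day-by-day bed census. Each day we discharge the patients whose
    stay ends, then admit the new arrivals. Returns the census and its
    peak - the space capacity needed for zero breaches."""
    discharges = {}                  # day -> number of patients leaving
    census, occupied = [], 0
    for day, n in enumerate(daily_admissions):
        occupied -= discharges.pop(day, 0)
        for stay in lengths_of_stay[day]:
            discharges[day + stay] = discharges.get(day + stay, 0) + 1
        occupied += n
        census.append(occupied)
    return census, max(census)

# Invented figures: roughly 20 admissions/day, stays of 1 to 10 days.
rng = random.Random(42)
admissions = [rng.randint(15, 25) for _ in range(60)]
stays = [[rng.randint(1, 10) for _ in range(n)] for n in admissions]
census, peak = beds_needed(admissions, stays)
avg_occupancy = sum(census) / len(census) / peak
print(peak)                           # beds needed for zero breaches
print(round(100 * avg_occupancy, 1))  # average occupancy: an output!
```

Notice that the average occupancy falls out of the bottom of the calculation: it is an output of the demand and stay patterns, not an input we can set.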

And we need something else … some flow science training and some simulation model design discipline.

When we do that we discover something else … that the expected average occupancy is not 85% … or 65%, or 99%, or 95%.

There is no one-size-fits-all optimum occupancy number.

And as we explore further we discover that:

the expected average occupancy is context dependent.

And when we remember that our real system is adaptive, and it is staffed with well-intended, well-educated people who have become rather addicted to reactive fire-fighting,  then we begin to see why the behaviour of real systems seems to defy the predictions of the 85% optimum occupancy myth:

Our hospitals seem to work better-than-predicted at much higher occupancy rates.

And then we realise that we might actually be able to design proactive policies that are better able to manage unpredictable variation: better than the simplistic maximum 85% average occupancy mantra.

And finally another penny drops … average occupancy is an output of the system … not an input. It is a secondary effect.

And so is average length of stay.

Which implies that setting these output effects as causal inputs to our bed model creates a meaningless, self-fulfilling, self-justifying delusion.

Ooops!


Now our challenge is clear … we need to learn proactive and adaptive flow policy design … and using that understanding we have the potential to deliver zero delays and high productivity at the same time.

And doing that requires a bit more than a spreadsheet … but it is possible.

When it comes to light that things are not going well a common reaction from the top is to send in more inspectors.

This may give the impression that something decisive is being done but it almost never works … for two reasons.

The first is because it is attempting to treat the symptom and not the cause.

The second is because the inspectors are created in the same paradigm that created the problem.

That is not to say that inspectors are not required … they are … when the system is working … not when it is failing.

The inspection police actually come last – and just before them comes the Policy that the Police enforce.

Policy comes next to last. Not first.

A rational Policy can only be written once there is proof of  effectiveness … and that requires a Pilot study … in the real world.

A small scale reality check of the rhetoric.

Cooking up Policy and delivery plans based on untested rhetoric from the current paradigm is a recipe for disappointment.


Working backwards we can see that the Pilot needs something to pilot … and that is a new Process; to replace the old process that is failing to deliver.

And any Process needs to be designed to be fit-for-purpose.  Cutting-and-pasting someone else’s design usually does not work. The design process is more important than the design it creates.

So this brings us to the first essential requirement … the Purpose.

And that is where we very often find a big gap … an error of omission … no clarity or constancy of common Purpose.

And that is where leaders must start. It is their job to clarify and communicate the common Purpose. And if the leaders are not cohesive and the board cannot agree the Purpose then the political cracks will spread through the whole organisation and destabilise it.

And with a Purpose the system and process designers can get to work.

But here we hit another gap. There is virtually no design capability in most organisations.

There is usually lots of delivery capability … but efficiently delivering an ineffective design will amplify the chaos not dissolve it.

So in parallel with clarifying the purpose, the leaders must  endorse the creation of a cohort of process designers.

And from the organisation a cohort of process inspectors … but of a different calibre … inspectors who are able to find the root causes and able to guide the improvement process because they have done this themselves many times before.

And perhaps to draw a line between the future and the past we could give them a different name – Mentors.

The Digital Age is changing the context of everything that we do – and that includes how we use information for improvement.

Historically we have used relatively small, but carefully collected, samples of data and we subjected these to rigorous statistical analysis. Or rather the statisticians did.  Statistics is a dark and mysterious art to most people.

As the digital age ramped up in the 1980s, data storage, data transmission and data processing power became cheap and plentiful.  The World Wide Web appeared; desktop computers with graphical user interfaces appeared; data warehouses appeared, and very quickly we were all drowning in the data ocean.

Our natural reaction was to centralise but it became quickly obvious that even an army of analysts and statisticians could not keep up.

So our next step was to automate and Business Intelligence was born; along with its beguiling puppy-faced friend, the Performance Dashboard.

The ocean of data could now be boiled down into a dazzling collection of animated histograms, pie-charts, trend-lines, dials and winking indicators. We could slice-and-dice,  we could zoom in-and-out, and we could drill up-and-down until our brains ached.

And none of it has helped very much in making wiser decisions that lead to effective actions that lead to improved outcomes.

Why?

The reason is that the missing link was not a lack of data processing power … it was a lack of an effective data processing paradigm.

The BI systems are rooted in the closed, linear, static, descriptive statistics of the past … trend lines, associations, correlations, p-values and so on.

Real systems are open, non-linear and dynamic; they are eternally co-evolving. Nothing stays still.

And it is real systems that we live in … so we need a new data processing paradigm that suits our current reality.

Some are starting to call this the Big Data Era and it is very different.

  • Business Intelligence uses descriptive statistics and data with high information density to measure things, detect trends etc.;
  • Big Data uses inductive statistics and concepts from non-linear system identification to infer laws (regressions, non-linear relationships, and causal effects) from large data sets to reveal relationships, dependencies and perform predictions of outcomes and behaviours.
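To make that contrast concrete, here is a minimal sketch with invented data: the descriptive summary tells us what happened, while the inferred (here log-linear) relationship lets us predict what happens next.

```python
import math

# Invented data: a count that roughly doubles at each step.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [2.1, 4.3, 8.2, 16.5, 31.9, 64.8, 127.6, 256.3]

# Business-Intelligence style: a high-density descriptive summary.
mean_y = sum(ys) / len(ys)           # true, but useless for prediction

# Big-Data style: infer a law - a log-linear (exponential) fit
# by least squares - and use it to predict beyond the data.
lys = [math.log(y) for y in ys]
n, sx, sy = len(xs), sum(xs), sum(lys)
sxx = sum(x * x for x in xs)
sxy = sum(x * ly for x, ly in zip(xs, lys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def predict(x):
    return math.exp(intercept + slope * x)

print(round(mean_y, 1))              # the dashboard number
print(round(predict(9)))             # the inferred law, extrapolated
```

The first number describes the past; the second anticipates the future. That shift, from description to inference, is the essence of the new paradigm.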

And each of us already has a powerful Big Data processor … the 1.3 kg of caveman wet-ware sitting between our ears.

Our brain processes billions of bits of data every second and looks for spatio-temporal relationships to identify patterns, to derive models, to create action options, to predict short-term outcomes and to make wise survival decisions.

The problem is that our Brainy Big Data Processor is easily tricked when we start looking at time-dependent systems … data from multiple simultaneous flows that are interacting dynamically with each other.

It did not evolve to do that … it evolved to help us to survive in the Wild – as individuals.

And it has been very successful … as the burgeoning human population illustrates.

But now we have a new collective survival challenge  and we need new tools … and the out-of-date Business Intelligence Performance Dashboard is just not going to cut the mustard!

Big Data on TED Talks

 

The engine of improvement is a productive meeting.

Complex adaptive systems (CAS) are those that  learn and change themselves.

The books of ‘rules’ are constantly revised and refreshed as the CAS co-evolves with its environment.

System improvement is the outcome of effective actions.

Effective actions are the outcomes of wise decisions.

Wise decisions are the output of productive meetings.

So the meeting process must be designed to be productive: which means both effective and efficient.


One of the commonest niggles that individuals report is the ‘Death by Meeting’ one.

That alone is enough evidence that our current design for meetings is flawed.


One common error of omission is lack of clarity about the purpose of the meeting.

This cause has two effects:

1. The wrong sort of meeting design is used for the problem(s) under consideration.

A meeting designed for tactical  (how to) planning will not work well for strategic (why to) problems.

2. A mixed bag of problems is dumped into the all-purpose-less meeting.

Mixing up short term tactical and long term strategic problems on a single overburdened agenda is doomed to fail.


Even when the purpose of  a meeting  is clear and agreed it is common to observe an unproductive meeting process.

The process may be unproductive because it is ineffective … there are no wise decisions made and so no effective actions implemented.

Worse even than that … decisions are made that are unwise and the actions that follow lead to unintended negative consequences.

The process may also be unproductive because it is inefficient … it requires too much input to get any output.

Of course we want both an effective and an efficient meeting process … and we need to be aware that effectiveness  comes first.  Designing the meeting process to be a more efficient generator of unwise decisions is not a good idea! The result is an even bigger problem!


So our meeting design focus is ‘How could we make wise decisions as a group?’

But if we knew the answer to that we would probably already be doing it!

So we can ask the same question another way: ‘How do we make unwise decisions as a group?’

The second question is easier to answer. We just reflect on our current experience.

Some ways we appear to unintentionally generate unwise decisions are:

a) Ensure we have no clarity of purpose – confusion is a good way to defuse effective feedback.
b) Be selective in who we invite to the meeting – group-think facilitates consensus.
c) Ignore the pragmatic, actual, reality and only use academic, theoretical, rhetoric.
d) Encourage the noisy – quiet people are non-contributors.
e) Engage in manipulative styles of behaviour – people cannot be trusted.
f) Encourage the  sceptics and cynics to critique and cull innovative suggestions.
g) Have a trump card – keep the critical ‘any other business’ to the end – just in case.

If we adopt all these tactics we can create meetings that are ‘lively’, frustrating, inefficient and completely unproductive. That of course protects us from making unwise decisions.


So one approach to designing meetings to be more productive is simply to recognise and challenge the unproductive behaviours – first as individuals and then as groups.

The place to start is within our own circle of influence – with those we trust – and to pledge to each other to consciously monitor for unproductive behaviours and to respectfully challenge them.

These behaviours are so habitual that we are often unaware that we are doing them.

And it feels strange at first but it gets easier with practice and when you see the benefits.

[Drrrrring Drrrrring]

<Bob> Hi Leslie! How are you today?

<Leslie> Hi Bob.  Really good.  I have just got back from a well earned holiday so I am feeling refreshed and re-energised.

<Bob> That is good to hear.  It has been a bit stormy here over the past few weeks.  Apparently lots of  hot air hitting cold reality and forming a fog of disillusionment and storms of protest.

<Leslie> Is that a metaphor?

<Bob> Yes!  A good one do you think? And it leads us into our topic for this week. Perfect storms.

<Leslie> I am looking forward to it.  Can you be a bit more specific?

<Bob> Sure.  Remember the ISP exercise where I asked you to build a ‘chaos generator’?

<Leslie> I sure do. That was an eye-opener!  I had no idea how easy it is to create chaotic performance in a system – just by making the Flaw of Averages error and adding a pinch of variation. Booom!

<Bob> Good. We are going to use that model to demonstrate another facet of system design.  How to steer out of chaos.

<Leslie> OK – what do I need to do?

<Bob> Start up that model and set the cycle time to 10 minutes with a sigma of 1.5 minutes.

<Leslie> OK.

<Bob> Now set the demand interval to 10 minutes and the sigma of that to 2.0 minutes.

<Leslie> OK. That is what I had before.

<Bob> Set the lead time upper specification limit to 30 minutes. Run that 12 times and record the failure rate.

<Leslie> OK.  That gives a chaotic picture!  All over the place.

<Bob> OK now change just the average of the demand interval.  Start with a value of 8 minutes, run 12 times, and then increase to 8.5 minutes and repeat that up to 12 minutes.

<Leslie> OK. That will repeat the run at 10 minutes. Is that OK?

<Bob> Yes.

<Leslie> OK … it will take me a few minutes to run all these.  Do you want to get a cup of tea while I do that?

<Bob> Good idea.

[5 minutes later]

<Leslie> OK I have done all that – 108 data points. Do I plot that as a run chart?

<Bob> You could.  I suggest plotting as a scattergram.

<Leslie> With the average demand interval on the X axis and the Failure % on the  Y axis?

<Bob> Yes. Exactly so. And just the dots, no lines.

<Leslie> OK. Wow! That is amazing!  Now I see why you get so worked up about the Flaw of Averages!

<Bob> What you are looking at is called a performance curve.  Notice how steep and fuzzy it is. That is called a chaotic transition. The perfect storm.  And when we fall into the Flaw of Averages trap we design our systems to be smack in the middle of it.

<Leslie> Yes I see what you are getting at.  And that implies that to calm the chaos we do not need very much resilient flow capacity … and we could probably release that just from a few minor design tweaks.

<Bob> Yup.

<Leslie> That is so cool. I cannot wait to share this with the team. Thanks again Bob.
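The chaos-generator exercise in this dialogue can be approximated in a few lines of code. This is a sketch, not the ISP model itself: a single resource serving jobs first-in-first-out, with normally distributed cycle times and demand intervals as described above.

```python
import random

def failure_rate(mean_interval, sd_interval, mean_cycle=10.0,
                 sd_cycle=1.5, limit=30.0, n_jobs=500, seed=None):
    """Percentage of jobs whose lead time (arrival to finish) exceeds
    the upper specification limit, for one FIFO resource."""
    rng = random.Random(seed)
    arrive = 0.0
    free = 0.0                       # when the resource is next available
    failures = 0
    for _ in range(n_jobs):
        arrive += max(0.1, rng.gauss(mean_interval, sd_interval))
        start = max(arrive, free)
        free = start + max(0.1, rng.gauss(mean_cycle, sd_cycle))
        if free - arrive > limit:
            failures += 1
    return 100.0 * failures / n_jobs

# Sweep the average demand interval from 8 to 12 minutes, 12 runs each:
# the 108-point performance curve from the dialogue.
for step in range(9):
    interval = 8.0 + 0.5 * step
    rates = [failure_rate(interval, 2.0, seed=run) for run in range(12)]
    print(f"{interval:4.1f} min: {min(rates):5.1f}% to {max(rates):5.1f}%")
```

Plotting interval against failure rate as a scattergram shows the steep, fuzzy transition Bob describes: near 100% failure when demand outstrips the 10-minute cycle time, near zero when there is headroom, and chaos in between.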

Flow improvement-by-design requires being able to see the flows; and that is trickier than it first appears.

We can see movement very easily.

Seeing flows is not so easy – particularly when they are mixed-up and unsteady.

One of the most useful tools for visualising flow was invented over 100 years ago by Henry Laurence Gantt (1861-1919).

Henry Gantt was a mechanical engineer from Johns Hopkins University and an early associate of Frederick Taylor. Gantt parted ways with Taylor because he disagreed with the philosophy of Taylorism, which was that workers should be instructed what to do by managers (=parent-child).  Gantt saw that workers and managers could work together for the mutual benefit of themselves and their companies (=adult-adult).  At one point Gantt was invited to streamline the production of munitions for the war effort, and his methods were so successful that the Ordnance Department became the most productive department of the armed forces.  Gantt favoured democracy over autocracy and is quoted as saying “Our most serious trouble is incompetence in high places. The manager who has not earned his position and who is immune from responsibility will fail time and again, at the cost of the business and the workman”.

Henry Gantt invented a number of different charts – not just the one used in project management, which was actually invented 20 years earlier by Karol Adamiecki and re-invented by Gantt. It became popularised when it was used in the management of the Hoover Dam project; but that was after Gantt’s death in 1919.

The form of Gantt chart above is called a process template chart and it is designed to show the flow of tasks through  a process. Each horizontal line is a task; each vertical column is an interval of time. The colour code in each cell indicates what the task is doing and which resource the task is using during that time interval. Red indicates that the task is waiting. White means that the task is outside the scope of the chart (e.g. not yet arrived or already departed).

The Gantt chart shows two “red wedges”.  A red wedge that is getting wider from top to bottom is the pattern created by a flow constraint.  A red wedge that is getting narrower from top to bottom is the pattern of a policy constraint.  Both are signs of poor scheduling design.
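A process template chart is simple enough to sketch in code. The version below is a minimal text rendering, not Gantt's original: the arrival times and durations are invented, and 'w' cells stand in for the red (waiting) colour code.

```python
def template_chart(arrivals, duration):
    """FIFO schedule of equal-length tasks through a single resource.
    Returns one text row per task: '.' = outside scope,
    'w' = waiting (the "red" cells), 'X' = in progress."""
    free, spans = 0, []
    for arrive in arrivals:
        start = max(arrive, free)
        free = start + duration
        spans.append((arrive, start, free))
    width = spans[-1][2]
    lines = []
    for arrive, start, end in spans:
        lines.append("".join(
            "." if t < arrive or t >= end else
            "w" if t < start else "X"
            for t in range(width)))
    return lines

# Tasks arrive every 2 intervals but each needs 3: a flow constraint,
# so the 'w' band widens from top to bottom - the red wedge.
for row in template_chart(arrivals=[0, 2, 4, 6, 8], duration=3):
    print(row)
```

Running this prints a wedge of 'w' cells that widens row by row: exactly the widening-red-wedge signature of a flow constraint described above.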

A Gantt chart like this has three primary uses:
1) Diagnosis – understanding how the current flow design is creating the queues and delays.
2) Design – inventing new design options.
3) Prognosis – testing the innovative designs so the ‘fittest’ can be chosen for implementation.

These three steps are encapsulated in the third “M” of 6M Design® – the Model step.

In this example the design flaw was the scheduling policy.  When that was redesigned the outcome was zero-wait performance. No red on the chart at all.  The same number of tasks were completed in the same time with the same resources. Just less waiting. Which means less space is needed to store the queue of waiting work (i.e. none in this case).

That this is even possible comes as a big surprise to most people. It feels counter-intuitive. It is, however, an easy-to-demonstrate fact. Our intuition tricks us.

And that reduction in the size of the queue implies a big cost reduction when the work-in-progress is perishable and needs constant attention [such as patients lying on A&E trolleys and in hospital beds].

So what was the cost of re-designing this schedule?

A pinch of humility. A few bits of squared paper and some coloured pens. A couple of hours of time. And a one-off investment in learning how to do it.  Peanuts in comparison with the recurring benefit gained.

 

This week I was made mindful again of a simple yet powerful model that goes a long way to explaining why we find change so difficult.

It is the conscious-competent model.

There are two dimensions which gives four combinations that are illustrated in the diagram.

We all start in the bottom left corner. We do not know what we do not know.  We are ignorant and incompetent and unconscious of the  fact.

Let us call that Blissful Ignorance.

Then suddenly we get a reality check. A shock. A big enough one to start us on the emotional roller coaster ride we call the Nerve Curve.

We become painfully aware of our ignorance (and incompetence). Conscious of it.

That is not a happy place to be and we have a well-developed psychological first line of defence to protect us. It is called Denial.

“That’s a load of rubbish!” we say.

But denial does not change reality and eventually we are reminded. Reality does not go away.

Our next line of defence is to shoot the messenger. We get angry and aggressive.

“Who the **** are you to tell me that I do not know what I am doing!” we say.

Sometimes we are openly aggressive.  More often we use passive aggressive tactics. We resort to below-the-belt behind-the-back corridor-gossip behaviour.

But that does not change reality either.  And we are slowly forced to accept that we need to change. But not yet …

Our next line of defence is to bargain for more time (in the hope that reality will swing back in our favour).

“There may be something in this but I am too busy at the moment … I will look at this tomorrow/next week/next month/after my holiday/next quarter/next financial year/in my next job/when I retire!” we wheedle.

Our strategy usually does not work – it just wastes time – and while we prevaricate the crisis deepens. Reality is relentless.

Our last line of defence has now been breached and now we sink into depression and despair.

“It is too late. Too difficult for me. I need rescuing. Someone help me!” we wail.

That does not work either. There is no one there. It is up to us. It is sink-or-swim time.

What we actually need now is a crumb of humility.

And with that we can start on the road to Know How. We start by unlearning the old stuff and then we can  replace it with the new stuff.  Step-by-step we climb out of the dark depths of Painful Awareness.

And then we get a BIG SURPRISE.

It is not as difficult as we assumed. And we discover that learning-by-doing is fun. And we find that demonstrating to others what we are learning is by far the most effective way to consolidate our new conscious competence.

And by playing to our strengths, with persistence, with practice and with reality-feedback our new know how capability gradually becomes second nature. Business as usual. The way we do things around here. The culture.

Then, and only then, will the improvement sustain … and spread … and grow.

 

One of the essential components of an adaptive system is effective feedback.

Without feedback we cannot learn – we can only guess and hope.

So the design of our feedback loops is critical-to-success.

Many people do not like getting feedback because they live in a state of fear: fear of criticism. This is a learned behaviour.

Many people do not like giving feedback because they too live in a state of fear: fear of conflict. This is a learned behaviour.

And what is learned can be unlearned; with training, practice and time.

But before we will engage in unlearning our current habit we need to see the new habit that will replace it. The one that will work better for us. The one that is more effective.  The one that will require less effort. The one that is more efficient use of our most precious resource: life-time.

There is an effective and efficient feedback technique called The 4N Chart®.  And I know it works because I have used it and demonstrated to myself and others that  it works. And I have seen others use it and demonstrate to themselves and others that it works too.

The 4N Chart® has two dimensions – Time (Now and Future) and Emotion (Happy and Unhappy).

This gives four combinations each of which is given a label that begins with the letter ‘N’ – Niggles, Nuggets, NoNos and NiceIfs.

The N has a further significance … it reminds us which order to move through the  chart.

We start bottom-left with the Niggles. What is happening now that causes us to feel unhappy? What are the root causes of our niggles? And more importantly, which of these do we have control over? Knowing that gives us a list of actions that we can do that will have the effect of reducing our niggles. And we can start that immediately because we do not need permission.

Next we move top-left to the Nuggets. What is happening now that causes us to feel happy? What are the root causes of our nuggets? Which of these do we control? We need to recognise these too and to celebrate them.  We need to give ourselves a pat on the back for them because that helps reinforce the habit to keep doing them.

Now we look to the future – and we need to consider two things: what we do not want to feel in the future and what we do want to feel in the future. These are our NoNos and our NiceIfs. It does not matter which order we do this … but  we must consider both.

Many prefer to consider dangers and threats first … that is SAFETY FIRST  thinking and is OK. First Do No Harm. Primum non nocere.

So with the four corners of our 4N Chart® filled in we have a balanced perspective and we can set off on the journey of improvement with confidence. Our 4N Chart® will help us stay on track. And we will update it as we go, as we study, as we plan and as we do things. As we convert NiceIfs into Nuggets and  Niggles into NoNos.
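As a minimal illustration (a sketch only, not part of The 4N Chart® technique itself), the two dimensions and the four labels can be expressed as a simple lookup:

```python
# Illustrative sketch: the two dimensions of The 4N Chart(R) map to the
# four 'N' labels, listed here in the suggested working order.
FOUR_N = {
    ("now", "unhappy"): "Niggles",   # 1. start here: what hurts now
    ("now", "happy"): "Nuggets",     # 2. what works now - celebrate it
    ("future", "unhappy"): "NoNos",  # 3. what we do not want to feel
    ("future", "happy"): "NiceIfs",  # 4. what we do want to feel
}

def quadrant(time, emotion):
    """Return the 'N' label for a (time, emotion) combination."""
    return FOUR_N[(time.lower(), emotion.lower())]

print(quadrant("Now", "Unhappy"))  # Niggles
```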

It sounds simple.  It is in theory. It is not quite as easy to do.

It takes practice … particularly the working backwards from the effect (the feeling) to the cause (the facts). This is done step-by-step using Reality as a guide – not our rhetoric. And we must be careful not to make assumptions in lieu of evidence. We must be careful not to jump to unsupported conclusions. That is called pre-judging.  Prejudice.

But when you get the hang of using The 4N Chart® you will be amazed at how much more easily and more quickly you make progress.

It comes as a bit of a shock to learn that some of our habitual assumptions and actions are worthless.

Improvement implies change. Change requires doing things differently. That requires making different decisions. And that requires innovative thinking. And that requires new knowledge.

We are comfortable with the idea of adding  new knowledge to the vast store we have already accumulated.

We are less comfortable with the idea of removing old knowledge when it has grown out-of-date.

We are shocked when we discover that some of our knowledge is just wrong and it always has been. Since the start of time.

So we need to prepare ourselves for those sorts of shocks. We need to be resilient so that we are not knocked off our feet by them.  We need to practice a different emotional reaction to our habitual fright-flight-or-fight reaction.

We need to cultivate our curiosity.

For example:

It comes as a big shock to many when they learn that it is impossible to determine the cause from an analysis of the observed effect.  Not just difficult. Impossible.

“No Way!” we shout angrily. “We do that all the time!”

But do we?

What we do is we observe temporal associations.  We notice that Y happened after X and we conclude that X caused Y.

This is an incorrect conclusion.  We can only conclude from this observation that ‘X may have played a part in causing Y’ but we cannot prove it.

Not by observation alone.

What we can definitely say is that Y did not cause X – because time does not go backwards. At least it does not appear to.

Another thing that does not go backwards is information.

Q: What is 2 + 2?  Four. Easy. There is only one answer. Two numbers become one.

Let us try this in reverse …

Q: What two numbers when added together give 4? Tricky. There are countless answers.  One number cannot become two without adding uncertainty. Guessing.
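The asymmetry can be shown in a couple of lines of code:

```python
# Information does not go backwards: addition is a many-to-one mapping.
forward = 2 + 2  # exactly one answer

# Reversing it: even restricted to non-negative whole numbers there are
# five answers, and with fractions or negatives there are infinitely many.
backward = [(a, 4 - a) for a in range(5)]  # (0,4), (1,3), (2,2), (3,1), (4,0)

print(forward, backward)
```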

So when we look at the information coming out of a system (the effects) and we attempt to analyse it to reveal the causes, we hit a problem. It is impossible.

And learning that is a big shock to people who describe themselves as ‘information analysts’ … the whole foundation of what they do appears to evaporate.

So we need to outline what we can reasonably do with the retrospective analysis of effect data.

We can look for patterns.

Patterns that point to plausible causes.

Just like patterns of symptoms that point to possible diseases.

But how do we learn what patterns to look for?

Simple. We experiment. We do things and observe what happens immediately afterwards – the immediate effects. We conduct lots and lots of small experiments. And we learn the repeating patterns. “If the context is this and I do that then I always see this effect”.

If we observe a young child learning that is what we see … they are experimenting all the time.  They are curious. They delight in discovery. Novelty is fun. Learning to walk is a game.  Learning to talk is a game.  Learning to be a synergistic partner in a social group is a game.

And that same child-like curiosity is required for effective improvement.

And we know when we are doing improvement right: it feels good. It is fun. Learning is fun.

[Drrring Drrring] The phone heralded the start of the weekly ISP mentoring session.

<Bob> Hi Leslie, how are you today?

<Leslie> Hi Bob. To be honest I am not good. I am drowning. Drowning in data!

<Bob> Oh dear! I am sorry to hear that. Can I help? What led up to this?

<Leslie> Well, it was sort of triggered by our last chat and after you opened my eyes to the fact that we habitually throw most of our valuable information away by thresholding, aggregating and normalising.  Then we wonder why we make poor decisions … and then we get frustrated because nothing seems to improve.

<Bob> OK. What happened next?

<Leslie> I phoned our Performance Team and asked for some raw data. Three months worth.

<Bob> And what was their reaction?

<Leslie> They said “OK, here you go!” and sent me a twenty megabyte Excel spreadsheet that clogged my email inbox!  I did manage to unclog it eventually by deleting loads of old junk.  But I could swear that I heard the whole office laughing as they hung up the phone! Maybe I am paranoid?

<Bob> OK. And what happened next?

<Leslie> I started drowning!  The mega-file had a row of data for every patient that has attended A&E for the last three months as I had requested, but there were dozens of columns!  Trying to slice-and-dice it was a nightmare! My computer was smoking and each step took ages for it to complete.  In the end I gave up in frustration.  I now have a lot more respect for the Performance Team I can tell you! They do this for a living?

<Bob> OK.  It sounds like you are ready for a Stab At the Vitals.

<Leslie> What?  That sounds rather piratical!  Are you making fun of my slicing-and-dicing metaphor?

<Bob> No indeed.  I am deadly serious!  Before we leap into the data ocean we need to be able to swim; and we also need a raft that will keep us afloat;  and we need a sail to power our raft; and we need a way to navigate our raft to our desired destination.

<Leslie> OK. I like the nautical metaphor but how does it help?

<Bob> Let me translate. Learning to use system behaviour charts is equivalent to learning the skill of swimming. We have to do that first and practice until we are competent and confident.  Let us call our raft “ISP” – you are already aboard.  The sail you also have already – your Excel software.  The navigation aid is what I refer to as Vitals. So we need to have a “stab at the vitals”.

<Leslie> Do you mean we use a combination of time-series charts, ISP and Excel to create a navigation aid that helps avoid the Depths of Data and the Rocks of DRAT?

<Bob> Exactly.

<Leslie> Can you demonstrate with an example?

<Bob> Sure. Send me some of your data … just the arrival and departure events for one day – a typical one.

<Leslie> OK … give me a minute!  …  It is on its way.  How long will it take for you to analyse it?

<Bob> About 2 seconds. OK, here is your email … um … copy … paste … copy … reply

<Leslie> What the ****? That was quick! Let me see what this is … the top left chart is the demand, activity and work-in-progress for each hour; the top right chart is the lead time by patient plotted in discharge order; the table bottom left includes the 4 hour breach rate. Those I do recognise. What is the chart on the bottom right?

<Bob> It is a histogram of the lead times … and it shows a problem.  Can you see the spike at 225 to 240 minutes?

<Leslie> Is that the fabled Horned Gaussian?

<Bob> Yes.  That is the sign that the 4-hour performance target is distorting the behaviour of the system.  And this is yet another reason why the  Breach Rate is a dangerous management metric. The adaptive reaction it triggers amplifies the variation and fuels the chaos.

<Leslie> Wow! And you did all that in Excel using my data in two seconds?  That must need a whole host of clever macros and code!

<Bob> “Yes” it was done in Excel and “No” it does not need any macros or code.  It is all done using simple formulae.

<Leslie> That is fantastic! Can you send me a copy of your Excel file?

<Bob> Nope.

<Leslie> Whaaaat? Why not? Is this some sort of evil piratical game?

<Bob> Nope. You are going to learn how to do this yourself – you are going to build your own Vitals Chart Generator – because that is the only way to really understand how it works.

<Leslie> Phew! You had me going for a second there! Bring it on! What do I do next?

<Bob> I will send you the step-by-step instructions of how to build, test and use a Vitals Chart Generator.

<Leslie> Thanks Bob. I cannot wait to get started! Weigh anchor and set the sails! Ha’ harrrr me hearties.

This was an interesting headline to see on the front page of a newspaper yesterday!

The Top Man of the NHS is openly challenging the current Centralisation-is-The-Only-Way-Forward Mantra;  and for good reason.

Mass centralisation is poor system design – very poor.

Q: So what is driving the centralisation agenda?

A: Money.

Or to be more precise – rather simplistic thinking about money.

The misguided money logic goes like this:

1. Resources (such as highly trained doctors, nurses and AHPs) cost a lot of money to provide.
[Yes].

2. So we want all these resources to be fully-utilised to get value-for-money.
[No, not all - just the most expensive].

3. So we will gather all the most expensive resources into one place to get the Economy-of-Scale.
[No, not all the most expensive - just the most specialised]

4. And we will suck/push all the work through these super-hubs to keep our expensive specialist resources busy all the time.
[No, what about the growing population of older folks who just need a bit of expert healthcare support, quickly, and close to home?]

This flawed logic confuses two complementary ways to achieve higher system productivity/economy/value-for-money without  sacrificing safety:

Economies of Scale (EoS) and Economies of Flow (EoF).

Of the two the EoF is the more important because by using EoF principles we can increase productivity in huge leaps at almost no cost; and without causing harm and disappointment. EoS are always destructive.

“But that is impossible. You are talking rubbish … because if it were possible we would be doing it!”

It is not impossible and we are doing it … but not at scale and pace in healthcare … and the reason for that is we are not trained in Economy-of-Flow methods.

And those who are trained and who have experienced the effects of EoF would not do it any other way.

Example:

In a recent EoF exercise an ISP (Improvement Science Practitioner) helped a surgical team to increase their operating theatre productivity by 30% overnight at no cost.  The productivity improvement was measured and sustained for most of the last year. [it did dip a bit when the waiting list evaporated because of the higher throughput, and again after some meddlesome middle management madness was triggered by end-of-financial-year target chasing].  The team achieved the improvement using Economy of Flow principles and by re-designing some historical scheduling policies. The new policies  were less antagonistic. They were designed to line the ducks up and as a result the flow improved.


So the specific issue of  Super Hospitals vs Small Hospitals is actually an Economy of Flow design challenge.

But there is another critical factor to take into account.

Specialisation.

Medicine has become super-specialised for a simple reason: it is believed that to get ‘good enough’ at something you have to have a lot of practice. And to get the practice you have to have high volumes of the same stuff – so you need to specialise and then to sort undifferentiated work into separate ‘speciologist’ streams or sequence the work through separate speciologist stages.

Generalists are relegated to second-class-citizen status; mere tripe-skimmers and sign-posters.

Specialisation is certainly one way to get ‘good enough’ at doing something … but it is not the only way.

Another way is to learn the key-essentials from someone who already knows (and can teach) and then to continuously improve using feedback on what works and what does not – feedback from everywhere.

This second approach is actually a much more effective and efficient way to develop expertise – but we have not been taught this way.  We have only learned the scrape-the-burned-toast-by-suck-and-see method.

We need to experience another way.

We need to experience rapid acquisition of expertise!

And being able to gain expertise quickly means that we can become expert generalists.

There is good evidence that the broader our skill-set the more resilient we are to change, and the more innovative we are when faced with novel challenges.

In the Navy of the 1800’s sailors were “Jacks of All Trades and Master of One” because if only one person knew how to navigate and they got shot or died of scurvy the whole ship was doomed.  Survival required resilience and that meant multi-skilled teams who were good enough at everything to keep the ship afloat – literally.


Specialisation has another big drawback – it is very expensive and on many dimensions. Not just Finance.

Example:

Suppose we have a six-step process and we have specialised to the point where an individual can only do one step to the required level of performance (safety/flow/quality/productivity). The minimum number of people we need is six and the process only flows when we have all six people. Our minimum costs are high and they do not scale with flow.

If any one of the six is not there then the whole process stops. There is no flow. So queues build up and smooth flow is sacrificed.

Our system behaves in an unstable and chaotic feast-or-famine manner and rapidly shifting priorities create what is technically called ‘thrashing’.

And the special-six do not like the constant battering.

And the special-six have the power to individually hold the whole system to ransom – they do not even need to agree.

And then we aggravate the problem by paying them a high salary that is independent of how much they collectively achieve.

We now have the perfect recipe for a bigger problem!  A bunch of grumpy, highly-paid specialists who blame each other for the chaos and who incessantly clamour for ‘more resources’ at every step.

This is not financially viable and so creates the drive for economy-of-scale thinking: to get ‘flow resilience’ we need more than one specialist at each of the six steps, so that if one is on holiday or off sick then the process can still flow. Let us give these tribes of ‘speciologists’ their own names and budgets; and now we need to put all these departments somewhere – so we will need a big hospital to fit them in – along with the queues of waiting work that they need.

Now we make an even bigger design blunder.  We assume the ‘efficiency’ of our system is the same as the average utilisation of all the departments – so we trim budgets until everyone’s utilisation is high; and we suck any-old work in to ensure there is always something to do to keep everyone busy.

And in so doing we sacrifice all our Economy of Flow opportunities and we then scratch our heads and wonder why our total costs and queues are escalating,  safety and quality are falling, the chaos continues, and our tribes of highly-paid specialists are as grumpy as ever they were!   It must be an impossible-to-solve problem!


Now contrast that with having a pool of generalists – all of whom are multi-skilled and can do any of the six steps to the required level of expertise.  A pool of generalists is a much more resilient-flow design.
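The resilience difference between the two designs can be illustrated with some simple probability arithmetic. The 90% individual availability and the pool size of eight are assumptions chosen purely for illustration:

```python
from math import comb

p = 0.9  # assumed probability that any one person is at work on a given day

# Specialist design: flow needs all six specific individuals to be present.
specialists_flow = p ** 6

# Generalist design: flow needs any 6 from an (assumed) pool of 8
# multi-skilled people - a binomial 'at least 6 of 8' calculation.
generalists_flow = sum(comb(8, k) * p**k * (1 - p)**(8 - k) for k in range(6, 9))

# Roughly 53% versus 96% of days with unbroken flow.
print(f"specialists: {specialists_flow:.0%}, generalists: {generalists_flow:.0%}")
```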

And the key phrase here is ‘to the required level of expertise’.

That is how to achieve Economy-of-Flow on a small scale without compromising either safety or quality.

Yes, there is still a need for a super-level of expertise to tackle the small number of complex problems – but that expertise is better delivered as a collective-expertise to an individual problem-focused process.  That is a completely different design.

Designing and delivering a system that can achieve the synergy of the pool-of-generalists and team-of-specialists model requires addressing a key error of omission first: we are not trained how to do this.

We are not trained in Complex-Adaptive-System Improvement-by-Design.

So that is where we must start.

 

[Bzzzzz Bzzzzz] Bob’s phone was on silent but the desktop amplified the vibration and heralded the arrival of Leslie’s weekly ISP mentoring call.

<Bob> Hi Leslie.  How are you today and what would you like to talk about?

<Leslie> Hi Bob.  I am well and I have an old chestnut to roast today … target-driven-behaviour!

<Bob> Excellent. That is one of my favourite topics. Is there a specific context?

<Leslie> Yes.  The usual desperate directive from on-high exhorting everyone to ‘work harder to hit the target’ and usually accompanied by a RAG table of percentages that show just who is failing and how badly they are doing.

<Bob> OK. Red RAGs irritating the Bulls eh? Percentages eh? Have we talked about Ratio Hazards?

<Leslie> We have talked about DRATs … Delusional Ratios and Arbitrary Targets as you call them. Is that the same thing?

<Bob> Sort of. What happened when you tried to explain DRATs to those who are reacting to these ‘desperate directives’?

<Leslie> The usual reply is ‘Yes, but that is how we are required to report our performance to our Commissioners and Regulatory Bodies.’

<Bob> And are the key performance indicators that are reported upwards and outwards also being used to manage downwards and inwards? If so then that is poor design and is very likely to be contributing to the chaos.

<Leslie> Can you explain that a bit more? It feels like a very fundamental point you have just made.

<Bob> OK. To do that let us work through the process by which the raw data from your system is converted into the externally reported KPI. Choose any one of your KPIs.

<Leslie> Easy! The 4-hour A&E target performance.

<Bob> What is the raw data that goes in to that?

<Leslie> The percentage of patients who breach 4-hours per day.

<Bob> And where does that ratio come from?

<Leslie> Oh! I see what you mean. That comes from a count of the number of patients who are in A&E for more than 4 hours divided by a count of the number of patients who attended.

<Bob> And where do those counts come from?

<Leslie> We calculate the time the patient is in A&E and use the 4-hour target to label them as breaches or not.

<Bob> And what data goes into the calculation of that time?

<Leslie> The arrival and departure times for each patient. The arrive and depart events.

<Bob> OK. Is that the raw data?

<Leslie> Yes. Everything follows from that.

<Bob> Good. Each of these two events is a time – which is a continuous metric. You could in principle record it to any degree of precision you like – milliseconds if you had a good enough clock.

<Leslie> Yes. We record it to an accuracy of seconds – it is when the patient is ‘clicked through’ on the computer.

<Bob> Careful Leslie, do not confuse precision with accuracy. We need both.

<Leslie> Oops! Yes I remember we had that conversation before.

<Bob> And how often is the A&E 4-hour target KPI reported externally?

<Leslie> Quarterly. We either succeed or fail each quarter of the financial year.

<Bob> That is a binary metric. An OK or not OK. No grey zone.

<Leslie> Yes. It is rather blunt but that is how we are contractually obliged to report our performance.

<Bob> OK. And how many patients per day on average come to A&E?

<Leslie> About 200 per day.

<Bob> So the data analysis process is boiling down about 36,000 pieces of continuous data into one Yes or No bit of binary data.

<Leslie> Yes.

<Bob> And then that one bit is used to drive the action of the Board: if it is ‘OK last quarter’ then there is no ‘desperate directive’ and if it is a ‘Not OK last quarter’ then there is.

<Leslie> Yes.

<Bob> So you are throwing away 99.9999% of your data and wondering why what is left is not offering much insight into what to do.

<Leslie> Um, I guess so … when you say it like that. But how does that relate to your phrase ‘Ratio Hazards’?

<Bob> A ratio is just one of the many ways that we throw away information. A ratio requires two numbers to calculate it; and it gives one number as an output so we are throwing half of our information away. And this is an irreversible act. Two specific numbers will give one ratio; but that ratio can be created by an infinite number of possible pairs of numbers and we have no way of knowing from the ratio which specific pair was used to create it.
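A short sketch makes the point; the day sizes are invented for illustration:

```python
# Four very different days - 25, 100, 500 and 5000 attendances - all
# collapse to the same 4% breach rate, and the original pairs cannot be
# recovered from the single ratio.
days = [(1, 25), (4, 100), (20, 500), (200, 5000)]
ratios = {breaches / attendances for breaches, attendances in days}
print(ratios)  # {0.04}
```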

<Leslie> So a ratio is an exercise in obfuscation!

<Bob> Well put! And there is an even more data-wasteful behaviour that we indulge in. We aggregate.

<Leslie> By that do you mean we summarise a whole set of numbers with an average?

<Bob> Yes. When we average we throw most of the data away and when we average over time then we abandon our ability to react in a timely way.

<Leslie> The Flaw of Averages!

<Bob> Yes.  One of them. There are many.
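A companion sketch, again with invented numbers, shows how the average hides the difference between a steady week and a chaotic one:

```python
from statistics import mean

# Two hypothetical weeks of daily lead times (minutes): one steady, one
# chaotic - yet the averages are identical.
steady = [118, 120, 122, 119, 121, 120, 120]
chaotic = [30, 240, 60, 235, 45, 230, 0]

print(mean(steady), mean(chaotic))  # both 120
```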

<Leslie> No wonder it feels like we are flying blind and out of control!

<Bob> There is more. There is an even worse data-wasteful behaviour. We threshold.

<Leslie> Is that when we use a target to decide if the lead time is OK or Not OK?

<Bob> Yes. And using an arbitrary target makes it even worse.

<Leslie> Ah ha! I see what you are getting at. The raw event data that we painstakingly collect is a treasure trove of information and potential insight that we could use to help us diagnose, design and deliver a better service. But we throw away all but one single solitary binary digit when we put it through the DRAT Processor.

<Bob> Yup.

<Leslie> So why could we not do both? Why could we not use the raw data for ourselves and the DRAT-processed data for external reporting?

<Bob> You could.  So what is stopping you doing just that?

<Leslie> We do not know how to effectively and efficiently interpret the vast ocean of raw data.

<Bob> That is what a time-series chart is for. It turns the thousands of pieces of valuable information into a picture that tells a story – without throwing the information away in the process. You just need to learn how to interpret the pictures.

<Leslie> Wow!  Now I understand much better why you insist we ‘plot the dots’ first.

<Bob> And now you understand the Ratio Hazards a bit better too.

<Leslie> Indeed so.  And once again I have much to ponder on. Thank you again Bob.
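For anyone wanting to start ‘plotting the dots’, here is a minimal sketch of the calculation behind a time-series chart. The daily counts are hypothetical, and the 2.66 constant is the standard XmR-chart factor:

```python
# Minimal 'plot the dots' sketch: an XmR-style view of daily counts with
# process limits derived from the average moving range (invented data).
from statistics import mean

counts = [190, 205, 198, 212, 201, 195, 208, 199, 203, 196]
moving_ranges = [abs(b - a) for a, b in zip(counts, counts[1:])]

centre = mean(counts)
avg_mr = mean(moving_ranges)
upper = centre + 2.66 * avg_mr  # upper natural process limit
lower = centre - 2.66 * avg_mr  # lower natural process limit

print(f"centre={centre:.1f}  limits=({lower:.1f}, {upper:.1f})")
for day, c in enumerate(counts, 1):
    flag = " <-- signal" if not (lower <= c <= upper) else ""
    print(f"day {day:2d}: {c}{flag}")
```

A point outside the limits is a signal worth investigating; points inside them are just routine variation, and reacting to each one only amplifies the chaos.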

There is an amazing phenomenon happening right now – a whole generation of people are learning to become system designers and they are doing it by having fun.

There is a game called Minecraft which millions of people of all ages are rapidly discovering.  It is creative, fun and surprisingly addictive.

This is what it says on the website.

“Minecraft is a game about breaking and placing blocks. At first, people built structures to protect against nocturnal monsters, but as the game grew players worked together to create wonderful, imaginative things.”

The principle is that before you can build you have to dig … you have to gather the raw materials you need … and then you have to use what you have gathered in novel and imaginative ways.  You need tools too, and you need to learn what they are used for, and what they are useless for. And the quickest way to learn the necessary survival and creative  skills is by exploring, experimenting, seeking help, and sharing your hard-won knowledge and experience with others.

The same principles hold in the real world of Improvement Science.

The treasure we are looking for is less tangible though … but no less difficult to find … unless you know where to look.

The treasure we seek is learning; how to achieve significant and sustained improvement on all dimensions.

And there is a mountain of opportunity that we can mine into. It is called Reality.

And when we do that we uncover nuggets of knowledge, jewels of understanding, and pearls of wisdom.

There are already many tunnels that have been carved out by others who have gone before us. They branch and join to form a vast cave network. A veritable labyrinth. Complicated and not always well illuminated or signposted.

And stored in the caverns is a vast treasure trove of experience we can dip into – and an even greater horde of new treasure waiting to be discovered.

But even now there is no comprehensive map of the labyrinth. So it is easy to get confused and to get lost. Not all junctions have signposts and not all the signposts are correct. There are caves with many entrances and exits, there are blind-ending tunnels, and there are many hazards and traps for the unwary.

So to enter the Learning Labyrinth and to return safely with Improvement treasure we need guides. Those who know the safe paths and the unsafe ones. And as we explore we all need to improve the signage and add warning signs where hazards lurk.

And we need to work at the edge of knowledge  to extend the tunnels further. We need to seal off the dead-ends, and to draw and share up-to-date maps of the paths.

We need to grow a Community of Improvement Science Minecrafters.

And the first things we need are some basic improvement tools and techniques … and they can be found here.

New ideas need time to germinate.

And seeds need soil – so if the context is toxic the seeds will remain dormant or die.

And gardeners need to have patience.

And gardeners need to prepare the seeds and the soil, and to nurture and nourish the green shoots of innovation.

When a seed-of-change finds itself in fertile soil it will germinate.  That is just the first step.

The fragile new shoot of improvement must be watered and protected from harm as it grows taller and gains strength-of-evidence.

The goal is for the new growth to bear its own fruit, and its own seeds which then spread the proven practice far and wide.

Experienced Improvement Science Practitioners know this.

They know that when the seeds of a proven improvement meet resistance then the cultural soil is not ready.  A few hard winters may be needed to break up the clods. Or perhaps the sharp spade of an external inspection is needed to crack through the carapace of complacency.

And competition from the worthless weeds of weak thinking is always present. The bindweed of bureaucracy saps energy and enthusiasm and hacking at it is futile. It only grows even more vigorously. Weeds need to be approached from the roots upwards. Without roots they will wither.

Purpose, practice, patience, preparation and persistence are the characteristics that lead to sustained success.

And when the new fruit of the improvement tree are ready and the seeds are ripe it is important not to jealously protect and store them away from harsh critique … they need to be scattered to the four winds and to have an opportunity to find fertile soil elsewhere and to establish their own colonies.

Many will not succeed.  And a few will evolve into opportunities that were never anticipated.

That is the way of innovation, germination, dissemination and evolution.

That is the way of Improvement Science.

Change is scary.

Deliberately stepping out of our comfort zones is scary.

We feel the fear – but sometimes we do it anyway. Why? How?

What we do is that we prepare and the feeling of fear becomes diluted with a feeling of excitement – and when the balance is right we do it.

So what are the tell-tale signs?

Excitement is a positive emotion – so when we imagine the future and feel excited we unconsciously smile and we feel better afterwards.  We want to share our excitement. We tell others that we are looking forward to the future.

Like birthdays, and holidays, and a new house and a new job. New stuff is exciting when WE decide we want it.


Fear comes from being forced to change and from not having the opportunity to prepare.

Fear happens when change is sprung on us unexpectedly by chance or by someone else.

Fear is a negative emotion and we feel bad afterwards so we avoid it.

So if thinking about the future is dominated by a feeling of fear then we resist and we prevaricate and we get labelled as obstructive, and difficult and cynical.

And that makes the fear worse.


So the way to make the future feel exciting is:

1. Set a clear and constant win-win-win purpose.
2. Show that it is possible by sharing examples.
3. Show that it is achievable by sharing the step-by-step process.
4. Provide the opportunity for preparation.
5. Include those that the change affects to plan their own transition.
6. Ensure that those affected know their part in the process.

And do not underestimate how long this takes, and how much repetition, listening, explanation and respectful challenge it requires – so the sooner this starts the better.

We hear the news, we feel the fear, we build the excitement and then we do it.

That is the way of change.


[Beep, Beep, Beep, Beep, Beeeeep] The reminder roused Bob from deep reflection and he clicked the Webex link on his desktop to start the meeting. Leslie was already online.

<Bob> Hi Leslie. How are you? And what would you like to share and explore today?

<Leslie> Hi Bob, I am well thank you and I would like to talk about chaos again.

<Bob> OK. That is always a rich mine of new insights!  Is there a specific reason?

<Leslie> Yes. The story I want to share is of the chaos that I have been experiencing just trying to get a new piece of software available for my team to use. You would not believe the amount of time, emails, frustration and angst it has taken to negotiate this through the ‘proper channels’.

<Bob> Let me guess … about six months?

<Leslie> Spot on! How did you know?

<Bob> Just prior experience of similar stories.  So what is your diagnosis of the cause of the chaos?

<Leslie> My intuition shouts at me that people are just being deliberately difficult and that makes me feel angry and want to shout at them … but I have learned that behaviour is counter-productive.

<Bob> So what did you do?

<Leslie> I escalated the ‘problem’ to my line manager.

<Bob> And what did they do?

<Leslie> I am not sure, I was not copied in, but it seemed to clear the ‘obstruction’.

<Bob> And were the ‘people’ you mentioned suddenly happy and willing to help?

<Leslie> Not really … they did what we needed but they did not seem very happy about it.

<Bob> OK.  You are describing a Drama Triangle, a game, and your behaviour was from the Persecutor role.

<Leslie> What! But I deliberately did not send any ANGRY emails or get into a childish argument. I escalated the issue I could not solve because that is what we are expected to do.

<Bob> Yes I know. If you had engaged in a direct angry conversation, by whatever means, that would have been an actively aggressive act.  By escalating the issue and someone Bigger having the angry conversation you have engaged in a passive aggressive act. It is still playing the game from the Persecutor role and in fact is the more common mode of Persecution.

<Leslie> But it got the barrier cleared and the problem sorted?

<Bob> And did it leave everyone feeling happier than before?

<Leslie> I guess not. I certainly felt like a bit of a ‘tale teller’ and the IT technician probably hates me and fears for his job, and the departmental heads probably distrust each other even more than before.

<Bob> So this approach may appear to work in the short term but it creates a much bigger long term problem – and it is that long term problem of ‘distrust’ that creates the chaos. So it is a self-sustaining design.

<Leslie> Oh dear! Is there a way to avoid this and to defuse the chronic distrust?

<Bob> Yes.  You have demonstrated a process that you would like to improve – you want the same short term outcome, your software installed and working, and you want it quicker and with less angst and leaving everyone feeling good about how they have played a part in achieving that objective.

<Leslie> Yes. That would be my ideal.

<Bob> So what is different between what you did and your ‘ideal’ scenario? What did you do that you should not have, and what did you not do that you could have?

<Leslie> Well I triggered off a drama triangle which I should not have. I also assumed that the IT people would know what to do because I do not understand the technical nuances of getting new software procured and installed. What I could have done is make it much clearer for them what I needed, why I needed it and how and when I needed it. I could have done a lot more homework before asking them for assistance. I could also have given my inner Chimp a banana and gone to talk to them face-to-face to ask their opinion early on, so I could see the problem from their perspective as well as mine.

<Bob> Yes – that all sounds reasonable and respectful. What you are doing is ‘synchronising’. You are engaging in understanding the process well enough so that you can align all the actions that need to be done, in the correct order, and then sharing that. It is rather like being the composer of a piece of music – you share the score so that the individual players know what to do and when. There is one other task you need to do.

<Leslie> I need to be the conductor!

<Bob> Yes.  You are the metronome.  You set the pace and guide the orchestra. They are the specialists with their instruments – that is not your role.

<Leslie> And when I do that then the music is harmonious and pleasing-to-the-ear; not a chaotic cacophony!

<Bob> Indeed … and the music is the voice of the system – it is the feedback that everyone hears – and not only do the musicians derive pleasure from contributing, the wider audience will also hear what can be achieved and see how it is achieved.

<Leslie> Wow!  That musical metaphor works really well for me. Thanks Bob, I need to go and work on my communicating, composing and conducting capabilities.

An Improvement-by-Design challenge is very like a Sudoku puzzle. The rules are deceptively simple but solving the puzzle is not so simple.

For those who have never tried a Sudoku puzzle the objective is to fill in all the empty boxes with a number between 1 and 9. The constraint is that each row, column and 3×3 box (outlined in bold) must include all the numbers between 1 and 9 i.e. no duplicates.

What you will find when you try is that, at each point in the puzzle-solving process, there is more than one choice for most empty cells.

The trick is to find the empty cells that have only one option and fill those in. That changes the puzzle and makes it ‘easier’.

And when you keep following this strategy, and so long as you do not make any mistakes, then you will solve the puzzle.  It just takes concentration, attention to detail, and discipline.
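The ‘fill in the cells with only one option’ strategy described above is simple enough to express in code. Here is a minimal sketch in Python (the function names are my own illustration, not from the original text); it repeatedly fills every empty cell whose row, column and 3×3 box leave exactly one legal digit:

```python
# Sketch of the 'only one option' Sudoku strategy (sometimes called
# 'naked singles'). The grid is a 9x9 list of lists; 0 marks an empty cell.

def candidates(grid, r, c):
    """Return the set of digits that could legally go in cell (r, c)."""
    used = set(grid[r])                                 # digits in this row
    used |= {grid[i][c] for i in range(9)}              # ... in this column
    br, bc = 3 * (r // 3), 3 * (c // 3)                 # ... in the 3x3 box
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used

def solve_singles(grid):
    """Repeatedly fill every cell that has exactly one candidate."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    opts = candidates(grid, r, c)
                    if len(opts) == 1:       # only one choice - fill it in
                        grid[r][c] = opts.pop()
                        progress = True      # the puzzle just got 'easier'
    return grid
```

As the text notes, this approach only completes puzzles where a chain of single-option cells runs all the way to the end; harder starting configurations need ‘guessing’ (search) on top of it.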

In the example above, the top-right cell in the left-box on the middle-row can only hold a 6; and the top-middle cell in the middle-box on the bottom-row must be a 3.

So we can see already there are three ways ‘into’ the solution – put the 6 in and see where that takes us; put the 3 in and see where that takes us; or put both in and see where that takes us.

The final solution will be the same – so there are multiple paths from where we are to our objective.  Some may involve more mental work than others but all will involve completing the same number of empty cells.

What is also clear is that the order in which we complete the empty cells is not arbitrary. Usually the boxes and rows with the fewest empty cells get completed earlier, and those with the most empty cells at the start get completed later.

And even if the final configuration is the same, if we start with a different set of missing cells the solution path will be different. It may be very easy, very hard or even impossible without some ‘guessing’ and hoping for the best.


Exactly the same is true of improvement-by-design challenges.

The rules of flow science are rather simple; but when we have a system of parallel streams (the rows) interacting with parallel stages (the columns), and when we have safety, delivery and economy constraints to comply with at every part of the system … then finding an ‘improvement plan’ that will deliver our objective is a tough challenge.

But it is possible with concentration, attention-to-detail and discipline; and that requires some flow science training and some improvement science practice.

OK – I am off for lunch and then maybe indulge in a Sudoku puzzle or two – just for fun – and then maybe design an improvement plan or two – just for fun!

 

<Lesley> Hi Bob, how are you today?

<Bob> I’m OK thanks Lesley. Having a bit of a break from the daily grind.

<Lesley> Oh! I am sorry, I had no idea you were on holiday. I will call when you are back at work.

<Bob> No need Lesley. Our chats are always a welcome opportunity to reflect and learn.

<Lesley> OK, if you are sure.  The top niggle on my list at the moment is that I do not feel my organisation values what I do.

<Bob> OK. Have you done the diagnostic Right-2-Left Map® backwards from that top niggle?

<Lesley> Yes. The final straw was that I was asked to justify my improvement role.

<Bob> OK, and before that?

<Lesley> There have been some changes in the senior management team.

<Bob> OK. This sounds like the ‘New Brush Sweeps Clean’ effect.

<Lesley> I have heard that phrase before. What does it mean in this context?

<Bob> Senior management changes are very disruptive events. The more senior the change the more disruptive it is. Let us call it a form of ‘Disruptive Innovation’. The trigger for the change is important. One trigger might be a well-respected and effective leader retiring or moving to an even more senior role. This leaves a leadership gap which is an opportunity for someone to grow and develop. Another trigger might be a less-respected and ineffective leader moving on and leaving a trail of rather-too-visible failures. It is the latter that tends to be associated with the New Broom effect.

<Lesley> How is that?

<Bob> Well, put yourself in the shoes of the New Leader who has inherited a Trail of Disappointment – you need to establish your authority and expectation quickly and decisively. Ambiguity and lack of clarity will only contribute to further disappointment. So you have to ask everyone to justify what they do. And if they cannot then you need to know that. And if they can then you need to decide if what they do is aligned with your purpose. This is the New Brush.

<Lesley> So what if I can justify what I do and that does not fit with the ‘New Leader’s Plan’?

<Bob> If what you do is aligned with your Life Purpose but not with the New Brush then you have to choose. And experience shows that the road to long term personal happiness is the one that aligns with your individual purpose. And often it is just a matter of timing. The New Brush is indiscriminate and impatient – anything that does not fit neatly into the New Plan has to go.

<Lesley> OK my purpose is to improve the safety, flow, quality and productivity of healthcare processes – for the benefit of all. That is not negotiable. It is what fires my passion and fuels my day.  So does it matter really where or how I do that?

<Bob> Not really. You do need to be mindful of the pragmatic constraints though … your life circumstances. There are many paths to your Purpose, so it is wise to choose one that is low enough risk to both you and those you love.

<Lesley> Ah! Now I see why you say that timing is important. You need to prepare to be able to make the decision. You do not want to be caught by surprise and off balance.

<Bob> Yes. That is why as an ISP you always start with your own Purpose and your own Right-2-Left Map®. Then you will know what to prepare and in what order so that you have the maximum number of options when you have to make a choice. Sometimes the Universe will create the trigger and sometimes you have to initiate it yourself.

<Lesley> So this is just another facet of Improvement Science?

<Bob> Yes.

Fires are destructive, indifferent, and they can grow and spread very fast.

The picture is of the Buncefield explosion and conflagration that occurred on 11th December 2005 near Hemel Hempstead in the UK. The root cause was a faulty switch that failed to prevent tank number 912 from being overfilled. This resulted in an initial 300 gallon petrol spill which created the perfect conditions for an air-fuel explosion. The explosion was triggered by a spark and devastated the facility. Over 2000 local residents needed to be evacuated and the massive fuel fire took days to bring under control. The financial cost of the accident has been estimated to run into tens of millions of pounds.

The Great Fire of London in September 1666 led directly to the adoption of new building standards – notably brick and stone instead of wood because they are more effective barriers to fire.

A common design to limit the spread of a fire is called a firewall.

And we use the same principle in computer systems to limit the spread of damage when a computer system goes out of control.


Money is the fuel that keeps the wheels of healthcare systems turning.  And healthcare is an expensive business so every drop of cash-fuel is precious.  Healthcare is also a risky business – from both a professional and a financial perspective. Mistakes can quickly lead to loss of livelihood, expensive recovery plans and huge compensation claims. The social and financial equivalent of a conflagration.

Financial fires spread just like real ones – quickly. So it makes good sense not to have all the cash-fuel in one big pot.  It makes sense to distribute it to smaller pots – in each department – and to distribute the cash-fuel intermittently. These cash-fuel silos are separated by robust financial firewalls and they are called Budgets.

The social sparks that ignite financial fires are called ‘Niggles’. They are very numerous but we have effective mechanisms for containing them. The problem happens when multiple sparks happen at the same time and place and together create a small chain reaction. Then we get a complaint. A ‘Not Again’. And we are required to spend some of our precious cash-fuel investigating and apologizing. We do not deal with the root cause, we just scrape the burned toast.

And then one day the chain reaction goes a bit further and we get a ‘Near Miss’. That has a different reporting mechanism so it stimulates a bigger investigation and it usually culminates in some recommendations that involve more expensive checking, documenting and auditing of the checking and documentation. The root cause, the Niggles, go untreated – because there are too many of them.

But this check-and-correct reaction is also expensive and we need even more cash-fuel to keep the organizational engine running – but we do not have any more. Our budgets are capped. So we start cutting corners. A bit here and a bit there. And that increases the risk of more Niggles, Not Agains, and Near Misses.

Then the ‘Never Event‘ happens … a Safety and Quality catastrophe that triggers the financial conflagration and toasts the whole organization.


So although our financial firewalls, the Budgets, are partially effective they also have downsides:

1. Paradoxically they can create the perfect condition for a financial conflagration when too small a budget leads to corner-cutting on safety.

2. They lead to ‘off-loading’ which means that too-expensive-to-solve problems are chucked over the financial firewalls into the next department.  The cost is felt downstream of the source – in a different department – and is often much larger. The sparks are blown downwind.

For example: a waiting list management department is under financial pressure and is running short-staffed because a recruitment freeze has been imposed. The overburdening of the remaining staff leads to errors in booking patients for operations. The knock-on effect is that patients are cancelled on the day and the allocated operating theatre time is wasted. The additional cost of wasted theatre time is orders of magnitude greater than the cost-saving achieved in the upstream stage. The result is a lower quality service, a greater cost to the whole system, and the risk that safety corners will be cut leading to a Near Miss or a Never Event.

The nature of real systems is that small perturbations can be rapidly amplified by a ‘tight’ financial design to create a very large and expensive perturbation called a ‘catastrophe’.  A silo-based financial budget design with a cost-improvement thumbscrew feature increases the likelihood of this universally unwanted outcome.

So if we cannot use one big fuel tank or multiple, smaller, independent fuel tanks then what is the solution?

We want to ensure smooth responsiveness of our healthcare engine, we want healthcare cash-fuel-efficiency, and we want low levels of toxic emissions (i.e. complaints) at the same time. How can we do that?

Fuel-injection.

Electronic Fuel Injection (EFI) designs have now replaced the old-fashioned, inefficient, high-emission carburettor-based engines of the 1970s and 1980s.

The safer, more effective and more efficient cash-flow design is to inject the cash-fuel where and when it is needed and in just the right amount.

And to do that we need to have a robust, reliable and rapid feedback system that controls the cash-injectors.

But we do not have such a feedback system in healthcare so that is where we need to start our design work.

Designing an automated cash-injection system requires understanding how the Seven Flows of any system work together, and the two critical flows are Data Flow and Cash Flow.

And that is possible.

The image of a tornado is what many associate with improvement. An unpredictable, powerful force that sweeps away the wood in its path. It certainly transforms – but it leaves a trail of destruction and disappointment in its wake. It does not discriminate between the green wood and the dead wood.

A whirlwind is created by a combination of powerful forces – but the trigger that unleashes the beast is innocuous. The classic ‘butterfly wing effect’. A spark that creates an inferno.

This is not the safest way to achieve significant and sustained improvement. A transformation tornado is a blunt and destructive tool.  All it can hope to achieve is to clear the way for something more elegant. Improvement Science.

We need to build the capability for improvement progressively, and to build it to be effective, efficient, strong, reliable and resilient. In a word – trustworthy. We need a durable structure.

But what sort of structure?  A tower from whose lofty penthouse we can peer far into the distance?  A bridge between the past and the future? A house with foundations, walls and a roof? Do these man-made edifices meet our criteria?  Well partly.

Let us see what nature suggests. What are the naturally durable designs?

Suppose we have a bag of dry sand – an unstructured mix of individual grains – and that each grain represents an improvement idea.

Suppose we have a specific issue that we would like to improve – a Niggle.

Let us try dropping the Improvement Sand on the Niggle – not in a great big reactive dollop – but in a proactive, exploratory bit-at-a-time way.  What shape emerges?

What we see is illustrated by the hourglass. We get a pyramid.

The shape of the pyramid is determined by two factors: how sticky the sand is and how fast we pour it.

What we want is a tall pyramid – one whose sturdy pinnacle gives us the capability to see far and to do much.

The stickier the sand the steeper the sides of our pyramid.  The faster we pour the quicker we get the height we need. But there is a limit. If we pour too quickly we create instability – we create avalanches.

So we need to give the sand time to settle into its stable configuration; time for it to trickle to where it feels most comfortable.

And, in translating this metaphor to building improvement capability in a system, we could suggest that the ‘stickiness’ factor is how well ideas hang together, how well individuals get on with each other, and how well they share ideas and learning. How cohesive our people are. Distrust and conflict represent repulsive forces. Repulsion creates a large, wide, flat structure – stable maybe, but incapable of vision and improvement. That is not what we need.

So when developing a strategy for building improvement capability we build small pyramids where the niggles point. Over time they will merge and bigger pyramids will appear and merge – until we achieve the height. Then we have a stable and capable improvement structure. One that we can use and we can trust.

Just from sprinkling Improvement Science Sand on our Niggles.

[Dring Dring] The telephone soundbite announced the start of the mentoring session.

<Bob> Good morning Leslie. How are you today?

<Leslie> I have been better.

<Bob> You seem upset. Do you want to talk about it?

<Leslie> Yes, please. The trigger for my unhappiness is that last week I received an email demanding that I justify the time I spend doing improvement work, and a summons to a meeting to ‘discuss some issues that have been raised’.

<Bob> OK. I take it that you do not know what or who has triggered this inquiry.

<Leslie> You are correct. My working hypothesis is that it is the end of the financial year and budget holders are looking for opportunities to do some pruning – to meet their cost improvement program targets!

<Bob> So what is the problem? You have shared the output of your work. You have demonstrated significant improvements in safety, flow, quality and productivity and you have described both them and the methodology clearly.

<Leslie> I know. That is why I was so upset to get this email. It is as if everything that we have achieved has been ignored. It is almost as if it is resented.

<Bob> Ah! You may well be correct. This is the nature of paradigm shifts. Those who have the greatest vested interest in the current paradigm get spooked when they feel it start to wobble. Each time you share the outcome of your improvement work you create emotional shock-waves. The effects are cumulative and eventually there will be a ‘crisis of confidence’ in those who feel most challenged by the changes that you are demonstrating are possible. The whole process is well described in Thomas Kuhn’s The Structure of Scientific Revolutions. That is not a book for an impatient reader though – for those who prefer something lighter I recommend “Our Iceberg is Melting” by John Kotter.

<Leslie> Thanks Bob. I will get a copy of Kotter’s book – that sounds more my cup of tea. Will that tell me what to do?

<Bob> It is a parable – a fictional story of a colony of penguins who discover that their iceberg is melting and are suddenly faced with a new and urgent potential risk of not surviving the storms of the approaching winter. It is not a factual account of a real crisis or a step-by-step recipe book for solving all problems  – it describes some effective engagement strategies in general terms.

<Leslie> I will still read it. What I need is something more specific to my actual context.

<Bob> This is an improvement-by-design challenge. The only difference from the challenges you have done already is that this time the outcome you are looking for is a smooth transition from the ‘old’ paradigm to the ‘new’ one.  Kuhn showed that this transition will not start to happen until there is a new paradigm because individuals choose to take the step from the old to the new and they do not all do that at the same time.  Your work is demonstrating that there is a new paradigm. Some will love that message, some will hate it. Rather like Marmite.

<Leslie> Yes, that makes sense. But how do I deal with an unseen enemy who is stirring up trouble behind my back?

<Bob> Are you are referring to those who have ‘raised some issues‘?

<Leslie> Yes.

<Bob> They will be the ones who have most invested in the current status quo and they will not be in senior enough positions to challenge you directly so they are going around spooking the inner Chimps of those who can. This is expected behaviour when the relentlessly changing reality starts to wobble the concrete current paradigm.

<Leslie> Yes! That is exactly how it feels.

<Bob> The danger lurking here is that your inner Chimp is getting spooked too and is conjuring up Gremlins and Goblins from the Computer! Left to itself your inner Chimp will steer you straight into the Victim Vortex. So you need to take it for a long walk, let it scream and wave its hairy arms about, listen to it, and give it lots of bananas to calm it down. Then put your calmed-down Chimp into its cage and your ‘paradigm transition design’ into the Computer. Only then will you be ready for the ‘so-justify-yourself’ meeting. At the meeting your Chimp will be out of its cage like a shot and interpreting everything as a threat. It will disable you and go straight to the Computer for what to do – and it will read your design and follow the ‘wise’ instructions that you have put in there.

<Leslie> Wow! I see how you are using the Chimp Paradox metaphor to describe an incredibly complex emotional process in really simple language. My inner Chimp is feeling happier already!

<Bob> And remember that you are all in the same race. Your collective goal is to cross the finish line as quickly as possible with the least chaos, pain and cost. You are not in a battle – that is lose-lose inner Chimp thinking. The only message that your interrogators must get from you is ‘Win-win is possible and here is how we can do it’. That will be the best way to soothe their inner Chimps – the ones who fear that you are going to sink their boat by rocking it.

<Leslie> That is really helpful. Thank you again Bob. My inner Chimp is now snoring gently in its cage and while it is asleep I have some Improvement-by-Design work to do and then some Computer programming.

“Primum non nocere” is Latin for “First do no harm”.

It is a warning mantra that has been repeated by doctors for thousands of years, and for good reason.

Doctors can be bad for your health.

I am not referring to the rare case where the doctor deliberately causes harm.  Such people are criminals and deserve to be in prison.

I am referring to the much more frequent situation where the doctor has no intention to cause harm – but harm is the outcome anyway.

Very often the risk of harm is unavoidable. Healthcare is a high risk business. Seriously unwell patients can be very unstable and very unpredictable.  Heroic efforts to do whatever can be done can result in unintended harm and we have to accept those risks. It is the nature of the work.  Much of the judgement in healthcare is balancing benefit with risk on a patient by patient basis. It is not an exact science. It requires wisdom, judgement, training and experience. It feels more like an art than a science.

The focus of this essay is not the above. It is on unintentionally causing avoidable harm.

Or rather unintentionally not preventing avoidable harm which is not quite the same thing.

Safety means prevention of avoidable harm. A safe system is one that does that. There is no evidence of harm to collect. A safe system does not cause harm. Never events never happen.

Safe systems are designed to be safe.  The root causes of harm are deliberately designed out one way or another.  But it is not always easy because to do that we need to understand the cause-and-effect relationships that lead to unintended harm.  Very often we do not.


In 1847 a doctor called Ignaz Semmelweis made a very important discovery. He discovered that if the doctors and medical students washed their hands in disinfectant when they entered the labour ward, then the number of mothers and babies who died from infection was reduced.

And the number dropped a lot.

It fell from an annual average of 10% to less than 2%!  In really bad months the rate was 30%.

The chart below shows the actual data plotted as a time-series chart. The yellow flag in 1848 is just after Semmelweis enforced a standard practice of hand-washing.

[Chart: Vienna maternal mortality, 1785–1848]

Semmelweis did not know the mechanism though. This was not a carefully designed randomised controlled trial (RCT). He was desperate. And he was desperate because this horrendous waste of young lives was only happening on the doctors’ ward. On the nurses’ ward, which was just across the corridor, the maternal mortality was less than 2%.

The hospital authorities explained it away as ‘bad air’ from outside. That was the prevailing belief at the time. Unavoidable. A risk that had to be just accepted.

Semmelweis could not do a randomised controlled trial because they were not invented until a century later.

And Semmelweis suspected that the difference between the mortality on the nurses and the doctors wards was something to do with the Mortuary. Only the doctors performed the post-mortems and the practice of teaching anatomy to medical students using post-mortem dissection was an innovation pioneered in Vienna in 1823 (the first yellow flag on the chart above). But Semmelweis did not have this data in 1847.  He collated it later and did not publish it until 1861.

What Semmelweis demonstrated was that the unintended and avoidable deaths were caused by ignorance of the mechanism by which microorganisms cause disease. We know that now. He did not.

It would be another 20 years before Louis Pasteur demonstrated the mechanism using the famous experiment with the swan neck flask. Pasteur did not discover microorganisms; he proved that they did not appear spontaneously in decaying matter as was believed. He proved that by killing the bugs by boiling, the broth in the flask stayed fresh even though it was exposed to the air. That was a big shock but it was a simple and repeatable experiment. He had a mechanism. He was believed. Germ theory was born. A Scottish surgeon called Joseph Lister read of this discovery and surgical antisepsis was born.

Semmelweis suspected that some ‘agent’ may have been unwittingly transported from the dead bodies to the live mothers and babies on the hands of the doctors.  It was a deeply shocking suggestion that the doctors were unwittingly killing their patients.

The other doctors did not take this suggestion well. Not well at all. They went into denial. They discounted the message and they discharged the messenger. Semmelweis never worked in Vienna again. He went back to Hungary and repeated the experiment. It worked.


Even today the message that healthcare practitioners can unwittingly bring avoidable harm to their patients is disturbing. We still seek solace in denial.

Hospital acquired infections (HAI) are a common cause of harm and many are avoidable using simple, cheap and effective measures such as hand-washing.

The harm does not come from what we do. It comes from what we do not do. It happens when we omit to follow the simple safety measures that have been proven to work. Scientifically. Statistically significantly. Understood and avoidable errors of omission.


So how is this “statistically significant scientific proof” acquired?

By doing experiments. Just like the one Ignaz Semmelweis conducted. But the improvement he showed was so large that it did not need statistical analysis to validate it.  And anyway such analysis tools were not available in 1847. If they had been he might have had more success influencing his peers. And if he had achieved that goal then thousands, if not millions, of deaths from hospital acquired infections may have been prevented.  With the clarity of hindsight we now know this harm was avoidable.

No. The problem we have now is because the improvement that follows a single intervention is not very large. And when the causal mechanisms are multi-factorial we need more than one intervention to achieve the improvement we want. The big reduction in avoidable harm. How do we do that scientifically and safely?


About 20% of hospital acquired infections occur after surgical operations.

We have learned much since 1847 and we have designed much safer surgical systems and processes. Joseph Lister ushered in the era of safe surgery, much has happened since.

We routinely use carefully designed, ultra-clean operating theatres, sterilized surgical instruments, gloves and gowns, and aseptic techniques – all to reduce bacterial contamination from outside.

But surgical site infections (SSIs) are still commonplace. Studies show that 5% of patients on average will suffer this complication. Some procedures carry much higher risk than others, despite the precautions we take. And many surgeons assume that this risk must just be accepted.

Others have tried to understand the mechanism of SSI and their research shows that the source of the infections is the patients themselves. We all carry a ‘bacterial flora’ and normally that is no problem. Our natural defence – our skin – is enough. But when that biological barrier is deliberately breached during a surgical operation then we have a problem. The bugs get in and cause mischief. They cause surgical site infections.

So we have done more research to test interventions to prevent this harm. Each intervention has been subject to well-designed, carefully-conducted, statistically-valid and very expensive randomized controlled trials.  And the results are often equivocal. So we repeat the trials – bigger, better controlled trials. But the effects of the individual interventions are small and they easily get lost in the noise. So we pool the results of many RCTs in what is called a ‘meta-analysis’ and the answer from that is very often ‘not proven’ – either way.  So individual surgeons are left to make the judgement call and not surprisingly there is wide variation in practice.  So is this the best that medical science can do?

No. There is another way. What we can do is pool all the learning from all the trials and design a multi-faceted intervention. A bundle of care. And the idea of a bundle is that the separate small effects will add, or even synergise, to create one big effect. We are not so much interested in the mechanism as the outcome. Just like Ignaz Semmelweis.

And we can now do something else. We can test our bundle of care using statistically robust tools that do not require an RCT. They are just as statistically valid as an RCT but use a different design.

And the appropriate tool for this is to measure the time interval between the adverse events – and then to plot this continuous metric as a time-series chart.

But we must be disciplined. First we establish the baseline average interval, then we introduce our bundle, and then we just keep measuring the intervals.

If our bundle works then the interval between the adverse events gets longer – and we can easily prove that using our time-series chart. The longer the interval, the more ‘proof’ we have. In fact we can even predict how long we need to observe to prove that ‘no events’ is a statistically significant improvement. That is an elegant and efficient design.
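The mechanics of this interval-between-events method can be sketched in a few lines of Python. This is an illustrative sketch only: the event dates are invented, and it assumes an XmR-style chart where the upper natural process limit is the baseline mean plus 2.66 times the average moving range (a published example may use a different chart design).

```python
from datetime import date

# Hypothetical dates of consecutive adverse events during the baseline period
baseline_events = [date(2023, 1, 2), date(2023, 1, 5), date(2023, 1, 27),
                   date(2023, 1, 31), date(2023, 2, 20), date(2023, 3, 1)]

# The continuous metric: interval (in days) between consecutive events
intervals = [(b - a).days for a, b in zip(baseline_events, baseline_events[1:])]

# XmR-chart construction: centre line is the mean baseline interval;
# the upper natural process limit is mean + 2.66 * (average moving range)
mean_interval = sum(intervals) / len(intervals)
moving_ranges = [abs(b - a) for a, b in zip(intervals, intervals[1:])]
upl = mean_interval + 2.66 * sum(moving_ranges) / len(moving_ranges)

def is_significant(interval_days: float) -> bool:
    # A post-intervention interval longer than the upper limit is
    # statistically valid evidence that events have become rarer
    return interval_days > upl
```

This also shows how ‘no events’ becomes proof: once the days elapsed since the last event exceed the upper limit, the event-free run is itself statistically significant evidence of improvement.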


Here is a real and recent example.

The time-series chart below shows the interval in days between surgical site infections following routine hernia surgery. These are not life-threatening complications. They rarely require re-admission or re-operation. But they are disruptive for patients. They cause pain, require treatment with antibiotics, and they delay recovery and the return to normal activities. So we would like to avoid them if possible.

[Figure: Hernia_SSI_CareBundle – time-series chart]

The green and red lines show the baseline period. The green line says that the average interval between SSIs is 14 days. The red line says that an interval of more than about 60 days would be surprisingly long: valid statistical evidence of an improvement. The end of the green and red lines indicates when the intervention was made: when the evidence-based care bundle was adopted, together with the discipline of applying it to every patient. No judgement. No variation.

The chart tells the story. No complicated statistical analysis is required. It shows a statistically significant improvement.  And the SSI rate fell by over 80%. That is a big improvement.

We still do not know how the care bundle works. We do not know which of the seven simultaneous, simple, low-cost interventions we chose is the most important, or even whether they work independently or in synergy. Knowledge of the mechanism was not our goal.

Our goal was to improve outcomes for our patients – to reduce avoidable harm – and that has been achieved. The evidence is clear.

That is Improvement Science in action.

And to read the full account of this example of the Science of Improvement please go to:

http://www.journalofimprovementscience.net

It is essay number 18.

And avoid another error of omission. Do not omit to share this message – it is important.

Improvement implies change.
Change implies action.
Action implies decision.

So how is the decision made?
With Urgency?
With Understanding?

Bitter experience teaches us that often there is an argument about what to do and when to do it.  An argument between two factions. Both are motivated by a combination of anger and fear. One side is motivated more by anger than fear. They vote for action because of the urgency of the present problem. The other side is motivated more by fear than anger. They vote for inaction because of their fear of future failure.

The outcome is unhappiness for everyone.

If the ‘action’ party wins the vote and a failure results then there is blame and recrimination. If the ‘inaction’ party wins the vote and a failure results then there is blame and recrimination. If either party achieves a success then there is both gloating and resentment. Lose Lose.

The issue is not the decision and how it is achieved. The problem is the battle.

Dr Steve Peters is a psychiatrist with 30 years of clinical experience.  He knows how to help people succeed in life through understanding how the caveman wetware between their ears actually works.

In the run up to the 2012 Olympic games he was the sports psychologist for the multiple-gold-medal winning UK Cycling Team.  The World Champions. And what he taught them is described in his book – “The Chimp Paradox“.

Steve brilliantly boils the current scientific understanding of the complexity of the human mind down into a simple metaphor.

One that is accessible to everyone.

The metaphor goes like this:

There are actually two ‘beings’ inside our heads. The Chimp and the Human. The Chimp is the older, stronger, more emotional and more irrational part of our psyche. The Human is the newer, weaker, logical and rational part.  Also inside there is the Computer. It is just a memory where both the Chimp and the Human store information for reference later. Beliefs, values, experience. Stuff like that. Stuff they use to help them make decisions.

And when some new information arrives through our senses – sight and sound for example – the Chimp gets first dibs and uses the Computer to look up what to do.  Long before the Human has had time to analyse the new information logically and rationally. By the time the Human has even started on solving the problem the Chimp has come to a decision and signaled it to the Human and associated it with a strong emotion. Anger, Fear, Excitement and so on. The Chimp operates on basic drives like survival-of-the-self and survival-of-the-species. So if the Chimp gets spooked or seduced then it takes control – and it is the stronger so it always wins the internal argument.

But the Human is responsible for the actions of the Chimp. As Steve Peters says, ‘If your dog bites someone you cannot blame the dog – you are responsible for the dog‘. So it is with our inner Chimps. Very often we end up apologising for the bad behaviour of our inner Chimp.

Because our inner Chimp is the stronger we cannot ‘control’ it by force. We have to learn how to manage the animal. We need to learn how to soothe it and to nurture it. And we need to learn how to remove the Gremlins that it has programmed into the Computer. Our inner Chimp is not ‘bad’ or ‘mad’ it is just a Chimp and it is an essential part of us.

Real chimpanzees are social, tribal and territorial.  They live in family groups and the strongest male is the boss. And it is now well known that a troop of chimpanzees in the wild can plan and wage battles to acquire territory from neighbouring troops. With casualties on both sides.  And so it is with people when their inner Chimps are in control.

Which is most of the time.

Scenario:
A hospital is failing one of its performance targets – the 18 week referral-to-treatment one – and is being threatened with fines and potential loss of its autonomy. The fear at the top drives the threat downwards. Operational managers are forced into action and do so using strategies that have not worked in the past. But they do not have time to learn how to design and test new ones. They are bullied into Plan-Do mode. The hospital is also required to provide safe care and the Plan-Do knee-jerk triggers fear-of-failure in the minds of the clinicians who then angrily oppose the diktat or quietly sabotage it.

This lose-lose scenario is being played out in 100’s if not 1000’s of hospitals across the globe as we speak. The evidence is there for everyone to see.

The inner Chimps are in charge and the outcome is a turf war with casualties on all sides.

So how does The Chimp Paradox help dissolve this seemingly impossible challenge?

First it is necessary to appreciate that both sides are being controlled by their inner Chimps who are reacting from a position of irrational fear and anger. This means that everyone’s behaviour is irrational and their actions likely to be counter-productive.

What is needed is for everyone to be managing their inner Chimps so that the Humans are back in control of the decision making. That way we get wise decisions that lead to effective actions and win-win outcomes. Without chaos and casualties.

To do this we all need to learn how to manage our own inner Chimps … and that is what “The Chimp Paradox” is all about. That is what helped the UK cyclists to become gold medalists.

In the scenario painted above we might observe that the managers are more comfortable in the Pragmatist-Activist (PA) half of the learning cycle. The Plan-Do part of PDSA  – to translate into the language of improvement. The clinicians appear more comfortable in the Reflector-Theorist (RT) half. The Study-Act part of PDSA.  And that difference of preference is fueling the firestorm.

Improvement Science tells us that to achieve and sustain improvement we need all four parts of the learning cycle working  smoothly and in sequence.

So what at first sight looks like a pitched battle that must result in two losers could in reality be a three-legged race that results in everyone winning. But only if synergy between the PA and the RT halves can be achieved.

And that synergy is achieved by learning to respect, understand and manage our inner Chimps.

[Beep Beep] The alarm on Bob’s smartphone was the reminder that in a few minutes his e-mentoring session with Lesley was due. Bob had just finished the e-mail he was composing so he sent it and then fired-up the Webex session. Lesley was already logged in and online.

<Bob> Hi Lesley. What aspect of Improvement Science shall we talk about today? What is next on your map?

<Lesley> Hi Bob. Let me see. It looks like ‘Employee Engagement‘ is the one that we have explored least yet – and it links to lots of other things.

<Bob> OK. What would you say the average level of Employee Engagement is in your organisation at the moment? On a scale of zero to ten where zero is defined as ‘complete apathy’.

<Lesley> Good question. I see a wide range of engagement and I would say the average is about four out of ten.  There are some very visible, fully-engaged, energetic, action-focused  movers-and-shakers.  There are many more nearer the apathy end of the spectrum. Most employees seem to turn up, do their jobs well enough to avoid being disciplined, and then go home.

<Bob> OK. And do you feel that is a problem?

<Lesley> You betcha!  Improvement means change and change means action.  Disengaged employees are a deadweight. They do not actively block change – they will go along with it if pushed – but they do not contribute to making it happen. And that creates a different problem. The movers-and-shakers get frustrated trying to move the deadweight uphill, eventually tire, give up, and then become increasingly critical and cynical. After giving up in despair they actively block any new ideas, saying “Do not try – you will fail.”

<Bob> So how would you describe the emotional state of those you describe as “disengaged”?

<Lesley> Miserable.

<Bob> And who is making them feel miserable?

<Lesley> That is another good question. They appear to be making themselves feel miserable. And it is not what is happening that triggers this emotion. It is what is not happening. Apathy seems to be self-sustaining.

<Bob> Can you explain in a bit more about what you mean by that and maybe share an example?

<Lesley> An example is easier.  I have reflected on this a bit and I have used one of the 6M Design® techniques to help me understand it better.  I used a Right-2-Left® map to compare a personal example of when I felt really motivated and delivered a significant and measurable improvement; with one where I felt miserable and no change happened.

<Bob> Excellent. What did you discover?

<Lesley> I discovered that there were four classes of  difference between the two examples. And I then understood what you mean by ‘Acts and Errors of  Omission and Commission’.

<Bob> OK. And which was the commonest of the four combinations in your example?

<Lesley> The Errors of Omission. And within just that group there were three different types that were most obvious.

<Bob> Can you list them for me?

<Lesley> For sure. The first is the miserableness I felt when what I was doing seemed irrelevant. When what I was being asked to do had no worthwhile purpose that I was aware of.

<Bob> So which was it? No worth or not being aware of the worth?

<Lesley> Me not being aware of the worth. I hoped it was of value to someone higher up the corporate food chain, otherwise I would not have been asked to do it! But I was never sure. And that uncertainty generated some questions. What if what I am doing is of no worth to anyone? What if I am just wasting my lifetime doing it? Those fearful thoughts left me feeling more miserable than motivated.

<Bob> OK. What was the second Error of Omission?

<Lesley> It is linked to the first one. I had no objective way of knowing if I was doing a worthwhile job.  And the word objective is important.  I am not asking for subjective feedback – there is too much expectation, variation, assumption, prejudgement and politics mixed up in opinions of what I achieve.  I needed specific, objective and timely feedback. I associated my feeling of miserableness with not getting objective feedback that told me what I was doing was making a worthwhile difference to someone else. Anyone else!

<Bob> I thought that you get a lot of objective feedback on a whole raft of organisational performance metrics?

<Lesley> Oh yes! We do!! The problem is that it is high level, aggregated, anonymous, and delayed. To get a copy of a report that says as an organisation we did or did not meet last quarter’s arbitrary performance target for x, y or z usually generates a ‘So what? How does that relate to what I do?’ reaction. I need objective, specific and timely feedback about the effects of my work. Good or bad.

<Bob> OK.  And Error of Omission Three?

<Lesley> This was the trickiest one to nail down. What it came down to was being treated as a process and not as a person.  I felt anonymous.  I was just  a headcount, a number on a payroll ledger, an overhead cost. That feeling was actually the most demotivating of all.

<Bob> And did it require all Three Errors of Omission to be present for the ‘miserableness’ to become manifest?

<Lesley> Alas no! Any one of them was enough. The more of them present at the same time, the deeper the feeling of misery and the less motivated I felt.

<Bob> Thank you for being so frank and open. So what have you ‘abstracted’ from your ‘reflection’?

<Lesley> That employee engagement requires that these Three Errors of Omission must be deliberately checked for and proactively addressed if discovered.

<Bob> And who would, could or should do this check-and-correct work?

<Lesley> H’mm. Another very good question. The employee could do it, but it is difficult for them because a lot of the purpose-setting and feedback comes from outside their circle of control and from higher up. Approaching a line-manager with a list of their Errors of Omission would be too much of a challenge!

<Bob> So?

<Lesley> The manager should do it.  They should ask themselves these questions.  Only they can correct their  own Errors of Omission.  I doubt if that would happen spontaneously though! Humility seems a bit of a rare commodity.

<Bob> I agree. So what can the employee do to help their boss?

<Lesley> They could ask how they can be of most value to their boss and they could ask for objective and timely feedback on how well they are performing as an individual on those measures of worth. It sounds so simple and obvious when said out loud. So why does no one do it?

<Bob> A very good question. Some do, and they are often described as ‘motivating leaders’. So does this insight suggest to you any strategies for grasping the ‘Employee Engagement’ nettle without getting stung?

<Lesley> Yes indeed! I am already planning my next action. A chat with my line-manager about what I could do. Thanks Bob.

<Bob> My pleasure. And remember that the same principle works for everyone that we work directly with – especially those immediately ‘upstream’ and ‘downstream’ of us in our daily work.

This is a picture of Chris Hadfield. He is an astronaut and to prove it here he is in the ‘cupola’ of the International Space Station (ISS). Through the windows is a spectacular view of the Earth from space.

Our home seen from space.

What is remarkable about this image is that it even exists.

This image is tangible evidence of a successful outcome of a very long path of collaborative effort by 100’s of 1000’s of people who share a common dream.

That if we can learn to overcome the challenge of establishing a permanent manned presence in space then just imagine what else we might achieve?

Chris is unusual for many reasons. One is that he is Canadian and there are not many Canadian astronauts. He is also the first Canadian astronaut to command the ISS. Another claim to fame is that when he recently lived in space for 5 months on the ISS, he recorded a version of David Bowie’s classic song – for real – in space. To date this has clocked up 21 million YouTube hits and has helped to bring the inspiring story of space exploration back to the public consciousness.

Especially the next generation of explorers – our children.

Chris has also written a book, ‘An Astronaut’s Guide to Life on Earth‘, that tells his story. It describes how he was inspired at a young age by seeing the first man step onto the Moon in 1969. He overcame seemingly impossible obstacles to become an astronaut, to go into space, and to command the ISS. The image is tangible evidence.

We all know that space is a VERY dangerous place.  I clearly remember the two space shuttle disasters. There have been many other much less public accidents.  Those tragic events have shocked us all out of complacency and have created a deep sense of humility in those who face up to the task of learning to overcome the enormous technical and cultural barriers.

Getting six people into space safely, staying there long enough to conduct experiments on the long-term effects of weightlessness, and getting them back again safely is a VERY difficult challenge.  And it has been overcome. We have the proof.

Many of the seemingly impossible day-to-day problems that we face seem puny in comparison.

For example: getting every patient into hospital, staying there just long enough to benefit from cutting edge high-technology healthcare, and getting them back home again safely.

And doing it repeatedly and consistently so that the system can be trusted and we are not greeted with tragic stories every time we open a newspaper. Stories that erode our trust in the ability of groups of well-intended people to do anything more constructive than bully, bicker and complain.

So when the exasperated healthcare executive exclaims ‘Getting 95% of emergency admissions into hospital in less than 4 hours is not rocket science!‘ – then perhaps a bit more humility is in order. It is rocket science.

Rocket science is Improvement Science.

And reading the story of a real-life rocket-scientist might be just the medicine our exasperated executives need.

Because Chris explains exactly how it is done.

And he is credible because he has walked-the-talk so he has earned the right to talk-the-walk.

The least we can do is listen and learn.

Here is Chris answering the question ‘How to achieve an impossible dream?’

The emotional roller-coaster ride that is associated with change, learning and improvement is called the Nerve Curve.

We are all very familiar with the first stages – Shock, Denial, Anger, Bargaining, Depression and Despair. We are less familiar with the stages associated with the long climb out to Resolution, because most improvement initiatives fail for one reason or another.

The critical first step is to “Disprove Impossibility” and this is the first injection of innovation. Someone (the ‘innovator’) discovers that what was believed to be impossible is not. And they only need one example. One Black Swan.

The tougher task is to influence those languishing in the ‘Depths of Despair’ that there is hope and that there is a ‘how’. This is not easy because cynicism is toxic to innovation.  So an experienced Improvement Science Practitioner (ISP) bypasses the cynics and engages with the depressed-but-still-healthy-skeptics.

The challenge now is how to get a shed load of them up the hill.

When we first learn to drive we start on the flat, not on hills,  for a very good reason. Safety.

We need to learn to become confident with the controls first. The brake, the accelerator, the clutch and the steering wheel.  This takes practice until it is comfortable, unconscious and almost second nature. We want to achieve a smooth transition from depression to delight, not chaotic kangaroo jumps!

Only when we can do that on the flat do we attempt a hill-start. And the key to a successful hill start is the sequence.  Hand brake on  for safety, out of gear, engine running, pointing at the goal. Then we depress the clutch and select a low gear – we do not want to stall. Speed is not the goal. Safety comes first. Then we rev the engine to give us the power we need to draw on. Then we ease the clutch until the force of the engine has overcome the force of gravity and we feel the car wanting to move forward. And only then do we ease the handbrake off, let the clutch out more and hit the gas to keep the engine revs in the green.

So when we are planning to navigate a group of healthy skeptics up the final climb of the Nerve Curve we need to plan and prepare carefully.

What is least likely to be successful?

Well, if all we have is our own set of wheels,  a cheap and cheerful mini-motor, then it is not going to be a good idea to shackle a trailer to it; fill the trailer with skeptics and attempt a hill start. We will either stall completely or burn out our clutch. We may even be dragged backwards into the Cynic Infested Toxic Swamp.

So what if we hire a bus, load up our skeptical passengers, and have a go.  We may be lucky –  but if we have no practice doing hill starts with a full bus then we could be heading for disappointment; or disaster.

So what is a safer plan:
1) First we need to go up the mountain ourselves to demonstrate it is possible.
2) Then we take one or two of the least skeptical up in our car to show it is safe.
3) We then invite those skeptics with cars to learn how to do safe hill starts.
4) Finally we ask the ex-skeptics to teach the fresh-skeptics how to do it.

Brmmmm Brmmmm. Off we go.

[Dring] Bob’s laptop signaled the arrival of Leslie for their regular ISP remote mentoring session.

<Bob> Hi Leslie. Thanks for emailing me with a long list of things to choose from. It looks like you have been having some challenging conversations.

<Leslie> Hi Bob. Yes indeed! The deepening gloom and the last few blog topics seem to be polarising opinion. Some are claiming it is all hopeless and others, perhaps out of desperation, are trying the FISH stuff for themselves and discovering that it works. The ‘What Ifs’ are engaged in a war of words with the ‘Yes Buts’.

<Bob> I like your metaphor! Where would you like to start on the long list of topics?

<Leslie> That is my problem. I do not know where to start. They all look equally important.

<Bob> So, first we need a way to prioritise the topics to get the horse-before-the-cart.

<Leslie> Sounds like a good plan to me!

<Bob> One of the problems with the traditional improvement approaches is that they seem to start at the most difficult point. They focus on ‘quality’ first – and to be fair that has been the mantra from the gurus like W.E.Deming. ‘Quality Improvement’ is the Holy Grail.

<Leslie> But quality IS important … are you saying they are wrong?

<Bob> Not at all. I am saying that it is not the place to start … it is actually the third step.

<Leslie> So what is the first step?

<Bob> Safety.  Eliminating avoidable harm. Primum Non Nocere. The NooNoos. The stuff that generates the most fear for everyone.

<Leslie> You mean having a service that we can trust not to harm us unnecessarily?

<Bob> Yes. It is not a good idea to make an unsafe design more efficient – it will deliver even more harm!

<Leslie> OK. That makes perfect sense to me. So how do we do that?

<Bob> The specific method matters less than you might think. Well-designed and thoroughly field-tested checklists have been proven to be very effective in the ‘ultra-safe’ industries like aerospace and nuclear.

<Leslie> OK. Something like the WHO Safe Surgery Checklist?

<Bob> Yes, that is a good example – and it is well worth reading Atul Gawande’s book about how that happened – “The Checklist Manifesto“. Gawande is a surgeon who had published a lot on improvement and even so was quite skeptical that something as simple as a checklist could possibly work in the complex world of surgery. In his book he describes a number of personal ‘Ah Ha!’ moments that illustrate a phenomenon that I call Jiggling.

<Leslie> OK. I have made a note to read Checklist Manifesto and I am curious to learn more about Jiggling – but can we stick to the point? Does quality come after safety?

<Bob> Yes, but not immediately after. As I said, Quality is the third step.

<Leslie> So what is the second one?

<Bob> Flow.

There was a long pause – and just as Bob was about to check that the connection had not been lost – Leslie spoke.

<Leslie> But none of the Improvement Schools teach basic flow science.  They all focus on quality, waste and variation!

<Bob> I know. And attempting to improve quality before improving flow is like papering the walls before doing the plastering.  Quality cannot grow in a chaotic context. The flow must be smooth before that. And the fear of harm must be removed first.

<Leslie> So the ‘Improving Quality through Leadership‘ bandwagon that everyone is jumping on will not work?

<Bob> Well that depends on what the ‘Leaders’ are doing. If they are leading the way to learning how to design-for-safety and then design-for-flow then the bandwagon might be a wise choice. If they are only facilitating collaborative agreement and group-think then they may be making an ineffective system more efficient which will steer it over the edge into faster decline.

<Leslie> So, if we can stabilize safety using checklists do we focus on flow next?

<Bob> Yup.

<Leslie> OK. That makes a lot of sense to me. So what is Jiggling?

<Bob> This is Jiggling. This conversation.

<Leslie> Ah, I see. I am jiggling my understanding through a series of ‘nudges’ by you.

<Bob> Yes. And when the learning cogs are a bit rusty, some Improvement Science Oil and a bit of Jiggling is more effective and much safer than whacking the caveman wetware with a big emotional hammer.

<Leslie> Well the conversation has certainly jiggled Safety-Flow-Quality-and-Productivity into a sensible order for me. That helped a lot. I will sort my to-do list into that order and start at the beginning. Let me see. I have a plan for safety, now I can focus on flow. Here is my top flow niggle. How do I design the resource capacity I need to ensure the flow is smooth and the waiting times are short enough to avoid ‘persecution’ by the Target Time Police?

<Bob> An excellent question! I will send you the first ISP Brainteaser that will nudge us towards an answer to that question.

<Leslie> I am ready and waiting to have my brain-teased and my niggles-nudged!

Systems are built from intersecting streams of work called processes.

This iconic image of the London Underground shows a system map – a set of intersecting transport streams.

Each stream links a sequence of independent steps – in this case the individual stations.  Each step is a system in itself – it has a set of inner streams.

For a system to exhibit stable and acceptable behaviour the steps must be in synergy – literally ‘together work’. The steps also need to be in synchrony – literally ‘same time’. And to do that they need to be aligned to a common purpose. In the case of a transport system the design purpose is to get from A to B safely, quickly, in comfort and at an affordable cost.

In large socioeconomic systems called ‘organisations’ the steps represent groups of people with special knowledge and skills that collectively create the desired product or service.  This creates an inevitable need for ‘handoffs’ as partially completed work flows through the system along streams from one step to another. Each step contributes to the output. It is like a series of baton passes in a relay race.

This creates the requirement for a critical design ingredient: trust.

Each step needs to be able to trust the others to do their part: right-first-time and on-time. All the steps are directly or indirectly interdependent. If any one of them is ‘untrustworthy’ then the whole system will suffer to some degree. If too many generate distrust then the system may fail and can literally fall apart. Trust is like social glue.

So a critical part of people-system design is the development and the maintenance of trust-bonds.

And it does not happen by accident. It takes active effort. It requires design.

We are social animals. Our default behaviour is to trust. We learn distrust by experiencing repeated disappointments. We are not born cynical – we learn that behaviour.

The default behaviour for inanimate systems is disorder – and it has a fancy name – it is called ‘entropy’. There is a Law of Physics that says that ‘the average entropy of a system will increase over time‘. The critical word is ‘average’.

So, if we are not aware of this and we omit to pay attention to the hand-offs between the steps we will observe increasing disorder which leads to repeated disappointments and erosion of trust. Our natural reaction then is ‘self-protect’ which implies ‘check-and-reject’ and ‘check and correct’. This adds complexity and bureaucracy and may prevent further decline – which is good – but it comes at a cost – quite literally.

Eventually an equilibrium will be achieved where our system performance is limited by the amount of check-and-correct bureaucracy we can afford.  This is called a ‘mediocrity trap’ and it is very resilient – which means resistant to change in any direction.


To escape from the mediocrity trap we need to break into the self-reinforcing check-and-reject loop and we do that by developing a design that challenges ‘trust eroding behaviour’.  The strategy is to develop a skill called  ‘smart trust’.

To appreciate what smart trust is we need to view trust as a spectrum: not as a yes/no option.

At one end is ‘nonspecific distrust’ – otherwise known as ‘cynical behaviour’. At the other end is ‘blind trust’ – otherwise known as ‘gullible behaviour’. Neither of these is what we need.

In the middle is the zone of smart trust that spans healthy scepticism  through to healthy optimism.  What we need is to maintain a balance between the two – not to eliminate them. This is because some people are ‘glass-half-empty’ types and some are ‘glass-half-full’. And both views have a value.

The action required to develop smart trust is to respectfully challenge every part of the organisation to demonstrate ‘trustworthiness’ using evidence.  Rhetoric is not enough. Politicians always score very low on ‘most trusted people’ surveys.

The first phase of this smart trust development is for steps to demonstrate trustworthiness to themselves using their own evidence, and then to share this with the steps immediately upstream and downstream of them.

So what evidence is needed?

Safety comes first. If a step cannot be trusted to be safe then that is the first priority. Safe systems need to be designed to be safe.

Flow comes second. If the streams do not flow smoothly then we experience turbulence and chaos, which increases stress and the risk of harm, and creates disappointment for everyone. Smooth flow is the result of careful flow design.

Third is Quality which means ‘setting and meeting realistic expectations‘.  This cannot happen in an unsafe, chaotic system.  Quality builds on Flow which builds on Safety. Quality is a design goal – an output – a purpose.

Fourth is Productivity (or profitability) and that does not automatically follow from the other three as some QI Zealots might have us believe. It is possible to have a safe, smooth, high quality design that is unaffordable.  Productivity needs to be designed too.  An unsafe, chaotic, low quality design is always more expensive.  Always. Safe, smooth and reliable can be highly productive and profitable – if designed to be.

So whatever the driver for improvement the sequence of questions is the same for every step in the system: “How can I demonstrate evidence of trustworthiness for Safety, then Flow, then Quality and then Productivity?”

And when that happens improvement will take off like a rocket. That is the Speed of Trust.  That is Improvement Science in Action.

The modern era in science started about 500 years ago when an increasing number of people started to challenge the dogma that our future is decided by Fates and Gods. That we had no influence. And to appease the ‘Gods’ we had to do as we were told. That was our only hope of Salvation.

This paradigm came under increasing pressure as the evidence presented by Reality did not match the Rhetoric.  Many early innovators paid for their impertinence with their fortunes, their freedom and often their future. They were burned as heretics.

When the old paradigm finally gave way and the Age of Enlightenment dawned the pendulum swung the other way – and the new paradigm became the ‘mechanical universe’. Isaac Newton showed that it was possible to predict, with very high accuracy, the motion of the planets just by adopting some simple rules and a new form of mathematics called calculus. This opened a door into a more hopeful world – if Nature follows strict rules and we know what they are then we can learn to control Nature and get what we need without having to appease any Gods (or priests).

This was the door to the Industrial Revolutions – there have been more than one – each lasting about 100 years (18th C, 19th C and 20th C). Each was associated with massive population growth as we systematically eliminated the causes of early mortality – starvation and infectious disease.

But not everything behaved like the orderly clockwork of the planets and the pendulums. There was still the capricious and unpredictable behaviour that we call Lady Luck. Had the Gods retreated, but were they still playing dice?

Progress was made here too – and the history of the ‘understanding of chance’ is peppered with precocious and prickly mathematical savants who discovered that chance follows rules too. Probability theory was born and that spawned a troublesome child called Statistics. This was a trickier one to understand. To most people statistics is just mathematical gobbledygook.

But from that emerged a concept called the Rational Man – which underpinned the whole of Economic Theory for 250 years. Until very recently. The RM hypothesis stated that we make unconscious but rational judgements when presented with uncertain win/lose choices. And from that seed sprouted concepts such as the Law of Supply and Demand – when the supply of things we demand is limited then we (rationally) value them more and will choose to pay more, so prices go up, so fewer can afford them, so demand drops. Foxes and Rabbits. A negative feedback loop. The economic system becomes self-adjusting and self-stabilising. The outcome of this assumption is a belief that ‘because people are collectively rational the economic system will be self-stabilising and it will correct the adverse short term effects of any policy blunders we make‘. The ‘let-the-market-decide’ belief that experimental economic meddling is harmless over the long term, and that what is learned from ‘laissez-faire’ may even be helpful. It is a no-lose long term improvement strategy. Losers are just unlucky, stupid or both.

In 2002 the Nobel Prize for Economics was not awarded to an economist. It was awarded to a psychologist – Daniel Kahneman – who showed that the model of the Rational Man did not stand up to rigorous psychological experiment.  Reality demonstrated we are Irrational Chimps. The economists had omitted to test their hypothesis. Oops!


This lesson has many implications for the Science of Improvement.  One of which is a deeper understanding of the nemesis of improvement – resistance to change.

One of the surprising findings is that our judgements are biased – and our bias operates at an unconscious level – what Kahneman describes as the System One level. Chimp level. We are not aware we are making biased decisions.

For example, many assume that we prefer certainty to uncertainty. We fear the unpredictable. We avoid it. We seek the predictable and the stable. And we will put up with just about anything so long as it is predictable. We do not like surprises. And when presented with that assertion most people nod and say ‘Yes’ – that feels right.

We also prefer gain to loss.  We love winning. We hate losing. This ‘competitive spirit’ is socially reinforced from day one by our ‘pushy parents’ – we all know the ones – but we all do it to some degree. Do better! Work harder! Be a success! Optimize! Be the best! Be perfect! Be Perfect! BE PERFECT.

So which is more important to us? Losing or uncertainty? This is one question that Kahneman asked. And the answer he discovered was surprising – because it effectively disproved the Rational Man hypothesis.  And this is how a psychologist earned a Nobel Prize for Economics.

Kahneman discovered that loss is more important to us than uncertainty.

To demonstrate this he presented subjects with a choice between two win/lose options; and he presented the choice in two ways. To a statistician and a Rational Man the outcomes were exactly the same in terms of gain or loss.  He designed the experiment to ensure that it was the unconscious judgement that was being measured – the intuitive gut reaction. So if our gut reactions are Rational then the choice and the way the choice was presented would have no significant effect.

There was an effect. The hypothesis was disproved.

The evidence showed that our gut reactions are biased … and in an interesting way.

If we are presented with the choice between a certain gain and an uncertain gain/loss (so the average gain is the same) then we choose the certain gain much more often. We avoid uncertainty. Score so far: Uncertainty 1, Loss 0.

BUT …

If we are presented with a choice between certain loss and an uncertain loss/gain (so the average outcome is again the same) then we choose the uncertain option much more often. This is exactly the opposite of what was expected.

And it did not make any difference if the subject knew the results of the experiment before doing it. The judgement is made out of awareness and communicated to our consciousness via an emotion – a feeling – that biases our slower, logical, conscious decision process.

This means that the sense of loss has more influence on our judgement than the sense of uncertainty.
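Kahneman and Tversky later condensed this asymmetry into a ‘value function’. A minimal sketch, using the parameters they published in 1992 (the numbers are theirs, not this article’s), shows how the curvature alone reproduces both halves of the result: risk-aversion for gains and risk-seeking for losses.

```python
# Sketch of the Kahneman-Tversky prospect theory value function, with
# the 1992 parameters (alpha = 0.88, lambda = 2.25). Diminishing
# sensitivity (alpha < 1) makes the curve concave for gains and convex
# for losses, which predicts the choice reversal described above.
ALPHA = 0.88    # diminishing sensitivity (curvature)
LAMBDA = 2.25   # loss aversion coefficient

def value(x):
    """Subjective value of a gain or loss x."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** ALPHA

# Framing 1: certain gain of 500 vs 50% chance of gaining 1000.
# Both have the same expected value (500) ...
sure_gain = value(500)
gamble_gain = 0.5 * value(1000)
print(sure_gain > gamble_gain)    # True - risk-averse for gains

# Framing 2: certain loss of 500 vs 50% chance of losing 1000.
sure_loss = value(-500)
gamble_loss = 0.5 * value(-1000)
print(gamble_loss > sure_loss)    # True - risk-seeking for losses
```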

This behaviour is hard-wired. It is part of our Chimp brain design. And once we know this we can see the effect of it everywhere.

1. We will avoid the pain of uncertainty and resist any change that might deliver a gain when we believe that future loss is uncertain. We are conservative and over-optimistic.

2. We will accept the pain of uncertainty and only try something new (and risky) when we believe that to do otherwise will result in certain loss. The Backs Against The Wall scenario.  The Cornered Rat is Unpredictable and Dangerous scenario.

This explains why we resist any change right up until the point when we see Reality clearly enough to believe that we are definitely going to lose something important if we do nothing. Lose our reputation, lose our job, lose our security, lose our freedom or lose our lives. That is a transformational event.  A Road to Damascus moment.

Understanding that we behave like curious, playful, social but irrational chimps is one key to unlocking significant and sustained improvement.

We need to celebrate our inner chimp – it is key to innovation.

And we need to learn how to team up with our inner chimp rather than be hijacked by it.

If we do not we will fail – the Laws of Physics, Probability and Psychology decree it.

[Beep Beep] Bob’s laptop signalled the arrival of Leslie to their regular Webex mentoring session. Bob picked up the phone and connected to the conference call.

<Bob> Hi Leslie, how are you today?

<Leslie> Great thanks Bob. I am sorry but I do not have a red-hot burning issue to talk about today.

<Bob> OK – so your world is completely calm and orderly now. Excellent.

<Leslie> I wish! The reason is that I have been busy preparing for the monthly 1-2-1 with my boss.

<Bob> OK. So do you have a few minutes to talk about that?

<Leslie> What can I tell you about it?

<Bob> Can you just describe the purpose and the process for me?

<Leslie> OK. The purpose is improvement – for both the department and the individual. The process is that all departmental managers have an annual appraisal based on their monthly 1-2-1 chats and the performance scores for their departments are used to reward the top 15% and to ‘performance manage’ the bottom 15%.

<Bob> H’mmm.  What is the commonest emotion that is associated with this process?

<Leslie> I would say somewhere between severe anxiety and abject terror. No one looks forward to it. The annual appraisal feels like a lottery where the odds are stacked against you.

<Bob> Can you explain that a bit more for me?

<Leslie> Well, the most fear comes from being in the bottom 15% – the fear of being ‘handed your hat’ so to speak. Fortunately that fear motivates us to try harder and that usually saves us from the chopper because our performance improves.  The cost is the extra stress, working late and taking ‘stuff’ home.

<Bob> OK. And the anxiety?

<Leslie> Paradoxically that mostly comes from the top 15%. They are anxious to sustain their performance. Most do not and the Boss’s Golden Manager can crash spectacularly! We have seen it so often. It is almost as if being the Best carries a curse! So most of us try to stay in the middle of the pack where we do not stick out – a sort of safety in the herd strategy.  It is illogical I know because there is always a ‘top’ 15% and a ‘bottom’ 15%.

<Bob> You mentioned before that it feels like a lottery. How come?

<Leslie> Yes – it feels like a lottery but I know it has a rational scientific basis. Someone once showed me the ‘statistically significant evidence’ that proves it works.

<Bob> That what works exactly?

<Leslie> That sticks are more effective than carrots!

<Bob> Really! And what do the performance run charts look like – over the long term – say monthly over 2-3 years?

<Leslie> That is a really good question. They are surprisingly stable – well, completely stable in fact. They wobble up and down of course but there is no sign of improvement over the long term – no trend. If anything it is the other way.

<Bob> So what is the rationale for maintaining the stick-is-better-than-the-carrot policy?

<Leslie> Ah! The message we are getting  is ‘as performance is not improving and sticks have been scientifically proven to be more effective than carrots then we will be using a bigger stick in future‘.

<Bob> Hence the atmosphere of fear and anxiety?

<Leslie> Exactly. But that is the way it must be I suppose.

<Bob> Actually it is not. This is an invalid design based on rubbish intuitive assumptions and statistical smoke-and-mirrors that creates unmeasurable emotional pain and destroys both people and organisations!

<Leslie> Wow! Bob! I have never heard you use language like that. You are usually so calm and reasonable. This must be really important!

<Bob> It is – and for that reason I need to shock you out of your apathy – and I can do that best by letting you prove it to yourself – scientifically – with a simple experiment. Are you up for that?

<Leslie> You betcha! This sounds like it is going to be interesting. I had better fasten my safety belt! The Nerve Curve awaits.


 The Stick-or-Carrot Experiment

<Bob> Here we go. You will need five coins, some squared-paper and a pencil. Coloured ones are even better.

<Leslie> OK. Does it matter what sort of coins?

<Bob> No. Any will do. Imagine you have four managers called A, B, C and D respectively. Each month the performance of their department is measured as the number of organisational targets that they are above average on. Above average is like throwing a ‘head’, below average is like throwing a ‘tail’. There are five targets – hence the five coins.

<Leslie> OK. That makes sense – and it feels better to use the measured average – we have demonstrated that arbitrary performance targets are dangerous – especially when imposed blindly across all departments.

<Bob> Indeed. So can you design a score sheet to track the data for the experiment?

<Leslie>Give me a minute.  Will this suffice?

[Figure: Stick_and_Carrot_Fig1]

<Bob> Perfect! Now simulate a month by tossing all five coins – once for each manager – and record the outcome of each as H or T, then tot up the number of heads for each manager.

<Leslie>  OK … here is what I got.

[Figure: Stick_and_Carrot_Fig2]

<Bob> Good. Now repeat this 11 more times to give you the results for a whole year. In the n(Heads) column colour the boxes that have scores of 0 or 1 red – these are the Losers. Then colour the boxes that have scores of 4 or 5 green – these are the Winners.

<Leslie> OK, that will take me a few minutes – do you want to get a coffee or something?

[Five minutes later]

Here you go. That gives 48 manager-months of opportunity to win or lose and I counted 9 Losers and 9 Winners – just under 20% each. The majority were in the unexceptional middle. The herd.
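Leslie’s counts sit close to what pure chance predicts; a quick sketch of the binomial expectation for five fair coins:

```python
# Expected Winner/Loser proportions when a month's score is just
# 5 fair coin tosses.
from math import comb

p = {k: comb(5, k) / 2**5 for k in range(6)}   # P(exactly k heads)
p_loser = p[0] + p[1]                          # 0 or 1 heads
p_winner = p[4] + p[5]                         # 4 or 5 heads

print(f"P(Loser)  = {p_loser:.4f}")            # 0.1875 - just under 20%
print(f"P(Winner) = {p_winner:.4f}")           # 0.1875
print(f"expected per year (4 managers x 12 months): {48 * p_loser:.0f} each")
```

So 9 Losers and 9 Winners per simulated year is exactly the chance expectation.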

[Figure: Stick_and_Carrot_Fig3]

<Bob> Excellent. A useful way to visualise this is using a Tally chart. Just run down the column of n(Heads) and create the Tally chart as you go. This is one of the oldest forms of counting in existence. There are fossil records that show Tally charts being used thousands of years ago.

<Leslie> I think I understand what you mean. We do not wait until all the data is in then draw the chart, we update it as we go along – as the data comes in.

<Bob> Spot on!

<Leslie> Let me see. Wow! That is so cool!  I can see the pattern appearing almost magically – and the more data I have the clearer the pattern is.

 <Bob>Can you show me?

<Leslie> Here we go.

[Figure: Stick_and_Carrot_Fig4]

<Bob> Good. This is the expected picture. If you repeated this many times you would get the same general pattern with more 2 and 3 scores.

Now I want you to do an experiment.

Assume each manager that is classed as a Winner in one month is given a reward – a ‘pat on the back’ from their Boss. And each manager that is classed as a Loser is given a ‘written warning’. Now look for  the effect that this has.

<Leslie> But we are using coins – which means the outcome is just a matter of chance! It is a lottery.

<Bob> I know that and you know that but let us assume that the Boss believes that the monthly feedback has an effect. The experiment we are doing is to compare the effect of the carrot with the stick. The Boss wants to know which results in more improvement and to know that with scientific and statistical confidence!

<Leslie> OK. So what I will do is look at the score the following month for each manager that was either a Winner or a  Loser; work out the difference, and then calculate the average of those differences and compare them with each other. That feels suitably scientific!

<Bob> OK. What do you get?

<Leslie> Just a minute, I need to do this carefully. OK – here it is.

[Figure: Stick_and_Carrot_Fig5]

<Bob> Excellent. Just eye-balling the ‘Measured improvement after feedback’ columns I would say the Losers have improved and the Winners have deteriorated!

<Leslie> Yes! And the Losers have improved by 1.29 on average and the Winners have deteriorated by 1.78 – and that is a big difference for such a small sample. I am sure that with enough data this would be a statistically significant difference! So it is true, sticks work better than carrots!

<Bob> Not so fast. What you are seeing is a completely expected behaviour called ‘Regression to the Mean’. Remember we know that the score for each manager each month is the result of a game of chance, a coin toss, a lottery. So no amount of stick or carrot feedback is going to influence that.

<Leslie>But the data is saying there is a difference! And that feels like the experience we have – and why fear stalks the management corridors. This is really confusing!

<Bob> Remember that confusion arises from invalid or conflicting unconscious assumptions. There is a flaw in the statistical design of this experiment. The ‘obvious’ conclusion is invalid because of this flaw. And do not be too hard on yourself. The flaw eluded mathematicians for centuries. But now you know there is one, can you find it?

<Leslie>OMG!  The use of the average to classify the managers into Winners or Losers is the flaw!  That is just a lottery. Who the managers are is irrelevant. This is just a demonstration of how chance works.

But that means … OMG!  If the conclusion is invalid then sticks are not better than carrots and we have been brain-washed for decades into accepting a performance management system that is invalid – and worse still is used to ‘scientifically’ justify systematic persecution! I can see now why you get so angry!

<Bob>Bravo Leslie.  We  need to check your understanding. Does that mean carrots are better than sticks?

<Leslie>No!  The conclusion is invalid because the assumptions are invalid and the design is fatally flawed. It does not matter what the conclusion actually is.

<Bob>Excellent. So what conclusion can you draw?

<Leslie> That this short-term carrot-or-stick feedback design for achieving improvement in a stable system is both ineffective and emotionally damaging. In fact it could well be achieving precisely the opposite of its intended effect. It may be preventing improvement! But the story feels so plausible and the data appears to back it up. What is happening here is that we are using statistical smoke-and-mirrors to justify what we have already decided – and only a true expert would spot the flaw! Once again our intuition has tricked us!

<Bob>Well done! And with this new insight – how would you do it differently?  What would be a better design?

<Leslie>That is a very good question. I am going to have to think about that – before my 1-2-1 tomorrow. I wonder what might happen if I show this demonstration to my Boss? Thanks Bob, as always … lots of food for thought.
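Leslie’s coin experiment can also be replayed in code with a much larger sample, which makes the regression-to-the-mean effect unmistakable. A sketch (the Winner/Loser thresholds follow the dialogue; the sample size is scaled up, and the numbers it prints are simulation output, not Leslie’s):

```python
# Replaying the Stick-or-Carrot experiment with many 'years' of data.
# Each monthly score is 5 fair coin tosses, so any 'feedback' is
# irrelevant - yet Winners appear to deteriorate and Losers to improve.
import random

random.seed(42)
MONTHS, MANAGERS = 1200, 4          # many 'years' for a stable estimate

def month_score():
    """Number of 'heads' out of 5 fair coin tosses."""
    return sum(random.random() < 0.5 for _ in range(5))

scores = [[month_score() for _ in range(MONTHS)] for _ in range(MANAGERS)]

winner_change, loser_change = [], []
for m in range(MANAGERS):
    for t in range(MONTHS - 1):
        change = scores[m][t + 1] - scores[m][t]
        if scores[m][t] >= 4:       # 'Winner' - gets the carrot
            winner_change.append(change)
        elif scores[m][t] <= 1:     # 'Loser' - gets the stick
            loser_change.append(change)

avg = lambda xs: sum(xs) / len(xs)
print(f"Winners' average change: {avg(winner_change):+.2f}")  # negative
print(f"Losers'  average change: {avg(loser_change):+.2f}")   # positive
```

The expected changes are about -1.67 and +1.67 respectively: pure regression to the mean, with no feedback mechanism anywhere in the code.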


Improvement implies change.

Change follows action. Action follows planning. Effective planning follows from an understanding of the system, because that understanding is needed to make the wise decisions that achieve the purpose. The purpose is the intended outcome.

Learning follows from observing the effect of change – whatever it is. Understanding follows from learning to predict the effect of both actions and inactions.

All these pieces of the change jigsaw are different and they are inter-dependent. They fit together. They are a system.

And we can pick out four pieces: the Plan piece, the Action piece, the Observation piece and the Learning piece – and they seem to follow that sequence – it looks like a learning cycle.

This is not a new idea.

It is the same sequence as the Scientific Method: hypothesis, experiment, analysis, conclusion. The preferred tool of  Academics – the Thinkers.

It is also the same sequence as the Shewhart Cycle: plan, do, check, act. The preferred tool of the Pragmatists – the Doers.

So where does all the change conflict come from? What is the reason for the perpetual debate between theorists and activists? The incessant game of “Yes … but!”

One possible cause was highlighted by David Kolb  in his work on ‘experiential learning’ which showed that individuals demonstrate a learning style preference.

We tend to be thinkers or doers and only a small proportion of us say that we are equally comfortable with both.

The effect of this natural preference is that real problems bounce back-and-forth between the Tribe of Thinkers and the Tribe of Doers.  Together we are providing separate parts of the big picture – but as two tribes we appear to be unaware of the synergistic power of the two parts. We are blocked by a power struggle.

The Experiential Learning Model (ELM) was promoted and developed by Peter Honey and Alan Mumford (see learning styles) and their work forms the evidence behind the Learning Style Questionnaire that anyone can use to get their ‘score’ on the four dimensions:

  • Pragmatist – the designer and planner
  • Activist – the action person
  • Reflector – the observer and analyst
  • Theorist – the abstracter and hypothesis generator

The evidence from population studies showed that individuals have a preference for one of these styles, sometimes two, occasionally three and rarely all four.

That observation, together with the fact that learning from experience requires moving around the whole cycle, leads to an awareness that both individuals and groups can get ‘stuck’ in their learning preference comfort zone. If the learning wheel is unbalanced it will deliver an emotionally bumpy ride when it turns! So it may be more comfortable just to remain stationary and not to learn.

Which means not to change. Which means not to improve.


So if we are embarking on an improvement exercise – be it individual or collective – then we are committed to learning. So where do we start on the learning cycle?

The first step is action. To do something – and the easiest and safest thing to do is just look. Observe what is actually happening out there in the real world – outside the office – outside our comfort zone. We need to look outside our rhetorical inner world of assumptions, intuition and prejudgements.

The next step is to reflect on what we see – we look in the mirror – and we compare what we are actually seeing with what we expected to see. That is not as easy as it sounds – and a useful tool to help is to draw charts. To make it visual. All sorts of charts.

The result is often a shock. There is often a big gap between what we see and what we perceive; between what we expect and what we experience; between what we want and what we get.

That emotional shock is actually what we need to power us through the next phase – the Realm of the Theorist – where we ask three simple questions:
Q1: What could be causing the reality that I am seeing?
Q2: How would I know which of the plausible causes is the actual cause?
Q3: What experiment can I do to answer my question and clarify my understanding of Reality?

This is the world of the Academic.

The third step is to design an experiment to test our new hypothesis.  The real world is messy and complicated and we need to be comfortable with ‘good enough’ and ‘reasonable uncertainty’.  Design is about practicalities – making something that works well enough in practice – in the real world. Something that is fit-for-purpose. We are not expecting perfection; not looking for optimum; not striving for best – just significantly better than what we have now. And the more we can test our design before we implement it the better because we want to know what to expect before we make the change and we want to avoid unintended negative consequences – the NooNoos.

Then we do it … and the cycle of learning has come one revolution … but we are not back at the start – we have moved forward. Our understanding is already different from when we were at this stage before: it is deeper and wider.  We are following the trajectory of a spiral – our capability for improvement is expanding over time.

So we need to balance our learning wheel before we start the journey or we will have a slow, bumpy and painful ride!


One plausible approach is to stay inside our comfort zones, play to our strengths and to say “What we need is a team made of people with complementary strengths. We need a Department of Action for the Activists; a Department of Analysis for the Reflectors; a Department of Research for the Theorists and a Department of Planning for the Pragmatists.”

But that is what we have now and what is happening? The Four Departments have become super-specialised and more polarised.  There is little common ground or shared language.  There is no common direction, no co-ordination, no oil on the axle of the wheel of change. We have ground to a halt. We have chaos. Each part is working but independently of the others in an unsynchronised mess.

We have vehicular fibrillation. Change output has dropped to zero.


A better design is for everyone to focus first on balancing their own learning wheel by actively redirecting emotional energy from their comfort zone, their strength,  into developing the next step in their learning cycle.

Pragmatists develop their capability for Action.
Activists develop their capability for Reflection.
Reflectors develop their capability for Hypothesis.
Theorists develop their capability for Design.

The first step in the improvement spiral is Action – so if you are committed to improvement then investing £10 and 20 minutes in the 80-question Learning Style Questionnaire will demonstrate your commitment – not only to others – more importantly to yourself.

 

It is the time of year when our minds turn to self-improvement.

New Year.

We re-affirm our Resolutions from last year and we vow to try harder this year. As we did last year. And the year before that. And we usually fail.

So why do we fail to keep our New Year Resolutions?

One reason is because we do not let go of the past. We get pulled back into old habits too easily. To get a new future we have to do some tidying up. We need to get The Shredder. We need to make the act of letting go irreversible.

Bzzzzzzz …. Aaaaah. That feels better.

Why does this work?

First, because it feels good to be taking definitive action.  We know that resolutions are just good intentions. It is not until we take action that change happens.  Many of us are weak on the Activist dimension. We talk a lot about what we should do but we do not walk as much as we could do.

Second, because  we can see the evidence of the improvement immediately. We get immediate, visual, positive feedback. That heap of old bills and emails and reports that we kept ‘just in case’ is no longer cluttering up our desks, our eyes, our minds and our lives.  And we have ‘recycled’ it which feels even better.

Third, because we have challenged our own Prevarication Policy. And if we can do that for ourselves we can, with some credibility, do the same for others. We feel more competent and more confident.

Fourth, because we have freed up valuable capacity to invest.  More space. More time (our prevarication before kept us busy but wasted our limited time). More motivation (trying to work around a pile of rubbish day-in and day-out is emotionally draining).

So all we need to do in the New Year is stay inside our circle of control and shred some years of accumulated rubbish.

And it is not just tangible rubbish we can dispose of.  We can shred some emotional garbage too. The list of “Yes … But” excuses that we cling on to.  The sack of guilt for past failures that weighs us down. The flag of fear that we wave when we surrender our independence and adopt the Victim role.  The righteous indignation that we use to hide our own self-betrayal.

And just by putting that lot through The Shredder we release the opportunity for improvement.

The rest just happens – as if by magic.

[Hmmmmmm] The desk amplified the vibration of Bob’s smartphone as it signalled the time for his planned e-mentoring session with Leslie.

[Dring Dring]

<Bob> Hi Leslie, right-on-time, how are you today?

<Leslie> Good thanks Bob. I have a specific topic to explore if that is OK. Can we talk about time-traps?

<Bob> OK – do you have a specific reason for choosing that topic?

<Leslie> Yes. The blog last week about ‘Recipe for Chaos‘ set me thinking and I remembered that time-traps were mentioned in the FISH course but I confess, at the time, I did not understand them. I still do not.

<Bob> Can you describe how the ‘Recipe for Chaos‘ blog triggered this renewed interest in time-traps?

<Leslie> Yes – the question that occurred to me was: ‘Is a time-trap a recipe for chaos?’

<Bob> A very good question! What do you feel the answer is?

<Leslie>I feel that time-traps can and do trigger chaos but I cannot explain how. I feel confused.

<Bob>Your intuition is spot on – so can you localize the source of your confusion?

<Leslie>OK. I will try. I confess I got the answer to the MCQ correct by guessing – and I wrote down the answer when I eventually guessed correctly – but I did not understand it.

<Bob>What did you write down?

<Leslie>“The lead time is independent of the flow”.

<Bob> OK. That is accurate – though I agree it is perhaps a bit abstract. One source of confusion may be that there are different causes of time-traps and there is a lot of overlap with other chaos-creating policies. Do you have a specific example we can use to connect theory with reality?

<Leslie> OK – that might explain my confusion.  The example that jumped to mind is the RTT target.

<Bob> RTT?

<Leslie> Oops – sorry – I know I should not use undefined abbreviations. Referral to Treatment Time.

<Bob> OK – can you describe what you have mapped and measured already?

<Leslie> Yes.  When I plot the lead-time for patients in date-of-treatment order the process looks stable but the histogram is multi-modal with a big spike just underneath the RTT target of 18 weeks. What you describe as the ‘Horned Gaussian’ – the sign that the performance target is distorting the behaviour of the system and the design of the system is not capable on its own.

<Bob> OK and have you investigated why there is not just one spike?

<Leslie> Yes – the factor that best explains that is the ‘priority’ of the referral.  The  ‘urgents’ jump in front of the ‘soons’ and both jump in front of the ‘routines’. The chart has three overlapping spikes.

<Bob> That sounds like a reasonable policy for mixed-priority demand. So what is the problem?

<Leslie> The ‘Routine’ group is the one that clusters just underneath the target. The lead time for routines is almost constant but most of the time those patients sit in one queue or another being leap-frogged by other higher-priority patients. Until they become high-priority – then they do the leap frogging.

<Bob> OK – and what is the condition for a time trap again?

<Leslie> That the lead time is independent of flow.

<Bob>Which implies?

<Leslie> Um. let me think. That the flow can be varying but the lead time stays the same?

<Bob> Yup. So is the flow of routine referrals varying?

<Leslie> Not over the long term. The chart is stable.

<Bob> What about over the short term? Is demand constant?

<Leslie>No of course not – it varies – but that is expected for all systems. Constant means ‘over-smoothed data’ – the Flaw of Averages trap!

<Bob>OK. And how close is the average lead time for routines to the RTT maximum allowable target?

<Leslie> Ah! I see what you mean. The average is about 17 weeks and the target is 18 weeks.

<Bob>So what is the flow variation on a week-to-week time scale?

<Leslie>Demand or Activity?

<Bob>Both.

<Leslie>H’mm – give me a minute to re-plot flow as a weekly-aggregated chart. Oh! I see what you mean – the weekly demand and activity are both varying widely and they are not in sync with each other. Work in progress must be wobbling up and down a lot! So how can the lead time variation be so low?

<Bob>What do the flow histograms look like?

<Leslie> Um. Just a second. That is weird! They are both bi-modal with peaks at the extremes and not much in the middle – the exact opposite of what I expected to see! I expected a centered peak.

<Bob>What you are looking at is the characteristic flow fingerprint of a chaotic system – it is called ‘thrashing’.

<Leslie> So I was right!

<Bob> Yes. And now you know the characteristic pattern to look for. So what is the policy design flaw here?

<Leslie>The DRAT – the delusional ratio and arbitrary target?

<Bob> That is part of it – that is the external driver policy. The one you cannot change easily. What is the internally driven policy? The reaction to the DRAT?

<Leslie> The policy of leaving routine patients until they are about to breach then re-classifying them as ‘urgent’.

<Bob>Yes! It is called a ‘Prevarication Policy’ and it is surprisingly and uncomfortably common. Ask yourself – do you ever prevaricate? Do you ever put off ‘lower priority’ tasks until later and then not fill the time freed up with ‘higher priority tasks’?

<Leslie> OMG! I do that all the time! I put low priority and unexciting jobs on a ‘to do later’ heap but I do not sit idle – I do then focus on the high priority ones.

<Bob> High priority for whom?

<Leslie> Ah! I see what you mean. High priority for me. The ones that give me the biggest reward! The fun stuff or the stuff that I get a pat on the back for doing or that I feel good about.

<Bob> And what happens?

<Leslie> The heap of ‘no-fun-for-me-to-do’ jobs gets bigger and I await the ‘reminders’ and then have to rush round in a mad panic to avoid disappointment, criticism and blame. It feels chaotic. I get grumpy. I make more mistakes and I deliver lower-quality work. If I do not get a reminder I assume that the job was not that urgent after all and if I am challenged I claim I am too busy doing the other stuff.

<Bob> Have you avoided disappointment?

<Leslie> Ah! No – the fact that I needed to be reminded meant that I had already disappointed. And not getting a reminder does not prove I have not disappointed either. Most people blame rather than complain. I have just managed to erode other people’s trust in my reliability. I have disappointed myself. I have achieved exactly the opposite of what I intended. Drat!

<Bob> So what is the reason that you work this way? There will be a reason.  A good reason.

<Leslie> That is a very good question! I will reflect on that because I believe it will help me understand why others behave this way too.

<Bob> OK – I will be interested to hear your conclusion.  Let us return to the question. What is the  downside of a ‘Prevarication Policy’?

<Leslie> It creates stress, chaos, fire-fighting, last minute changes, increased risk of errors, more work – and it erodes quality, confidence and trust.

<Bob>Indeed so – and the impact on productivity?

<Leslie> The activity falls, the system productivity falls, revenue falls, queues increase, waiting times increase and the chaos increases!

<Bob> And?

<Leslie> We treat the symptoms by throwing resources at the problem – waiting list initiatives – and that pushes our costs up. Either way we are heading into a spiral of decline and disappointment. We do not address the root cause.

<Bob> So what is the way out of chaos?

<Leslie> Reduce the volume on the destabilizing feedback loop? Stop the managers meddling!

<Bob> Or?

<Leslie> Eh? I do not understand what you mean. The blog last week said management meddling was the problem.

<Bob> It is a problem. How many feedback loops are there?

<Leslie> Two – that need to be balanced.

<Bob> So what is another option?

<Leslie> OMG! I see. Turn UP the volume of the stabilizing feedback loop!

<Bob> Yup. And that is a lot easier to do in reality. So that is your other challenge to reflect on this week. And I am delighted to hear you using the terms ‘stabilizing feedback loop’ and ‘destabilizing feedback loop’.

<Leslie> Thank you. That was a lesson for me after last week – when I used the terms ‘positive and negative feedback’ it was interpreted in the emotional context – positive feedback as encouragement and negative feedback as criticism.  So ‘reducing positive feedback’ in that sense is the exact opposite of what I was intending. So I switched my language to using ‘stabilizing and destabilizing’ feedback loops that are much less ambiguous and the confusion and conflict disappeared.

<Bob> That is very useful learning Leslie … I think I need to emphasize that distinction more in the blog. That is one advantage of online media – it can be updated!

 <Leslie> Thanks again Bob!  And I have the perfect opportunity to test a new no-prevarication-policy design – in part of the system that I have complete control over – me!
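The time-trap condition in the dialogue above – “lead time is independent of flow” – can be sketched numerically using Little’s Law (work in progress = flow × lead time). This is a minimal illustration, not a model of any real system: the weekly demand figures and the 17-week lead time are invented for the purpose.

```python
# Little's Law: work-in-progress = flow rate x lead time.
# A time-trap policy holds the lead time nearly constant (just under
# the 18-week target), so WIP must absorb ALL the demand variation.

demand_per_week = [20, 35, 15, 40, 25, 10, 45, 30]  # illustrative figures
lead_time_weeks = 17                                # near-constant, just under target

wip = [d * lead_time_weeks for d in demand_per_week]

print("WIP ranges from", min(wip), "to", max(wip))
# Lead time variation is zero, yet the work in progress swings widely -
# the 'wobbling up and down a lot' that Leslie observes.
```

The zero lead-time variation is exactly what makes the chart look calm while the queues underneath are thrashing.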

There are only four ingredients required to create Chaos.

The first is Time.

All processes and systems are time-dependent.

The second ingredient is a Metric of Interest (MoI).

That means a system performance metric that is important to all – such as Safety or Quality or Cost; and usually all three.

The third ingredient is a feedback loop of a specific type – it is called a Negative Feedback Loop.  The NFL  is one that tends to adjust, correct and stabilise the behaviour of the system.

Negative feedback loops are very useful – but they have a drawback. They resist change and they reduce agility. The name is also a disadvantage – the word ‘negative feedback’ is often associated with criticism.

The fourth and final ingredient in our Recipe for Chaos is also a feedback loop but one of a different design – a Positive Feedback Loop (PFL) – one that amplifies variation and change.

Positive feedback loops are also very useful – they are required for agility – quick reactions to unexpected events. Fast reflexes.

The downside of a positive feedback loop is that it increases instability.

The name is also confusing – ‘positive feedback’ is associated with encouragement and praise.

So in this context it is better to use the terms ‘stabilizing feedback’ and ‘destabilizing feedback’  loops.

When we mix these four ingredients in just the right amounts we get a system that may behave chaotically. That is surprising. It is counter-intuitive. It is also how the Universe works.

For example:

Suppose our Metric of Interest is the amount of time that patients spend in an Accident and Emergency Department. We know that the longer this time is the less happy they are and the higher the risk of avoidable harm – so it is a reasonable goal to reduce it.

Longer-than-possible waiting times have many root causes – it is a non-specific metric.  That means there are many things that could be done to reduce waiting time and the most effective actions will vary from case-to-case, day-to-day and even minute-to-minute.  There is no one-size-fits-all solution.

This implies that those best placed to correct the causes of the delays are the people who know the specific system well – because they work in it. Those who actually deliver urgent care. They are the stabilizing agent in our Recipe for Chaos.

The destabilizing feedback loop is the beat-the-arbitrary-target policy.

This policy typically involves:
(1) Setting a target that is impossible for the current design to achieve reliably;
(2) inspecting how close to the target we are; then
(3) using the data to justify threats of dire consequences for failure.

Now we have a Recipe for Chaos.

The higher the failure rate the more inspection, reports, meetings, exhortation, threats, interruptions, and interventions that are generated.  Fear-fuelled management meddling. This behaviour consumes valuable time – so leaves less time to do the worthwhile work. The pressure increases and makes the system even more sensitive to small changes. Delays multiply and errors occur more often.  Tempers become frayed and molehills become magnified into mountains. Irritations become arguments.  And all of this makes the problem worse rather than better. Less stable. More variable. More chaotic.

It is actually possible to write a simple equation that describes this characteristic behaviour of real systems – and that was a very surprising finding when it was discovered in 1976 by a mathematician called Robert May.

This equation is called the logistic equation.
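A minimal sketch of May’s logistic map, x(n+1) = r·x(n)·(1 − x(n)), shows both kinds of behaviour his paper describes: a low ‘gain’ r settles to a stable point, while a high r produces deterministic chaos. The parameter values below are illustrative choices, not taken from the paper.

```python
def logistic_trajectory(r, x0=0.5, steps=1000):
    """Iterate the logistic map x -> r*x*(1-x) and return the final value."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Low gain: the system settles to the stable fixed point 1 - 1/r.
stable = logistic_trajectory(r=2.5)
print(round(stable, 6))  # converges to 0.6

# High gain: the SAME deterministic rule never settles - chaos.
# A tiny change in the starting point produces a completely
# different trajectory: sensitive dependence on initial conditions.
a = logistic_trajectory(r=3.9, x0=0.5)
b = logistic_trajectory(r=3.9, x0=0.5000001)
print(a, b)
```

No random number generator appears anywhere in the code – the ‘randomness’ is entirely a product of the deterministic rule and the gain.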

Here is the abstract of his seminal paper.

Nature 261, 459-467 (10 June 1976)

Simple mathematical models with very complicated dynamics

First-order difference equations arise in many contexts in the biological, economic and social sciences. Such equations, even though simple and deterministic, can exhibit a surprising array of dynamical behaviour, from stable points, to a bifurcating hierarchy of stable cycles, to apparently random fluctuations. There are consequently many fascinating problems, some concerned with delicate mathematical aspects of the fine structure of the trajectories, and some concerned with the practical implications and applications. This is an interpretive review of them.

The fact that this chaotic behaviour is completely predictable and does not need any ‘random’ element was a big surprise. Chaotic is not the same as random. The observed chaos in the urgent healthcare system is the result of the design of the system – or more specifically of the current healthcare system management policies.

This has a number of profound implications – the most important of which is this:

If the chaos we observe in our health care systems is the predictable and inevitable result of the management policies we ourselves have created and adopted – then eliminating the chaos will only require us to re-design these policies.

In fact we only need to tweak one of the ingredients of the Recipe for Chaos – the strength of the destabilizing feedback loop. The gain. The volume control on the variation amplifier!

This is called the MM factor – otherwise known as ‘Management Meddling‘.

We need to keep all four ingredients though – because we need our system to have both agility and stability.

The flaw is not the Managers themselves – it is their learned behaviour – the Meddling.  It is learned so it can be unlearned. We need to keep the Managers and to change their role slightly. As they unlearn their old habits they move from being Policy-Enforcers and Fire-Fighters to becoming Policy-Engineers and Chaos-Calmers. They focus on learning to understand the root causes of variation that come from outside the circle of influence of the Workers.   They learn how to rationally and radically redesign system policies to achieve both agility and stability.

And doing that requires developing systemic-thinking and learning Improvement Science skills – because chaos is counter-intuitive. If it were intuitively-obvious we would have discovered the nature of chaos thousands of years ago. The fact that it was not discovered until 1976 demonstrates this.

It is our homo sapiens intuition that got us into this mess!  The inherent flaws of the  caveman wetware between our ears.  Our current management policies are intuitively-obvious, collectively-agreed, rubber-stamped and wrong! They are a Recipe for Chaos.

And when we learn to re-design our system policies and upload the new system software then the chaos evaporates as if a magic wand had been waved.

That comes as a big surprise!

And what also comes as a big surprise is just how small the counter-intuitive policy design tweaks often are.

Smooth, effective, efficient, safe and productive flow is restored. Calm confidence reigns. Safety, Quality and Productivity all increase – at the same time.  The emotional storm clouds dissipate and the cultural sun shines again.

Everyone feels better. Everyone. Patients, workers and managers.

This is Win-Win-Win improvement by design. Improvement Science.

If we were exploring the corridors in an unfamiliar building and our way forward was blocked by a door that looked like this … we would suspect that something of value lay beyond.

We know there is an unknown.

The puzzle we have to solve to release the chain tells us this. This is called an “affordance” – the design of the lock tells us what we need to do.

More often however what we need to move forward is unknown to us and the problems we face afford no clues as to how to solve them.  Worse than that – the clues they do offer are misleading. Our intuition is tricked. We do the ‘intuitively obvious’ thing and the problem gets worse.

It is easy to lose confidence, become despondent and even start to believe there is no solution. To assume the problem is impossible for us to solve.

Then one day someone shows us how to solve an “impossible” problem. And with the benefit of our new perspective the solution looks simple and how it works is obvious. But only in retrospect.

Our unknown was known all along. But not by us. We were ignorant.

And our intuitions are flaky, forgetful and fickle. Not to be trusted. And our egos are fragile too – we do not like to feel flaky, forgetful and fickle. So we lie to ourselves and we confuse obvious-in-hindsight with obvious-in-foresight. They are not the same.

Suppose we now want to demonstrate our new understanding to someone else – to help them solve their “impossible” problem. How do we do that?

Do we say “But it is obvious – if you cannot see it you must be blind or stupid!”? How can we say that when it was not obvious to us only a short time ago? Is our ego getting in the way again? Can our intuition or ego be trusted at all?

To help others gain insight and deepen their understanding we must put ourselves back into the shoes we used to be in – or rather their shoes now – and look at the problem again from their perspective. With the benefit of three views of the problem – our old one, their current one and our new one – we may be able to see where the Unknown-Known is for them.

Only then can we help them discover it for themselves … and then they can help others discover their Unknown-Knowns.  That is how understanding spreads.

And understanding is the bridge between Knowledge and Wisdom.

And it is a wonderful thing to see someone move from confusion to clarity by asking them just the right question at just the right time in just the right way.

No more than that.

Socrates knew how to do this a long time ago – which is why it is called the Socratic Method.

 

A healthcare system has two inter-dependent parts. Let us call them the ‘hardware’ and the ‘software’ – terms we are more familiar with when referring to computer systems.

In a computer the critical-to-success software is called the ‘operating system’ – and we know that by the brand labels such as Windows, Linux, MacOS, or Android. There are many.

It is the O/S that makes the hardware fit-for-purpose. Without the O/S the computer is just a box of hot chips. A rather expensive room heater.

All the programs and apps that we use to deliver our particular information service require the O/S to manage the actual hardware. Without a coordinator there would be chaos.

In a healthcare system the ‘hardware’ is the buildings, the equipment, and the people.  They are all necessary – but they are not sufficient on their own.

The ‘operating system’ of a healthcare system is its set of management policies: the ‘instructions’ that guide the ‘hardware’ to do what is required, when it is required and sometimes how it is required.  These policies are created by managers – they are the healthcare operating system design engineers so-to-speak.

Change the O/S and you change the behaviour of the whole system – it may look exactly the same – but it will deliver a different performance. For better or for worse.


The invention of the transistor in 1947 led to the first commercially viable transistorised computers in the 1950s. They were faster, smaller, more reliable, cheaper to buy and cheaper to maintain than their predecessors. They were also programmable.  And with many separate customer programs demanding hardware resources – an effective and efficient operating system was needed. So the understanding of “good” O/S design developed quickly.

In the 1960’s the first integrated circuits appeared and the computer world became dominated by mainframe computers. They filled air-conditioned rooms with gleaming cabinets tended lovingly by white-coated technicians carrying clipboards. Mainframes were, and still are, very expensive to build and to run! The valuable resource that was purchased by the customers was ‘CPU time’.  So the operating systems of these machines were designed to squeeze every microsecond of value out of the expensive-to-maintain CPU: for very good commercial reasons. Delivering the “data processing jobs” right, on-time and every-time was paramount.

The design of the operating system software was critical to the performance and to the profit.  So a lot of brain power was invested in learning how to schedule jobs; how to orchestrate the parts of the hardware system so that they worked in harmony; how to manage data buffers to smooth out flow and priority variation; how to design efficient algorithms for number crunching, sorting and searching; and how to switch from one task to the next quickly and without wasting time or making errors.

Every modern digital computer has inherited this legacy of learning.

In the 1970’s the first commercial microprocessors appeared – which reduced the size and cost of computers by orders of magnitude again – and increased their speed and reliability even further. Silicon Valley blossomed and although the first micro-chips were rather feeble in comparison with their mainframe equivalents they ushered in the modern era of the desktop-sized personal computer.

In the 1980’s players such as Microsoft and Apple appeared to exploit this vast new market. The difference was that Microsoft offered just the operating system for the new IBM-PC hardware (called MS-DOS); while Apple created both the hardware and the software as a tightly integrated system.

The ergonomic-seamless-design philosophy at Apple led to the Apple Mac which revolutionised personal computing. It made them usable by people who had no interest in the innards or in programming. The Apple Macs were the “designer” computers and were reassuringly more expensive. The innovations that Apple designed into the Mac are now expected in all personal computers as well as the latest generations of smartphones and tablets.

Today we carry more computing power in our top pocket than a mainframe of the 1970’s could deliver! The design of the operating system has hardly changed though.

It was the O/S design that leveraged the maximum potential of the very expensive hardware.  And that is still the case – but we take it completely for granted.


Exactly the same principle applies to our healthcare systems.

The only difference is that the flow is not 1’s and 0’s – it is patients and all the things needed to deliver patient care. The ‘hardware’ is the expensive part to assemble and run – and the largest cost is the people.  Healthcare is a service delivered by people to people. Highly-trained nurses, doctors and allied healthcare professionals are expensive.

So the key to healthcare system performance is high quality management policy design – the healthcare operating system (HOS).

And here we hit a snag.

Our healthcare management policies have not been designed using the same rigor as the operating systems for our computers. They have not been designed using the well-understood principles of flow physics. The various parts of our healthcare system do not work well together. The flows are fractured. The silos work independently. And the ubiquitous symptom of this dysfunction is confusion, chaos and conflict.  The managers and the doctors are at each others throats. And this is because the management policies have evolved through a largely ineffective and very inefficient strategy called “burn-and-scrape”. Firefighting.

The root cause of the poor design is that neither healthcare managers nor the healthcare workers are trained in operational policy design. Design for Safety. Design for Quality. Design for Delivery. Design for Productivity.

And we are all left with a lose-lose-lose legacy: a system that is no longer fit-for-purpose and a generation of managers and clinicians who have never learned how to design the operational and clinical policies that ensure the system actually delivers what the ‘hardware’ is capable of delivering.


For example:

Suppose we have a simple healthcare system with three stages called A, B and C.  All the patients flow through A, then to B and then to C.  Let us assume these three parts are managed separately as departments with separate budgets and that they are free to use whatever policies they choose so long as they achieve their performance targets – which are (a) to do all the work and (b) to stay in budget and (c) to deliver on time.  So far so good.

Now suppose that the work that arrives at Department B from Department A is not all the same and different tasks require different pathways and different resources. A Radiology, Pathology or Pharmacy Department for example.

Sorting the work into separate streams and having expensive special-purpose resources sitting idle waiting for work to arrive is inefficient and expensive. It will push up the unit cost – the total cost divided by the total activity. This is called ‘carve-out’.

Switching resources from one pathway to another takes time and that change-over time implies some resources are not able to do the work for a while.  These inefficiencies will contribute to the total cost and therefore push up the “unit-cost”. The total cost for the department divided by the total activity for the department.

So Department B decides to improve its “unit cost” by deploying a policy called ‘batching’.  It starts to sort the incoming work into different types of task and when a big enough batch has accumulated it then initiates the change-over. The cost of the change-over is shared by the whole batch. The “unit cost” falls because Department B is now able to deliver the same activity with fewer resources because they spend less time doing the change-overs. That is good. Isn’t it?

But what is the impact on Departments A and C and what effect does it have on delivery times and work in progress and the cost of storing the queues?

Department A notices that it can no longer pass work to B when it wants because B will only start the work when it has a full batch of requests. The queue of waiting work sits inside Department A.  That queue takes up space and that space costs money but the queue cost is incurred by Department A – not Department B.

What Department C sees is the order of the work changed by Department B to create a bigger variation in lead times for consecutive tasks. So if the whole system is required to achieve a delivery time specification – then Department C has to expedite the longest waiters and delay the shortest waiters – and that takes work,  time, space and money. That cost is incurred by Department C not by Department B.

The unit costs for Department B go down – and those for A and C both go up. The system is less productive as a whole.  The queues and delays caused by the policy change means that work can not be completed reliably on time. The blame for the failure falls on Department C.  Conflict between the parts of the system is inevitable. Lose-Lose-Lose.
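The arithmetic behind this example can be sketched with a toy deterministic model. All the numbers are illustrative assumptions: tasks arrive every `a` minutes, each takes `s` minutes of service, and a change-over of `c` minutes is shared across each batch of size `B`.

```python
def unit_server_time(B, s=5.0, c=20.0):
    """Server minutes consumed per task: the change-over cost is shared by the batch."""
    return s + c / B

def avg_lead_time(B, a=10.0, s=5.0, c=20.0):
    """Average lead time per task: a task waits (B-1)/2 arrival intervals on average
    for the batch to fill, then the change-over, then its turn in service."""
    return (B - 1) / 2 * a + c + (B + 1) / 2 * s

for B in (1, 8):
    print(f"batch={B}: server time/task={unit_server_time(B):.1f} min, "
          f"average lead time={avg_lead_time(B):.1f} min")
# Department B's 'unit cost' falls with batching, but the lead time
# seen by the rest of the system roughly triples.
```

Department B’s local metric improves while the whole-system delay gets worse – which is the conflict the example describes.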

And conflict is always expensive – on all dimensions – emotional, temporal and financial.


The policy design flaw here looks like it is ‘batching’ – but that policy is just a reaction to a deeper design flaw. It is a symptom.  The deeper flaw is not even the use of ‘unit costing’. That is a useful enough tool. The deeper flaw is the incorrect assumption that improving the unit costs of the stages independently will always improve whole-system productivity.

This is incorrect. This error is the result of ‘linear thinking’.

The Laws of Flow Physics do not work like this. Real systems are non-linear.

To design the management policies for a non-linear system using linear-thinking is guaranteed to fail. Disappointment and conflict are inevitable. And that is what we have. As system designers we need to use ‘systems-thinking’.

This discovery comes as a bit of a shock to management accountants. They feel rather challenged by the assertion that some of their cherished “cost improvement policies” are actually making the system less productive. Precisely the opposite of what they are trying to achieve.

And it is the senior management that decide the system-wide financial policies so that is where the linear-thinking needs to be challenged and the ‘software patch’ applied first.

It is not a major management software re-write. Just a minor tweak is all that is required.

And the numbers speak for themselves. It is not a difficult experiment to do.


So that is where we need to start.

We need to learn Healthcare Operating System design and we need to learn it at all levels in healthcare organisations.

And that system-thinking skill has another name – it is called Improvement Science.

The good news is that it is a lot easier to learn than most people believe.

And that is a big shock too – because how to do this has been known for 50 years.

So if you would like to see a real and current example of how poor policy design leads to falling productivity and then how to re-design the policies to reverse this effect have a look at Journal Of Improvement Science 2013:8;1-20.

And if you would like to learn how to design healthcare operating policies that deliver higher productivity with the same resources then the first step is FISH.

Improvement Science requires the effective, efficient and coordinated use of diagnosis, design and delivery tools.

Experience has also taught us that it is not just about the tools – each must be used as it was designed.

The craftsman knows his tools and knows what instrument to use, where and when the context dictates; and how to use it with skill.

Some tools are simple and effective – easy to understand and to use. The kitchen knife is a good example. It does not require an instruction manual to use it.

Other tools are more complex. Very often because they have a specific purpose. They are not generic. And they may not be intuitively obvious how to use them.  Many labour-saving household appliances have specific purposes: the microwave oven, the dish-washer and so on – but they have complex controls and settings that we need to manipulate to direct the “domestic robot” to deliver what we actually want.  Very often these controls are not intuitively obvious – we are dealing with a black box – and our understanding of what is happening inside is vague.

Very often we do not understand how the buttons and dials that we can see and touch – the inputs – actually influence the innards of the box to determine the outputs. We do not have a mental model of what is inside the Black Box. We do not know – we are ignorant.

In this situation we may resort to just blindly following the instructions;  or blindly copying what someone else does; or blindly trying random combinations of inputs until we get close enough to what we want. No wiser at the end than we were at the start.  The common thread here is “blind”. The box is black. We cannot see inside.

And the complex black box is deliberately made so – because the supplier of the super-tool does not want their “secret recipe” to be known to all – least of all their competitors.

This is a perfect recipe for confusion and for conflict. Lose-Lose-Lose.

Improvement Science is dedicated to eliminating confusion and conflict – so Black Box Tools are NOT on the menu.

Improvement Scientists need to understand how their tools work – and the best way to achieve that level of understanding is to design and build their own.

This may sound like re-inventing the wheel but it is not about building novel tools – it is about re-creating the tried and tested tools – for the purpose of understanding how they work. And understanding their strengths, their weaknesses, their opportunities and their risks or threats.

And doing that requires guidance from a mentor who has been through this same learning journey. Starting with simple, intuitive tools, and working step-by-step to design, build and understand the more complex ones.

So where do we start?

In the FISH course the first tool we learn to use is a Gantt Chart.

It was invented by Henry Laurence Gantt about 100 years ago and requires nothing more than pencil and paper. Coloured pencils and squared paper are even better.

This is an example of a Gantt Chart for a Day Surgery Unit.

At the top are the “tasks” – patients 1 and 2; and at the bottom are the “resources”.

Time runs left to right.

Each coloured bar appears twice: once on each chart.

The power of a Gantt Chart is that it presents a lot of information in a very compact and easy-to-interpret format. That is what Henry Gantt intended.

A Gantt Chart is like the surgeon’s scalpel. It is a simple, generic easy-to-create tool that has a wide range of uses. The skill is knowing where, when and how to use it: and just as importantly where-not, when-not and how-not.
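A Gantt chart really does need nothing more than pencil and paper, and a few lines of code can render one as plain text. The resources, tasks and times below are invented purely for illustration, not taken from the Day Surgery example above.

```python
# Render a minimal text Gantt chart: one row per resource,
# one character per time slot, letters marking which task holds it.
schedule = [  # (resource, task, start, duration) - illustrative only
    ("Theatre",  "A", 0, 4),
    ("Theatre",  "B", 5, 3),
    ("Recovery", "A", 4, 3),
    ("Recovery", "B", 8, 2),
]

width = max(start + dur for _, _, start, dur in schedule)
rows = {}
for resource, task, start, dur in schedule:
    row = rows.setdefault(resource, ["."] * width)
    for t in range(start, start + dur):
        row[t] = task

for resource, row in rows.items():
    print(f"{resource:>8} |{''.join(row)}|")
# Time runs left to right; each task's bar appears once per resource row.
```

Each coloured bar of the pencil-and-paper version becomes a run of letters – the same compact, easy-to-interpret format Henry Gantt intended.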

The second tool that an Improvement Scientist learns to use is the Shewhart or time-series chart.

It was invented about 90 years ago.

This is a more complex tool and as such there is a BIG danger that it is used as a Black Box with no understanding of the innards.  The SPC and Six-Sigma Zealots sell it as a Magic Box. It is not.

We could paste any old time-series data into a bit of SPC software; twiddle with the controls until we get the output we want; and copy the chart into our report. We could do that and hope that no-one will ask us to explain what we have done and how we have done it. Most do not because they do not want to appear ‘ignorant’. The elephant is in the room though.  There is a conspiracy of silence.

The elephant-in-the-room is the risk we take when use Black Box tools – the risk of GIGO. Garbage In Garbage Out.

And unfortunately we have a tendency to blindly trust what comes out of the Black Box that a plausible Zealot tells us is “magic”. This is the Emperor’s New Clothes problem.  Another conspiracy of silence follows.

The problem here is not the tool – it is the desperate person blindly wielding it. The Zealots know this and they warn the Desperados of the risk and offer their expensive Magician services. They are not interested in showing how the magic trick is done though! They prefer the Box to stay Black.

So, to avoid this cat-and-mouse scenario, and to understand and use both the simpler and the more complex tools effectively and safely, we need to be able to build one for ourselves.

And the know-how to do that is not obvious – if it were we would have already done it – so we need guidance.

And once we have built our first one – a rough-and-ready working prototype – then we can use the existing ones that have been polished with long use. And we can appreciate the wisdom that has gone into their design. The Black Box becomes Transparent.
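To illustrate how small that rough-and-ready prototype can be, here is a sketch of the innards of one such tool, an XmR (Shewhart individuals) chart, in Python. The weekly counts are made-up data; 2.66 is the standard XmR scaling constant for the natural process limits.

```python
# A rough-and-ready prototype of the innards of an XmR (Shewhart individuals)
# chart: no Black Box, just arithmetic. The data are made up; 2.66 is the
# standard XmR scaling constant for the natural process limits.

def xmr_limits(data):
    """Return (mean, lower natural process limit, upper natural process limit)."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

weekly_counts = [52, 48, 55, 50, 47, 53, 49, 51, 54, 46]
mean, lower, upper = xmr_limits(weekly_counts)
print(f"mean={mean:.1f}  limits=({lower:.1f}, {upper:.1f})")
# prints: mean=50.5  limits=(38.1, 62.9)
```

Once we have built and tested something like this ourselves, the output of any polished SPC software stops being magic: we know what it is doing and why.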

So learning how to build the essential tools is the first part of the Improvement Science Practitioner (ISP) training – because without that knowledge it is difficult to progress very far. And without that understanding it is impossible to teach anyone anything other than to blindly follow a Black Box recipe.

Of course Magic Black Box Solutions Inc will not warm to this idea – they may not want to reveal what is inside their magic product. They are fearful that their customers may discover that it is much simpler than they are being told.  And we can test that hypothesis by asking them to explain how it works in language that we can understand. If they cannot (or will not) then we may want to keep looking for someone who can and will.

<Lesley>Hi Bob! How are you today?

<Bob>OK thanks Lesley. And you?

<Lesley>I am looking forward to our conversation. I have two questions this week.

<Bob>OK. What is the first one?

<Lesley>You have taught me that improvement-by-design starts with the “purpose” question and that makes sense to me. But when I ask that question in a session I get an “eh?” reaction and I get nowhere.

<Bob>Quod facere bonum opus et quomodo te cognovi unum?

<Lesley>Eh?

<Bob>I asked you a purpose question.

<Lesley>Did you? What language is that? Latin? I do not understand Latin.

<Bob>So although you recognize the language you do not understand what I asked, the words have no meaning. So you are unable to answer my question and your reaction is “eh?”. I suspect the same is happening with your audience. Who are they?

<Lesley>Front-line clinicians and managers who have come to me to ask how to solve their problems. Their Niggles. They want a how-to-recipe and they want it yesterday!

<Bob>OK. Remember the Temperament Treacle conversation last week. What is the commonest Myers-Briggs Type preference in your audience?

<Lesley>It is xSTJ – tough-minded Guardians. We did that exercise. It was good fun! Lots of OMG moments!

<Bob>OK – is your “purpose” question framed in a language that the xSTJ preference will understand naturally?

<Lesley>Ah! Probably not! The “purpose” question is future-focused, conceptual, strategic, value-loaded and subjective.

<Bob>Indeed – it is an iNtuitor question. xNTx or xNFx. Pose that question to a roomful of academics or executives and they will debate it ad infinitum.

<Lesley>More Latin – but that phrase I understand. You are right.  And my own preference is xNTP so I need to translate my xNTP “purpose” question into their xSTJ language?

<Bob>Yes. And what language do they use?

<Lesley>The language of facts, figures, jobs-to-do, work-schedules, targets, budgets, rational, logical, problem-solving, tough-decisions, and action-plans. Objective, pragmatic, necessary stuff that keeps the operational-wheels-turning.

<Bob>OK – so what would “purpose” look like in xSTJ language?

<Lesley>Um. Good question. Let me start at the beginning. They came to me in desperation because they are now scared enough to ask for help.

<Bob>Scared of what?

<Lesley>Unintentionally failing. They do not want to fail and they do not need beating with sticks. They are tough enough on themselves and each other.

<Bob>OK that is part of their purpose. The “Avoid” part. The bit they do not want. What do they want? What is the “Achieve” part? What is their “Nice If”?

<Lesley>To do a good job.

<Bob>Yes. And that is what I asked you – but in an unfamiliar language. Translated into English I asked “What is a good job and how do you know you are doing one?”

<Lesley>Ah ha! That is it! That is the question I need to ask. And that links in the first map – The 4N Chart®. And it links in measurement, time-series charts and BaseLine© too. Wow!

<Bob>OK. So what is your second question?

<Lesley>Oh yes! I keep getting asked “How do we work out how much extra capacity we need?” and I answer “I doubt that you need any more capacity.”

<Bob>And their response is?

<Lesley>Anger and frustration! They say “That is obvious rubbish! We have a constant stream of complaints from patients about waiting too long and we are all maxed out so of course we need more capacity! We just need to know the minimum we can get away with – the what, where and when so we can work out how much it will cost for the business case.”

<Bob>OK. So what do they mean by the word “capacity”. And what do you mean?

<Lesley>Capacity to do a good job?

<Bob>Very quick! Ho ho! That is a bit imprecise and subjective for a process designer though. The Laws of Physics need the terms “capacity”, “good” and “job” clearly defined – with units of measurement that are meaningful.

<Lesley>OK. Let us define “good” as “delivered on time” and “job” as “a patient with a health problem”.

<Bob>OK. So how do we define and measure capacity? What are the units of measurement?

<Lesley>Ah yes – I see what you mean. We touched on that in FISH but did not go into much depth.

<Bob>Now we dig deeper.

<Lesley>OK. FISH talks about three interdependent forms of capacity: flow-capacity, resource-capacity, and space-capacity.

<Bob>Yes. They are the space-and-time capacities. If we are too loose with our use of these and treat them as interchangeable then we will create the confusion and conflict that you have experienced. What are the units of measurement of each?

<Lesley>Um. Flow-capacity will be in the same units as flow, the same units as demand and activity – tasks per unit time.

<Bob>Yes. Good. And space-capacity?

<Lesley>That will be in the same units as work in progress or inventory – tasks.

<Bob>Good! And what about resource-capacity?

<Lesley>Um – Will that be resource-time – so time?

<Bob>Actually it is resource-time per unit time. So they have different units of measurement. It is invalid to mix them up any-old-way. It would be meaningless to add them for example.
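Bob's units point can be sketched in a few lines of Python: track each capacity's units as exponents of tasks, time and resource-time, and adding mismatched units fails. The numbers and the three-exponent unit scheme are illustrative assumptions, not part of the conversation.

```python
# A sketch of the dimensional point: the three capacities carry different
# units, so adding them is meaningless. The values and the unit scheme
# (exponents of tasks, time, resource-time) are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    units: tuple  # exponents of (tasks, time, resource-time)

    def __add__(self, other):
        # Addition is only valid when the units match exactly.
        if self.units != other.units:
            raise ValueError("cannot add quantities with different units")
        return Quantity(self.value + other.value, self.units)

flow_capacity     = Quantity(4.0, (1, -1, 0))  # tasks per hour
space_capacity    = Quantity(6.0, (1,  0, 0))  # tasks (work in progress)
resource_capacity = Quantity(8.0, (0, -1, 1))  # resource-hours per hour

try:
    flow_capacity + space_capacity
except ValueError as err:
    print(err)  # prints: cannot add quantities with different units
```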

<Lesley>OK. So I cannot see how to create a valid combination from these three! I cannot get the units of measurement to work.

<Bob>This is a critical insight. So what does that mean?

<Lesley>There is something missing?

<Bob>Yes. Excellent! Your homework this week is to work out what the missing pieces of the capacity-jigsaw are.

<Lesley>You are not going to tell me the answer?

<Bob>Nope. You are doing ISP training now. You already know enough to work it out.

<Lesley>OK. Now you have got me thinking. I like it. Until next week then.

<Bob>Have a good week.