Common-sense tells us that to achieve system-wide improvement we need to grasp the “culture nettle”.

Most of us believe that culture drives attitudes; and attitudes drive behaviour; and behaviour drives improvement.

Therefore to get improvement we must start with culture.

And that requires effective leadership.

So our unspoken assumptions about how leaders motivate our behaviour seem rather important to understand.

In 1960 a book was published with the title “The Human Side of Enterprise” which went right to the heart of this issue.  The author was Doug McGregor, a social scientist, and his explanation of why improvement appears to be so difficult in large organisations was a paradigm shift in thinking.  His book inspired many leaders to try a different approach – and they discovered that it worked and that enterprise-wide transformation followed.  The organisations that these early-adopters led evolved into commercial successes and more enjoyable places to work.

The new leaders learned to create the context for change – not to dictate the content.

Since then social scientists have disproved many other ‘common sense’ beliefs by applying a rigorous scientific approach and using robust evidence.

They have busted the culture-drives-change myth …. the evidence shows that it is the other way around … change drives culture.

And what changes first is behaviour.

We are social  animals …. most of us are much more likely to change our behaviour if we see other people doing the same.  We do not like being too different.

As we speak there is a new behaviour spreading – having a bucket of cold water tipped over your head as part of a challenge to raise money for charity.

This craze has a positive purpose … feeling good about helping others through donating money to a worthwhile cause … but most of us need a nudge to get us to do it.

Seeing well-known public figures having iced-water dumped on them in a picture or video shared through multiple, parallel, social media channels is a powerful cultural signal that says “This new behaviour is OK”.

Exhortation and threats are largely ineffective – fear will move people – it will scatter them, not align them. Shaming-and-blaming into behaving differently is largely ineffective too – it generates short-term anger and long-term resentment.

This is what Doug McGregor highlighted over half a century ago … and his message is timeless.

“… the research evidence indicates quite clearly that skillful and sensitive membership behaviour is the real clue to effective group operation.”

Appreciating this critical piece of evidence opens a new door to system-wide improvement … one that we can all walk through:  Sharing improvement stories.

Sharing stories of actions that others have done and the benefits they achieved as a result; and also sharing stories of things that we ourselves have done and achieved.

Stories of small changes that delivered big benefits for others and for ourselves.  Win-win-wins. Stories of things that took little time and little effort to do because they fell inside our circles of control.

See-and-Share is an example of skillful and sensitive membership behaviour.

Effective leaders are necessary … yes … they are needed to create the context for change. It is we members who create and share the content.

Improvement implies learning – new experiences, new insights, new models and new ways of doing things.

So understanding the process of learning is core to the science of improvement.

What many people do not fully appreciate is that we differ in the way we prefer to learn.  These are habitual behaviours that we have acquired.

The diagram shows one model – the Honey and Mumford model that evolved from an earlier model described by Kolb.

One interesting feature of this diagram is the two dimensions – Perception and Processing – which are essentially the same as the two core dimensions in the Myers-Briggs Type Indicator.

What the diagram above does not show so well is that the process of learning is a cycle – the clockwise direction in this diagram – Pragmatist then Activist then Reflector then Theorist and back to Pragmatist.

This is the PART sequence.  And it can start at any point … ARTP, RTPA, TPAR.

We all use all of these learning styles – but we have a preference for some more than others – our preferred learning styles are our learning comfort zones.

The large observational studies conducted in the 1980′s using the PART model revealed that most people have moderate to strong preferences for only one or two of these styles. Less than 20% have a preference for three and very few feel equally comfortable with all four.

The commonest patterns are illustrated by the left and right sides of the diagram: the Pragmatist-Activist combination and the Reflector-Theorist combination.

It is not that one is better than the other … all four are synergistic and an effective and efficient learning process requires being comfortable with using all four in a continuous sequence.

Imagine this as a wheel – an imbalance between the four parts represents a distorted wheel. So when this learning wheel ‘turns’  it delivers an emotionally bumpy ‘ride’.  Past experience of being pushed through this pain-and-gain process will tend to inhibit or even block learning completely.

So to get a more comfortable learning journey we first need to balance our PART wheel – and that implies knowing what our preferred styles are and then developing the learning styles that we use least to build our competence and confidence with them.  And that is possible because these are learned habits. With guidance, focus and practice we can all strengthen our less favoured learning ‘muscles’.

Those with a preference for planning-and-doing would focus on developing their reflection and then their abstraction skills. For example by monitoring the effects of their actions in reality and using that evidence to challenge their underlying assumptions and to generate new ‘theories’ for pragmatic experimentation. Actively seeking balanced feedback and reflecting on it is one way to do that.

Those with a preference for studying-and-abstracting would focus on developing their design and then their delivery skills and become more comfortable with experimenting to test their rhetoric against reality. Actively seeking opportunities to learn-by-doing is one way.

And by creating the context for individuals to become more productive self-learners we can see how learning organisations will follow naturally. And that is what we need to deliver system-wide improvement at scale and pace.

There seems to be a belief among some people that the “optimum” average bed occupancy for a hospital is around 85%.

More than that risks running out of beds and admissions being blocked, 4 hour breaches appearing and patients being put at risk. Less than that is inefficient use of expensive resources. They claim there is a ‘magic sweet spot’ that we should aim for.

Unfortunately, this 85% optimum occupancy belief is a myth.

So, first we need to dispel it, then we need to understand where it came from, and then we are ready to learn how to actually prevent queues, delays, disappointment, avoidable harm and financial non-viability.


Disproving this myth is surprisingly easy.   A simple thought experiment is enough.

Suppose we have a policy where we keep patients in hospital until someone needs their bed, then we discharge the patient with the longest length of stay and admit the new one into the still-warm bed – like a baton pass.  There would be no patients turned away – 0% breaches.  And all the beds would always be full – 100% occupancy. Perfection!

And it does not matter if the number of admissions arriving per day is varying – as it will.

And it does not matter if the length of stay is varying from patient to patient – as it will.

We have disproved the hypothesis that a maximum 85% average occupancy is required to achieve 0% breaches.
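
For the curious, the thought experiment is easy to turn into a few lines of code. The sketch below is a hypothetical illustration (not a recommendation!): it replays the warm-bed baton-pass policy and confirms that 100% occupancy and 0% turn-aways can coexist, whatever the variation in arrivals.

```python
import random

def baton_pass(beds=20, arrivals_per_day=10, days=30, seed=1):
    """Thought-experiment policy: if all beds are full when a patient arrives,
    discharge the longest-stay patient and admit the new one into the warm bed."""
    rng = random.Random(seed)
    occupants = []                       # admission times of current in-patients
    turned_away = 0                      # never increments under this policy
    now = 0.0
    for _ in range(arrivals_per_day * days):
        now += rng.expovariate(arrivals_per_day)      # variable arrival intervals
        if len(occupants) < beds:
            occupants.append(now)                     # an empty bed is available
        else:
            occupants.remove(min(occupants))          # discharge the longest-stay patient
            occupants.append(now)                     # baton-pass the warm bed
    return len(occupants) / beds, turned_away

occupancy, breaches = baton_pass()
print(f"occupancy at end of run: {occupancy:.0%}, patients turned away: {breaches}")
# -> occupancy at end of run: 100%, patients turned away: 0
```

The point is not that this is a sensible admission policy – it clearly is not – only that the 85% ceiling cannot be a mathematical necessity.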


The source of this specific myth appears to be a paper published in the British Medical Journal in 1999 called “Dynamics of bed use in accommodating emergency admissions: stochastic simulation model”.

So it appears that this myth was cooked up by academic health economists using a computer model.

And then amateur queue theory zealots jump on the band-wagon to defend this meaningless mantra and create a smoke-screen by bamboozling the mathematical muggles with tales of Poisson processes and Erlang-C equations.

And they are sort-of correct … the theoretical behaviour of the stochastic demand process was described by Poisson and the equation that describes the theoretical queue behaviour was described by Erlang.  Over 100 years ago before we had computers.
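
For reference, the ‘scary equation’ usually being waved about is the Erlang-C formula, which gives the probability that an arrival has to wait in an idealised queue with random (Poisson) arrivals, random service times and a fixed number of identical servers. Here is a minimal sketch of the standard textbook calculation (the numbers at the end are purely illustrative):

```python
def erlang_c(servers: int, offered_load: float) -> float:
    """Probability that an arrival must wait in an M/M/s queue (Erlang-C).
    offered_load = arrival rate x mean service time, in Erlangs; must be < servers."""
    if offered_load >= servers:
        return 1.0                       # unstable: the queue grows without limit
    b = 1.0                              # Erlang-B, computed iteratively
    for k in range(1, servers + 1):
        b = (offered_load * b) / (k + offered_load * b)
    rho = offered_load / servers         # utilisation
    return b / (1.0 - rho * (1.0 - b))   # convert Erlang-B to Erlang-C

# Illustrative only: 20 beds-worth of demand offered to 24 beds (~83% utilisation)
print(f"P(an arrival has to wait) = {erlang_c(24, 20):.2f}")
```

The formula assumes memoryless arrivals and services, no balking, no batching and no adaptive human behaviour – exactly the simplifications that the rest of this post grumbles about.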

BUT …

The academics and amateurs conveniently omit one minor, but annoying,  fact … that real world systems have people in them … and people are irrational … and people cook up policies that ride roughshod over the mathematics, the statistics and the simplistic, stochastic mathematical and computer models.

And when people start meddling then just about anything can happen!


So what went wrong here?

One problem is that the academic heffalumps unwittingly stumbled into a whole minefield of pragmatic process design traps.

Here are just some of them …

1. Occupancy is a ratio – it is a meaningless number without its context – the flow parameters.

2. Using linear, stochastic models is dangerous – they ignore the non-linear complex system behaviours – chaos to you and me.

3. Occupancy relates to space-capacity and says nothing about the flow-capacity or the space and flow capacity scheduling.

4. Space capacity utilisation (i.e. occupancy) and system operational efficiency are not equivalent.

5. Queue theory is a gross simplification of reality that is needed to make the mathematics manageable.

6. Ignoring the fact that our real systems are both complex and adaptive makes the rhetoric dangerous.

And if we recognise and avoid these traps and re-examine the problem a little more pragmatically then we discover something very  useful:

That the maximum space capacity requirement (the number of beds needed to avoid breaches) is actually easily predictable.

It does not need a black-magic-box full of scary equations or rather complicated stochastic simulation models to do this … all we need is our tried-and-trusted tool … a spreadsheet.
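
To illustrate what ‘just a spreadsheet’ means here, the sketch below is a guess at the equivalent calculation in Python: replay the admission and discharge events in time order and count how many beds are in use at each moment. The peak of that count is the space-capacity actually needed for that demand pattern.

```python
def beds_needed(admissions, discharges):
    """Replay admission/discharge times (e.g. hours from the start of the period)
    and return the peak simultaneous occupancy - the bed requirement for zero queueing."""
    events = sorted([(t, +1) for t in admissions] + [(t, -1) for t in discharges])
    # at a tie the (t, -1) sorts first, so a bed is freed before the new admission is counted
    occupied = peak = 0
    for _, change in events:
        occupied += change
        peak = max(peak, occupied)
    return peak

# Toy data: five patients, times in hours
admissions = [0, 1, 2, 10, 11]
discharges = [8, 9, 12, 20, 22]
print("beds needed to avoid any breach:", beds_needed(admissions, discharges))   # -> 3
```

The same replay also yields the average occupancy for that specific pattern of arrivals and lengths of stay.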

And we need something else … some flow science training and some simulation model design discipline.

When we do that we discover something else …. that the expected average occupancy is not 85%  … or 65%, or 99%, or 95%.

There is no one-size-fits-all optimum occupancy number.

And as we explore further we discover that:

the expected average occupancy is context dependent.

And when we remember that our real system is adaptive, and it is staffed with well-intended, well-educated people who have become rather addicted to reactive fire-fighting,  then we begin to see why the behaviour of real systems seems to defy the predictions of the 85% optimum occupancy myth:

Our hospitals seem to work better-than-predicted at much higher occupancy rates.

And then we realise that we might actually be able to design proactive policies that are better able to manage unpredictable variation: better than the simplistic maximum 85% average occupancy mantra.

And finally another penny drops … average occupancy is an output of the system …. not an input. It is a secondary effect.

And so is average length of stay.

Which implies that setting these output effects as causal inputs to our bed model creates a meaningless, self-fulfilling, self-justifying delusion.

Ooops!


Now our challenge is clear … we need to learn proactive and adaptive flow policy design … and using that understanding we have the potential to deliver zero delays and high productivity at the same time.

And doing that requires a bit more than a spreadsheet … but it is possible.

When it comes to light that things are not going well a common reaction from the top is to send in more inspectors.

This may give the impression that something decisive is being done but it almost never works … for two reasons.

The first is because it is attempting to treat the symptom and not the cause.

The second is because the inspectors are created in the same paradigm that created the problem.

That is not to say that inspectors are not required … they are … when the system is working … not when it is failing.

The inspection police actually come last – and just before them comes the Policy that the Police enforce.

Policy comes next to last. Not first.

A rational Policy can only be written once there is proof of  effectiveness … and that requires a Pilot study … in the real world.

A small scale reality check of the rhetoric.

Cooking up Policy and delivery plans based on untested rhetoric from the current paradigm is a recipe for disappointment.


Working backwards we can see that the Pilot needs something to pilot … and that is a new Process; to replace the old process that is failing to deliver.

And any Process needs to be designed to be fit-for-purpose.  Cutting-and-pasting someone else’s design usually does not work. The design process is more important than the design it creates.

So this brings us to the first essential requirement … the Purpose.

And that is where we very often find a big gap … an error of omission … no clarity or constancy of common Purpose.

And that is where leaders must start. It is their job to clarify and communicate the common Purpose. And if the leaders are not cohesive and the board cannot agree the Purpose then the political cracks will spread through the whole organisation and destabilize it.

And with a Purpose the system and process designers can get to work.

But here we hit another gap. There is virtually no design capability in most organisations.

There is usually lots of delivery capability … but efficiently delivering an ineffective design will amplify the chaos not dissolve it.

So in parallel with clarifying the purpose, the leaders must  endorse the creation of a cohort of process designers.

And from the organisation a cohort of process inspectors … but of a different calibre … inspectors who are able to find the root causes and able to guide the improvement process because they have done this themselves many times before.

And perhaps to draw a line between the future and the past we could give them a different name – Mentors.

The Digital Age is changing the context of everything that we do – and that includes how we use information for improvement.

Historically we have used relatively small, but carefully collected, samples of data and we subjected these to rigorous statistical analysis. Or rather the statisticians did.  Statistics is a dark and mysterious art to most people.

As the digital age ramped up in the 1980′s the data storage, data transmission and data processing power became cheap and plentiful.  The World Wide Web appeared; desktop computers with graphical user interfaces appeared; data warehouses appeared, and very quickly we were all drowning in the data ocean.

Our natural reaction was to centralise, but it quickly became obvious that even an army of analysts and statisticians could not keep up.

So our next step was to automate and Business Intelligence was born; along with its beguiling puppy-faced friend, the Performance Dashboard.

The ocean of data could now be boiled down into a dazzling collection of animated histograms, pie-charts, trend-lines, dials and winking indicators. We could slice-and-dice,  we could zoom in-and-out, and we could drill up-and-down until our brains ached.

And none of it has helped very much in making wiser decisions that lead to effective actions that lead to improved outcomes.

Why?

The reason is that the missing link was not a lack of data processing power … it was a lack of an effective data processing paradigm.

The BI systems are rooted in the closed, linear, static, descriptive statistics of the past … trend lines, associations, correlations, p-values and so on.

Real systems are open, non-linear and dynamic; they are eternally co-evolving. Nothing stays still.

And it is real systems that we live in … so we need a new data processing paradigm that suits our current reality.

Some are starting to call this the Big Data Era and it is very different.

  • Business Intelligence uses descriptive statistics and data with high information density to measure things, detect trends etc.;
  • Big Data uses inductive statistics and concepts from non-linear system identification to infer laws (regressions, non-linear relationships, and causal effects) from large data sets to reveal relationships, dependencies and perform predictions of outcomes and behaviours.

And each of us already has a powerful Big Data processor … the 1.3 kg of caveman wet-ware sitting between our ears.

Our brain processes billions of bits of data every second and looks for spatio-temporal relationships to identify patterns, to derive models, to create action options, to predict short-term outcomes and to make wise survival decisions.

The problem is that our Brainy Big Data Processor is easily tricked when we start looking at time-dependent systems … data from multiple simultaneous flows that are interacting dynamically with each other.

It did not evolve to do that … it evolved to help us to survive in the Wild – as individuals.

And it has been very successful … as the burgeoning human population illustrates.

But now we have a new collective survival challenge  and we need new tools … and the out-of-date Business Intelligence Performance Dashboard is just not going to cut the mustard!

Big Data on TED Talks

 

The engine of improvement is a productive meeting.

Complex adaptive systems (CAS) are those that  learn and change themselves.

The books of ‘rules’ are constantly revised and refreshed as the CAS co-evolves with its environment.

System improvement is the outcome of effective actions.

Effective actions are the outcomes of wise decisions.

Wise decisions are the output of productive meetings.

So the meeting process must be designed to be productive: which means both effective and efficient.


One of the commonest niggles that individuals report is the ‘Death by Meeting’ one.

That alone is enough evidence that our current design for meetings is flawed.


One common error of omission is lack of clarity about the purpose of the meeting.

This cause has two effects:

1. The wrong sort of meeting design is used for the problem(s) under consideration.

A meeting designed for tactical  (how to) planning will not work well for strategic (why to) problems.

2. A mixed bag of problems is dumped into the all-purpose-less meeting.

Mixing up short term tactical and long term strategic problems on a single overburdened agenda is doomed to fail.


Even when the purpose of  a meeting  is clear and agreed it is common to observe an unproductive meeting process.

The process may be unproductive because it is ineffective … there are no wise decisions made and so no effective actions implemented.

Worse even than that … decisions are made that are unwise and the actions that follow lead to unintended negative consequences.

The process may also be unproductive because it is inefficient … it requires too much input to get any output.

Of course we want both an effective and an efficient meeting process … and we need to be aware that effectiveness  comes first.  Designing the meeting process to be a more efficient generator of unwise decisions is not a good idea! The result is an even bigger problem!


So our meeting design focus is ‘How could we make wise decisions as a group?’

But if we knew the answer to that we would probably already be doing it!

So we can ask the same question another way: ‘How do we make unwise decisions as a group?

The second question is easier to answer. We just reflect on our current experience.

Some ways we appear to unintentionally generate unwise decisions are:

a) Ensure we have no clarity of purpose – confusion is a good way to defuse effective feedback.
b) Be selective in who we invite to the meeting – group-think facilitates consensus.
c) Ignore the pragmatic, actual, reality and only use academic, theoretical, rhetoric.
d) Encourage the noisy – quiet people are non-contributors.
e) Engage in manipulative styles of behaviour – people cannot be trusted.
f) Encourage the  sceptics and cynics to critique and cull innovative suggestions.
g) Have a trump card – keep the critical ‘any other business’ to the end – just in case.

If we adopt all these tactics we can create meetings that are ‘lively’, frustrating, inefficient and completely unproductive. That of course protects us from making unwise decisions.


So one approach to designing meetings to be more productive is simply to recognise and challenge the unproductive behaviours – first as individuals and then as groups.

The place to start is within our own circle of influence – with those we trust – and to pledge to each other to consciously monitor for unproductive behaviours and to respectfully challenge them.

These behaviours are so habitual that we are often unaware that we are doing them.

And it feels strange at first but it gets easier with practice and when you see the benefits.

[Drrrrring Drrrrring]

<Bob> Hi Leslie! How are you today?

<Leslie> Hi Bob.  Really good.  I have just got back from a well earned holiday so I am feeling refreshed and re-energised.

<Bob> That is good to hear.  It has been a bit stormy here over the past few weeks.  Apparently lots of  hot air hitting cold reality and forming a fog of disillusionment and storms of protest.

<Leslie> Is that a metaphor?

<Bob> Yes!  A good one do you think? And it leads us into our topic for this week. Perfect storms.

<Leslie> I am looking forward to it.  Can you be a bit more specific?

<Bob> Sure.  Remember the ISP exercise where I asked you to build a ‘chaos generator’?

<Leslie> I sure do. That was an eye-opener!  I had no idea how easy it is to create chaotic performance in a system – just by making the Flaw of Averages error and adding a pinch of variation. Booom!

<Bob> Good. We are going to use that model to demonstrate another facet of system design.  How to steer out of chaos.

<Leslie> OK – what do I need to do?

<Bob> Start up that model and set the cycle time to 10 minutes with a sigma of 1.5 minutes.

<Leslie> OK.

<Bob> Now set the demand interval to 10 minutes and the sigma of that to 2.0 minutes.

<Leslie> OK. That is what I had before.

<Bob> Set the lead time upper specification limit to 30 minutes. Run that 12 times and record the failure rate.

<Leslie> OK.  That gives a chaotic picture!  All over the place.

<Bob> OK now change just the average of the demand interval.  Start with a value of 8 minutes, run 12 times, and then increase to 8.5 minutes and repeat that up to 12 minutes.

<Leslie> OK. That will include a repeat of the earlier run at 10 minutes. Is that OK?

<Bob> Yes.

<Leslie> OK … it will take me a few minutes to run all these.  Do you want to get a cup of tea while I do that?

<Bob> Good idea.

[5 minutes later]

<Leslie> OK I have done all that – 108 data points. Do I plot that as a run chart?

<Bob> You could.  I suggest plotting as a scattergram.

<Leslie> With the average demand interval on the X axis and the Failure % on the  Y axis?

<Bob> Yes. Exactly so. And just the dots, no lines.

<Leslie> OK. Wow! That is amazing!  Now I see why you get so worked up about the Flaw of Averages!

<Bob> What you are looking at is called a performance curve.  Notice how steep and fuzzy it is. That is called a chaotic transition. The perfect storm.  And when we fall into the Flaw of Averages trap we design our systems to be smack in the middle of it.

<Leslie> Yes I see what you are getting at.  And that implies that to calm the chaos we do not need very much resilient flow capacity … and we could probably release that just from a few minor design tweaks.

<Bob> Yup.

<Leslie> That is so cool. I cannot wait to share this with the team. Thanks again Bob.
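
The ISP ‘chaos generator’ model itself is not reproduced here, so the sketch below is only a guess at a minimal single-step version of the exercise Bob describes: normally distributed demand intervals and cycle times, a 30 minute lead time specification limit, twelve runs per setting, and a sweep of the average demand interval from 8 to 12 minutes.

```python
import random

def one_run(mean_interval, n_jobs=500, cycle_mean=10.0, cycle_sigma=1.5,
            interval_sigma=2.0, usl=30.0, seed=None):
    """Single resource, first-in-first-out queue.
    Returns the % of jobs whose lead time exceeds the upper specification limit."""
    rng = random.Random(seed)
    arrive = 0.0
    resource_free = 0.0
    failures = 0
    for _ in range(n_jobs):
        arrive += max(0.1, rng.gauss(mean_interval, interval_sigma))   # next arrival
        start = max(arrive, resource_free)                             # wait if busy
        resource_free = start + max(0.1, rng.gauss(cycle_mean, cycle_sigma))
        if resource_free - arrive > usl:                               # lead time check
            failures += 1
    return 100.0 * failures / n_jobs

# Sweep the average demand interval from 8.0 to 12.0 minutes, 12 replicates each
points = [(8.0 + 0.5 * i, one_run(8.0 + 0.5 * i, seed=rep))
          for i in range(9) for rep in range(12)]

for x, y in points[::12]:
    print(f"demand interval {x:4.1f} min -> {y:5.1f}% of lead times over 30 min")
# Plotting all 108 points as a scattergram reveals the steep, fuzzy performance curve.
```

The exact shape will differ from Leslie’s model, but the qualitative story is the same: the failure rate climbs from near zero to near certainty over a narrow band of demand intervals centred close to the average cycle time – which is exactly where the Flaw of Averages puts the design point.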

Flow improvement-by-design requires being able to see the flows; and that is trickier than it first appears.

We can see movement very easily.

Seeing flows is not so easy – particularly when they are mixed-up and unsteady.

One of the most useful tools for visualising flow was invented over 100 years ago by Henry Laurence Gantt (1861-1919).

Henry Gantt was a mechanical engineer from Johns Hopkins University and an early associate of Frederick Taylor. Gantt parted ways with Taylor because he disagreed with the philosophy of Taylorism, which was that workers should be instructed what to do by managers (=parent-child).  Gantt saw that workers and managers could work together for the mutual benefit of themselves and their companies (=adult-adult).  At one point Gantt was invited to streamline the production of munitions for the war effort and his methods were so successful that the Ordnance Department was the most productive department of the armed forces.  Gantt favoured democracy over autocracy and is quoted as saying “Our most serious trouble is incompetence in high places. The manager who has not earned his position and who is immune from responsibility will fail time and again, at the cost of the business and the workman”.

Henry Gantt invented a number of different charts – not just the one used in project management, which was actually invented 20 years earlier by Karol Adamiecki and re-invented by Gantt. It became popularised when it was used to manage the Hoover Dam project; but that was after Gantt’s death in 1919.

The form of Gantt chart above is called a process template chart and it is designed to show the flow of tasks through  a process. Each horizontal line is a task; each vertical column is an interval of time. The colour code in each cell indicates what the task is doing and which resource the task is using during that time interval. Red indicates that the task is waiting. White means that the task is outside the scope of the chart (e.g. not yet arrived or already departed).

The Gantt chart shows two “red wedges”.  A red wedge that is getting wider from top to bottom is the pattern created by a flow constraint.  A red wedge that is getting narrower from top to bottom is the pattern of a policy constraint.  Both are signs of poor scheduling design.
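
A process template chart of this kind can be generated from nothing more than the task event times. Here is a minimal text-mode sketch (the three tasks and the single resource ‘D’ are invented for illustration): each row is a task, each column a time slot, ‘.’ means outside the scope of the chart, ‘W’ marks waiting (the ‘red’ cells) and a letter marks the resource in use.

```python
def process_template_chart(tasks):
    """tasks: list of (arrival_time, [(start, end, resource_initial), ...]) tuples."""
    horizon = max(end for _, steps in tasks for _, end, _ in steps)
    for i, (arrival, steps) in enumerate(tasks, start=1):
        row = []
        for t in range(horizon):
            cell = "."                                   # not yet arrived, or departed
            if arrival <= t < steps[-1][1]:
                cell = "W"                               # in the process but waiting
                for start, end, resource in steps:
                    if start <= t < end:
                        cell = resource                  # being served by a resource
            row.append(cell)
        print(f"Task {i:2d} |{''.join(row)}|")

# Invented example: tasks arrive faster than the single resource 'D' can serve them
process_template_chart([
    (0, [(0, 4, "D")]),
    (1, [(4, 8, "D")]),
    (2, [(8, 12, "D")]),
])
# Task  1 |DDDD........|
# Task  2 |.WWWDDDD....|
# Task  3 |..WWWWWWDDDD|
```

The widening wedge of ‘W’ cells from top to bottom is the flow-constraint pattern described above.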

A Gantt chart like this has three primary uses:
1) Diagnosis – understanding how the current flow design is creating the queues and delays.
2) Design – inventing new design options.
3) Prognosis – testing the innovative designs so the ‘fittest’ can be chosen for implementation.

These three steps are encapsulated in the third “M” of 6M Design® – the Model step.

In this example the design flaw was the scheduling policy.  When that was redesigned the outcome was zero-wait performance. No red on the chart at all.  The same number of tasks were completed in the same time with the same resources. Just less waiting. Which means less space is needed to store the queue of waiting work (i.e. none in this case).

That this is even possible comes as a big surprise to most people. It feels counter-intuitive. It is however an easy to demonstrate fact. Our intuition tricks us.

And that reduction in the size of the queue implies a big cost reduction when the work-in-progress is perishable and needs constant attention [such as patients lying on A&E trolleys and in hospital beds].

So what was the cost of re-designing this schedule?

A pinch of humility. A few bits of squared paper and some coloured pens. A couple of hours of time. And a one-off investment in learning how to do it.  Peanuts in comparison with the recurring benefit gained.

 

This week I was made mindful again of a simple yet powerful model that goes a long way to explaining why we find change so difficult.

It is the conscious-competent model.

There are two dimensions which gives four combinations that are illustrated in the diagram.

We all start in the bottom left corner. We do not know what we do not know.  We are ignorant and incompetent and unconscious of the  fact.

Let us call that Blissful Ignorance.

Then suddenly we get a reality check. A shock. A big enough one to start us on the emotional roller coaster ride we call the Nerve Curve.

We become painfully aware of our ignorance (and incompetence). Conscious of it.

That is not a happy place to be and we have a well-developed psychological first line of defence to protect us. It is called Denial.

“That’s a load of rubbish!” we say.

But denial does not change reality and eventually we are reminded. Reality does not go away.

Our next line of defence is to shoot the messenger. We get angry and aggressive.

“Who the **** are you to tell me that I do not know what I am doing!” we say.

Sometimes we are openly aggressive.  More often we use passive aggressive tactics. We resort to below-the-belt behind-the-back corridor-gossip behaviour.

But that does not change reality either.  And we are slowly forced to accept that we need to change. But not yet …

Our next line of defence is to bargain for more time (in the hope that reality will swing back in our favour).

“There may be something in this but I am too busy at the moment … I will look at this tomorrow/next week/next month/after my holiday/next quarter/next financial year/in my next job/when I retire!” we wheedle.

Our strategy usually does not work – it just wastes time – and while we prevaricate the crisis deepens. Reality is relentless.

Our last line of defence has now been breached and now we sink into depression and despair.

“It is too late. Too difficult for me. I need rescuing. Someone help me!” we wail.

That does not work either. There is no one there. It is up to us. It is sink-or-swim time.

What we actually need now is a crumb of humility.

And with that we can start on the road to Know How. We start by unlearning the old stuff and then we can  replace it with the new stuff.  Step-by-step we climb out of the dark depths of Painful Awareness.

And then we get a BIG SURPRISE.

It is not as difficult as we assumed. And we discover that learning-by-doing is fun. And we find that demonstrating to others what we are learning is by far the most effective way to consolidate our new conscious competence.

And by playing to our strengths, with persistence, with practice and with reality-feedback our new know how capability gradually becomes second nature. Business as usual. The way we do things around here. The culture.

Then, and only then, will the improvement sustain … and spread … and grow.

 

One of the essential components of an adaptive system is effective feedback.

Without feedback we cannot learn – we can only guess and hope.

So the design of our feedback loops is critical-to-success.

Many people do not like getting feedback because they live in a state of fear: fear of criticism. This is a learned behaviour.

Many people do not like giving feedback because they too live in a state of fear: fear of conflict. This is a learned behaviour.

And what is learned can be unlearned; with training, practice and time.

But before we will engage in unlearning our current habit we need to see the new habit that will replace it. The one that will work better for us. The one that is more effective.  The one that will require less effort. The one that is more efficient use of our most precious resource: life-time.

There is an effective and efficient feedback technique called The 4N Chart®.  And I know it works because I have used it and demonstrated to myself and others that  it works. And I have seen others use it and demonstrate to themselves and others that it works too.

The 4N Chart® has two dimensions – Time (Now and Future) and Emotion (Happy and Unhappy).

This gives four combinations each of which is given a label that begins with the letter ‘N’ – Niggles, Nuggets, NoNos and NiceIfs.

The N has a further significance … it reminds us which order to move through the  chart.

We start bottom left with the Niggles.  What is happening now that causes us to feel unhappy. What are these root causes of our niggles? And more importantly, which of these do we have control over?  Knowing that gives us a list of actions that we can do that will have the effect of reducing our niggles. And we can start that immediately because we do not need permission.

Next we move top-left to the Nuggets. What is happening now that causes us to feel happy? What are the root causes of our nuggets? Which of these do we control? We need to recognise these too and to celebrate them.  We need to give ourselves a pat on the back for them because that helps reinforce the habit to keep doing them.

Now we look to the future – and we need to consider two things: what we do not want to feel in the future and what we do want to feel in the future. These are our NoNos and our NiceIfs. It does not matter which order we do this … but  we must consider both.

Many prefer to consider dangers and threats first … that is SAFETY FIRST  thinking and is OK. First Do No Harm. Primum non nocere.

So with the four corners of our 4N Chart® filled in we have a balanced perspective and we can set off on the journey of improvement with confidence. Our 4N Chart® will help us stay on track. And we will update it as we go, as we study, as we plan and as we do things. As we convert NiceIfs into Nuggets and  Niggles into NoNos.

It sounds simple.  It is in theory. It is not quite as easy to do.

It takes practice … particularly the working backwards from the effect (the feeling) to the cause (the facts). This is done step-by-step using Reality as a guide – not our rhetoric. And we must be careful not to make assumptions in lieu of evidence. We must be careful not to jump to unsupported conclusions. That is called pre-judging.  Prejudice.

But when you get the hang of using The 4N Chart® you will be amazed at how much more easily and more quickly you make progress.

It comes as a bit of a shock to learn that some of our habitual assumptions and actions are worthless.

Improvement implies change. Change requires doing things differently. That requires making different decisions. And that requires innovative thinking. And that requires new knowledge.

We are comfortable with the idea of adding  new knowledge to the vast store we have already accumulated.

We are less comfortable with the idea of removing old knowledge when it has grown out-of-date.

We are shocked when we discover that some of our knowledge is just wrong and it always has been. Since the start of time.

So we need to prepare ourselves for those sorts of shocks. We need to be resilient so that we are not knocked off our feet by them.  We need to practice a different emotional reaction to our habitual fright-flight-or-fight reaction.

We need to cultivate our curiosity.

For example:

It comes as a big shock to many when they learn that it is impossible to determine the cause from an analysis of the observed effect.  Not just difficult. Impossible.

“No Way!”  We shout angrily.  “We do that all the time!”

But do we?

What we do is we observe temporal associations.  We notice that Y happened after X and we conclude that X caused Y.

This is an incorrect conclusion.  We can only conclude from this observation that ‘X may have played a part in causing Y’ but we cannot prove it.

Not by observation alone.

What we can definitely say is that Y did not cause X – because time does not go backwards. At least it does not appear to.

Another thing that does not go backwards is information.

Q: What is 2 + 2?  Four. Easy. There is only one answer. Two numbers become one.

Let us try this in reverse …

Q: What two numbers when added together give 4? Tricky. There are countless answers.  One number cannot become two without adding uncertainty. Guessing.
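
Here is a trivial sketch of that asymmetry: the forward calculation has exactly one answer; the reverse has indefinitely many. The same is true of ratios and averages, which is why they cannot be un-calculated.

```python
# Forward: two numbers in, one number out - no ambiguity
print(2 + 2)                                   # -> 4

# Reverse: one number in, how many candidate pairs out? Even limited to small whole numbers...
pairs = [(a, b) for a in range(5) for b in range(5) if a + b == 4]
print(pairs)                                   # -> [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
# Allow negative numbers, fractions or real numbers and the list of candidates never ends.
```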

So when we look at the information coming out of a system – the effects – and we attempt to analyse it to reveal the causes, we hit a problem. It is impossible.

And learning that is a big shock to people who describe themselves as ‘information analysts’ …. the whole foundation of what they do appears to evaporate.

So we need to outline what we can reasonably do with the retrospective analysis of effect data.

We can look for patterns.

Patterns that point to plausible causes.

Just like patterns of symptoms that point to possible diseases.

But how do we learn what patterns to look for?

Simple. We experiment. We do things and observe what happens immediately afterwards – the immediate effects. We conduct lots and lots of small experiments. And we learn the repeating patterns. “If the context is this and I do that then I always see this effect”.

If we observe a young child learning that is what we see … they are experimenting all the time.  They are curious. They delight in discovery. Novelty is fun. Learning to walk is a game.  Learning to talk is a game.  Learning to be a synergistic partner in a social group is a game.

And that same child-like curiosity is required for effective improvement.

And we know when we are doing improvement right: it feels good. It is fun. Learning is fun.

[Drrring Drrring] The phone heralded the start of the weekly ISP mentoring session.

<Bob> Hi Leslie, how are you today?

<Leslie> Hi Bob. To be honest I am not good. I am drowning. Drowning in data!

<Bob> Oh dear! I am sorry to hear that. Can I help? What led up to this?

<Leslie> Well, it was sort of triggered by our last chat and after you opened my eyes to the fact that we habitually throw most of our valuable information away by thresholding, aggregating and normalising.  Then we wonder why we make poor decisions … and then we get frustrated because nothing seems to improve.

<Bob> OK. What happened next?

<Leslie> I phoned our Performance Team and asked for some raw data. Three months worth.

<Bob> And what was their reaction?

<Leslie> They said “OK, here you go!” and sent me a twenty megabyte Excel spreadsheet that clogged my email inbox!  I did manage to unclog it eventually by deleting loads of old junk.  But I could swear that I heard the whole office laughing as they hung up the phone! Maybe I am paranoid?

<Bob> OK. And what happened next?

<Leslie> I started drowning!  The mega-file had a row of data for every patient that has attended A&E for the last three months as I had requested, but there were dozens of columns!  Trying to slice-and-dice it was a nightmare! My computer was smoking and each step took ages for it to complete.  In the end I gave up in frustration.  I now have a lot more respect for the Performance Team I can tell you! They do this for a living?

<Bob> OK.  It sounds like you are ready for a Stab At the Vitals.

<Leslie> What?  That sounds rather piratical!  Are you making fun of my slicing-and-dicing metaphor?

<Bob> No indeed.  I am deadly serious!  Before we leap into the data ocean we need to be able to swim; and we also need a raft that will keep us afloat;  and we need a sail to power our raft; and we need a way to navigate our raft to our desired destination.

<Leslie> OK. I like the nautical metaphor but how does it help?

<Bob> Let me translate. Learning to use system behaviour charts is equivalent to learning the skill of swimming. We have to do that first and practice until we are competent and confident.  Let us call our raft “ISP” – you are already aboard.  The sail you also have already – your Excel software.  The navigation aid is what I refer to as Vitals. So we need to have a “stab at the vitals”.

<Leslie> Do you mean we use a combination of time-series charts, ISP and Excel to create a navigation aid that helps avoid the Depths of Data and the Rocks of DRAT?

<Bob> Exactly.

<Leslie> Can you demonstrate with an example?

<Bob> Sure. Send me some of your data … just the arrival and departure events for one day – a typical one.

<Leslie> OK … give me a minute!  …  It is on its way.  How long will it take for you to analyse it?

<Bob> About 2 seconds. OK, here is your email … um … copy … paste … copy … reply

<Leslie> What the ****? That was quick! Let me see what this is … the top left chart is the demand, activity and work-in-progress for each hour; the top right chart is the lead time by patient plotted in discharge order; the table bottom left includes the 4 hour breach rate.  Those I do recognise. What is the chart on the bottom right?

<Bob> It is a histogram of the lead times … and it shows a problem.  Can you see the spike at 225 to 240 minutes?

<Leslie> Is that the fabled Horned Gaussian?

<Bob> Yes.  That is the sign that the 4-hour performance target is distorting the behaviour of the system.  And this is yet another reason why the  Breach Rate is a dangerous management metric. The adaptive reaction it triggers amplifies the variation and fuels the chaos.

<Leslie> Wow! And you did all that in Excel using my data in two seconds?  That must need a whole host of clever macros and code!

<Bob> “Yes” it was done in Excel and “No” it does not need any macros or code.  It is all done using simple formulae.

<Leslie> That is fantastic! Can you send me a copy of your Excel file?

<Bob> Nope.

<Leslie> Whaaaat? Why not? Is this some sort of evil piratical game?

<Bob> Nope. You are going to learn how to do this yourself – you are going to build your own Vitals Chart Generator – because that is the only way to really understand how it works.

<Leslie> Phew! You had me going for a second there! Bring it on! What do I do next?

<Bob> I will send you the step-by-step instructions of how to build, test and use a Vitals Chart Generator.

<Leslie> Thanks Bob. I cannot wait to get started! Weigh anchor and set the sails! Ha’ harrrr me hearties.
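
Bob’s Excel formulae are not shown, so the sketch below is a guess at the equivalent ‘Vitals’ calculations in Python: hourly demand, activity and work-in-progress; lead times in discharge order; the 4-hour breach rate; and a lead time histogram, where a disproportionate pile-up in the bin just below 240 minutes would betray the Horned Gaussian.

```python
from collections import Counter

def vitals(events, target_min=240, bin_min=15):
    """events: list of (arrival_min, departure_min) pairs for one day, minutes from midnight."""
    demand   = Counter(int(a // 60) for a, d in events)            # arrivals per hour
    activity = Counter(int(d // 60) for a, d in events)            # departures per hour
    wip      = {h: sum(1 for a, d in events if a <= h * 60 < d)    # in department at each hour
                for h in range(24)}
    leads = [d - a for a, d in sorted(events, key=lambda e: e[1])] # lead times, discharge order
    breach_rate = 100.0 * sum(1 for t in leads if t > target_min) / len(leads)
    histogram = Counter(int(t // bin_min) * bin_min for t in leads)
    return demand, activity, wip, leads, breach_rate, histogram

# Tiny invented example: three patients
demand, activity, wip, leads, breach_rate, histogram = vitals([(480, 650), (500, 735), (520, 700)])
print(f"4-hour breach rate: {breach_rate:.1f}%")
print("lead time histogram (bin start in minutes -> count):", dict(histogram))
```

Everything here is ‘simple formulae’ – counts, subtractions and comparisons – which is Bob’s point: no macros or code wizardry are needed, only a clear idea of what to calculate from the raw arrival and departure events.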

This was an interesting headline to see on the front page of a newspaper yesterday!

The Top Man of the NHS is openly challenging the current Centralisation-is-The-Only-Way-Forward Mantra;  and for good reason.

Mass centralisation is poor system design – very poor.

Q: So what is driving the centralisation agenda?

A: Money.

Or to be more precise – rather simplistic thinking about money.

The misguided money logic goes like this:

1. Resources (such as highly trained doctors, nurses and AHPs) cost a lot of money to provide.
[Yes].

2. So we want all these resources to be fully-utilised to get value-for-money.
[No, not all - just the most expensive].

3. So we will gather all the most expensive resources into one place to get the Economy-of-Scale.
[No, not all the most expensive - just the most specialised]

4. And we will suck/push all the work through these super-hubs to keep our expensive specialist resources busy all the time.
[No, what about the growing population of older folks who just need a bit of expert healthcare support, quickly, and close to home?]

This flawed logic confuses two complementary ways to achieve higher system productivity/economy/value-for-money without  sacrificing safety:

Economies of Scale (EoS) and Economies of Flow (EoF).

Of the two the EoF is the more important because by using EoF principles we can increase productivity in huge leaps at almost no cost; and without causing harm and disappointment. EoS are always destructive.

“But that is impossible. You are talking rubbish … because if it were possible we would be doing it!”

It is not impossible and we are doing it … but not at scale and pace in healthcare … and the reason for that is we are not trained in Economy-of-Flow methods.

And those who are trained and who have have experienced the effects of EoF would not do it any other way.

Example:

In a recent EoF exercise an ISP (Improvement Science Practitioner) helped a surgical team to increase their operating theatre productivity by 30% overnight at no cost.  The productivity improvement was measured and sustained for most of the last year. [it did dip a bit when the waiting list evaporated because of the higher throughput, and again after some meddlesome middle management madness was triggered by end-of-financial-year target chasing].  The team achieved the improvement using Economy of Flow principles and by re-designing some historical scheduling policies. The new policies  were less antagonistic. They were designed to line the ducks up and as a result the flow improved.


So the specific issue of  Super Hospitals vs Small Hospitals is actually an Economy of Flow design challenge.

But there is another critical factor to take into account.

Specialisation.

Medicine has become super-specialised for a simple reason: it is believed that to get ‘good enough’ at something you have to have a lot of practice. And to get the practice you have to have high volumes of the same stuff – so you need to specialise and then to sort undifferentiated work into separate ‘speciologist’ streams or sequence the work through separate speciologist stages.

Generalists are relegated to second-class-citizen status; mere tripe-skimmers and sign-posters.

Specialisation is certainly one way to get ‘good enough’ at doing something … but it is not the only way.

Another way is to learn the key-essentials from someone who already knows (and can teach), and then to continuously improve using feedback on what works and what does not – feedback from everywhere.

This second approach is actually a much more effective and efficient way to develop expertise – but we have not been taught this way.  We have only learned the scrape-the-burned-toast-by-suck-and-see method.

We need to experience another way.

We need to experience rapid acquisition of expertise!

And being able to gain expertise quickly means that we can become expert generalists.

There is good evidence that the broader our skill-set the more resilient we are to change, and the more innovative we are when faced with novel challenges.

In the Navy of the 1800′s sailors were “Jacks of All Trades and Master of One” because if only one person knew how to navigate and they got shot or died of scurvy the whole ship was doomed.  Survival required resilience and that meant multi-skilled teams who were good enough at everything to keep the ship afloat – literally.


Specialisation has another big drawback – it is very expensive and on many dimensions. Not just Finance.

Example:

Suppose we have a six-step process and we have specialised to the point where an individual can only do one step to the required level of performance (safety/flow/quality/productivity).  The minimum number of people we need is six and the process only flows when we have all six people. Our minimum costs are high and they do not scale with flow.

If any one of the six is not there then the whole process stops. There is no flow.  So queues build up and smooth flow is sacrificed.

Our system behaves in an unstable and chaotic feast-or-famine manner and rapidly shifting priorities create what is technically called ‘thrashing’.

And the special-six do not like the constant battering.

And the special-six have the power to individually hold the whole system to ransom – they do not even need to agree.

And then we aggravate the problem by paying them a high salary that is independent of how much they collectively achieve.

We now have the perfect recipe for a bigger problem!  A bunch of grumpy, highly-paid specialists who blame each other for the chaos and who incessantly clamour for ‘more resources’ at every step.

This is not financially viable and so it creates the drive for economy-of-scale thinking: to get ‘flow resilience’ we need more than one specialist at each of the six steps, so that if one is on holiday or off sick the process can still flow.  Let us give these tribes of ‘speciologists’ their own names and budgets, and now we need to put all these departments somewhere – so we will need a big hospital to fit them in – along with the queues of waiting work that they need.

Now we make an even bigger design blunder.  We assume the ‘efficiency’ of our system is the same as the average utilisation of all the departments – so we trim budgets until everyone’s utilisation is high; and we suck any-old work in to ensure there is always something to do to keep everyone busy.

And in so doing we sacrifice all our Economy of Flow opportunities and we then scratch our heads and wonder why our total costs and queues are escalating,  safety and quality are falling, the chaos continues, and our tribes of highly-paid specialists are as grumpy as ever they were!   It must be an impossible-to-solve problem!


Now contrast that with having a pool of generalists – all of whom are multi-skilled and can do any of the six steps to the required level of expertise.  A pool of generalists is a much more resilient-flow design.
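
A back-of-envelope illustration of that resilience difference is sketched below (the 95% attendance figure is an assumption purely for the arithmetic): with six single-skilled specialists the process only flows when all six are present; a pool of six generalists keeps flowing, at a reduced rate, so long as anyone at all is present.

```python
p_present = 0.95    # assumed probability that any one person is at work on a given day
steps = 6

# Specialist design: the whole six-step process stops unless all six people are present
p_flow_specialists = p_present ** steps

# Generalist pool: some flow continues while at least one of the six is present
p_flow_generalists = 1 - (1 - p_present) ** steps

print(f"specialist design flows on {p_flow_specialists:.1%} of days")   # ~73.5%
print(f"generalist pool flows on  {p_flow_generalists:.1%} of days")    # ~100.0%
```

The generalist pool also degrades gracefully: losing one person costs a sixth of the capacity, not all of it.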

And the key phrase here is ‘to the required level of expertise‘.

That is how to achieve Economy-of-Flow on a small scale without compromising either safety or quality.

Yes, there is still a need for a super-level of expertise to tackle the small number of complex problems – but that expertise is better delivered as a collective-expertise to an individual problem-focused process.  That is a completely different design.

Designing and delivering a system that can achieve the synergy of the pool-of-generalists and team-of-specialists model requires addressing a key error of omission first: we are not trained how to do this.

We are not trained in Complex-Adaptive-System Improvement-by-Design.

So that is where we must start.

 

[Bzzzzz Bzzzzz] Bob’s phone was on silent but the desktop amplified the vibration and heralded the arrival of Leslie’s weekly ISP mentoring call.

<Bob> Hi Leslie.  How are you today and what would you like to talk about?

<Leslie> Hi Bob.  I am well and I have an old chestnut to roast today … target-driven-behaviour!

<Bob> Excellent. That is one of my favorite topics. Is there a specific context?

<Leslie> Yes.  The usual desperate directive from on-high exhorting everyone to ‘work harder to hit the target’ and usually accompanied by a RAG table of percentages that show just who is failing and how badly they are doing.

<Bob> OK. Red RAGs irritating the Bulls eh? Percentages eh? Have we talked about Ratio Hazards?

<Leslie> We have talked about DRATs … Delusional Ratios and Arbitrary Targets as you call them. Is that the same thing?

<Bob> Sort of. What happened when you tried to explain DRATs to those who are reacting to these ‘desperate directives’?

<Leslie> The usual reply is ‘Yes, but that is how we are required to report our performance to our Commissioners and Regulatory Bodies.’

<Bob> And are the key performance indicators that are reported upwards and outwards also being used to manage downwards and inwards?  If so then that is poor design and is very likely to be contributing to the chaos.

<Leslie> Can you explain that a bit more? It feels like a very fundamental point you have just made.

<Bob> OK. To do that let us work through the process by which the raw data from your system is converted into the externally reported KPI.  Choose any one of your KPIs.

<Leslie> Easy! The 4-hour A&E target performance.

<Bob> What is the raw data that goes in to that?

<Leslie> The percentage of patients who breach 4-hours per day.

<Bob> And where does that ratio come from?

<Leslie> Oh! I see what you mean. That comes from a count of the number of patients who are in A&E for more than 4 hours divided by a count of the number of patients who attended.

<Bob> And where do those counts come from?

<Leslie> We calculate the time the patient is in A&E and use the 4-hour target to label them as breaches or not.

<Bob> And what data goes into the calculation of that time?

<Leslie> The arrival and departure times for each patient. The arrive and depart events.

<Bob> OK. Is that the raw data?

<Leslie> Yes. Everything follows from that.

<Bob> Good.  Each of these two events is a time – which is a continuous metric.  You could in principle record it to any degree of precision you like – milliseconds if you had a good enough clock.

<Leslie> Yes. We record it to an accuracy of seconds – it is when the patient is ‘clicked through’ on the computer.

<Bob> Careful Leslie, do not confuse precision with accuracy. We need both.

<Leslie> Oops! Yes I remember we had that conversation before.

<Bob> And how often is the A&E 4-hour target KPI reported externally?

<Leslie> Quarterly. We either succeed or fail each quarter of the financial year.

<Bob> That is a binary metric. An OK or not OK. No gray zone.

<Leslie> Yes. It is rather blunt but that is how we are contractually obliged to report our performance.

<Bob> OK. And how many patients per day on average come to A&E?

<Leslie> About 200 per day.

<Bob> So the data analysis process is boiling down about 36,000 pieces of continuous data into one Yes or No bit of binary data.

<Leslie> Yes.

<Bob> And then that one bit is used to drive the action of the Board: if it is ‘OK last quarter’ then there is no ‘desperate directive’ and if it is a ‘Not OK last quarter’ then there is.

<Leslie> Yes.

<Bob> So you are throwing away 99.9999% of your data and wondering why what is left is not offering much insight in what to do.

<Leslie> Um, I guess so … when you say it like that.  But how does that relate to your phrase ‘Ratio Hazards’?

<Bob> A ratio is just one of the many ways that we throw away information. A ratio requires two numbers to calculate it; and it gives one number as an output so we are throwing half our information away.  And this is an irreversible act.  Two specific numbers will give one ratio; but that ratio can be created by an infinite number of possible pairs of numbers and we have no way of knowing from the ratio which specific pair was used to create it.

<Leslie> So a ratio is an exercise in obfuscation!

<Bob> Well put! And there is an even more data-wasteful behaviour that we indulge in. We aggregate.

<Leslie> By that do you mean we summarise a whole set of numbers with an average?

<Bob> Yes. When we average we throw most of the data away and when we average over time then we abandon our ability to react in a timely way.

<Leslie> The Flaw of Averages!

<Bob> Yes.  One of them. There are many.

<Leslie> No wonder it feels like we are flying blind and out of control!

<Bob> There is more. There is an even worse data-wasteful behaviour. We threshold.

<Leslie> Is that when we use a target to decide if the lead time is OK or Not OK?

<Bob> Yes. And using an arbitrary target makes it even worse.

<Leslie> Ah ha! I see what you are getting at.  The raw event data that we painstakingly collect is a treasure trove of information and potential insight that we could use to help us diagnose, design and deliver a better service. But we throw away all but one single solitary binary digit when we put it through the DRAT Processor.

<Bob> Yup.

<Leslie> So why could we not do both? Why could we not use the raw data for ourselves and the DRAT-processed data for external reporting?

<Bob> You could.  So what is stopping you doing just that?

<Leslie> We do not know how to effectively and efficiently interpret the vast ocean of raw data.

<Bob> That is what a time-series chart is for. It turns the thousands of pieces of valuable information into a picture that tells a story – without throwing the information away in the process. You just need to learn how to interpret the pictures.

<Leslie> Wow!  Now I understand much better why you insist we ‘plot the dots’ first.

<Bob> And now you understand the Ratio Hazards a bit better too.

<Leslie> Indeed so.  And once again I have much to ponder on. Thank you again Bob.
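
To make the scale of that information destruction concrete, here is a sketch of the DRAT processing chain described in the conversation above. The patient numbers and thresholds follow the dialogue; the event data itself is invented, and the 95% quarterly standard is assumed for illustration.

```python
import random

random.seed(42)
DAYS, PATIENTS_PER_DAY, TARGET_MIN, QUARTERLY_STANDARD = 90, 200, 240, 95.0

def random_visit():
    arrive = random.uniform(0, 1440)                        # arrival, minutes from midnight
    return arrive, arrive + random.expovariate(1 / 150)     # invented stay, mean ~150 minutes

# Stage 0: raw events - two timestamps per patient
raw = [[random_visit() for _ in range(PATIENTS_PER_DAY)] for _ in range(DAYS)]
print("raw data items:", sum(2 * len(day) for day in raw))  # -> 36000

# Stage 1: lead times (half the information gone - each pair becomes one number)
leads = [[depart - arrive for arrive, depart in day] for day in raw]

# Stage 2: threshold each lead time into a breach flag (continuous -> binary)
flags = [[lead > TARGET_MIN for lead in day] for day in leads]

# Stage 3: aggregate each day into a ratio (a whole day becomes one percentage)
daily_pct_within_target = [100.0 * (1 - sum(day) / len(day)) for day in flags]

# Stage 4: aggregate again and threshold against the standard -> one bit per quarter
quarter_ok = sum(daily_pct_within_target) / DAYS >= QUARTERLY_STANDARD
print("quarterly KPI:", "OK" if quarter_ok else "Not OK")   # one solitary binary digit
```

Each stage is irreversible: from the final bit there is no way back to the daily percentages, let alone to the individual events – which is exactly why the same raw data is worth keeping and plotting as time-series charts for internal use.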

There is an amazing phenomenon happening right now – a whole generation of people are learning to become system designers and they are doing it by having fun.

There is a game called Minecraft which millions of people of all ages are rapidly discovering.  It is creative, fun and surprisingly addictive.

This is what it says on the website.

“Minecraft is a game about breaking and placing blocks. At first, people built structures to protect against nocturnal monsters, but as the game grew players worked together to create wonderful, imaginative things.”

The principle is that before you can build you have to dig … you have to gather the raw materials you need … and then you have to use what you have gathered in novel and imaginative ways.  You need tools too, and you need to learn what they are used for, and what they are useless for. And the quickest way to learn the necessary survival and creative  skills is by exploring, experimenting, seeking help, and sharing your hard-won knowledge and experience with others.

The same principles hold in the real world of Improvement Science.

The treasure we are looking for is less tangible though … but no less difficult to find … unless you know where to look.

The treasure we seek is learning; how to achieve significant and sustained improvement on all dimensions.

And there is a mountain of opportunity that we can mine into. It is called Reality.

And when we do that we uncover nuggets of knowledge, jewels of understanding, and pearls of wisdom.

There are already many tunnels that have been carved out by others who have gone before us. They branch and join to form a vast cave network. A veritable labyrinth. Complicated and not always well illuminated or signposted.

And stored in the caverns is a vast treasure trove of experience we can dip into – and an even greater hoard of new treasure waiting to be discovered.

But even now there is no comprehensive map of the labyrinth. So it is easy to get confused and to get lost. Not all junctions have signposts and not all the signposts are correct. There are caves with many entrances and exits, there are blind-ending tunnels, and there are many hazards and traps for the unwary.

So to enter the Learning Labyrinth and to return safely with Improvement treasure we need guides. Those who know the safe paths and the unsafe ones. And as we explore we all need to improve the signage and add warning signs where hazards lurk.

And we need to work at the edge of knowledge  to extend the tunnels further. We need to seal off the dead-ends, and to draw and share up-to-date maps of the paths.

We need to grow a Community of Improvement Science Minecrafters.

And the first things we need are some basic improvement tools and techniques … and they can be found here.

New ideas need time to germinate.

And seeds need soil – so if the context is toxic the seeds will remain dormant or die.

And gardeners need to have patience.

And gardeners need to prepare the seeds and the soil, and to nurture and nourish the green shoots of innovation.

When a seed-of-change finds itself in fertile soil it will germinate.  That is just the first step.

The fragile new shoot of improvement must be watered and protected from harm as it grows taller and gains strength-of-evidence.

The goal is for the new growth to bear its own fruit, and its own seeds which then spread the proven practice far and wide.

Experienced Improvement Science Practitioners know this.

They know that when the seeds of a proven improvement meet resistance then the cultural soil is not ready.  A few hard winters may be needed to break up the clods. Or perhaps the sharp spade of an external inspection is needed to crack through the carapace of complacency.

And competition from the worthless weeds of weak thinking is always present. The bindweed of bureaucracy saps energy and enthusiasm and hacking at it is futile. It only grows even more vigorously.  Weeds need to be approached from the roots upwards. Without roots they will wither.

Purpose, practice, patience, preparation and persistence are the characteristics that lead to sustained success.

And when the new fruits of the improvement tree are ready and the seeds are ripe it is important not to jealously protect and store them away from harsh critique … they need to be scattered to the four winds and to have an opportunity to find fertile soil elsewhere and to establish their own colonies.

Many will not succeed.  And a few will evolve into opportunities that were never anticipated.

That is the way of innovation, germination, dissemination and evolution.

That is the way of Improvement Science.

Change is scary.

Deliberately stepping out of our comfort zones is scary.

We feel the fear – but sometimes we do it anyway. Why? How?

What we do is prepare, and the feeling of fear becomes diluted with a feeling of excitement – and when the balance is right we do it.

So what are the tell-tale signs?

Excitement is a positive emotion – so when we imagine the future and feel excited we unconsciously smile and we feel better afterwards.  We want to share our excitement. We tell others that we are looking forward to the future.

Like birthdays, and holidays, and a new house and a new job. New stuff is exciting when WE decide we want it.


Fear comes from being forced to change and from not having the opportunity to prepare.

Fear happens when change is sprung on us unexpectedly by chance or by someone else.

Fear is a negative emotion and we feel bad afterwards so we avoid it.

So if thinking about the future is dominated by a feeling of fear then we resist and we prevaricate and we get labelled as obstructive, difficult and cynical.

And that makes the fear worse.


So the way to make the future feel exciting is:

1. Set a clear and constant win-win-win purpose.
2. Show that it is possible by sharing examples.
3. Show that it is achievable by sharing the step-by-step process.
4. Provide the opportunity for preparation.
5. Include those that the change affects so that they can plan their own transition.
6. Ensure that those affected know their part in the process.

And do not underestimate how long this takes, nor how much repetition, listening, explanation and respectful challenge it requires – so the sooner this starts the better.

We hear the news, we feel the fear, we build the excitement and then we do it.

That is the way of change.


[Beep, Beep, Beep, Beep, Beeeeep] The reminder roused Bob from deep reflection and he clicked the Webex link on his desktop to start the meeting. Leslie was already online.

<Bob> Hi Leslie. How are you? And what would you like to share and explore today?

<Leslie> Hi Bob, I am well thank you and I would like to talk about chaos again.

<Bob> OK. That is always a rich mine of new insights!  Is there a specific reason?

<Leslie>Yes. The story I want to share is of the chaos that I have been experiencing just trying to get a new piece of software available for my team to use.  You would not believe the amount of time, emails, frustration and angst it has taken to negotiate this through the ‘proper channels’.

<Bob> Let me guess … about six months?

<Leslie> Spot on! How did you know?

<Bob> Just prior experience of similar stories.  So what is your diagnosis of the cause of the chaos?

<Leslie> My intuition shouts at me that people are just being deliberately difficult and that makes me feel angry and want to shout at them … but I have learned that behaviour is counter-productive.

<Bob> So what did you do?

<Leslie> I escalated the ‘problem’ to my line manager.

<Bob> And what did they do?

<Leslie> I am not sure, I was not copied in, but it seemed to clear the ‘obstruction’.

<Bob> And were the ‘people’ you mentioned suddenly happy and willing to help?

<Leslie> Not really … they did what we needed but they did not seem very happy about it.

<Bob> OK.  You are describing a Drama Triangle, a game, and your behaviour was from the Persecutor role.

<Leslie>What! But I deliberately did not send any ANGRY emails or get into a childish argument. I escalated the issue I could not solve because that is what we are expected to do.

<Bob> Yes I know. If you had engaged in a direct angry conversation, by whatever means, that would have been an actively aggressive act.  By escalating the issue and someone Bigger having the angry conversation you have engaged in a passive aggressive act. It is still playing the game from the Persecutor role and in fact is the more common mode of Persecution.

<Leslie> But it got the barrier cleared and the problem sorted?

<Bob> And did it leave everyone feeling happier than before?

<Leslie> I guess not. I certainly felt like a bit of a ‘tale teller’ and the IT technician probably hates me and fears for his job, and the departmental heads probably distrust each other even more than before.

<Bob> So this approach may appear to work in the short term but it creates a much bigger long term problem – and it is that long term problem of ‘distrust’ that creates the chaos. So it is a self-sustaining design.

<Leslie> Oh dear! Is there a way to avoid this and to defuse the chronic distrust?

<Bob> Yes.  You have demonstrated a process that you would like to improve – you want the same short term outcome, your software installed and working, and you want it quicker and with less angst and leaving everyone feeling good about how they have played a part in achieving that objective.

<Leslie>Yes. That would be my ideal.

<Bob>So what is different between what you did and your ‘ideal’ scenario?  What did you do that you should not have and what did you not do that you could have?

<Leslie> Well, I triggered off a drama triangle, which I should not have. I also assumed that the IT people would know what to do because I do not understand the technical nuances of getting new software procured and installed. What I could have done is make it much clearer for them what I needed, why I needed it and how and when I needed it.  I could have done a lot more homework before asking them for assistance. I could also have given my inner Chimp a banana and gone to talk to them face-to-face to ask their opinion early on, so I could see the problem from their perspective as well as mine.

<Bob> Yes – that all sounds reasonable and respectful.  What you are doing is ‘synchronising‘.  You are engaging in understanding the process well enough so that you can align all the actions that need to be done, in the correct order and then sharing that.  It is rather like being the composer of a piece of music – you share the score so that the individual players know what to do and when.  There is one other task you need to do.

<Leslie>I need to be the conductor!

<Bob> Yes.  You are the metronome.  You set the pace and guide the orchestra. They are the specialists with their instruments – that is not your role.

<Leslie> And when I do that then the music is harmonious and pleasing-to-the-ear; not a chaotic cacophony!

<Bob> Indeed … and the music is the voice of the system – it is the feedback that everyone hears – and not only do the musicians derive pleasure from contributing, but the wider audience also gets to hear what can be achieved and see how it is achieved.

<Leslie> Wow!  That musical metaphor works really well for me. Thanks Bob, I need to go and work on my communicating, composing and conducting capabilities.

sudoku

An Improvement-by-Design challenge is very like a Sudoku puzzle. The rules are deceptively simple but solving the puzzle is not so simple.

For those who have never tried a Sudoku puzzle the objective is to fill in all the empty boxes with a number between 1 and 9. The constraint is that each row, column and 3×3 box (outlined in bold) must include all the numbers between 1 and 9 i.e. no duplicates.

What you will find when you try is that, at each point in the puzzle-solving process, there is more than one choice for most empty cells.

The trick is to find the empty cells that have only one option and fill those in. That changes the puzzle and makes it ‘easier’.

And when you keep following this strategy, and so long as you do not make any mistakes, then you will solve the puzzle.  It just takes concentration, attention to detail, and discipline.

In the example above, the top-right cell in the left-box on the middle-row can only hold a 6; and the top-middle cell in the middle-box on the bottom-row must be a 3.

So we can see already there are three ways ‘into’ the solution – put the 6 in and see where that takes us; put the 3 in and see where that takes us; or put both in and see where that takes us.

The final solution will be the same – so there are multiple paths from where we are to our objective.  Some may involve more mental work than others but all will involve completing the same number of empty cells.

What is also clear is that the order in which we complete the empty cells is not arbitrary. Usually the boxes and rows with the fewest empty cells get completed earlier and those with the most empty cells at the start get completed later.

And even if the final configuration is the same, if we start with a different set of missing cells the solution path will be different. It may be very easy, very hard or even impossible without some ‘guessing’ and hoping for the best.
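For the curious, the ‘fill the cells that have only one option’ strategy described above is easy to write down as a little routine. This is my own sketch in Python; the grid representation (a 9×9 list of lists with 0 for an empty cell) is a hypothetical choice, and the actual puzzle in the picture is not reproduced here.

def candidates(grid, r, c):
    # Digits that could legally go in empty cell (r, c):
    # anything from 1-9 not already used in its row, column or 3x3 box.
    used = set(grid[r])
    used |= {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
    return set(range(1, 10)) - used

def fill_singles(grid):
    # Repeatedly fill any empty cell that has exactly one candidate,
    # stopping only when a full pass makes no progress.
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    options = candidates(grid, r, c)
                    if len(options) == 1:
                        grid[r][c] = options.pop()
                        progress = True
    return grid

On an easy puzzle this loop alone completes the grid; on a harder one it stalls part-way and something extra – or some disciplined ‘guessing’ – is needed, which is exactly the point about different starting configurations.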


Exactly the same is true of improvement-by-design challenges.

The rules of flow science are rather simple; but when we have a system of parallel streams (the rows) interacting with parallel stages (the columns); and when we have safety, delivery, and economy constraints to comply with at every part of the system … then finding an ‘improvement plan’ that will deliver our objective is a tough challenge.

But it is possible with concentration, attention-to-detail and discipline; and that requires some flow science training and some improvement science practice.

OK – I am off for lunch and then maybe indulge in a Sudoku puzzle or two – just for fun – and then maybe design an improvement plan or two – just for fun!

 

<Lesley> Hi Bob, how are you today?

<Bob> I’m OK thanks Lesley. Having a bit of a break from the daily grind.

<Lesley> Oh! I am sorry, I had no idea you were on holiday. I will call when you are back at work.

<Bob> No need Lesley. Our chats are always a welcome opportunity to reflect and learn.

<Lesley> OK, if you are sure.  The top niggle on my list at the moment is that I do not feel my organisation values what I do.

<Bob> OK. Have you done the diagnostic Right-2-Left Map® backwards from that top niggle?

<Lesley>Yes. The final straw was that I was asked to justify my improvement role.

<Bob> OK, and before that?

<Lesley> There have been some changes in the senior management team.

<Bob> OK. This sounds like the ‘New Brush Sweeps Clean’ effect.

<Lesley> I have heard that phrase before. What does it mean in this context?

<Bob> Senior management changes are very disruptive events. The more senior the change the more disruptive it is.  Let us call it a form of ‘Disruptive Innovation’.  The trigger for the change is important.  One trigger might be a well-respected and effective leader retiring or moving to an even more senior role.  This leaves a leadership gap which is an opportunity for someone to grow and develop.  Another trigger might be a less-respected and ineffective leader moving on and leaving a trail of rather-too-visible failures. It is the latter that tends to be associated with the New Brush effect.

<Lesley> How is that?

<Bob> Well, put yourself in the shoes of the New Leader who has inherited a Trail of Disappointment – you need to establish your authority and expectations quickly and decisively. Ambiguity and lack of clarity will only contribute to further disappointment.  So you have to ask everyone to justify what they do.  And if they cannot then you need to know that.  And if they can then you need to decide if what they do is aligned with your purpose.  This is the New Brush.

<Lesley> So what if I can justify what I do and that does not fit with the ‘New Leader’s Plan’?

<Bob> If what you do is aligned to your Life Purpose but not with the New Brush then you have to choose.  And experience shows that the road to long-term personal happiness is the one that aligns with your individual purpose.  And often it is just a matter of timing. The New Brush is indiscriminate and impatient – anything that does not fit neatly into the New Plan has to go.

<Lesley> OK my purpose is to improve the safety, flow, quality and productivity of healthcare processes – for the benefit of all. That is not negotiable. It is what fires my passion and fuels my day.  So does it matter really where or how I do that?

<Bob> Not really.  You do need to be mindful of the pragmatic constraints though … your life circumstances.  There are many paths to your Purpose, so it is wise to choose one that is low enough risk to both you and those you love.

<Lesley> Ah! Now I see why you say that timing is important. You need to prepare to be able to make the decision.  You do not want to be caught by surprise and off balance.

<Bob>Yes. That is why as an ISP you always start with your own Purpose and your own Right-2-Left Map®.  Then you will know what to prepare and in what order so that you have the maximum number of options when you have to make a choice.  Sometimes the Universe will create the trigger and sometimes you have to initiate it yourself.

<Lesley> So this is just another facet of Improvement Science?

<Bob>  Yes.

buncefield_fire

Fires are destructive, indifferent, and they can grow and spread very fast.

The picture is of  the Buncefield explosion and conflagration that occurred on 11th December 2005 near Hemel Hempstead in the UK.  The root cause was a faulty switch that failed to prevent tank number 912 from being overfilled. This resulted in an initial 300 gallon petrol spill which created the perfect conditions for an air-fuel explosion.  The explosion was triggered by a spark and devastated the facility. Over 2000 local residents needed to be evacuated and the massive fuel fire took days to bring under control. The financial cost of the accident has been estimated to run into tens of millions of pounds.

The Great Fire of London in September 1666 led directly to the adoption of new building standards – notably brick and stone instead of wood because they are more effective barriers to fire.

A common design to limit the spread of a fire is called a firewall.

And we use the same principle in computer systems to limit the spread of damage when a computer system goes out of control.


Money is the fuel that keeps the wheels of healthcare systems turning.  And healthcare is an expensive business so every drop of cash-fuel is precious.  Healthcare is also a risky business – from both a professional and a financial perspective. Mistakes can quickly lead to loss of livelihood, expensive recovery plans and huge compensation claims. The social and financial equivalent of a conflagration.

Financial fires spread just like real ones – quickly. So it makes good sense not to have all the cash-fuel in one big pot.  It makes sense to distribute it to smaller pots – in each department – and to distribute the cash-fuel intermittently. These cash-fuel silos are separated by robust financial firewalls and they are called Budgets.

The social sparks that ignite financial fires are called ‘Niggles‘.  They are very numerous but we have effective mechanisms for containing them. The problem happens when multiple sparks happen at the same time and place and together create a small chain reaction. Then we get a complaint. A ‘Not Again‘.  And we are required to spend some of our precious cash-fuel investigating and apologizing.  We do not deal with the root cause, we just scrape the burned toast.

And then one day the chain reaction goes a bit further and we get a ‘Near Miss‘.  That has a different  reporting mechanism so it stimulates a bigger investigation and it usually culminates in some recommendations that involve more expensive checking, documenting and auditing of the checking and documentation.  The root cause, the Niggles, go untreated – because there are too many of them.

But this check-and-correct reaction is also  expensive and we need even more cash-fuel to keep the organizational engine running – but we do not have any more. Our budgets are capped. So we start cutting corners. A bit here and a bit there. And that increases the risk of more Niggles, Not Agains, and Near Misses.

Then the ‘Never Event‘ happens … a Safety and Quality catastrophe that triggers the financial conflagration and toasts the whole organization.


So although our financial firewalls, the Budgets, are partially effective they also have downsides:

1. Paradoxically they can create the perfect condition for a financial conflagration when too small a budget leads to corner-cutting on safety.

2. They lead to ‘off-loading’ which means that too-expensive-to-solve problems are chucked over the financial firewalls into the next department.  The cost is felt downstream of the source – in a different department – and is often much larger. The sparks are blown downwind.

For example: a waiting list management department is under financial pressure and is running short-staffed because a recruitment freeze has been imposed. The overburdening of the remaining staff leads to errors in booking patients for operations. The knock-on effect is that patients are cancelled on the day and the allocated operating theatre time is wasted.  The additional cost of wasted theatre time is orders of magnitude greater than the cost-saving achieved in the upstream stage.  The result is a lower quality service, a greater cost to the whole system, and the risk that safety corners will be cut leading to a Near Miss or a Never Event.

The nature of real systems is that small perturbations can be rapidly amplified by a ‘tight’ financial design to create a very large and expensive perturbation called a ‘catastrophe’.  A silo-based financial budget design with a cost-improvement thumbscrew feature increases the likelihood of this universally unwanted outcome.

So if we cannot use one big fuel tank or multiple, smaller, independent fuel tanks then what is the solution?

We want to ensure smooth responsiveness of our healthcare engine, we want healthcare  cash-fuel-efficiency and we want low levels of toxic emissions (i.e. complaints) at the same time. How can we do that?

Fuel-injection.

Electronic Fuel Injection (EFI) designs have now replaced the old-fashioned, inefficient, high-emission carburettor-based engines of the 1970s and 1980s.

The safer, more effective and more efficient cash-flow design is to inject the cash-fuel where and when it is needed and in just the right amount.

And to do that we need to have a robust, reliable and rapid feedback system that controls the cash-injectors.

But we do not have such a feedback system in healthcare so that is where we need to start our design work.

Designing an automated cash-injection system requires understanding how the Seven Flows of any  system work together and the two critical flows are Data Flow and Cash Flow.

And that is possible.
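The text above does not specify what such a feedback design would look like, so treat the following as a purely illustrative sketch (all the names and numbers are mine, not a prescription): a simple proportional feedback rule that injects small, frequent doses of cash into each department, driven by the measured gap between the work actually in progress and the work the design says should be there.

def cash_injection(measured_wip, planned_wip, cost_per_job, gain=0.5):
    # Data Flow in: the measured gap is the feedback signal.
    gap = measured_wip - planned_wip
    # Cash Flow out: inject in proportion to the gap, never a negative amount.
    return max(0.0, gain * gap * cost_per_job)

# One hypothetical weekly cycle for three imaginary departments.
departments = {
    "clinic":  {"measured_wip": 120, "planned_wip": 100, "cost_per_job": 50},
    "theatre": {"measured_wip": 35,  "planned_wip": 40,  "cost_per_job": 900},
    "imaging": {"measured_wip": 210, "planned_wip": 180, "cost_per_job": 30},
}
for name, d in departments.items():
    print(name, cash_injection(**d))

The arithmetic is not the point; the point is that the injection is driven by timely measurement (the Data Flow) rather than by last year's annual budget dollop.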

The image of a tornado is what many associate with improvement.  An unpredictable, powerful force that sweeps away the wood in its path. It certainly transforms – but it leaves a trail of destruction and disappointment in its wake. It does not discriminate between the green wood and the dead wood.

A whirlwind is created by a combination of powerful forces – but the trigger that unleashes the beast is innocuous. The classic ‘butterfly wing effect’. A spark that creates an inferno.

This is not the safest way to achieve significant and sustained improvement. A transformation tornado is a blunt and destructive tool.  All it can hope to achieve is to clear the way for something more elegant. Improvement Science.

We need to build the capability for improvement progressively and to make it effective, efficient, strong, reliable, and resilient. In a word – trustworthy. We need a durable structure.

But what sort of structure?  A tower from whose lofty penthouse we can peer far into the distance?  A bridge between the past and the future? A house with foundations, walls and a roof? Do these man-made edifices meet our criteria?  Well partly.

Let us see what nature suggests. What are the naturally durable designs?

Suppose we have a bag of dry sand – an unstructured mix of individual grains – and that each grain represents an improvement idea.

Suppose we have a specific issue that we would like to improve – a Niggle.

Let us try dropping the Improvement Sand on the Niggle – not in a great big reactive dollop – but in a proactive, exploratory bit-at-a-time way.  What shape emerges?

What we see is illustrated by the hourglass.  We get a pyramid.

The shape of the pyramid is determined by two factors: how sticky the sand is and how fast we pour it.

What we want is a tall pyramid – one whose sturdy pinnacle gives us the capability to see far and to do much.

The stickier the sand the steeper the sides of our pyramid.  The faster we pour the quicker we get the height we need. But there is a limit. If we pour too quickly we create instability – we create avalanches.

So we need to give the sand time to settle into its stable configuration; time for it to trickle to where it feels most comfortable.

And, in translating this metaphor to building improvement capability in a system, we could suggest that the ‘stickiness’ factor is how well ideas hang together, how well individuals get on with each other and how well they share ideas and learning. How cohesive our people are.  Distrust and conflict represent repulsive forces.  Repulsion creates a large, wide, flat structure – stable maybe, but incapable of vision and improvement. That is not what we need.

So when developing a strategy for building improvement capability we build small pyramids where the niggles point to. Over time they will merge and bigger pyramids will appear and merge – until we achieve the height we need. Then we have a stable and capable improvement structure. One that we can use and we can trust.

Just from sprinkling Improvement Science Sand on our Niggles.

[Dring Dring] The telephone soundbite announced the start of the mentoring session.

<Bob> Good morning Leslie. How are you today?

<Leslie> I have been better.

<Bob> You seem upset. Do you want to talk about it?

<Leslie> Yes, please. The trigger for my unhappiness is that last week I received an email demanding that I justify the time I spend doing improvement work and  a summons to a meeting to ‘discuss some issues that have been raised‘.

<Bob> OK. I take it that you do not know what or who has triggered this inquiry.

<Leslie> You are correct. My working hypothesis is that it is the end of the financial year and budget holders are looking for opportunities to do some pruning – to meet their cost improvement program targets!

<Bob> So what is the problem? You have shared the output of your work. You have demonstrated significant improvements in safety, flow, quality and productivity and you have described both them and the methodology clearly.

<Leslie> I know. That is why I was so upset to get this email. It is as if everything that we have achieved has been ignored. It is almost as if it is resented.

<Bob> Ah! You may well be correct.  This is the nature of paradigm shifts. Those who have the greatest vested interest in the current paradigm get spooked when they feel it start to wobble. Each time you share the outcome of your improvement work you create emotional shock-waves. The effects are cumulative and eventually there will be a ‘crisis of confidence’ in those who feel most challenged by the changes that you are demonstrating are possible.  The whole process is well described in Thomas Kuhn’s The Structure of Scientific Revolutions. That is not a book for an impatient reader though – for those who prefer something lighter I recommend “Our Iceberg is Melting” by John Kotter.

<Leslie> Thanks Bob. I will get a copy of Kotter’s book – that sounds more my cup of tea. Will that tell me what to do?

<Bob> It is a parable – a fictional story of a colony of penguins who discover that their iceberg is melting and are suddenly faced with a new and urgent potential risk of not surviving the storms of the approaching winter. It is not a factual account of a real crisis or a step-by-step recipe book for solving all problems  – it describes some effective engagement strategies in general terms.

<Leslie> I will still read it. What I need is something more specific to my actual context.

<Bob> This is an improvement-by-design challenge. The only difference from the challenges you have done already is that this time the outcome you are looking for is a smooth transition from the ‘old’ paradigm to the ‘new’ one.  Kuhn showed that this transition will not start to happen until there is a new paradigm because individuals choose to take the step from the old to the new and they do not all do that at the same time.  Your work is demonstrating that there is a new paradigm. Some will love that message, some will hate it. Rather like Marmite.

<Leslie> Yes, that makes sense.  But how do I deal with an unseen enemy who is stirring up trouble behind my back?

<Bob> Are you referring to those who have ‘raised some issues‘?

<Leslie> Yes.

<Bob> They will be the ones who have most invested in the current status quo and they will not be in senior enough positions to challenge you directly so they are going around spooking the inner Chimps of those who can. This is expected behaviour when the relentlessly changing reality starts to wobble the concrete current paradigm.

<Leslie> Yes! That is  exactly how it feels.

<Bob> The danger lurking here is that your inner Chimp is getting spooked too and is conjuring up Gremlins and Goblins from the Computer! Left to itself your inner Chimp will steer you straight into the Victim Vortex.  So you need to take it for a long walk, let it scream and wave its hairy arms about, listen to it, and give it lots of bananas to calm it down. Then put your calmed-down Chimp into its cage and your ‘paradigm transition design’ into the Computer. Only then will you be ready for the ‘so-justify-yourself’ meeting.  At the meeting your Chimp will be out of its cage like a shot and interpreting everything as a threat. It will disable you and go straight to the Computer for what to do – and it will read your design and follow the ‘wise’ instructions that you have put in there.

<Leslie> Wow! I see how you are using the Chimp Paradox metaphor to describe an incredibly complex emotional process in really simple language. My inner Chimp is feeling happier already!

<Bob> And remember that you are all in the same race. Your collective goal is to cross the finish line as quickly as possible with the least chaos, pain and cost.  You are not in a battle – that is lose-lose inner Chimp thinking.  The only message that your interrogators must get from you is ‘Win-win is possible and here is how we can do it‘. That will be the best way to soothe their inner Chimps – the ones who fear that you are going to sink their boat by rocking it.

<Leslie> That is really helpful. Thank you again Bob. My inner Chimp is now snoring gently in its cage and while it is asleep I have some Improvement-by-Design work to do and then some Computer programming.

“Primum non nocere” is Latin for “First do no harm”.

It is a warning mantra that has been repeated by doctors for thousands of years and for good reason.

Doctors  can be bad for your health.

I am not referring to the rare case where the doctor deliberately causes harm.  Such people are criminals and deserve to be in prison.

I am referring to the much more frequent situation where the doctor has no intention to cause harm – but harm is the outcome anyway.

Very often the risk of harm is unavoidable. Healthcare is a high risk business. Seriously unwell patients can be very unstable and very unpredictable.  Heroic efforts to do whatever can be done can result in unintended harm and we have to accept those risks. It is the nature of the work.  Much of the judgement in healthcare is balancing benefit with risk on a patient by patient basis. It is not an exact science. It requires wisdom, judgement, training and experience. It feels more like an art than a science.

The focus of this essay is not the above. It is on unintentionally causing avoidable harm.

Or rather unintentionally not preventing avoidable harm which is not quite the same thing.

Safety means prevention of avoidable harm. A safe system is one that does that. There is no evidence of harm to collect. A safe system does not cause harm. Never events never happen.

Safe systems are designed to be safe.  The root causes of harm are deliberately designed out one way or another.  But it is not always easy because to do that we need to understand the cause-and-effect relationships that lead to unintended harm.  Very often we do not.


In 1847 a doctor called Ignaz Semmelweis made a very important discovery. He discovered that if the doctors and medical students washed their hands in disinfectant when they entered the labour ward, then the number of mothers and babies who died from infection was reduced.

And the number dropped a lot.

It fell from an annual average of 10% to less than 2%!  In really bad months the rate was 30%.

The chart below shows the actual data plotted as a time-series chart. The yellow flag in 1848 is just after Semmelweis enforced a standard practice of hand-washing.

Vienna_Maternal_Mortality_1785-1848

Semmelweis did not know the mechanism though. This was not a carefully designed randomised controlled trial (RCT). He was desperate. And he was desperate because this horrendous waste of young lives was only happening on the doctors’ ward.  On the nurses’ ward, which was just across the corridor, the maternal mortality was less than 2%.

The hospital authorities explained it away as ‘bad air’ from outside. That was the prevailing belief at the time. Unavoidable. A risk that had to be just accepted.

Semmelweis could not do a randomised controlled trial because RCTs were not invented until a century later.

And Semmelweis suspected that the difference between the mortality on the nurses’ and the doctors’ wards was something to do with the Mortuary. Only the doctors performed the post-mortems and the practice of teaching anatomy to medical students using post-mortem dissection was an innovation pioneered in Vienna in 1823 (the first yellow flag on the chart above). But Semmelweis did not have this data in 1847.  He collated it later and did not publish it until 1861.

What Semmelweis demonstrated was that the unintended and avoidable deaths were caused by ignorance of the mechanism by which microorganisms cause disease. We know that now. He did not.

It would be another 20 years before Louis Pasteur demonstrated the mechanism using the famous experiment with the swan neck flask. Pasteur did not discover microorganisms;  he proved that they did not appear spontaneously in decaying matter as was believed. He proved that by killing the bugs by boiling, the broth in the flask  stayed fresh even though it was exposed to the air. That was a big shock but it was a simple and repeatable experiment. He had a mechanism. He was believed. Germ theory was born. A Scottish surgeon called Joseph Lister read of this discovery and surgical antisepsis was born.

Semmelweis suspected that some ‘agent’ may have been unwittingly transported from the dead bodies to the live mothers and babies on the hands of the doctors.  It was a deeply shocking suggestion that the doctors were unwittingly killing their patients.

The other doctors did not take this suggestion well. Not well at all. They went into denial. They discounted the message and they discharged the messenger. Semmelweis never worked in Vienna again. He went back to Hungary and repeated the experiment. It worked.


Even today the message that healthcare practitioners can unwittingly bring avoidable harm to their patients is disturbing. We still seek solace in denial.

Hospital acquired infections (HAI) are a common cause of harm and many are avoidable using simple, cheap and effective measures such as hand-washing.

The harm does not come from what we do. It comes from what we do not do. It happens when we omit to follow the simple safety measures that have been proven to work. Scientifically. Statistically Significantly. Understood and avoidable errors of omission.


So how is this “statistically significant scientific proof” acquired?

By doing experiments. Just like the one Ignaz Semmelweis conducted. But the improvement he showed was so large that it did not need statistical analysis to validate it.  And anyway such analysis tools were not available in 1847. If they had been he might have had more success influencing his peers. And if he had achieved that goal then thousands, if not millions, of deaths from hospital acquired infections may have been prevented.  With the clarity of hindsight we now know this harm was avoidable.

No. The problem we have now is that the improvement that follows a single intervention is not very large. And when the causal mechanisms are multi-factorial we need more than one intervention to achieve the improvement we want. The big reduction in avoidable harm. How do we do that scientifically and safely?


About 20% of hospital acquired infections occur after surgical operations.

We have learned much since 1847 and we have designed much safer surgical systems and processes. Joseph Lister ushered in the era of safe surgery, and much has happened since.

We routinely use carefully designed, ultra-clean operating theatres, sterilized surgical instruments, gloves and gowns, and aseptic techniques – all to reduce bacterial contamination from outside.

But surgical site infections (SSIs) are still commonplace. Studies show that 5% of patients on average will suffer this complication. Some procedures are much higher risk than others, despite the precautions we take.  And many surgeons assume that this risk must just be accepted.

Others have tried to understand the mechanism of SSI and their research shows that the source of the infections is the patients themselves. We all carry a ‘bacterial flora’ and normally that is no problem. Our natural defense – our skin – is enough.  But when that biological barrier is deliberately breached during a surgical operation then we have a problem. The bugs get in and cause mischief. They cause surgical site infections.

So we have done more research to test interventions to prevent this harm. Each intervention has been subject to well-designed, carefully-conducted, statistically-valid and very expensive randomized controlled trials.  And the results are often equivocal. So we repeat the trials – bigger, better controlled trials. But the effects of the individual interventions are small and they easily get lost in the noise. So we pool the results of many RCTs in what is called a ‘meta-analysis’ and the answer from that is very often ‘not proven’ – either way.  So individual surgeons are left to make the judgement call and not surprisingly there is wide variation in practice.  So is this the best that medical science can do?

No. There is another way. What we can do is pool all the learning from all the trials and design a multi-faceted intervention. A bundle of care. And the idea of a bundle is that the separate small effects will add or even synergise to create one big effect.  We are not so much interested in the mechanism as the outcome. Just like Ignaz Semmelweis.
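As a purely illustrative piece of arithmetic (the effect sizes below are invented, not taken from any trial): even if each element of a bundle only shaves a modest slice off the risk, and even if the effects merely multiply rather than synergise, the combined effect can still be substantial.

# Invented numbers, for illustration only: five interventions, each assumed
# (independently) to leave 85% of the previous risk, i.e. a 15% reduction each.
relative_risks = [0.85] * 5

combined = 1.0
for rr in relative_risks:
    combined *= rr

print(round(combined, 2))   # about 0.44 - roughly a 56% reduction overall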

And we can now do something else. We can test our bundle of care using statistically robust tools that do not require an RCT.  They are just as statistically valid as an RCT but use a different design.

And the appropriate tool for this is to measure the time interval between the adverse events – and then to plot this continuous metric as a time-series chart.

But we must be disciplined. First we must establish the baseline average interval and then we introduce our bundle and then we just keep measuring the intervals.

If our bundle works then the interval between the adverse events gets longer – and we can easily prove that using our time-series chart. The longer the interval the more ‘proof’ we have.  In fact we can even predict how long we need to observe to prove that ‘no events’ is a statistically significant improvement. That is an elegant and efficient design.
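Here is a minimal sketch (mine, not the actual analysis behind the example that follows) of that discipline: record the dates of the adverse events, convert them to intervals, establish the baseline average, and then watch for an interval long enough to be surprising. The ‘how long is long enough’ comment assumes exponentially distributed intervals, which is my simplifying assumption and not a claim made in the original.

from datetime import date
from statistics import mean

# Hypothetical event dates for the baseline period (not real study data).
baseline_events = [date(2013, 1, 4), date(2013, 1, 16), date(2013, 2, 2),
                   date(2013, 2, 14), date(2013, 3, 5)]

# Convert event dates into the continuous metric: days between events.
intervals = [(b - a).days for a, b in zip(baseline_events, baseline_events[1:])]
baseline_mean = mean(intervals)           # the baseline average interval

# Under an exponential model the chance of a gap of at least t days is
# exp(-t / mean), so a gap of about three times the baseline mean is already
# a roughly 1-in-20 event - observe that long with no event after introducing
# the bundle and the improvement is becoming statistically convincing.
surprise_threshold = 3 * baseline_mean

print(intervals, baseline_mean, surprise_threshold)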


Here is a real and recent example.

The time-series chart below shows the interval in days between surgical site infections following routine hernia surgery. These are not life-threatening complications. They rarely require re-admission or re-operation. But they are disruptive for patients. They cause pain, require treatment with antibiotics, and they delay recovery and return to normal activities. So we would like to avoid them if possible.

Hernia_SSI_CareBundle

The green and red lines show the baseline period. The  green line says that the average interval between SSIs is 14 days.  The red line says that an interval more than about 60 days would be surprisingly long: valid statistical evidence of an improvement.  The end of the green and red lines indicates when the intervention was made: when the evidence-based designer care bundle was adopted together with the discipline of applying it to every patient. No judgement. No variation.

The chart tells the story. No complicated statistical analysis is required. It shows a statistically significant improvement.  And the SSI rate fell by over 80%. That is a big improvement.
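A quick sanity check of those two numbers (my own arithmetic, assuming exponentially distributed intervals; the chart's actual limit calculation may be different and a little more conservative):

import math

baseline_mean = 14    # days - the 'green line' quoted above
flag_interval = 60    # days - the 'surprisingly long' red line

print(round(math.exp(-flag_interval / baseline_mean), 3))
# about 0.014: a gap that long had roughly a 1-in-70 chance under the
# baseline behaviour, so calling it surprisingly long is fair.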

We still do not know how the care bundle works. We do not know which of the seven simultaneous simple and low-cost interventions we chose are the most important or even if they work independently or in synergy.  Knowledge of the mechanism was not our goal.

Our goal was to improve outcomes for our patients – to reduce avoidable harm – and that has been achieved. The evidence is clear.

That is Improvement Science in action.

And to read the full account of this example of the Science of Improvement please go to:

http://www.journalofimprovementscience.net

It is essay number 18.

And avoid another error of omission. Do not omit to share this message – it is important.

Improvement implies change.
Change implies action.
Action implies decision.

So how is the decision made?
With Urgency?
With Understanding?

Bitter experience teaches us that often there is an argument about what to do and when to do it.  An argument between two factions. Both are motivated by a combination of anger and fear. One side is motivated more by anger than fear. They vote for action because of the urgency of the present problem. The other side is motivated more by fear than anger. They vote for inaction because of their fear of future failure.

The outcome is unhappiness for everyone.

If the ‘action’ party wins the vote and a failure results then there is blame and recrimination. If the ‘inaction’ party wins the vote and a failure results then there is blame and recrimination. If either party achieves a success then there is both gloating and resentment. Lose Lose.

The issue is not the decision and how it is achieved. The problem is the battle.

Dr Steve Peters is a psychiatrist with 30 years of clinical experience.  He knows how to help people succeed in life through understanding how the caveman wetware between their ears actually works.

In the run up to the 2012 Olympic games he was the sports psychologist for the multiple-gold-medal winning UK Cycling Team.  The World Champions. And what he taught them is described in his book – “The Chimp Paradox“.

Steve brilliantly boils the current scientific understanding of the complexity of the human mind down into a simple metaphor.

One that is accessible to everyone.

The metaphor goes like this:

There are actually two ‘beings’ inside our heads. The Chimp and the Human. The Chimp is the older, stronger, more emotional and more irrational part of our psyche. The Human is the newer, weaker, logical and rational part.  Also inside there is the Computer. It is just a memory where both the Chimp and the Human store information for reference later. Beliefs, values, experience. Stuff like that. Stuff they use to help them make decisions.

And when some new information arrives through our senses – sight and sound for example – the Chimp gets first dibs and uses the Computer to look up what to do.  Long before the Human has had time to analyse the new information logically and rationally. By the time the Human has even started on solving the problem the Chimp has come to a decision and signaled it to the Human and associated it with a strong emotion. Anger, Fear, Excitement and so on. The Chimp operates on basic drives like survival-of-the-self and survival-of-the-species. So if the Chimp gets spooked or seduced then it takes control – and it is the stronger so it always wins the internal argument.

But the Human is responsible for the actions of the Chimp. As Steve Peters says ‘If your dog bites someone you cannot blame the dog – you are responsible for the dog‘.  So it is with our inner Chimps. Very often we end up apologising for the bad behaviour of our inner Chimp.

Because our inner Chimp is the stronger we cannot ‘control’ it by force. We have to learn how to manage the animal. We need to learn how to soothe it and to nurture it. And we need to learn how to remove the Gremlins that it has programmed into the Computer. Our inner Chimp is not ‘bad’ or ‘mad’ it is just a Chimp and it is an essential part of us.

Real chimpanzees are social, tribal and territorial.  They live in family groups and the strongest male is the boss. And it is now well known that a troop of chimpanzees in the wild can plan and wage battles to acquire territory from neighbouring troops. With casualties on both sides.  And so it is with people when their inner Chimps are in control.

Which is most of the time.

Scenario:
A hospital is failing one of its performance targets – the 18 week referral-to-treatment one – and is being threatened with fines and potential loss of its autonomy. The fear at the top drives the threat downwards. Operational managers are forced into action and do so using strategies that have not worked in the past. But they do not have time to learn how to design and test new ones. They are bullied into Plan-Do mode. The hospital is also required to provide safe care and the Plan-Do knee-jerk triggers fear-of-failure in the minds of the clinicians who then angrily oppose the diktat or quietly sabotage it.

This lose-lose scenario is being played out in hundreds, if not thousands, of hospitals across the globe as we speak.  The evidence is there for everyone to see.

The inner Chimps are in charge and the outcome is a turf war with casualties on all sides.

So how does The Chimp Paradox help dissolve this seemingly impossible challenge?

First it is necessary to appreciate that both sides are being controlled by their inner Chimps who are reacting from a position of irrational fear and anger. This means that everyone’s behaviour is irrational and their actions likely to be counter-productive.

What is needed is for everyone to be managing their inner Chimps so that the Humans are back in control of the decision making. That way we get wise decisions that lead to effective actions and win-win outcomes. Without chaos and casualties.

To do this we all need to learn how to manage our own inner Chimps … and that is what “The Chimp Paradox” is all about. That is what helped the UK cyclists to become gold medalists.

In the scenario painted above we might observe that the managers are more comfortable in the Pragmatist-Activist (PA) half of the learning cycle. The Plan-Do part of PDSA  – to translate into the language of improvement. The clinicians appear more comfortable in the Reflector-Theorist (RT) half. The Study-Act part of PDSA.  And that difference of preference is fueling the firestorm.

Improvement Science tells us that to achieve and sustain improvement we need all four parts of the learning cycle working  smoothly and in sequence.

So what at first sight looks like it must be a pitched battle that will result in two losers could, in reality, be a three-legged race that results in everyone winning. But only if synergy between the PA and the RT halves can be achieved.

And that synergy is achieved by learning to respect, understand and manage our inner Chimps.

[Beep Beep] The alarm on Bob’s smartphone was the reminder that in a few minutes his e-mentoring session with Lesley was due. Bob had just finished the e-mail he was composing so he sent it and then fired-up the Webex session. Lesley was already logged in and online.

<Bob> Hi Lesley. What aspect of Improvement Science shall we talk about today? What is next on your map?

<Lesley> Hi Bob. Let me see. It looks like ‘Employee Engagement‘ is the one that we have explored least yet – and it links to lots of other things.

<Bob> OK. What would you say the average level of Employee Engagement is in your organisation at the moment? On a scale of zero to ten where zero is defined as ‘complete apathy’.

<Lesley> Good question. I see a wide range of engagement and I would say the average is about four out of ten.  There are some very visible, fully-engaged, energetic, action-focused  movers-and-shakers.  There are many more nearer the apathy end of the spectrum. Most employees seem to turn up, do their jobs well enough to avoid being disciplined, and then go home.

<Bob> OK. And do you feel that is a problem?

<Lesley> You betcha!  Improvement means change and change means action.  Disengaged employees are a deadweight. They do not actively block change – they will go along with it if pushed – but they do not contribute to making it happen. And that creates a different problem. The movers-and-shakers get frustrated, eventually tire of trying to move the deadweight uphill, and give up – becoming increasingly critical and then cynical. After they give up in despair they actively block any new ideas, saying “Do not try, you will fail.”

<Bob> So how would you describe the emotional state of those you describe as “disengaged”?

<Lesley> Miserable.

<Bob> And who is making them feel miserable?

<Lesley> That is another good question. They appear to be making themselves feel miserable. And it is not what is happening that triggers this emotion. It is what is not happening. Apathy seems to be self-sustaining.

<Bob> Can you explain in a bit more about what you mean by that and maybe share an example?

<Lesley> An example is easier.  I have reflected on this a bit and I have used one of the 6M Design® techniques to help me understand it better.  I used a Right-2-Left® map to compare a personal example of when I felt really motivated and delivered a significant and measurable improvement with one where I felt miserable and no change happened.

<Bob> Excellent. What did you discover?

<Lesley> I discovered that there were four classes of  difference between the two examples. And I then understood what you mean by ‘Acts and Errors of  Omission and Commission’.

<Bob> OK. And which was the commonest of the four combinations in your example?

<Lesley> The Errors of Omission. And within just that group there were three different types that were most obvious.

<Bob> Can you list them for me?

<Lesley> For sure. The first is the miserableness I felt when what I was doing seemed to me to be irrelevant. When what I was being asked to do had no worthwhile purpose that I was aware of.

<Bob> So which was it? No worth or not being aware of the worth?

<Lesley> Me not being aware of the worth. I hoped it was of value to someone higher up the corporate food chain otherwise I would not have been asked to do it! But I was never sure. And that uncertainty generated some questions. What if what I am doing is of no worth to anyone? What if I am just wasting my lifetime doing it? That fearful thought left me feeling more miserable than motivated.

<Bob> OK. What was the second Error of Omission?

<Lesley> It is linked to the first one. I had no objective way of knowing if I was doing a worthwhile job.  And the word objective is important.  I am not asking for subjective feedback – there is too much expectation, variation, assumption, prejudgement and politics mixed up in opinions of what I achieve.  I needed specific, objective and timely feedback. I associated my feeling of miserableness with not getting objective feedback that told me what I was doing was making a worthwhile difference to someone else. Anyone else!

<Bob> I thought that you get a lot of objective feedback on a whole raft of organisational performance metrics?

<Lesley> Oh yes! We do!! The problem is that it is high level, aggregated, anonymous, and delayed. To get a copy of a report that says as an organisation we did or did not meet last quarter’s arbitrary performance target for x, y or z usually generates a ‘So what? How does that relate to what I do?’ reaction. I need objective, specific and timely feedback about the effects of my work. Good or bad.

<Bob> OK.  And Error of Omission Three?

<Lesley> This was the trickiest one to nail down. What it came down to was being treated as a process and not as a person.  I felt anonymous.  I was just  a headcount, a number on a payroll ledger, an overhead cost. That feeling was actually the most demotivating of all.

<Bob> And did it require all Three Errors of Omission to be present for the ‘miserableness’ to become manifest?

<Lesley> Alas no! Any one of them was enough. The more of them at the same time, the deeper the feeling of misery and the less motivated I felt.

<Bob> Thank you for being so frank and open. So what have you ‘abstracted’ from your ‘reflection’?

<Lesley> That employee engagement requires that these Three Errors of Omission must be deliberately checked for and proactively addressed if discovered.

<Bob> And who would, could or should do this check-and-correct work?

<Lesley> H’mm. Another very good question. The employee could do it but it is difficult for them because a lot of the purpose-setting and feedback comes from outside their circle of control and from higher up. Approaching a line-manager with a list of their Errors of Omission will be too much of a challenge!

<Bob> So?

<Lesley> The manager should do it.  They should ask themselves these questions.  Only they can correct their  own Errors of Omission.  I doubt if that would happen spontaneously though! Humility seems a bit of a rare commodity.

<Bob> I agree. So what can the employee do to help their boss?

<Lesley> They could ask how they can be of most value to their boss and they could ask for objective and timely feedback on how well they are performing as an individual on those measures of worth. It sounds so simple and obvious when said out loud. So why does no one do it?

<Bob> A very good question. Some do, and they are often described as ‘motivating leaders’. So does this insight suggest to you any strategies for grasping the ‘Employee Engagement’ nettle without getting stung?

<Lesley> Yes indeed! I am already planning my next action. A chat with my line-manager about what I could do. Thanks Bob.

<Bob> My pleasure. And remember that the same principle works for everyone that we work directly with – especially those immediately ‘upstream’ and ‘downstream’ of us in our daily work.

ViewFromSpace

This is a picture of Chris Hadfield. He is an astronaut and to prove it here he is in the ‘cupola’ of the International Space Station (ISS). Through the windows is a spectacular view of the Earth from space.

Our home seen from space.

What is remarkable about this image is that it even exists.

This image is tangible evidence of a successful outcome of a very long path of collaborative effort by hundreds of thousands of people who share a common dream.

That if we can learn to overcome the challenge of establishing a permanent manned presence in space then just imagine what else we might achieve?

Chris is unusual for many reasons.  One is that he is Canadian and there are not many Canadian astronauts. He is also the first Canadian astronaut to command the ISS.  Another claim to fame is that when he recently lived in space for 5 months on the ISS, he recorded a version of David Bowie’s classic song – for real – in space. To date this has clocked up 21 million YouTube hits and has helped to bring the inspiring story of space exploration back to the public consciousness.

Especially the next generation of explorers – our children.

Chris has also written a book ‘An Astronaut’s View of Life on Earth‘ that tells his story. It describes how he was inspired at a young age by seeing the first man to step onto the Moon in 1969.  He overcame seemingly impossible obstacles to become an astronaut, to go into space, and to command the ISS.  The image is tangible evidence.

We all know that space is a VERY dangerous place.  I clearly remember the two space shuttle disasters. There have been many other much less public accidents.  Those tragic events have shocked us all out of complacency and have created a deep sense of humility in those who face up to the task of learning to overcome the enormous technical and cultural barriers.

Getting six people into space safely, staying there long enough to conduct experiments on the long-term effects of weightlessness, and getting them back again safely is a VERY difficult challenge.  And it has been overcome. We have the proof.

Many of the seemingly impossible day-to-day problems that we face seem puny in comparison.

For example: getting every patient into hospital, staying there just long enough to benefit from cutting edge high-technology healthcare, and getting them back home again safely.

And doing it repeatedly and consistently so that the system can be trusted and we are not greeted with tragic stories every time we open a newspaper. Stories that erode our trust in the ability of groups of well-intended people to do anything more constructive than bully, bicker and complain.

So when the exasperated healthcare executive exclaims ‘Getting 95% of emergency admissions into hospital in less than 4 hours is not rocket science!‘ – then perhaps a bit more humility is in order. It is rocket science.

Rocket science is Improvement science.

And reading the story of a real-life rocket-scientist might be just the medicine our exasperated executives need.

Because Chris explains exactly how it is done.

And he is credible because he has walked-the-talk so he has earned the right to talk-the-walk.

The least we can do is listen and learn.

Here is Chris answering the question ‘How to achieve an impossible dream?’

The emotional roller-coaster ride that is associated with change, learning and improvement is called the Nerve Curve.

We are all very familiar with the first stages – of Shock, Denial, Anger, Bargaining, Depression and Despair.  We are less familiar with the stages associated with the long climb out to Resolution: because most improvement initiatives fail for one reason or another.

The critical first step is to “Disprove Impossibility” and this is the first injection of innovation. Someone (the ‘innovator’) discovers that what was believed to be impossible is not. They only need one example. One Black Swan.

The tougher task is to influence those languishing in the ‘Depths of Despair’ that there is hope and that there is a ‘how’. This is not easy because cynicism is toxic to innovation.  So an experienced Improvement Science Practitioner (ISP) bypasses the cynics and engages with the depressed-but-still-healthy-skeptics.

The challenge now is how to get a shed load of them up the hill.

When we first learn to drive we start on the flat, not on hills,  for a very good reason. Safety.

We need to learn to become confident with the controls first. The brake, the accelerator, the clutch and the steering wheel.  This takes practice until it is comfortable, unconscious and almost second nature. We want to achieve a smooth transition from depression to delight, not chaotic kangaroo jumps!

Only when we can do that on the flat do we attempt a hill-start. And the key to a successful hill start is the sequence.  Hand brake on  for safety, out of gear, engine running, pointing at the goal. Then we depress the clutch and select a low gear – we do not want to stall. Speed is not the goal. Safety comes first. Then we rev the engine to give us the power we need to draw on. Then we ease the clutch until the force of the engine has overcome the force of gravity and we feel the car wanting to move forward. And only then do we ease the handbrake off, let the clutch out more and hit the gas to keep the engine revs in the green.

So when we are planning to navigate a group of healthy skeptics up the final climb of the Nerve Curve we need to plan and prepare carefully.

What is least likely to be successful?

Well, if all we have is our own set of wheels,  a cheap and cheerful mini-motor, then it is not going to be a good idea to shackle a trailer to it; fill the trailer with skeptics and attempt a hill start. We will either stall completely or burn out our clutch. We may even be dragged backwards into the Cynic Infested Toxic Swamp.

So what if we hire a bus, load up our skeptical passengers, and have a go?  We may be lucky – but if we have no practice doing hill starts with a full bus then we could be heading for disappointment; or disaster.

So what is a safer plan:
1) First we need to go up the mountain ourselves to demonstrate it is possible.
2) Then we take one or two of the least skeptical up in our car to show it is safe.
3) We then invite those skeptics with cars to learn how to do safe hill starts.
4) Finally we ask the ex-skeptics to teach the fresh-skeptics how to do it.

Brmmmm Brmmmm. Off we go.

[Dring] Bob’s laptop signaled the arrival of Leslie for their regular ISP remote mentoring session.

<Bob> Hi Leslie. Thanks for emailing me with a long list of things to choose from. It looks like you have been having some challenging conversations.

<Leslie> Hi Bob. Yes indeed! The deepening gloom and the last few blog topics seem to be polarising opinion. Some are claiming it is all hopeless and others, perhaps out of desperation, are trying the FISH stuff for themselves and discovering that it works.  The ‘What Ifs’ are engaged in a war of words with the ‘Yes Buts’.

<Bob> I like your metaphor! Where would you like to start on the long list of topics?

<Leslie> That is my problem. I do not know where to start. They all look equally important.

<Bob> So, first we need a way to prioritise the topics to get the horse-before-the-cart.

<Leslie> Sounds like a good plan to me!

<Bob> One of the problems with the traditional improvement approaches is that they seem to start at the most difficult point. They focus on ‘quality’ first – and to be fair that has been the mantra from the gurus like W.E.Deming. ‘Quality Improvement’ is the Holy Grail.

<Leslie>But quality IS important … are you saying they are wrong?

<Bob> Not at all. I am saying that it is not the place to start … it is actually the third step.

<Leslie>So what is the first step?

<Bob> Safety.  Eliminating avoidable harm. Primum Non Nocere. The NooNoos. The stuff that generates the most fear for everyone.

<Leslie> You mean having a service that we can trust not to harm us unnecessarily?

<Bob> Yes. It is not a good idea to make an unsafe design more efficient – it will deliver even more harm!

<Leslie> OK. That makes perfect sense to me. So how do we do that?

<Bob> The specific method matters less than you might think.  Well-designed and thoroughly field-tested checklists have been proven to be very effective in the ‘ultra-safe’ industries like aerospace and nuclear.

<Leslie> OK. Something like the WHO Safe Surgery Checklist?

<Bob> Yes, that is a good example – and it is well worth reading Atul Gawande’s book about how that happened – “The Checklist Manifesto“.   Gawande is a surgeon who had published a lot on improvement and even so was quite skeptical that something as simple as a checklist could possibly work in the complex world of surgery. In his book he describes a number of personal ‘Ah Ha!’ moments that illustrate a phenomenon that I call Jiggling.

<Leslie> OK. I have made a note to read Checklist Manifesto and I am curious to learn more about Jiggling – but can we stick to the point? Does quality come after safety?

<Bob> Yes, but not immediately after. As I said, Quality is the third step.

<Leslie> So what is the second one?

<Bob> Flow.

There was a long pause – and just as Bob was about to check that the connection had not been lost – Leslie spoke.

<Leslie> But none of the Improvement Schools teach basic flow science.  They all focus on quality, waste and variation!

<Bob> I know. And attempting to improve quality before improving flow is like papering the walls before doing the plastering.  Quality cannot grow in a chaotic context. The flow must be smooth before that. And the fear of harm must be removed first.

<Leslie> So the ‘Improving Quality through Leadership‘ bandwagon that everyone is jumping on will not work?

<Bob> Well that depends on what the ‘Leaders’ are doing. If they are leading the way to learning how to design-for-safety and then design-for-flow then the bandwagon might be a wise choice. If they are only facilitating collaborative agreement and group-think then they may be making an ineffective system more efficient which will steer it over the edge into faster decline.

<Leslie>So, if we can stabilize safety using checklists do we focus on flow next?

<Bob>Yup.

<Leslie> OK. That makes a lot of sense to me. So what is Jiggling?

<Bob> This is Jiggling. This conversation.

<Leslie> Ah, I see. I am jiggling my understanding through a series of ‘nudges’ by you.

<Bob>Yes. And when the learning cogs are a bit rusty, some Improvement Science Oil and a bit of Jiggling is more effective and much safer than whacking the caveman wetware with a big emotional hammer.

<Leslie>Well the conversation has certainly jiggled Safety-Flow-Quality-and-Productivity into a sensible order for me. That helped a lot. I will sort my to-do list into that order and start at the beginning. Let me see. I have a plan for safety, now I can focus on flow. Here is my top flow niggle. How do I design the resource capacity I need to ensure the flow is smooth and the waiting times are short enough to avoid ‘persecution’ by the Target Time Police?

<Bob> An excellent question! I will send you the first ISP Brainteaser that will nudge us towards an answer to that question.

<Leslie> I am ready and waiting to have my brain-teased and my niggles-nudged!

[Figure: London_Underground]
Systems are built from intersecting streams of work called processes.

This iconic image of the London Underground shows a system map – a set of intersecting transport streams.

Each stream links a sequence of independent steps – in this case the individual stations.  Each step is a system in itself – it has a set of inner streams.

For a system to exhibit stable and acceptable behaviour the steps must be in synergy – literally ‘together work’. The steps also need to be in synchrony – literally ‘same time’. And to do that they need to be aligned to a common purpose.  In the case of a transport system the design purpose is to get from A to B safely, quickly, in comfort and at an affordable cost.

In large socioeconomic systems called ‘organisations’ the steps represent groups of people with special knowledge and skills that collectively create the desired product or service.  This creates an inevitable need for ‘handoffs’ as partially completed work flows through the system along streams from one step to another. Each step contributes to the output. It is like a series of baton passes in a relay race.

This creates the requirement for a critical design ingredient: trust.

Each step needs to be able to trust the others to do their part:  right-first-time and on-time.  All the steps are directly or indirectly interdependent.  If any one of them is ‘untrustworthy’ then the whole system will suffer to some degree. If too many generate dis-trust then the system may fail and can literally fall apart. Trust is like social glue.

So a critical part of people-system design is the development and the maintenance of trust-bonds.

And it does not happen by accident. It takes active effort. It requires design.

We are social animals. Our default behaviour is to trust. We learn distrust by experiencing repeated disappointments. We are not born cynical – we learn that behaviour.

The default behaviour for inanimate systems is disorder – and it has a fancy name – it is called ‘entropy’. There is a Law of Physics that says that ‘the average entropy of an isolated system will increase over time‘. The critical word is ‘average’.

So, if we are not aware of this and we omit to pay attention to the hand-offs between the steps we will observe increasing disorder which leads to repeated disappointments and erosion of trust. Our natural reaction then is ‘self-protect’ which implies ‘check-and-reject’ and ‘check-and-correct’. This adds complexity and bureaucracy and may prevent further decline – which is good – but it comes at a cost – quite literally.

Eventually an equilibrium will be achieved where our system performance is limited by the amount of check-and-correct bureaucracy we can afford.  This is called a ‘mediocrity trap’ and it is very resilient – which means resistant to change in any direction.


To escape from the mediocrity trap we need to break into the self-reinforcing check-and-reject loop and we do that by developing a design that challenges ‘trust eroding behaviour’.  The strategy is to develop a skill called  ‘smart trust’.

To appreciate what smart trust is we need to view trust as a spectrum: not as a yes/no option.

At one end is ‘nonspecific distrust’ – otherwise known as ‘cynical behaviour’. At the other end is ‘blind trust’ – otherwise known as ‘gullible behaviour’.  Neither of these is what we need.

In the middle is the zone of smart trust that spans healthy scepticism  through to healthy optimism.  What we need is to maintain a balance between the two – not to eliminate them. This is because some people are ‘glass-half-empty’ types and some are ‘glass-half-full’. And both views have a value.

The action required to develop smart trust is to respectfully challenge every part of the organisation to demonstrate ‘trustworthiness’ using evidence.  Rhetoric is not enough. Politicians always score very low on ‘most trusted people’ surveys.

The first phase of this smart trust development is for steps to demonstrate trustworthiness to themselves using their own evidence, and then to share this with the steps immediately upstream and downstream of them.

So what evidence is needed?

Safety comes first. If a step cannot be trusted to be safe then that is the first priority. Safe systems need to be designed to be safe.

Flow comes second. If the streams do not flow smoothly then we experience turbulence and chaos which increases stress,  the risk of harm and creates disappointment for everyone. Smooth flow is the result of careful  flow design.

Third is Quality which means ‘setting and meeting realistic expectations‘.  This cannot happen in an unsafe, chaotic system.  Quality builds on Flow which builds on Safety. Quality is a design goal – an output – a purpose.

Fourth is Productivity (or profitability) and that does not automatically follow from the other three as some QI Zealots might have us believe. It is possible to have a safe, smooth, high quality design that is unaffordable.  Productivity needs to be designed too.  An unsafe, chaotic, low quality design is always more expensive.  Always. Safe, smooth and reliable can be highly productive and profitable – if designed to be.

So whatever the driver for improvement the sequence of questions is the same for every step in the system: “How can I demonstrate evidence of trustworthiness for Safety, then Flow, then Quality and then Productivity?”

And when that happens improvement will take off like a rocket. That is the Speed of Trust.  That is Improvement Science in Action.

The modern era in science started about 500 years ago when an increasing number of people started to challenge the dogma that our future is decided by Fates and Gods. That we had no influence. And to appease the ‘Gods’ we had to do as we were told. That was our only hope of Salvation.

This paradigm came under increasing pressure as the evidence presented by Reality did not match the Rhetoric.  Many early innovators paid for their impertinence with their fortunes, their freedom and often their future. They were burned as heretics.

When the old paradigm finally gave way and the Age of Enlightenment dawned the pendulum swung the other way – and the new paradigm became the ‘mechanical universe’. Isaac Newton showed that it was possible to predict, with very high accuracy, the motion of the planets just by adopting some simple rules and a new form of mathematics called calculus. This opened a door into a more hopeful world – if Nature follows strict rules and we know what they are then we can learn to control Nature and get what we need without having to appease any Gods (or priests).

This was the door to the Industrial Revolutions – there have been more than one – each lasting about 100 years (18th C, 19th C and 20th C). Each was associated with massive population growth as we systematically eliminated the causes of early mortality – starvation and infectious disease.

But not everything behaved like the orderly clockwork of the planets and the pendulums. There was still the capricious and unpredictable behaviour that we call Lady Luck.  Had the Gods retreated, but were they still playing dice?

Progress was made here too – and the history of the ‘understanding of chance’ is peppered with precocious and prickly mathematical savants who discovered that chance follows rules too. Probability theory was born and that spawned a troublesome child called Statistics. This was a trickier one to understand. To most people statistics is just mathematical gobbledygook.

But from that emerged a concept called the Rational Man – which underpinned the whole of Economic Theory for 250 years. Until very recently.  The RM hypothesis stated that we make unconscious but rational judgements when presented with uncertain win/lose choices.  And from that seed sprouted concepts such as the Law of Supply and Demand – when the supply of things we demand is limited then we (rationally) value them more and will choose to pay more, so prices go up, so fewer can afford them, so demand drops. Foxes and Rabbits. A negative feedback loop. The economic system becomes self-adjusting and self-stabilising.  The outcome of this assumption is a belief that ‘because people are collectively rational the economic system will be self-stabilising and it will correct the adverse short term effects of any policy blunders we make‘.  The ‘let-the-market-decide’ belief that experimental economic meddling is harmless over the long term and that what is learned from ‘laissez-faire’ may even be helpful. It is a no-lose long term improvement strategy. Losers are just unlucky, stupid or both.

In 2002 the Nobel Prize for Economics was not awarded to an economist. It was awarded to a psychologist – Daniel Kahneman – who showed that the model of the Rational Man did not stand up to rigorous psychological experiment.  Reality demonstrated we are Irrational Chimps. The economists had omitted to test their hypothesis. Oops!


This lesson has many implications for the Science of Improvement.  One of which is a deeper understanding of the nemesis of improvement – resistance to change.

One of the surprising findings is that our judgements are biased – and our bias operates at an unconscious level – what Kahneman describes as the System One level. Chimp level. We are not aware we are making biased decisions.

For example. Many assume that we prefer certainty to uncertainty. We fear the unpredictable. We avoid it. We seek the predictable and the stable. And we will put up with just about anything so long as it is predictable. We do not like surprises.  And when presented with that assertion most people nod and say ‘Yes’ – that feels right.

We also prefer gain to loss.  We love winning. We hate losing. This ‘competitive spirit’ is socially reinforced from day one by our ‘pushy parents’ – we all know the ones – but we all do it to some degree. Do better! Work harder! Be a success! Optimize! Be the best! Be perfect! Be Perfect! BE PERFECT.

So which is more important to us? Losing or uncertainty? This is one question that Kahneman asked. And the answer he discovered was surprising – because it effectively disproved the Rational Man hypothesis.  And this is how a psychologist earned a Nobel Prize for Economics.

Kahneman discovered that loss is more important to us than uncertainty.

To demonstrate this he presented subjects with a choice between two win/lose options; and he presented the choice in two ways. To a statistician and a Rational Man the outcomes were exactly the same in terms of gain or loss.  He designed the experiment to ensure that it was the unconscious judgement that was being measured – the intuitive gut reaction. So if our gut reactions are Rational then the choice and the way the choice was presented would have no significant effect.

There was an effect. The hypothesis was disproved.

The evidence showed that our gut reactions are biased … and in an interesting way.

If we are presented with the choice between a certain gain and an uncertain gain/loss (so the average gain is the same) then we choose the certain gain much more often.  We avoid uncertainty. Uncertainty = 1, Loss = 0.

BUT …

If we are presented with a choice between certain loss and an uncertain loss/gain (so the average outcome is again the same) then we choose the uncertain option much more often. This is exactly the opposite of what was expected.
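
To make the two framings concrete, here is a small Python sketch (my own illustration – the amounts and the value-function parameters are made up for the purpose, and this is not Kahneman’s actual experiment). It scores each option with a simple loss-averse value function in the spirit of prospect theory, where losses loom larger than gains, and it shows the preference flipping between the two frames even though the average monetary outcomes are identical.

# A small illustrative sketch (my own, not Kahneman's original experiment).
# Losses are weighted more heavily than gains and values are curved.
# The amounts and the parameters below are made up for illustration only.

def value(x, alpha=0.88, lam=2.25):
    # curved value for gains; curved and amplified value for losses
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def felt_value(option):
    # option = list of (probability, amount) pairs
    return sum(p * value(x) for p, x in option)

# Frame 1: a certain gain of 500 versus a 50:50 chance of 1000 or nothing
certain_gain = [(1.0, 500)]
gamble_gain = [(0.5, 1000), (0.5, 0)]

# Frame 2: a certain loss of 500 versus a 50:50 chance of losing 1000 or nothing
certain_loss = [(1.0, -500)]
gamble_loss = [(0.5, -1000), (0.5, 0)]

print(felt_value(certain_gain), felt_value(gamble_gain))  # the certain gain feels better
print(felt_value(certain_loss), felt_value(gamble_loss))  # the gamble feels less bad

With these illustrative numbers the certain option scores higher in the gain frame and the gamble scores higher (less negative) in the loss frame – opposite choices from identical average outcomes.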

And it did not make any difference if the subject knew the results of the experiment before doing it. The judgement is made out of awareness and communicated to our consciousness via an emotion – a feeling – that biases our slower, logical, conscious decision process.

This means that the sense of loss has more influence on our judgement than the sense of uncertainty.

This behaviour is hard-wired. It is part of our Chimp brain design. And once we know this we can see the effect of it everywhere.

1. We will avoid the pain of uncertainty and resist any change that might deliver a gain when we believe that future loss is uncertain. We are conservative and over-optimistic.

2. We will accept the pain of uncertainty and only try something new (and risky) when we believe that to do otherwise will result in certain loss. The Backs Against The Wall scenario.  The Cornered Rat is Unpredictable and Dangerous scenario.

This explains why we resist any change right up until the point when we see Reality clearly enough to believe that we are definitely going to lose something important if we do nothing. Lose our reputation, lose our job, lose our security, lose our freedom or lose our lives. That is a transformational event.  A Road to Damascus moment.

Understanding that we behave like curious, playful, social but irrational chimps is one key to unlocking significant and sustained improvement.

We need to celebrate our inner chimp – it is key to innovation.

And we need to learn how to team up with our inner chimp rather than be hijacked by it.

If we do not we will fail – the Laws of Physics, Probability and Psychology decree it.

[Beep Beep] Bob’s laptop signaled the arrival of Leslie for their regular Webex mentoring session. Bob picked up the phone and connected to the conference call.

<Bob> Hi Leslie, how are you today?

<Leslie> Great thanks Bob. I am sorry but I do not have a red-hot burning issue to talk about today.

<Bob> OK – so your world is completely calm and orderly now. Excellent.

<Leslie> I wish! The reason is that I have been busy preparing for the monthly 1-2-1 with my boss.

<Bob> OK. So do you have a few minutes to talk about that?

<Leslie> What can I tell you about it?

<Bob> Can you just describe the purpose and the process for me?

<Leslie> OK. The purpose is improvement – for both the department and the individual. The process is that all departmental managers have an annual appraisal based on their monthly 1-2-1 chats and the performance scores for their departments are used to reward the top 15% and to ‘performance manage’ the bottom 15%.

<Bob> H’mmm.  What is the commonest emotion that is associated with this process?

<Leslie> I would say somewhere between severe anxiety and abject terror. No one looks forward to it. The annual appraisal feels like a lottery where the odds are stacked against you.

<Bob> Can you explain that a bit more for me?

<Leslie> Well, the most fear comes from being in the bottom 15% – the fear of being ‘handed your hat’ so to speak. Fortunately that fear motivates us to try harder and that usually saves us from the chopper because our performance improves.  The cost is the extra stress, working late and taking ‘stuff’ home.

<Bob> OK. And the anxiety?

<Leslie> Paradoxically that mostly comes from the top 15%. They are anxious to sustain their performance. Most do not and the Boss’s Golden Manager can crash spectacularly! We have seen it so often. It is almost as if being the Best carries a curse! So most of us try to stay in the middle of the pack where we do not stick out – a sort of safety in the herd strategy.  It is illogical I know because there is always a ‘top’ 15% and a ‘bottom’ 15%.

<Bob> You mentioned before that it feels like a lottery. How come?

<Leslie> Yes – it feels like a lottery but I know it has a rational scientific basis. Someone once showed me the ‘statistically significant evidence’ that proves it works.

<Bob> That what works exactly?

<Leslie> That sticks are more effective than carrots!

<Bob> Really! And what do the performance run charts look like – over the long term – say monthly over 2-3 years?

<Leslie> That is a really good question. They are surprisingly stable – well, completely stable in fact. They wobble up and down of course but there is no sign of improvement over the long term – no trend. If anything it is the other way.

<Bob> So what is the rationale for maintaining the stick-is-better-than-the-carrot policy?

<Leslie> Ah! The message we are getting  is ‘as performance is not improving and sticks have been scientifically proven to be more effective than carrots then we will be using a bigger stick in future‘.

<Bob> Hence the atmosphere of fear and anxiety?

<Leslie> Exactly. But that is the way it must be I suppose.

<Bob> Actually it is not. This is an invalid design based on rubbish intuitive assumptions and statistical smoke-and-mirrors that creates unmeasurable emotional pain and destroys both people and organisations!

<Leslie> Wow! Bob! I have never heard you use language like that. You are usually so calm and reasonable. This must be really important!

 <Bob> It is – and for that reason I need to shock you out of your apathy  – and I can do that best by you proving it to yourself – scientifically – with a simple experiment. Are you up for that?

<Leslie> You betcha! This sounds like it is going to be interesting. I had better fasten my safety belt! The Nerve Curve awaits.


 The Stick-or-Carrot Experiment

<Bob> Here we go. You will need five coins, some squared-paper and a pencil. Coloured ones are even better.

<Leslie> OK. Does it matter what sort of coins?

<Bob> No. Any will do. Imagine you have four managers called A, B, C and D respectively.  Each month the performance of their department is measured as the number of organisational targets that they are above average on. Above average is like throwing a ‘head’, below average is like throwing a ‘tail’. There are five targets – hence the coins.

<Leslie>OK. That makes sense – and it feels better to use the measured average – we have demonstrated that arbitrary performance targets are dangerous – especially when imposed blindly across all departments.

<Bob> Indeed. So can you design a score sheet to track the data for the experiment?

<Leslie>Give me a minute.  Will this suffice?

[Figure: Stick_and_Carrot_Fig1]
<Bob> Perfect! Now simulate a month by tossing all five coins – once for each manager – and record the outcome of each as H or T, then tot up the number of heads for each manager.

<Leslie>  OK … here is what I got.

[Figure: Stick_and_Carrot_Fig2]
<Bob>Good. Now repeat this 11 more times to give you the results for a whole year.  In the n(Heads) column colour the boxes that have scores of zero or one as red – these are the Losers. Then colour the boxes that have 4 or 5 as green – these are the Winners.

<Leslie>OK, that will take me a few minutes – do you want to get a coffee or something?

[Five minutes later]

Here you go. That gives 96 opportunities to win or lose and I counted 9 Losers and 9 Winners so just under 20% for each. The majority were in the unexceptional middle. The herd.

[Figure: Stick_and_Carrot_Fig3]
<Bob> Excellent.  A useful way to visualise this is using a Tally chart. Just run down the column of n(Heads) and create the Tally chart as you go. This is one of the oldest forms of counting in existence. There are fossil records that show Tally charts being used thousands of years ago.

<Leslie> I think I understand what you mean. We do not wait until all the data is in then draw the chart, we update it as we go along – as the data comes in.

<Bob> Spot on!

<Leslie> Let me see. Wow! That is so cool!  I can see the pattern appearing almost magically – and the more data I have the clearer the pattern is.

 <Bob>Can you show me?

<Leslie> Here we go.

[Figure: Stick_and_Carrot_Fig4]
<Bob> Good.  This is the expected picture. If you repeated this many times you would get the same general pattern with more 2 and 3 scores.

Now I want you to do an experiment.

Assume each manager that is classed as a Winner in one month is given a reward – a ‘pat on the back’ from their Boss. And each manager that is classed as a Loser is given a ‘written warning’. Now look for  the effect that this has.

<Leslie> But we are using coins – which means the outcome is just a matter of chance! It is a lottery.

<Bob> I know that and you know that but let us assume that the Boss believes that the monthly feedback has an effect. The experiment we are doing is to compare the effect of the carrot with the stick. The Boss wants to know which results in more improvement and to know that with scientific and statistical confidence!

<Leslie> OK. So what I will do is look at the score the following month for each manager that was either a Winner or a  Loser; work out the difference, and then calculate the average of those differences and compare them with each other. That feels suitably scientific!

<Bob> OK. What do you get?

<Leslie> Just a minute, I need to do this carefully. OK – here it is.

[Figure: Stick_and_Carrot_Fig5]
<Bob> Excellent.  Just eye-balling the ‘Measured improvement after feedback’ columns I would say the Losers have improved and the Winners have deteriorated!

<Leslie> Yes! And the Losers have improved by 1.29 on average and the Winners have deteriorated by 1.78 – and that is a big difference for such a small sample. I am sure that with enough data this would be a statistically significant difference! So it is true, sticks work better than carrots!

<Bob>Not so fast. What you are seeing is a completely expected behaviour called “Regression to the Mean“. Remember we know that the score for each manager each month is the result of a game of chance, a coin toss, a lottery. So no amount of stick or carrot feedback is going to influence that.

<Leslie>But the data is saying there is a difference! And that feels like the experience we have – and why fear stalks the management corridors. This is really confusing!

<Bob>Remember that confusion arises from invalid or conflicting unconscious assumptions. There is a flaw in the statistical design of this experiment. The ‘obvious’ conclusion is invalid because of this flaw. And do not be too hard on yourself. The flaw eluded mathematicians for centuries. But now that you know there is one, can you find it?

<Leslie>OMG!  The use of the average to classify the managers into Winners or Losers is the flaw!  That is just a lottery. Who the managers are is irrelevant. This is just a demonstration of how chance works.

But that means … OMG!  If the conclusion is invalid then sticks are not better than carrots and we have been brain-washed for decades into accepting a performance management system that is invalid – and worse still is used to ‘scientifically’ justify systematic persecution! I can see now why you get so angry!

<Bob>Bravo Leslie.  We  need to check your understanding. Does that mean carrots are better than sticks?

<Leslie>No!  The conclusion is invalid because the assumptions are invalid and the design is fatally flawed. It does not matter what the conclusion actually is.

<Bob>Excellent. So what conclusion can you draw?

<Leslie>That this short-term carrot-or-stick feedback design for achieving improvement in a stable system is both ineffective and emotionally damaging. In fact it could well be achieving precisely the opposite effect to the one intended. It may be preventing improvement! But the story feels so plausible and the data appears to back it up. What is happening here is we are using statistical smoke-and-mirrors to justify what we have already decided – and only a true expert would spot the flaw! Once again our intuition has tricked us!

<Bob>Well done! And with this new insight – how would you do it differently?  What would be a better design?

<Leslie>That is a very good question. I am going to have to think about that – before my 1-2-1 tomorrow. I wonder what might happen if I show this demonstration to my Boss? Thanks Bob, as always … lots of food for thought.
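
For anyone who would rather let a computer toss the coins, here is a minimal Python sketch of the experiment (my own illustration, not part of the original dialogue). It simulates the four managers for a year, builds the tally chart incrementally as the data ‘arrives’, and then measures the apparent effect of last month’s ‘stick’ or ‘carrot’ on this month’s score.

import random

# A minimal sketch (my own illustration) of the Stick-or-Carrot experiment:
# four managers, five coins, twelve months.
random.seed(42)
MANAGERS = ["A", "B", "C", "D"]
MONTHS = 12
COINS = 5

# score = number of heads = number of targets 'above average' that month
scores = {m: [sum(random.randint(0, 1) for _ in range(COINS)) for _ in range(MONTHS)]
          for m in MANAGERS}

# Build the tally chart incrementally, one manager-month at a time
tally = {n: 0 for n in range(COINS + 1)}
for m in MANAGERS:
    for s in scores[m]:
        tally[s] += 1
for n in range(COINS + 1):
    print(n, "heads:", "|" * tally[n])

# Apparent effect of last month's 'feedback' on this month's score
stick_changes, carrot_changes = [], []
for m in MANAGERS:
    for month in range(MONTHS - 1):
        change = scores[m][month + 1] - scores[m][month]
        if scores[m][month] <= 1:        # Loser last month - got the stick
            stick_changes.append(change)
        elif scores[m][month] >= 4:      # Winner last month - got the carrot
            carrot_changes.append(change)

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

print("average change after the stick :", round(mean(stick_changes), 2))   # typically positive
print("average change after the carrot:", round(mean(carrot_changes), 2))  # typically negative

# The coins cannot hear the feedback, so any apparent improvement after the
# stick and deterioration after the carrot is pure regression to the mean.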


Improvement implies change.

Change follows action. Action follows planning. Effective planning follows from an understanding of the system because it is required to make the wise decisions needed to achieve the purpose. The purpose is the intended outcome.

Learning follows from observing the effect of change – whatever it is. Understanding follows from learning to predict the effect of both actions and in-actions.

All these pieces of the change jigsaw are different and they are inter-dependent. They fit together. They are a system.

And we can pick out four pieces: the Plan piece, the Action piece, the Observation piece and the Learning piece – and they seem to follow that sequence – it looks like a learning cycle.

This is not a new idea.

It is the same sequence as the Scientific Method: hypothesis, experiment, analysis, conclusion. The preferred tool of  Academics – the Thinkers.

It is also the same sequence as the Shewhart Cycle: plan, do, check, act. The preferred tool of the Pragmatists – the Doers.

So where does all the change conflict come from? What is the reason for the perpetual debate between theorists and activists? The incessant game of “Yes … but!”

One possible cause was highlighted by David Kolb  in his work on ‘experiential learning’ which showed that individuals demonstrate a learning style preference.

We tend to be thinkers or doers and only a small proportion of us say that we are equally comfortable with both.

The effect of this natural preference is that real problems bounce back-and-forth between the Tribe of Thinkers and the Tribe of Doers.  Together we are providing separate parts of the big picture – but as two tribes we appear to be unaware of the synergistic power of the two parts. We are blocked by a power struggle.

The Experiential Learning Model (ELM) was promoted and developed by Peter Honey and Alan Mumford (see learning styles) and their work forms the evidence behind the Learning Style Questionnaire that anyone can use to get their ‘score’ on the four dimensions:

  • Pragmatist – the designer and planner
  • Activist – the action person
  • Reflector – the observer and analyst
  • Theorist – the abstracter and hypothesis generator

The evidence from population studies showed that individuals have a preference for one of these styles, sometimes two, occasionally three and rarely all four.

That observation, together with the fact that learning from experience requires moving around the whole cycle, leads to an awareness that both individuals and groups can get ‘stuck’ in their learning preference comfort zone. If the learning wheel is unbalanced it will deliver an emotionally bumpy ride when it turns! So it may be more comfortable just to remain stationary and not to learn.

Which means not to change. Which means not to improve.


So if we are embarking on an improvement exercise – be it individual or collective – then we are committed to learning. So where do we start on the learning cycle?

The first step is action. To do something – and the easiest and safest thing to do is just look. Observe what is actually happening out there in the real world – outside the office – outside our comfort zone. We need to look outside our rhetorical inner world of assumptions, intuition and prejudgements.

The next step is to reflect on what we see – we look in the mirror – and we compare what we are actually seeing with what we expected to see. That is not as easy as it sounds – and a useful tool to help is to draw charts. To make it visual. All sorts of charts.

The result is often a shock. There is often a big gap between what we see and what we perceive; between what we expect and what we experience; between what we want and what we get.

That emotional shock is actually what we need to power us through the next phase – the Realm of the Theorist – where we ask three simple questions:
Q1: What could be causing the reality that I am seeing?
Q2: How would I know which of the plausible causes is the actual cause?
Q3: What experiment can I do to answer my question and clarify my understanding of Reality?

This is the world of the Academic.

The third step is to design an experiment to test our new hypothesis.  The real world is messy and complicated and we need to be comfortable with ‘good enough’ and ‘reasonable uncertainty’.  Design is about practicalities – making something that works well enough in practice – in the real world. Something that is fit-for-purpose. We are not expecting perfection; not looking for optimum; not striving for best – just significantly better than what we have now. And the more we can test our design before we implement it the better because we want to know what to expect before we make the change and we want to avoid unintended negative consequences – the NooNoos.

Then we do it … and the cycle of learning has come one revolution … but we are not back at the start – we have moved forward. Our understanding is already different from when we were at this stage before: it is deeper and wider.  We are following the trajectory of a spiral – our capability for improvement is expanding over time.

So we need to balance our learning wheel before we start the journey or we will have a slow, bumpy and painful ride!


One plausible approach is to stay inside our comfort zones, play to our strengths and to say “What we need is a team made of people with complementary strengths. We need a Department of Action for the Activists; a Department of Analysis for the Reflectors; a Department of Research for the Theorists and a Department of Planning for the Pragmatists.”

But that is what we have now and what is happening? The Four Departments have become super-specialised and more polarised.  There is little common ground or shared language.  There is no common direction, no co-ordination, no oil on the axle of the wheel of change. We have ground to a halt. We have chaos. Each part is working but independently of the others in an unsynchronised mess.

We have vehicular fibrillation. Change output has dropped to zero.


A better design is for everyone to focus first on balancing their own learning wheel by actively redirecting emotional energy from their comfort zone, their strength,  into developing the next step in their learning cycle.

Pragmatists develop their capability for Action.
Activists develop their capability for Reflection.
Reflectors develop their capability for Hypothesis.
Theorists develop their capability for Design.

The first step in the improvement spiral is Action – so if you are committed to improvement then investing £10 and 20 minutes in the 80-question Learning Style Questionnaire will demonstrate your commitment – not only to others – more importantly to yourself.

 

It is the time of year when our minds turn to self-improvement.

New Year.

We re-affirm our Resolutions from last year and we vow to try harder this year. As we did last year. And the year before that. And we usually fail.

So why do we fail to keep our New Year Resolutions?

One reason is because we do not let go of the past. We get pulled back into old habits too easily. To get a new future we have to do some tidying up. We need to get The Shredder. We need to make the act of letting go irreversible.

Bzzzzzzz …. Aaaaah. That feels better.

Why does this work?

First, because it feels good to be taking definitive action.  We know that resolutions are just good intentions. It is not until we take action that change happens.  Many of us are weak on the Activist dimension. We talk a lot about what we should do but we do not walk as much as we could do.

Second, because  we can see the evidence of the improvement immediately. We get immediate, visual, positive feedback. That heap of old bills and emails and reports that we kept ‘just in case’ is no longer cluttering up our desks, our eyes, our minds and our lives.  And we have ‘recycled’ it which feels even better.

Third, because we have challenged our own Prevarication Policy. And if we can do that for ourselves we can, with some credibility, do the same for others. We feel more competent and more confident.

Fourth, because we have freed up valuable capacity to invest.  More space. More time (our prevarication before kept us busy but wasted our limited time). More motivation (trying to work around a pile of rubbish day-in and day-out is emotionally draining).

So all we need to do in the New Year is stay inside our circle of control and shred some years of accumulated rubbish.

And it is not just tangible rubbish we can dispose of.  We can shred some emotional garbage too. The list of “Yes … But” excuses that we cling on to.  The sack of guilt for past failures that weighs us down. The flag of fear that we wave when we surrender our independence and adopt the Victim role.  The righteous indignation that we use to hide our own self-betrayal.

And just by putting that lot through The Shredder we release the opportunity for improvement.

The rest just happens – as if by magic.

[Hmmmmmm] The desk amplified the vibration of Bob’s smartphone as it signaled the time for his planned e-mentoring session with Leslie.

[Dring Dring]

<Bob> Hi Leslie, right-on-time, how are you today?

<Leslie> Good thanks Bob. I have a specific topic to explore if that is OK. Can we talk about time-traps?

<Bob> OK – do you have a specific reason for choosing that topic?

<Leslie> Yes. The blog last week about ‘Recipe for Chaos‘ set me thinking and I remembered that time-traps were mentioned in the FISH course but I confess, at the time, I did not understand them. I still do not.

<Bob> Can you describe how the ‘Recipe for Chaos‘ blog triggered this renewed interest in time-traps?

<Leslie> Yes – the question that occurred to me was: ‘Is a time-trap a recipe for chaos?’

<Bob> A very good question! What do you feel the answer is?

<Leslie>I feel that time-traps can and do trigger chaos but I cannot explain how. I feel confused.

<Bob>Your intuition is spot on – so can you localize the source of your confusion?

<Leslie>OK. I will try. I confess I got the answer to the MCQ correct by guessing – and I wrote down the answer when I eventually guessed correctly – but I did not understand it.

<Bob>What did you write down?

<Leslie>“The lead time is independent of the flow”.

<Bob>OK. That is accurate – though I agree it is perhaps a bit abstract. One source of confusion may be that there are different causes of time-traps and there is a lot of overlap with other chaos-creating policies. Do you have a specific example we can use to connect theory with reality?

<Leslie> OK – that might explain my confusion.  The example that jumped to mind is the RTT target.

<Bob> RTT?

<Leslie> Oops – sorry – I know I should not use undefined abbreviations. Referral to Treatment Time.

<Bob> OK – can you describe what you have mapped and measured already?

<Leslie> Yes.  When I plot the lead-time for patients in date-of-treatment order the process looks stable but the histogram is multi-modal with a big spike just underneath the RTT target of 18 weeks. What you describe as the ‘Horned Gaussian’ – the sign that the performance target is distorting the behaviour of the system and the design of the system is not capable on its own.

<Bob> OK and have you investigated why there is not just one spike?

<Leslie> Yes – the factor that best explains that is the ‘priority’ of the referral.  The  ‘urgents’ jump in front of the ‘soons’ and both jump in front of the ‘routines’. The chart has three overlapping spikes.

<Bob> That sounds like a reasonable policy for mixed-priority demand. So what is the problem?

<Leslie> The ‘Routine’ group is the one that clusters just underneath the target. The lead time for routines is almost constant but most of the time those patients sit in one queue or another being leap-frogged by other higher-priority patients. Until they become high-priority – then they do the leap frogging.

<Bob> OK – and what is the condition for a time trap again?

<Leslie> That the lead time is independent of flow.

<Bob>Which implies?

<Leslie> Um, let me think. That the flow can be varying but the lead time stays the same?

<Bob> Yup. So is the flow of routine referrals varying?

<Leslie> Not over the long term. The chart is stable.

<Bob> What about over the short term? Is demand constant?

<Leslie>No of course not – it varies – but that is expected for all systems. Constant means ‘over-smoothed data’ – the Flaw of Averages trap!

<Bob>OK. And how close is the average lead time for routines to the RTT maximum allowable target?

<Leslie> Ah! I see what you mean. The average is about 17 weeks and the target is 18 weeks.

<Bob>So what is the flow variation on a week-to-week time scale?

<Leslie>Demand or Activity?

<Bob>Both.

<Leslie>H’mm – give me a minute to re-plot flow as a weekly-aggregated chart. Oh! I see what you mean – the weekly activity and demand are both varying widely and they are not in sync with each other. Work in progress must be wobbling up and down a lot! So how can the lead time variation be so low?

<Bob>What do the flow histograms look like?

<Leslie> Um. Just a second. That is weird! They are both bi-modal with peaks at the extremes and not much in the middle – the exact opposite of what I expected to see! I expected a centered peak.

<Bob>What you are looking at is the characteristic flow fingerprint of a chaotic system – it is called ‘thrashing’.

<Leslie> So I was right!

<Bob> Yes. And now you know the characteristic pattern to look for. So what is the policy design flaw here?

<Leslie>The DRAT – the delusional ratio and arbitrary target?

<Bob> That is part of it – that is the external driver policy. The one you cannot change easily. What is the internally driven policy? The reaction to the DRAT?

<Leslie> The policy of leaving routine patients until they are about to breach then re-classifying them as ‘urgent’.

<Bob>Yes! It is called a ‘Prevarication Policy’ and it is surprisingly and uncomfortably common. Ask yourself – do you ever prevaricate? Do you ever put off ‘lower priority’ tasks until later and then not fill the time freed up with ‘higher priority tasks’?

<Leslie> OMG! I do that all the time! I put low priority and unexciting jobs on a ‘to do later’ heap but I do not sit idle – I do then focus on the high priority ones.

<Bob> High priority for whom?

<Leslie> Ah! I see what you mean. High priority for me. The ones that give me the biggest reward! The fun stuff or the stuff that I get a pat on the back for doing or that I feel good about.

<Bob> And what happens?

<Leslie> The heap of ‘no-fun-for-me-to-do’ jobs gets bigger and I await the ‘reminders’ and then have to rush round in a mad panic to avoid disappointment, criticism and blame. It feels chaotic. I get grumpy. I make more mistakes and I deliver lower-quality work. If I do not get a reminder I assume that the job was not that urgent after all and if I am challenged I claim I am too busy doing the other stuff.

<Bob> Have you avoided disappointment?

<Leslie> Ah! No – that I needed to be reminded meant that I had already disappointed. And when I do not get a reminder that does not prove I have not disappointed either. Most people blame rather than complain. I have just managed to erode other people’s trust in my reliability. I have disappointed myself. I have achieved exactly the opposite of what I intended. Drat!

<Bob> So what is the reason that you work this way? There will be a reason.  A good reason.

<Leslie> That is a very good question! I will reflect on that because I believe it will help me understand why others behave this way too.

<Bob> OK – I will be interested to hear your conclusion.  Let us return to the question. What is the  downside of a ‘Prevarication Policy’?

<Leslie> It creates stress, chaos, fire-fighting, last minute changes, increased risk of errors, more work and it erodes quality, confidence and trust.

<Bob>Indeed so – and the impact on productivity?

<Leslie> The activity falls, the system productivity falls, revenue falls, queues increase, waiting times increase and the chaos increases!

<Bob> And?

<Leslie> We treat the symptoms by throwing resources at the problem – waiting list initiatives – and that pushes our costs up. Either way we are heading into a spiral of decline and disappointment. We do not address the root cause.

<Bob> So what is the way out of chaos?

<Leslie> Reduce the volume on the destabilizing feedback loop? Stop the managers meddling!

<Bob> Or?

<Leslie> Eh? I do not understand what you mean. The blog last week said management meddling was the problem.

<Bob> It is a problem. How many feedback loops are there?

<Leslie> Two – that need to be balanced.

<Bob> So what is another option?

<Leslie> OMG! I see. Turn UP the volume of the stabilizing feedback loop!

<Bob> Yup. And that is a lot easier to do in reality. So that is your other challenge to reflect on this week. And I am delighted to hear you using the terms ‘stabilizing feedback loop’ and ‘destabilizing feedback loop’.

<Leslie> Thank you. That was a lesson for me after last week – when I used the terms ‘positive and negative feedback’ it was interpreted in the emotional context – positive feedback as encouragement and negative feedback as criticism.  So ‘reducing positive feedback’ in that sense is the exact opposite of what I was intending. So I switched my language to using ‘stabilizing and destabilizing’ feedback loops that are much less ambiguous and the confusion and conflict disappeared.

<Bob> That is very useful learning Leslie … I think I need to emphasize that distinction more in the blog. That is one advantage of online media – it can be updated!

 <Leslie> Thanks again Bob!  And I have the perfect opportunity to test a new no-prevarication-policy design – in part of the system that I have complete control over – me!
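
To see the time-trap signature in silicon, here is a minimal Python sketch (my own illustration, with made-up numbers such as the weekly demand range and the 17-week ‘about to breach’ threshold). It simulates the prevarication policy described above: routine referrals sit in the queue until they are about to breach the 18-week target and are then expedited.

import random

# A minimal sketch (my own illustration) of a Prevarication Policy:
# routine referrals wait until they are about to breach, then get expedited.
random.seed(1)
TARGET_WEEKS = 18       # the externally imposed maximum (the DRAT)
EXPEDITE_AT = 17        # 'about to breach' - an assumption for illustration

queue = []              # arrival week of each routine referral still waiting
lead_times, weekly_demand, weekly_activity = [], [], []

for week in range(200):
    demand = random.randint(5, 15)              # widely varying weekly demand
    weekly_demand.append(demand)
    queue.extend([week] * demand)

    # Prevarication: only treat the referrals that are about to breach
    treated = [arrived for arrived in queue if week - arrived >= EXPEDITE_AT]
    queue = [arrived for arrived in queue if week - arrived < EXPEDITE_AT]

    weekly_activity.append(len(treated))
    lead_times.extend(week - arrived for arrived in treated)

print("weekly demand range  :", min(weekly_demand), "-", max(weekly_demand))
print("weekly activity range:", min(weekly_activity[20:]), "-", max(weekly_activity[20:]))
print("lead time range      :", min(lead_times), "-", max(lead_times))

# Demand and activity wobble widely from week to week, yet every routine
# referral waits exactly the same time, just under the target: the lead time
# has become independent of the flow - a time-trap created by the policy design.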

There are only four ingredients required to create Chaos.

The first is Time.

All processes and systems are time-dependent.

The second ingredient is a Metric of Interest (MoI).

That means a system performance metric that is important to all – such as Safety, Quality or Cost; and usually all three.

The third ingredient is a feedback loop of a specific type – it is called a Negative Feedback Loop.  The NFL  is one that tends to adjust, correct and stabilise the behaviour of the system.

Negative feedback loops are very useful – but they have a drawback. They resist change and they reduce agility. The name is also a disadvantage – the word ‘negative feedback’ is often associated with criticism.

The fourth and final ingredient in our Recipe for Chaos is also a feedback loop but one of a different design – a Positive Feedback Loop (PFL) – one that amplifies variation and change.

Positive feedback loops are also very useful – they are required for agility – quick reactions to unexpected events. Fast reflexes.

The downside of a positive feedback loop is that it increases instability.

The name is also confusing – ‘positive feedback’ is associated with encouragement and praise.

So in this context it is better to use the terms ‘stabilizing feedback’ and ‘destabilizing feedback’  loops.

When we mix these four ingredients in just the right amounts we get a system that may behave chaotically. That is surprising. It is counter-intuitive. It is also how the Universe works.

For example:

Suppose our Metric of Interest is the amount of time that patients spend in an Accident and Emergency Department. We know that the longer this time is the less happy they are and the higher the risk of avoidable harm – so it is a reasonable goal to reduce it.

Longer-than-possible waiting times have many root causes – it is a non-specific metric.  That means there are many things that could be done to reduce waiting time and the most effective actions will vary from case-to-case, day-to-day and even minute-to-minute.  There is no one-size-fits-all solution.

This implies that those best placed to correct the causes of the delays are the people who know the specific system well – because they work in it. Those who actually deliver urgent care. They are the stabilizing agent in our Recipe for Chaos.

The destabilizing feedback loop is the beat-the-arbitrary-target policy.

This policy typically involves:
(1) setting a target that is impossible for the current design to achieve reliably;
(2) inspecting how close to the target we are; then
(3) using the data to justify threats of dire consequences for failure.

Now we have a Recipe for Chaos.

The higher the failure rate the more inspection, reports, meetings, exhortation, threats, interruptions, and interventions that are generated.  Fear-fuelled management meddling. This behaviour consumes valuable time – so leaves less time to do the worthwhile work. The pressure increases and makes the system even more sensitive to small changes. Delays multiply and errors occur more often.  Tempers become frayed and molehills become magnified into mountains. Irritations become arguments.  And all of this makes the problem worse rather than better. Less stable. More variable. More chaotic.

It is actually possible to write a simple equation that describes this characteristic behaviour of real systems – and that was a very surprising finding when it was discovered in 1976 by a mathematician called Robert May.

This equation is called the logistic equation.

Here is the abstract of his seminal paper.

Nature 261, 459-467 (10 June 1976)

Simple mathematical models with very complicated dynamics

First-order difference equations arise in many contexts in the biological, economic and social sciences. Such equations, even though simple and deterministic, can exhibit a surprising array of dynamical behaviour, from stable points, to a bifurcating hierarchy of stable cycles, to apparently random fluctuations. There are consequently many fascinating problems, some concerned with delicate mathematical aspects of the fine structure of the trajectories, and some concerned with the practical implications and applications. This is an interpretive review of them.

The fact that this chaotic behaviour is completely predictable and does not need any ‘random’ element was a big surprise. Chaotic is not the same as random. The observed chaos in the urgent healthcare system is the result of the design of the system – or more specifically the current healthcare system management policies.
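
Robert May’s equation is the logistic map, x_next = r × x × (1 − x), where the parameter r behaves like the gain on the destabilising feedback loop. The short Python sketch below (my own illustration, not from the original text) shows how turning up that gain flips the same simple, deterministic rule from stable, to oscillating, to chaotic – with no random ingredient anywhere.

# A minimal sketch (my own illustration) of Robert May's logistic map,
# x_next = r * x * (1 - x), where r plays the role of the 'gain'.

def logistic_series(r, x0=0.5, warm_up=200, steps=12):
    x = x0
    for _ in range(warm_up):            # let any start-up transient settle
        x = r * x * (1 - x)
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(round(x, 3))
    return out

for r in (2.8, 3.2, 3.9):               # low, medium and high gain
    print("r =", r, ":", logistic_series(r))

# r = 2.8 settles to a single steady value, r = 3.2 flips between two values,
# and r = 3.9 never repeats - deterministic chaos with no randomness anywhere.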

This has a number of profound implications – the most important of which is this:

If the chaos we observe in our health care systems is the predictable and inevitable result of the management policies we ourselves have created and adopted – then eliminating the chaos will only require us to re-design these policies.

In fact we only need to tweak one of the ingredients of the Recipe for Chaos – the strength of the destabilizing feedback loop. The gain. The volume control on the variation amplifier!

This is called the MM factor – otherwise known as ‘Management Meddling‘.

We need to keep all four ingredients though – because we need our system to have both agility and stability.

The flaw is not the Managers themselves – it is their learned behaviour – the Meddling.  It is learned so it can be unlearned. We need to keep the Managers and to change their role slightly. As they unlearn their old habits they move from being Policy-Enforcers and Fire-Fighters to becoming Policy-Engineers and Chaos-Calmers. They focus on learning to understand the root causes of variation that come from outside the circle of influence of the Workers.   They learn how to rationally and radically redesign system policies to achieve both agility and stability.

And doing that requires developing systemic-thinking and learning Improvement Science skills – because chaos is counter-intuitive. If it were intuitively-obvious we would have discovered the nature of chaos thousands of years ago. The fact that it was not discovered until 1976 demonstrates this.

It is our homo sapiens intuition that got us into this mess!  The inherent flaws of the  caveman wetware between our ears.  Our current management policies are intuitively-obvious, collectively-agreed, rubber-stamped and wrong! They are a Recipe for Chaos.

And when we learn to re-design our system policies and upload the new system software then the chaos evaporates as if a magic wand had been waved.

That comes as a big surprise!

And what also comes as a big surprise is just how small the counter-intuitive policy design tweaks often are.

Smooth, effective, efficient, safe and productive flow is restored. Calm confidence reigns. Safety, Quality and Productivity all increase – at the same time.  The emotional storm clouds dissipate and the cultural sun shines again.

Everyone feels better. Everyone. Patients, workers and managers.

This is Win-Win-Win improvement by design. Improvement Science.

Locked_DoorIf we were exploring the corridors in an unfamiliar building and our way forward was blocked by a door that looked like this … we would suspect that something of value lay beyond.

We know there is an unknown.

The puzzle we have to solve to release the chain tells us this. This is called an “affordance” – the design of the lock tells us what we need to do.

More often however what we need to move forward is unknown to us and the problems we face afford no clues as to how to solve them.  Worse than that – the clues they do offer are misleading. Our intuition is tricked. We do the ‘intuitively obvious’ thing and the problem gets worse.

It is easy to lose confidence, become despondent and even start to believe there is no solution. To assume the problem is impossible for us to solve.

Then one day someone shows us how to solve an “impossible” problem. And with the benefit of our new perspective the solution looks simple and how it works is obvious. But only in retrospect.

Our unknown was known all along. But not by us. We were ignorant.

And our intuitions are flaky, forgetful and fickle. Not to be trusted. And our egos are fragile too – we do not like to feel flaky, forgetful and fickle. So we lie to ourselves and we confuse obvious-in-hindsight with obvious-in-foresight. They are not the same.

Suppose we now want to demonstrate our new understanding to someone else – to help them solve their “impossible” problem. How do we do that?

Do we say “But it is obvious – if you cannot see it you must be blind or stupid!”? How can we say that when it was not obvious to us only a short time ago? Is our ego getting in the way again? Can our intuition or ego be trusted at all?

To help others gain insight and deepen their understanding we must put ourselves back into the shoes we used to be in – or rather their shoes now – and look at the problem again from their perspective. With the benefit of the three views of the problem – our old one, their current one and our new one – we may then be able to see where the Unknown-Known is for them.

Only then can we help them discover it for themselves … and then they can help others discover their Unknown-Knowns.  That is how understanding spreads.

And understanding is the bridge between Knowledge and Wisdom.

And it is a wonderful thing to see someone move from confusion to clarity by asking them just the right question at just the right time in just the right way.

No more than that.

Socrates knew how to do this a long time ago – which is why it is called the Socratic Method.

 

computer_power_display_glowing_150_wht_9646A healthcare system has two inter-dependent parts. Let us call them the ‘hardware’ and the ‘software’ – terms we are more familiar with when referring to computer systems.

In a computer the critical-to-success software is called the ‘operating system’ – and we know that by the brand labels such as Windows, Linux, MacOS, or Android. There are many.

It is the O/S that makes the hardware fit-for-purpose. Without the O/S the computer is just a box of hot chips. A rather expensive room heater.

All the programs and apps that we use to deliver our particular information service require the O/S to manage the actual hardware. Without a coordinator there would be chaos.

In a healthcare system the ‘hardware’ is the buildings, the equipment, and the people.  They are all necessary – but they are not sufficient on their own.

The ‘operating system’ in a healthcare system is the set of management policies: the ‘instructions’ that guide the ‘hardware’ to do what is required, when it is required and sometimes how it is required.  These policies are created by managers – they are the healthcare operating system design engineers so-to-speak.

Change the O/S and you change the behaviour of the whole system – it may look exactly the same – but it will deliver a different performance. For better or for worse.


The invention of the transistor in 1947 led, within a decade, to the first commercially viable transistorised computers. They were faster, smaller, more reliable, cheaper to buy and cheaper to maintain than their predecessors. They were also programmable.  And with many separate customer programs demanding hardware resources – an effective and efficient operating system was needed. So the understanding of “good” O/S design developed quickly.

In the 1960′s the first integrated circuits appeared and the computer world became dominated by mainframe computers. They filled air-conditioned rooms with gleaming cabinets tended lovingly by white-coated technicians carrying clipboards. Mainframes were, and still are, very expensive to build and to run! The valuable resource that was purchased by the customers was ‘CPU time’.  So the operating systems of these machines were designed to squeeze every microsecond of value out of the expensive-to-maintain CPU: for very good commercial reasons. Delivering the “data processing jobs” right, on-time and every-time was paramount.

The design of the operating system software was critical to the performance and to the profit.  So a lot of brain power was invested in learning how to schedule jobs; how to orchestrate the parts of the hardware system so that they worked in harmony; how to manage data buffers to smooth out flow and priority variation; how to design efficient algorithms for number crunching, sorting and searching; and how to switch from one task to the next quickly and without wasting time or making errors.

Every modern digital computer has inherited this legacy of learning.

In the 1970′s the first commercial microprocessors appeared – which reduced the size and cost of computers by orders of magnitude again – and increased their speed and reliability even further. Silicon Valley blossomed and although the first micro-chips were rather feeble in comparison with their mainframe equivalents they ushered in the modern era of the desktop-sized personal computer.

In the 1980′s players such as Microsoft and Apple competed to exploit this vast new market. The difference was that Microsoft offered just the operating system for the new IBM-PC hardware (called MS-DOS); while Apple created both the hardware and the software as a tightly integrated system.

The ergonomic-seamless-design philosophy at Apple led to the Apple Mac which revolutionised personal computing. It made them usable by people who had no interest in the innards or in programming. The Apple Macs were the “designer” computers and were reassuringly more expensive. The innovations that Apple designed into the Mac are now expected in all personal computers as well as the latest generations of smartphones and tablets.

Today we carry more computing power in our top pocket than a mainframe of the 1970′s could deliver! The design of the operating system has hardly changed though.

It was the O/S design that leveraged the maximum potential of the very expensive hardware.  And that is still the case – but we take it completely for granted.


Exactly the same principle applies to our healthcare systems.

The only difference is that the flow is not 1′s and 0′s – it is patients and all the things needed to deliver patient care. The ‘hardware’ is the expensive part to assemble and run – and the largest cost is the people.  Healthcare is a service delivered by people to people. Highly-trained nurses, doctors and allied healthcare professionals are expensive.

So the key to healthcare system performance is high quality management policy design – the healthcare operating system (HOS).

And here we hit a snag.

Our healthcare management policies have not been designed with the same rigour as the operating systems for our computers. They have not been designed using the well-understood principles of flow physics. The various parts of our healthcare system do not work well together. The flows are fractured. The silos work independently. And the ubiquitous symptoms of this dysfunction are confusion, chaos and conflict.  The managers and the doctors are at each other’s throats. And this is because the management policies have evolved through a largely ineffective and very inefficient strategy called “burn-and-scrape”. Firefighting.

The root cause of the poor design is that neither healthcare managers nor the healthcare workers are trained in operational policy design. Design for Safety. Design for Quality. Design for Delivery. Design for Productivity.

And we are all left with a lose-lose-lose legacy: a system that is no longer fit-for-purpose and a generation of managers and clinicians who have never learned how to design the operational and clinical policies that ensure the system actually delivers what the ‘hardware’ is capable of delivering.


For example:

Suppose we have a simple healthcare system with three stages called A, B and C.  All the patients flow through A, then to B and then to C.  Let us assume these three parts are managed separately as departments with separate budgets and that they are free to use whatever policies they choose so long as they achieve their performance targets -which are (a) to do all the work and (b) to stay in budget and (c) to deliver on time.  So far so good.

Now suppose that the work that arrives at Department B from Department  A is not all the same and different tasks require different pathways and different resources. A Radiology, Pathology or Pharmacy Department for example.

Sorting the work into separate streams and having expensive special-purpose resources sitting idle waiting for work to arrive is inefficient and expensive. It will push up the unit cost – the total cost divided by the total activity. This is called ‘carve-out’.

Switching resources from one pathway to another takes time and that change-over time implies some resources are not able to do the work for a while.  These inefficiencies will contribute to the total cost and therefore push up the “unit-cost”. The total cost for the department divided by the total activity for the department.

So Department B decides to improve its “unit cost” by deploying a policy called ‘batching’.  It starts to sort the incoming work into different types of task and when a big enough batch has accumulated it then initiates the change-over. The cost of the change-over is shared by the whole batch. The “unit cost” falls because Department B is now able to deliver the same activity with fewer resources because they spend less time doing the change-overs. That is good. Isn’t it?

But what is the impact on Departments A and C and what effect does it have on delivery times and work in progress and the cost of storing the queues?

Department A notices that it can no longer pass work to B when it wants because B will only start the work when it has a full batch of requests. The queue of waiting work sits inside Department A.  That queue takes up space and that space costs money but the queue cost is incurred by Department A – not Department B.

What Department C sees is the order of the work changed by Department B to create a bigger variation in lead times for consecutive tasks. So if the whole system is required to achieve a delivery time specification – then Department C has to expedite the longest waiters and delay the shortest waiters – and that takes work,  time, space and money. That cost is incurred by Department C not by Department B.

The unit costs for Department B go down – and those for A and C both go up. The system is less productive as a whole.  The queues and delays caused by the policy change means that work can not be completed reliably on time. The blame for the failure falls on Department C.  Conflict between the parts of the system is inevitable. Lose-Lose-Lose.
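To see this whole-system effect in numbers, here is a minimal simulation sketch of the example above. Every number in it – two task types, the work time, the change-over time, the arrival gap, the batch sizes – is an invented illustration, not data from any real department.

```python
# A minimal sketch of the A -> B -> C batching example - invented numbers only.
# Two task types arrive at Department B alternately and evenly spaced; B pays a
# fixed change-over whenever it switches type; with batching it waits until a
# full batch of one type has accumulated before starting it.

def simulate(batch_size, n_tasks=1000, work=2.0, changeover=5.0, gap=8.0):
    arrivals = [(i * gap, i % 2) for i in range(n_tasks)]    # (arrival time, type)

    # Group tasks into batches of one type; a batch is only released when full.
    batches, pending = [], {0: [], 1: []}
    for idx, (t_arr, typ) in enumerate(arrivals):
        pending[typ].append(idx)
        if len(pending[typ]) == batch_size:
            batches.append((t_arr, typ, pending[typ]))       # released now
            pending[typ] = []
    for typ in (0, 1):                                       # flush any part-batches
        if pending[typ]:
            batches.append((arrivals[pending[typ][-1]][0], typ, pending[typ]))
    batches.sort(key=lambda b: b[0])                         # process in release order

    clock, last_type, total_changeover = 0.0, None, 0.0
    waits = []
    for release, typ, members in batches:
        clock = max(clock, release)                          # cannot start early
        if typ != last_type:                                 # pay to switch type
            clock += changeover
            total_changeover += changeover
            last_type = typ
        for idx in members:
            clock += work
            waits.append(clock - arrivals[idx][0])           # time spent at B

    return total_changeover / n_tasks, sum(waits) / n_tasks, max(waits)

for b in (1, 5, 20):
    co, avg_w, max_w = simulate(b)
    print(f"batch={b:2d}  change-over per task={co:4.1f}  "
          f"mean time at B={avg_w:6.1f}  worst={max_w:6.1f}")
```

Running it shows the trade-off: bigger batches cut Department B’s change-over burden per task, but the time tasks spend at B grows many-fold – and that is exactly the cost that lands on Departments A and C, and on the people waiting.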

And conflict is always expensive – on all dimensions – emotional, temporal and financial.


The policy design flaw here looks like it is ‘batching’ – but that policy is just a reaction to a deeper design flaw. It is a symptom.  The deeper flaw is not even the use of ‘unit costing’. That is a useful enough tool. The deeper flaw is the incorrect assumption that improving the unit costs of the stages independently will always improve whole-system productivity.

This is incorrect. This error is the result of ‘linear thinking’.

The Laws of Flow Physics do not work like this. Real systems are non-linear.

To design the management policies for a non-linear system using linear-thinking is guaranteed to fail. Disappointment and conflict are inevitable. And that is what we have. As system designers we need to use ‘systems-thinking’.

This discovery comes as a bit of a shock to management accountants. They feel rather challenged by the assertion that some of their cherished “cost improvement policies” are actually making the system less productive. Precisely the opposite of what they are trying to achieve.

And it is the senior management that decide the system-wide financial policies so that is where the linear-thinking needs to be challenged and the ‘software patch’ applied first.

It is not a major management software re-write. Just a minor tweak is all that is required.

And the numbers speak for themselves. It is not a difficult experiment to do.


So that is where we need to start.

We need to learn Healthcare Operating System design and we need to learn it at all levels in healthcare organisations.

And that system-thinking skill has another name – it is called Improvement Science.

The good news is that it is a lot easier to learn than most people believe.

And that is a big shock too – because how to do this has been known for 50 years.

So if you would like to see a real and current example of how poor policy design leads to falling productivity and then how to re-design the policies to reverse this effect have a look at Journal Of Improvement Science 2013:8;1-20.

And if you would like to learn how to design healthcare operating policies that deliver higher productivity with the same resources then the first step is FISH.

box_opening_up_closing_150_wht_8035 Improvement Science requires the effective, efficient and coordinated use of diagnosis, design and delivery tools.

Experience has also taught us that it is not just about the tools – each must be used as it was designed.

The craftsman knows his tools and knows what instrument to use, where and when the context dictates; and how to use it with skill.

Some tools are simple and effective – easy to understand and to use. The kitchen knife is a good example. It does not require an instruction manual to use it.

Other tools are more complex. Very often because they have a specific purpose. They are not generic. And it may not be intuitively obvious how to use them.  Many labour-saving household appliances have specific purposes: the microwave oven, the dish-washer and so on – but they have complex controls and settings that we need to manipulate to direct the “domestic robot” to deliver what we actually want.  Very often these controls are not intuitively obvious – we are dealing with a black box – and our understanding of what is happening inside is vague.

Very often we do not understand how the buttons and dials that we can see and touch – the inputs – actually influence the innards of the box to determine the outputs. We do not have a mental model of what is inside the Black Box. We do not know – we are ignorant.

In this situation we may resort to just blindly following the instructions;  or blindly copying what someone else does; or blindly trying random combinations of inputs until we get close enough to what we want. No wiser at the end than we were at the start.  The common thread here is “blind”. The box is black. We cannot see inside.

And the complex black box is deliberately made so – because the supplier of the super-tool does not want their “secret recipe” to be known to all – least of all their competitors.

This is a perfect recipe for confusion and for conflict. Lose-Lose-Lose.

Improvement Science is dedicated to eliminating confusion and conflict – so Black Box Tools are NOT on the menu.

Improvement Scientists need to understand how their tools work – and the best way to achieve that level of understanding is to design and build their own.

This may sound like re-inventing the wheel but it is not about building novel tools – it is about re-creating the tried and tested tools – for the purpose of understanding how they work. And understanding their strengths, their weaknesses, their opportunities and their risks or threats.

And doing that requires guidance from a mentor who has been through this same learning journey. Starting with simple, intuitive tools, and working step-by-step to design, build and understand the more complex ones.

So where do we start?

In the FISH course the first tool we learn to use is a Gantt Chart.

It was invented by Henry Laurence Gantt about 100 years ago and requires nothing more than pencil and paper. Coloured pencils and squared paper are even better.

Gantt_ChartThis is an example of a Gantt Chart for a Day Surgery Unit.

At the top are the “tasks” – patients 1 and 2; and at the bottom are the “resources”.

Time runs left to right.

Each coloured bar appears twice: once on each chart.

The power of a Gantt Chart is that it presents a lot of information in a very compact and easy-to-interpret format. That is what Henry Gantt intended.

A Gantt Chart is like the surgeon’s scalpel. It is a simple, generic easy-to-create tool that has a wide range of uses. The skill is knowing where, when and how to use it: and just as importantly where-not, when-not and how-not.
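Here is a tiny sketch of that dual view, using invented Day Surgery data rather than the chart above: the same bars are grouped once by task and once by resource.

```python
# A tiny Gantt-style sketch - the data below are invented for illustration.

work = [                      # (task, resource, start, duration) in arbitrary units
    ("Patient 1", "Theatre",  0, 3),
    ("Patient 1", "Recovery", 3, 2),
    ("Patient 2", "Theatre",  3, 3),
    ("Patient 2", "Recovery", 6, 2),
]

def gantt(rows, key):
    # key = 0 groups the bars by task; key = 1 groups them by resource.
    for name in sorted({row[key] for row in rows}):
        line = ["."] * 10
        for row in rows:
            if row[key] == name:
                start, duration = row[2], row[3]
                for t in range(start, start + duration):
                    line[t] = "#"
        print(f"{name:10s} {''.join(line)}")

print("Task view (one row per patient):")
gantt(work, 0)
print("Resource view (one row per resource):")
gantt(work, 1)
```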

DRAT_04The second tool that an Improvement Scientist learns to use is the Shewhart or time-series chart.

It was invented about 90 years ago.

This is a more complex tool and as such there is a BIG danger that it is used as a Black Box with no understanding of the innards.  The SPC  and Six-Sigma Zealots sell it as a Magic Box. It is not.

We could paste any old time-series data into a bit of SPC software; twiddle with the controls until we get the output we want; and copy the chart into our report. We could do that and hope that no-one will ask us to explain what we have done and how we have done it. Most do not because they do not want to appear ‘ignorant’. The elephant is in the room though.  There is a conspiracy of silence.

The elephant-in-the-room is the risk we take when we use Black Box tools – the risk of GIGO. Garbage In Garbage Out.

And unfortunately we have a tendency to blindly trust what comes out of the Black Box that a plausible Zealot tells us is “magic”. This is the Emperor’s New Clothes problem.  Another conspiracy of silence follows.

The problem here is not the tool – it is the desperate person blindly wielding it. The Zealots know this and they warn the Desperados of the risk and offer their expensive Magician services. They are not interested in showing how the magic trick is done though! They prefer the Box to stay Black.

So to avoid this cat-and-mouse scenario and to understand both the simpler and the more complex tools, and to be able to use them effectively and safely, we need to be able to build one for ourselves.

And the know-how to do that is not obvious – if it were we would have already done it – so we need guidance.

And once we have  built our first one – a rough-and-ready working prototype – then we can use the existing ones that have been polished with long use. And we can appreciate the wisdom that has gone into their design. The Black Box becomes Transparent.
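For example, here is a rough-and-ready prototype of that kind: the core arithmetic of an XmR-style time-series chart, using made-up data and the standard XmR constant 2.66. It is a learning sketch, not the polished BaseLine© tool.

```python
# A rough-and-ready XmR-style calculation - made-up data, standard 2.66 constant.

data = [23, 25, 24, 27, 26, 31, 24, 25, 29, 26, 28, 41, 27, 26, 25]

mean = sum(data) / len(data)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

upper = mean + 2.66 * avg_mr          # natural process limits
lower = mean - 2.66 * avg_mr

print(f"centre line = {mean:.1f}, limits = ({lower:.1f}, {upper:.1f})")
for i, x in enumerate(data, 1):
    flag = "  <- outside the limits: worth investigating" if x > upper or x < lower else ""
    print(f"{i:2d}: {x}{flag}")
```

Building even this crude version makes it much harder to be fooled by a Black Box that claims to be doing something magical.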

So learning how to build the essential tools is the first part of the Improvement Science Practitioner (ISP) training – because without that knowledge it is difficult to progress very far. And without that understanding it is impossible to teach anyone anything other than to blindly follow a Black Box recipe.

Of course Magic Black Box Solutions Inc will not warm to this idea – they may not want to reveal what is inside their magic product. They are fearful that their customers may discover that it is much simpler than they are being told.  And we can test that hypothesis by asking them to explain how it works in language that we can understand. If they cannot (or will not) then we may want to keep looking for someone who can and will.

line_figure_phone_400_wht_9858<Lesley>Hi Bob! How are you today?

<Bob>OK thanks Lesley. And you?

<Lesley>I am looking forward to our conversation. I have two questions this week.

<Bob>OK. What is the first one?

<Lesley>You have taught me that improvement-by-design starts with the “purpose” question and that makes sense to me. But when I ask that question in a session I get an “eh?” reaction and I get nowhere.

<Bob>Quod facere bonum opus et quomodo te cognovi unum?

<Lesley>Eh?

<Bob>I asked you a purpose question.

<Lesley>Did you? What language is that? Latin? I do not understand Latin.

<Bob>So although you recognize the language you do not understand what I asked, the words have no meaning. So you are unable to answer my question and your reaction is “eh?”. I suspect the same is happening with your audience. Who are they?

<Lesley>Front-line clinicians and managers who have come to me to ask how to solve their problems. Their Niggles. They want a how-to-recipe and they want it yesterday!

<Bob>OK. Remember the Temperament Treacle conversation last week. What is the commonest Myers-Briggs Type preference in your audience?

<Lesley>It is xSTJ – tough minded Guardians.  We did that exercise. It was good fun! Lots of OMG moments!

<Bob>OK – is your “purpose” question framed in a language that the xSTJ preference will understand naturally?

<Lesley>Ah! Probably not! The “purpose” question is future-focused, conceptual, strategic, value-loaded and subjective.

<Bob>Indeed – it is an iNtuitor question. xNTx or xNFx. Pose that question to a roomful of academics or executives and they will debate it ad infinitum.

<Lesley>More Latin – but that phrase I understand. You are right.  And my own preference is xNTP so I need to translate my xNTP “purpose” question into their xSTJ language?

<Bob>Yes. And what language do they use?

<Lesley>The language of facts, figures, jobs-to-do, work-schedules, targets, budgets, rational, logical, problem-solving, tough-decisions, and action-plans. Objective, pragmatic, necessary stuff that keeps the operational-wheels-turning.

<Bob>OK – so what would “purpose” look like in xSTJ language?

<Lesley>Um. Good question. Let me start at the beginning. They came to me in desperation because they are now scared enough to ask for help.

<Bob>Scared of what?

<Lesley>Unintentionally failing. They do not want to fail and they do not need beating with sticks. They are tough enough on themselves and each other.

<Bob>OK that is part of their purpose. The “Avoid” part. The bit they do not want. What do they want? What is the “Achieve” part? What is their “Nice If”?

<Lesley>To do a good job.

<Bob>Yes. And that is what I asked you – but in an unfamiliar language. Translated into English I asked “What is a good job and how do you know you are doing one?”

<Lesley>Ah ha! That is it! That is the question I need to ask. And that links in the first map – The 4N Chart®. And it links in measurement, time-series charts and BaseLine© too. Wow!

<Bob>OK. So what is your second question?

<Lesley>Oh yes! I keep getting asked “How do we work out how much extra capacity we need?” and I answer “I doubt that you need any more capacity.”

<Bob>And their response is?

<Lesley>Anger and frustration! They say “That is obvious rubbish! We have a constant stream of complaints from patients about waiting too long and we are all maxed out so of course we need more capacity! We just need to know the minimum we can get away with – the what, where and when so we can work out how much it will cost for the business case.”

<Bob>OK. So what do they mean by the word “capacity”. And what do you mean?

<Lesley>Capacity to do a good job?

<Bob>Very quick! Ho ho! That is a bit imprecise and subjective for a process designer though. The Laws of Physics need the terms “capacity”, “good” and “job” clearly defined – with units of measurement that are meaningful.

<Lesley>OK. Let us define “good” as “delivered on time” and “job” as “a patient with a health problem”.

<Bob>OK. So how do we define and measure capacity? What are the units of measurement?

<Lesley>Ah yes – I see what you mean. We touched on that in FISH but did not go into much depth.

<Bob>Now we dig deeper.

<Lesley>OK. FISH talks about three interdependent forms of capacity: flow-capacity, resource-capacity, and space-capacity.

<Bob>Yes. They are the space-and-time capacities. If we are too loose with our use of these and treat them as interchangeable then we will create the confusion and conflict that you have experienced. What are the units of measurement of each?

<Lesley>Um. Flow-capacity will be in the same units as flow, the same units as demand and activity – tasks per unit time.

<Bob>Yes. Good. And space-capacity?

<Lesley>That will be in the same units as work in progress or inventory – tasks.

<Bob>Good! And what about resource-capacity?

<Lesley>Um – Will that be resource-time – so time?

<Bob>Actually it is resource-time per unit time. So they have different units of measurement. It is invalid to mix them up any-old-way. It would be meaningless to add them for example.

<Lesley>OK. So I cannot see how to create a valid combination from these three! I cannot get the units of measurement to work.

<Bob>This is a critical insight. So what does that mean?

<Lesley>There is something missing?

<Bob>Yes. Excellent! Your homework this week is to work out what the missing pieces of the capacity-jigsaw are.

<Lesley>You are not going to tell me the answer?

<Bob>Nope. You are doing ISP training now. You already know enough to work it out.

<Lesley>OK. Now you have got me thinking. I like it. Until next week then.

<Bob>Have a good week.
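As an aside (not part of the dialogue), here is a minimal sketch of Bob’s point about units, with invented numbers. It only shows that the three capacities carry different dimensions and so cannot simply be added; it deliberately stops short of Lesley’s homework.

```python
# Illustrative quantities only - the numbers are invented.
# Each capacity is a (value, units) pair; units are a dict of exponents.

flow_capacity     = (10.0, {"task": 1, "hour": -1})             # 10 tasks per hour
space_capacity    = (6.0,  {"task": 1})                         # room for 6 tasks
resource_capacity = (3.0,  {"resource-hour": 1, "hour": -1})    # 3 staffed hours per hour

def add(a, b):
    (va, ua), (vb, ub) = a, b
    if ua != ub:
        raise ValueError(f"cannot add {ua} to {ub}")
    return (va + vb, ua)

for pair in [(flow_capacity, space_capacity),
             (flow_capacity, resource_capacity),
             (space_capacity, resource_capacity)]:
    try:
        add(*pair)
    except ValueError as err:
        print(err)        # every pairing fails the dimensional check
```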

stick_figure_help_button_150_wht_9911If the headlines in the newspapers are a measure of social anxiety then healthcare in the UK is in a state of panic: “Hospitals Fear The Winter Crisis Is Here Early“.

The Panic Button is being pressed and the Patient Safety Alarms are sounding.

Closer examination of the statement suggests that the winter crisis is not unexpected – it is just here early.  So we are assuming it will be worse than last year – which was bad enough.

The evidence shows this fear is well founded.  Last year was the worst of the last 5 years and this year is shaping up to be worse still.

So if it is a predictable annual crisis and we have a lot of very intelligent, very committed, very passionate people working on the problem – then why is it getting worse rather than better?

One possible factor is Temperament Treacle.

This is the glacially slow pace of effective change in healthcare – often labelled as “resistance to change” and implying deliberate scuppering of the change boat by powerful forces within the healthcare system.

Resistance to the flow of change is probably a better term. We could call that cultural viscosity.  Treacle has a very high viscosity – it resists flow.  Wading through treacle is very hard work. So pushing change through cultural treacle is hard work. Many give up in exhaustion after a while.

So why the term “Temperament Treacle“?

Improvement Science has three parts – Processes, Politics and Systems.

Process Science is applied physics. It is an objective, logical, rational science. The Laws of Physics are not negotiable. They are absolute.

Political Science is applied psychology. It is a subjective, illogical, irrational science. The Laws of People are totally negotiable.  They are arbitrary.

Systems Science is a combination of Physics and Psychology. A synthesis. A synergy. A greater-than-the-sum-of-the-parts combination.

The Swiss psychiatrist Carl Gustav Jung studied psychology – and in 1921 published “Psychological Types“.  When this ground-breaking work was translated into English in 1923 it was picked up by Katharine Cook Briggs and made popular by her daughter Isabel.  Isabel Briggs married Clarence Myers and in 1942 Isabel Myers learned about the Humm-Wadsworth Scale, a tool for matching people with jobs. So using her knowledge of psychological type differences she set out to develop her own “personality sorting tool”. The first prototype appeared in 1943; in the 1950′s she tested the third iteration and measured the personality types of 5,355 medical students and over 10,000 nurses.   The Myers-Briggs Type Indicator was published in 1962 and since then the MBTI® has been widely tested and validated and is the most extensively used personality type instrument. In 1980 Isabel Myers finished writing Gifts Differing just before she died at the age of 82 after a twenty-year battle with cancer.

The essence of Jung’s model is that an individual’s temperament is largely innate and the result of a combination of three dimensions:

1. The input or perceiving  process (P). The poles are Intuitor (N) or Sensor (S).
2. The decision or judging process (J). The poles are Thinker (T) or Feeler (F).
3. The output or doing process. The poles are Extraversion (E) or Introversion (I).

Each of Jung’s dimensions had two “opposite” poles so when combined they gave eight types.  Isabel Myers, as a result of her extensive empirical testing, added a fourth dimension – which gives the four we see in the modern MBTI®.  The fourth dimension linked the other three together – it describes whether the J or the P process is the one shown to the outside world. So the MBTI® has sixteen broad personality types.  In the 1998 book “Please Understand Me II” David Keirsey put the MBTI® into an historical context and concluded that there are four broad Temperaments – which have been described since Ancient times.

When Isabel Myers measured different populations using her new tool she discovered a consistent pattern: that the proportions of the sixteen MBTI® types were consistent across a wide range of societies. Personality type is, as Jung had suggested, an innate part of the “human condition”. She also saw that different types clustered in different occupations. Finding the “right job” appeared to be a process of natural selection: certain types fitted certain roles better than others and people self-selected at an early age.  If their choice was poor then the person would be unhappy and would not achieve their potential.

Isabel’s work also showed that each type had both strengths and weaknesses – and that people performed better and felt happier when their role played to their temperament strengths.  It also revealed that considerable conflict could be attributed to type-mismatch.  Polar opposite types have the least psychological “common ground” – so when they attempt to solve a common problem they do so by different routes and using different methods and language. This generates confusion and conflict.  This is why Isabel Myers gave her book the title of “Gifts Differing” and her message was that just having awareness of and respect for the innate type differences was a big step towards reducing the confusion and conflict.

So what relevance does this have to change and improvement?

Well it turns out that certain types are much more open to change than others and certain types are much more resistant.  If an organisation, by the very nature of its work, attracts the more change resistant types then that organisation will be culturally more viscous to the flow of change. It will exhibit the cultural characteristics of temperament treacle.

The key to understanding Temperament and the MBTI® is to ask a series of questions:

Q1. Does the person have the N or S preference on their perceiving function?

A1=N then Q2: Does the person have a T or F preference on their judging function?
A2=T gives the xNTx combination which is called the Rational or phlegmatic temperament.
A2=F gives the xNFx combination which is called the Idealist or choleric temperament.

A1=S then Q3: Does the person show a J or P preference to the outside world?
A3=J gives the xSxJ combination which is called the Guardian or melancholic temperament.
A3=P gives the xSxP combination which is called the Artisan or sanguine temperament.
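Rendered as a tiny code sketch – my own wording of the three sorting questions above, not an official MBTI® instrument:

```python
# Map a four-letter type code to its temperament using the Q1-Q3 logic above.

def temperament(mbti: str) -> str:
    mbti = mbti.upper()
    if mbti[1] == "N":                                   # Q1: iNtuitor or Sensor?
        return "Rational (xNTx)" if mbti[2] == "T" else "Idealist (xNFx)"   # Q2
    return "Guardian (xSxJ)" if mbti[3] == "J" else "Artisan (xSxP)"        # Q3

for code in ("ISTJ", "ENFP", "INTP", "ESFP"):
    print(code, "->", temperament(code))
```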

So which is the most change resistant temperament?  The answer may not be a big surprise. It is the Guardians. The melancholics. The SJ’s.

Bureaucracies characteristically attract SJ types. The upside is that they ensure stability – the downside is that they prevent agility.  Bureaucracies block change.

The NF Idealists are the advocates and the mentors: they love initiating and facilitating transformations with the dream of making the world a better place for everyone. They light the emotional bonfire and upset the apple cart. The NT Rationals are the engineers and the architects. They love designing and building new concepts and things – so once the Idealists have cracked the bureaucratic carapace they can swing into action. The SP Sanguines are the improvisors and expeditors – they love getting the new “concept” designs to actually work in the messy real world.

Unfortunately the grand designs dreamed up by the ‘N’s often do not work in practice – and the scene is set for the we-told-you-so game, and the name-shame-blame game.

So if initiating and facilitating change is the Achilles Heel of the SJ’s then what is their strength?

Let us approach this from a different perspective:

Let us put ourselves in the shoes of patients and ask ourselves: “What do we want from a System of Healthcare and from those who deliver that care – the doctors?”

1. Safe?
2. Reliable?
3. Predictable?
4. Decisive?
5. Dependable?
6. All the above?

These are the strengths of the SJ temperament. So how do doctors measure up?

In a recent observational study, 168 doctors who attended a leadership training course completed their MBTI® self-assessments as part of developing insight into temperament from the perspective of a clinical leader.  From the collective data we can answer our question: “Are there more SJ types in the medical profession than we would expect from the general population?”

Doctor_Temperament The table shows the results – 60% of doctors were SJ compared with 35% expected for the general population.

Statistically this is a highly significant difference (p<0.0001). Doctors are different.
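As a rough sanity check of that figure, here is a minimal calculation sketch. It assumes a simplified one-proportion comparison – roughly 60% of the 168 doctors against an expected 35% – rather than whatever full test was actually applied to the table.

```python
# A simplified exact binomial check of the quoted significance - my own
# illustration, not the original analysis.

from math import comb

n, p0 = 168, 0.35
observed = round(0.60 * n)          # about 101 SJ doctors

# One-sided exact p-value: chance of seeing this many or more SJs if the
# true rate were 35%.
p_value = sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(observed, n + 1))
print(f"p-value ~ {p_value:.1e}")   # far below 0.0001, consistent with the text
```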

It is of enormous practical importance as well.

We are reassured that the majority of doctors have a preference for the very traits that patients want from them. That may explain why the Medical Profession always ranks highest in the league table of “trusted professionals”. We need to be able to trust them – it could literally be a matter of life or death.

The table also shows where the doctors were thin on the ground: in the mediating, improvising, developing, constructing temperaments. The very set of skills needed to initiate and facilitate effective and sustained change.

So when the healthcare system is lurching from one predictable crisis to another – the very people we trust to deliver our health care are, by innate temperament, the least comfortable with changing the system of care itself.

That is a problem. A big problem.

Studies have shown that when we get over-stressed, fearful and start to panic then in a desperate act of survival we tend to resort to the aspects of our temperament that are least well developed.  An SJ who is in panic-mode may resort to NP tactics: opinion-led purposeless conceptual discussion and collective decision paralysis. This is called the “headless chicken and rabbit in the headlights” mode. We have all experienced it.

A system that is no longer delivering fit-for-purpose performance because its purpose has shifted requires redesign.  The temperament treacle inhibits the flow of change so the crisis is not averted. The crisis happens, invokes panic and triggers ineffective and counter-productive behaviour. The crisis deepens and performance can drop catastrophically when the red tape is cut. It was the only thing holding the system together!

But while the bureaucracy is in disarray then innovation can start to flourish. And the next cycle starts.

It is a painful, slow, wasteful process called “reactionary evolution by natural selection“.

Improvement Science is different. It operates from a “proactive revolution through collective design” that is enjoyable, quick and efficient but it requires mastery of synergistic political science and process science. We do not have that capability – yet.

The table offers some hope.  It shows the majority of doctors are xSTJ.  They are Logical Guardians. That means that they solve problems using tried-tested-and-trustworthy logic. So they have no problem with the physics. Show them how to diagnose and design processes and they are inside their comfort zone.

Their collective weak spot is managing the politics – the critical cultural dimension of change. Often the result is manipulation rather than motivation. It does not work. The improvement stalls. Cynicism increases. The treacle gets thicker.

System-redesign requires synergistic support, development, improvisation and mediation. These strengths do exist in the medical profession – but they appear to be in short supply – so they need to be identified, and nurtured.  And change teams need to assemble and respect the different gifts.

One further point about temperament.  It is not immutable. We can all develop a broader set of MBTI® capabilities with guidance and practice – especially the ones that fill the gaps between xSTJ and xNFP.  Those whose comfort zone naturally falls nearer the middle of the four dimensions find this easier. And that is one of the goals of Improvement Science training.

Sorting_HatAnd if you are in a hurry then you might start today by identifying the xSFJ “supporters” and the xNFJ “mentors” in your organisation and linking them together to build a temporary bridge over the change culture chasm.

So to find your Temperament just click here to download the Temperament Sorter.

mirror_mirror[Dring Dring]

The phone announced the arrival of Leslie for the weekly ISP mentoring conversation with Bob.

<Leslie> Hi Bob.

<Bob> Hi Leslie. What would you like to talk about today?

<Leslie> A new challenge – one that I have not encountered before.

<Bob>Excellent. As ever you have pricked my curiosity. Tell me more.

<Leslie> OK. Up until very recently whenever I have demonstrated the results of our improvement work to individuals or groups the usual response has been “Yes, but“. The habitual discount as you call it. “Yes, but your service is simpler; Yes, but your budget is bigger; Yes, but your staff are less militant.” I have learned to expect it so I do not get angry any more.

<Bob> OK. The mantra of the skeptics is to be expected and you have learned to stay calm and maintain respect. So what is the new challenge?

<Leslie>There are two parts to it.  Firstly, because the habitual discounting is such an effective barrier to the diffusion of learning, our system has not changed; the performance is steadily deteriorating; the chaos is worsening and everything that is ‘obvious’ has been tried and has not worked. More red lights are flashing on the patient-harm dashboard and the Inspectors are on their way. There is an increasing turnover of staff at all levels – including Executive.  There is an anguished call for “A return to compassion first” and “A search for new leaders” and “A cultural transformation“.

<Bob> OK. It sounds like the tipping point of awareness has been reached, enough people now appreciate that their platform is burning and radical change of strategy is required to avoid the ship sinking and them all drowning. What is the second part?

<Leslie> I am getting more emails along the line of “What would you do?

<Bob> And your reply?

<Leslie> I say that I do not know because I do not have a diagnosis of the cause of the problem. I do know a lot of possible causes but I do not know which plausible ones are the actual ones.

<Bob> That is a good answer.  What was the response?

<Leslie>The commonest one is “Yes, but you have shown us that Plan-Do-Study-Act is the way to improve – and we have tried that and it does not work for us. So we think that improvement science is just more snake oil!”

<Bob>Ah ha. And how do you feel about that?

<Leslie>I have learned the hard way to respect the opinion of skeptics. PDSA does work for me but not for them. And I do not understand why that is. I would like to conclude that they are not doing it right but that is just discounting them and I am wary of doing that.

<Bob>OK. You are wise to be wary. We have reached what I call the Mirror-on-the-Wall moment.  Let me ask what your understanding of the history of PDSA is?

<Leslie>It was called Plan-Do-Check-Act by Walter Shewhart in the 1930′s and was presented as a form of the scientific method that could be applied on the factory floor to improving the quality of manufactured products.  W Edwards Deming modified it to PDSA where the “Check” was changed to “Study”.  Since then it has been the key tool in the improvement toolbox.

<Bob>Good. That is an excellent summary.  What the Zealots do not talk about are the limitations of their wonder-tool.  Perhaps that is because they believe it has no limitations.  Your experience would seem to suggest otherwise though.

<Leslie>Spot on Bob. I have a nagging doubt that I am missing something here. And not just me.

<Bob>The reason PDSA works for you is because you are using it for the purpose it was designed for: incremental improvement of small bits of the big system; the steps; the points where the streams cross the stages.  You are using your FISH training to come up with change plans that will work because you understand the Physics of Flow better. You make wise improvement decisions.  In fact you are using PDSA in two separate modes: discovery mode and delivery mode.  In discovery mode we use the Study phase to build your competence – and we learn most when what happens is not what we expected.  In delivery mode we use the Study phase to build our confidence – and that grows most when what happens is what we predicted.

<Leslie>Yes, that makes sense. I see the two modes clearly now you have framed it that way – and I see that I am doing both at the same time, almost by second nature.

<Bob>Yes – so when you demonstrate it you describe PDSA generically – not as two complementary but contrasting modes. And by demonstrating success you omit to show that there are some design challenges that cannot be solved with either mode.  That hidden gap attracts some of the “Yes, but” reactions.

<Leslie>Do you mean the challenges that others are trying to solve and failing?

<Bob>Yes. The commonest error is to discount the value of improvement science in general; so nothing is done and the inevitable crisis happens because the system design is increasingly unfit for the evolving needs.  The toast is not just burned, it is on fire, and by then it is too late to use the discovery mode of PDSA because prompt and effective action is needed.  So the delivery mode of PDSA is applied to an emergent, ill-understood crisis. The Plan is created using invalid assumptions and guesswork so it is fundamentally flawed and the Do then just makes the chaos worse.  In the ensuing panic the Study and Act steps are skipped so all hope of learning is lost and a vicious and damaging spiral of knee-jerk Plan-Do-Plan-Do follows. The chaos worsens, quality falls, safety falls, confidence falls, trust falls, expectation falls and depression and despair increase.

<Leslie>That is exactly what is happening and why I feel powerless to help. What do I do?

<Bob>The toughest bit is past. You have looked squarely in the mirror and can now see harsh reality rather than hasty rhetoric. Now you can look out of the window with different eyes.  And you are now looking for a real-world example of where complex problems are solved effectively and efficiently. Can you think of one?

<Leslie>Well medicine is one that jumps to mind.  Solving a complex, emergent clinical problem requires a clear diagnosis and prompt and effective action to stabilise the patient and then to cure the underlying cause: the disease.

<Bob>An excellent example. Can you describe what happens as a PDSA sequence?

<Leslie>That is a really interesting question.  I can say for starters that it does not start with P – we have learned not to have a preconceived idea of what to do at the start because it badly distorts our clinical judgement.  The first thing we do is assess the patient to see how sick and unstable they are – we use the Vital Signs. So that means that we decide to Act first and our first action is to Study the patient.

<Bob>OK – what happens next?

<Leslie>Then we will do whatever is needed to stabilise the patient based on what we have observed – it is called resuscitation – and only then we can plan how we will establish the diagnosis; the root cause of the crisis.

<Bob> So what does that spell?

<Leslie> A-S-D-P.  It is the exact opposite of P-D-S-A … the mirror image!

<Bob>Yes. Now consider the treatment that addresses the root cause and that cures the patient. What happens then?

<Leslie>We use the diagnosis to create a treatment Plan for the specific patient; we then Do that, and we Study the effect of the treatment in that specific patient, using our various charts to compare what actually happens with what we predicted would happen. Then we decide what to do next: the final action.  We may stop because we have achieved our goal, or repeat the whole cycle to achieve further improvement. So that is our old friend P-D-S-A.

<Bob>Yes. And what links the two bits together … what is the bit in the middle?

<Leslie>Once we have a diagnosis we look up the appropriate treatment options that have been proven to work through research trials and experience; and we tailor the treatment to the specific patient. Oh I see! The missing link is design. We design a specific treatment plan using generic principles.

<Bob>Yup.  The design step is the jam in the improvement sandwich and it acts like a mirror: A-S-D-P is reflected back as P-D-S-A

<Leslie>So I need to teach this backwards: P-D-S-A and then Design and then A-S-D-P!

<Bob>Yup – and you know that by another name.

<Leslie> 6M Design®! That is what my Improvement Science Practitioner course is all about.

<Bob> Yup.

<Leslie> If you had told me that at the start it would not have made much sense – it would just have confused me.

<Bob>I know. That is the reason I did not. The Mirror needs to be discovered in order for the true value to be appreciated. At the start we look in the mirror and perceive what we want to see. We have to learn to see what is actually there. Us. Now you can see clearly where P-D-S-A and Design fit together and the missing A-S-D-P component that is needed to assemble a 6M Design® engine. That is Improvement-by-Design in a nine-letter nutshell.

<Leslie> Wow! I can’t wait to share this.

<Bob> And what do you expect the response to be?

<Leslie>”Yes, but”?

<Bob> From the die hard skeptics – yes. It is the ones who do not say “Yes, but” that you want to engage with. The ones who are quiet. It is always the quiet ones that hold the key.

There are three necessary parts before ANY improvement-by-design effort will gain traction. Omit any one of them and nothing happens.

stick_figure_drawing_three_check_marks_150_wht_5283

1. A clear purpose and an outline strategic plan.

2. Tactical measurement of performance-over-time.

3. A generic Improvement-by-Design framework.

These are necessary minimum requirements to be able to safely delegate the day-to-day and week-to-week tactical stuff that delivers the “what is needed”.

These are necessary minimum requirements to build a self-regulating, self-sustaining, self-healing, self-learning win-win-win system.

And this is not a new idea.  It was described by Joseph Juran in the 1960′s and that description was based on 20 years of hands-on experience of actually doing it in a wide range of manufacturing and service organisations.

That is 20 years before  the terms “Lean” or “Six Sigma” or “Theory of Constraints” were coined.  And the roots of Juran’s journey were 20 years before that – when he started work at the famous Hawthorne Works in Chicago – home of the Hawthorne Effect – and where he learned of the pioneering work of  Walter Shewhart.

And the roots of Shewhart’s innovations were 20 years before that – in the first decade of the 20th Century when innovators like Henry Ford and Henry Gantt were developing the methods of how to design and build highly productive processes.

Ford gave us the one-piece-flow high-quality at low-cost production paradigm. Toyota learned it from Ford.  Gantt gave us simple yet powerful visual charts that give us an understanding-at-a-glance of the progress of the work.  And Shewhart gave us the deceptively simple time-series chart that signals when we need to take more notice.

These nuggets of pragmatic golden knowledge have been buried for decades under a deluge of academic mud.  It is high time to clear away the detritus and get back to the bedrock of pragmatism. The “how-to-do-it” of improvement. Just reading Juran’s 1964 “Managerial Breakthrough” illustrates how much we now take for granted. And how ignorant we have allowed ourselves to become.

Acquired Arrogance is a creeping, silent disease – we slip from second nature to blissful ignorance without noticing when we divorce painful reality and settle down with our own comfortable collective rhetoric.

The wake-up call is all the more painful as a consequence: because it is all the more shocking for each one of us; and because it affects more of us.

The pain is temporary – so long as we treat the cause and not just the symptom.

The first step is to acknowledge the gap – and to start filling it in. It is not technically difficult, time-consuming or expensive.  Whatever our starting point we need to put in place the three foundation stones above:

1. Common purpose.
2. Measurement-over-time.
3. Method for Improvement.

Then the rubber meets the road (rather than the sky) and things start to improve – for real. Lots of little things in lots of places at the same time – facilitated by the Junior Managers. The cumulative effect is dramatic. Chaos is tamed; calm is restored; capability builds; and confidence builds. The cynics have to look elsewhere for their sport and the skeptics are able to remain healthy.

Then the Middle Managers feel the new firmness under their feet – where before there were shifting sands. They are able to exert their influence again – to where it makes a difference. They stop chasing Scotch Mist and start reporting real and tangible improvement – with hard evidence. And they rightly claim a slice of the credit.

And the upwelling of win-win-win feedback frees the Senior Managers from getting sucked into reactive fire-fighting and the Victim Vortex; and that releases the emotional and temporal space to start learning and applying System-level Design.  That is what is needed to deliver a significant and sustained improvement.

And that creates the stable platform for the Executive Team to do Strategy from. Which is their job.

It all starts with the Three Essentials:

1. A Clear and Common Constancy of Purpose.
2. Measurement-over-time of the Vital Metrics.
3. A Generic Method for Improvement-by-Design.

Black_Curtain_and_DoorA couple of weeks ago an important event happened.  A Masterclass in Demand and Capacity for NHS service managers was run by an internationally renowned and very experienced practitioner of Improvement Science.

The purpose was to assist the service managers to develop their capability for designing quality, flow and cost improvement using tried and tested operations management (OM) theory, techniques and tools.

It was assumed that, as experienced NHS service managers, they already knew the basic principles of OM and the foundation concepts, terminology, techniques and tools.

It was advertised as a Masterclass and designed accordingly.

On the day it was discovered that none of the twenty delegates had heard of two fundamental OM concepts: Little’s Law and Takt Time.

These relate to how processes are designed-to-flow. It was a Demand and Capacity Master Class; not a safety, quality or cost one.  The focus was flow.
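For readers meeting these two concepts for the first time, here is a minimal worked example. The numbers are invented for illustration; they are not from the Masterclass.

```python
# Little's Law and Takt Time - invented illustrative numbers.

# Little's Law: average work-in-progress = average flow rate x average lead time
flow_rate = 4.0          # patients completed per hour
lead_time = 2.5          # hours each patient spends in the process, on average
wip = flow_rate * lead_time
print(f"Little's Law: expected work-in-progress = {wip:.0f} patients")

# Takt time: the pace the process must keep to match demand
available_time = 8 * 60  # minutes of clinic time per day
daily_demand = 32        # referrals per day
takt = available_time / daily_demand
print(f"Takt time: one patient must be completed every {takt:.0f} minutes")
```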

And it became clear that none of the twenty delegates were aware before the day that there is a well-known and robust science to designing systems to flow.

So learning this fact came as a bit of a shock.

The implications of this observation are profound and worrying:

if a significant % of senior NHS operational managers are unaware of the foundations of operations management then the NHS may have a problem it was not aware of …

because …

“if transformational change of the NHS into a stable system that is fit-for-purpose (now and into the future) requires the ability to design processes and systems that deliver both high effectiveness and high efficiency ...”

then …

it raises the question of whether the current generation of NHS managers are fit-for-this-future-purpose“.

No wonder that discovering a Science of  Improvement actually exists came as a bit of a shock!

And saying “Yes, but clinicians do not know this science either!” is a defensive reaction and not a constructive response. They may not but they do not call themselves “operational managers”.

[PS. If you are reading this and are employed by the NHS and do not know what Little's Law and Takt Time are then it would be worth looking them up first. Wikipedia is a good place to start].

And now we have another question:

“Given there are thousands of operational managers in the NHS; what does one sample of 20 managers tell us about the whole population?”

Now that is a good question.

It is also a question of statistics. More specifically quite advanced statistics.

And most people who work in the NHS have not studied statistics to that level. So now we have another do-not-know-how problem.

But it is still an important question that we need to understand the answer to – so we need to learn how and that means taking this learning path one step at a time using what we do know, rather than what we do not.

Step 1:

What do we know? We have one sample of 20 NHS service managers. We know something about our sample because our unintended experiment has measured it: that none of them had heard of Little’s Law or Takt Time. That is 0/20 or 0%.

This is called a “sample statistic“.

What we want to know is “What does this information tell us about the proportion of the whole population of all NHS managers who do have this foundation OM knowledge?”

This proportion of interest is called  the unknown “population parameter“.

And we need to estimate this population parameter from our sample statistic because it is impractical to measure a population parameter directly: That would require every NHS manager completing an independent and accurate assessment of their basic OM knowledge. Which seems unlikely to happen.

The good news is that we can get an estimate of a population parameter from measurements made from small samples of that population. That is one purpose of statistics.

Step 2:

But we need to check some assumptions before we attempt this statistical estimation trick.

Q1: How representative is our small sample of the whole population?

If we had chosen the delegates for the masterclass by putting the names of all NHS managers in a hat and drawing twenty names out at random, as in a tombola or lottery, then we would have what is called a “random sample” and we could trust our estimate of the wanted population parameter. This is called “random sampling”.

That was not the case here. Our sample was self-selecting. We were not conducting a research study. This was the real world … so there is a chance of “bias”. Our sample may not be representative and we cannot say what the most likely bias is.

It is possible that the managers who selected themselves were the ones struggling most and therefore more likely than average to have a gap in their foundation OM knowledge. It is also possible that the managers who selected themselves are the most capable in their generation and are very well aware that there is something else that they need to know.

We may have a biased sample and we need to proceed with some caution.

Step 3:

So given the fact that none of our possibly biased sample of managers were aware of the Foundation OM Knowledge, it is possible that no NHS service managers know this core knowledge. In other words the actual population parameter is 0%. It is also possible that the managers in our sample were the only ones in the NHS who do not know this. So, in theory, the sought-for population parameter could be anywhere between 0% and very nearly 100%. Does that mean it is impossible to estimate the true value?

It is not impossible. In fact we can get an estimate that we can be very confident is accurate. Here is how it is done.

Statistical estimates of population parameters are always presented as ranges with a lower and an upper limit called a “confidence interval” because the sample is not the population. And even if we have an unbiased random sample we can never be 100% confident of our estimate.  The only way to be 100% confident is to measure the whole population. And that is not practical.

So, we know the theoretical limits from consideration of the extreme cases … but what happens when we are more real-world-reasonable and say – “let us assume our sample is actually a representative sample, albeit not a randomly selected one“. How does that affect the range of our estimate of the elusive number – the proportion of NHS service managers who know basic operations management theory?

Step 4:

To answer that we need to consider two further questions:

Q2. What is the effect of the size of the sample? What if only 5 managers had come and none of them knew; what if it had been 50 or 500 and none of them knew?

Q3. What if we repeated the experiment more times? With the same or different sample sizes? What could we learn from that?

Our intuition tells us that the larger the sample size, and the more often we repeat the experiment, the more confident we will be of the result. In other words, the narrower the range of the confidence interval around our sample statistic will be.

Our intuition is correct because if our sample was 100% of the population we could be 100% confident.

So given we have not yet found an NHS service manager who has the OM Knowledge then we cannot exclude 0%. Our challenge narrows to finding a reasonable estimate of the upper limit of our confidence interval.

Step 5

Before we move on let us review where we have got to already and our purpose for starting this conversation: We want enough NHS service managers who are knowledgeable enough of design-for-flow methods to catalyse a transition to a fit-for-purpose and self-sustaining NHS.

One path to this purpose is to have a large enough pool of service managers who do understand this Science well enough to act as advocates and to spread both the know-of and the know-how.  This is called the “tipping point“.

There is strong evidence that when about 20% of a population knows about something that is useful for the whole population – then that knowledge  will start to spread through the grapevine. Deeper understanding will follow. Wiser decisions will emerge. More effective actions will be taken. The system will start to self-transform.

And in the Brave New World of social media this message may spread further and faster than in the past. This is good.

So if the NHS needs 20% of its operational managers to be aware of the Foundations of Operations Management then what value is our morsel of data from one sample of 20 managers who, by chance, were all unaware of the Knowledge? How can we use that data to say how close to the magic 20% tipping point we are?

Step 6:

To do that we need to ask the question in a slightly different way.

Q4. What is the chance of an NHS manager NOT knowing?

We assume that they either know or do not know; so if 20% know then 80% do not.

This is just like saying: if the chance of rolling a “six” is 1-in-6 then the chance of rolling a “not-a-six” is 5-in-6.

Next we ask:

Q5. What is the likelihood that we, just by chance, selected a group of managers where none of them know – and there are 20 in the group?

This is rather like asking: what is the likelihood of rolling twenty “not-a-sixes” in a row?

Our intuition says “an unlikely thing to happen!”

And again our intuition is sort of correct. How unlikely though? Our intuition is a bit vague on that.

If the actual proportion of NHS managers who have the OM Knowledge is about the same chance of rolling a six (about 16%) then we sense that the likelihood of getting a random sample of 20 where not one knows is small. But how small? Exactly?

We sense that 20% is too high an estimate of a reasonable upper limit. But how much too high?

The answer to these questions is not intuitively obvious.

We need to work it out logically and rationally. And to work this out we need to ask:

Q6. As the % of Managers-who-Know is reduced from 20% towards 0% – what is the effect on the chance of randomly selecting 20 all of whom are not in the Know?  We need to be able to see a picture of that relationship in our minds.

The good news is that we can work that out with a bit of O-level maths. And all NHS service managers, nurses and doctors have done O-level maths. It is a mandatory requirement.

The chance of rolling a “not-a-six” is 5/6 on one throw – about 83%;
and the chance of rolling only “not-a-sixes” in two throws is 5/6 x 5/6 = 25/36 – about 69%
and the chance of rolling only “not-a-sixes” in three throws is 5/6 x 5/6 x 5/6 – about 58%… and so on.

[This is called the "chain rule" and it requires that the throws are independent of each other - i.e. a random, unbiased sample]

If we do this 20 times we find that the chance of rolling no sixes at all in 20 throws is about 2.6% – unlikely but far from impossible.
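
For anyone who would rather check the arithmetic with a few lines of code than a calculator, here is a minimal sketch (Python is just our choice of illustration here – a spreadsheet does the job equally well):

p_not_six = 5 / 6                      # chance of a "not-a-six" on one throw
for n in (1, 2, 3, 20):
    chance = p_not_six ** n            # chain rule: multiply the independent chances together
    print(f"{n} throws with no six: {chance:.1%}")
# prints 83.3%, 69.4%, 57.9% and 2.6%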

We need to introduce a bit of O-level algebra now.

Let us call the proportion of NHS service managers who understand basic OM – our unknown population parameter – something like “p”.

So if p is the chance of a “six” then (1-p) is the chance of a “not-a-six”.

Then the chance of no sixes in one throw is (1-p)

and no sixes after 2 throws is (1-p)(1-p) = (1-p)^2 (where ^ means raise to the power)

and no sixes after three throws is (1-p)(1-p)(1-p) = (1-p)^3 and so on.

So the likelihood of  “no sixes in n throws” is (1-p)^n

Let us call this “t”

So the equation we need to solve to estimate the upper limit of our estimate of “p” is

t=(1-p)^20

Where “t” is a measure of how likely we are to choose 20 managers all of whom do not know – just by chance.  And we want that to be a small number. We want to feel confident that our estimate is reasonable and not just a quirk of chance.

So what threshold do we set for “t” that we feel is “reasonable”? 1 in a million? 1 in 1000? 1 in 100? 1 in 10?

By convention we use 1 in 20 (t=0.05) – but that is arbitrary. If we are more risk-averse we might choose 1:100 or 1:1000. It depends on the context.

Let us be reasonable – let us say we want to be 95% confident of our estimated upper limit for “p” – which means we are calculating a 95% confidence interval. This means we will accept a 1-in-20 risk of our calculated confidence interval for “p” being wrong: 19-to-1 odds that the true value of “p” falls inside our calculated range. Pretty good odds! So we will be reasonable and set the likelihood threshold for being “wrong” at 5%.

So now we need to solve:

0.05= (1-p)^20

And we want a picture of this relationship in our minds so let us draw a graph of t for a range of values of p.

We know the value of p must be between 0 and 1.0 so we have all we need and we can generate this graph easily using Excel.  And every senior NHS operational manager knows how to use Excel. It is a requirement. Isn’t it?

[Chart: the “Black Curtain” chart]

The Excel-generated chart shows the relationship between p (horizontal axis) and t (vertical axis) using our equation:

t=(1-p)^20.
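
And for anyone without Excel to hand, here is a minimal Python sketch (our choice of tool for illustration, not the original Excel workbook) that generates the same curve as a table of values and also solves the equation directly:

n = 20
for p_pct in range(0, 21, 2):              # p from 0% to 20% in 2% steps
    t = (1 - p_pct / 100) ** n             # chance that none of the 20 "know"
    print(f"p = {p_pct:2d}%   t = {t:6.1%}")
# t drops below the 5% threshold at about p = 14%
p_upper = 1 - 0.05 ** (1 / n)              # rearranging 0.05 = (1-p)^20
print(f"95% upper limit for p = {p_upper:.1%}")   # about 13.9%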

Step 7

Let us first do a “sanity check” on what we have drawn. Let us “check the extreme values”.

If 0% of managers know then a sample of 20 will always reveal none – i.e. the leftmost point of the chart. Check!

If 100% of managers know then a sample of 20 will never reveal none – i.e. way off to the right. Check!

What is clear from the chart is that the relationship between p and t  is not a straight line; it is non-linear. That explains why we find it difficult to estimate intuitively. Our brains are not very good at doing non-linear analysis. Not very good at all.

So we need a tool to help us. Our Excel graph.  We read down the vertical “t” axis from 100% to the 5% point, then trace across to the right until we hit the line we have drawn, then read down to the corresponding value for “p”. It says about 14%.

So that is the upper limit of our 95% confidence interval of the estimate of the true proportion of NHS service managers who know the Foundations of Operations Management.  The lower limit is 0%.

And we cannot say better than somewhere between  0%-14% with the data we have and the assumptions we have made.

To get a more precise estimate,  a narrower 95% confidence interval, we need to gather some more data.

[Another way we can use our chart is to ask "If the actual % of Managers who know is x% then what is the chance that no one in our sample of 20 will know?" Solving this manually means marking the x% point on the horizontal axis then tracing a line vertically up until it crosses the drawn line then tracing a horizontal line to the left until it crosses the vertical axis and reading off the likelihood.]

So if in reality 5% of all managers do Know then the chance of no one knowing in an unbiased sample of 20 is about 36% – really quite likely.
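
Checking that reading with one more line of the same Python sketch:

print(f"{0.95 ** 20:.1%}")   # chance that none of a sample of 20 know when 5% of the population actually know: 35.8%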

Now we are getting a feel for the likely reality. Much more useful than just dry numbers!

But we can be 95% confident that at least 86% of NHS managers do NOT know the basic language of flow-improvement-science.

And what this chart also tells us is that we can be VERY confident that the true value of p is less than 20% – the proportion we believe we need to reach the transformation tipping point.

Now we need to repeat the experiment and draw a new graph to get a more accurate estimate of just how much less – but stepping back from the statistical nuances, the message is already clear: we do have a Black Curtain problem.

A Black Curtain of Ignorance problem.

Many will now proclaim angrily “This cannot be true! It is just statistical smoke and mirrors. Surely our managers do know this by a different name – how could they not! It is unthinkable to suggest the majority of NHS managers are ignorant of the basic science of what they are employed to do!”

If that were the case though then we would already have an NHS that is fit-for-purpose. That is not what reality is telling us.

And it quickly became apparent at the masterclass that our sample of 20 did not know-this-by-a-different-name.

The good news is that this knowledge gap could be hiding the opportunity we are all looking for – a door to a path that leads to a radical yet achievable transformation of the NHS into a system that is fit-for-purpose. Now and into the future.

A system that delivers safe, high quality care for those who need it, in full, when they need it and at a cost the country can afford. Now and for the foreseeable future.

And the really good news is that this knowledge gap may be deep and extensive but it is not wide … the Foundations are easy to learn, and to start applying immediately. The basics can be learned in less than a week – the more advanced skills take a bit longer. And this is not untested academic theory – it is proven, pragmatic, real-world problem-solving know-how. It has been known for over 50 years outside healthcare.

Our goal is not acquisition of theoretical knowledge – it is a deep enough understanding to make wise enough decisions to achieve good enough outcomes. For everyone. Starting tomorrow.

And that is the design purpose of FISH. To provide those who want to learn a quick and easy way to do so.

Stop Press: Further feedback from the masterclass is that some of the managers are grasping the nettle, drawing back their own black curtains, opening the door that was always there behind it, and taking a peek through into a magical garden of opportunity. One that was always there but was hidden from view.

Sat 5th October

It started with a tweet.

08:17 [JG] The NHS is its people. If you lose them, you lose the NHS.

09:15 [DO] We are in a PEOPLE business – educating people and creating value.

Sun 6th October

08:32 [SD] Who isn’t in people business? It is only people who buy stuff. Plants, animals, rocks and machines don’t.

09:42 [DO] Very true – it is people who use a service and people who deliver a service and we ALL know what good service is.

09:47 [SD] So onus is on us to walk our own talk. If we don’t all improve our small bits of the NHS then who can do it for us?

Then we were off … the debate was on …

10:04 [DO] True – I can prove I am saving over £160 000.00 a year – roll on PBR !?

10:15 [SD] Bravo David. I recently changed my surgery process: productivity up by 35%. Cost? Zero. How? Process design methods.

11:54 [DO] Exactly – cost neutral because we were thinking differently – so how to persuade the rest?

12:10 [SD] First demonstrate it is possible then show those who want to learn how to do it themselves. http://www.saasoft.com/fish/course

We had hard evidence it was possible … and now MC joined the debate …

12:48 [MC] Simon why are there different FISH courses for safety, quality and efficiency? Shouldn’t good design do all of that?

12:52 [SD] Yes – goal of good design is all three. It just depends where you are starting from: Governance, Operations or Finance.

A number of parallel threads then took off and we all had lots of fun exploring each other’s knowledge and understanding.

17:28 MC registers on the FISH course.

And that gave me an idea. I emailed an offer – that he could have a complimentary pass for the whole FISH course in return for sharing what he learns as he learns it.  He thought it over for a couple of days then said “OK”.

Weds 9th October

06:38 [MC] Over the last 4 years or so, I’ve been involved in incrementally improving systems in hospitals. Today I’m going to start an experiment.

06:40 [MC] I’m going to see if we can do less of the incremental change and more system redesign. To do this I’ve enrolled in FISH

Fri 11th October

06:47 [MC] So as part of my exploration into system design, I’ve done some studies in my clinic this week. Will share data shortly.

21:21 [MC] Here’s a chart showing cycle time of patients in my clinic. Median cycle time 14 mins, but much longer in 2 pic.twitter.com/wu5MsAKk80

[Chart: clinic cycle-time run chart]

21:22 [MC] Here’s the same clinic from patients’ point of view, wait time. Much longer than I thought or would like

[Chart: clinic wait-time run chart]

21:24 [MC] Two patients needed to discuss surgery or significant news, that takes time and can’t be rushed.

21:25 [MC] So, although I started on time, worked hard and finished on time, people waited ages to see me. Template is wrong!

21:27 [MC] By the time I had seen the 3rd patient, people were waiting 45 mins to see me. That’s poor.

21:28 [MC] The wait got progressively worse until the end of the clinic.

Sunday 13th October

16:02 [MC] As part of my homework on systems, I’ve put my clinic study data into a Gantt chart. Red = waiting, green = seeing me pic.twitter.com/iep2PDoruN

[Chart: clinic Gantt chart – red = waiting, green = being seen]

16:34 [SD] Hurrah! The visual power of the Gantt Chart. Worth adding the booked time too – there are Seven Sins of Scheduling to find.

16:36 [SD] Excellent – good idea to sort into booked time order – it makes the planned rate of demand easier to see.

16:42 [SD] Best chart is Work In Progress – count the number of patients at each time step and plot as a run chart.

17:23 [SD] Yes – just count how many lines you cross vertically at each time interval. It can be automated in Excel

17:38 [MC] Like this? pic.twitter.com/fTnTK7MdOp


[Chart: clinic work-in-progress run chart]

This is the work-in-progress chart. The most useful process monitoring chart of all. It shows the changing size of the queue over time.  Good flow design is associated with small, steady queues.

18:22 [SD] Perfect! You’re right not to plot as XmR – this is a cusum metric. Not a healthy WIP chart this!

There was more to follow but the “ah ha” moment had been seen and shared.
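
As an aside for anyone who wants to try this on their own clinic data: the count-the-lines-you-cross step that was automated in Excel can be sketched in a few lines of Python too (the arrival and departure times below are invented for illustration – they are not MC’s data):

# Work-in-progress at time t = patients who have arrived but not yet departed.
# Times are minutes from the start of clinic; each pair is (arrival, departure).
visits = [(0, 25), (10, 52), (20, 70), (30, 95), (40, 118), (50, 126)]   # invented values
for t in range(0, 131, 10):
    wip = sum(1 for arrive, depart in visits if arrive <= t < depart)
    print(f"t = {t:3d} min   WIP = {wip}")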

Weds 16th October

MC completes the Online FISH course and receives his well-earned Certificate of Achievement.

This was his with-the-benefit-of-hindsight conclusion:

I wish I had known some of this before. I will have a totally different approach to improvement projects now. The key is to measure and model well before doing anything radical.

Improvement Science works.
Improvement-by-Design is a skill that can be learned quickly.
FISH is just a first step.

This week I heard an inspiring story of applied Improvement Science that has delivered a win-win-win result. Not in a hospital. Not in a factory. In the red-in-tooth-and-claw reality of rural Kenya.

Africa has vast herds of four-hoofed herbivores called zebra and wildebeest which are accompanied by clever and powerful carnivores – called lions. The sun and rain make the grass grow; the herbivores eat the grass and the carnivores eat the herbivores. It is the way of Nature – and has been so for millions of years.

Enter Man a few thousand years ago with his domesticated cattle and the scene is set for conflict. Domestic cattle are easy pickings for a hungry lion. Why spend a lot of energy chasing a lively zebra or wildebeest and run the risk of injury that would spell death-by-starvation? Lions are strong and smart but they do not have a social security system to look after the injured and sick. So why not go for the easier option?

So Man protects his valuable cattle from hungry lions. And Man is inventive. The cattle need to eat and sleep like the rest of us – so during the day the cattle are guarded by brave Maasai warriors armed with spears; and at night the cattle are herded into acacia thorn-ringed kraals and watched over by the boys of the tribe.

The lions come at night. Their sense of smell and sight is much better developed than Man’s.

The boys’ job is to deter the lions from killing the cattle.

And this conflict has been going on for thousands of years.

So when a hungry lion kills a poorly guarded cow or bull – then Man will get revenge and kill the lion.  Everyone loses.

But the application of Improvement Science is changing that ancient conflict. And it was not done by a scientist or an animal welfare evangelist or a trained Improvementologist. It was done by a young Maasai boy called Richard Turere.

He describes the why, the what and the how  … HERE.

So what was his breakthrough?

It was noticing that walking about with a torch  was a more effective lion deterrent than a fire or a scarecrow.

That was the chance discovery.  Chance favours the prepared mind.

So how do we create a prepared mind that is receptive to the hints that chance throws at us?

That is one purpose of learning Improvement Science.

What came after the discovery was not luck … it was design.

Richard used what was to hand to design a solution that achieved the required purpose – an effective lion deterrent – in a way that was also an efficient use of his lifetime.

He had bigger dreams than just protecting his tribe’s cattle. His dream was to fly in one of those silver things that he saw passing high over the savannah every day.

And sitting up every night waving a torch to deter hungry lions from eating his father’s cattle was not going to deliver that dream.

So he had to nail that Niggle before he could achieve his Nice If.

Like many budding inventors and engineers Richard is curious about how things work – and he learned a lot about electronics by dismantling his mother’s radio! It got him into a lot of trouble – but the knowledge and understanding that he gained was put to good use when he designed his “lion lights”.

This true story captures the essence of Improvement Science better than any blog, talk, lecture, course or book could.

That is why it was shared by those who learned of his improvement; then to TED; then to the World; then passed to me and I am passing it on too.  It is an inspiring story. It says that anyone can do this sort of thing if they choose to.

And it shows how Improvement Science spreads.  Through the grapevine.  And understanding how that works is part of the Science.

One of the biggest challenges in Improvement Science is diffusion of an improvement outside the circle of control of the innovator.

It is difficult enough to make a significant improvement in one small area – it is an order of magnitude more difficult to spread the word and to influence others to adopt the new idea!

One strategy is to shame others into change by demonstrating that their attitude and behaviour are blocking the diffusion of innovation.

This strategy does not work.  It generates more resistance and amplifies the differences of opinion.

Another approach is to bully others into change by discounting their opinion and just rolling out the “obvious solution” by top-down diktat.

This strategy does not work either.  It generates resentment – even if the solution is fit-for-purpose – which it usually is not!

So what does work?

The key to it is to convert some skeptics because a converted skeptic is a powerful force for change.

But doesn’t that fly in the face of established change management theory?  Innovation diffuses from innovators to early-adopters, then to the silent majority, then the laggards and dinosaurs doesn’t it?

Yes – but that style of diffusion is incremental, slow and has a very high failure rate.  What is very often required is something more radical, much faster and more reliable.  For that it needs both push from the Confident Optimists and pull from some Converted Pessimists.  The tipping point does not happen until the silent majority start to come off the fence in droves: and they do that when the noisy optimists and pessimists start to agree.  The fence-sitters jump when the tug-o-war stalemate stops and the force for change becomes aligned in the direction of progress.

So how is a skeptic converted?

Simple. By another Converted Skeptic.

Here is a real example.

We are all skeptical about many things that we would actually like to improve.

Personal health for instance. Something like weight. Yawn! Not that Old Chestnut!

We are bombarded with shroud-waver stories that we are facing an epidemic of obesity, rapidly rising  rates of diabetes, and all the nasty and life-shortening consequences of that. We are exhorted to eat “five portions of fruit and veg a day” …  or else! We are told that we must all exercise our flab away. We are warned of the Evils of Cholesterol and told that fat children are symptomatic of bad parenting.

The more gullible and fearful are herded en-masse in the direction of the Get-Thin-Quick sharks who have a veritable feeding frenzy. Their goal is their short-term financial health not the long-term health of their customers.

The more insightful, skeptical and frustrated seek solace in the chocolate Hob Nob jar.

For their part the healthcare professionals are rewarded for providing ineffective healthcare by being paid-for-activity not for outcome. They dutifully measure the decline and hand out ineffective advice. Their goal is survival too.

The outcome is predictable and seemingly unavoidable.

So when an innovation comes along that challenges the current dogma and status quo the skeptics inevitably line up and proclaim that it will not work. Not that it does not work. They do not know that because they never try it. They are skeptics. Someone else has to prove it to them.

I am a skeptic about many things.

I am very skeptical about diets – the evidence suggests that their proclaimed benefit is difficult to achieve and even more difficult to sustain: and that is the hall-mark of either a poor design or a deliberate profit-driven and perfectly legal scam.

So I decided to put an innovative approach to weight loss to the test.  It is not a diet – it is a design to achieve and sustain a healthier weight to height ratio.  And for it to work it must work for me because I am a diet skeptic.

The start of the story is  HERE

I am now a Converted Skeptic.

I call the innovative design a “2 out of 7 Lo-CHO” policy and what that means is for two days a week I just cut out as much carbohydrate (CHO) as feasible.  Stuff like bread, potatoes, rice, pasta and sugar. The rest of the time I do what I normally do.  There is no need for me to exercise and no need for me to fill up on Five Fruit and Veg.

[Chart: daily weight run chart for the 2-out-of-7 Lo-CHO design]

The chart above is the evidence of what happened. It shows a 7 kg reduction in weight over 140 days – and that is impressive given that it has required no extra exercise and no need to give up tasty treats completely and definitely no need to boost the bottom-line of a Get-Thin-Quick shark!

It also shows what to expect.  The weight loss starts steeper then tails off as it approaches a new equilibrium weight. This is the classic picture of what happens to a “system” when one of its “operational policies” is wisely re-designed.  Patience, persistence and a time-series chart are all that is needed. It takes less than a minute per day to monitor the improvement.

I can afford to invest a minute per day.

The BaseLine© chart clearly shows that the day-to-day variation is quite high: and that is expected – it is inherent in the 2-out-of-7 Lo-CHO design. It is not the short-term change that is the measure of success – it is the long-term improvement that is important.

It is important to measure daily – because it is the daily habit that keeps me mindful, aligned, and  on-goal.  It is not the measurement itself that is the most important thing – it is the conscious act of measuring and then plotting the dot in the context of the previous dots. The picture tells the story. No further “statistical” analysis is required.
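
For the curious, the plot-the-dot habit needs nothing more sophisticated than a sketch like this (Python with matplotlib, purely as an illustration – a paper chart or a spreadsheet does the same job; the numbers are invented):

import matplotlib.pyplot as plt

weights_kg = [86.0, 85.7, 85.9, 85.3, 85.6, 85.1, 85.4]   # invented daily readings
plt.plot(range(1, len(weights_kg) + 1), weights_kg, marker="o")   # each new dot lands in the context of the previous dots
plt.xlabel("Day")
plt.ylabel("Weight (kg)")
plt.title("Daily weight run chart")
plt.show()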

The power of this chart is that it provides hard evidence that is very effective for nudging other skeptics like me into giving the innovative idea a try.  I know because I have done that many times now.  I have converted other skeptics.  It is an innovation infection.

And the same principle appears to apply to other areas.  What is critical to success is tangible and visible proof of progress. That is what skeptics need. Then a rational and logical method and explanation that respects their individual opinion and requirements. The design has to work for them. And it must make sense.

They will come out with a string of “Yes … buts” and that is OK because that is how skeptics work.  Just answer their questions with evidence and explanations. It can get a bit wearing I admit but it is worth the effort.

An effective Improvement Scientist needs to be a healthy skeptic too.

[Beep Beep]

Bob tapped the “Answer” button on his smartphone – it was Lesley calling in for their regular mentoring session.

<Bob>Hi Lesley. How are you today? And which tunnel in the ISP Learning Labyrinth shall we explore today?

<Lesley>Hi Bob. I am OK thank you. Can we invest some time in the Engagement Maze?

<Bob>OK. Do you have a specific example?

<Lesley>Sort of. This week I had a conversation with our Chief Executive about the potential of Improvement Science and the reply I got was “I am convinced by what you say but it is your colleagues who need to engage. If you have not succeeded in convincing them then how can I?” I was surprised by that response and slightly niggled because it had an uncomfortable nugget of truth in it.

<Bob>That sounds like a wise leader who understands that the “power” to make things happen does not sit wholly in the lap of those charged with accountability.

<Lesley> I agree. And at the same time everything that the “Top Team” suggest gets shot down in flames by a small and very vocal group of my more skeptical colleagues.

<Bob>Ah ha!  It sounds like the Victim Vortex is causing trouble here.

<Lesley>The Victim Vortex?

<Bob>Yes. Let me give you an example. One of the common initiators of the Victim Vortex is the data flow part of a complex system design. The Sixth Flow. So can I ask you: “How are new information systems developed in your organization?

<Lesley>Wow! You hit the nail on the head first time!  Just this week there has been another firestorm of angry emails triggered by yet another silver-bullet IT system being foisted on us!

<Bob>Interesting use of language Lesley. You sound quite “niggled”.

<Lesley>I am. Not so much by the constant drizzle of “IT magic” – that is irritating enough – but more by the cynical reaction of my peers.

<Bob>OK. This sounds like a good enough example of the Victim Vortex. What do you expect the outcome will be?

<Lesley>Well if past experience is a predictor for future performance – an expensive failure, more frustration and a deeper well of cynicism.

<Bob>Frustrating for whom?

<Lesley>Everyone. The IT department as well. It feels like we are all being sucked into a lose-lose-lose black hole of depression and despair!

<Bob>A very good description of the Victim Vortex.

<Lesley>So the Victim Vortex is an example of the Drama Triangle acting on an organizational level?

<Bob>Yes. Visualize a cultural tornado. The energy that drives it is the emotional currency spent in playing the OK – Not OK Games. It is a self-fueling system, a stable design, very destructive and very resistant to change.

<Lesley>That metaphor works really well for me!

<Bob>A similar one is a whirlpool – a water vortex. If you were out swimming and were caught up in a whirlpool what are your exit strategy options?

<Lesley>An interesting question.  I have never had that experience and would not want it – it sounds rather hazardous. Let me think.  If I do nothing I will just get swept around in the chaos and I am at risk of  getting bashed, bruised and then sucked under.

<Bob>Yes – you would probably spend all your time and energy just treading water and dodging the flotsam and jetsam that has been sucked into the Vortex. That is what most people do. It is called the Hamster Wheel effect.

<Lesley>So another option is to actively swim towards the middle of the Vortex – the end would at least be quick! But that is giving up and adopting the Hopelessness attitude of burned out Victim.  That would be the equivalent of taking voluntary redundancy or early retirement. It is not my style!

<Bob>Yes. It does not solve the problem either. The Vortex is always hoovering up new Victims. It is insatiable.

<Lesley> And another option would be to swim with the flow to avoid being “got” from behind. That would seem sensible and is possible; and at least I would feel better for doing something. I might even escape if I swim fast enough!

<Bob>That is indeed what some try. The movers and shakers. The pace setters. The optimists. The extrovert leaders. The problem is that it makes the Vortex spin even faster.  It actually makes the Vortex bigger,  more chaotic and more dangerous than before.

<Lesley>Yes – I can see that.  So my other option is to swim against the flow in an attempt to slow the Vortex down. Would that work?

<Bob>If everyone did that at the same time it might but that is unlikely to happen spontaneously. If you could achieve that degree of action alignment you would not have a Victim Vortex in the first place. Trying to do it alone is ineffective – you tire very quickly, the other Victims bash into you, you slow them down, and then you all get sucked down the Plughole of Despair.

<Lesley>And I suppose a small group of like-minded champions who try to swim-against the flow might last longer if they stick together but even then eventually they would get bashed up and broken up too. I have seen that happen.  And that is probably where our team are heading at the moment. I am out of options. Is it impossible to escape the Victim Vortex?

<Bob>There is one more direction you can swim.

<Lesley>Um? You mean across the flow heading directly away from the center?

<Bob>Exactly. Consider that option.

<Lesley>Well, it would still be hard work and I would still be going around with the Vortex and I would still need to watch out for flotsam but every stroke I make would take me further from the center. The chaos would get gradually less and eventually I would be in clear water and out of danger.  I could escape the Victim Vortex!

<Bob>Yes. And what would happen if others saw you do that and did the same?

<Lesley>The Victim Vortex would dissipate!

<Bob>Yes. So that is your best strategy. It is a win-win-win strategy too. You can lead others out of the Victim Vortex.

<Lesley>Wow! That is so cool!  So how would I apply that metaphor to the Information System niggle?

<Bob>I will leave you to ponder on that.  Think about it as a design assignment. The design of the system that generates IT solutions that are fit-for-purpose.

<Lesley> Somehow I knew you were going to say that! I have my squared-paper and sharpened pencil at the ready.  Yes – an improvement-by-design assignment. Thank you once again Bob. This ISP course is the business!

One of the best things about improvement is the delight that we feel when someone else acknowledges it.

Particularly someone whose opinion we respect.

We feel a warm glow of pride when they notice the difference and take the time to say “Well done!”

We need this affirmative feedback to fuel our improvement engine.

And we need to learn how to give ourselves affirmative feedback because usually there is a LOT of improvement work to do behind the scenes before any externally visible improvement appears.

It is like an iceberg – most of it is hidden from view.

And improvement is tough. We have to wade through Bureaucracy Treacle that is laced with Cynicide and policed by Skeptics.  We know this.

So we need to learn to celebrate the milestones we achieve and to keep reminding ourselves of what we have already done.  Even if no one else notices or cares.

Like the certificates, cups, and medals that we earned at school – still proudly displayed on our mantlepieces and shelves decades later. They are important. Especially to us.

So it is always a joy to celebrate the achievement of others and to say “Well Done” for reaching a significant milestone on the path of learning Improvement Science.

And that has been my great pleasure this week – to prepare and send the Certificates of Achievement to those who have recently completed the FISH course.

The best part of all has been to hear how many times the word “treasured” is used in the “Thank You” replies.

We display our Certificates with pride – not so much that others can see – more to remind ourselves every day to Celebrate Achievement.

 

Improvement implies change.

Change requires motivation.

And there are two flavours of motivation juice – Fear and Food.

Fear is the emotion that results from anticipated loss in the future.  Loss means some form of damage. Physical, psychological or political harm.  We fear loss of peer-esteem and we fear loss of self-esteem almost more than we fear physical harm.

Our fear of anticipated loss may be based on reality. Our experience of actual loss in the past.  We remember the emotional pain and we learn from past pain to fear future loss.

Our fear of anticipated loss may also be fueled by rhetoric.  The doom-mongering of the Shroud-Wavers, the Nay-Sayers, the Skeptics and the Cynics.

And there are examples where rhetorical fear is deliberately generated to drive the fearful towards “the solution” – which of course they have to pay dearly for. This is Machiavellian mass manipulation for commercial gain.

“Fear of germs, fear of fatness, fear of the invisible enemies outside and inside”. Spreading and Ameliorating Fear is big business. It is a Burn-and-Scrape design.

What we are seeing is the Drama Triangle operating on a massive scale. The Persecutors create the fear, the Victims run away and the Persecutors then switch role to Rescuers and offer to sell the Terrified-and-now-compliant Victims “the  solution” to their fear.  The Victims do not learn.  That is not the purpose – because that would end the Game and derail the Gravy Train.

Fear is not an effective way to motivate for sustained improvement.  We have ample evidence to support that statement!

The Burn-and-Scrape design that we see everywhere is a fear-based-design.  Any improvements are transitory and usually only achieved at the emotional expense of a passionate idealist. When they get too tired to push any more the toast gets burnt again because the toaster is perfectly designed to burn toast.  Not intentionally designed to but perfectly designed to nevertheless.

The use of Delusional Ratios and Arbitrary Targets (DRATs) is a fear-based-design-strategy. It ensures the Game and Gravy Train continue.

BUT fear has a frightening cost. The cost of checking-and-correcting. The cost of the defensive-bureaucracy that may catch errors before too much local harm results but which itself creates unmeasurable global harm in a different way – by hoovering up the priceless human resource of life-time – like an emotional black hole.

The cost of errors. The cost of queues. The list of fear-based-design costs is long.

A fear-based-design for delivering improvement is a poor design.

So we need a better design.

And a better one is based on a positive-attractive-emotional force pulling us forwards into the future. The anticipation of gains for all. A win-win-win design.

Win-win-win design starts with the Common Purpose: the outcomes that everyone wants; and the outcomes that no-one wants.  We need both.  This balance creates alignment of effort on getting the NiceIfs (the wants) while avoiding the NooNoos (the do not wants).

Then we ask the simple question: “What is preventing us having our win-win-win outcome now?

The blockers are the parts of our current design that we need to change: our errors of omission and our errors of commission.  Our gaps and our gaffes.

And to change them we need to be clear what they are; where they are and how they came to be there … and that requires a diagnostic skill that is one of our errors of omission. We have never learned how to diagnose our process design flaws.

Another common blocker is that we believe that a win-win-win outcome is impossible. This is a learned belief. And it is a self-fulfilling prophecy.

We may also believe that all swans are white because we have never seen a black swan – even though we know, in principle, that a black swan could be possible.

Rhetoric and Reality are not the same thing.  Feeling it could be possible and knowing that it actually is possible are different emotions. We need real evidence to challenge our life-limiting rhetoric.

Weary and wary life-worn skeptics crave real evidence not rhetorical exhortation.

So when that evidence is presented – and the Impossibility Hypothesis is disproved – then an emotional shock is inevitable.  We are now on the emotional roller-coaster called the Nerve Curve.  And the deeper our skepticism the bigger the shock.

After the shock we characteristically do one of three things:

1. We discount the evidence and go into denial.  We refuse to challenge our own rhetoric. Blissful ignorance is attractive.

2. We go quiet because we are now stuck in the painful awareness of the transition zone between the past and the future. The feelings associated with the transition are anxiety and depression.

3. We sit up, we take notice, we listen harder, we rub our chins, our minds race as we become more and more excited. The feelings associated with this stage of resolution are curiosity, excitement and hope.

It is actually a sequence not a choice. This is normal.

And those who reach Stage 3 of the Nerve Curve say things like “We have food for thought;  we feel inspired; our passion is re-ignited; we now have a beacon of hope for the future.”

That is the flavour of motivation-juice that is needed to fuel the improvement-by-design engine and to deliver win-win-win designs that are both surprising and self-sustaining.

And what actually changes our belief about what is possible is learning to do it for ourselves. For real.

That is Improvement Science in Action.

[Bing Bong]  The sound bite heralded Leslie joining the regular Improvement Science mentoring session with Bob.  They were now using web-technology to run virtual meetings because it allows a richer conversation and saves a lot of time. It is a big improvement.

<Bob> Hi Leslie, how are you today?

<Leslie> OK thank you Bob.  I have a thorny issue to ask you about today. It has been niggling me ever since we started to share the experience we are gaining from our current improvement-by-design project.

<Bob> OK. That sounds interesting. Can you paint the picture for me?

<Leslie> Better than that – I can show you the picture, I will share my screen with you.

[Image: the 4-weekly RAG performance table]

<Bob> OK. I can see that RAG table. Can you give me a bit more context?

<Leslie> Yes. This is how our performance management team have been asked to produce their 4-weekly reports for the monthly performance committee meetings.

<Bob> OK. I assume the “Period” means sequential four week periods … so what is Count, Fail and Fail%?

<Leslie> Count is the number of discharges in that 4 week period, Fail is the number whose length of stay is longer than the target, and Fail% is the ratio of Fail/Count for each 4 week period.

<Bob> It looks odd that the counts are all 28.  Is there some form of admission slot carve-out policy?

<Leslie> Yes. There is one admission slot per day for this particular stream – that has been worked out from the average historical activity.

<Bob> Ah! And the Red, Amber, Green indicates what?

<Leslie> That depends on where the Fail% falls in a set of predefined target ranges; less than 5% is Green, 5-10% is Amber and more than 10% is Red.
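
[Aside: the RAG rule that Leslie describes can be written down in a few lines. The Python sketch below is only an illustration of the stated thresholds – it is not the trust’s actual reporting code.]

def rag_status(count, fail):
    fail_pct = 100 * fail / count          # Fail% for the 4-week period
    if fail_pct < 5:
        return "Green"
    if fail_pct <= 10:
        return "Amber"
    return "Red"

print(rag_status(28, 1))   # 3.6%  -> Green
print(rag_status(28, 2))   # 7.1%  -> Amber
print(rag_status(28, 4))   # 14.3% -> Red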

<Bob> OK. So what is the niggle?

<Leslie>Each month when we are in the green we get no feedback – a deafening silence. Each month we are in amber we get a warning email.  Each month we are in the red we have to “go and explain ourselves” and provide a “back-on-track” plan.

<Bob> Let me guess – this feedback design is not helping much.

<Leslie> It is worse than that – it creates a perpetual sense of fear. The risk of breaching the target is distorting people’s priorities and their behaviour.

<Bob> Do you have any evidence of that?

<Leslie> Yes – but it is anecdotal.  There is a daily operational meeting and the highest priority topic is “Which patients are closest to the target length of stay and therefore need to have their  discharge expedited?“.

<Bob> Ah yes.  The “target tail wagging the quality dog” problem. So what is your question?

<Leslie> How do we focus on the cause of the problem rather than the symptoms?  We want to be rid of the “fear of the stick”.

<Bob> OK. What you have here is a very common system design flaw. It is called a DRAT.

<Leslie> DRAT?

<Bob> “Delusional Ratio and Arbitrary Target”.

<Leslie> Ha! That sounds spot on!  “DRAT” is what we say every time we miss the target!

<Bob> Indeed.  So first plot this yield data as a time series chart.

<Leslie> Here we go.

[Chart: Fail% as a time series with the 5% and 10% RAG thresholds]

<Bob> Good. I see you have added the cut-off thresholds for the RAG chart. These 5% and 10% thresholds are arbitrary and the data shows your current system is unable to meet them. Your design looks incapable.

<Leslie>Yes – and it also shows that the % expressed to one decimal place is meaningless because there are limited possibilities for the value.

<Bob> Yes. These are two reasons that this is a Delusional Ratio; there are quite a few more.

[Chart: Fail% as an Individuals (XmR) chart]

<Leslie> OK, and if I plot this as an Individuals chart I can see that this variation is not exceptional.

<Bob> Careful Leslie. It can be dangerous to do this: an Individuals chart of aggregate yield becomes quite insensitive with aggregated counts of relatively rare events, a small number of levels that go down to zero, and a limited number of points.  The SPC zealots are compounding the problem and plotting this data as a C-chart or a P-chart makes no difference.

This is all the effect of the common practice of applying  an arbitrary performance target then counting the failures and using that as means of control.

It is poor feedback loop design – but a depressingly common one.

<Leslie> So what do we do? What is a better design?

<Bob> First ask what the purpose of the feedback is?

<Leslie> To reduce the number of beds and save money by forcing down the length of stay so that the bed-day load is reduced and so we can do the same activity with fewer beds and at the same time avoid cancellations.

<Bob> OK. That sounds reasonable from the perspective of a tax-payer and a patient. It would also be a more productive design.

<Leslie> I agree but it seems to be having the opposite effect.  We are focusing on avoiding breaches so much that other patients get delayed who could have gone home sooner and we end up with more patients to expedite. It is like a vicious circle.  And every time we fail we get whacked with the RAG stick again. It is very demoralizing and it generates a lot of resentment and conflict. That is not good for anyone – least of all the patients.

<Bob>Yes.  That is the usual effect of a DRAT design. Remember that senior managers have not been trained in process improvement-by-design either, so blaming them is also counter-productive.  We need to go back to the raw data. Can you plot the actual LOS by patient in order of discharge as a run chart?

[Chart: length of stay by patient in discharge order]

<Bob> OK – is the maximum LOS target 8 days?

<Leslie> Yes – and this shows  we are meeting it most of the time.  But it is only with a huge amount of effort.

<Bob> Do you know where 8 days came from?

<Leslie> I think it was the historical average divided by 85% – someone read in a book somewhere that 85%  average occupancy was optimum and put 2 and 2 together.

<Bob> Oh dear! The “85% Occupancy is Best” myth combined with the “Flaw of Averages” trap. Never mind – let me explain the reasons why it is invalid to do this.

<Leslie> Yes please!

<Bob> First plot the data as a run chart and  as a histogram – do not plot the natural process limits yet as you have done. We need to do some validity checks first.

[Charts: LOS run chart and histogram]

<Leslie> Here you go.

<Bob> What do you see?

<Leslie> The histogram  has more than one peak – and there is a big one sitting just under the target.

<Bob>Yes. This is called the “Horned Gaussian” and is the characteristic pattern of an arbitrary lead-time target that is distorting the behaviour of the system.  Just as you have described subjectively. There is a smaller peak with a mode of 4 days and there are a few very long length of stay outliers.  This multi-modal pattern means that the mean and standard deviation of this data are meaningless numbers, as are any numbers derived from them. It is like having a bag of mixed fruit and then setting a maximum allowable size for an unspecified piece of fruit. Meaningless.
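
[Aside: the run-chart-plus-histogram check that Bob describes can be sketched as below – Python with matplotlib as an illustration, with invented LOS values that show the same kind of “horn” just under the 8-day target.]

import matplotlib.pyplot as plt

los = [4, 3, 7, 8, 4, 8, 7, 5, 8, 12, 4, 7, 8, 3, 8, 15, 7, 8, 4, 8]   # invented LOS values (days), in discharge order
fig, (ax_run, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))
ax_run.plot(range(1, len(los) + 1), los, marker="o")
ax_run.axhline(8, linestyle="--")                 # the 8-day target
ax_run.set_xlabel("Discharge number")
ax_run.set_ylabel("LOS (days)")
ax_hist.hist(los, bins=range(1, 18))              # more than one peak = multi-modal
ax_hist.set_xlabel("LOS (days)")
ax_hist.set_ylabel("Count")
plt.show()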

<Leslie> And the cases causing the breaches are completely different and could never realistically achieve that target! So we are effectively being randomly beaten with a stick. That is certainly how it feels.

<Bob> They are certainly different but you cannot yet assume that their longer LOS is inevitable. This chart just says – “go and have a look at these specific cases for a possible cause for the difference“.

<Leslie> OK … so if they are from a different system and I exclude them from the analysis what happens?

<Bob> It will not change reality.  The current design of  this process may not be capable of delivering an 8 day upper limit for the LOS.  Imposing  a DRAT does not help – it actually makes the design worse! As you can see. Only removing the DRAT will remove the distortion and reveal the underlying process behaviour.

<Leslie> So what do we do? There is no way that will happen in the current chaos!

<Bob> Apply the 6M Design® method. Map, Measure and Model it. Understand how it is behaving as it is, then design out all the causes of longer LOS and that way deliver a shorter and less variable LOS. Your chart shows that your process is stable.  That means you have enough flow capacity – so look at the policies. Draw on all your FISH training. That way you achieve your common purpose, and the big nasty stick goes away, and everyone feels better. And in the process you will demonstrate that there is a better feedback design than DRATs and RAGs. A win-win-win design.

<Leslie> OK. That makes complete sense. Thanks Bob!  But what you have described is not part of the FISH course.

<Bob> You are right. It is part of the ISP training that comes after FISH. Improvement Science Practitioner.

<Leslie> I think we will need to get a few more people trained in the theory, techniques and tools of Improvement Science.

<Bob> That would appear to be the case. They will need a real example to see what is possible.

<Leslie> OK. I am on the case!

It is surprising how competitive most people are. We are constantly comparing ourselves with others and using what we find to decide what to do next. Groan or Gloat.  Chase or Cruise.

This is because we are social animals.  Comparing with others is hard-wired into us. We have little choice.

But our natural competitive behaviour can become counter-productive when we learn that we can look better-by-comparison if we block or trip-up our competitors.  In a vainglorious attempt to make ourselves look better-by-comparison we spike the wheels of our competitors’ chariots.  We fight dirty.

It is not usually openly aggressive fighting.  Most of our spiking is done passively. Often by deliberately not doing something.  A deliberate act of omission.  And if we are challenged we often justify our act of omission by claiming we were too busy.

This habitual passive-aggressive learned behaviour is not only toxic to improvement, it creates a toxic culture too. It is toxic to everything.

And it ensures that we stay stuck in The Miserable Job Swamp.  It is a bad design.

So we need a better one.

One idea is to eliminate competition.  This sounds plausible but it does not work. We are hard-wired to compete because it has proven to be a very effective long term survival strategy. The non-competitive have not survived.  To be deliberately non-competitive will guarantee mediocrity and future failure.

A better design is to leverage our competitive nature and this is surprisingly easy to do.

We flip the “battle” into a “race”.

To do that we need:

1) A clear destination – a shared common purpose – that can be measured. We need to be able to plot our progress using objective evidence.

2) A proven, safe, effective and efficient route plan to get us to our destination.

3) A required arrival time that is realistic.  Open-ended time-scales do not work.

4) Regular feedback to measure our individual progress and to compare ourselves with others.  Selective feedback is ineffective.  Secrecy or anonymous feedback is counter-productive at best and toxic at worst.

5) The ability to re-invest our savings on all three win-win-win dimensions: emotional, temporal and financial.  This fuels the engine of improvement. Us.

The rest just happens – but not by magic – it happens because this is a better Improvement-by-Design.