The NHS appears to be suffering from some form of obsessive-compulsive disorder.

OCD sufferers feel extreme anxiety in certain situations. Their feelings drive behaviour intended to reduce the perceived cause of that anxiety. It is a self-sustaining system: their perception is distorted and their actions are largely ineffective, so their anxiety is chronic.

Perfectionists demonstrate a degree of obsessive-compulsive behaviour too.

In the NHS the triggers are called ‘targets’ and usually take the form of failure metrics linked to arbitrary performance specifications.

The anxiety is the fear of failure and its unpleasant consequences: the name-shame-blame-game.

So a veritable industry has grown around ways to mitigate the fear. A very expensive and only partially effective industry.

Data is collected, cleaned, manipulated and uploaded to the Mothership (aka NHS England). There it is further manipulated, massaged and aggregated. Then the accumulated numbers are posted online every month, for anyone with a web browser to scrutinise and anyone with an Excel spreadsheet to analyse.

An ocean of measurements is boiled and distilled into a few drops of highly concentrated and sanitized data and, in the process, most of the useful information is filtered out, deleted or distorted.

For example …

One of the failure metrics that sends a shiver of angst through a Chief Operating Officer (COO) is the failure to deliver the first definitive treatment for any patient within 18 weeks of referral from a generalist to a specialist.

The infamous and feared 18-week target.

Service providers, such as hospitals, are actually fined by their Clinical Commissioning Groups (CCGs) for failing to deliver-on-time. Yes, you heard that right … one NHS organisation financially penalises another NHS organisation for failing to deliver a result over which they have only partial control.

Service providers do not control how many patients are referred, nor the myriad other factors that delay referred patients from attending appointments, tests and treatments. Yet the service providers are still held accountable for the outcome of the whole process.

This ‘Perform-or-Pay-The-Price Policy’ creates the perfect recipe for a lot of unhappiness for everyone … which is exactly what we hear and what we see.

So what distilled wisdom does the Mothership share? Here is a snapshot …


Q1: How useful is this table of numbers in helping us to diagnose the root causes of long waits, and how does it help us to decide what to change in our design to deliver a shorter waiting time and more productive system?

A1: It is almost completely useless (in this format).

So what actually happens is that the focus of management attention is drawn to the part just before the speed camera takes the snapshot … the bit between 14 and 18 weeks.

Inside that narrow time-window we see a veritable frenzy of target-failure-avoiding behaviour.

Clinical priority is side-lined and management priority takes over.  This is a management emergency! After all, fines-for-failure are only going to make the already bad financial situation even worse!

The outcome of this fire-fighting is that the bigger picture is ignored. The focus is on the ‘whip’ … and avoiding it … because it hurts!

Message from the Mothership:    “Until morale improves the beatings will continue”.

The good news is that the indigestible data liquor does harbour some very useful insights.  All we need to do is to present it in a more palatable format … as pictures of system behaviour over time.

We need to use the data to calculate the work-in-progress (WIP).

And then we need to plot the WIP in time-order so we can see how the whole system is behaving over time … how it is changing and evolving. It is a dynamic, living thing; it has vitality.
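For anyone who wants to try this at home, here is a minimal sketch of the WIP calculation in Python. The monthly counts below are made up for illustration; the real ones come from the published referral-to-treatment data.

# A minimal sketch of the WIP calculation (all counts are hypothetical).
# 'referrals' = new 18-week clocks started each month;
# 'completions' = clocks stopped each month (treated or removed).
referrals   = [100, 110, 105, 120, 115, 108]
completions = [ 95, 100, 110, 105, 112, 104]

wip = []
current = 500  # assumed work-in-progress at the start of the series
for arrived, departed in zip(referrals, completions):
    current += arrived - departed  # WIP changes by inflow minus outflow
    wip.append(current)

print(wip)  # plot these values in time-order to see the system behaviour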

So here is the WIP chart using the distilled wisdom from the Mothership.


And this picture does not require a highly trained data analyst or statistician to interpret it for us … a Mark I eyeball linked to 1.3 kg of wetware running ChimpOS 1.0 is enough … and if you are reading this then you must already have that hardware and software.

Two patterns are obvious:

1) A cyclical pattern that appears to have an annual frequency, a seasonal pattern. The WIP is higher in the summer than in the winter. Eh? What is causing that?

2) After an initial rapid fall in 2008 the average level was steady for 4 years … and then after March 2012 it started to rise. Eh? What is causing that?

The purpose of a WIP chart is to stimulate questions such as:

Q1: What happened in March 2012 that might have triggered this change in system behaviour?

Q2: What other effects could this trigger have caused and is there evidence for them?

A1: In March 2012 the Health and Social Care Act 2012 became Law. In the summer of 2012 the shiny new and untested Clinical Commissioning Groups (CCGs) were authorised to take over the reins from the exiting Primary Care Trusts (PCTs) and Strategic Health Authorities (SHAs). The vast £80bn annual pot of tax-payer cash was now in the hands of well-intended GPs who believed that they could do a better commissioning job than non-clinicians. The accountability for outcomes had been deftly delegated to the doctors.  And many of the new CCG managers were the same ones who had collected their redundancy cheques when the old system was shut down. Now that sounds like a plausible system-wide change! A massive political experiment was underway and the NHS was the guinea-pig.

A2: Another NHS failure metric is the A&E 4-hour wait target which, worryingly, also shows a deterioration that appears to have started just after July 2010, i.e. just after the new Government was elected into power.  Maybe that had something to do with it? Maybe it would have happened whichever party won at the polls.


A plausible temporal association does not constitute proof – and we cannot conclude that the political move to a CCG-led NHS has caused the observed behaviour. Retrospective analysis alone is not able to establish the cause.

It could just as easily be that something else caused these behaviours. And it is important to remember that there are usually many causal factors combining together to create the observed effect.

And unravelling that Gordian Knot is the work of analysts, statisticians, economists, historians, academics, politicians and anyone else with an opinion.

We have a more pressing problem. We have a deteriorating NHS that needs urgent resuscitation!

So what can we do?

One thing we can do immediately is to make better use of our data by presenting it in ways that are easier to interpret … such as a work in progress chart.

Doing that will trigger different conversations; ones spiced with more curiosity and laced with less cynicism.

We can add more context to our data to give it life and meaning. We can season it with patient and staff stories to give it emotional impact.

And we can deepen our understanding of which causes lead to which effects.

And with that deeper understanding we can begin to make wiser decisions that will lead to more effective actions and better outcomes.

This is all possible. It is called Improvement Science.

And as we speak there is an experiment running … a free offer to doctors-in-training to learn the foundations of improvement science in healthcare (FISH).

In just two weeks 186 have taken up that offer and 13 have completed the course!

And this vanguard of curious and courageous innovators have discovered a whole new world of opportunity that they were completely unaware of before. But not anymore!

So let us ease off applying the whip and ease in the application of WIP.


Here is a short video describing how to create, animate and interpret a form of diagnostic Vitals Chart® using the raw data published by NHS England.  This is a training exercise from the Improvement Science Practitioner (level 2) course.

How to create an 18 weeks animated Bucket Brigade Chart (BBC)


A question that is often asked by doctors in particular is “What is the difference between Research, Audit and Improvement Science?”

It is a very good question and the diagram captures the essence of the answer.

Improvement science is like a bridge between research and audit.

To understand why that is we first need to ask a different question: “What are the purposes of research, improvement science and audit? What do they do?”

In a nutshell:

Research provides us with new knowledge and tells us what the right stuff is.
Improvement Science provides us with a way to design our system to do the right stuff.
Audit provides us with feedback and tells us if we are doing the right stuff right.

Research requires a suggestion and an experiment to test it.   A suggestion might be “Drug X is better than drug Y at treating disease Z”, and the experiment might be a randomised controlled trial (RCT).  The way this is done is that subjects with disease Z are randomly allocated to two groups, the control group and the study group.  A measure of ‘better’ is devised and used in both groups. Then the study group is given drug X and the control group is given drug Y and the outcomes are compared.  The randomisation is needed because there are always many sources of variation that we cannot control, and it also almost guarantees that there will be some difference between our two groups. So then we have to use sophisticated statistical data analysis to answer the question “Is there a statistically significant difference between the two groups? Is drug X actually better than drug Y?”
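To make that final comparison step concrete, here is a minimal sketch in Python using simulated outcome data. All the numbers are invented, and a real RCT analysis is considerably more involved.

# A minimal sketch of the final analysis step of an RCT (simulated data).
import random
from scipy.stats import ttest_ind  # standard two-sample t-test

random.seed(42)
# A hypothetical measure of 'better', recorded in both groups:
study_group   = [random.gauss(52, 10) for _ in range(200)]  # given drug X
control_group = [random.gauss(50, 10) for _ in range(200)]  # given drug Y

t_stat, p_value = ttest_ind(study_group, control_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value suggests the observed difference is unlikely to be chance alone.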

And research is often a complicated and expensive process because to do it well requires careful study design, a lot of discipline, and usually large study and control groups. It is an effective way to help us to know what the right stuff is but only in a generic sense.

Audit requires a standard to compare against so that we know whether what we are doing is acceptable or not. There is no randomisation between groups, but we still need a metric and we still need to measure what is happening in our local reality.  We then compare our local experience with the global standard and, because variation is inevitable, we have to use statistical tools to help us perform that comparison.

And very often audit focuses on avoiding failure; in other words the standard is a ‘minimum acceptable standard’ and as long as we are not failing it then that is regarded as OK. If we are shown to be failing then we are in trouble!

And very often the most sophisticated statistical tool used for audit is called an average.  We measure our performance, we average it over a period of time (to remove the troublesome variation), and we compare our measured average with the minimum standard. And if it is below then we are in trouble and if it is above then we are not.  We have no idea how reliable that conclusion is though because we discounted any variation.
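A tiny worked example shows the trap. The two hypothetical services below have exactly the same average performance but very different variation; the 90% minimum standard is also an assumed figure.

# Two made-up weekly performance series with the same average.
steady   = [93, 94, 95, 94, 93, 95, 94, 94]  # % within standard each week
variable = [99, 85, 99, 99, 80, 99, 99, 92]

MINIMUM_STANDARD = 90  # an assumed minimum acceptable standard

for name, series in [("steady", steady), ("variable", variable)]:
    average = sum(series) / len(series)
    failing_weeks = sum(1 for week in series if week < MINIMUM_STANDARD)
    print(f"{name}: average = {average:.1f}%, weeks below standard = {failing_weeks}")

Both averages come out at 94%, yet one service breaches the standard in a quarter of its weeks. The average alone cannot tell them apart.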

A perfect example of this target-driven audit approach is the A&E 95% 4-hour performance target.

The 4 hours defines the metric we are using: the time interval between a patient arriving in A&E and leaving. It is called a lead-time metric. And it is easy to measure.

The 95% defines the minimum acceptable proportion of people who are in A&E for less than 4 hours, and it is usually aggregated over three months. And it is easy to measure.

So, if about 200 people arrive in a hospital A&E each day and we aggregate over 90 days, that is about 18,000 people in total … so the 95% 4-hour A&E target implies that we accept it as OK for about 900 of them to be there for more than 4 hours.

Do the 900 agree? Do the other 17,100?  Has anyone actually asked the patients what they would like?

The problem with this “avoiding failure” mindset is that it can never lead to excellence. It can only deliver just above the minimum acceptable. That is called mediocrity.  It is perfectly possible for a hospital to deliver 100% on its A&E 4 hour target by designing its process to ensure every one of the 18,000 patients is there for exactly 3 hours and 59 minutes. It is called a time-trap design.

We can hit the target and miss the point.

And what is more the “4-hours” and the “95%” are completely arbitrary numbers … there is not a shred of research evidence to support them.

So just this one example illustrates the many problems created by having a gap between research and audit.

And that is why we need Improvement Science to help us to link them together.

We need improvement science to translate the global knowledge and apply it to deliver local improvement in whatever metrics we feel are most important. Safety metrics, flow metrics, quality metrics and productivity metrics. Simultaneously. To achieve system-wide excellence. For everyone, everywhere.

When we learn Improvement Science we learn to measure how well we are doing … we learn the power of measuring success … and we learn to avoid averaging because we want to see the variation. And we still need a minimum acceptable standard because we want to exceed it 100% of the time. And we want continuous feedback on just how far above the minimum acceptable standard we are. We want to see how excellent we are, and we want to share that evidence and our confidence with our patients.

We want to agree a realistic expectation rather than paint a picture of the worst case scenario.

And when we learn Improvement Science we will see very clearly where to focus our improvement efforts.

Improvement Science is the bit in the middle.

Stop Press:  There is currently an offer of free on-line foundation training in improvement science for up to 1000 doctors-in-training … here  … and do not dally because places are being snapped up fast!

The emotional journey of change feels like a roller-coaster ride and if we draw it as an emotion-versus-time chart it looks like the diagram above.

The toughest part is getting past the low point called the Well of Despair and doing that requires a combination of inner strength and external support.

The external support comes from an experienced practitioner who has been through it … and survived … and has the benefit of experience and hindsight.

The Improvement Science coach.

What happens as we  apply the IS principles, techniques and tools that we have diligently practiced and rehearsed? We discover that … they work!  And all the fence-sitters and the skeptics see it too.

We start to turn the corner and what we feel next is that the back pressure of resistance falls a bit. It does not go away, it just gets less.

And that means that the next test of change is a bit easier and we start to add more evidence that the science of improvement does indeed work and moreover it is a skill we can learn, demonstrate and teach.

We have now turned the corner of disbelief and have started the long, slow, tough climb through mediocrity to excellence.

This is also a time of risks and there are several to be aware of:

  1. The objective evidence that dramatic improvements in safety, flow, quality and productivity are indeed possible and that the skills can be learned will trigger those most threatened by the change to fight harder to defend their disproved rhetoric. And do not underestimate how angry and nasty they can get!
  2. We can too easily become complacent and believe that the rest will follow easily. It doesn’t.  We may have nailed some of the easier niggles to be sure … but there are much more challenging ones ahead.  The climb to excellence is a steep learning curve … all the way. But the rewards get bigger and bigger as we progress so it is worth it.
  3. We risk over-estimating our capability and then attempting to take on the tougher improvement assignments without the necessary training, practice, rehearsal and support. If we do that we will crash and burn.  It is like a game of snakes and ladders.  Our IS coach is there to help us up the ladders and to point out where the slippery snakes are lurking.

So before embarking on this journey be sure to find a competent IS coach.

They are easy to identify because they will have a portfolio of case studies that they have done themselves: evidence of successful outcomes and proof that they can walk-the-talk.

And avoid anyone who talks-the-walk but does not have a portfolio of evidence of their own competence. Their Siren song will lure you towards the submerged Rocks of Disappointment and they will disappear like morning mist when you need them most – when it comes to the toughest part – turning the corner. You will be abandoned and fall into the Well of Despair.

So ask your IS coach for credentials, case studies and testimonials and check them out.

As systems become bigger and more complicated they may fragment into a larger number of smaller parts.

There are many reasons for this behaviour but the essence is that the integrity of a system requires the parts to be connected to each other in some way.  Bonds that hold them together – bonds that are stronger than the forces of disruption that are always battering them.

In some systems these bonds are physical and chemical.

A diamond does not fragment, even under extreme pressure, because the chemical bonds between the carbon atoms in the crystal lattice are very strong. A diamond is not alive – the atoms cannot move around – and that is the secret of its extreme strength. So a diamond cannot adapt either … it is durable but it is dead.

In biological systems the bonds are informational.

A cell maintains its integrity because the nanoscale component parts are held together physically, chemically and with information.

Inside a cell the atoms and molecules move around – and that is the secret of its survival. It is alive. It senses. It responds. It evolves. It endures. And it is mortal.

So are the organisms made from cells. A lichen, a tree, an animal and a person.

And so are the organisations built by and from people. A couple, a family, a tribe, a nation, the world.

And it is informational bonds that hold people together – it is how they share data with each other.

These bonds manifest in many ways. Our senses – especially sight, sound and touch. Our language – body, verbal and visual. Our learning – individual and collective. And our emotions, beliefs and behaviours that emerge and evolve over time.

We all know we are mortal. We strive to protect our identity; and we yearn for longevity. We do not want to die. We want and need integrity – at all levels from chemical to cultural.

And to achieve that degree of synergy we need to share that which we have in common:

1) Shared purpose.
2) Shared language.
3) Shared pledge of acceptable behaviours.
4) Shared pool of data, information, knowledge, understanding and wisdom.

Everything else is dynamic. What we believe, what we decide, how we learn, what we do. It is that variability and adaptability that is part of our collective strength along with our shared commitment.

And the balance is critical.

Too rigid and we cannot flex quickly enough to a changing environment; too fluid and we fall apart at the first challenge. We need both stability and agility – so our system of information flows must be fit-for-purpose.

And the price we will all pay for not achieving that critical balance is death-by-fragmentation.

Dr Bob runs a Clinic for Sick Systems and is sharing the Case of St Elsewhere’s® Hospital which is suffering from chronic pain in their A&E department.

The story so far: The history and examination of St.Elsewhere’s® Emergency Flow System have revealed that the underlying disease includes carveoutosis multiforme.  StE has consented to a knowledge transplant but is suffering symptoms of disbelief – the emotional rejection of the new reality. Dr Bob prescribed some loosening-up exercises using the Carveoutosis Game.  This is the follow-up appointment.

<Dr Bob> Hello again. I hope you have done the exercises as we agreed.

<StE> Indeed we have.  Many times in fact because at first we could not believe what we were seeing. We even modified the game to explore the ramifications.  And we have an apology to make. We discounted what you said last week but you were absolutely correct.

<Dr Bob> I am delighted to hear that you have explored further and I applaud you for the curiosity and courage in doing that.  There is no need to apologize; if this flow science was intuitively obvious then we would not be having this conversation. So, how have you used this new understanding?

<StE> Before we tell the story of what happened next we are curious to know where you learned about this?

<Dr Bob> The pathogenesis of carveoutosis spatialis has been known for about 100 years but in a different context.  The story goes back to the 1870s when Alexander Graham Bell invented the telephone.  He was not an engineer or mathematician by background; he was interested in phonetics, and he was a pragmatist who experimented by making things. From his invention the Bell Telephone Co. was born.  As you can imagine this innovation spread like wildfire and by the early 1900s there were telephone companies all over the world. At that time the connections were made manually by telephone operators using patch boards, and the growing demand created a new problem.  How many lines and operators were needed to provide a high quality service to bill-paying customers … in other words an acceptably low probability of getting the reply “I’m sorry but all lines are busy, please try again later”?  Adding new lines and more operators was a slow and expensive business so they needed a way to predict how many would be needed – and how to do that was not at all obvious!  In 1917 a Danish mathematician, statistician and engineer called Agner Krarup Erlang published a paper with the solution.  A mathematical formula that described the relationship, one that allowed telephone exchanges to be designed, built and staffed to provide a high quality service at an acceptably low cost.  Mass real-time voice communication by telephone became affordable and has transformed the world.

<StE> Fascinating! We sort of sense there is a link here and certainly the “high quality and low cost” message resonates for us, but how does designing telephone exchanges relate to hospital beds?

<Dr Bob> If we equate an emergency admission needing a bed to a customer making a phone call, and we equate the number of telephone lines to the number of beds, then the two systems are the same from the flow physics perspective. So Erlang’s scary-looking equation can be used to estimate the number of beds needed to achieve any specified level of admission service quality, if you know the average rate of demand and the average length of stay.  That is how I did the estimate last week. It is this predictable-within-limits behaviour that you demonstrated to yourself with the Carveoutosis Game.
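For the mathematically curious, here is a minimal sketch of Erlang’s loss formula (usually called Erlang B) in its original telephone setting, written in Python. The recursion is the standard numerically stable form; the traffic figures below are invented for illustration.

def erlang_b(lines: int, load: float) -> float:
    """Probability that a caller finds all lines busy (Erlang B recursion)."""
    b = 1.0
    for n in range(1, lines + 1):
        b = load * b / (n + load * b)
    return b

def lines_needed(load: float, max_blocking: float) -> int:
    """Smallest number of lines keeping 'all lines busy' below the target."""
    n = 1
    while erlang_b(n, load) > max_blocking:
        n += 1
    return n

# Offered load = call rate x average call length,
# e.g. 100 calls/hour lasting 3 minutes each = 100 * (3 / 60) = 5 erlangs.
print(lines_needed(load=5.0, max_blocking=0.01))  # lines for <1% busy signals

Swap ‘lines’ for ‘beds’ and ‘calls’ for ‘emergency admissions’ and the same arithmetic estimates bed requirements.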

<StE> And this has been known for nearly 100 years but we have only just learned about it!

<Dr Bob> Yes. That is a bit annoying isn’t it?

<StE> And that explains why when we ‘ring-fence’ our fixed stock of beds the 4-hour performance falls!

<Dr Bob> Yes, that is a valid assertion. By doing that you are reducing your space-capacity resilience and the resulting danger, chaos, disappointment and escalating cost is completely predictable.

<StE> So our pain is iatrogenic as you said! We have unwittingly caused this. That is very uncomfortable news to hear.

<Dr Bob> The root cause is actually not what you have done, it is what you have not done. It is an error of omission. You have not learned to listen to what your system is telling you, and have not learned how that can help you deepen your understanding of how your system works. It is that wisdom that is needed to design a safer, calmer, higher quality and more affordable healthcare system.

<StE> And now we can see the omission … before it was like a blind spot … and now we can see the fallacy of our previously deeply held belief: that it was impossible to solve this without more beds, more staff and more money. The gap is now obvious where before it was invisible. It is like a light has been turned on. Now we know what to do! We are on the road to recovery. We need to learn how to do this ourselves … but not by meddling … we need to learn to diagnose and then to design and then to deliver safety, flow, quality and productivity. All at the same time.

<Dr Bob> Welcome to the exciting world of Improvement Science. And I must sound a note of caution … there is a lot more to it than just blindly applying Erlang’s equation. That will get us into the right ball-park which is a big leap forward, but real systems are not just simple, passive games of chance; they are complicated, active and adaptive.  Applying the principles of flow design in that context requires more than just mathematics, statistics and computer models. But that know-how is available and accessible too … when you are ready to take that leap of learning.  I do not think you require any further follow up. I wish you well and please let me know the outcome.

<StE> Thank you and rest assured we will. We have started writing our story … and we wanted to share that with you today … but with this new insight we will need to write a few more chapters before we can share it.  This is really exciting … thank you so much.

St.Elsewhere’s® is a registered trademark of Kate Silvester Ltd,  and to read more real cases of 4-hour A&E pain download Kate’s: The Christmas Crisis

If you would like to hear more stories from Dr Bob’s Casebook please vote:  Yes

And if you would like to subscribe to Dr Bob’s occasional newsletter please click: Subscribe

Dr_Bob_ThumbnailDr Bob runs a Clinic for Sick Systems and is sharing the Case of St Elsewhere’s® Hospital which is suffering from chronic pain in their A&E department.

The story so far: The history and examination of St.Elsewhere’s® Emergency Flow System have revealed the footprint of a Horned Gaussian in their raw A&E data. This characteristic sign suggests that the underlying disease includes carveoutosis.  StE has signed up for treatment and has started by installing learning loops. This is the one-week follow-up appointment.

<Dr Bob> Hi there. How are things? What has changed this week?

<StE> Lots! We shared the eureka moment we had when you described the symptoms, signs and pathogenesis of carveoutosis temporalis using the Friday Afternoon Snail Mail story.  That resonated strongly with lots of people. And as a result that symptom has almost gone – as if by magic!  We are now keeping on top of our emails by doing a few each day and we are seeing decisions and actions happening much more quickly.

<Dr Bob> Excellent. Many find it surprising to see such a large beneficial impact from such an apparently small change. And how are you feeling overall? How is the other pain?

<StE> Still there unfortunately. Our A&E performance has not really improved but we do feel a new sense of purpose, determination and almost optimism.  It is hard to put a finger on it.

<Dr Bob> Does it feel like a paradoxical combination of “feels subjectively better but looks objectively the same”?

<StE> Yes, that’s exactly it. And it is really confusing. Are we just fire-fighting more quickly but still not putting out the fire?

<Dr Bob> Possibly. It depends on your decisions and actions … you may be unwittingly both fighting and fanning the fire at the same time.  It may be that you are suffering from carveoutosis multiforme.

<StE> Is that bad?

<Dr Bob> No. Just trickier to diagnose and treat. It implies that there is more than one type of carveoutosis active at the same time and they tend to amplify each other. The other common type is called carveoutosis spatialis. Shall we explore that hypothesis?

<StE> Um, OK. Does it require more painful poking?

<Dr Bob> A bit. Do you want to proceed? I cannot do so without your consent.

<StE> I suppose so.

<Dr Bob> OK. Can you describe for me what happens to emergency patients after they are admitted. Where do they go to?

<StE> That’s easy.  The medical emergencies go to the medical wards and the others go to the surgical wards. Or rather they should. Very often there is spillover from one to the other because the specialty wards are full. That generates a lot of grumbling from everyone … doctors, nurses and patients. We call them outliers.

<Dr Bob> And when a patient gets to a ward where do they go? Into any available empty bed?

<StE> No.  We have to keep males and females separate, to maintain privacy and dignity.  We get really badly beaten up if we mix them.  Our wards are split up into six-bedded bays and a few single side-rooms, and we are constantly juggling bays and swapping them from male to female and back. Often moving patients around in the process, and often late at night. The patients do not like it and it creates lots of extra work for the nurses.

<Dr Bob> And when did these specialty and gender segregation policies come into force?

<StE> The specialty split goes back decades, the gender split was introduced after StE was built. We were told that it wouldn’t make any difference because we are still admitting the same proportion of males and females so it would average out, but it causes us a lot of headaches!  Maybe we are now having to admit more patients than the hospital was designed to hold!

<Dr Bob> That is possible, but even if you were admitting the same number for the same length of time the symptoms of carveoutosis spatialis are quite predictable. When there is any form of variation in demand, casemix, or gender then if you split your space-capacity into ‘ring-fenced’ areas you will always need more total space-capacity to achieve the same waiting time performance. Always. It is mandated by the Laws of Physics. It is not negotiable. And it does not average out.

<StE> What! So we were mis-informed?  The chaos we are seeing was predictable?

<Dr Bob> The effect of carveoutosis spatialis is predictable. But knowing that does not prove it is the sole cause of the chaos you are experiencing. It may well be a contributory factor though.

<StE> So how big an effect are we talking about here? A few percent?

<Dr Bob> I can estimate it for you.  What are your average number of emergency admissions per day, the split between medical and surgical, the split between gender, and the average length of stay in each group?

<StE> We have an average of sixty emergency admissions per day; the split between medicine and surgery is 50:50 on average; the gender split is 50:50 on average; and the average LoS in each of those four groups is 8 days.  We worked out using these numbers that we should need 480 beds but even now we have about 540 and even that doesn’t seem to be enough!

<Dr Bob> OK, let me work this out … with those parameters and assuming that the LoS does not change then the Laws of Flow Physics predict that you would need about 25% more beds than 480 – nearer six hundred – to be confident that there will always be a free bed for the next emergency admission in all four categories of patient.

<StE> What! Our Director of Finance has just fallen off his chair! That can’t be correct!


But that is exactly what we are seeing.


If we were able to treat this carveoutosis spatialis … if, just for the sake of argument, we could put any patient into any available bed … what effect would that have?  Would we then only need 480 beds?

<Dr Bob> You would if there was absolutely zero variation of any sort … but that is impossible. If nothing else changed the Laws of Physics predict that you would need about 520 beds.

<StE> What! But we have 540 beds now. Are you saying our whole A&E headache would evaporate just by doing that … and we would still have beds to spare?

<Dr Bob> That would be my prognosis, assuming there are no other factors at play that we have not explored yet.

<StE> Now the Head of Governance has just exploded! This is getting messy! We cannot just abandon the privacy and dignity policy.  But there isn’t much privacy or dignity lying on a trolley in the A&E corridor for hours!  We’re really sorry Dr Bob but we cannot believe you. We need proof.

<Dr Bob> And so would I were I in your position. Would you like to prove it to yourselves?  I have a game you can play that will demonstrate this unavoidable consequence of the Laws of Physics. Would you like to play it?

<StE> We would indeed!

<Dr Bob> OK. Here are the instructions for the game. This is your ‘medicine’ for this week.
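For readers who cannot wait for the game, here is a minimal sketch of the ring-fenced versus pooled comparison using the Erlang loss formula in Python. The 0.1% ‘no free bed’ target is an assumption for illustration; Dr Bob does not state the figure he used.

def erlang_b(beds: int, load: float) -> float:
    """Probability that an arriving patient finds every bed occupied."""
    b = 1.0
    for n in range(1, beds + 1):
        b = load * b / (n + load * b)
    return b

def beds_needed(load: float, max_blocking: float = 0.001) -> int:
    """Smallest bed count keeping 'no free bed' below the target."""
    n = 1
    while erlang_b(n, load) > max_blocking:
        n += 1
    return n

group_load  = 15 * 8  # 15 admissions/day x 8 days LoS in each of 4 groups
pooled_load = 60 * 8  # all four groups sharing one stock of beds

print("four ring-fenced groups:", 4 * beds_needed(group_load))
print("one pooled bed stock   :", beds_needed(pooled_load))

Whatever target probability is chosen, the carved-out design always needs more total beds than the pooled one to give the same admission service quality, which is exactly the point Dr Bob is making.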

St.Elsewhere’s® is a registered trademark of Kate Silvester Ltd,  and to read more real cases of 4-hour A&E pain download Kate’s: The Christmas Crisis

If you would like to hear more stories from Dr Bob’s Casebook please vote:  Yes

And if you would like to subscribe to Dr Bob’s occasional newsletter please click: Subscribe

Dr Bob runs a Clinic for Sick Systems and is sharing the Case of St Elsewhere’s ® Hospital which is suffering from chronic pain in the A&E department.

Dr Bob is presenting the case study in weekly bite-sized bits that are ample food for thought.

Part 1 is here. Part 2 is here. Part 3 is here.

The story so far:

The history and initial examination of St.Elsewhere’s® Emergency Flow System have revealed the footprint of a Horned Gaussian in their raw A&E data.  That characteristic sign suggests that the underlying disease complex includes one or more forms of carveoutosis.  So that is what Dr Bob and StE will need to explore together.

<Dr Bob> Hello again and how are you feeling since our last conversation?

<StE> Actually, although the A&E pain continues unabated, we feel better. More optimistic. We have followed your advice and have been plotting our daily A&E time-series charts and sharing those with the front-line staff.  And what is interesting to observe is the effect of just doing that.  There are fewer “What you should do!” statements and more “What we could do …” conversations starting to happen – right at the front line.

<Dr Bob> Excellent. That is what usually happens when we switch on the fast feedback loop. I detect that you are already feeling the emotional benefit.  So now we need to explore carveoutosis.  Are you up for that?

<StE> You betcha! 

<Dr Bob> OK. The common pathology in carveoutosis is that we have some form of resource that we, literally, carve up into a larger number of smaller pieces.  It does not matter what the resource is.  It can be time, space, knowledge, skill, cash.  Anything.

<StE> Um, that is a bit abstract.  Can you explain with a real example?

<Dr Bob> OK. I will use the example of temporal carveoutosis.  Do you use email?  And if so what are your frustrations with it … your Niggles?

<StE> Ouch! You poked a tender spot with that question!  Email is one of our biggest sources of frustration.  A relentless influx of dross that needs careful scanning to filter out the important stuff. We waste hours every week on this hamster wheel.  And if we do not clear our Inboxes by close of play on Friday then the following week is even worse!

<Dr Bob> And how many of you put time aside on Friday afternoon to ‘Clear-the-Inbox’?

<StE> We all do. It does at least give us some sense of control amidst the chaos. 

<Dr Bob> OK. This is a perfect example of temporal carveoutosis.  Suppose we consider the extreme case where we only process our emails on a Friday afternoon in a chunk of protected time carved out of our diary.  Now consider the effect of our carved-out-time-policy on the flow of emails. What happens?

<StE> Well, if we all do this then we will only send emails on a Friday afternoon and the person we are sending them to will only read them the following Friday afternoon and if we need a reply we will read that the Friday after.  So the time from sending an email to getting a reply will be two weeks. And it does not make any difference how many emails we send!

<Dr Bob> Yes. That is the effect on the lead-time … but I asked what the effect was on flow?

<StE> Oops! So our answer was correct but that was not the question you asked.  Um, the effect on flow is that it will be very jerky.  Emails will only flow on Friday afternoons … so all the emails for the week will try to flow around in a few hours or minutes.  Ah! That may explain why the email system seems to slow down on Friday afternoons and that only delays the work and adds to our frustration! We naturally assumed it was because the IT department have not invested enough in hardware! Faster computers and bigger mailboxes!

<Dr Bob> What you are seeing is the inevitable and predictable effect of one form of temporal carveoutosis.  The technical name for this is a QBQ time trap and it is an iatrogenic disease. Self-inflicted.

<StE> So if the IT Department actually had the budget, and if they had actually treated the ear-ache we were giving them, and if they had actually invested in faster and bigger computers then the symptom of Friday Snail Mail would go away – but the time trap would remain.  And it might actually reinforce our emails-only-on-a-Friday-afternoon behaviour! Wow! That was not obvious until you forced us to think it through logically.

<Dr Bob> Well. I think that insight is enough to chew over for now. One eureka reaction at a time is enough in my experience. Food for thought requires time to digest.  This week your treatment plan is to share your new insight with the front-line teams.  You can use this example because email Niggles are very common.  And remember … Focus on the Flow.  Repeat that mantra to yourselves until it becomes a little voice in your head that reminds you what to do when you are pricked by the feelings of disappointment, frustration and fear.
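The Friday-afternoon thought experiment is easy to check with a few lines of Python. This sketch compares average reply lead-times when mail is read every weekday versus only on Fridays (day 4); the week layout is the only assumption.

def next_read_day(sent_day: int, read_weekdays: set) -> int:
    """First day strictly after sent_day on which mail gets read."""
    day = sent_day + 1
    while day % 7 not in read_weekdays:
        day += 1
    return day

for policy, days in [("read every weekday", {0, 1, 2, 3, 4}),
                     ("read Fridays only ", {4})]:
    lead_times = []
    for sent in range(7):  # try sending on each day of the week
        read = next_read_day(sent, days)        # recipient reads and replies
        reply_read = next_read_day(read, days)  # sender reads the reply
        lead_times.append(reply_read - sent)
    print(policy, "- average reply lead-time:",
          round(sum(lead_times) / 7, 1), "days")

Batching the work weekly stretches the average send-to-reply time towards the two weeks that StE reasoned out, and no amount of faster hardware changes that.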

St.Elsewhere’s® is a registered trademark of Kate Silvester Ltd. And to read more real cases of 4-hour A&E pain download Kate’s: The Christmas Crisis

If you would like more stories from Dr Bob’s Casebook please vote:  Yes

And if you would like to subscribe to Dr Bob’s occasional newsletter please click: Subscribe

Dr Bob runs a Clinic for Sick Systems and is sharing the story of a recent case – a hospital that has presented with chronic pain in their A&E department.

It is a complicated story so Dr Bob is presenting it in bite-sized bits that only require a few minutes to read. Part 1 is here. Part 2 is here.

To summarise the case history so far:

The patient is St.Elsewhere’s® Hospital, a medium sized district general hospital situated in mid-England. StE has a type-1 A&E Department that receives about 200 A&E arrivals per day which is rather average. StE is suffering with chronic pain – specifically the emotional, operational, cultural and financial pain caused by failing their 4-hour A&E target. Their Paymasters and Inspectors have the thumbscrews on, and each quarter … when StE publish their performance report that shows they have failed their A&E target (again) … the thumbscrews are tightened a few more clicks. Arrrrrrrrrrrrgh.

Dr Bob has discovered that StE routinely collect data on when individual patients arrive in A&E and when they depart, and that they use this information for three purposes:
1) To calculate their daily and quarterly 4-hour target failure rate.
2) To create action plans that they believe will eliminate their pain-of-failure.
3) To expedite patients who are approaching the 4-hour target – because that eases the pain.

But the action plans do not appear to have worked and, despite their heroic expeditionary effort, the chronic pain is getting worse. StE is desperate and has finally accepted that it needs help. The Board are worried that they might not survive the coming winter storm and when they hear whispers of P45s being armed and aimed by the P&I then they are finally scared enough to seek professional advice. So they Choose&Book an urgent appointment at Dr Bob’s clinic … and they want a solution yesterday … but they fear the worst. They fear discovering that there is no solution!

The Board, the operational managers and the senior clinicians feel like they are between a rock and a hard place.  If Dr Bob’s diagnosis is ‘terminal’ then they cannot avert the launch of the P45’s and it is Game Over for the Board and probably for StE as well.  And if Dr Bob’s diagnosis is ‘treatable’ then they cannot avert accepting the painful exposure of their past and present ineptitude – particularly if the prescribed humble pie is swallowed and has the desired effect of curing the A&E pain.

So whatever the diagnosis they appear to have an uncomfortable choice: leave or learn?

Dr Bob has been looking at the A&E data for one typical week that StE have shared.

And Dr Bob knows what to look for … the footprint of a dangerous yet elusive disease. A characteristic sign that doctors have a name for … a pathognomonic sign.

Dr Bob is looking for the Horned Gaussian … and has found it!

So now Dr Bob has to deliver the bittersweet news to the patient.

<Dr Bob> Hello again. Please sit down and make yourselves comfortable. As you know I have been doing some tests on the A&E data that you shared.  I have the results of those tests and I need to be completely candid with you. There is good news and there is not-so-good news.


Would you like to hear this news and if so … in what order?

<StE> Oh dear. We were hoping there was only good news so perhaps we should start there.

<Dr Bob> OK.  The good news is that you appear to be suffering from a treatable disease. The data shows the unmistakable footprint of a Horned Gaussian.

<StE> Phew! Thank the Stars! That is what we had hoped and prayed for! Thank you so much. You cannot imagine how much better we feel already.  But what is the not-so-good news?

<Dr Bob> The not-so-good news is that the disease is iatrogenic which is medical jargon for self-inflicted.  And I appreciate that you did not do this knowingly so you should not feel guilt or blame for doing things that you did not know were self-defeating.


And in order to treat this disease we have to treat the root cause and that implies you have a simple choice to make.

<StE> Actually, what you are saying does not come as a surprise. We have sensed for some time that there was something that we did not really understand but we have been so consumed by fighting-the-fire that we have prevaricated in grasping that nettle.  And we think we know what the choice is: to leave or to learn. Continuing as we are is no longer an option.

<Dr Bob> You are correct.  That is the choice.

The StE team confer and unanimously choose to take the more courageous path … they choose to learn.

<StE> We choose to learn. Can we start immediately? Can you teach us about the Horned Gaussian?

<Dr Bob> Bravo! Of course, but before that we need to understand what a Gaussian is.

Suppose we have some very special sixty-sided dice with faces numbered 1 to 59, and suppose we toss six of them and wait until they come to rest. Then suppose we count up the total score on the topmost facet of each die … and then suppose we write that total down. And suppose we do this 1500 times and then calculate the average total score. What do you suppose the average would be … approximately?

<StE> Well … the score on each die can be between 1 and 59 and each number is equally likely to happen … so the average score for 1500 throws of one die will be about 30 … so the average score for six of these mega-dice will be about 180.

<Dr Bob> Excellent. And how will the total score vary from throw to throw?

<StE> H’mm … tricky.  We know that it will vary but our intuition does not tell us by how much.

<Dr Bob> I agree. It is not intuitively obvious at all. We sense that the further away from 180 we look the less likely we are to find that score in our set of 1500 totals but that is about as close as our intuition can take us.  So we need to do an empirical experiment and we can do that easily with a spreadsheet. I have run this experiment and this is what I found …

Notice that there is rather a wide spread around our expected average of 180 and remember that this is just tossing a handful of sixty-sided dice … so this variation is random … it is inherent and expected and we have no influence over it. Notice too that on the left the distribution of the scores is plotted as a histogram … the blue line. Notice the symmetrical hump-like shape … this is the footprint of a Gaussian.
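Dr Bob ran his experiment in a spreadsheet; here is a minimal sketch of the same experiment in Python for anyone who wants to reproduce it.

import random
from statistics import mean, stdev

random.seed(0)
# Six dice with faces numbered 1 to 59, tossed together 1500 times:
totals = [sum(random.randint(1, 59) for _ in range(6)) for _ in range(1500)]

print(f"average total = {mean(totals):.1f}")   # close to the expected 180
print(f"spread (SD)   = {stdev(totals):.1f}")  # the inherent random variation
# A histogram of 'totals' shows the symmetrical Gaussian hump.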

<StE> So what? This is a bit abstract and theoretical for us. How does it help us?

<Dr Bob> Please bear with me a little longer. I have also plotted the time that each of your patients were in A&E last week on the same sort of chart. What do you notice?


<StE> H’mm. This is very odd. It looks like someone has taken a blunt razor to the data … they fluffed the first bit but sharpened up their act for the rest of it. And the histogram looks a bit like the one on your chart, well the lower half does, then there is a big spike. Is that the Horned thingamy?

<Dr Bob> Yes. This is the footprint of a Horned Gaussian. What this picture of your data says is that something is distorting the natural behaviour of your A&E system and that something is cutting in at 240 minutes. Four hours.

<StE> Wait a minute! That is exactly what we do. We admit patients who are getting close to the 4-hour target to stop the A&E clock and reduce the pain of 4-hour failure.  But we can only admit as many as we have space for … and sometimes we run out of space.  That happened last Monday evening. The whole of StE hospital was gridlocked and we had no option but to store the A&E patients in the corridors – some for more than 12 hours! Just as the chart shows.

<Dr Bob> And by distorting your natural system behaviour in this way you are also distorting the data.  Your 4-hour breach rate is actually a lot lower than it would otherwise be … until the system gridlocks, and then it goes through the roof.  This design is unstable and unsafe.


Are Mondays always like this?

<StE> Usually, yes. Tuesday feels less painful and the agony eases up to Friday then it builds up again.  It is worse than Groundhog Day … it is more like Groundhog Week!  The chaos and firefighting is continuous though, particularly in the late afternoon and evenings.      

<Dr Bob> So now we are gaining some understanding.  The uncomfortable discovery when we look in the mirror is that part of the cause is our own policies that create the symptoms and obscure the disease. We have looked in the mirror and “we have seen the enemy and the enemy is us”. This is an iatrogenic disease and in my experience a common root cause is something called carveoutosis.  Understanding the pathogenesis of carveoutosis is the path to understanding what is needed to treat it.  Are you up for that?

<StE> You bet we are!

<Dr Bob> OK. First we need to establish a new habit. You need to start plotting your A&E data just like this. Every day. Every week. Forever. This is your primary feedback loop. This chart will tell you when real improvement is happening. Your quarterly average 4-hour breach percentage will not. The Paymasters, Inspectors and Government will still ask for that quarterly aggregated target failure data but you will use these diagnostic and prognostic system behaviour charts for all your internal diagnosis, decisions and actions.  And next week we will explore carveoutosis.

St.Elsewhere’s® is a registered trademark of Kate Silvester Ltd.
And to read more real cases of 4-hour pain download Kate’s:
 The Christmas Crisis

If you found this topic of interest and would like more of Dr Bob’s cases please click:  Yes

And if you would like to subscribe to Dr Bob’s occasional newsletter please click: Subscribe

Hello, Dr Bob here.

This week we will continue to explore the Case of Chronic Pain in the A&E Department of St.Elsewhere’s Hospital.

Last week we started by ‘taking a history’.  We asked about symptoms and we asked about the time patterns and associations of those symptoms. The subjective stuff.

And as we studied the pattern of symptoms a list of plausible diagnoses started to form … with chronic carveoutosis as a hot contender.

Carveoutosis is a group of related system diseases that have a common theme. So if we find objective evidence of carveoutosis then we will talk about it … but for now we need to keep an open mind.

The next step is to ‘examine the patient’ – which means that we use the pattern of symptoms to focus our attention on seeking objective signs that will help us to prune our differential diagnosis.

But first we need to be clear what the pain actually is. We need a more detailed description.

<Dr Bob> Can you explain to me what the ‘4-hour target’ is?

<StE> Of course. When a new patient arrives at our A&E Department we start a clock for that patient, and when the patient leaves we stop their clock.  Then we work out how long they were in the A&E Department and we count the number that were longer than 4-hours for each day.  Then we divide this number by the number of patients who arrived that day to give us a percentage: a 4-hour target failure rate. Then we average those daily rates over three months to give us our Quarterly 4-hour A&E Target Performance; one of the Key Performance Indicators (KPIs) that are written into our contract and which we are required to send to our Paymasters and Inspectors.  If that is more than 5% we are in breach of our contract and we get into big trouble, if it is less than 5% we get left alone. Or to be more precise the Board get into big trouble and they share the pain with us.
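In code, the whole quarterly KPI boils down to something like this minimal Python sketch. The clock times are invented; a real quarter has about 18,000 of them.

# One hypothetical day of A&E clock data: (arrival, departure) in minutes.
clocks = [(10, 150), (30, 290), (45, 500), (60, 295), (90, 200)]

durations = [depart - arrive for arrive, depart in clocks]
breaches = sum(1 for d in durations if d > 240)  # more than 4 hours
daily_failure_rate = 100 * breaches / len(clocks)

print(f"4-hour failure rate today: {daily_failure_rate:.0f}%")
# The daily rates are then averaged over the quarter; above 5% breaches the contract.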

<Dr Bob> That is much clearer now.  Do you know how many new patients arrive in A&E each day, on average?

<StE> About two hundred, but it varies quite a lot from day-to-day.

Dr Bob does a quick calculation … about 200 patients for 3 months is about 18,000 pieces of data on how long the patients were in the A&E Department …  a treasure trove of information that could help to diagnose the root cause of the chronic 4-hour target pain.  And all this data is boiled down into a binary answer to the one question in their quarterly KPI report:

Q: Did you fail the 4-hour A&E target this quarter? [Yes] [No]       

That implies that more than 99.99% of the available information is not used.

Which is like driving on a mountain road at night with your lights on but your eyes closed! Dangerous and scary!

Dr Bob now has a further addition to his list of diagnoses: amaurosis agnosias which roughly translated means ‘turning a blind eye’.

<Dr Bob> Can I ask how you use this clock information in your minute-to-minute management of patients?

<StE> Well for the first three hours we do not use it … we just get on with managing the patients.  Some are higher priority and more complicated than others, we call them Majors and we put them in the Majors Area. Some are lower priority and easier so we call them Minors and we put them in the Minors Area. Our doctors and nurses then run around looking after the highest clinical priority patients first … for obvious reasons. However, as a patient’s clock starts to get closer to 4-hours then that takes priority and those patients start to leapfrog up the queue of who to see next.  We have found that this is an easy and effective way to improve our 4-hour performance. It can make the difference between passing or failing a quarter and reducing our referred pain! To assist us implement the Leapfrog Policy our Board have invested in some impressive digital technology … a huge computer monitor on the wall that shows exactly who is closest to the 4-hour target.  This makes it much easier for us to see which patients need to be leapfrogged for decision and action.

<Dr Bob>  Do you, by any chance, keep any of the individual patient clock data?

<StE> Yes, we have to do that because we are required to complete a report each week for the causes of 4-hour failures and we also have to submit an Action Plan for how we will eliminate them.  So we keep the data and then spend hours going back through the thousands of A&E cards to identify what we think are the causes of the delays. There are lots of causes and many patients are affected by more than one; and there does not appear to be any clear pattern … other than ‘too busy’. So our action plan is the same each week … write yet another business case asking for more staff and for more space. 

<Dr Bob> Could you send me some of that raw clock data?  Anonymous of course. I just need the arrival date and time and the departure date and time for an average week.

<StE> Yes of course – we will send the data from last week – there were about 1500 patients.

Dr Bob now has all the information needed to explore the hunch that the A&E Department is being regularly mauled by a data mower … one that makes the A&E performance look better … on paper … and that obscures the actual problem.

Just like treating a patient’s symptoms and making their underlying disease harder to diagnose and therefore harder to cure.

To be continued …

If you found this topic of interest and would like to see more Dr Bob stories like this please click here:  Yes

If you would like to subscribe to Dr Bob’s newsletter please click here: Subscribe



The blog last week seems to have caused a bit of a stir … so this week we will continue on the same theme.

I’m Dr Bob and I am a hospital doctor: I help to improve the health of poorly hospitals.

And I do that using the Science of Improvement – which is the same as all sciences, there is a method to it.

Over the next few weeks I will outline, in broad terms, how this is done in practice.

And I will use the example of a hospital presenting with pain in their A&E department.  We will call it St.Elsewhere’s ® Hospital … a fictional name for a real patient.

It is a while since I learned science at school … so I thought a bit of a self-refresher would be in order … just to check that nothing fundamental has changed.


This is what I found on page 2 of a current GCSE chemistry textbook.

Note carefully that the process starts with observations; hypotheses come after that; then predictions and finally designing experiments to test them.

The scientific process starts with study.

Which is reassuring because when helping a poorly patient or a poorly hospital that is exactly where we start.

So, first we need to know the symptoms; only then can we start to suggest some hypotheses for what might be causing those symptoms – a differential diagnosis; and then we look for more specific and objective symptoms and signs of those hypothetical causes.

<Dr Bob> What is the presenting symptom?

<StE> “Pain in the A&E Department … or more specifically the pain is being felt by the Executive Department who attribute the source to the A&E Department.  Their pain is that of 4-hour target failure.”

<Dr Bob> Are there any other associated symptoms?

<StE> “Yes, a whole constellation.  Complaints from patients and relatives; low staff morale, high staff turnover, high staff sickness, difficulty recruiting new staff, and escalating locum and agency costs. The list is endless.”

<Dr Bob> How long have these symptoms been present?

<StE> “As long as we can remember.”

<Dr Bob> Are the symptoms staying the same, getting worse or getting better?

<StE> “Getting worse. It is worse in the winter and each winter is worse than the last.”

<Dr Bob> And what have you tried to relieve the pain?

<StE> “We have tried everything and anything – business process re-engineering, balanced scorecards, Lean, Six Sigma, True North, Blue Oceans, Golden Hours, Perfect Weeks, Quality Champions, performance management, pleading, podcasts, huddles, cuddles, sticks, carrots, blogs  and even begging. You name it we’ve tried it! The current recommended treatment is to create a swarm of specialist short-stay assessment units – medical, surgical, trauma, elderly, frail elderly just to name a few.” 

<Dr Bob> And how effective have these been?

<StE> “Well some seemed to have limited and temporary success but nothing very spectacular or sustained … and the complexity and cost of our processes just seem to go up and up with each new initiative. It is no surprise that everyone is change weary and cynical.”

The pattern of symptoms is that of a chronic (longstanding) illness that has seasonal variation, which is getting worse over time and the usual remedies are not working.

And it is obvious that we do not have a clear diagnosis; or know if our unclear diagnosis is incorrect; or know if we are actually dealing with an incurable disease.

So first we need to focus on establishing the diagnosis.

And Dr Bob is already drawing up a list of likely candidates … with carveoutosis at the top.

<Dr Bob> Do you have any data on the 4-hour target pain?  Do you measure it?

<StE> “We are awash with data! I can send the quarterly breach performance data for the last ten years!”

<Dr Bob> Excellent, that will be useful as it should confirm that this is a chronic and worsening problem but it does not help establish a diagnosis.  What we need is more recent, daily data. Just the last six months should be enough. Do you have that?

<StE> “Yes, that is how we calculate the quarterly average that we are performance managed on. Here is the spreadsheet. We are ‘required’ to have fewer than 5% 4-hour breaches on average. Or else.”

This is where Dr Bob needs some diagnostic tools.  He needs to see the pain scores presented as a picture … so he can see the pattern over time … because it is a very effective way to generate plausible causal hypotheses.

Dr Bob can do this on paper, or with an Excel spreadsheet, or use a tool specifically designed for the job. He selects his trusted visualisation tool : BaseLine©.


<Dr Bob> This is your A&E pain data plotted as a time-series chart.  At first glance it looks very chaotic … that is shown by the wide and flat histogram. Is that how it feels?

<StE> “That is exactly how it feels … earlier in the year it was unremitting pain and now we have a constant background ache with sharp, severe, unpredictable stabbing pains on top. I’m not sure what is worse!”

<Dr Bob> We will need to dig a bit deeper to find the root cause of this chronic pain … we need to identify the diagnosis or diagnoses … and your daily pain data should offer us some clues.

So I have plotted your data in a different way … grouping by day of the week … and this shows there is a weekly pattern to your pain. It looks worse on Mondays and least bad on Fridays.  Is that your experience?

<StE> “Yes, the beginning of the week is definitely worse … because it is like a perfect storm … more people referred by their GPs on Mondays and the hospital is already full with the weekend backlog of delayed discharges, so there are rarely beds to admit new patients into until late in the day. So they wait in A&E.”

Dr Bob’s differential diagnosis is firming up … he still suspects acute-on-chronic carveoutosis as the primary cause but he now has identified an additional complication … Forrester’s Syndrome.

And Dr Bob suspects an unmentioned problem … that the patient has been traumatised by a blunt datamower!

So that is the evidence we will look for next …
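For anyone who wants to try Dr Bob’s two views on their own daily data, here is a minimal sketch in Python; the numbers are synthetic (with a Monday-worst weekly pattern deliberately baked in) because St.Elsewhere’s is a fictional patient.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Six months of synthetic daily 4-hour breach percentages with a built-in
# weekly pattern (worst on Mondays) -- a stand-in for real A&E data.
rng = np.random.default_rng(0)
dates = pd.date_range("2015-01-01", periods=182, freq="D")
weekday_effect = np.array([12, 9, 7, 6, 4, 5, 8])   # Mon..Sun, illustrative
pct_breach = weekday_effect[dates.dayofweek] + rng.normal(0, 3, len(dates))
df = pd.DataFrame({"date": dates, "pct_breach": pct_breach})

# View 1: the raw time-series chart, which looks chaotic at first glance.
df.plot(x="date", y="pct_breach", legend=False, title="Daily 4-hour breach %")

# View 2: the same data regrouped by day of the week, which exposes the
# weekly pattern that generates the causal hypotheses.
order = ["Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday"]
df.groupby(df["date"].dt.day_name())["pct_breach"].mean().reindex(order) \
  .plot(kind="bar", title="Mean breach % by day of week")
plt.show()
```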




This week an interesting report was published by Monitor – about some possible reasons for the A&E debacle that England experienced in the winter of 2014.

Summary At A Glance

“91% of trusts did not  meet the A&E 4-hour maximum waiting time standard last winter – this was the worst performance in 10 years”.

So it seems a bit odd that the very detailed econometric analysis and the testing of “Ten Hypotheses” did not look at the pattern of change over the previous 10 years … it just compared Oct-Dec 2014 with the same period for 2013! And the conclusion: “Hospitals were fuller in 2014“.  H’mm.

The data needed to look back 10 years is readily available on the various NHS England websites … so here it is plotted as simple time-series charts.  These are called system behaviour charts or SBCs. Our trusted analysis tools will be a Mark I Eyeball connected to the 1.3 kg of wetware between our ears that runs ChimpOS 1.0 …  and we will look back 11 years to 2004.

First we have the A&E Arrivals chart … about 3.4 million arrivals per quarter. The annual cycle is obvious … higher in the summer and falling in the winter. And when we compare the first five years with the last six years there has been a small increase of about 5%, and that seems to associate with a change of political direction in 2010.

So over 11 years the average A&E demand has gone up … a bit … but only by about 5%.

In stark contrast, the A&E arrivals that are admitted to hospital have risen relentlessly over the same 11-year period by about 50% … roughly 5% per annum … ten times the total increase in arrivals … and with no obvious step in 2010. We can see the annual cycle too.  It is like a ratchet. Click click click.

But that does not make sense. Where are these extra admissions going to? We can only conclude that over 11 years we have progressively added more places to admit A&E patients into.  More space-capacity to store admitted patients … so we can stop the 4-hour clock perhaps? More emergency assessment units perhaps? Places to wait with the clock turned off perhaps? The charts imply that our threshold for emergency admission has been falling: Admission has become increasingly the ‘easier option’ for whatever reason.  So why is this happening? Do more patients need to be admitted?

In a recent empirical study we asked elderly patients about their experience of the emergency process … and we asked them just after they had been discharged … when it was still fresh in their memories. A worrying pattern emerged. Many said that they had been admitted despite saying that they did not want to be.  In other words they did not willingly consent to admission … they were coerced.

This is anecdotal data so, by implication, it is wholly worthless … yes?  Perhaps from a statistical perspective but not from an emotional one.  It is a red petticoat being waved that should not be ignored.  Blissful ignorance comes from ignoring anecdotal stuff like this. Emotionally uncomfortable anecdotal stories. Ignore the early warning signs and suffer the potentially catastrophic consequences.

And here is the corresponding A&E 4-hour Target Failure chart.  Up to 2010 the imposed target was 98% success (i.e. 2% acceptable failure) and, after a bit of “encouragement” in 2004-5, this was actually achieved in some of the summer months (when the A&E demand was highest, remember).

But with a change of political direction in 2010 the “hated” 4-hour target was diluted down to 95% … so a 5% failure rate was now ‘acceptable’ politically, operationally … and clinically.

So it is no huge surprise that this is what was achieved … for a while at least.

In the period 2010-13 the primary care trusts (PCTs) were dissolved and replaced by clinical commissioning groups (CCGs) … the doctors were handed the ignition keys to the juggernaut that was already heading towards the cliff.

The charts suggest that the seeds were already well sown by 2010 for an evolving catastrophe that peaked last year; and the changes in 2010 and 2013 may have just pressed the accelerator pedal a bit harder. And if the trend continues it will be even worse this coming winter. Worse for patients and worse for staff and worse for commissioners and  worse for politicians. Lose lose lose lose.

So to summarise the data from the NHS England’s own website:

1. A&E arrivals have gone up 5% over 11 years.
2. Admissions from A&E have gone up 50% over 11 years.
3. Since lowering the threshold for acceptable A&E performance from 98% to 95% the system has become unstable and “fallen off the cliff” … but remember, a temporal association does not prove causation.

So what has triggered the developing catastrophe?

Well, it is important to appreciate that when a patient is admitted to hospital it represents an increase in workload for every part of the system that supports the flow through the hospital … not just the beds.  Beds represent space-capacity. They are just where patients are stored.  We are talking about flow-capacity; and that means people, consumables, equipment, data and cash.

So if we increase emergency admissions by 50% then, if nothing else changes, we will need to increase the flow-capacity by 50% and the space-capacity to store the work-in-progress by 50% too. This is called Little’s Law. It is a mathematically proven Law of Flow Physics. It is not negotiable.
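A minimal sketch of that arithmetic, using Little’s Law (average work-in-progress = arrival rate × average time in system); the admission rate and length of stay below are illustrative assumptions, not NHS data.

```python
# Little's Law: average work-in-progress = arrival rate * average time in system.
# All numbers below are illustrative assumptions.

admissions_per_day = 100        # emergency admissions (flow in)
avg_length_of_stay = 5.0        # average days in the system

beds_occupied = admissions_per_day * avg_length_of_stay
print(f"Average beds occupied: {beds_occupied:.0f}")            # 500

# Increase admissions by 50% and, if nothing else changes, the
# work-in-progress (and everything that serves it) rises by 50% too.
beds_after = (admissions_per_day * 1.5) * avg_length_of_stay
print(f"After a 50% rise in admissions: {beds_after:.0f}")      # 750
```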

So have we increased our flow-capacity and our space-capacity (and our costs) by 50%? I don’t know. That data is not so easy to trawl from the websites. It will be there though … somewhere.

What we have seen is an increase in bed occupancy (the red box on Monitor’s graphic above) … but not a 50% increase … that is impossible if the occupancy is already over 85%.  A hospital is like a rigid metal box … it cannot easily expand to accommodate a growing queue … so the inevitable result is an increase in the ‘pressure’ inside.  We have created an emergency care pressure cooker. Well, lots of them actually.

And that is exactly what the staff who work inside hospitals say it feels like.

And eventually the relentless pressure and daily hammering causes the system to start to weaken and fail, gradually at first then catastrophically … which is exactly what the NHS England data charts are showing.

So what is the solution?  More beds?

Nope.  More beds will create more space and that will relieve the pressure … for a while … but it will not address the root cause of why we are admitting 50% more patients than we used to; and why we seem to need to increase the pressure inside our hospitals to squeeze the patients through the process and extrude them out of the various exit nozzles.

Those are the questions we need to have understandable and actionable answers to.

Q1: Why are we admitting 5% more of the same A&E arrivals each year rather than delivering what they need in 4 hours or less and returning them home? That is what the patients are asking for.

Q2: Why do we have to push patients through the in-hospital process rather than pulling them through? The staff are willing to work but not inside a pressure cooker.

A more sensible improvement strategy is to look at the flow processes within the hospital and ensure that all the steps and stages are pulling together to the agreed goals and plan for each patient. The clinical management plan that was decided when the patient was first seen in A&E. The intended outcome for each patient and the shortest and quickest path to achieving it.

Our target is not just a departure within 4 hours of arriving in A&E … it is a competent diagnosis (study) and an actionable clinical management plan (plan) within 4 hours of arriving; and then a process that is designed to deliver (do) it … for every patient. Right, first time, on time, in full and at a cost we can afford.

Q: Do we have that?
A: Nope.

Q: Is that within our gift to deliver?
A: Yup.

Q: So what is the reason we are not already doing it?
A: Good question.  Who in the NHS is trained how to do system-wide flow design like this?

There is a wonderful invention called the retrospectoscope which is designed to provide clarity of hindsight.

And there is an art to retrospectoscopy.

The key to the art is to carefully avoid committing to a precise purpose at the start – in the prospectus; then, after the actual outcome is demonstrated, to claim that it was predicted, using the ambiguity of the prospectus to hide the sleight-of-hand.

The purpose is to gain a reputation for foresight and the ability to predict the future … because oracles, sages and soothsayers are much valued in society.

Retrospectoscopy has gained a tarnished reputation but it does have an important role … it provides the ability to learn from experience … but to be effective we have to use the retrospectoscope correctly. It is too easy to abuse it and to fall into the trap of self-justification  by distorting and deleting what we see.

To avoid the trap we need to do several things:

  1. Write down and share our clear diagnosis, plan and prediction at the start … ‘the prospectus’.
  2. Record and share the information that we will need to test our prediction robustly … ‘the evidence’.
  3. Compare our prospective rhetoric with the retrospective reality and share what we find … ‘the learning’.

It is unlikely that our prediction will be 100% accurate … and any deviation from aim is a valuable source of learning … better than predicted, worse than predicted and not predicted are all opportunities for new insights, deeper understanding, wiser decisions and better outcomes.

If we fail to use the retrospectoscope correctly then we will be caught in a perpetual cycle of self-justifying delusion that is manifest as the name-shame-blame-game.  And if we side-step the expected discomfort of learning we will condemn ourselves to endlessly repeating the painful lessons that history can teach us to avoid.

The common theme in the self-justifying-delusion trap-avoiding recipe is share … if we are not prepared to learn in public then we should accept the inevitable consequences with grace.

Both courage and humility are leadership assets.



Telling a compelling story of improvement is an essential skill for a facilitator and leader of change.

A compelling story has two essential components: cultural and technical. Otherwise known as emotional and factual.

Many of the stories that we hear are one or the other; and consequently are much less effective.

Some prefer emotive language and use stories of dismay and distress to generate an angry reaction: “That is awful we must DO something about that!”

And while emotion is the necessary fuel for action,  an angry mob usually attacks the assumed cause rather than the actual cause and can become ‘mindless’ and destructive.

Those who have observed the dangers of the angry mob opt for a more reflective, evidence-based, scientific, rational, analytical, careful, risk-avoidance approach.

And while facts are the necessary informers of decision, the analytical mind often gets stuck in the ‘paralysis of analysis’ swamp as layer upon layer of increasing complexity is exposed … more questions than answers.

So in a compelling story we need a bit of both.

We need a story that fires our emotions … and … we need a story that engages our intellect.

A bit of something for everyone.

And the key to developing this compelling-story-telling skill is to start with something small enough to be doable in a reasonable period of time.  A short story rather than a lengthy legend.

A story, tale or fable.

Aesop’s Fables and Chaucer’s Canterbury Tales are still remembered for their timeless stories.

And here is a taste of such a story … one that has been published recently for all to read and to enjoy.

A Story of Learning Improvement Science

It is an effective blend of cultural and technical, emotional and factual … and to read the full story just follow the ‘Continue’ link.

The early phases of a transformation are where most fall by the wayside.

And the failure rate is horrifying – an estimated 80% of improvement initiatives fail to achieve their goals.

The recent history of the NHS is littered with the rusting wreckage of a series of improvement bandwagons.  Many who survived the crashes are too scarred and too scared to try again.

Transformation and improvement imply change which implies innovation … new ways of thinking, new ways of behaving, new techniques, new tools, and new ways of working.

And it has been known for over 50 years that innovation spreads in a very characteristic way. This process was described by Everett Rogers in a book called ‘Diffusion of Innovations‘ and is shown visually in the diagram above.

The horizontal axis is a measure of individual receptiveness to the specific innovation … and the labels are behaviours: ‘I exhibit early adopter behaviour‘ (i.e. not ‘I am an early adopter’).

What Rogers discovered through empirical observation was that in all cases the innovation diffuses from left-to-right; from innovation through early adoption to the ‘silent’ majority.

Complete diffusion is not guaranteed though … there are barriers between the phases.

One barrier is between innovation and early adoption.

There are many innovations that we never hear about and very often the same innovation appears in many places and often around the same time.

This innovation-adoption barrier is caused by two things:
1) most are not even aware of the problem … they are blissfully ignorant;
2) news of the innovation is not shared widely enough.

Innovators are sensitive people.  They sense there is a problem long before others do. They feel the fear and the excitement of need for innovation. They challenge their own assumptions and they actively seek solutions. They swim against the tide of ignorance, disinterest, skepticism and often toxic cynicism.  So when they do discover a way forward they often feel nervous about sharing it. They have learned (the hard way) that the usual reaction is to be dismissed and discounted.  Most people do not like to learn about unknown problems and hazards; and they like it even less to learn that there are solutions that they neither recognise nor understand.

But not everyone.

There is a group called the early adopters who, like the innovators, are aware of the problem. They just do not share the innovator’s passion to find a solution … irrespective of the risks … so they wait … their antennae tuned for news that a solution has been found.

Then they act.

And they act in one of two ways:

1) Talkers … re-transmit the news of the problem and the discovery of a generic solution … which is essential in building awareness.

2) Walkers … try the innovative approach themselves and in so doing learn a lot about their specific problem and the new ways to solving it.

And it is the early adopters that do both of these actions that are the most effective and the most valuable to everyone else.  Those that talk-the-new-walk and walk-the-new-talk.

And we can identify who they are because they will be able to tell stories of how they have applied the innovation in their world; and the results that they have achieved; and how they achieved them; and what worked well; and what did not; and what they learned; and how they evolved and applied the innovation to meet their specific needs.

They are the leaders, the coaches and the teachers of improvement and transformation.

They See One, Do Some, and Teach Many.

The early adopters are the bridge across the Innovation and Transformation Chasm.

One of the traps for the inexperienced Improvement Science Practitioner is to believe that applying the science in the real world is as easy as it is in the safety of the training environment.

It isn’t.

The real world is messier and more complicated and it is easy to get lost in the fog of confusion and chaos.

So how do we avoid losing our footing, slipping into the toxic emotional swamp of organisational culture and giving ourselves an unpleasant dunking?

We use safety equipment … to protect ourselves and others from unintended harm.

The Improvement-by-Design framework is like a scaffold.  It is there to provide structure and safety.  The techniques and tools are like the harnesses, shackles, ropes, crampons, and pitons.  They give us flexibility and security.

But we need to know how to use them. We need to be competent as well as confident.

We do not want to tie ourselves up in knots … and we do not want to discover that we have not tied ourselves to something strong enough to support us if we slip. Which we will.

So we need to learn and practise the basic skills to the point that they are second nature.

We need to learn how to tie secure knots, quickly and reliably.

We need to learn how to plan an ascent … identifying the potential hazards and designing around them.

We need to learn how to assemble and check what we will need before we start … not too much and not too little.

We need to learn how to monitor our progress against our planned milestones and be ready to change the plan as we go … and even to abandon the attempt if necessary.

We would not try to climb a real mountain without the necessary training, planning, equipment and support … even though it might look easy.

And we do not try to climb an improvement mountain without the necessary training, planning, tools and support … even though it might look easy.

It is not as easy as it looks.

There is a big bun-fight kicking off on the topic of 7-day working in the NHS.

The evidence is that there is a statistical association between in-hospital mortality of emergency admissions and the day of the week: weekends are more dangerous.

There are fewer staff working at weekends in hospitals than during the week … and delays and avoidable errors increase … so risk of harm increases.

The evidence also shows that significantly fewer patients are discharged at weekends.

So the ‘obvious’ solution is to have more staff on duty at weekends … which will cost more money.

Simple, obvious, linear and wrong.  Our intuition has tricked us … again!

Let us unravel this Gordian Knot with a bit of flow science and a thought experiment.

1. The evidence shows that there are fewer discharges at weekends … and so demonstrates lack of discharge flow-capacity. A discharge process is not a single step, there are many things that must flow in sync for a discharge to happen … and if any one of them is missing or delayed then the discharge does not happen or is delayed.  The weakest link effect.

2. The evidence shows that the number of unplanned admissions varies rather less across the week; which makes sense because they are unplanned.

3. So add those two together and at weekends we see hospitals filling up with unplanned admissions – not because the sick ones are arriving faster – but because the well ones are leaving slower.

4. The effect of this is that at weekends the queue of people in beds gets bigger … and they need looking after … which requires people and time and money.

5. So the number of staffed beds in a hospital must be enough to hold the biggest queue – not the average or some fudged version of the average like a 95th percentile.

6. So a hospital running a 5-day model needs more beds because there will be more variation in bed use and we do not want to run out of beds and delay the admission of the newest and sickest patients. The ones at most risk.

7. People do not get sicker because there is better availability of healthcare services – but saying that we need to add more unplanned-care flow-capacity at weekends implies that they do.  What is actually required is that the same amount of flow-resource that is currently available Mon-Fri is spread out Mon-Sun. The flow-capacity is designed to match the customer demand – not the convenience of the supplier.  And that means for all parts of the system required for unplanned patients to flow.  What, where and when. It costs the same.

8. Then what happens is that the variation in the maximum size of the queue of patients in the hospital will fall and empty beds will appear – as if by magic.  Empty beds that ensure there is always one for a new, sick, unplanned admission on any day of the week.

9. And empty beds that are never used … do not need to be staffed … so there is a quick way to reduce expensive agency staff costs.

So with a comprehensive 7-day flow-capacity model the system actually gets safer, less chaotic, higher quality and less expensive. All at the same time. Safety-Flow-Quality-Productivity.
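A toy simulation of that thought experiment, with made-up numbers: unplanned admissions arrive every day, while the same weekly discharge flow-capacity is spread over five days in one scenario and over seven in the other.

```python
import random

random.seed(1)

def simulate(days, weekday_discharge, weekend_discharge, start=500):
    """Daily bed occupancy when admissions arrive 7 days a week but the
    discharge flow-capacity is set per day. Both scenarios below have the
    same *weekly* discharge capacity (700); only its spread differs.
    All numbers are illustrative assumptions."""
    occupied, trace = start, []
    for day in range(days):
        admissions = random.randint(90, 110)     # unplanned: varies little
        capacity = weekend_discharge if day % 7 >= 5 else weekday_discharge
        occupied += admissions - min(capacity, occupied)
        trace.append(occupied)
    return trace

for label, wd, we in [("5-day", 140, 0), ("7-day", 100, 100)]:
    t = simulate(28, wd, we)
    mean, peak = sum(t) / len(t), max(t)
    print(f"{label} model: mean occupancy {mean:.0f}, peak {peak}, "
          f"beds needed above the mean {peak - mean:.0f}")
```

The 5-day model swings through a large weekly cycle, so the number of staffed beds needed to cover the peak sits well above the average; the 7-day model barely swings at all, and the difference appears as the ‘magic’ empty beds of point 8.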

by Julian Simcox & Terry Weight

Ben Goldacre has spent several years popularizing the idea that we all ought to be more interested in science.

Every day he writes and tweets examples of “bad science”, and about getting politicians and civil servants to be more evidence-based; about how governmental interventions should be more thoroughly tested before being rolled-out to the hapless citizen; about how the development and testing of new drugs should be more transparent to ensure the public get drugs that actually make a difference rather than risk harm; and about bad statistics – the kind that “make clever people do stupid things”(8).

Like Ben we would like to point the public sector, in particular the healthcare sector and its professionals, toward practical ways of doing more of the good kind of science, but just what is GOOD science?

In collaboration with the Cabinet Office’s Behavioural Insights Team, Ben has recently published a polemic (9) advocating evidence-based government policy. For us this too is commendable, yet there is a potentially grave error of omission in their paper, which seems to fixate upon just a single method of research, and risks setting-up the unsuspecting healthcare professional for failure and disappointment – as Abraham Maslow once famously said:

“… it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” (17)

We question the need for the new Test, Learn and Adapt (TLA) model he offers because the NHS already possesses such a model – one which in our experience is more complete and often simpler to follow – it is called the “Improvement Model”(15) – and via its P-D-S-A mnemonic (Plan-Do-Study-Act) embodies the scientific method.

Moreover there is a preexisting wealth of experience on how best to embed this thinking within organisations – from top-to-bottom and importantly from bottom-to-top; experience that has been accumulating for fully nine decades – and though originally established in industrial settings has long since spread to services.

We are this week publishing two papers, one longer and one shorter, in which we start by defining science, ruing the dismal way in which it is perennially conveyed to children and students, the majority of whom leave formal education without understanding the power of discovery or gaining any first hand experience of the scientific method.

View Shorter Version Abstract

We argue that if science were to be defined around discovery, and learning cycles, and built upon observation, measurement and the accumulation of evidence – then good science could vitally be viewed as a process rather than merely as an externalized entity. These things comprise the very essence of what Don Berwick refers to as Improvement Science (2) as embodied by the Institute of Healthcare Improvement (IHI) and in the NHS’s Model for Improvement.

We also aim to bring an evolutionary perspective to the whole idea of science, arguing that its time has been coming for five centuries, yet is only now more fully arriving. We suggest that in a world where many at school have been turned-off science, the propensity to be scientific in our daily lives – and at work – makes a vast difference to the way people think about outcomes and their achievement. This is especially so if those who take a perverse pride in saying they avoided science at school, or who freely admit they do not do numbers, can get switched on to it.

The NHS Model for Improvement has a pedigree originating with Walter Shewhart in the 1920s, then being famously applied by Deming and Juran after WWII. Deming in particular encapsulates the scientific method in his P-D-C-A model (three decades later he revised it to P-D-S-A in order to emphasize that the Check stage must not be short-changed) – his pragmatic way of enabling learning/improvement to evolve bottom-up in organisations.

After the 1980s Dr Don Berwick, standing on these shoulders, then applied the same thinking to the world of healthcare – initially in his native America. Berwick’s approach is to encourage people to ask questions such as: What works? … and How would we know? His method is founded upon a culture of evidence-based learning, providing a local context for systemic improvement efforts. A new organisational culture, one rooted in the science of improvement, if properly nurtured, may then emerge.

Yet, such a culture may initially jar with the everyday life of a conventional organisation, and the individuals within it. One of several reasons, according to Yuval Harari (21), is that for hundreds of generations our species has evolved such that imagined reality has been lorded over objective reality. Only relatively recently in our evolution has the advance of science been leveling up this imbalance, and in our papers we argue that a method is now needed that enables these two realities to more easily coexist.

We suggest that a method that enables data-rich evidence-based storytelling – by those who most know about the context and intend growing their collective knowledge – will provide the basis for an approach whereby the two realities may do just that.

In people’s working lives, a vital enabler is the 3-paradigm “Accountability/Improvement/Research” measurement model (AIRmm), reflecting the three archetypal ways in which people observe and measure things. It was created by healthcare professionals (23) to help their colleagues and policy-makers to unravel a commonly prevailing confusion, and to help people make better sense of the different approaches they may adopt when needing to evidence what they’re doing – depending on the specific purpose. An amended version of this model is already widely quoted inside the NHS, though this is not to imply that it is yet as widely understood or applied as it needs to be.


This 3-paradigm A-I-R measurement model underpins the way that science can be applied by, and has practical appeal for, the stretched healthcare professional, managerial leader, or civil servant.

Indeed for anyone who intuitively suspects there has to be a better way to combine goals that currently feel disconnected or even in conflict: empowerment and accountability; safety and productivity; assurance and improvement; compliance and change; extrinsic and intrinsic motivation; evidence and action; facts and ideas; logic and values; etc.

Indeed for anyone who is searching for ways to unify their actions with the system-based implementation of those actions as systemic interventions. Though widely quoted in other guises, we are returning to the original model (23) because we feel it better connects to the primary aim of supporting healthcare professionals to make best sense of their measurement options.

In particular the model makes it immediately plain that a way out of the apparent Research/Accountability dichotomy is readily available to anyone willing to “Learn, master and apply the modern methods of quality control, quality improvement and quality planning” – the recommendation made for all staff in the Berwick Report (3).

In many organisations, and not just in healthcare, the column 1 paradigm is the only game in town. Column 3 may feel attractive as a way-out, but it also feels inaccessible unless there is a graduate statistician on hand. Moreover, the mainstay of the Column 3 worldview – the Randomized Controlled Trial (RCT) – can feel altogether overblown and lacking in immediacy. It can feel like reaching for a spanner and finding a lump hammer in your hand – as Berwick says “Fans of traditional research methods view RCTs as the gold standard, but RCTs do not work well in many healthcare contexts” (2).

Like us, Ben is frustrated by the ways that healthcare organisations conduct themselves – not just the drug companies that commercialize science and publish only the studies likely to enhance sales, but governments too who commonly implement politically expedient policies only to then have to subsequently invent evidence to support them.

Policy-based evidence rather than evidence-based policy.

Ben’s recommended Column 3-style T-L-A approach is often more likely to make day-to-day sense to people and teams on the ground if complemented by Column 2-style improvement science.

One reason why Improvement Science can sometimes fail to dent established cultures is that it gets corralled by organisational “experts” – some of whom then use what little knowledge they have gathered merely to make themselves indispensable, not realising the extent to which everyone else as a consequence gets dis-empowered.

In our papers we take the opportunity to outline the philosophical underpinnings, and to do this we have borrowed the 7-point framework from a recent paper by Perla et al (35) who suggest that Improvement Science:

1. Is grounded in testing and learning cycles – the aim is collective knowledge and understanding about cause & effect over time. Some scientific method is needed, together with a way to make the necessary inquiry a collaborative one. Shewhart realised this and so invented the concept “continual improvement”.

2. Embraces a combination of psychology and logic – systemic learning requires that we balance myth and received wisdom with logic and the conclusions we derive from rational inquiry. This balance is approximated by the Sensing-Intuiting continuum in the Jungian-based MBTI model (12), reminding us that constructing a valid story requires bandwidth.

3. Has a philosophical foundation of conceptualistic pragmatism (16) – it cannot be expected that two scientists when observing, experiencing, or experimenting will make the same theory-neutral observations about the same event – even if there is prior agreement about methods of inference and interpretation. The normative nature of reality therefore has to be accommodated. Whereas positivism ultimately reduces the relation between meaning and experience to a matter of logical form, pragmatism allows us to ground meaning in conceived experience.

4. Employs Shewhart’s “theory of cause systems” – Walter Shewhart created the Control Chart for tuning-in to systemic behaviour that would otherwise remain unnoticed. It is a diagnostic tool, but by flagging potential trouble also aids real time prognosis. It might have been called a “self-control chart” for he was especially interested in supporting people working in and on their system being more considered (less reactive) when taking action to enhance it without over-reacting – avoiding what Deming later referred to as “Tampering” (4).

5. Requires the use of Operational Definitions – Deming warned that some of the most important aspects of a system cannot be expressed numerically, and those that can require care because “there is no true value of anything measured or observed” (5). When it comes to metric selection therefore it is essential to understand the measurement process itself, as well as the “operational definition” that each metric depends upon – the aim being to reduce ambiguity to zero.

6. Considers the contexts of both justification and discovery – Science can be defined as a process of discovery – testing and learning cycles built upon observation, measurement and accumulating evidence or experience – shared for example via a Flow Chart or a Gantt chart in order to justify a belief in the truth of an assertion. To be worthy of the term “science” therefore, a method or procedure is needed that is characterised by collaborative inquiry.

7. Is informed by Systems Theory – Systems Theory is the study of systems, any system: as small as a quark or as large as the universe. It aims to uncover archetypal behaviours and the principles by which systems hang together – behaviours that can be applied across all disciplines and all fields of research. There are several types of systems thinking, but Jay Forrester’s “System Dynamics” has most pertinence to Improvement Science because of its focus on flows and relationships – recognising that the behaviour of the whole may not be explained by the behaviour of the parts.

In the papers, we say more about this philosophical framing, and we also refer to the four elements in Deming’s “System of Profound Knowledge” (5). We especially want to underscore that the overall aim of any scientific method we employ is contextualised knowledge – which is all the more powerful if continually generated in context-specific experimental cycles. Deming showed that good science requires a theory of knowledge based upon ever-better questions and hypotheses. The two of us now aim to develop methods for building knowledge-full narratives that can work well in healthcare settings.

We wholeheartedly agree with Ben that for the public sector – not just in healthcare – policy-making needs to become more evidence-based.

In a poignant blog, the Health Foundation’s (HF) Richard Taunt (24) describes attending two conferences on the same day. At the first, policymakers from 25 countries had assembled to discuss how national policy can best enhance the quality of health care. When collectively asked which policies they would retain and repeat, their list included: use of data, building quality improvement capability, ensuring senior management are aware of improvement approaches, and supporting and spreading innovations.

In a different part of London, UK health politicians happened also to be debating Health and Care in order to establish the policy areas they would focus on if forming the next government. This second discussion brought out a completely different set of areas: the role of competition, workforce numbers, funding, and devolution of commissioning. These two discussions were supposedly about the same topic, but a Venn diagram would have contained next to no overlap.

Clare Allcock, also from the HF, then blogged to comment that “in England, we may think we are fairly advanced in terms of policy levers, but (unlike, for example, in Scotland or the USA) we don’t even have a strategy for implementing health system quality.” She points in particular to Denmark, which has recently announced it is phasing out its hospital accreditation scheme in favour of an approach strongly focused around quality improvement methodology and person-centred care. The Danes are in effect taking the 3-paradigm model and creating space for Column 2: improvement thinking.

The UK needs to take a leaf out of their book, for without changing fundamentally the way the NHS (and the public sector as a whole) thinks about accountability, any attempt to make Column 2 the dominant paradigm is destined to be stillborn.

It is worth noting that in large part the AIRmm Column 2 paradigm was actually central to the 2012 White Paper’s values, and with it the subsequent Outcomes Framework consultation – both of which repeatedly used the phrase “bottom-up” to refer to how the new system of accountability would need to work, but somehow this seems to have become lost in legislative procedures that history will come to regard as having been overly ambitious. The need for a new paradigm of accountability however remains – and without it health workers and clinicians – and the managers who support them – will continue to view metrics more as something intrusive than as something that can support them in delivering enhancements in sustained outcomes. In our view Stevens’ Five Year Forward View makes this new kind of accountability an imperative.

“Society, in general, and leaders and opinion formers, in particular, (including national and local media, national and local politicians of all parties, and commentators) have a crucial role to play in shaping a positive culture that, building on these strengths, can realise the full potential of the NHS.
When people find themselves working in a culture that avoids a predisposition to blame, eschews naïve or mechanistic targets, and appreciates the pressures that can accumulate under resource constraints, they can avoid the fear, opacity, and denial that will almost inevitably lead to harm.”
Berwick Report (3)

Changing cultures means changing our habits – it starts with us. It won’t be easy because people default to the familiar, to more of the same. Hospitals are easier to build than relationships; operations are easier to measure than knowledge, skills and confidence; and prescribing is easier than enabling. The two of us do not of course possess a monopoly on all possible solutions, but our experience tells us that now is the time for: evidence-rich storytelling by front line teams; by pharmaceutical development teams; by patients and carers conversing jointly with their physicians.

We know that measurement is not a magic bullet, but what frightens us is that the majority of people seem content to avoid it altogether. As Oliver Moody recently noted in The Times ..

Call it innumeracy, magical thinking or intrinsic mental laziness, but even intelligent members of the public struggle, through no fault of their own, to deal with statistics and probability. This is a problem. People put inordinate amounts of trust in politicians, chief executives, football managers and pundits whose judgment is often little better than that of a psychic octopus.     Short of making all schoolchildren study applied mathematics to A level, the only thing scientists can do about this is stick to their results and tell more persuasive stories about them.

Too often, Disraeli’s infamous words: “Lies, damned lies, and statistics” are used as the refuge of busy professionals looking for an excuse to avoid numbers.

If Improvement Science is to become a shared language, Berwick’s recommendation that all NHS staff “Learn, master and apply the modern methods of quality control, quality improvement and quality planning” has to be taken seriously.

As a first step we recommend enabling teams to access good data in as near to real time as possible – data that indicates the impact that one’s intervention is having – as this alone can prompt a dramatic shift in the type of conversation that people working in and on their system may have. Often this can be initiated simply by converting existing KPI data into System Behaviour Chart form which, using a tool like BaseLine®, takes only a few mouse clicks.
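For teams without such a tool, here is a minimal sketch of the XmR (individuals and moving range) calculation that a System Behaviour Chart is typically built on; the KPI values are illustrative placeholders.

```python
# XmR chart: natural process limits from a series of individual KPI values.
kpi = [23, 25, 21, 28, 24, 26, 22, 35, 24, 23, 27, 25]   # illustrative data

mean = sum(kpi) / len(kpi)
moving_ranges = [abs(b - a) for a, b in zip(kpi, kpi[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the standard XmR constant (3 / d2, where d2 = 1.128 for n = 2).
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

print(f"Mean {mean:.1f}, natural process limits [{lower:.1f}, {upper:.1f}]")
for period, value in enumerate(kpi, start=1):
    flag = "  <-- signal" if value > upper or value < lower else ""
    print(f"period {period:2d}: {value}{flag}")
```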

In our longer paper we offer three examples of Improvement Science in action – combining to illustrate how data may be used both to evidence sustained systemic enhancement and to generate engagement by the people most directly connected to what is systemically occurring in real time.

1. A surgical team using existing knowledge established by column 3-type research as a platform for column 2-type analytic study – to radically reduce post-operative surgical site infection (SSI).

2. 25 GP practices are required to collect data via the Friends & Family Test (FFT) and decide to experiment with being more than merely compliant. In two practices they collectively pilot a system run by their PPG (patient participation group) to study the FFT score – patient by patient – as they arrive each day. They use IS principles to separate signal from noise in a way that prompts the most useful response to the feedback in near to real time. Separately they summarise all the comments as a whole and feed their analysis into the bi-monthly PPG meeting. The aim is to address both “special cause” feedback and “common cause” feedback in a way that, in what most feel is an over-loaded system, can prompt sensibly prioritised improvement activity.

3. A patient is diagnosed with NAFLD and receives advice from their doctor to get more exercise e.g. by walking more. The patient uses the principles of IS to monitor what happens – using the data not just to show how they are complying with their doctor’s advice, but to understand what drives their personal mind/body system. The patient hopes that this knowledge can lead them to better decision-making and sustained motivation.

The landscape of NHS improvement and innovation support is fragmented, cluttered, and currently pretty confusing. Since May 2013, Academic Health Science Networks (AHSNs) funded by NHS England (NHSE) have been created with the aim of bringing together health services, and academic and industry members. Their stated purpose is to improve patient outcomes and generate economic benefits for the UK by promoting and encouraging the adoption of innovation in healthcare. They have a 5-year remit and have spent the first 2 years establishing their structures and recruiting; it is not yet clear if they will be able to deliver what’s really needed.

Patient Safety Collaboratives linked with AHSN areas have also been established to improve the safety of patients and ensure continual patient safety learning. The programme, coordinated by NHSE and NHSIQ will provide safety improvements across a range of healthcare settings by tackling the leading causes of avoidable harm to patients. The intention is to empower local patients and healthcare staff to work together to identify safety priorities and develop solutions – implemented and tested within local healthcare organisations, then later shared nationally.

We hope our papers will significantly influence the discussions about how improvement and innovation can assist with these initiatives. In the shorter paper, to echo Deming, we even include our own 14 points for how healthcare organisations need to evolve. We will know that we have succeeded if the papers are widely read; if we enlist activists like Ben to the definition of science embodied by Improvement Science; and if we see a tidal wave of improvement science methods being applied across the NHS.

As patient volunteers, we each intend to find ways of contributing in any way that appears genuinely helpful. It is our hope that Improvement Science enables the cultural transformation we have envisioned in our papers and with our case studies. This is what we feel most equipped to help with. When in your sixties it is easy to feel that time is short, but maybe people of every age should feel this way? In the words of Francis Bacon, the father of the scientific method.


Download Long Version



A common challenge is the need to balance the twin constraints of safety and cost.

Very often we see that making a system safer will increase its cost; and cutting costs leads to increased risk of harm.

So when budgets are limited and allowing harm to happen is a career limiting event then we feel stuck between a Rock and a Hard Place.

One root cause of this problem is the incorrect belief that ‘utilisation of capacity’ is a measure of ‘efficiency’ and the association of high efficiency with low cost. This then leads to another invalid belief that if we drive up utilisation then we will get a lower cost solution.

Let us first disprove the invalid belief with a simple thought experiment.

Suppose I have a surgical department with 100 beds and I want to run it at 100% utilisation but I also need to be able to admit urgent surgical patients without delay.  How would I do that?

Simple … just delay the discharge of all the patients who are ready for discharge until a new admission needs a bed … then do a ‘hot swap’.

This is a tried and tested tactic that surgeons have used for decades to ensure their wards are full with their patients and to prevent ‘outliers’ spilling over from other wards. It is called bed blocking.

The effect is that the length of stay of patients is artificially extended, which means that more bed-days are used to achieve the same outcome. So it is a less efficient design.

It also disproves the belief that utilisation is a measure of efficiency … in the example above utilisation went up while efficiency went down – and without causing a safety problem.
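Putting illustrative numbers on the thought experiment makes the distinction plain:

```python
# Bed-blocking thought experiment, with illustrative numbers: patients
# clinically need 5 bed-days, but discharge is delayed until a new
# admission needs the bed, stretching the actual stay to 7 bed-days.

beds = 100
needed_los, actual_los = 5.0, 7.0         # days

utilisation = 1.0                         # every bed always full, by design
efficiency = needed_los / actual_los      # minimum bed-days / actual bed-days
discharges_per_day = beds / actual_los    # Little's Law rearranged

print(f"Utilisation    : {utilisation:.0%}")          # 100%
print(f"Efficiency     : {efficiency:.0%}")           # ~71%
print(f"Discharges/day : {discharges_per_day:.1f}")   # 14.3 rather than 20.0
```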

So what is the problem here?

The problem is that we are confusing two different sorts of ‘capacity’ … space-capacity and flow-capacity.

And when we do that we invent and implement plausible sounding plans that are doomed to fail as soon as they hit the reality test.

So why do we continue to confuse these different sorts of capacity?

Because (a) we do not know any better and (b) we copy others who do not know any better and (c) we collectively fail to learn from the observable fact that our plausible plans do not seem to work in practice.

Is there a way out of this blind-leading-the-blind mess?

For sure there is.

But it requires a willingness to unlearn our invalid assumptions and replace them with valid (i.e. tested) ones.  And it is the unlearning that is the most uncomfortable bit.

Lack of humility is what prevents us from unlearning … our egos get in the way … they quite literally blind us to what is plain to see.

We also fear loss of face … and so we avoid threats to our reputations … we simply ignore the evidence of our ineptitude.  The problem of ‘hubris’ that Atul Gawande eloquently pointed out in the 2014 Reith Lectures.

And by so doing we achieve the very outcome we are so desperately trying to avoid … we fail.

Which is sad really because with just a pinch of humility we can so easily succeed.

A recurring theme this week has been the concept of ‘quality’.

And it became quickly apparent that a clear definition of quality is often elusive.

Which seems to have led to a belief that quality is difficult to measure because it is subjective and has no precise definition.

The science of quality improvement is nearly 100 years old … and it was shown a long time ago, in 1924 in fact, that it is rather easy to measure quality – objectively and scientifically.

The objective measure of quality is called “yield”.

To measure yield we simply ask all our customers this question:

“Did your experience meet your expectation?”

If the answer is ‘Yes’ then we count this as OK; if it is ‘No’ then we count it as Not OK.

Yield is the number of OKs divided by the number of customers who answered.
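In code the calculation is as simple as it sounds; the survey replies below are illustrative.

```python
# Yield: the fraction of customers whose experience met their expectation.
answers = ["Yes", "No", "Yes", "Yes", "No", "Yes", "Yes", "Yes"]
yield_pct = 100 * answers.count("Yes") / len(answers)
print(f"Yield = {yield_pct:.0f}%")   # 75%
```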

But this tried-and-tested way of measuring quality has a design flaw:

Where does a customer get their expectation from?

Because if a customer has an unrealistically high expectation then whatever we do will be perceived by them as Not OK.

So to consistently deliver a high quality service (i.e. high yield) we need to be able to influence both the customer experience and the customer expectation.

If we set our sights on a worthwhile and realistic expectation and we broadcast that to our customers, then we also need a way of avoiding their disappointment … that our objective quality outcome audit may reveal.

One way to defuse disappointment is to set a low enough expectation … which is, sadly, the approach adopted by naysayers,  complainers, cynics and doom-mongers. The inept.

That is not the path to either improvement or to excellence. It is the path to apathy.

A better approach is to set ourselves some internal standards of expectation and to check at each step if our work meets our own standard … and if it fails then we know we have some more work to do.

This commonly used approach to maintaining quality is called a check-and-correct design.

So let us explore the ramifications of this check-and-correct approach to quality.

Suppose the quality of the product or service that we deliver is influenced by many apparently random factors. And when we actually measure our yield we discover that the chance of getting a right-first-time outcome is about 50%.  This amounts to little more than a quality lottery and we could simulate that ‘random’ process by tossing a coin.

So to set a realistic expectation for future customers there are two further questions we need to answer:
1. How long can a typical customer expect to wait for our product or service?
2. How much can a typical customer expect to pay for our product or service?

It is not immediately and intuitively obvious what the answers to these questions are … so we need to perform an experiment to find out.

Suppose we have five customers who require our product or service … we could represent them as Post It Notes; and suppose we have a clock … we could measure how long the process is taking; and suppose we have our coin … we can simulate the yield of the step; … and suppose we do not start the lead time clock until we start the work for each customer.

We now have the necessary and sufficient components to assemble a simple simulation model of our system … a model that will give us realistic answers to our questions.

So let us see what happens … just click the ‘Start Game’ button.
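If the interactive game is not available, here is a minimal sketch that reproduces its logic: one task per customer, done sequentially, with a coin toss deciding whether each attempt is right-first-time. The four minutes per attempt is my assumption, so the scale of your numbers will differ, but the pattern will not.

```python
import random

def run_game(customers=5, minutes_per_attempt=4):
    """Check-and-correct: each task is done and then checked by tossing a
    coin; tails means rework, so another attempt (and another toss) is
    needed. Cost is the total number of coin tosses; the make-time clock
    runs from starting the first task to completing the last."""
    tosses = 0
    for _ in range(customers):
        while True:
            tosses += 1                    # every attempt costs one toss
            if random.random() < 0.5:      # heads: right-first-time
                break
    return tosses * minutes_per_attempt, tosses

# A dozen runs, as in the exercise above.
for run in range(1, 13):
    make_time, cost = run_game()
    print(f"run {run:2d}: make-time {make_time:3d}, cost {cost:2d}")
```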

It is worth running this exercise about a dozen times and recording the data for each run … then plotting the results on a time-series chart.

The data to plot is the make-time (which is the time displayed on the top left) and the cost (which is displayed top middle).

The make-time is the time from starting the first task to completing the last task.

The cost is the number of coin tosses we needed to do to deliver all work to the required standard.

And here are the charts from my dozen runs (yours will be different).



The variation from run to run is obvious; as is the correlation between a long make-time and a high cost.

The charts also answer our two questions … a make-time of up to 90 would not be exceptional, and an average cost of 10 implies that this is the minimum price we need to charge in order to stay in business.

Our customers are waiting while we check-and-correct our own errors and we are expecting them to pay for the extra work!

In the NHS we have a name for this low-quality high-cost design: Payment By Results.

The charts also show us what is possible … a make-time of 20 and a cost of 5.

That happened when, purely by chance, we tossed five heads in a row in the Quality Lottery.

So with this insight we could consider how we might increase the probability of ‘throwing a head’ i.e. doing the work right-first-time … because we can see from our charts what would happen.

We can see the improved quality and reduced cost that come from changing ourselves and our system to remove the root causes of our errors.

Quality Improvement-by-Design.

That is something worth learning how to do.

And can we honestly justify not doing it?

It was time for Bob and Leslie’s regular coaching session. Bob was already online when Leslie dialled in to the teleconference.

<Leslie> Hi Bob, sorry I am a bit late.

<Bob> No problem Leslie. What aspect of improvement science shall we explore today?

<Leslie> Well, I’ve been working through the Safety-Flow-Quality-Productivity cycle in my project and everything is going really well.  The team are really starting to put the bits of the jigsaw together and can see how the synergy works.

<Bob> Excellent. And I assume they can see the sources of antagonism too.

<Leslie> Yes, indeed! I am now up to the point of considering productivity and I know it was introduced at the end of the Foundation course but only very briefly.

<Bob> Yes, productivity was described as a system metric. A ratio of a stream metric and a stage metric … what we get out of the streams divided by what we put into the stages.  That is a very generic definition.

<Leslie> Yes, and that I think is my problem. It is too generic and I get it confused with concepts like efficiency.  Are they the same thing?

<Bob> A very good question and the short answer is “No”, but we need to explore that in more depth.  Many people confuse efficiency and productivity, and I believe that is because we learn the meaning of words from the context that we see them used in. If others use the words imprecisely then it generates discussion, antagonism and confusion, and we are left with the impression that it is a ‘difficult’ subject.  The reality is that it is not difficult when we use the words in a valid way.

<Leslie> OK. That reassures me a bit … so what is the definition of efficiency?

<Bob> Efficiency is a stream metric – it is the ratio of the minimum cost of the resources required to complete one task divided by the actual cost of the resources used to complete one task.

<Leslie> Um.  OK … so how does time come into that?

<Bob> Cost is a generic concept … it can refer to time, money and lots of other things.  If we stick to time and money then we know that if we have to employ ‘people’ then time will cost money, because people need money to buy the essential stuff that they need for survival. Water, food, clothes, shelter and so on.

<Leslie> So we could use efficiency in terms of resource-time required to complete a task?

<Bob> Yes. That is a very useful way of looking at it.

<Leslie> So how is productivity different? Completed tasks out divided by cash in to pay for resource time would be a productivity metric. It looks the same.

<Bob> Does it?  The definition of efficiency is possible cost divided by actual cost. It is not the same as our definition of system productivity.

<Leslie> Ah yes, I see. So do others define productivity the same way?

<Bob> Try looking it up on Wikipedia …

<Leslie> OK … here we go …

“Productivity is an average measure of the efficiency of production. It can be expressed as the ratio of output to inputs used in the production process, i.e. output per unit of input”.

Now that is really confusing!  It looks like efficiency and productivity are the same. Let me see what the Wikipedia definition of efficiency is …

“Efficiency is the (often measurable) ability to avoid wasting materials, energy, efforts, money, and time in doing something or in producing a desired result”.

But that is closer to your definition of efficiency – the actual cost is the minimum cost plus the cost of waste.

<Bob> Yes.  I think you are starting to see where the confusion arises.  And this is because there is a critical piece of the jigsaw missing.

<Leslie> Oh … and what is that?

<Bob> Worth.

<Leslie> Eh?

<Bob> Efficiency has nothing to do with whether the output of the stream has any worth.  I can produce a worthless product with low waste … in other words very efficiently.  And what if we have the situation where the output of my process is actually harmful?  The more efficiently I use my resources the more harm I will cause from a fixed amount of resource … and in that situation it is actually safer to have a very inefficient process!

<Leslie> Wow!  That really hits the nail on the head … and the implications are … profound.  Efficiency is objective and relates only to flow … and between flow and productivity we have to cross the Safety-Quality line. Productivity also includes the subjective concept of worth or value. That all makes complete sense now. A productive system is a subjectively and objectively win-win-win design.

<Bob> Yup.  Get the safety, flow and quality perspectives of the design in synergy and productivity will sky-rocket. It is called a Fit-4-Purpose design.
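For those who like to see the arithmetic, here is a minimal sketch in Python. Every number in it is invented for illustration; only the two ratios come from the definitions in the dialogue above.

```python
# A minimal sketch of the two metrics. Every number here is invented;
# only the two ratios come from the definitions in the dialogue.

minimum_cost = 25.0   # minimum possible resource cost to complete one task
actual_cost = 40.0    # resource cost actually consumed per task

# Efficiency: a stream metric, always between 0 and 1.
efficiency = minimum_cost / actual_cost
print(f"efficiency = {efficiency:.1%}")      # 62.5% ... the remainder is waste

# Productivity: a system metric ... worth flowing out of the streams
# divided by cost flowing into the stages.
tasks_completed = 120    # stream output over some period
worth_per_task = 50.0    # worth as judged by the recipient (subjective)
cash_in = 4800.0         # cost paid into the stages over the same period

productivity = (tasks_completed * worth_per_task) / cash_in
print(f"productivity = {productivity:.2f}")  # 1.25 units of worth per unit cost

# Set worth_per_task to zero and efficiency is untouched,
# but productivity collapses to zero. They are not the same thing.
```

The last two comment lines are Bob’s point in miniature: efficiency is blind to worth, productivity is not.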

Improvement implies learning.  And to learn we need feedback from reality because without it we will continue to believe our own rhetoric.

So reality feedback requires both sensation and consideration.

There are many things we might sense, measure and study … so we need to be selective … we need to choose those things that will help us to make the wise decisions.

Wise decisions lead to effective actions which lead to intended outcomes.

Many measures generate objective data that we can plot and share as time-series charts.  Pictures that tell an evolving story.

There are some measures that matter – our intended outcomes for example. Our safety, flow, quality and productivity charts.

There are some measures that do not matter – the measures of compliance for example – the back-covering blame-avoiding management-by-fear bureaucracy.

And there are some things that matter but are hard to measure … objectively at least.

We can sense them subjectively though.  We can feel them. If we choose to.

And to do that we only need to go to where the people are and the action happens and just watch, listen, feel and learn.  We do not need to do or say anything else.

And it is amazing what we learn in a very short period of time. If we choose to.

If we enter a place where a team is working well we will see smiles and hear laughs. It feels magical.  They will be busy and focused and they will show synergism. The team will be efficient, effective and productive.

If we enter a place where a team is not working well we will see grimaces and hear gripes. It feels miserable. They will be busy and focused but they will display antagonism. The team will be inefficient, ineffective and unproductive.

So what makes the difference between magical and miserable?

The difference is the assumptions, attitudes, prejudices, beliefs and behaviours of those that they report to. Their leaders and managers.

If the culture is management-by-fear (a.k.a. bullying) then the outcome is unproductive and miserable.

If the culture is management-by-fearlessness (a.k.a. inspiring) then the outcome is productive and magical.

It really is that simple.

Many organisations proclaim that their mission is to achieve excellence but then proceed to deliver mediocre performance.

Why is this?

It is certainly not from lack of purpose, passion or people.

So the flaw must lie somewhere in the process.

The clue lies in how we measure performance … and to see the collective mindset behind the design of the performance measurement system we just need to examine the key performance indicators or KPIs.

Do they measure failure or success?

Let us look at some from the NHS … hospital mortality, hospital acquired infections, never events, 4-hour A&E breaches, cancer wait breaches, 18 week breaches, and so on.

In every case the metric reported is a failure metric. Not a success metric.

And the focus of action is getting away from failure.

Damage mitigation, damage limitation and damage compensation.

So we have the answer to our question: we know we are doing a good job when we are not failing.

But are we?

When we are not failing we are not doing a bad job … is that the same as doing a good job?

Q: Does excellence = not excrement?

A: No. There is something between these extremes.

The succeed-or-fail dichotomy is a distorting simplification created by applying an arbitrary threshold to a continuous measure of performance.
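A few lines of Python make the point. The waiting times below are invented; the only real ingredient is the 18-week threshold:

```python
# Illustration of how an arbitrary threshold discards information.
# The waiting times are invented for the example.
waits_weeks = [3, 7, 11, 15, 17.9, 18.1, 26, 40]
target = 18

breaches = sum(w > target for w in waits_weeks)
print(f"{breaches} breaches out of {len(waits_weeks)} patients")   # 3 out of 8

# The pass/fail flag cannot tell the 18.1-week wait from the 40-week wait,
# and gives no credit for reducing a 17.9-week wait to 3 weeks.
```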

And how, specifically, have we designed our current system to avoid failure?

Usually by imposing an arbitrary target connected to a punitive reaction to failure. Management by fear.

This generates punishment-avoidance and back-covering behaviour which is manifest as a lot of repeated checking and correcting of the inevitable errors that we find.  A lot of extra work that requires extra time and that requires extra money.

So while an arbitrary-target-driven-check-and-correct design may avoid failing on safety, the additional cost may cause us to then fail on financial viability.

Out of the frying pan and into the fire.

No wonder Governance and Finance come into conflict!

And if we do manage to pull off an uneasy compromise … then what level of quality are we achieving?

Studies show that if we take a random sample of 100 people from the pool of ‘disappointed by their experience’ and we ask if they are prepared to complain then only 5% will do so.

So if we use complaints as our improvement feedback loop and we react to that and make changes that eliminate these complaints then what do we get? Excellence?


We get what we designed … just good enough to eliminate the 5% who complain but not the 95% who remain silently disappointed.

We get mediocrity.
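A toy worked example shows the arithmetic. The population size and the disappointment rate below are invented; only the 5% complaint rate comes from the studies quoted above:

```python
# A toy worked example. The population and disappointment rate are
# invented; the 5% complaint rate is the figure quoted above.
patients = 1000
disappointed = 200        # suppose 20% have an experience below expectation
complaint_rate = 0.05     # only 5% of the disappointed will complain

complaints = int(disappointed * complaint_rate)
print(f"visible complaints: {complaints}")    # 10

# A design tuned to eliminate just these 10 complaints leaves
# about 190 silently disappointed patients untouched. Mediocrity.
```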

And what do we do then?

We start measuring ‘customer satisfaction’ … which is actually asking the question ‘did your experience meet your expectation?’

And if we find that satisfaction scores are disappointingly low then how do we improve them?

We have two choices: improve the experience or reduce the expectation.

But as we are very busy doing the necessary checking-and-correcting then our path of least resistance to greater satisfaction is … to lower expectations.

And we do that by donning the black hat of the pessimist and laying out the risks and dangers.

And by doing that we generate anxiety and fear.  Which was not the intended outcome.

Our mission statement proclaims ‘trusted to achieve excellence’ not ‘designed to deliver mediocrity’.

But mediocrity is what the evidence says we are delivering. Just good enough to avoid a smack from the Regulators.

And if we are honest with ourselves then we are forced to conclude that:

A design that uses failure metrics as the primary feedback loop can achieve no better than mediocrity.

So if we choose to achieve excellence then we need a better feedback design.

We need a design that uses success metrics as the primary feedback loop, and uses failure metrics only in safety-critical contexts.

And the ideal people to specify the success metrics are those who feel the benefit directly and immediately … the patients who receive care and the staff who give it.

Ask a patient what they want and they do not say “To be treated in less than 18 weeks”.  In fact I have yet to meet a patient who has even heard of the 18-week target!

A patient will say ‘I want to know what is wrong, what can be done, when it can be done, who will do it, what do I need to do, and what can I expect to be the outcome’.

Do we measure any of that?

Do we measure accuracy of diagnosis? Do we measure use of best evidenced practice? Do we know the possible delivery time (not the actual)? Do we inform patients of what they can expect to happen? Do we know what they can expect to happen? Do we measure outcome for every patient? Do we feed that back continuously and learn from it?


So … if we choose and commit to delivering excellence then we will need to start measuring-4-success and feeding what we see back to those who deliver the care.

Warts and all.

So that we know when we are doing a good job, and we know where to focus further improvement effort.

And if we abdicate that commitment and choose to deliver mediocrity-by-default then we are the engineers of our own chaos and despair.

We have the choice.

We just need to make it.

There is a condition called SFQPosis which is an infection that is transmitted by a vector called an ISP.

The primary symptom of SFQPosis is sudden clarity of vision and a new understanding of how safety, flow, quality and productivity improvements can happen at the same time …

… when they are seen as partners on the same journey.

There are two sorts of ISP … Solitary and Social.

Solitary ISPs infect one person at a time … often without them knowing.  And there is often a long lag time between the infection and the appearance of symptoms. Sometimes years – and often triggered by an apparently unconnected event.

In contrast the Social ISPs will tend to congregate together and spend their time foraging for improvement pollen and nectar and bringing it back to their ‘hive’ to convert into delicious ‘improvement honey’ which once tasted is never forgotten.

It appears that Jeremy Hunt, the Secretary of State for Health, has recently been bitten by an ISP and is now exhibiting the classic symptoms of SFQPosis.

Here is the video of Jeremy describing his symptoms at the recent NHS Confederation Conference. The talk starts at about 4 minutes.

His account suggests that he was bitten while visiting the Virginia Mason Hospital in the USA and, on returning home, discovered some Improvement hives in the UK … and some of the Solitary ISPs that live in England.

Warwick and Sheffield NHS Trusts are buzzing with ISPs … and the original ISP that infected them was one Kate Silvester.

The repeated message in Jeremy’s speech is that improved safety, quality and productivity can happen at the same time and are within our gift to change – and the essence of achieving that is to focus on flow.

The sequence is safety first (eliminate the causes of avoidable harm), then flow second (eliminate the causes of avoidable chaos), then quality (measure both expectation and experience) and then productivity will soar.

And everyone will benefit.

This is not a zero-sum win-lose game.

So listen for the buzz of the ISPs … follow it and ask them to show you how … ask them to inoculate you with SFQPosis.

And here is a recent video of Dr Steve Allder, a consultant neurologist and another ISP that Kate infected with SFQPosis a few years ago.  Steve is describing his own experience of learning how to do Improvement-by-Design.

One of the traps for the less experienced improvement scientist is to take on a project that is too ambitious, too early.

The success with a “small” project will attract the attention of those with an eye on a bigger prize and it is easy to be wooed by the Siren call to sail closer to their Rocks.

This is a significant danger and a warning flag needs to be waved.


Organisations can only take on these bigger challenges after they have developed enough improvement capability themselves … and that takes time and effort.  It is not a quick fix.

And it makes no difference how much money is thrown at the problem.  The requirement is for the leaders to learn how to do it first and that does not take long to do … but it does require some engagement and effort.

And this is difficult for busy people to do … but it is not impossible.

The questions that need to be asked repeatedly are:

1. Is this important enough to dedicate some time to?  If not then do not start.

2. What can I do in the time I can dedicate to this? Delegation is abdication when it comes to improvement.

Those who take on too big a project too early will find it is like being chained to a massive weight … and it gets heavier over time as others add their problems to your heap in the belief that delegating a problem is the same as solving it. It isn’t.


So if your inner voice says “This feels too big for me” then listen to it and ask it what specifically is creating that feeling … work backwards from the feeling.  And only after you have identified the root causes can you make a rational decision.

Then make the decision and stick to it … explaining your reasons.


A commonly used technique for continuous improvement is the Plan-Do-Study-Act or PDSA cycle.

This is a derivative of the PDCA cycle first described by Walter Shewhart in the 1930s … where C is Check.

The problem with PDSA is that improvement does not start with a plan, it starts with some form of study … so SAPD would be a better order.

To illustrate this point, if we look at the IHI Model for Improvement … the first step is a pair of questions related to purpose: “What are we trying to accomplish?” and “How will we know a change is an improvement?”

With these questions we are stepping back and studying our shared perspective of our desired future.

It is a conscious and deliberate act.

We are examining our mental models … studying them … and comparing them.  We have not reached a diagnosis or a decision yet, so we cannot plan or do yet.

The third question is a combination of diagnosis and design … we need to understand our current state in order to design changes that will take us to our improved future state.

We cannot plan what to do or how to do it until we have decided and agreed what the future design will look like, and tested that our proposed future design is fit-4-purpose.

So improvement by discovery or by design does not start with plan, it starts with study.

And another word for study is ‘sense’ which may be a better one … because study implies a deliberate, conscious, often slow process … while sense is not so restrictive.

Very often our actions are not the result of a deliberative process … they are automatic and reflex. We do not think about them. They just sort of happen.

The image of the knee-jerk reflex illustrates the point.

In fact we have little conscious control over these automatic motor reflexes which respond much more quickly than our conscious thinking process can.  We are aware of the knee jerk after it has happened, not before, so we may be fooled into thinking that we ‘Do’ without a ‘Plan’.  But when we look in more detail we can see the sensory input and the hard-wired ‘plan’ that links it to the motor output.  Study-Plan-Do.

The same is true for many other actions – our unconscious mind senses, processes, decides, plans and acts long before we are consciously aware … and often the only clue we have is a brief flash of emotion … and usually not even that.  Our behaviour is largely habitual.

And even in situations when we need to make choices, the sense-recognise-act process is fast … such as when a patient suddenly becomes very ill … we switch into Resuscitate mode, which is a pre-planned sequence of steps guided by what we are sensing … but it is not made up on the spot. There is no committee. No meetings. We just do what we have learned and practised how to do … because it was designed that way.  It still starts with Study … it is just that the Study phase is very short … we just need enough information to trigger the pre-prepared plan. ABC – Airway … Breathing … Circulation. No discussion. No debate.

So, improvement starts with Study … and depending on what we sense what happens next will vary … and it will involve some form of decision and plan.

1. Unconscious, hard-wired, knee jerk reflex.
2. Unconscious, learned, habitual behaviour.
3. Conscious, pre-planned, steered response.
4. Conscious, deliberation-diagnosis-design then delivery.

The difference is just the context and the timing.   They are all Study-Plan-Do.

And the Plan may be to Do Nothing … the Deliberate Act of Omission.

And when we go-and-see and study the external reality we sometimes get a surprise … what we see is not what we expect. We feel a sense of confusion. And before we can plan we need to adjust our mental model so that it better matches reality. We need to establish clarity.  And in this situation we are doing Study-Adjust-Plan-Do … S(A)PD.
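For the programmers, here is a toy rendering of that loop in Python. Reducing the ‘mental model’ to a single expected value and the ‘surprise’ test to a simple threshold are drastic simplifications, invented purely for illustration:

```python
# A toy rendering of the S(A)PD loop. The 'mental model' is reduced to a
# single expected value and 'surprise' to a threshold test; both are
# invented simplifications, not part of the original description.

def sapd(expected: float, observed: float, tolerance: float = 0.1) -> float:
    # Study: sense reality first.
    surprise = abs(observed - expected) > tolerance
    # Adjust: if reality does not match the model, update the model.
    if surprise:
        expected = observed
    # Plan: decide what to do ... which may be the deliberate act of omission.
    plan = "act" if expected > 0.5 else "do nothing"
    # Do: carry out the plan (here we just report it).
    print(f"observed={observed:.2f}  surprise={surprise}  plan={plan}")
    return expected

model = 0.0
for reading in (0.2, 0.8, 0.75):   # made-up observations
    model = sapd(model, reading)
```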

Systems are made up of inter-dependent parts. And each part is a smaller system made up of inter-dependent parts. And so on.

But there is a limit … eventually we reach a size where we only have a small number of independent parts … and that is called a micro-system.

It is part of a meso-system which in turn is part of a macro-system.

And it appears that in human systems the manageable size of a micro-system is about seven people – enough to sit around a table and work together on a problem.

So the engine of organisational improvement is many micro-systems of about seven people who are able to solve the problems that fall within their collective circles of control.

And that means the vast majority of problems are solvable at the micro-system level.

In fact, without this foundation level of competent and collaborative micro-teams, the meso-systems and the macro-systems cannot get a grip on the slippery problem of systemic change for the better.

The macro-system is also critical to success because it has the strategic view and it sets the vision and values to which every other part of the system aligns.  A dysfunctional macro-system sends cracks down through the whole organisation … fragmenting it into antagonistic, competing silos.

The meso-system level is equally critical to success because it translates the strategy into tactics and creates the context for the multitude of micro-systems to engage.

The meso-system is the nervous system of the organisation … the informal communication network that feeds data and decisions around.

And if the meso-system is dysfunctional then the organisation can see, feel and move … but it is uncoordinated, chaotic, inefficient, ineffective and largely unproductive.

So the three levels are different, essential and inter-dependent.

The long term viability of a complex adaptive system is the emergent effect of a system design that is effective and efficient. Productive. Collaborative. Synergistic.

And achieving that is not easy … but it is possible.

And for each of us it starts with just us … Mono.