Causality


Linear causality is the simplest type of causal relationship between events, usually involving a single cause that produces a single effect, or a straightforward causal chain that flows from past to future (A + B -> C). Linear causality implies a one-directional flow of causation between the micro and macro levels of a system, where higher-level effects are caused solely by lower-level phenomena. According to the notion of linear causality, anything that happens within a system can be directly attributed to some previous occurrence within the same system. This implies that if you get the input right, you can define the output, and that you should be able to forecast or backcast to a future state: forecasting projects the present forward, while backcasting starts from a desired future state and works out how to close the gaps. Linear causal thinking is related to reductionism; that is, it tries to explain the whole in terms of its parts through causal laws. Assuming that every effect has a cause, and that like causes have like effects, then all effects (events) must have a causal explanation, and all causes reduce to efficient causes, regularities in the form of causal laws.
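
A minimal sketch of this assumption, in Python (a hypothetical illustration, not from the source): if a system really were governed by linear causality it would behave like a pure function, so identical inputs would always produce identical outputs, forecasting would be running the rule forward, and backcasting would be inverting it.

```python
# Hypothetical illustration: linear causality treats a system as a pure
# function, so identical inputs always produce identical outputs.

def effect(a: float, b: float) -> float:
    """A + B -> C: a single, repeatable causal rule."""
    return a + b

# Forecasting: run the rule forward from known inputs.
assert effect(2, 3) == effect(2, 3) == 5

# Backcasting: fix the desired output and solve for the missing input;
# this only works because the rule is simple and invertible.
desired_c, known_a = 10.0, 4.0
required_b = desired_c - known_a
assert effect(known_a, required_b) == desired_c
```

Complex adaptive systems, discussed below, satisfy neither assumption: the "rule" itself shifts with context, so neither direction of inference is available.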

Framework

In a complex adaptive system you cannot say: if we do x it will produce result y. Complex adaptive systems are not like recipes; cause-and-effect relationships do not repeat, because the context in which the relations sit is never the same twice. A CAS and its relations are to be understood and explained in terms of dispositionality, not linear causality. An example of this in the natural world is evolution by natural selection, with its unpredictable outcomes: even if we could start with the same initial conditions, evolution would play out differently.

We can make statements about the probability of evolutionary pathways, but we cannot make linear causal statements. In complexity, mutually ontologically diverse systems, or even ontologically contradictory states, co-exist. The only thing we can say for certain in a complex system is that an intervention will produce unintended consequences, so it makes sense to design interventions on that basis. A path diagram is hugely valuable as long as you realise that it is a partial abstraction and cannot represent the totality of the system in practice. It is also important to realise that even in a deeply entangled system there are some cause-and-effect pathways.
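
A toy numerical illustration of this non-repeatability (an assumed example, not from the source): even a fully deterministic non-linear rule amplifies vanishingly small differences in starting conditions, which is why probabilistic statements about pathways remain possible while point forecasts do not.

```python
# Assumed illustration: the logistic map, a deterministic rule whose
# trajectories diverge under a tiny perturbation of the starting state.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

a, b = 0.300000, 0.300001  # two almost-identical initial conditions
for _ in range(40):
    a, b = logistic(a), logistic(b)

# After a few dozen steps the trajectories bear no resemblance to each
# other: knowing the rule and (almost) the initial state still does not
# let us forecast the outcome.
print(f"{a:.4f} vs {b:.4f}")
```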

Learning can still be achieved in many ways, and excluding more traditional and structured approaches is almost as bad as claiming those approaches are all that is needed. We also need to establish facts where possible. One element common to all of this, however, is to understand the context and to test for coherence.

At a systems level, we have dispositions (the way in which something is placed or arranged in relation to other things) which indicate a set of possibilities and plausibilities, but a future state cannot be predicted. How things connect is more important than the nature of the things themselves; this is especially true when you are trying to change attitudes and beliefs. The main consequence is that analysis will not reveal an answer; instead, we need to shift to parallel safe-to-fail experiments, whose purpose is to allow resilience and sustainable practice to emerge. There are a number of things that can be done to the dispositions of the system to affect their manifestations, and three seem most central: interfering, triggering, and enhancing or retarding.
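
As a schematic sketch (hypothetical names and thresholds, not a Cynefin artefact), parallel safe-to-fail experiments might be coordinated like this: several cheap probes run at once, each with amplification and dampening criteria agreed before it starts.

```python
# Schematic sketch (hypothetical, not a Cynefin artefact): parallel
# safe-to-fail probes with pre-agreed amplify/dampen criteria.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Probe:
    name: str
    observe: Callable[[], float]  # returns an observed signal in [0, 1]
    amplify_above: float          # pre-agreed amplification threshold
    dampen_below: float           # pre-agreed dampening threshold

def review(probes: list[Probe]) -> None:
    """Review all probes together; none is a bet-the-farm intervention."""
    for p in probes:
        signal = p.observe()
        if signal >= p.amplify_above:
            print(f"{p.name}: amplify")
        elif signal <= p.dampen_below:
            print(f"{p.name}: dampen and recover")
        else:
            print(f"{p.name}: keep observing")

review([
    Probe("peer-mentoring pilot", lambda: 0.8, 0.7, 0.2),
    Probe("open office hours", lambda: 0.1, 0.7, 0.2),
])
```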

The system's dispositionality and behaviour are modulated by constraints. Those constraints can be managed and their nature understood, or our lack of knowledge acknowledged (dark constraints).

Although we discarded the notion of formal and final causes from Aristotle, we did keep the notion that nothing causes itself. But this is a bit problematic when we think of parts interacting to produce wholes which in turn constrain their parts. – Alicia Juarrero

What this implies is that there are no causal laws that can give rise to evolution and adaptation in complex systems. Yet it is of the essence of a complex system that constraints of some type are necessary for emergence to happen.

Common misunderstandings

[Image: Linear causality approach]
  • The desire for linearity, which is the essence of traditional approaches, is disturbing given that nature is entangled and human nature both entangled and entwined! A feedback loop drawn on anything remotely resembling a flow chart is not complex in itself; it is an attempt to claim complexity for an ordered approach. Thinking in terms of dispositions and propensities requires a mental leap, but once made, life gets a lot easier. Moreover, causal theories and their underlying laws exclude exaptation: a dinosaur's feathers evolve in a linear way for warmth and possibly for sexual display; one day, dinosaurs with lots of feathers start to fall off trees and glide, and that is how we get flight. How do we manage for serendipity? How do we manage for exaptation? Not with a structured linear process with outcome-based targets; that will limit accidental encounters.
  • Defining/designing future states and outcomes. The general approach in organisations, as well as society, is to define the desired behaviours and outcomes, then set targets and prescribe the practices meant to achieve them. If those don't work, and they generally don't, the enforcement becomes more draconian and we get into quotas and the like. The main problem here is that the outcome itself is assumed, and it will generally reflect the aspirations of the dominant culture: everything will be fine if you all end up like us. This reduces complexity to simple problems with simple outcomes and a simple path for getting there. Managing the present to create a new direction of travel is more important than creating false expectations about how things could be in the future, which is what most management science has been doing for the past 40 years: a series of fads, each of which claims to have found the solution to the problem of life, the universe, and everything. They're all fictional constructs.
  • Causal relations and evidence: we evolved to make decisions very quickly based on a partial data scan, privileging our most recent experiences. This undermines our traditional approach to evidence-based decisions.
  • Simulation is powerful, but there is a tendency to confuse simulation with prediction, just as an earlier generation confused correlation with causation. Some of the AI people are now actually arguing that correlation is causation, which is deeply worrying. A Gaussian–Pareto distinction underlies the shift from anticipation to anticipatory triggers (a numerical sketch follows this list). If you're in a Pareto world, the best you can do is trigger human beings to a heightened state of alert when they need to look at something. What you need is the ability to summarize agency and non-agency perspectives in real time, not through consultation.
  • You cannot manage and scale emergence; you can only create conditions that enable emergence.
  • One of the challenges faced by many people and organisations is the mis-assumption of the correct context, and the subsequent misapplication of management for that context. A good example is the current pandemic. It has clearly shown elements of unorder, both chaotic (requiring crisis management) and complex (requiring probing and sensemaking). Yet many organisations, leaders, and people have fallen back into familiar patterns of managing order based on causality.
  • Confusing correlation with causation may result from physics envy, but it is a real problem in management science. There is some evidence that this is part of what we are as a species, and that it is more likely when aspects are presented in a distinctive or unusual way.
  • Confusing symptoms with causes: an example is confusing creativity with innovation, where the former is a symptom of the latter. If we focus on an observational/deductive model of research, this is a danger: healthy people meditate, so if we meditate we will be healthy, and so on.
  • We tend to look for the root cause of something. There are no root causes, no initial conditions, and no context-free situations in complex adaptive systems; patterns emerge as a result of a number of different interconnected and interdependent elements and structures. In many organisations it is 'good practice' to do a root cause analysis early in a project. The logic is simple: we want to go beyond tackling mere symptoms and discover the 'deeper' reasons for these patterns (to prevent future failure) – the root causes. The method used to find a root cause for a particular symptom is to keep asking 'why does this happen?' until one hits some sort of bottom. Once this root cause is found, you design some interventions to 'fix' it, expecting that once it is fixed, the symptom will also disappear. First, you define the problem, and then you create a timeline showing connections between events. Once you have that, you seek to establish the difference between cause and coincidence, remembering at all times the danger of the post hoc ergo propter hoc fallacy: believing that because something followed something else, there is a causal relationship. The narrowing down of pathways through analysis contains some of the problems of Delphi techniques, where weak signals can be eliminated early. There is also a danger of not spotting a catalyst in the 'causal' chains as you construct them.
  • Other key issues include the need to move towards thinking about plausibility rather than probability; the shift from anticipation to anticipatory awareness; internal versus natural complexity; and the fact that distributed cognition is not distributed decision making.
  • Best practice fallacy: We see something working in one context and assume it will work and is scalable in another. We need to understand context before we imitate practice. If you don’t understand why, you shouldn’t replicate what.
  • An obsession with measurement. People focus on the measure, regardless of the consequences for the thing being measured. Business managers regard the target as a sacred object and its achievement, regardless of consequence, as the only goal. Judgement is sacrificed on the altar of the explicit. Of course, if the degree of modulation is limited and if causality is linear (repeating and predictable), then there is no issue with measures becoming targets: the achievement of the target is the goal and there is direct correspondence between the target and the thing being measured. Even then there will be exceptions, but in the main the KPIs (to reference the most common manifestation) are all we need. 'When a measure becomes a target, it ceases to be a good measure' (Strathern's variation on Goodhart's Law). Goodhart's original argument was that statistical instruments should not be used for policy purposes; he was not opposed to measurement, but to making a measurement system a goal or target. Given that measurement is the current orthodoxy, the most effective way to change is to create measurement systems that are authentic to the thing being measured. A quantitative approach to understanding the dispositional state of the present became possible with SenseMaker®. Outcome targets would work for the complicated domain of Cynefin; vector targets for the complex.
  • Evidence-based policy has become an industry in its own right. So a government department looking at mindfulness to deal with motivational factors commissions research to see if there is an evidence base. The organisation awarded the contract already knows what the goal is, so it focuses its search on material that links mindfulness to motivation, which means a partial selection from the field and a reading predisposed to a conclusion. Especially as, these days, the junior research staff may have less than fifteen minutes to skim each article in a search and draw conclusions.
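
The Gaussian–Pareto distinction mentioned above can be made concrete with a small numerical sketch (an assumed illustration, not from the source): in a Gaussian world the mean anticipates what you will see, while in a Pareto (heavy-tailed) world a single event can dwarf it, so the realistic aim is an anticipatory trigger that raises human alertness when an observation is unusual.

```python
# Assumed illustration: anticipation works in a Gaussian world; in a
# Pareto world only anticipatory triggers (alerting a human) make sense.

import random

random.seed(1)
gaussian = [random.gauss(100, 15) for _ in range(10_000)]
pareto = [100 * random.paretovariate(1.5) for _ in range(10_000)]

for name, xs in (("gaussian", gaussian), ("pareto", pareto)):
    mean, largest = sum(xs) / len(xs), max(xs)
    # Gaussian: the largest value sits close to the mean, so forecasting
    # from the mean is useful. Pareto: one observation can be orders of
    # magnitude above the mean, so we alert a human instead of forecasting.
    alerts = sum(x > 5 * mean for x in xs)
    print(f"{name}: mean={mean:.0f}, largest={largest:.0f}, alerts={alerts}")
```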


Core theory

One of the key principles of naturalised sense-making is that things exist independently of human perception; in particular, things can be known (which is not the same as 'I can know') independently of that perception. Our perception is not necessarily aligned with reality, or for that matter with the epistemological understanding of the system. We can see things in different ways, and we can experience uncertainty with radically differing levels of confidence. Cynefin can be represented in many ways to make sense of different aspects of human activity and perception, but at its heart is the assumption that there is a reality even if we don't know it.

One of the basic principles of Cynefin is the goal of maximising the alignment between what things are, how we know things, and how we perceive them: ontology, epistemology, and phenomenology. The first version of Cynefin arose in the field of knowledge management and had strong originating links to Boisot's I-Space. It was explicitly focused on the nature of knowledge, distinguishing between situations where only experts know and those where knowledge is commonly understood and the consequent actions are not in dispute. The Complex domain at that stage focused on informal networks and distributed forms of knowledge and acts of knowing; in chaos, knowledge would always be novel in nature. That in turn built on various versions of the known–knowable–unknowable/unimaginable matrix.

An ordered system is one which has a very high level of constraint, to the point where everything is predictable. The relation between cause and effect is self-evident: everybody understands it, everybody buys into it. That's the domain of best practice. We sense, categorize, and respond, under rigid constraints. The point of the Cynefin framework is to say that human beings have learned, for example in traffic management or operating theatres, to create highly predictable Newtonian systems. There's nothing wrong with them, provided we don't think they're universal.

In a complicated system, cause-and-effect relations may be obvious to experts, but not to the decision-maker. They're not self-evident, so you have to carry out some sort of investigation and bring in expertise. You effectively sense, analyze, and respond; there is a right answer and it can be discovered, though perhaps only within a range of possibility. It may not have to be precise, which is why we talk about this as the domain of good practice, not best practice. A complicated system is the sum of its parts: you can solve problems by breaking things down and solving them separately.

In a complex system, on the other hand, everything is somehow or other connected with everything else, but the connections aren't fully known. The only way to understand it is to probe it, to experiment in it. But critically, you have to experiment in parallel: in Cynefin you construct a safe-to-fail micro-experiment around each coherent hypothesis and run them in parallel; that changes the dynamics of the space, and then the solution starts to emerge. Complex systems can be embedded in complicated systems. We are dealing with complex ideation patterns, and until those change no better outcome is possible. The immediate future may be clear, but then it becomes more ambiguous; holding and celebrating that ambiguity is key to resilience.

Chaos and catastrophe, by their definition, don't repeat. Chaos is the absence of effective constraint.
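
The decision models above can be condensed into a small dispatch table (a hypothetical encoding for illustration, not an official Cynefin artefact; the act-sense-respond model for the chaotic domain comes from the wider Cynefin literature rather than this page):

```python
# Hypothetical encoding of the decision model each Cynefin domain implies.

from enum import Enum

class Domain(Enum):
    CLEAR = "sense -> categorize -> respond (best practice)"
    COMPLICATED = "sense -> analyze -> respond (good practice)"
    COMPLEX = "probe -> sense -> respond (parallel safe-to-fail experiments)"
    CHAOTIC = "act -> sense -> respond (crisis management)"

for domain in Domain:
    print(f"{domain.name.lower():>11}: {domain.value}")
```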

With SenseMaker® we can create landscapes of plausible descriptive patterns. Within those landscapes we can see which tropes have the lowest energy cost of replication; if we are not happy with what we see, we first have to change the dispositional state of the system so that a desirable trope or emergent pattern will replicate at lower cost than the negative trope we are unhappy with. Shift the ecology so good things happen in a sustainable way.

