Towards Artificial Imagination

At a Glance…

In certain very narrow contexts, conventional machine learning (ML) systems can make predictions about the immediate future. But Causal AI, a new generation of AI technology, can travel more freely through spacetime, and even outside of it.

Beyond just making predictions, Causal AI can explore potentialities that never actually happened (“counterfactuals”) while still maintaining a connection with reality. It can reimagine the past, explaining why events unfolded as they did.

You could be forgiven for thinking that counterfactual reasoning is clever but useless. Why waste time worrying about what didn’t happen? However, counterfactuals can be an immensely powerful tool for unearthing insights about the real world.

What Would’ve Happened If…?

If Sanders had won the primary instead of Biden, would Trump have won the election? If Netflix had recommended Black Mirror instead of The Queen’s Gambit, would you have clicked?

These are counterfactual queries: they concern alternative possible histories that didn’t really happen. We humans naturally use our imaginations to reflect on such questions. But building AI systems that can replicate this capability has been a profound challenge.

Imagination is an immensely powerful tool for unearthing insights about the real world

Counterfactuals have a “problematic relationship with data”, as AI luminary Judea Pearl puts it, “because data are, by definition, facts”. We can’t get data on what might’ve happened had Sanders received the Democratic Party nomination, because he didn’t.

For AI systems to compute a counterfactual, they need a model of how the real world connects with a given hypothetical world. For this reason, model-free conventional ML has no ability to reason with counterfactuals — ML can’t think outside of the correlations that held in past data.

Getting to the Root of the Problem


Picture a cleantech manufacturer producing solar panels. Maintaining exacting quality standards is a strategic priority: small defects can compromise the efficiency of an entire power system and can ratchet up costs due to high “infant failure” rates. In the Industry 4.0 era, AI systems can potentially assist with fault diagnostics by leveraging big data from industrial IoT and high-frequency, high-precision sensors in the field.

Suppose the manufacturer finds that newly installed systems are failing with “hotspots” (areas of elevated temperature on a solar panel). Hotspots are known to be caused by a wide range of possible factors, from shading to material contamination. In this instance, perhaps the root cause is that the factories have unusually high humidity levels, which are causing defects in the cell materials.

Root cause analysis is fundamentally a counterfactual inference problem, which can be solved with Causal AI.

The problem of identifying this root cause is fundamentally a counterfactual inference problem. To solve it, Causal AI examines IoT data and learns a detailed model (known as a “structural equation model”) of the manufacturing environment. The model encodes qualitative cause-and-effect relationships, including feedback loops and interaction effects, as well as quantitative information about the strength of those relationships.
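To make this concrete, below is a minimal sketch of what a structural equation model can look like, written in plain Python. The variables (humidity, material defect, hotspot), the functional forms and the coefficients are invented purely for illustration; a real system would learn far richer equations from the IoT data described above.

```python
# Illustrative-only structural causal model: humidity -> cell defect -> hotspot.
# Every functional form and coefficient below is an assumption made for this sketch.
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_factory(n=1_000):
    # Exogenous noise terms stand in for everything the model does not observe.
    u_humidity = rng.normal(size=n)
    u_defect = rng.normal(size=n)
    u_hotspot = rng.normal(size=n)

    # Structural equations: each variable is a function of its direct causes
    # plus its own noise term. Coefficients encode the strength of each causal link.
    humidity = 60.0 + 10.0 * u_humidity                   # ambient factory humidity (%)
    defect = 0.05 * (humidity - 60.0) + 0.5 * u_defect    # cell material defect severity
    hotspot = defect + 0.3 * u_hotspot > 1.0              # does a hotspot develop in the field?
    return humidity, defect, hotspot

humidity, defect, hotspot = simulate_factory()
print(f"Simulated hotspot rate: {hotspot.mean():.1%}")
```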

Causal AI can run counterfactual tests on all the potential causes of the defect, resolving a series of questions such as “If the humidity levels had been lower, would the solar panels have developed hotspots?” By “imagining” the consequences of these counterfactuals, the system identifies the factors that are responsible for the defect.
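Answering such a query follows the standard three-step counterfactual recipe: abduction (recover the unobserved noise terms consistent with what was actually observed), action (impose the hypothetical condition), and prediction (replay the structural equations with that noise held fixed). The sketch below restates the toy equations from the model above so it runs on its own; every observed value and coefficient in it is an illustrative assumption.

```python
# Three-step counterfactual on the toy hotspot model: abduction, action, prediction.
# All observed values and coefficients are illustrative assumptions.

def defect_eq(humidity, u_defect):
    return 0.05 * (humidity - 60.0) + 0.5 * u_defect

def hotspot_eq(defect, u_hotspot):
    return defect + 0.3 * u_hotspot > 1.0

# Observed panel: manufactured under high humidity, later developed a hotspot.
obs_humidity, obs_defect, obs_hotspot = 78.0, 1.1, True

# 1. Abduction: recover the noise terms consistent with the observation.
u_defect = (obs_defect - 0.05 * (obs_humidity - 60.0)) / 0.5
u_hotspot = 0.2  # any value consistent with obs_hotspot; assumed for illustration

# 2. Action: intervene on the hypothesised root cause, i.e. lower the humidity.
cf_humidity = 55.0

# 3. Prediction: replay the structural equations with the recovered noise held fixed.
cf_defect = defect_eq(cf_humidity, u_defect)
cf_hotspot = hotspot_eq(cf_defect, u_hotspot)

print(f"Had humidity been {cf_humidity}%, hotspot would have developed: {cf_hotspot}")
```

In this toy run the counterfactual hotspot disappears when only the humidity is changed, which is the signature of humidity being a root cause for that panel.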
Standard ML algorithms can see how various factors correlate with hotspots, but root cause analysis demands going beyond correlations. Standard ML has no appreciation of causal directionality (that solar cell defects are caused by ambient humidity in the manufacturing environment, and not the other way round), and so it can’t identify the roots of a chain of cause and effect. Grasping that directionality is a prerequisite for even attempting counterfactual analysis.

Applied Imagination

Analysts estimate that automated maintenance in manufacturing has the potential to increase asset availability by 5-15% and reduce maintenance costs by 18-25%. Root cause analysis conducted by Causal AI promises to be a disruptive enabler of these efficiency savings.

In abstract terms, root cause analysis is just one example of a broad class of problems that involve identifying the factors that are responsible for, or deserve credit for, a given outcome. Counterfactuals enable us to find out how the millions of decisions that went into a project contributed to its success or failure. They let us find out why something happened as it did, allowing us to truly learn from experience.

Counterfactual reasoning is the basis of our ability to ask and answer Why-questions

In healthcare, counterfactual analysis has been used to identify the social and economic determinants of health outcomes. In one use case, Causal AI was applied to understand why so many women in India choose to deliver their babies at home despite the health risks. Researchers found that specific cultural beliefs about hospital safety are often responsible, enabling policy makers to target the right demographic groups with the right messages. Similarly, medical diagnosis can be productively approached as a credit assignment problem, a problem of identifying the disease most likely to be responsible for the patient’s symptoms, which can only reliably be solved via counterfactual inference.

In retail, counterfactuals can shed new light on the underlying preferences that drive consumers’ observed buying behavior, allowing retailers to build a three-dimensional picture of how a consumer would have chosen when exposed to different products. This enables retailers to design more personalized marketing and to predict purchase propensity for rarely bought items.

Counterfactuals help tech makers to eliminate algorithmic discrimination

In a wide range of settings, from recruitment to credit scoring, it’s becoming increasingly important to ensure that algorithms don’t unfairly discriminate. Perhaps the most promising solution to the problem of algorithmic unfairness is to use counterfactual inference to determine whether a given algorithmic decision would have been different had an individual belonged to a different demographic group. That is, counterfactuals can determine whether a person’s race or gender is responsible for the outcome of a decision, and can thereby help tech developers to rule out any algorithmic discrimination.
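As a rough sketch of how such a check can work, the toy model below assumes that a protected attribute shapes a proxy feature, which in turn drives a score; flipping the attribute while holding the individual’s recovered noise terms fixed reveals whether the decision would have changed. The variables, coefficients and decision threshold are all invented for illustration and do not describe any real scoring system.

```python
# Illustrative counterfactual fairness check on an invented toy scoring model.

def proxy_eq(protected, u_proxy):
    return 0.8 * protected + u_proxy     # e.g. a feature partly shaped by group membership

def score_eq(proxy, u_score):
    return 2.0 * proxy + u_score

def approve(score):
    return score > 1.5

# Observed applicant: member of group 1, with these observed values.
protected, obs_proxy, obs_score = 1, 1.1, 2.6

# Abduction: recover the applicant-specific noise terms from the observation.
u_proxy = obs_proxy - 0.8 * protected
u_score = obs_score - 2.0 * obs_proxy

# Action + prediction: flip the protected attribute, keep the noise terms fixed.
cf_proxy = proxy_eq(1 - protected, u_proxy)
cf_score = score_eq(cf_proxy, u_score)

print("Factual decision:       ", approve(obs_score))  # True
print("Counterfactual decision:", approve(cf_score))   # False -> the attribute mattered
```

If the two decisions differ, the protected attribute is, directly or through proxies, responsible for the outcome, which is exactly what a developer auditing for algorithmic discrimination needs to detect.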

This handful of examples represents only a small sample of the powerful practical applications of counterfactual inference; we’ve just scratched the surface.


The shift from conventional data science to imaginative causal inference is a giant leap forward. Tech makers can leverage counterfactuals to ensure that algorithms are making decisions for the right reasons. And organizations can apply counterfactuals to radically boost learning from limited feedback.

Artificial imagination revolutionizes automated diagnostics, from manufacturing to healthcare. It enables organizations to partner with AI systems to understand how decisions are responsible for business outcomes.

It’s widely attested that humans are biased towards evaluating decisions based on their outcomes: so-called “outcome bias”. But because decision-making involves uncertainty and risk, judging decisions purely by their outcomes means businesses end up rewarding luck and overlooking decision quality.

Artificial imagination empowers businesses to overcome outcome bias, to distinguish between luck and wisdom, to learn from mistakes and reinforce good decisions. Just imagine what could become possible if your organization adopted it.