Be Humble: Black Swans and the Limits of Inductive Reasoning

March 19, 2020 · by James Lawson

After a decade of relative economic stability, we are now confronted by the COVID-19 pandemic, which many financial analysts have labelled a ‘black swan’ event. A ‘black swan’ is a metaphor for something unexpected that has a major impact. These types of events can cause significant disruption to business processes, financial markets, and our lives.

Black swans prompt us to re-evaluate the very fundamentals of how humans reason. In the context of artificial intelligence, they help guide us to best practices to maintain performance and business value.

Black Swan Theory

The black swan expression derives from the Roman poet Juvenal, who characterized something as being like “a rare bird in the lands and very much like a black swan” (“rara avis in terris nigroque simillima cygno”). Later, in 16th-century London, no historical record showed that anyone had ever seen a black swan, and the phrase became a common metaphor for anything impossible or nonexistent.

Dutch explorer Willem de Vlamingh then led a voyage to Australia, and on 10 January 1697 his crew discovered real-life black swans. This disproved the longstanding belief, and the metaphor shifted to denote that a supposed impossibility might later be disproven. It became the classic example for philosophers and logicians, from John Stuart Mill to Karl Popper.

The metaphor was popularized further for our generation with Nassim Taleb’s 2007 bestseller, The Black Swan: The Impact of the Highly Improbable. He defined a black swan event as having three characteristics: “First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”

Taleb’s main insight was that many financial institutions and businesses were (and still are) very vulnerable to these events. His recommendation is not to try to predict black swan events – that would be too challenging – but to build in additional robustness so that organizations can withstand them. He also criticized the use of normal distributions in statistical models, arguing that the real world has fat tails – i.e. higher chances of rare, high-impact events taking place. His analysis gained popularity in the wake of the 2008 financial crisis.

Deduction and Induction

Many philosophers have gone even further, arguing that the way humans reason is fundamentally flawed. There are two main forms of reasoning – deductive and inductive. Deductive reasoning uses ‘top-down’ logic to reach a conclusion (like 2+2=4, true by definition). Inductive reasoning uses ‘bottom-up’ logic based on past observations (all observed swans have been white, therefore the next swan I see will be white).

Example of deductive reasoning:

  • Premise 1: All humans are mortal.
  • Premise 2: Socrates is a human.
  • Conclusion: Socrates is mortal.

This is a valid argument. If the premises are true, then the conclusion must be true too.

Example of inductive reasoning:

  • Premise: The sun has risen in the east every morning up until now.
  • Conclusion: The sun will also rise in the east tomorrow.

This example, from David Hume, is intuitively compelling – most of us believe the sun will rise in the east tomorrow. However, it lacks the force of a deductive argument: even if the premise is true, it does not guarantee that the conclusion is true.

The Limits of Inductive Reasoning

One problem with inductive reasoning – extrapolating from the past – is that a black swan might appear, with an impact large enough to invalidate our conclusion even though our premise was true.

David Hume’s argument went much further, though: he undermined the basis of induction altogether. Induction, he noted, always relies on a uniformity principle – we “proceed upon the supposition that the future will be conformable to the past” – but in reality the state of nature can change. That change could be a black swan, but it could be something much smaller too.

We are attached to inductive reasoning because it has worked for us in the past, and it helps humans make basic decisions (e.g. don’t eat berries that make you sick). But justifying induction by its past success relies on the very same reasoning – it is a circular argument. Hume thus showed that we cannot reach a general conclusion simply from a particular set of observations. Nor is it just a matter of probabilities (because if the state of nature changes, so do the probabilities). Hume offered little by way of solution, except to observe that even if inductive reasoning cannot be rationally justified, we continue to rely on it out of custom and habit.

The Implications for Artificial Intelligence (AI)

The issue of black swans and the limitations of inductive reasoning also pervade AI. Most practical AI applications are based on supervised machine learning, where the AI learns from historic data. This essentially systematizes inductive reasoning and its flaws.

An AI might be highly accurate in normal circumstances but struggle with black swans that were absent from its training data – or that were even deliberately excluded because they hurt performance under normal conditions. Moreover, the uniformity principle rarely holds: an AI is likely to degrade over time and when used in a domain outside its original training (e.g. a different geography).
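As a toy illustration (a hypothetical sketch with made-up data and thresholds, not a production workflow): a model trained only on ‘normal’ history will happily extrapolate to inputs unlike anything it has ever seen, often with near-total confidence.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pre-crisis data: feature = daily market volatility (%).
# Class 0 = "calm day", class 1 = "turbulent day"; history only ever
# contains volatility between roughly 0.5% and 3%.
X_train = rng.uniform(0.5, 3.0, size=(500, 1))
y_train = (X_train[:, 0] > 1.8).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# A black-swan day, far outside anything in the training data.
x_new = np.array([[12.0]])
print(model.predict_proba(x_new))  # near-certain prediction, yet the model
                                   # has never seen an input remotely like this
```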

Reflecting on these challenges points to several best practices to maintain performance and business value:

Preparing Data: Train your AI on the best quality data possible. Invest in your data assets, improving the foundations of your AI models. Think carefully about what data points you use, in particular, whether or not to include outliers. Excluding outliers (when they are genuine data points rather than errors) may make you more vulnerable to a black swan, even if the AI performs better under normal circumstances.
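As a hedged illustration of this trade-off (toy numbers, not a recommended cleaning rule): dropping a genuine extreme observation makes summary statistics look better behaved while quietly erasing the tail risk they were supposed to capture.

```python
import numpy as np

# Hypothetical daily losses (%): mostly small, plus one genuine extreme event.
losses = np.array([0.4, 0.7, 0.2, 0.9, 0.5, 0.3, 0.6, 14.8])

# "Cleaning" the outlier away makes the distribution look tame...
trimmed = losses[losses < 5.0]

print(losses.mean(), losses.std())    # estimates that remember the tail
print(trimmed.mean(), trimmed.std())  # estimates that are blind to it
```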

Setting Boundaries: Define constraints for when the AI will be allowed to make decisions autonomously, when it will be supervised by a human, and when it will simply guide a human making a decision. Consider setting confidence levels for the AI’s use, and pre-define scope limits (e.g. value of transaction or geography).
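One possible shape for such a policy (a sketch only: route_decision, its thresholds, and the value limit are all hypothetical, not a product API):

```python
from enum import Enum

class Route(Enum):
    AUTONOMOUS = "decide automatically"
    HUMAN_REVIEW = "route to a human for approval"
    ADVISORY = "show to a human as guidance only"

def route_decision(confidence: float, transaction_value: float,
                   max_autonomous_value: float = 10_000.0) -> Route:
    """Decide how much autonomy the model gets for a single prediction."""
    if confidence >= 0.95 and transaction_value <= max_autonomous_value:
        return Route.AUTONOMOUS
    if confidence >= 0.70:
        return Route.HUMAN_REVIEW
    return Route.ADVISORY

print(route_decision(confidence=0.98, transaction_value=2_500.0))
# Route.AUTONOMOUS
```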

Defining Value: Appreciate that there are multiple measures of “accuracy”. Calibrate the AI according to use-case-specific priorities, e.g. avoiding false positives, avoiding false negatives, speed of prediction, etc.
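To make this concrete (hypothetical labels for a fraud-detection use case; which metric matters most depends entirely on the cost of each kind of error):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical ground truth and model predictions (1 = fraud).
y_true = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [0, 1, 1, 0, 0, 1, 0, 0, 1, 0]

# Precision: of the cases flagged, how many were real (false-positive cost)?
# Recall: of the real cases, how many were caught (false-negative cost)?
print("precision:", precision_score(y_true, y_pred))  # 0.75
print("recall:   ", recall_score(y_true, y_pred))     # 0.75
print("f1:       ", f1_score(y_true, y_pred))         # 0.75
```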

Limited Expectations: Do not expect 100% accuracy, particularly for complex decision making. Do not become too attached to probabilities; recognize that they lose their validity over time and amid wider market and societal change.

Managing Production: Monitor and manage your AI applications in production. Without these crucial insights, you risk unseen degradation in accuracy resulting from data drift, a black swan event, or use of the AI outside its valid domain. Update models frequently and introduce new, competing models. Do not cut corners on the governance of deployments and alerts.
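A minimal sketch of one such monitor (assuming a simple two-sample Kolmogorov–Smirnov test on a single feature; real drift monitoring would track many features, predictions, and outcomes):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical feature values: what the model saw in training vs. what it
# is receiving in production this week.
training_values = rng.normal(loc=100.0, scale=15.0, size=5_000)
production_values = rng.normal(loc=120.0, scale=25.0, size=1_000)

# Two-sample Kolmogorov-Smirnov test: has the input distribution shifted?
stat, p_value = ks_2samp(training_values, production_values)
if p_value < 0.01:
    print(f"Drift alert: KS statistic {stat:.3f} - "
          "investigate before trusting new predictions.")
```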

About the author
James Lawson

James is responsible for educating the market about Artificial Intelligence, further accelerating adoption, and dispassionately advising executives on how best to achieve value from their transformation initiatives. Before DataRobot, he was Global Head of Strategic Markets at WorkFusion, a leader in RPA. He is a fellow of the Adam Smith Institute and read Philosophy, Politics and Economics at the University of Oxford.
