Data Science Fails: Watch Out for Typos

November 8, 2019 · by Colin Priest · 4 min read

Back when I was doing my undergraduate degree, I remember being asked to proofread a friend’s economics essay. I didn’t make it past the first paragraph before spotting the surprising phrase “John Maynard Keynes’ work was impotent.” While there was a possibility that my friend wanted to criticize Keynes’ work, the remainder of the essay was pro-Keynes. Then I realized that it was merely a typographic error: “impotent” should have been “important.”

Typographical errors, also known as typos, are surprisingly common. Grammarly published statistics showing that on average, people type more than 12 errors per 100 words in emails and more than 34 errors per 100 words in social media!

A Typo Cost Me First Place

Later, when I first started competing in data science competitions on Kaggle, I had a similar problem with my own work. I was at the top of the leaderboard, feeling confident because I had half the error rate of the second-placed data scientist. But 24 hours before the competition closed, a new competitor joined and jumped to the top of the leaderboard, with half my error rate. I was stunned! I had spent two months experimenting and fine-tuning my solution, yet someone who had been in the competition for less than 12 hours was beating me! How could they have placed higher than I did? How could they have halved my error score?

For the next few hours, I searched through my code, looking for anything that would improve my score. Then I found it – I had typed the wrong data field name. And since I had used copy-paste to write code faster, the same error had occurred across several sections of the script. The mistyped name matched an actual data field, so no error messages appeared, but it was still the wrong field. Damn!

It took me more than 24 hours to correct and rerun my script. While I was able to use early corrections to resubmit competition entries and improve my score, I ran out of time to correct all of the typos and failed to regain first place.
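This kind of mistake is easy to reproduce. The sketch below uses made-up, hypothetical column names (not the actual competition data) to show how referencing a valid but wrong field fails silently in pandas, and one simple guard: defining each field name once as a constant instead of copy-pasting string literals.

```python
import pandas as pd

# Hypothetical data: both columns exist, so mixing them up raises no error.
df = pd.DataFrame({
    "sale_price": [250_000, 310_000, 199_000],            # the target we want
    "sale_price_estimate": [240_000, 305_000, 210_000],   # a look-alike field
})

# The silent failure: a copy-pasted typo selects the wrong (but existing) column.
y_wrong = df["sale_price_estimate"]  # runs without complaint, wrong target

# One guard: define the field name exactly once and reuse the constant,
# so a typo surfaces as a NameError or a failed assertion instead of a
# silently wrong model.
TARGET = "sale_price"
assert TARGET in df.columns, f"Expected column {TARGET!r} not found"
y = df[TARGET]
```

A constant can still be mistyped once, but at least the mistake lives in a single place rather than being copy-pasted across several sections of a script.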

Reversing the Conclusion of Published Research

In November 2015, a study was published that concluded that:

  • Family religious identification decreases children’s altruistic behaviors
  • Religiousness predicts parent-reported child sensitivity to injustices and empathy
  • Children from religious households are harsher in their punitive tendencies

The research was carried out with children in six countries (Canada, China, Jordan, Turkey, South Africa, and the United States), and included 510 Muslim, 280 Christian, and 323 nonreligious children. It was the first research to take a large-scale look at how religion and moral behavior interact in children from across the globe.

As you can imagine, the study’s results were controversial, and the news stories were sensationalist. News of the study made headlines in more than 80 newspapers around the world. For example, The Economist published an article “Religious children are meaner than their secular counterparts, study finds” and concluded that the study suggested “not only that what is preached by religion is not always what is practiced . . . but that in some unknown way the preaching makes things worse.” Other newspapers wrote similar stories, with headlines such as “Study: Religious Kids are Jerks.” Articles about this research were still being published in 2019, such as “Could Religion Actually Make Children Less Generous?” on Buzzworthy.

Researchers immediately raised questions about the study because its results didn’t match previous research in the field. University of Oregon psychologist Azim Shariff told Science Magazine in 2015 that he was confused by the findings: they contrasted with his own analysis, which, taken as a whole, found no overall effect of religion on adults faced with these kinds of moral tests. “It doesn’t fit in easily with what’s been out there so far. So I’ve got to do some thinking — other people have got to do some thinking — with how it does fit.” He asked the authors to share their data so that he could understand why the paper obtained such different results from previous research.

What was the cause? It came down to a simple typo. When coding their results, the researchers used numbers to represent each country — 1 for the US, 2 for Canada, and so on. When analyzing the data, they tried to control for country, but instead of treating it as a categorical variable, Psychology Today reports, they “just treated it as a single continuous variable so that, for example, ‘Canada’ (coded as 2) was twice the ‘United States’ (coded as 1).”

After correcting the mistake, the researchers discovered that country of origin, rather than religious affiliation, was the primary predictor of several of the outcomes. A simple typo reversed the conclusion of the research.
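To make the distinction concrete, here is a minimal sketch using made-up data and hypothetical column names (altruism, religious, country_code), not the study’s actual dataset or model. It contrasts accidentally feeding a numeric country code into a regression as a continuous predictor with correctly declaring it categorical, using the statsmodels formula API.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up illustrative data: six countries coded 1-6, as in the coding scheme
# described above. This is synthetic data for demonstration only.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "country_code": rng.integers(1, 7, size=n),  # country coded as 1, 2, ..., 6
    "religious": rng.integers(0, 2, size=n),     # 1 = religious household
})
# In this toy example the outcome is driven by country, not religiousness.
df["altruism"] = 5 + 0.8 * df["country_code"] + rng.normal(0, 1, size=n)

# The typo: country_code enters as a continuous number, so "Canada" (coded 2)
# is treated as twice the "United States" (coded 1).
wrong = smf.ols("altruism ~ religious + country_code", data=df).fit()

# The fix: C(...) tells the model the codes are categories, giving each
# country its own effect instead of one linear slope across country codes.
right = smf.ols("altruism ~ religious + C(country_code)", data=df).fit()

print(wrong.params, right.params, sep="\n\n")
```

The country codes are arbitrary labels, so any model that multiplies them treats an accident of data entry as if it were a measurement.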

Conclusion

Typos occur easily and are an inevitable byproduct of manual processes. The solution is to automate those processes with well-tested, widely used tools and libraries. In data science, that means automated machine learning, which reduces the dependence on manual scripting and has guardrails to identify possible errors. Just as we trust the reliability of modern cars built on a production line, reliable and trusted AI is built in a model factory.

Are you currently building AIs artisanally? Are your current data science tools and processes slow, producing variable-quality results? If so, then it’s time to upgrade to a model factory, using automated machine learning to build the latest generation of human-friendly machine learning models at volume and with consistent quality. Click here to arrange a demonstration of DataRobot’s automated machine learning for AI you can trust.


About the author
Colin Priest

VP, AI Strategy, DataRobot

Colin Priest is the VP of AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
