Ready for AI? Start With AI Ethics

October 17, 2019

In the past 12 months, artificial intelligence has become headline news, and not always for the right reasons. We’ve heard stories of sexist AI hiring algorithms, racist face detection algorithms, and AIs using private social media data to influence elections. Embarrassing AI failures such as these can damage your reputation, incur regulatory penalties, and even depress your company’s stock price.

These AI failures have received much attention and scrutiny from the media, but many of these news articles focus on the fear factor, devoting their narrative to doom and gloom forecasts and the inscrutability of complex algorithms. Too few news stories explore the true root causes of the problems. It’s no wonder that some businesses are wavering in their decision to move forward. On the one hand, they know that they need to embrace AI innovation to remain competitive. On the other hand, they know that AI can be challenging. How do you manage AI to ensure that it follows your business rules and core values? For AI to be successful, it needs to be trustworthy.

For AI to be trusted, it needs to be aligned with your interests. Trusted AI is AI that shares your values, that you can understand, and that works as planned.

AI Ethics is Good for Business

Ethical behavior is good for business. For example, in the study “Doing Well by Doing Good: The Benevolent Halo of Corporate Social Responsibility” marketing professor Alexander Chernev concluded that acts of corporate social responsibility, even when they are unrelated to the company’s core business, influence consumer perceptions of the functional performance of the company’s products. The products of companies engaged in socially responsible activities are likely to be perceived as being of higher quality. And the benefits of ethical behavior go beyond customer perception. The research paper “Ethics as a Risk Management Strategy” concluded that “there are compelling reasons to consider good ethical practice to be an essential part of … risk management” and that the benefits of ethical behavior include the identification of potential risks, fraud prevention, and reduced court penalties.

A Growing Consensus on the Principles of AI Ethics

Governments around the world have responded, publishing guidelines for AI ethics principles such as the OECD Principles on Artificial Intelligence. Here at DataRobot, we’ve been part of that trend, publishing a white paper on the principles of AI ethics. The good news is that the ideas expressed in these documents are converging: there is a growing consensus around the world on the principles of ethical AI. But these are just principles, not detailed guidance. In general, governments haven’t been prescriptive, with the rare exceptions of GDPR and industry-specific regulations such as the Equal Credit Opportunity Act. So people remain unsure about the practical steps they can take to apply AI ethics. Furthermore, every organization is different, having its own unique values, and there’s more to ethics and risk management than merely obeying the law.

While many of these events shared common themes, one factor stands out across these high-profile AI failures: the organizations involved had no clearly articulated policies defining their values, nor risk management procedures to ensure those values were upheld. Without published policies, data scientists, the people who build AI systems, have no guidance about the ethical values to apply to the systems they build, so they build AIs with no values at all. Lacking well-defined risk management protocols and clearly communicated standards, and under pressure to meet delivery timelines, AI project teams apply incomplete and inconsistent checks and benchmarks. And due to a lack of internal resources, many organizations outsource their AI capabilities, purchasing black-box AI systems from third-party suppliers whose decisions are misaligned with the organization’s strategy and values.

Three Critical Steps to Encourage Ethical AI

Here are three steps to manage your AIs so that they share your values and don’t embarrass you or damage your brand:

  1. Define your values and publish them within your organization.
    Without this step, you cannot have trusted AI. By clearly defining and communicating your organization’s values, both your business and technical staff will know the requirements to be built into new AI systems. Since AI ethics is a relatively new discipline, DataRobot has created a free, no-obligation online tool that covers all of the key principles of AI ethics, asking you guided questions that help you define your individual values.
  2. Define a thorough AI governance process that applies to every AI project.
    Just like any other project, AI projects come with risks that can be mitigated via appropriate AI governance processes. In addition to the standard risk management governance processes, be sure to include processes to check for alignment of values, alignment with business strategy, stakeholder analysis, identification of key AI risks, auditing and testing, and sign-off by business subject matter experts.
  3. Don’t use third-party black-box AI systems – own your AIs.
    Third-party systems were not built to match your organization’s unique values. Black box systems compound the problem, as you have no means for understanding the AI’s behavior to check for consistency with your values. With the democratization of data science via the latest generation of automated machine learning tools, plus advice from trusted partners such as DataRobot, you can build your own AIs with human-friendly explanations.
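To make the “auditing and testing” element of step 2 concrete, a governance process can include automated checks that run before business sign-off. The sketch below is a minimal, illustrative example (the function names and the 0.8 threshold, a common “four-fifths” heuristic for disparate impact, are assumptions, not a DataRobot API): it compares a model’s positive-outcome rates across groups and flags the model when any group falls too far below the highest rate.

```python
# A minimal sketch of one automated fairness check an AI governance
# process might run before sign-off. Group names, decisions, and the
# threshold below are illustrative only.

def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions.
    Returns each group's rate of positive decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def passes_parity_check(outcomes, min_ratio=0.8):
    """Return False if any group's selection rate falls below
    min_ratio times the highest group's rate (the 'four-fifths'
    heuristic sometimes used in disparate-impact screening)."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return all(rate >= min_ratio * highest for rate in rates.values())

# Example: group_b is approved 40% of the time vs. 80% for group_a,
# well below 0.8 * 80% = 64%, so the check flags the model.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],  # 80% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],  # 40% approved
}
print(passes_parity_check(decisions))  # False
```

A check like this is only one line item in a governance checklist; it complements, rather than replaces, review by business subject matter experts, who decide which groups, metrics, and thresholds reflect the organization’s stated values.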

To manage the risk, you must take ownership of your AI destiny. Own your values. Own your intellectual property. Own the process and AI governance. Own your AI business strategy.


About the Author

Colin Priest is the VP AI Strategy for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held several CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing. Colin is a firm believer in data-based decision making and applying automation to improve customer experience. He is passionate about the science of healthcare and does pro-bono work to support cancer research.
