In February of this year, President Trump issued an important executive order that directs federal agencies to help maintain U.S. leadership in artificial intelligence research, development, and deployment. The order provides a five-point coordinated government strategy. It directs agencies to invest in AI research, develop technical standards, train Americans in the new skills they will need for the jobs of the future, police AI applications to protect civil liberties, and open world markets to American AI industries.
The President’s AI directives are practical and valuable for immediate and long-term growth in U.S. artificial intelligence. For example, one directive requires the Office of Management and Budget to collect public comment on government data and models that would be helpful for research and testing. The required request for comment was published earlier this month (July 10, 2019) and provides a 30-day window for the public to make their thoughts known.
The executive order also addresses AI funding, both current and future. It requires that agencies take the necessary steps to fund AI immediately, and prioritize AI R&D funding for Fiscal 2020 program planning and beyond. These recommendations are consistent with the urgency expressed in the National AI R&D Strategic Plan published in 2016, where it was noted that China is well ahead of the U.S. in national funding for AI. China’s high level of AI funding continues to be a concern today.
As a data scientist working with commercial and federal clients, I have seen some of the difficulties agencies face when formulating and implementing their AI agendas. While the goals of the executive order are the right ones, there are real barriers to implementing the President’s AI agenda.
AI is Not Magic
It is important to understand that AI is not some magical superhuman force. Most practical AI applications use a large subset of AI called machine learning, which is as simple and familiar as a spreadsheet. With machine learning, computers can learn to predict one of the columns of the spreadsheet from the information in the other columns. Buzzwords be gone!
In many cases, a machine learning model simply provides statistical backing for what our intuition already tells us. A practiced examiner might look at a loan application and think, “Hmmm…high debt-to-income ratio, high interest, I bet this is going to default,” and an AI trained on many loan applications will say (in a charming robotic voice), “Probability of default is assessed at 54.7%. Primary reasons for estimate are debt-to-income ratio and interest rate.”
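To make the spreadsheet analogy concrete, here is a minimal sketch of the loan-default example using scikit-learn. The loans, feature values, and labels below are entirely hypothetical, invented for illustration; a real model would be trained on many thousands of historical loans.

```python
# A minimal sketch of the loan-default example, with hypothetical data.
from sklearn.linear_model import LogisticRegression

# Each row is a past loan: [debt-to-income ratio, interest rate]
X = [
    [0.15, 0.04], [0.20, 0.05], [0.25, 0.06], [0.30, 0.07],
    [0.45, 0.12], [0.50, 0.14], [0.55, 0.15], [0.60, 0.18],
]
# 1 = the loan defaulted, 0 = it was repaid
y = [0, 0, 0, 0, 1, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

# A new application with a high debt-to-income ratio and a high rate:
# the model predicts the "default" column from the other columns.
p_default = model.predict_proba([[0.55, 0.16]])[0][1]
print(f"Probability of default is assessed at {p_default:.1%}")
```

The model is doing exactly what the examiner does, only with statistics: applications that look like past defaults get a high estimated probability of default.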
Deployed machine learning models are used by people to make some processes better. With large-scale processes, the benefits can be large even if the percentage improvement in the process is small. For example, Steward Health Care saved $2M at eight hospitals (as could, perhaps, the VA) by more accurately predicting hospital stays in advance and thereby reducing the number of paid nursing hours per patient day by just 1%.
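A back-of-the-envelope calculation shows why a 1% improvement at scale adds up. All of the input figures below are hypothetical, chosen only to illustrate the arithmetic; they are not Steward Health Care's actual numbers.

```python
# Back-of-the-envelope: small percentage gains compound at scale.
# Every figure below is hypothetical, chosen only for illustration.
hospitals = 8
patient_days_per_hospital_per_year = 100_000
nursing_hours_per_patient_day = 6.0
cost_per_nursing_hour = 40.0  # dollars

baseline_cost = (hospitals * patient_days_per_hospital_per_year
                 * nursing_hours_per_patient_day * cost_per_nursing_hour)
savings = baseline_cost * 0.01  # a 1% reduction in paid nursing hours
print(f"Annual savings: ${savings:,.0f}")  # → Annual savings: $1,920,000
```

Even with these made-up inputs, a 1% reduction lands in the $2M range, which is why small model-driven improvements to large processes are worth pursuing.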
While machine learning does deliver real performance improvements, this doesn't mean that these models can or should replace human intelligence. When I presented the Steward Health Care results at a government innovation conference, a savvy official asked whether the models could help in a situation in which a plane crash suddenly brought in an unpredictable surge of patients. The answer, of course, is that no person or machine can predict the unpredictable. Even if a model could predict that the odds of a plane crash this weekend were going to increase tenfold to one in a million, hospital management, the real intelligent agent here, presumably would not increase staff to mitigate that still-remote risk.
Deploying AI to Federal Agencies
While AI can deliver seemingly magical results, integrating it in federal agencies can be far from magical. Most consultancies recommend a roadmapping workshop as the best way to clarify objectives and to engage executive sponsorship. A roadmap helps executives maintain effort across the several years that an AI transition will require. Executive sponsorship is essential for breaking through cross-organizational and bureaucratic resistance in large organizations.
We see that federal agencies have somewhat more difficulty deploying models than commercial organizations do, though both struggle. At one federal agency, data science teams can face a year from model inception to deployment. Although the agency has a framework for deploying modeling results to the field, it has also established a model “shadowing” process that pits a potential model upgrade against the status quo model. Even when an upgrade was provided for a status quo model known to be incorrect (due to unrelated IT system changes), it still required six weeks of shadowing. Streamlining such cross-organizational deployment processes is not possible without an executive who wants models deployed more efficiently.
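The shadowing process described above amounts to a champion/challenger comparison: the candidate model scores the same live cases as the status quo model, but only the status quo model's outputs are acted on. Here is a minimal sketch of the idea; the models, cases, and accuracy metric are all illustrative assumptions, not the agency's actual framework.

```python
# A minimal sketch of model "shadowing": the challenger scores the same
# cases as the champion, but only the champion's decisions are acted on.
# Models, cases, and the accuracy metric are all hypothetical.

def champion(case):   # the status quo model
    return case["amount"] > 100

def challenger(case): # the candidate upgrade, running in shadow mode
    return case["amount"] > 80 and case["risk"] > 0.3

cases = [
    {"amount": 120, "risk": 0.9, "actual": True},
    {"amount": 90,  "risk": 0.7, "actual": True},
    {"amount": 150, "risk": 0.1, "actual": False},
    {"amount": 50,  "risk": 0.2, "actual": False},
]

def accuracy(model):
    return sum(model(c) == c["actual"] for c in cases) / len(cases)

decisions = [champion(c) for c in cases]  # only these drive real actions
shadow_log = {"champion": accuracy(champion),
              "challenger": accuracy(challenger)}
print(shadow_log)  # promote the challenger only if it wins over the window
```

The design choice is conservative by intent: the challenger cannot affect outcomes until the shadow window ends, which is exactly why shadowing a model that is already known to be wrong adds weeks of delay for little benefit.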
DataRobot is engaged in world affairs, and we share the government’s interest in trusted AI solutions, protection of civil liberties, and growth in American AI firms. As a world leader in automated machine learning, DataRobot is uniquely positioned to help federal agencies make this dream a reality. Visit us here to learn more about our work in the public sector.
About the Author:
Eric Loeb works as a Customer-Facing Data Scientist at DataRobot. Previously, Eric wrote the first Congressional website (by hand, in raw HTML!) and contributed to first websites at all levels of government. He was the chief software engineer for Gore 2000, chief internet architect for the Democratic National Committee, targeting and modeling lead for Obama ‘08, and a political appointee in the Department of Defense. Eric has a Ph.D. from MIT in cognitive neuroscience, an MS from UC Berkeley in adaptive signal processing, and a BS in mathematics from the University of Illinois.