UAlbany Study: Cultivating Public Trust Key to AI Investments

Dec. 7, 2021

ALBANY, N.Y. (Dec. 7, 2021) — Amid a growing consensus among researchers that the analytical and cognitive tools of artificial intelligence (AI) can transform government in positive ways, a new study from the University at Albany calls for conservative approaches to AI that focus on cultivating and sustaining public trust.

Publishing in a special issue of Social Science Computer Review, authors Teresa M. Harrison and Luis Felipe Luna-Reyes use a framework to contrast policy analysis and decision making as traditionally understood and practiced with how they are evolving in the current AI context.

Their article, “Cultivating Trustworthy Artificial Intelligence in Digital Government,” sets out recommendations for practices, processes and governance structures that provide for trust in AI, and for research that supports them.

“Public trust in AI must be cultivated and sustained; otherwise, useful AI systems may be rejected and government decision making may lose its legitimacy,” said Harrison, faculty fellow with the Center for Technology in Government (CTG UAlbany) and Professor Emerita in UAlbany’s Department of Communication. “We argue that public trust can be achieved when AI development takes place in contexts characterized by policies situated firmly in democratic rights and supported by well-documented and fully implemented governance practices.”

The authors seek to provide a balanced view of the potential of AI in government, acknowledging its transformative possibilities while also highlighting important challenges that may affect not only decision-making processes but also democratic values. The results have important practical implications for how government processes and structures can be designed to build trustworthy AI applications.

“Our analysis exposes a stark contrast between the reasoning used by traditional decision support tools for government and those enabled by current AI development,” said Luna-Reyes, associate professor of Public Administration and Policy at the Rockefeller College of Public Affairs and Policy and CTG UAlbany faculty fellow. “We argue that the latter jeopardizes values used traditionally to frame innovations in data and technology as well as government’s ability to cultivate and sustain public trust in AI.”

The authors argue that the greatest priority is ensuring that democratic values frame the deployment of AI in government, since powerful economic arguments and political forces are driving its development. Depending on industry to translate codes of ethics into AI products for government is an unacceptable solution.

“Even though the technical landscape of AI may evolve in ways that resolve or mitigate current issues, it may also generate entirely new causes for concern,” continued Harrison.

Therefore, public managers must take active roles in creating the conditions for trustworthy AI using values that have historically sustained trust in government.

Further reading:

Exploring the Role of AI in Government