The relationship between trust in AI and trustworthy machine learning technologies

Ehsan Toreini, Mhairi Aitken, Kovila Coopamootoo, Karen Elliott, Carlos Gonzalez Zelaya, Aad van Moorsel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

64 Citations (Scopus)

Abstract

To design and develop AI-based systems that users and the larger public can justifiably trust, one needs to understand how machine learning technologies impact trust. To guide the design and implementation of trusted AI-based systems, this paper provides a systematic approach to relate considerations about trust from the social sciences to trustworthiness technologies proposed for AI-based services and products. We start from the ABI+ (Ability, Benevolence, Integrity, Predictability) framework, augmented with a recently proposed mapping of ABI+ onto qualities of technologies that support trust. We consider four categories of trustworthiness technologies for machine learning, namely those for Fairness, Explainability, Auditability and Safety (FEAS), and discuss whether and how these support the required qualities. Moreover, trust can be impacted throughout the life cycle of AI-based systems, and we therefore introduce the concept of Chain of Trust to discuss trustworthiness technologies in all stages of the life cycle. In so doing, we establish the ways in which machine learning technologies support trusted AI-based systems. Finally, FEAS has obvious relations to known frameworks, and we therefore relate it to a variety of international 'principled AI' policy and technology frameworks that have emerged in recent years.
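To make the abstract's framing more concrete, the following minimal Python sketch illustrates how the FEAS categories, the ABI+ qualities, and a Chain of Trust across life-cycle stages could be represented as a simple data structure. The specific FEAS-to-ABI+ mapping and the stage names used here are hypothetical placeholders for illustration only; they are not the mapping established in the paper.

```python
# Illustrative sketch only: the FEAS-to-ABI+ mapping and the life-cycle
# stage names below are hypothetical placeholders, not the mapping
# established in the paper.
from dataclasses import dataclass, field

ABI_PLUS = ("Ability", "Benevolence", "Integrity", "Predictability")
FEAS = ("Fairness", "Explainability", "Auditability", "Safety")

# Hypothetical mapping: which ABI+ qualities each category of
# trustworthiness technology might support.
FEAS_TO_ABI = {
    "Fairness": {"Benevolence", "Integrity"},
    "Explainability": {"Ability", "Integrity"},
    "Auditability": {"Integrity", "Predictability"},
    "Safety": {"Ability", "Predictability"},
}

@dataclass
class LifeCycleStage:
    """One stage in a 'Chain of Trust' spanning an ML system's life cycle."""
    name: str
    applied_technologies: set = field(default_factory=set)

    def supported_qualities(self) -> set:
        """ABI+ qualities supported by the technologies applied at this stage."""
        qualities = set()
        for tech in self.applied_technologies:
            qualities |= FEAS_TO_ABI.get(tech, set())
        return qualities

# Hypothetical life-cycle stages forming a Chain of Trust.
chain_of_trust = [
    LifeCycleStage("data collection", {"Fairness", "Auditability"}),
    LifeCycleStage("model training", {"Fairness", "Safety"}),
    LifeCycleStage("deployment", {"Explainability", "Safety", "Auditability"}),
]

for stage in chain_of_trust:
    supported = stage.supported_qualities()
    missing = set(ABI_PLUS) - supported
    print(f"{stage.name}: supports {sorted(supported)}, gaps in {sorted(missing)}")
```

Running the sketch prints, per stage, which ABI+ qualities are covered by the technologies applied there and where gaps remain, which is one simple way to reason about trust support across the whole life cycle rather than at a single point.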

Original language: English
Title of host publication: FAT* 2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency
Publisher: Association for Computing Machinery
Pages: 272-283
Number of pages: 12
ISBN (Electronic): 9781450369367
DOIs
Publication status: Published - 27 Jan 2020
Event: 3rd ACM Conference on Fairness, Accountability, and Transparency, FAT* 2020 - Barcelona, Spain
Duration: 27 Jan 2020 - 30 Jan 2020

Publication series

Name: FAT* 2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency

Conference

Conference: 3rd ACM Conference on Fairness, Accountability, and Transparency, FAT* 2020
Country/Territory: Spain
City: Barcelona
Period: 27/01/20 - 30/01/20

Bibliographical note

Funding Information:
This work was funded in part by the UK Engineering and Physical Sciences Research Council for the projects titled “Fintrust: Trust Engineering for the Financial Industry” (EP/R033595/1) and “EPSRC Centre for Doctoral Training in Cloud Computing for Big Data” (EP/L015358/1).

Publisher Copyright:
© 2020 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery.

Keywords

  • Artificial intelligence
  • Machine learning
  • Trust
  • Trustworthiness

ASJC Scopus subject areas

  • Business, Management and Accounting (all)
  • Engineering (all)
