Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making

Abstract
How to attribute responsibility for the actions of autonomous artificial intelligence (AI) systems has been widely debated across the humanities and social science disciplines. This work presents two experiments (N=200 each) that measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents in the context of bail decision-making. Using vignettes adapted from real-life cases, our experiments show that AI agents are held causally responsible and blamed similarly to human agents for an identical task. However, there was a meaningful difference in how people perceived these agents' moral responsibility: human agents were ascribed present-looking and forward-looking notions of responsibility to a higher degree than AI agents. We also found that people expect both AI and human decision-makers and advisors to justify their decisions, regardless of their nature. We discuss policy and HCI implications of these findings, such as the need for explainable AI in high-stakes scenarios.
Publisher
Association for Computing Machinery
Issue Date
2020-02
Language
English
Citation
10th International Conference on Materials Processing and Characterisation, ICMPC 2020
URI
http://hdl.handle.net/10203/289044
Appears in Collection
CS-Conference Papers(학술회의논문)
Files in This Item
There are no files associated with this item.
