The Conflict Between Explainable and Accountable Decision-Making Algorithms

DC Field: Value (language code)

dc.contributor.author: Lima, Gabriel (ko)
dc.contributor.author: Grgić-Hlača, Nina (ko)
dc.contributor.author: Jeong, Jin Keun (ko)
dc.contributor.author: Cha, Meeyoung (ko)
dc.date.accessioned: 2022-11-15T10:03:46Z
dc.date.available: 2022-11-15T10:03:46Z
dc.date.created: 2022-11-11
dc.date.issued: 2022-06-23
dc.identifier.citation: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022, pp. 2103-2113
dc.identifier.uri: http://hdl.handle.net/10203/299686
dc.description.abstract: Decision-making algorithms are being used in important decisions, such as deciding who should be enrolled in health care programs and who should be hired. Even though these systems are currently deployed in high-stakes scenarios, many of them cannot explain their decisions. This limitation has prompted the Explainable Artificial Intelligence (XAI) initiative, which aims to make algorithms explainable to comply with legal requirements, promote trust, and maintain accountability. This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems. We suggest that XAI systems that provide post-hoc explanations could be seen as blameworthy agents, obscuring the responsibility of developers in the decision-making process. Furthermore, we argue that XAI could result in incorrect attributions of responsibility to vulnerable stakeholders, such as those who are subjected to algorithmic decisions (i.e., patients), due to a misguided perception that they have control over explainable algorithms. This conflict between explainability and accountability can be exacerbated if designers choose to use algorithms and patients as moral and legal scapegoats. We conclude with a set of recommendations for how to approach this tension in the socio-technical process of algorithmic decision-making and a defense of hard regulation to prevent designers from escaping responsibility.
dc.language: English
dc.publisher: Association for Computing Machinery
dc.title: The Conflict Between Explainable and Accountable Decision-Making Algorithms
dc.type: Conference
dc.type.rims: CONF
dc.citation.beginningpage: 2103
dc.citation.endingpage: 2113
dc.citation.publicationname: 5th ACM Conference on Fairness, Accountability, and Transparency, FAccT 2022
dc.identifier.conferencecountry: KO
dc.identifier.conferencelocation: Seoul
dc.identifier.doi: 10.1145/3531146.3534628
dc.contributor.localauthor: Cha, Meeyoung
dc.contributor.nonIdAuthor: Lima, Gabriel
dc.contributor.nonIdAuthor: Grgić-Hlača, Nina
dc.contributor.nonIdAuthor: Jeong, Jin Keun
Appears in Collection: CS-Conference Papers (학술회의논문, "conference papers")
Files in This Item: There are no files associated with this item.
