Will Punishing Robots Become Imperative in the Future?

Cited 2 times in Web of Science · Cited 3 times in Scopus
The possibility of extending legal personhood to artificial intelligence (AI) and robots has raised many questions about how these agents could be held liable under existing legal doctrines. To promote a broader discussion, we conducted a survey (N=3315) asking online users for their impressions of electronic agents' liability. The results suggest the existence of what we call the punishment gap: the public demands that automated agents be punished for legal offenses, even though punishing them is currently not feasible. Participants were also reluctant to grant assets or physical independence to electronic agents, both of which are crucial requirements for liability. We discuss possible solutions to this punishment gap and present how legal systems might handle this contradiction while keeping existing legal persons liable for the actions of automated agents.
Publisher
ACM Conference on Human Factors in Computing Systems
Issue Date
2020-04-26
Language
English
Citation

2020 CHI Conference on Human Factors in Computing Systems, CHI 2020

DOI
10.1145/3334480.3383006
URI
http://hdl.handle.net/10203/277521
Appears in Collection
STP-Conference Papers; CS-Conference Papers
Files in This Item
There are no files associated with this item.
