As Artificial Intelligence (AI) systems expand in autonomy, learning abilities, and capacity for intentional action, so does the risk of their engaging in harmful activities for which no human possesses the corresponding mens rea, i.e., actions that no human has planned or was even able to foresee ('hard AI crime'). How should the legal system respond to this gap in criminal liability? In this chapter, we make a threefold contribution. First, we define the gap by offering a taxonomy of AI-generated harms into 'easy', 'medium' and 'hard' cases, arguing that only the last category gives rise to a 'culpability gap'. Second, we classify and critically engage with the literature's responses to the question of who is to blame for such harm. Finally, we introduce our own novel approach to the 'hard AI crime' problem, which shifts the discussion from blame to deterrence and seeks to design an 'AI deterrence paradigm' inspired by the law and economics of criminal law.