This book proposes three liability regimes to address the widening responsibility gap created by AI systems: vicarious liability for autonomous software agents (actants); enterprise liability for inseparable human-AI interactions (hybrids); and collective fund liability for interconnected AI systems (crowds).
Drawing on information technology studies, the book first develops a threefold typology that distinguishes individual, hybrid and collective machine behaviour. A subsequent social-science analysis specifies the socio-technical configurations underlying this typology and theorises the social risks each poses when deployed in social practice: actants raise the risk of digital autonomy, hybrids the risk of double contingency, and crowds the risk of opaque interconnections. The book demonstrates that it is these specific risks to which the law must respond: by recognising personified algorithms as vicarious agents, human-machine associations as collective enterprises, and interconnected systems as risk pools, and by developing corresponding liability rules.
The book relies on a distinctive combination of information technology studies, sociological configuration and risk analysis, and comparative law. This approach uncovers recursive relations between types of machine behaviour, emergent socio-technical configurations, their concomitant risks, the legal conditions of liability rules, and the ascription of legal status to the algorithms involved.