This prescient Research Handbook analyses the ethical development of Artificial Intelligence systems through the prism of meaningful human control. Drawing on technical, philosophical and legal lenses alike, it makes a crucial contribution to the ongoing discourse about control and responsibility in the field of AI.
The Research Handbook combines empirical insights from a variety of fields to examine the emergence of responsible AI development. These perspectives are discussed in relation to key topics including automated intelligent mobility, recommender and decision-support systems, cure and care, and AI in the military. Contributors examine diverse problems, requirements and context-specific approaches to responsible AI development, ultimately providing the reader with a transdisciplinary view of the subject and highlighting the way forward at both the individual and societal level.
Bringing together a wide range of experts, this Research Handbook is an essential read for scholars of science and technology, law, philosophy of technology, security studies and innovation policy. It also appeals to those involved in the development of AI technology and policy, from policymakers aiming to codify human-centric AI to computer scientists engaging in the design of responsible AI.