
Artificial intelligence (AI) is perhaps the most disruptive technology of our era, and debates over its reliability, explainability, and inherent risks are intensifying. How should AI be regulated or, at the very least, subjected to human supervision? While the major powers, especially the United States, China, and Europe, have initiated legislative processes to regulate AI, excessive regulation may undermine AI's innovative dynamics. We propose a framework for determining which systems require human oversight and what the regulatory implications might be. This article sheds light on AI's risks, supervision needs, and regulatory implications using a segmentation of AI systems along two axes: criticality (the degree of impact on individuals or institutions) and immediacy (the velocity of the system's response). Drawing on the theory of responsive regulation, we link this taxonomy of AI systems to the type of regulation that each AI category requires.