Hofstra Law Review
Abstract
A substantial body of literature has emerged around concerns that machine learning and artificial intelligence systems are opaque, or "black boxes." The black box nature of A.I.-powered services and applications has created alarming risks in social life, including insecurity, mistrust, lack of accountability, and exacerbated bias and discrimination. Despite calls to open the black boxes, corresponding legal and regulatory measures tend to run aground due to their infeasibility, inefficacy, and ambiguity. This Article offers a unique perspective on the A.I. black box problem. Using systems theory as a heuristic tool, this Article views A.I. as a law-related system and A.I. regulations as interactions among many subsystems, each of which has its own purposes and operates according to its own logic. This Article argues that current Explainable A.I. (XAI) regulations fail because they do not adequately consider the relationships between these subsystems or create effective interactions among them. Drawing on global examples, the Article suggests that rather than merely attempting to "open" the black boxes, XAI regulations should focus on fostering dynamic, coherent interactions that align with the overall objectives of the A.I. system. Based on these observations about systems thinking in XAI regulation, this Article concludes that XAI regulation must establish clear and compatible communication frameworks. It proposes practical regulatory techniques, such as counterfactual explanations, controlled disclosures, and benchmarking, to enhance transparency and improve interactions between A.I. subsystems, thereby ensuring the robust operation of the entire A.I. system.
Recommended Citation
Xi, Ran (2025) "A Systems Approach to Shedding Sunlight on A.I. Black Boxes," Hofstra Law Review: Vol. 53: Iss. 2, Article 5.
Available at: https://scholarlycommons.law.hofstra.edu/hlr/vol53/iss2/5
