Explaining the Black Box: when law controls AI
The explainability of Artificial Intelligence algorithms, in particular Machine-Learning algorithms, has become a major concern for society, and policy-makers across the globe are starting to respond to it.
In Europe, the High-Level Expert Group on AI has proposed seven requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
On that basis, in its White Paper on AI the Commission proposed six types of requirements for high-risk AI applications: ensuring the quality of training data; keeping data and records of the programming of AI systems; proactively providing information to various stakeholders (transparency and explainability); ensuring robustness and accuracy; providing human oversight; and further specific requirements for certain AI applications, such as those used for remote biometric identification. In both documents, transparency and explainability are thus considered key. Accordingly, Europe has adopted several new obligations specific to automated systems (and thus to AI), in particular in data protection and consumer protection rules, to enhance the explainability of algorithmic decisions.
This Issue Paper deals with various aspects of AI explainability obligations:
- the different meanings of explainability, in particular by contrasting the legal and the computer-science understandings of the term;
- the European AI-specific obligations imposing explainability on the operators of such systems;
- the rationale of the above-mentioned rules;
- how these obligations can be implemented with different Machine Learning techniques.
The paper also lists a series of issues for further discussion.