AI and the Law Part 2: The Explainability of AI
By: Stephen Skipp
Lead Consultant, Data Exploitation
27th May 2021
In my previous blog, I summarised key points in the AI legislative landscape and indicated the importance of explainable AI to businesses, Government organisations and the public. But how do we go about making AI explainable? This appears particularly difficult when the leading edge of technology involves deep learning techniques: vast networks with potentially millions of parameters trained on massive datasets. This second blog asks whether the law gives any guidance on the definition of, or limits on, what counts as an acceptable explanation from an AI system.
The House of Lords Select Committee on Artificial Intelligence published a report on 16 April 2018 titled AI in the UK: ready, willing and able?, in which section 104 states: “The style and complexity of explanations will need to vary based on the audience addressed and the context in which they are needed.” Section 105 states: “We believe that the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society. Whether this takes the form of technical transparency, explainability, or indeed both, will depend on the context and the stakes involved”. These statements indicate that the level of explanation should be appropriate to the users and the situation in which the system is being used.
Article 5 of the GDPR defines the principles relating to the processing of personal data, and section 1(a) states that personal data shall be “processed lawfully, fairly and in a transparent manner in relation to the data subject”. This means that the information given to the data subject should be clear and understandable to them. The benchmark is not a specialist in that business, nor a data scientist: the explanation must be appropriate to the user in their situation.
Is it practical to implement?
The GDPR gives the data subject a right to an explanation, but it also provides the data controller with some protection from excessive requests. Article 12 (Transparent information, communication and modalities for the exercise of the rights of the data subject), section 5 states: “Where requests from a data subject are manifestly unfounded or excessive, in particular because of their repetitive character, the controller may either:
(a) charge a reasonable fee taking into account the administrative costs of providing the information or communication or taking the action requested; or
(b) refuse to act on the request.”
Section 60 of the UK Data Protection Act 2018 indicates that the expected level of explanation is one that provides clarity. It states: “information may be provided in combination with standardised icons in order to give in an easily visible, intelligible and clearly legible manner, a meaningful overview of the intended processing.”
For the avoidance of doubt, and at the expense of repetition, a near-identical provision appears in the GDPR, where Article 12, section 7 states: “The information to be provided to data subjects pursuant to Articles 13 and 14 may be provided in combination with standardised icons in order to give in an easily visible, intelligible and clearly legible manner a meaningful overview of the intended processing.”
The legislation clearly indicates that the explanation provided is expected to be brief and clear within the context of the AI’s use. Developers of AI systems should therefore favour techniques that base their decisions on the same models and concepts found in the natural discourse of the system’s users.
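As a minimal sketch of what this could look like in practice, the hypothetical loan-affordability check below returns its decision together with reasons phrased in the applicant’s own vocabulary (income, repayments, missed payments) rather than in model internals. The `Applicant` class, the `assess` function and the thresholds are all illustrative assumptions, not part of any real system or cited guidance.

```python
from dataclasses import dataclass


@dataclass
class Applicant:
    monthly_income: float
    monthly_repayment: float
    missed_payments_last_year: int


def assess(applicant: Applicant) -> tuple[bool, list[str]]:
    """Return a decision plus plain-language reasons for it.

    The thresholds (40% affordability, 2 missed payments) are
    purely illustrative.
    """
    reasons: list[str] = []
    approved = True

    # Each rule is stated in terms the applicant already understands.
    ratio = applicant.monthly_repayment / applicant.monthly_income
    if ratio > 0.4:
        approved = False
        reasons.append(
            f"The repayment would take {ratio:.0%} of your monthly income, "
            "which is above the 40% affordability limit."
        )
    if applicant.missed_payments_last_year > 2:
        approved = False
        reasons.append(
            f"{applicant.missed_payments_last_year} missed payments in the "
            "last year exceeds the limit of 2."
        )
    if approved:
        reasons.append("All affordability and payment-history checks passed.")
    return approved, reasons
```

For example, `assess(Applicant(2000, 1000, 0))` declines the application and explains that the repayment would take 50% of monthly income. A deep-learning component could sit behind such rules, but the explanation surfaced to the data subject stays in their own terms.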
Is there a desire for explainability?
The House of Lords report helps guide the decision-making of AI developers and purchasers by stating that, from a legislative point of view, there is a preference for explainability over techniques that are impenetrable. Section 105 states: “We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take. In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”
Given these requirements and constraints on the explanation, organisations seeking to use AI systems, and the developers of such systems, must choose technology appropriate to the importance of the decisions being taken and to the level of explanation required for the components of those decisions.