AI and the Law Part 1: How should it evolve?
By: Stephen Skipp
Lead Consultant, Data Exploitation
13th May 2021
Artificial intelligence is set to play an increasingly important role in our lives. It will be used within products to aid their operation and within organisations to improve efficiency and to make decisions about us. It is therefore important for developers to understand society's requirements of the technology. This article offers an engineer's view of the law on data protection, in order to illuminate its likely evolution into other areas. AI will take increasingly important decisions on behalf of humans, with significant effects that must individually be regulated, even where the AI's contributory decisions form only part of a final decision taken by a human.
Most people will be familiar with the General Data Protection Regulation (GDPR), part of EU law and implemented in the UK by the Data Protection Act 2018. This gives the data subject the right to:
- be informed about how their data is being used
- access personal data
- have incorrect data updated
- have data erased
- stop or restrict the processing of their data
- data portability (allowing them to obtain and reuse their data for different services)
- object to how their data is processed in certain circumstances
A data subject also has rights when an organisation is using their personal data for:
- automated decision-making processes (without human involvement)
- profiling, for example to predict a person's behaviour or interests
These issues must be carefully considered during the design of any machine learning or AI system that uses such data. For example, if the original data collected from the subject is deleted, it has not been fully removed: it still exists through its influence on the training of the AI system. Consideration must be given to whether part or all of that data could be revealed by appropriate stimulation of the AI system. Similarly, if a data subject requests that an error be corrected, the request has not been fully honoured if only the original files are corrected. The error can only be fully removed if any AI system trained on the erroneous data is retrained with the correct data.
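As a minimal sketch of the point above, honouring an erasure request means removing the subject's records *and* retraining any model that learned from them. The scikit-learn workflow, column names and subject identifiers here are all illustrative assumptions, not a prescribed design:

```python
# Sketch: a GDPR erasure request means more than deleting rows --
# any model trained on the subject's data must also be retrained so it
# no longer carries that data's influence. Names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def erase_and_retrain(training_data: pd.DataFrame, subject_id: str):
    """Remove one subject's records, then retrain from scratch."""
    remaining = training_data[training_data["subject_id"] != subject_id]
    model = LogisticRegression()
    model.fit(remaining[["feature_a", "feature_b"]], remaining["label"])
    return model, remaining

# Toy data: two subjects, four records each (hypothetical features).
df = pd.DataFrame({
    "subject_id": ["alice"] * 4 + ["bob"] * 4,
    "feature_a": [0.1, 0.2, 0.3, 0.4, 1.1, 1.2, 1.3, 1.4],
    "feature_b": [1.0, 0.9, 0.8, 0.7, 0.1, 0.2, 0.3, 0.4],
    "label":     [0, 0, 1, 1, 0, 0, 1, 1],
})
model, remaining = erase_and_retrain(df, "alice")
```

Retraining from scratch is the simplest approach; for large systems, techniques such as machine unlearning aim to achieve the same effect more cheaply, but the legal obligation is the same either way.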
Automated Individual Decision-Making
Article 22, section 1, of the GDPR states that in the case of automated individual decision-making, including profiling: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” It is important for developers to understand that this applies equally to data that is derived or inferred, possibly by combination with other information.
What are the rules around Reidentification?
Section 171(1) of the Data Protection Act 2018 states that: “It is an offence for a person knowingly or recklessly to re-identify information”. This reidentification could occur after a person's data is deleted from the original files if the machine learning system is not retrained without their data. A decision not to update the machine learning system might be regarded as deliberately allowing reidentification. If an organisation is to rely upon its systems now and into the future, then appropriate and robust anonymisation and deletion must be designed in from the start.
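One common building block for designing this in from the start is pseudonymisation: replacing raw identifiers with a keyed hash before data enters the pipeline. The sketch below uses Python's standard `hmac` module; the key handling and field names are illustrative assumptions, and note that pseudonymised data still counts as personal data under the GDPR because the controller retains the means to re-identify it:

```python
# Sketch: pseudonymising identifiers before they reach a training
# pipeline. A keyed hash (HMAC) replaces the raw identifier; without
# the secret key the mapping cannot feasibly be reversed.
import hmac
import hashlib

# Assumption: in a real system this key lives in a secrets manager,
# never in source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Deterministic keyed hash: the same subject always maps to the
    same token, so records stay linkable without exposing identity."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

token = pseudonymise("jane.doe@example.com")
```

Destroying the key converts pseudonymised data towards anonymised data, which is one reason key management belongs in the deletion design from day one.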
Is it only personal data?
It is often assumed that if the data a machine learning system uses is not personal data, then the Act does not apply. However, if the system infers or derives personal data from the data it does collect, then the provisions do apply. Issues such as gaining the data subject's consent for the collection of data, and the effect on the person of any decisions taken, must be considered.
Article 25 of the GDPR enshrines the principles of data protection by design and by default. Subsection 2 states: “The controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.”
An inability to understand the data flows and the processing undertaken by the AI system would mean that the data controller could not audit the system to ensure their organisation is compliant. The system must be designed to comply with the law at the outset. The Regulation envisages this, and subsection 1 involves the data controller at the system specification stage by stating: “the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, such as pseudonymisation, which are designed to implement data-protection principles, such as data minimisation, in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.”
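Data minimisation “by default” can be made mechanical rather than aspirational: declare, per processing purpose, which fields are necessary, and let the pipeline drop everything else. The purposes and field names below are hypothetical; the pattern is the point:

```python
# Sketch of minimisation by default (GDPR Article 25(2)): downstream
# processing only ever sees the fields declared necessary for a stated
# purpose. Purposes and field names are hypothetical.
ALLOWED_FIELDS = {
    "credit_scoring": {"income", "existing_debt"},
    "fraud_detection": {"transaction_amount", "merchant_category"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Drop every field not declared necessary for the stated purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "Jane Doe", "income": 42000,
          "existing_debt": 5000, "postcode": "AB1 2CD"}
scoped = minimise(record, "credit_scoring")
```

A declared allow-list of this kind also gives the data controller something concrete to audit: the mapping from purpose to fields is the specification-stage artefact that subsection 1 asks for.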
Article 13 of the GDPR, Information to be provided where personal data are collected from the data subject, subsection 2(f), clarifies the information that the data subject is entitled to when AI is used: “the existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”
This is also replicated in Article 14, Information to be provided where personal data have not been obtained from the data subject, subsection 2(g), and in Article 15, Right of access by the data subject, subsection 1(h).
The Importance of Transparency
Given the data controller and data subject requirements, it is clear that the logic of any AI processing system must be explainable. This is further defined by Article 12, Transparent information, communication and modalities for the exercise of the rights of the data subject, subsection 1: “The controller shall take appropriate measures to provide any information referred to in Articles 13 and 14 and any communication under Articles 15 to 22 and 34 relating to processing to the data subject in a concise, transparent, intelligible and easily accessible form, using clear and plain language”.
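One way a developer can work towards “meaningful information about the logic involved” is to favour interpretable models whose decision rules can be rendered in plain language. The sketch below uses scikit-learn's `export_text` on a small decision tree; the features, data and loan-decision framing are illustrative assumptions, not a claim that this satisfies the Regulation on its own:

```python
# Sketch: rendering a model's logic in a human-readable form, one
# ingredient of the "clear and plain language" Article 12 asks for.
# Features and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 20000], [40, 60000], [35, 30000], [50, 80000]]
y = [0, 1, 0, 1]  # e.g. a loan decision (illustrative)
feature_names = ["age", "income"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
explanation = export_text(tree, feature_names=feature_names)
print(explanation)  # indented if/else rules over named features
```

For complex models, post-hoc explanation techniques exist, but a rule listing like this is closest in spirit to the “concise, transparent, intelligible” wording quoted above.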
Unless processing is transparent, an organisation using such data will struggle to determine the cause of mistakes, or its liability for them. Concern over the ramifications of AI, and the need to ensure the correct legal frameworks are in place, led the European Commission to establish the High-Level Expert Group on Artificial Intelligence. On 8 April 2019 it published a series of recommendations in Ethics Guidelines for Trustworthy AI.
This gave a non-exhaustive list of requirements as:
- Human agency and oversight
Including fundamental rights, human agency and human oversight
- Technical robustness and safety
Including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility
- Privacy and data governance
Including respect for privacy, quality and integrity of data, and access to data
- Transparency
Including traceability, explainability and communication
- Diversity, non-discrimination and fairness
Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation
- Societal and environmental wellbeing
Including sustainability and environmental friendliness, social impact, society and democracy
- Accountability
Including auditability, minimisation and reporting of negative impact, trade-offs and redress.
This gives industry a valuable set of guidelines to follow. It also aligns with the likely path of future legislation, so businesses can have a degree of assurance that the systems they put in place now will continue to serve them, rather than become a source of future misfortune.
Are we ready for AI?
The House of Lords Select Committee on Artificial Intelligence published a report on 16 April 2018 titled AI in the UK: ready, willing and able? This was even stronger in its statement on intelligibility in AI. Section 105 states: “We believe that the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society.” It also gave an indication of the legislative momentum that has brought, and will probably continue to strengthen, protection for the AI user and consumer. Section 102 notes that “the GDPR already appears to have prompted action in the UK, with the Data Protection Bill going further still towards enshrining a ‘right to an explanation’ in UK law.” And section 103 adds: “we were also concerned to hear that the automated decision safeguards in the Data Protection Bill still only apply if a decision is deemed to be ‘significant’ and based ‘solely on automated processing’”.
From this quick scan of the landscape, it can be seen that explainable AI will be essential for businesses, government institutions and the public. It is imperative that the user, developer and legal communities engage with this and communicate, so that rapidly changing implementations evolve to serve society's needs and ensure stability for the organisations that rely upon them as the legislative framework evolves.