Artificial intelligence in healthcare: social and ethical challenges

Scotland’s Artificial Intelligence Strategy aims to promote the use of “trustworthy, ethical and inclusive” AI technologies. But what do these principles mean in practice when AI is used in healthcare?

This blog discusses some of the ethical and social challenges that arise from the use of AI in healthcare. It draws on the recently published SPICe Briefing on Artificial Intelligence and Healthcare in Scotland.

This is the third blog in a series of publications on AI from SPICe. The first blog looked at what AI is and how it works. The second explored how AI could be used in NHS Scotland.

Safety

Like any other healthcare technology, AI raises questions about safety. AI tools, if designed or used poorly, could lead to health-related harm. However, AI could also improve patient safety, for example by reducing communication errors between clinicians.

At the time of writing, the AI tools used and tested in the Scottish healthcare sector are generally designed to assist humans, rather than work independently. Therefore, the safe use of these tools relies on human involvement. 

To make sure medical devices are safe, they are regulated. In the UK, this is the responsibility of the Medicines and Healthcare products Regulatory Agency (MHRA), and the regulation of medical devices is a reserved matter. The MHRA is currently conducting the Software and AI as a Medical Device Change Programme. This programme aims to:

  1. reform the existing regulations relating to software (including AI) as a medical device, and 
  2. consider the challenges that AI poses over and above traditional software.

Therefore, it is anticipated that the regulatory environment for healthcare AI technologies will change in the coming years.

Explainability

While issues relating to safety can arise with all technologies, AI tools also bring some new challenges. One issue that relates particularly to machine learning is the potential lack of explainability (or interpretability).  

Unlike traditional software or simple, rule-based AI, machine learning does not rely on explicitly programmed rules. Instead, machine learning algorithms use statistical methods to infer their own rules from a large set of training data. This means that it is not always possible for developers to explain how a complicated machine learning algorithm has reached its output. Although AI might get the right answer, we might not know how it got there. 
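To make the difference concrete, here is a minimal, hypothetical sketch in Python (not from the briefing or the strategy). It contrasts an explicit, human-readable rule with a model trained using the scikit-learn library, whose decision logic is spread across thousands of learned split points rather than written down by a developer.

```python
# A hypothetical sketch: an explicit rule versus a learned one.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Rule-based logic: the decision criterion is written down and fully inspectable.
def rule_based_flag(systolic_bp: float) -> bool:
    return systolic_bp > 140  # an explicit, human-readable rule

# Machine learning: the model infers its own decision boundary from
# (here, synthetic) training data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The model makes predictions, but its "rule" is spread across 100 trees
# and thousands of split points; there is no single line to point at.
print(model.predict(X[:5]))
```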

In the context of healthcare, the importance of explainability is debated. Some scholars worry that the lack of explainability can: 

  • make it difficult to resolve disagreement between AI tools and human clinicians 
  • disempower patients by making decisions about care harder to understand and challenge 
  • raise legal problems with determining responsibility. 

Others point out that explainability sometimes comes at the cost of lower accuracy. They also note that there are many things in medicine (such as paracetamol) that we routinely use despite not fully understanding the underlying mechanism. This is because their safety and efficacy have been shown in practice.

Generalisability and bias

Another technical feature of AI systems that can create new challenges is that their performance depends heavily on the data they were trained on. This means that even very good performance in one setting may not be transferable (or generalisable) to other settings, and that AI systems can be biased.

How generalisable an AI tool is has important implications for its use in healthcare. An AI tool trained with data from one part of the world may not work as effectively when used in another. For example, a retrospective study by researchers in Aberdeen found that, before being calibrated with local data, an AI breast cancer detection tool called MIA would have produced a very high recall rate in Scotland, meaning it would have called back an unusually large proportion of women for further tests. This was the case even though MIA was created by a UK-based company, Kheiron Medical Technologies.
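As a rough illustration of the underlying issue (a hypothetical sketch with synthetic data, not the Aberdeen study), the snippet below trains a simple scikit-learn model on data from one imagined ‘site’ and evaluates it at another site whose data follow a shifted distribution. Accuracy drops at the second site, where the model flags far too many cases as positive, loosely analogous to an uncalibrated tool producing a very high recall rate.

```python
# A hypothetical sketch of poor generalisability under distribution shift.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "site A" training data: positives sit above a threshold of 0.
X_a = rng.normal(loc=0.0, scale=1.0, size=(1000, 5))
y_a = (X_a[:, 0] + 0.5 * rng.normal(size=1000) > 0.0).astype(int)

# Synthetic "site B" data: the same task, but the distribution has shifted.
X_b = rng.normal(loc=1.5, scale=1.0, size=(1000, 5))
y_b = (X_b[:, 0] + 0.5 * rng.normal(size=1000) > 1.5).astype(int)

model = LogisticRegression().fit(X_a, y_a)
print("accuracy at site A:", model.score(X_a, y_a))  # high
print("accuracy at site B:", model.score(X_b, y_b))  # noticeably lower
```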

Algorithmic bias refers to systematic and repeatable features of AI systems that create unfair outcomes for specific individuals or groups, often amplifying existing inequality or discrimination. It can be caused by imbalanced or incomplete training data, as well as by decisions made during data collection and algorithm design.

For example, an AI system for diagnosing skin cancer that is less accurate for patients with darker skin, because it was trained mainly on images of patients with lighter skin, exhibits algorithmic bias. Similar examples can be found for bias relating to gender, age, sexuality and many other dimensions.
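One simple way to surface this kind of bias is to report a model’s performance for each group separately, rather than as a single overall figure. The sketch below uses made-up, illustrative numbers (not real clinical data) to show how a respectable overall accuracy can hide much worse performance for one group.

```python
# A hypothetical bias audit: overall accuracy can hide poor
# performance for an under-represented group.
import numpy as np
from sklearn.metrics import accuracy_score

# Illustrative labels and model predictions for ten patients (not real data):
# the first five are from one skin-tone group, the last five from another.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
group = np.array(["lighter"] * 5 + ["darker"] * 5)

print("overall:", accuracy_score(y_true, y_pred))  # 0.7 looks acceptable...
for g in ("lighter", "darker"):
    mask = group == g
    # ...but per-group accuracy shows the model fails one group far more often.
    print(g, accuracy_score(y_true[mask], y_pred[mask]))
```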

Privacy and data protection

Developing a useful AI tool requires large amounts of data. Development usually involves private companies, and data sharing in healthcare is a highly sensitive, heavily regulated area. Together, these factors raise questions about data protection and privacy.

Data protection is a reserved matter. The Information Commissioner’s Office (ICO) is responsible for promoting and enforcing data protection legislation. The ICO has outlined its thinking on AI and data protection in a 2017 discussion paper, titled Big data, artificial intelligence, machine learning and data protection.  

A key point made by the ICO is that while AI technologies are new, the way they need to use personal data is not. The existing data protection legislation in the UK therefore provides a strong framework for AI. One central feature of this framework is the principle of ‘privacy by design’, which developers of new technologies must follow as a legal requirement.

In Scotland, the Data Safe Havens give academic and industry partners access to anonymised NHS health records for research and innovation purposes, when it is not practical to obtain consent from individual patients. Their operating principles are set out in the Safe Haven Charter. An important principle is that the Safe Havens maintain NHS ownership of the data.

NHS Safe Havens, like all UK trusted research environments, follow the Five Safes framework to protect data. The Five Safes are: 

  1. Safe People: researchers accessing data are subject to an appropriate accreditation process  
  2. Safe Projects: data must be used ethically and for the public benefit  
  3. Safe Settings: the physical spaces used to access data are secured and monitored  
  4. Safe Data: researchers only access the data necessary for their project  
  5. Safe Outputs: research outputs are checked to make sure individuals cannot be identified.

Where can I learn more?

As AI tools become more common, the vision of “trustworthy, ethical and inclusive” AI remains important. The Scottish Government has outlined much of its more detailed work towards this vision in the Digital Health and Care Strategy 2021 and Scotland’s Data Strategy for Health and Social Care 2023. These strategies relate to all digital technologies, not just AI. 

The use of AI will raise many challenges above and beyond the ones discussed here. The Alan Turing Institute has produced a comprehensive guide, Understanding artificial intelligence ethics and safety, for the public sector. It includes further information on ethical issues relating to AI, as well as guidance on how to manage them. 

Karri Heikkinen, Researcher, Health and Social Care Team, SPICe