10 Key Takeaways from the King’s College London Policy Institute "The Use of AI in Healthcare" report
The report from the King’s College London Policy Institute, titled "The Use of AI in Healthcare" (and related studies by the KCL Institute for AI), outlines the current landscape of artificial intelligence within the NHS and the broader UK health sector.
Below are 10 key points summarised from the report and its findings:
1) Rise of "Self-Diagnosis" via AI: One in seven people (15%) have used AI chatbots for health advice instead of contacting a GP. The primary drivers are convenience (46%), curiosity (45%), and uncertainty about whether their health concern is serious enough for a doctor.
2) Mental Health Support Gap: Approximately 10% of the public have used AI for mental health therapy or wellbeing support instead of seeing a trained professional. While 53% of users found it helpful, 42% of the general public still believe AI is "bad" for mental health overall.
3) Efficiency in Diagnostics: Clinical trials (specifically in sonography) found that AI-assisted scans could identify abnormalities almost twice as quickly (reducing scan time by over 40%) without sacrificing accuracy, allowing staff to spend more time on direct patient communication.
4) Public Demand for Regulation: Three-quarters (76%) of the public believe AI tools in patient care should be officially approved and regulated, even if this slows down their adoption. There is a clear consensus that "doctors should not be able to choose AI tools freely" without oversight.
5) Desire for Opt-Out Rights: Majorities of the public (58–63%) believe they should be informed in advance if AI is being used for reading test results, reviewing X-rays, or deciding queue priority, and should have the right to opt out of its use.
6) Accountability Concerns: When AI makes an error (e.g., missing a problem in a clinical image), 34% of the public believe the doctor using the tool should be held responsible, while 24% believe the NHS Trust is accountable. Only 6% would primarily blame the company that developed the AI.
7) Data Privacy vs. Training: Public comfort with data sharing is conditional. 47% feel uncomfortable with their records being used to train AI if the data identifies them personally, but comfort increases significantly if the data is anonymised.
8) Digital Maturity Gap: While 86% of NHS organisations have electronic patient records, only 20% are considered "digitally mature" enough to scale AI effectively. Infrastructure issues such as poor Wi-Fi and outdated hardware remain significant "blockers."
9) Workforce Anxiety: While 76% of NHS staff support using AI for care, 65% worry that AI will make them feel more distant from patients. A "tech novice" gap in the workforce means significant upskilling and training are required.
10) Funding Challenges: Funding remains the top barrier to transformation. Recent cuts to the NHS AI Lab budget (from £250 million to £139 million) have complicated the ability to scale AI initiatives beyond the pilot phase.
Source: https://www.kcl.ac.uk/policy-institute/assets/use-of-ai-in-healthcare.pdf