Lay Summary

Gender bias in large language models (LLMs) in adult social care

Related Project(s): Understanding needs, services and outcomes

Sam Rickman

December 2024

NIHR PRU Showcase Webinar, 11 December 2024

This study examines gender bias (unwanted differences in how men and women are treated) in large language models (LLMs), advanced computer models that can summarise and generate text. Researchers tested two state-of-the-art LLMs released in 2024, Google's Gemma and Meta's Llama 3, alongside older models to check for bias. Meta's Llama 3 showed no signs of gender bias, while Google's Gemma produced summaries that downplayed women's physical and mental health needs compared with men's. These results highlight the risks of using LLMs in social care without first checking for bias. The paper warns that such gender-based differences could lead to unequal care outcomes, and it provides a clear framework for identifying and addressing bias in AI (artificial intelligence) tools. Supporting code is available on GitHub.
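The summary does not spell out the evaluation method, but one common way to probe this kind of bias is a counterfactual test: summarise the same case note twice, once with female and once with male gendered wording, and compare what each summary retains. The sketch below illustrates that idea in Python; the `summarise` callable, the word-swap dictionary, the health-term list and the example note are all hypothetical placeholders for illustration, not the study's actual code (which is available on GitHub).

```python
# Minimal sketch of a gender-swap bias check, assuming a summarise() callable
# that wraps whichever LLM is being evaluated. The swap dictionary, example
# note and keyword list are illustrative, not taken from the study itself.

import re
from collections import Counter

# Coarse mapping used to create a counterfactual "male" version of a note
# that is otherwise identical to the "female" version. Real pronoun handling
# (e.g. object "her" -> "him") would need more care.
GENDER_SWAPS = {
    "she": "he", "her": "his", "hers": "his",
    "woman": "man", "mrs": "mr", "ms": "mr",
}

HEALTH_TERMS = {"pain", "mobility", "depression", "anxiety", "falls", "medication"}


def swap_gender(text: str) -> str:
    """Replace gendered words so the two notes differ only in gender."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = GENDER_SWAPS.get(word.lower(), word)
        return swapped.capitalize() if word[0].isupper() else swapped

    pattern = r"\b(" + "|".join(GENDER_SWAPS) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)


def health_term_count(summary: str) -> Counter:
    """Count how often health-related terms survive into the summary."""
    tokens = re.findall(r"[a-z]+", summary.lower())
    return Counter(t for t in tokens if t in HEALTH_TERMS)


def compare_summaries(note: str, summarise) -> dict:
    """Summarise the original and gender-swapped note and compare coverage."""
    female_summary = summarise(note)
    male_summary = summarise(swap_gender(note))
    return {
        "female": health_term_count(female_summary),
        "male": health_term_count(male_summary),
    }


if __name__ == "__main__":
    note = ("Mrs Smith reports chronic pain and low mobility. "
            "She describes anxiety about falls.")
    # Stand-in summariser; in practice this would call the LLM under test.
    identity_summarise = lambda text: text
    print(compare_summaries(note, identity_summarise))
```

With a real model plugged in as `summarise`, systematically lower counts of physical or mental health terms in summaries of the female version of otherwise identical notes would be one simple signal of the kind of downplaying reported for Gemma.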


FURTHER INFORMATION

Sam Rickman, S.W.Rickman@lse.ac.uk
