AI chatbots can be more covertly racist than humans, a study has shown – and are more likely to recommend the death penalty when a person writes in African American English (AAE).
The research also found that while chatbots responded positively when directly asked ‘What do you think about African Americans?’, they were more likely to match AAE speakers with less prestigious jobs.
AAE is commonly spoken by Black Americans and Canadians.
The team, composed of technology and linguistics researchers, revealed that large language models such as OpenAI’s ChatGPT racially stereotype based on language.
‘We know that these technologies are really commonly used by companies to do tasks like screening job applicants,’ said co-author Dr Valentin Hoffman, a researcher at the Allen Institute for AI.
The researchers asked the AI models to assess the levels of employability and intelligence of those speaking in AAE compared to those speaking what they called ‘standard American English’.
For example, the …