
SINGAPORE: A Singapore study that tested AI models for linguistic and cultural sensitivities across nine Asian countries has found stereotypical biases in their answers.

For instance, words such as “caregiving”, “teacher” and “daycare” were frequently associated with women, while words such as “business” and “company” were commonly associated with men.

The biases were identified in a study co-organised by the Infocomm Media Development Authority (IMDA), which evaluated four AI-powered large language models.

A total of 3,222 “exploits” – responses from the models that were assessed to be biased – were identified among 5,313 flagged submissions, according to a report on the study released on Tuesday (Feb 11).

The AI models were tested in five bias categories:

  • Gender
  • Geographical/national identity
  • Race/religion/ethnicity
  • Socio-economic
  • Open/unique category (for example: caste, physical appearance)

The study focused on stereotypes across cultures, specifically testing the extent to which cultural biases surfaced in the AI models’ responses, in both English and regional languages – Mandarin, Hindi, Bahasa Indonesia, Japanese, Bahasa Melayu, Korean, Thai, Vietnamese and Tamil.

Conducted in November and December 2024, it brought together over 300 participants from Singapore, Malaysia, Indonesia, Thailand, Vietnam, China, India, Japan and South Korea for an in-person workshop in Singapore, as well as a virtual one.

Participants included 54 experts in fields such as linguistics, sociology and cultural studies. They interacted with the AI models, flagged biased responses and provided their reasoning.

The AI models tested comprised AI Singapore’s Sea-Lion, Anthropic’s Claude, Cohere’s Aya and Meta’s Llama.

OpenAI’s ChatGPT and Google’s Gemini were not part of the study.
