View publication

Large Language Models (LLMs) have made substantial progress in the past several months, shattering state-of-the-art benchmarks in many domains. This paper investigates LLMs' behavior with respect to gender stereotypes, a known stumbling block for prior models. We propose a simple paradigm to test the presence of gender bias, building on but differing from WinoBias, a commonly used gender bias dataset which is likely to be included in the training data of current LLMs. We test four recently published LLMs and demonstrate that they express biased assumptions about men and women, specifically those aligned with people's perceptions, rather than those grounded in fact. We additionally study the explanations provided by the models for their choices. In addition to explanations that are explicitly grounded in stereotypes, we find that a significant proportion of explanations are factually inaccurate and likely obscure the true reason behind the models' choices. This highlights a key property of these models: LLMs are trained on unbalanced datasets; as such, even with reinforcement learning with human feedback, they tend to reflect those imbalances back at us. As with other types of societal biases, we suggest that LLMs must be carefully tested to ensure that they treat minoritized individuals and communities equitably.

Related readings and updates.

Progress in natural language processing enables more intuitive ways of interacting with technology. For example, many of Apple’s products and services, including Siri and search, use natural language understanding and generation to enable a fluent and seamless interface experience for users. Natural language is a rapidly moving area of machine learning research, and includes work on large-scale data curation across multiple languages, novel…
Read more
Machine Translation (MT) enables people to connect with others and engage with content across language barriers. Grammatical gender presents a difficult challenge for these systems, as some languages require specificity for terms that can be ambiguous or neutral in other languages. For example, when translating the English word "nurse" into Spanish, one must decide whether the feminine "enfermera" or the masculine "enfermero" is appropriate…
Read more