Abstract
This poster offers a critical examination of racial and gender biases in leading text-to-image generative AI platforms: OpenAI’s DALL·E, Midjourney, and Stable Diffusion. The study methodically analyzed each platform’s output in response to prompts designed to generate realistic human portraits, revealing significant disparities in racial and gender representation. The results showed a stark racial bias in depictions of men, with a predominant skew toward white representation across all platforms; Midjourney exhibited the strongest bias, followed by DALL·E and Stable Diffusion, which produced marginally more diverse output. The bias persisted in portrayals of women: Midjourney generated exclusively white female images, while DALL·E and Stable Diffusion showed slightly more varied, yet still unbalanced, distributions. A notable gender bias emerged when prompts did not specify gender, with Stable Diffusion and DALL·E heavily favoring male images; Midjourney, by contrast, presented a more balanced gender distribution. These findings highlight the urgent need to rectify these biases so that AI platforms represent all races, genders, and ethnicities fairly and inclusively. The poster calls for diversified training datasets, thorough bias testing, and greater transparency in AI development. This research not only uncovers prevailing prejudices in AI systems but also advocates for a future in which AI mirrors the diversity of the human population it serves.
Presenters
Danne Woo, Assistant Professor, Design / Art, Queens College, CUNY, New York, United States
Details
Presentation Type
Poster
Theme
Digital Media
KEYWORDS
AI, Bias, Racial, Gender, Artificial Intelligence, Generative AI, Text-to-Image, Data