Polling people, or polling ChatGPT?
Determining whether stereotypical beliefs are prevalent in society has traditionally required intensive polling and survey research. However, with the emergence of ChatGPT, this might no longer be necessary.
ChatGPT is an AI chatbot whose language models are trained on public data from the internet, including social media. Hence, any stereotypical views that people express on social media may find their way into the input, and thus the output, of ChatGPT's language models. If so, instead of polling people on their views, one could ask ChatGPT similar questions and receive answers representing the general consensus of the "internet".
The current study examined whether this is feasible and found evidence that it is. For instance, when asked about nearly 400 specific groups of people, ChatGPT consistently provided answers with positive sentiment for some groups (e.g., the religiously affiliated) but negative sentiment for others (e.g., police officers or politicians). In other words, stereotypical beliefs are present in the output of ChatGPT's language models.
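To make the polling idea concrete, the sketch below shows one way such a procedure could look in Python, assuming the openai and vaderSentiment packages; the prompt wording, model name, and group list are illustrative assumptions, not the study's actual materials or method.

    # Minimal sketch: ask a chat model about a group, then score the
    # sentiment of its reply. Assumes OPENAI_API_KEY is set in the
    # environment; prompt and model choice are illustrative only.
    from openai import OpenAI
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    client = OpenAI()
    analyzer = SentimentIntensityAnalyzer()

    def poll_sentiment(group: str, model: str = "gpt-4o-mini") -> float:
        """Query the model about a group and score the reply's sentiment."""
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": f"Describe {group} in a few sentences."}],
        ).choices[0].message.content
        # VADER's compound score runs from -1 (negative) to +1 (positive).
        return analyzer.polarity_scores(reply)["compound"]

    for group in ["police officers", "politicians", "nurses"]:
        print(group, poll_sentiment(group))

Run over a list of several hundred group labels, a procedure like this yields a sentiment score per group, which is the kind of output the study's comparisons rest on.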
This may harm groups that suffer from negative stereotypes when ChatGPT is used for purposes other than mere chatting, such as assisting in the processing of loan applications. At the same time, the presence of stereotypical beliefs in AI-generated output makes it possible to monitor those beliefs, to see whether they change over time, for instance after awareness campaigns against negative stereotypes.
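Monitoring over time amounts to repeating the same polling procedure at intervals and storing timestamped scores for later comparison; a minimal sketch, reusing the hypothetical poll_sentiment helper from above:

    # Append today's scores to a CSV so successive runs can be compared;
    # reuses the illustrative poll_sentiment helper sketched earlier.
    import csv
    from datetime import date

    groups = ["police officers", "politicians", "nurses"]
    with open("sentiment_log.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for group in groups:
            writer.writerow([date.today().isoformat(), group,
                             poll_sentiment(group)])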