The launch of ChatGPT, which transformed the internet, has also significantly affected cryptocurrencies. Artificial intelligence can deliver serious benefits in every field, and smart automation could be a turning point for the economy. However, like every major transformation, this one carries risks. The latest report highlights the issue of regional discrimination.
Artificial Intelligence and Regional Discrimination
Virginia Tech, a university in the United States, has published a comprehensive report on ChatGPT. According to the report, the AI chatbot produces different outputs for different regions, and these differences point to regional discrimination.
Researchers at Virginia Tech found that ChatGPT's responses to requests for information on environmental justice issues varied sharply along regional lines. The fact that such information is more accessible in larger, more densely populated states raises further questions.
“In rural states like Idaho and New Hampshire, more than 90% of the population cannot access local-specific information.”
The report also includes a call from Kim, an instructor in Virginia Tech's Department of Geography, emphasizing the need for further research as biases are discovered.
“Although more work is needed, our findings reveal that there are geographical biases present in the ChatGPT model.”
ChatGPT and Equality
Extensive research shows that, beyond the political biases ChatGPT exhibits, its outputs also display regional disparities. While this is already evident across different regions of the USA, it could produce even sharper differences on a global scale.
It is concerning that users in certain regions cannot benefit equally from the artificial intelligence language model while others enjoy a better user experience. In the long run, this regional discrimination could even lead to the intentional spread of misinformation in certain regions.
Researchers from the United Kingdom and Brazil revealed on August 25 that large language models (LLMs) such as ChatGPT can produce outputs containing errors and biases that mislead readers.
This fuels concerns that language models on the internet fail to correctly filter out misinformation.