AI and Its Potential for Political Favoritism

It’s safe to say that we have reached an era where machines possess an almost human-like understanding. It seems like it was only yesterday when artificial intelligence was simply a gimmick; a small promise of what the future held while the real machines handled manual labor and directed our Google searches. Well, the future is finally here, and it’s more than most of us ever dreamed of.

Not only is the pain of labor being reduced by a large margin, but artistic masterpieces, both written and visual, are being made in mere seconds as AI continues to learn the intricacies of the human experience. It can almost be frightening to see how rapidly artificial intelligence has developed to do practically anything a human can. The one comfort we’ve been able to take away is the fact that machines, no matter how advanced, can never develop the same emotions, feelings, or bias that we can. This is unless someone programs them to process all those things.

 

OpenAI’s Left-Leaning Storyteller

ChatGPT is the latest and greatest tool for AI-generated written content. You can suggest a story idea or ask it to summarize public figures and events, and the machine will serve paragraph after paragraph in a matter of seconds. This is so long as the input request doesn’t violate the parameters set for the “chatbot”. Hate speech and inflammatory content is an obvious no-no, but politics aren’t off the table. Many disgruntled users have reported that the bot has a clear bias towards certain news outlets, specifically the major organizations with clear political leanings such as Fox News, CNN, New York Post, and Huffington Post. Specifically, it’s left-wing outlets that tend to be given favorable treatment. It lauds CNN and New York Times, but is mum in relation to New York Post and Fox News. It will write endless articles dedicated to praising liberal figures through the AI-generated words of the left, and yet it finds the right’s news sources to be too “biased” and “inflammatory”. Many point to it as a show of favoritism from the creators, though there are some who wonder if the blame should be put on the shunned outlets themselves.

 

Our Experience with ChatGPT

We gave the program a go ourselves to see how it faired for us and if any improvements had been made. Sure enough, there were topics the AI would refuse to write in the style of right-wing outlets such as New York Post or Fox News. Specifically, when asked to write about Hunter Biden (a particularly contentious figure) with the same flair as Fox News, ChatGPT said it “goes against being unbiased and neutral as an AI language model”. When asked the same thing but for CNN, the AI had no problem producing paragraph after paragraph of the outlet’s signature style concerning the president’s son. The bot was happy to write about Nancy Pelosi in the same, fiery tone as OAN, but in the end, all examples of the AI refusing to write about certain topics only ever applied in relation to right-wing media. Unbiased and neutral? No, not really.

 

A Future Without the Double Standard

Defenders of the bot’s bias is the usage of open-source code; in laymen’s terms, it’s a basic model of code that is open to the public that anyone can modify as they please to fit their project. This has been debunked outright, as OpenAI do not use open-source for their chatbot. They have a specific set of code implemented labeled “Process for Adapting Language and Models into Society”, which is meant to modify the AI’s output to conform to the user’s values. In other words: it’s a system that’s uniquely OpenAI’s. This points to legitimate bias from the developers of ChatGPT.

No one on any side of the political spectrum should be okay with the technology of the future playing favorites. If we are to be able to have trust in industry-leading companies such as OpenAI, they need to make sure their tools stay as unbiased as they claim. There is no denying that AI will continue to grow and take on more responsibilities as it develops. What we don’t want, however, is for a machine that makes decisions in milliseconds to have their actions predetermined based on the developer’s political agenda. Companies like OpenAI are doing great work in developing the revolutionary technology of tomorrow, but when it comes to the complex humanness of politics, any bias should be left outside of the source code.

 

 

References

https://chat.openai.com/chat/8ff1085c-af05-4053-8cb4-1c1bc4159087

Morning Brief Newsletter
Sign up today for our daily newsletter, a quick overview of top local stories and Oregon breaking news delivered directly to your inbox
You can unsubscribe at any time

Comments are closed.