Technological advances present an ethical conundrum as ChatGPT triggers waves of conflict

This article could have been written by an AI. Your friend’s essay could have been written by an AI. Your mom’s dissertation could have been written by an AI. Has artificial intelligence finally crossed the line? Should people start worrying? The release of ChatGPT, a new OpenAI chatbot program, has people excited and also asking these hard-hitting questions. 

Junior Ethan Oey has insight on how this program works.

“ChatGPT is a chatbot that uses a type of machine learning called Natural Language Processing, which allows the program to understand human text,” he said. “It also uses Generative AI that allows the program to be able to produce text.”
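The generative process Oey describes can be sketched in miniature: the program repeatedly predicts a likely next word and appends it to the text so far. In this toy sketch, a small hand-written probability table stands in for the enormous model ChatGPT learns from internet text; the words and probabilities here are invented for illustration.

```python
# Toy next-word probability table (invented for illustration).
# A real system like ChatGPT learns billions of such relationships from data.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "program": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_words=5):
    """Greedily pick the most probable next word at each step."""
    words = [start]
    while words[-1] in next_word_probs and len(words) < max_words:
        probs = next_word_probs[words[-1]]
        words.append(max(probs, key=probs.get))
    return " ".join(words)

print(generate("the"))  # "the cat sat down"
```

Real chatbots sample from these probabilities rather than always taking the top word, which is part of why their answers vary from one run to the next.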

Michael Rogers, Assistant Professor of Computer Science at UWO, sees the program as a sophisticated application of advanced computation.

“It uses a lot of sophisticated mathematics; it builds these mathematical models by looking at large amounts of data,” he said. “It churns through a bunch of text from all over the internet as part of the process.”

This new program offers a direct insight into what AI is capable of. When prompted with a question, it is able to respond in a way that is structured so well that a reader may be unable to decipher if it was said by a human or an AI. It is also able to create pieces of writing that perform in a similar manner. Rogers sees this program being beneficial in some ways.

“It does a really good job of answering technical questions,” he said. “If you are trying to understand something and your Google searches aren’t working, you can just write a natural language sentence. It can be useful for anyone who needs to find the answer to something on a vast array of topics.”

Oey thinks that this can help many people, including himself.

“ChatGPT can help students that have learning disabilities, people who are learning a new language, and people who need new ideas,” he said. “This program could help me in Calculus by being able to explain how and why they got the answer.”

However, Oey also sees some downsides in ChatGPT.

“It is not always correct; sometimes the produced text isn’t necessarily written appropriately, and it does not truly understand human languages,” he said. “Its responses may be detailed, but that doesn’t mean it is an in-depth response. Most of the time, the responses are one-dimensional.”

Rogers generally sees the takeover of technology as something to be watchful for.

“Every time a new technology comes along, we lose the ability to do whatever it is that that technology does for us,” he said.

Rogers has seen this takeover of technology happen with many things in the past.

“I think the drawback is that we are going to be losing some really good talent; if you gave people nowadays a paper map they wouldn’t know what to do with it,” he said. “Most people probably can’t do basic arithmetic in their head anymore because of calculators.”

With technology becoming more powerful, Rogers thinks that fewer people understand how it actually works.

“The weird thing about it is that people don’t really know what it is doing when it’s writing,” he said. “They have made these mathematical models but they can't explain them and they are doing these calculations but there isn’t really any thinking behind them.”

Teachers also have something to worry about with this new program, and Rogers has a perspective on this.

“As an educator I worry about plagiarism,” he said. “You are going to be able to go to ChatGPT and say you need an essay, and it will generate something.”

According to English Department Chair Trent Scott, the solution to these fears has already been created.

“Obviously, this is something that came up in conversation in our department,” he said. “Mr. Phelps found an article on NPR about a Princeton student named Edward Tian who has already created an app to detect whether a student has used ChatGPT.”

The app, which analyzes a text for ‘perplexity’ and ‘burstiness’, can determine through complexity of thought and sentence structure whether human or artificial intelligence constructed a piece of writing. Tian, a double major in computer science and journalism, was angered by the number of students using the technology as a type of borrowed ladder.
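The exact formulas in Tian’s app are not public, but ‘perplexity’ has a standard definition in language modeling: how surprised a model is, on average, by each word of a text. Low perplexity means predictable, machine-like prose; high perplexity suggests a human. The sketch below uses that standard definition, plus a simple variance-of-sentence-lengths proxy for ‘burstiness’; the numbers and the burstiness formula are illustrative assumptions, not Tian’s actual method.

```python
import math

def perplexity(word_probs):
    """Perplexity = exp of the average negative log-probability a
    language model assigned to each word. Low = predictable (AI-like);
    high = surprising word choices (human-like)."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)

def burstiness(sentence_lengths):
    """One simple proxy: variance of sentence lengths. Human writers
    tend to mix short and long sentences; AI text is often uniform."""
    n = len(sentence_lengths)
    mean = sum(sentence_lengths) / n
    return sum((x - mean) ** 2 for x in sentence_lengths) / n

# Predictable words -> low perplexity; surprising words -> high.
print(perplexity([0.9, 0.8, 0.9]))   # ~1.16
print(perplexity([0.1, 0.2, 0.05]))  # ~10.0
print(burstiness([5, 5, 5]))         # 0.0 (uniform, AI-like)
```

A detector combining these signals would flag text that scores low on both measures as likely machine-generated.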

Scott sees the situation as another in a long line of loopholes separating humans from actual growth.

“I appreciate Tian’s immediate response and the offense he took to this abuse,” he said. “It is reassuring that there are academic watchmen on the turret just as concerned as we are here in the trenches. Humans will always be looking for that perfect way to game the system, but they need to understand that, in the end, they are only gaming themselves.”

AI in general often receives backlash, and some people think that it can take away jobs, but Oey doesn’t see this being an issue.

“I don’t believe that this program specifically is at risk of taking certain jobs,” he said. “If we are discussing AI and robots in general, some jobs can be reproduced by these AIs.”

Oey sees people as being at the forefront of all jobs, so he doesn’t find risk in the advancement of AI technology.

“There will always be a need for an actual person for difficult jobs, like doctors, programmers and more,” he said. “Furthermore, industries like entertainment and sports will not be at risk whatsoever as the whole intent is for people to display their skills.”

Amongst the controversy, people believe that there will be a time when it goes too far and takes over. Oey thinks that this issue is not in our future.

“There will never be a time where AI can cross human spaces or become too powerful,” he said. “AI will never be able to think for itself; that is the programmers’ job.”

However, Rogers understands the fear that some people have.

“If you look at how technology continues to progress, it’s scary what it will look like in a couple of years,” he said. “It could be good and it could be bad, but I have no doubt that they are only going to get better.”


by Fareeha Ahmad

Graphic by Jad Alzoubi

Published January 30, 2023

Oshkosh West Index Volume 119 Issue 4
