ChatGPT Needs to Be Regulated Before It's Too Late

Halli Vickers, Contributing Writer

ChatGPT is an emerging technology with many positive uses. In fact, Ms. Emerson and Ms. Hallihan believe it is very interesting and could be helpful in some instances; nonetheless, ChatGPT already poses several risks. Although some may argue that the government cannot regulate this technology, I strongly disagree. A writer for the New York Times said, "Much like a good storyteller, chatbots have a way of taking what they have learned and reshaping it into something new–with no regard for whether it is true. The ChatGPT bots create problems on the inside." This comment alone demonstrates one reason that ChatGPT must be regulated before the software grows more capable and its problems worsen.

Our government already regulates many things in our country, from movie ratings and pharmaceuticals to workplace safety, firearms, and some online services. David Wong, an American writer, once said, "New technology is not good or evil in itself. It's all about how people choose to use it." ChatGPT's inventors may not have considered all of the software's drawbacks, but as time passes, this technology may become more powerful as well as more dangerous. We cannot assume that people won't start using ChatGPT for more dangerous purposes. If the government leaves this software unregulated, it could prove extremely destructive to our country.

Our own school has even managed to impose restrictions on online services; for example, our world language teachers forbid us from using Google Translate in class because it is a form of cheating. We have been learning foreign languages since middle school, and we have been just fine doing so without online help. Not only does our school regulate online services, but our country also regulates testing protocols for the SAT and ACT. Because ChatGPT makes cheating so simple, it could undermine these safeguards.

Academics are being disrupted by ChatGPT because it is often difficult to distinguish human writing from the bot's writing. Erik Gregersen, a writer for Britannica, notes that students can now ask the bot to write an essay for them, which is causing concern among teachers and professors; some teachers no longer want to assign take-home essays because of plagiarism. You may point out that websites exist to detect AI writing, but these tools are unreliable and easy to fool.

Introducing software such as ChatGPT can also be problematic and harmful for students with disabilities such as dyslexia. Early in their academic careers, these students need to build creativity and foundational skills through traditional learning, and leaning on the bot instead may undermine that growth. Students with dyslexia may also grow overly reliant on it, which raises concerns about how they will develop as students.

Not only is this software a threat to a child's education, but it is also a threat to our privacy. Without proper safety precautions, data collected by ChatGPT could be used for unauthorized purposes, such as targeted advertising, political manipulation, and identity theft. A person may have certain morals, values, and beliefs, but a robot certainly doesn't. A ChatGPT bot may respond to someone without enough context, leading to a dangerous or problematic answer. There have already been instances where a bot has responded to someone with sexist and racist language.

ChatGPT is an evolving technology that poses real risks to our daily lives. It is potentially dangerous and destructive, and we as a country have no idea what this software could turn into. Therefore, we need to regulate it before it's too late.