AI Writing Tools

What impact will they have on the future of education?

Created by VoiceBox

Published on Feb 15, 2023

In January 2023, OpenAI’s writing tool ChatGPT – commonly known as a chatbot – passed an exam set by Ivy League professor Christian Terwiesch. It was a development that, while a milestone for the technology, muddies the waters around two things humanity holds dear: academia and creative expression. 

Chatbots – along with AI art generators – have raised many eyebrows since their mainstream arrival. ChatGPT and the recent release of Google’s Bard made headlines as tools that could transform the next phase of the digital world. Some see potential. Others see harm. 

“Once again AI is making us question the purpose of human intelligence, creativity, and expression.” - Young person

So what exactly are chatbots, and how do they work? Let’s begin first and foremost with the reminder that chatbots should be approached with curiosity, not fear. They were developed by humans, and they are used by humans. Even the neural networks behind them – known as large language models – are trained on colossal amounts of internet text, which, again, is only available because humans put the data there. A sceptic would say that chatbots aren’t creating anything new. They’re just recycling what’s already available. 

“AI doesn't come up with this stuff alone, it can't. So it must be plagiarised from somewhere.” - Young person

We had the chance to ask ChatGPT some questions related to VoiceBox, including what we should write about. Here’s a sample of what it came back with (which, we have to admit, is pretty spot-on):

“Technology: Topics related to the latest advancements in technology, such as artificial intelligence, robotics, the Internet of Things (IoT), and more.”

“Pop culture: Trending topics in popular culture, such as music, movies, and celebrities.”

So what we’re looking at is software that can answer complex questions. While chatbots are undeniably advanced in this area, question/answer technology has actually been around for years. Magic 8-Balls can technically answer any question you throw at them with a simple shake. At one point, Apple’s Siri could tell you where to hide a body, and even today, it answers back if you say something rude. 

But Magic 8-Balls and Siri can’t write an essay for you. Chatbots can. Because of this, we find ourselves face to face with a long list of ethical questions that no one yet knows the answers to. Some university campuses in the US are embracing chatbots, allowing students to use them for assignments provided a set of rules is followed, such as removing AI-generated text from word counts and citing its usage. But a number of public school districts in the US are banning the technology from their classrooms. VoiceBox are concerned that this case-by-case approach will place certain students at a disadvantage, and we are eager to see how governing bodies across the world tackle such a difficult debate. Here’s what some young people had to say about it:

“If an AI essay gets a better grade is it fair?” 

“Even if the student didn't steal the info themselves, they're not writing their own paper and facilitating plagiarism.” 

“So what if you use AI to help you write? People used to be against using calculators to help solve math problems because, as they used to tell us, ‘you aren’t always going to have a calculator in your pocket’ and look where we are now. We should embrace technology instead of continuing to do things the hard way.” 

“Doing the work is the point of the essay, if an AI does the work then you learned nothing.”

“If you’re just using it as a starting point but then using mostly your original writing then I think it’s okay but you shouldn’t just use it to write the essay for you.”

“It’s a waste of your education.”

“I think schools need to continue to adapt to technology. It’s going to be available in the ‘real world’ so why try and block us from it now? I think they should teach us how to use it as a starting point and then incorporate our own ideas, like they do when they teach us how to use outside sources for research.”

When we asked ChatGPT about fairness to students, its response was pretty diplomatic: 

“In the context of education, AI models like me can be a valuable tool for students by providing access to information and resources that can support their learning. However, it's also important for students to understand that AI models like me are not perfect, and the information I provide should be fact-checked and verified before it is used as a primary source. Additionally, students should be encouraged to think critically and independently, and to use AI models like me as a starting point for their own research and learning.”

Despite such a balanced answer to a very legitimate question, ChatGPT’s developers have made an arguably controversial decision that could impact students worldwide. On 1st February 2023, OpenAI announced it was trialling a $20 monthly subscription fee for users. While we understand that such complex programmes require funding, it raises some questions. A student who can afford to use chatbots may get a better grade than a student who can’t. Will the former be graded more harshly as a result? Is it fair that the latter can’t broaden their studies simply because they don’t have the funds? Will state/public schools face pressure to finance chatbots in order to keep up with their private counterparts? We will be watching closely to see how these integral questions are considered before the digital divide widens further. 

Here’s what ChatGPT said about the new subscription model: 

“Some people might view the charges as fair, given the significant costs and resources required to develop and maintain an AI language model like ChatGPT, while others might view them as unfair, given the potential benefits that the technology can provide. Ultimately, the fairness of the charges is a subjective matter that depends on the perspective and situation of each individual.”

As a tech-positive enterprise, we urge decision-makers to see chatbots as an opportunity, rather than a threat. Blocking and filtering new technology will not equip children and young people with the skills to thrive throughout school, university and work. If chatbots are here to stay, they need to be accepted and regulated appropriately to provide opportunities and limit harm.

“It’s about adaptation. The internet didn’t exist till the 90s, and education adapted to that.” - Young person
