It would not be an exaggeration to state that ChatGPT has caused excitement and consternation in equal measure across the world. The chatbot is the latest model of GPT (Generative Pre-trained Transformer) from OpenAI, the same company behind the Artificial Intelligence (AI)-fuelled art generation platform DALL-E. The text-generating AI platform, which is designed to make users feel as though they are messaging with a real person, has been stunning audiences with its ability to produce human-like text.
The chatbot is currently in a ‘research preview,’ during which the public can use the platform for free while the company gathers information about users’ experience. An example of ChatGPT’s prowess was revealed by new research from a professor at the University of Pennsylvania’s Wharton School, who found it was able to pass the final exam for the school’s MBA programme. While the tech has impressed a lot of people, it has also worried them. In particular, critics fear that the chatbot could make students lazy, lead to a swell in disinformation, and prove otherwise disruptive to major media industries.
As expected, the ChatGPT backlash officially began early this month in the US when New York City public schools barred teachers and students from using the chatbot, apparently fearing that the powerful AI would lead to a tsunami of cheating. New York City Education Department spokesperson Jenna Lyle explained that “due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content, access to ChatGPT is restricted on New York City Public Schools’ networks and devices. While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.” NYC Public Schools’ statement also seems to make reference to ChatGPT’s accuracy problem.
More and more people seem to be waking up to the darker implications of ChatGPT’s technology. While the chatbot has so far managed to impress users with its ability to spin up a wealth of creative material, concerns persist over how it might be misused. When it comes to education specifically, many have predicted that ChatGPT will be used to cheat, to automatically fabricate college essays, and to otherwise hamper students’ ability to learn and do things for themselves.
On a related note, a college student spent New Year’s Day creating an app that can discern which content was written by a human and which was generated by ChatGPT. Edward Tian, a computer science and journalism student at Princeton, created the program, GPTZero, to help combat academic plagiarism enabled by the new chatbot. Tian’s program analyses text for complexity and ‘randomness’ to assess whether it was written by a human or a machine. He shared links to his creation on Twitter in early January, explaining how it was designed to “quickly and efficiently detect whether an essay is ChatGPT or human written.”
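To make the idea of scoring text for complexity and ‘randomness’ concrete, here is a minimal, dependency-free sketch of the intuition behind such detectors. It is not GPTZero’s actual code: the function names, the metrics (variance of sentence lengths as a crude ‘burstiness’ score, and character-level entropy as a rough stand-in for the perplexity a real detector would compute with a language model), and the sample texts are all illustrative assumptions.

```python
import math
import re

def sentence_lengths(text):
    # Split on sentence-ending punctuation; crude but dependency-free.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text):
    # Variance of sentence lengths: human prose tends to mix short and
    # long sentences, while model output is often more uniform.
    lengths = sentence_lengths(text)
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)

def char_entropy(text):
    # Shannon entropy over characters: a very rough proxy for the
    # unpredictability a real detector would measure with a language model.
    counts = {}
    for ch in text:
        counts[ch] = counts.get(ch, 0) + 1
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical sample texts, chosen only to show the scores diverging.
uniform = "The cat sat here. The dog sat here. The bird sat here."
varied = ("Stop. The committee deliberated for hours over the proposal "
          "before finally, reluctantly, voting it down. Why?")
print(burstiness(uniform) < burstiness(varied))  # varied prose is "burstier"
```

A production detector would replace both toy metrics with scores from an actual language model, but the shape of the decision, comparing statistical signatures of the text against what models typically emit, is the same.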
An Amazon lawyer reportedly urged employees not to share any Amazon confidential information (including Amazon code they are working on) with the AI chatbot. The guidance came after the company reportedly witnessed ChatGPT responses that mimicked internal Amazon data. So eventually, it has boiled down to this – all academic or professional writing may be run through GPTZero-like programs, in addition to routine anti-plagiarism screenings. Technology has always been a double-edged sword, capable of fantastic results if judiciously and ethically deployed. It has just become smarter, sharper, and more powerful – elevated, indeed, to the next level.