OK, so we’ve talked about how AI is pretty good at deepfakes and art, and you’ve thought: “Alright, it can draw pictures and that, but it’s hardly going to take over the world.” Well, whether we’re heading towards a dystopian digital future or not, that doesn’t stop people debating AI’s ethics and implications every time there’s a new development.
The latest update of OpenAI’s chatbot ChatGPT is a perfect example. Want to get clued up so you can bang on about it and bore your mates at the Christmas party? Let’s explain.
So, what is ChatGPT?
ChatGPT is an AI that’s able to hold a conversation with a human – sort of like Siri but text-based and with more advanced responses. Word on the street is that it’s the best AI impersonation of humans yet, especially in the mainstream. It can make jokes, have extended, complex conversations and even write undergraduate essays (which is probably shitting up a couple of lecturers right about now).
One of the developers at OpenAI, the company behind the chatbot, tweeted that well over a million people logged in to use the free service within the first week, which is a lot for the launch of anything – fewer people watched the first series finale of The White Lotus when it originally aired last year. It’s so popular that ChatGPT has even been interviewed on Channel 4.
Right, but how does ChatGPT work?
You make a free account and simply type in a message. Within a few seconds, you’ll have a response. How does it know what to say? We went straight to the source and asked ChatGPT.
“As a language model, I am trained on a large dataset of text to predict the likelihood of a word or sequence of words given a particular context,” it says. “When I receive input, I use this training to generate text that is likely to follow the input. My responses are based on this probability-based approach to language generation.”
That dataset is made up of books, articles and websites, and the AI is continually being improved, with the aim of making it as natural to interact with as possible. It can remember things from earlier in the conversation, so you can go deeper and deeper with your questions, and it has a good knowledge of history before 2021. That means you can get it to do fun things like make up fictional scenes from The Simpsons that feel fairly close to something that would actually feature in an episode.
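If you fancy seeing the “predict the next word” idea in action, here’s a toy sketch in Python. To be clear, this is nothing like ChatGPT under the hood (that’s a massive neural network trained on billions of words) – it’s just the simplest possible version of the same principle, counting which word most often follows another in a made-up mini corpus (the corpus and function names here are invented for illustration):

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the "books, articles and websites"
# a real model trains on.
corpus = "the cat sat on the mat and the cat ate".split()

# Count how often each word follows another - a bigram model, vastly simpler
# than ChatGPT, but the same basic idea: predict the next word from context.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word most likely to follow `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "cat" - it follows "the" most often
```

ChatGPT does something conceptually similar, but instead of counting word pairs it uses a neural network that weighs the entire conversation so far before picking what comes next.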
Is it really that impressive?
It depends how high your bar is. Some people are saying it’s better for finding accurate information than Google, such as Twitter user @jdjkelly, who made a thread documenting instances where ChatGPT’s responses trumped the initial results from the world’s favourite search engine. People have also claimed to have used it to successfully complete work, from finishing essays to writing code.
Does ChatGPT have any limits?
Some. ChatGPT admits it isn’t amazing at responding with new information, as most of its data is from 2021 and earlier. Plus, sometimes the answers read like a robot, or are just a little inaccurate. There’s also the risk of the data being biased, due to the demographic that wrote it (cishet white men from the west, usually).
Could ChatGPT actually help AI take over the world?
It is unlikely that AI will take over the world in the sense of becoming a dominant force or ruling over humanity. While AI technology continues to advance and become more sophisticated, it is still subject to the control and guidance of its human creators. Additionally, there are many ethical and technical limitations to the development of AI, which will prevent it from becoming a dominant force in the world. Instead of taking over the world, it is more likely that AI will continue to be used as a tool to assist and augment human activity, rather than replacing it altogether.
But ChatGPT actually wrote that last paragraph, not us. At the very least, there’s a chance AI could eventually put a lot of people out of a job, from artists to writers and even programmers. For now, though, it may simply change the way we work – a programmer might end up doing less coding than usual, for instance, and have other responsibilities incorporated into the role instead. Whether you trust it or not is up to you.