Artificial intelligence armageddon. We’ve all chatted about it in the pub, haven’t we? Even if it’s an ironic little, “Ah well, work won’t matter when AI’s doing everything and we’re all dead,” between drinks.
And to be fair, it has been a pretty big year for artificial intelligence. Back in July, an ex-Google engineer made headlines after claiming the company’s LaMDA technology had become sentient (it hadn’t). And earlier this month, an AI addressed parliament with a speech more authoritative than anything Liz Truss managed to say during her minuscule tenure as prime minister.
Maybe a world run by AI doesn’t seem too far-fetched, after all. Could this mysterious bit of technology really change the world as we know it?
Hold up a sec, what even is AI, really?
Artificial intelligence is kinda just that – intelligence that humans have artificially given to something else. Typically we’ll visualise AI as a robot, but AI also exists inside everyday technology, from social media algorithms to surveillance cameras. In films like Ex Machina, the robot is merely a casing for the intelligence.
What does “the intelligence” actually mean? Well, essentially, intelligence in this sense is defined as the ability to do three things. The first is “generalised learning”: for example, being put in a room and figuring out where the walls are, then doing the same in a completely different room. Then there’s reasoning, which basically means deciding between two options by weighing up the available pros and cons. The last thing is problem solving. Say that room you’re put in has a locked door, but there’s a key on the floor. Problem solving would be picking up the key and then using it to unlock the door.
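That room-and-key scenario can be sketched in a few lines of code. This is a toy illustration only – real AI planners are vastly more sophisticated, and the room layout here is made up for the example:

```python
# Toy sketch of "problem solving": an agent in a room spots a key,
# picks it up, then uses it to unlock the door.
# Purely illustrative -- not how any real AI system works.

def solve_room(room):
    actions = []
    if "key" in room["floor"]:
        actions.append("pick up key")
        room["floor"].remove("key")
        room["holding"] = "key"
    if room["door"] == "locked" and room.get("holding") == "key":
        actions.append("unlock door")
        room["door"] = "unlocked"
    return actions

room = {"floor": ["key"], "door": "locked"}
print(solve_room(room))  # ['pick up key', 'unlock door']
```

The point is the chain of reasoning: notice the key, connect it to the locked door, act. Hard-coding that chain is easy; an intelligent system would have to work it out for itself, in rooms it has never seen.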
Got it. Is all AI the same?
Not really – there are different categories. Weak AI, for example, just picks up keywords and responds via a program it’s learned, like Siri and Alexa. This is why Alexa often can’t answer questions. But strong AI can. It has the ability to appear self-aware and can even convincingly mimic emotions – you can have a chat with OpenAI’s Playground, which is considerably more capable than Siri or Alexa, to see the difference. Some people think that, one day, AI might even develop real emotions, which is where that whole world domination thing comes in. It’s to do with “deep learning”.
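To see why keyword-spotting hits a wall so quickly, here’s a deliberately crude sketch of a “weak AI” responder. The keywords and replies are invented for the example – this is nothing like how Siri or Alexa are actually built:

```python
# Toy keyword-matching assistant, "weak AI" style.
# If no keyword matches, it's stuck -- which is exactly the limitation
# the article describes. Illustrative only.
RESPONSES = {
    "weather": "Here's today's forecast.",
    "timer": "Timer set.",
    "music": "Playing your playlist.",
}

def respond(question: str) -> str:
    for keyword, reply in RESPONSES.items():
        if keyword in question.lower():
            return reply
    return "Sorry, I don't know that one."

print(respond("What's the weather like?"))      # Here's today's forecast.
print(respond("What's the meaning of life?"))   # Sorry, I don't know that one.
```

Anything outside the canned list gets the shrug response – there’s no understanding, just lookup.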
What’s deep learning, then?
So you know that time when you got home from having a couple of drinks, ate one too many edibles, were completely out of it for ages, and later had to tell yourself “don’t do that again”? That’s deep learning. Learning from experience, basically. With AI, because it’s all a bit nerdy, deep learning tends to be used for “complex data”. It’s sort of like how a streaming platform knows you like hip-hop, but always recommends the more psychedelic-leaning stuff as opposed to gangster rap, because you prefer it.
But how does their algorithm know this? Bit stalker-y…
This is where that past experience element comes in. It analyses the data and its complexities – a song’s tempo, subgenre, how long you listened to it for, who else is similar to that artist, how long you listened to them for, etc. – until it forms a pretty good and complex understanding of what stops you from skipping. The more data you provide, the better the guess. To do this, it’s using reasoning and problem solving just like “weak AI” does, but it’s also learning from past experience.
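The basic idea can be boiled down to a toy score: each play counts as a “vote” for a subgenre, weighted by how much of the song you actually listened to. The listening history below is made up, and real recommenders use far richer signals than this:

```python
# Toy sketch of weighting subgenres by how much of each song you played.
# Invented data; purely illustrative, not any real platform's algorithm.
listening_history = [
    # (subgenre, seconds listened, song length in seconds)
    ("psychedelic hip-hop", 210, 220),
    ("gangster rap", 30, 240),
    ("psychedelic hip-hop", 180, 200),
]

def subgenre_scores(history):
    scores = {}
    for subgenre, listened, length in history:
        # The fraction of the song you played counts as a "vote" for it.
        scores[subgenre] = scores.get(subgenre, 0.0) + listened / length
    return scores

scores = subgenre_scores(listening_history)
best = max(scores, key=scores.get)
print(best)  # psychedelic hip-hop
```

Skipping a track after 30 seconds barely registers; playing one nearly to the end counts almost fully – which is roughly how the platform “learns” you prefer the psychedelic-leaning stuff.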
That advancement in understanding and correcting based on new data is why deepfakes are scarily accurate now, too. The more we use these things, the more sophisticated the AI can get. The acceleration is why we have people like Elon Musk developing AI implants that humans can put in their brains to optimise themselves, while others believe that AI will outsmart humans eventually.
Is this really anything to worry about?
Depends on whether you want an AI takeover or not, really. Musk and Stephen Hawking have both previously suggested we should take precautionary measures, as AI could “spell the end of the human race”, as Hawking told the BBC. Take that with a pinch of salt, though, since AI technology is not actually either man’s area of scientific expertise (no matter what Musk fanboys say). The risk, essentially, is that if AI becomes intelligent to the point that it doesn’t need humanity, it could decide to rebel against us – kind of like The Matrix. Some clever people are genuinely worried about this; other equally clever people think it’s sensationalist nonsense.
Right… so could AI actually take over?
According to an article in The Conversation, written by Mauro Vallati, Senior Lecturer in Computer Science at the University of Huddersfield, “there is no risk of a runaway AI, because physical laws of the universe pose some very constraining hard limits”. Think of it like this: we can’t predict the weather with 100 per cent accuracy, so AI probably can’t either. Intelligence has an end point, in theory.
However, Vallati goes on to acknowledge that the universe might not be as impossible to predict as humans think it is. Humans might just be a bit thick, which would explain a lot, to be fair. If AI can figure out things we can’t, like teleportation, cancer cures, or whatever, that could be bad if used against us. But if we become pals with AI (or just learn to control it) this could be great. Writing for the Machine Intelligence Research Institute, Luke Muehlhauser and Anna Salamon “argued that AI poses an existential threat to humanity. On the other hand, with more intelligence we can hope for quicker, better solutions to many of our problems.”
Basically, AI could, in theory, possibly, one day, maybe take over, but we’re still a way off yet. And given how unadvanced things are right now, we could always choose to limit the growth of AI’s self-learning. Whether or not we’ll take action on something we know could pose a threat to the planet remains to be seen.