No, AI won’t kill us
The humans behind the technology are far scarier than ChatGPT. Panic and hysteria aside, what are some of the positive things AI could help us achieve?
Society
Words: Letty Cole
It’s been an eventful time for AI lately, not to mention the rest of us. When ChatGPT launched in November last year, we suddenly all realised a robot could write emails, and in some cases entire essays, better than humans could (though it couldn’t write a restaurant review, as critic Jay Rayner observed). Fast forward a few months and Elon Musk – who actually co-founded OpenAI, the company behind ChatGPT, but left its board in 2018 – has called for an industry-wide pause on AI development, in an open letter co-signed by thousands of researchers and tech figures. But Musk isn’t necessarily sticking to his own suggested rules. We get the sense he might be a little bitter.
Either way, there’s no doubt that all of this is a big deal. After all, we’re creating technology that could end up far smarter than humans and capable of completely shifting the world order as we know it. Given our built-in anxiety (thanks for that, ancestors), it seems only natural that we’d respond with a flurry of doomscrolling and hysteria. What if AI comes for our jobs? What if it wants to kill us all?
Really, though, it doesn’t take an AI-sized brain to see that fuelling hype and panic probably isn’t productive, but neither is dropping the conversation completely. AI technology is here to stay. The more we learn about it, good and bad, the better we can hold the humans developing it – the truly scary ones – to account and make sure they don’t fuck it all up. So, for the record…
Will AI replace jobs?
Yes – but, like computers and combine harvesters before it, AI will mostly change the nature of jobs rather than eradicate them. As for the long term: will AI supercharge capitalist productivity, or lead us into a labour-free utopia? That’s still all to play for.
And for those worried about ChatGPT: yes, it’s really clever. But think of it more like a calculator. Businesses will start using it as a tool, but as long as humans still want to learn things and communicate with one another, it’s a stretch to think we’d want AI to do everything for us.
What about creativity?
Again, as long as we value human emotion, creativity and connection, there’s only so far AI can take us.
Like music, for example. Sure, AI can churn out catchy TikTok songs, which makes for genuine competition when it comes to already-manufactured pop tunes. But we value music for the meaning, the person and the creativity behind it, all of which would be massively diluted if it were made by robots, who, to put it simply, can’t feel anything. That goes for DJs, too. We happily pay to watch a human spin some tracks in a sweaty club – would you pay to listen to a machine do the same? If it’s happening anywhere you wouldn’t just whack Spotify on, probably not.
What else can it do?
AI is up to lots of genuinely good stuff behind the scenes. It can solve problems way more quickly and accurately than humans and has achieved some proper lifesaving scientific breakthroughs. Take the AI that can predict sepsis before it happens, cutting hospital deaths by 20 per cent. Or Destination Earth, which the European Commission is currently using to predict and mitigate extreme weather events.
It can help us be better day to day, too. There’s a lot of concern about AI being biased, but that bias comes from the humans making it. In the right hands, AI can help us work towards a better society. Case in point: creative agency MØRNING, which is building a bot to counter bias within the workplace, helping employees to be fairer and more inclusive.
So what are people worried about?
A few things.
Firstly, there’s no need to panic about AI being inherently evil. Even if robots did become the dominant species, they’d probably still rely on humans to keep the planet going – yes, really.
The dangers? That humans programme AI to be unsafe or malicious. The issue is that the technology is developing so quickly that even scientists are struggling to keep up with it. And so far, there’s little regulation over who can create AI technology, which means we can’t always guarantee it’s safe.
There are some very immediate climate concerns at play, too. Training GPT‑3 took as much electricity as 120 US homes would consume in a year. Is using AI like ChatGPT really worth the environmental cost?
But there are solutions...
Let’s not forget that we are the ones making these machines. We need to make sure that we legislate and regulate before it’s too late. As Sulabh Soral, the head of AI at consulting giant Deloitte puts it, that involves doing three things: “Create trustworthy AI, regulate the use of AI and AI products [and] reach a global consensus on when we use AI and when we don’t.”
And this is already happening. Musk’s call for a research pause is dubious – if anything, we should be continuing research into existing AI while halting further development. But he did recently curb Ukraine’s use of his Starlink satellite network to prevent the war from escalating. Typically, the UK government’s current response to AI is to remain “hands off”, while the US and China are busy pulling together laws to ensure that AI is being made responsibly.
As for the rest of us? Keep learning, because we’re soon going to have to make decisions, both as consumers and voters. Stay critical. Think about the AI you’re using. Hold the company you work for to account. Speak to your friends and family. Speak up when the government starts making laws around AI. Don’t doomscroll. AI (probably) won’t kill us.
Letty Cole is a strategist at MØRNING, and editor of the tech and culture newsletter Burn After Reading.