In the short time I’ve been writing for TechInsight, it’s become abundantly clear that we’re in the thick of a seismic digital shift. Just this week, household name and telecoms giant BT announced it would shed up to 55,000 jobs by the end of the decade; according to the BBC’s report, up to a fifth of those roles will be casualties of artificial intelligence’s incursion. More starkly still, Goldman Sachs has predicted that AI could affect the equivalent of 300 million full-time jobs.
Anecdotally, I have had many conversations with my creative friends – graphic designers and music producers – who are terrified as the copyright goalposts shift in real time. A day doesn’t pass without a social media post bemoaning how “artificial intelligence was supposed to do the boring hard jobs and leave us with the fun ones… not the other way around.” Sting has a stark prediction about the future of music, and even
Tom Hanks doesn’t think people will care if AI keeps him propped up forevermore.
It’s a bleak outlook. I have tried to present a balanced view of the topic on TechInsight, covering reports from both sides of the argument; I recently reported that Dr Daniel Hook, CEO of Digital Science & Research Limited, stated that “modern workers must embrace the AI dark arts to thrive.” Reading back through the article, I considered this a relatively positive spin – yes, there will be a tumultuous recalibration, but ultimately, workers will be buoyed by the burgeoning technology.
Right?
At the beginning of May, it was reported that the ‘godfather’ of AI, Dr Geoffrey Hinton, was stepping down from his high-ranking position at Google. Part of this, yes, has to do with his age (a fact that has been glossed over by some outlets), but Hinton, 75, has also served as something of a harbinger for the destructive potential of artificial intelligence. An intelligent utopia of shared, super-powered information is a nice thought, but Hinton is right to warn of “bad actors”.
“You can imagine, for example, some bad actor like Putin decided to give robots the ability to create their sub-goals,” he shares. This eventuality might “create sub-goals like ‘I need to get more power’”. As a species, we’ve marvelled at the theoretical potential of artificial intelligence for the better part of a century; as far back as 1945, Vannevar Bush envisioned a system that would amplify people’s knowledge and understanding.
In 1950, Alan Turing proposed that these systems could one day simulate human beings and play chess.
As a nerd who writes for a tech site, until recently my first thought on the matter would have been the swathes of spectacular cinema on the topic. Recently, my wife and I have embarked on a bit of an ‘AI binge’, enjoying low-budget gems. 2017’s Marjorie Prime and 2021’s I’m Your Man are both near-future evolutions of 2013’s Her, exploring the sociological ramifications of seeking out corporeal (or close enough) intimacy between the lines of code. We’re not quite there yet.
2022’s The Artifice Girl introduces audiences to a nascent presence – a technological breakthrough – generated for thought-provoking and wholly altruistic reasons. Yet even a film released last year, pondering the future, feels stuck in the past. Legendary AIs such as HAL 9000 from 1968’s 2001: A Space Odyssey and Skynet from 1984’s The Terminator were presented with cold, calculating, deadly efficiency.
Logical beings, just doing their jobs.
As people claim to remember where they were when Michael Jackson died, I remember where I was when I heard about ChatGPT for the first time. I’m a writer. Of course I’m going to be nervous that such technology is threatening me, but after some research – and some usage – I’ve come to realise that it’s not a great writer. It’s a great thinker. It’s also been positioned as something more than just another search engine™.
OpenAI’s invention, at least GPT-4, was trained on data capped around September 2021; it can philosophise until the cows come home, but like a guidebook from the past, it can’t give you up-to-date information about the Greek restaurant around the corner, though it can still guide you towards our landmarks and achievements. Even back in 2019, OpenAI declared its GPT-2 model “too dangerous” for primetime, providing only limited access to the public.
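If you want to see that cutoff for yourself, here is a minimal sketch of how it surfaces in practice – it assumes OpenAI’s official Python library (v1+) and an API key in your environment, and the model name and prompt are purely illustrative:

```python
# A minimal sketch of GPT-4's knowledge cutoff, assuming the
# official openai Python package (v1+) and an OPENAI_API_KEY
# environment variable. Model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "user",
         "content": "What major world events happened in 2023?"},
    ],
)

# The model will typically reply that its training data ends
# around September 2021, so it cannot describe 2023 events.
print(response.choices[0].message.content)
```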
Skip forward four years and we stand at a precipice. The British government recently baulked at how to wrangle the ever-fluctuating prospect of cryptocurrency, only to settle on a backfooted ‘regulatory’ stance. According to Rishi Sunak, the UK will lead on “guard rails” to limit the dangers of AI. His comments come as more than 1,000 artificial intelligence experts called for a pause on development to limit its real-world ramifications.
The rapid rise and integration of artificial intelligence into countless services has come even quicker than many expected; the theoretical is no longer fiction. The famous musings of Asimov, Bush and Turing are now footnotes in a discourse unfurling in real time. Rather than doom forecasting, it’s wise to heed Dr Daniel Hook’s words: there’s no harm in upskilling. As every new technology finds its way to the common man, we’re forced to contend with it – today, offices would be unrecognisable without computers, but this wasn’t always the case.
The Fourth Industrial Revolution represents a fundamental shift in the way we live, work and cooperate, and it’s wholly digital. Artificial intelligence is charting its course. If Goldman Sachs’ prediction holds true, then our greatest asset is our humanity: our superpowers of collaboration, communication and compassion.
Right now we’re caught in the riptide – a seismic sea change – and at least one teacher thinks this is a valuable time to instil empathy and ethics into the next generation of workers. In a recent Guardian article, Siva Vaidhyanathan opted not to admonish his students at the University of Virginia for using ChatGPT to generate essay answers, but to seize this as a vital opportunity to enlighten and educate.
He says, “We asked them to consider whether the results reflected well on their goal of becoming educated citizens. Of course, they did not.” Artificial intelligence is all around us – we can’t change that fact. In a world where countless citizens will let computers think for them, there’s never been a better time to think for yourself.