Wednesday, September 10, 2025

Pathfinders: A wealth of hallucinations (2025)

The Pathfinders Column from the September 2025 issue of the Socialist Standard

Anyone who still thinks YouTube is just funny cat videos may be astonished that it is now the UK’s second-most-watched media service, after the BBC and ahead of ITV. It’s still got funny cats, but now they’re wearing trousers, beating up sharks and rescuing babies.

There are other, hyper-realistic videos, in which a gorilla beats up a crocodile (or a tiger), swinging it around and pounding it like a baseball bat. The videos look real, and while common sense tells you they can’t be, there’s no way to know for sure.

And that, right there, is the problem. When a South Park episode recently featured a naked Donald Trump having a desert epiphany with his own talking micro-penis, everyone understood it was a deepfake for comedic effect. But when videos emerged of migrants climbing out of small boats, thanking Labour for ‘free buffets’ and £2,000 e-bikes to make Deliveroo runs, burning the Union flag and gloating about being housed in 5-star Marriott hotels, no such understanding existed. Instead, the videos racked up hundreds of thousands of views, no doubt to the delight of Nigel Farage and his ilk.

It’s not just malicious deception at work. Gen-AI has a well-known hallucination problem. In one example, Google’s AI Overview told council house tenants they could be evicted to make way for asylum seekers, a claim described by a housing solicitor as ‘horseshit of the highest order’ (Private Eye, 8 August 2025). Then there’s Grok (Musk’s Hitler-worshipping AI), now complete with a ‘spicy’ mode which spits out nude deepfakes of Taylor Swift without being asked.

Red flags are going up everywhere as the Trump regime goes full Rambo on deregulating ‘woke AI’, despite demands for guardrails from US states. Even some tech bosses are expressing concern, following reports that chatbots are explaining to children how to get drunk or stoned, hide eating disorders, and even write suicide letters to their parents. OpenAI CEO Sam Altman says the company is studying the problem of ‘emotional overreliance’, allegedly common among young people: ‘There’s young people who just say, like, “I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me. It knows my friends. I’m gonna do whatever it says.” That feels really bad to me’.

Why would anyone ‘confide’ in a chatbot, you ask? Because they’re never judgmental, unlike humans, according to one recent confessional in the Guardian: ‘At astonishing speed, the AI responded – gently, intelligently, without platitudes. I kept writing. It kept answering. Gradually, I felt less frantic. Not soothed, exactly. But met. Heard, even, in a strange and slightly disarming way’.

AI never gets tired or bored or angry with you. It’s never too busy or distracted. It never has selfish interests of its own. It will never abandon you. One can’t help thinking of Sarah Connor in Terminator 2, who speculates on the irony that the T-800 robot, designed to kill, would in many ways make a better father for her son than any human, for these very reasons.

OpenAI released GPT-5 last month, which Altman describes as a ‘significant step along our path to AGI [Artificial General Intelligence]’: ‘It’s like talking to an expert—a legitimate PhD-level expert in anything, any area you need, on demand’.

Whether AGI, however you choose to define it, is a real and achievable thing or an ever-receding rainbow fantasy, the tech companies can’t risk someone else getting there first and are throwing everything they’ve got into the race. Socialists know very well that when profits are at stake in capitalism, ethics generally go out of the window. Even so, some AI firms are seeing the potential profit in ‘ethical AI’ systems. The reasoning is that if your business needs AI, you’ll be more likely to buy an ‘ethical’ one which, for example, doesn’t tell kids how to kill themselves, because there’s less chance of litigious blowback from grieving parents.

In response to GPT-5, Anthropic released its latest version of Claude, along with a paper describing how it is trying to make Claude ‘ethical’ by using what it calls ‘persona vectors’ to steer the model’s behavioural traits. Counter-intuitively, Anthropic ‘teaches’ Claude how to be evil, as a kind of vaccine against the behaviour: ‘When we steer the model with the “evil” persona vector, we start to see it talking about unethical acts; when we steer with “sycophancy,” it sucks up to the user; and when we steer with “hallucination,” it starts to make up information. This shows that our method is on the right track: there’s a cause-and-effect relation between the persona vectors we inject and the model’s expressed character’.
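
For the technically curious, this kind of ‘steering’ is a form of activation steering: a direction is added to the model’s internal activations during a forward pass, and its output shifts along the corresponding trait. The sketch below, in PyTorch, is purely illustrative, not Anthropic’s code: the model, layer and vector names are hypothetical, and real persona vectors are derived from the model’s own activations on contrasting prompts, a step omitted here.

```python
import torch

def make_steering_hook(trait_vector: torch.Tensor, strength: float):
    """Build a forward hook that nudges a layer's output along a trait direction."""
    def hook(module, inputs, output):
        # Transformer blocks often return tuples; the hidden states come first.
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + strength * trait_vector  # inject the 'persona'
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered
    return hook

# Hypothetical usage with a Hugging Face causal language model:
#   from transformers import AutoModelForCausalLM
#   model = AutoModelForCausalLM.from_pretrained("gpt2")
#   evil = torch.load("evil_vector.pt")      # shape: (hidden_size,)
#   layer = model.transformer.h[6]           # a mid-network block
#   handle = layer.register_forward_hook(make_steering_hook(evil, 4.0))
#   ...generate text and observe the steered behaviour...
#   handle.remove()                          # back to normal
```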

The Economist points out that Anthropic is generating huge interest due to its focus on ‘interpretability’, which lets you ‘see inside the model’ and understand how it arrives at its answers, something you can’t do with so-called black-box systems, whose responses you therefore can never really trust. Even so, CEO Dario Amodei is in no doubt what’s ultimately at stake in the race for AGI, and it’s not ethics, or even profits: ‘if you just imagine what it is like if we versus our adversaries [he means China] suddenly received a nation of 50 million polymathic geniuses […] There’s no theory in which it doesn’t result in an enormous geopolitical advantage […] I worry that AI, in being such a powerful technology… could lead to longer-lived authoritarian governments, that it could lead to dictatorships which are much harder to displace […] This is about a contest between systems of government’.

Amodei thinks there’s no way to stop AI now. It’s like a speeding train. All you can do is try to steer it. But that’s what everyone says about capitalism too. And anyone who thinks they can steer that is suffering the biggest hallucination of all.
Paddy Shannon
