The current tech buzz is that an AI language model called ChatGPT (Chat Generative Pre-trained Transformer) is threatening to turn global industries upside down. OpenAI launched ChatGPT in November 2022, Microsoft announced a $10bn investment in January this year, and by February OpenAI was valued at $29bn. This prompted Google bosses to hit the panic button and reassign tech staff to the urgent task of developing an AI competitor, now hurriedly launched as Google Bard.
ChatGPT is causing a fuss, to put it mildly. Chinese universities have banned it, and the UK Guardian newspaper is calling for government regulation (bit.ly/3FmOSrs). It doesn’t just write plausible academic essays in seconds: it can write commercial copy, speeches, song lyrics, film scripts, poems, music, and computer programs. You name it, basically.
These chatbots don’t work like an ordinary search engine, which looks for matching pages to spit back at you. Instead they use statistical analysis to shape their output based on what and how (they think) people most commonly write, and by extension what they believe, rather than on objective accuracy. This could have implications for socialists. For example, if 90 percent of the sampled text equates ‘socialism’ with Hitler and Pol Pot, the chatbot will make the same equation. However, it seems that AIs can be surprisingly even-handed, and often show a liberal or left-wing rather than conservative bias (bit.ly/3Jl2Viw). But they can also get facts embarrassingly wrong, as Google Bard did recently (bit.ly/3YXa3aI). This is worrying when you consider that medical AI chatbots are in development (bit.ly/3FAfZiY). Mental health services have already been criticised for using them (bit.ly/3n584nt).
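For readers who like to see the nuts and bolts, here is a minimal sketch in Python of the statistical idea at work. It is only a toy bigram model, far cruder than the transformer networks behind ChatGPT (the three-sentence ‘corpus’ is invented purely for illustration), but it shows the same principle: the output simply echoes whatever the training text says most often, biases included.

from collections import Counter, defaultdict
import random

# Toy bigram model: pick each next word according to how often it
# followed the previous word in the training text. Whatever the
# corpus says most often, the model tends to repeat.
corpus = (
    "socialism means common ownership . "
    "socialism means democratic control . "
    "socialism means dictatorship ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, length=5):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Sample the next word in proportion to observed frequency
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("socialism"))
# Most runs print 'socialism means common ...' or '... democratic ...',
# because those continuations dominate the training text.

Scale that frequency-following trick up to billions of parameters and most of the written internet, and you have, very roughly, the statistical engine under discussion.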
You can try ChatGPT yourself, by signing up at chat.openai.com/auth/login. You could ask it, say, to compose an ode to your lustfully intended in the style of Byron or Keats, make up a bedtime story for your kids, debug your computer code, or give you a cheat-sheet on dialectical materialism. Some intrepid socialists have already quizzed it on socialist theory, as they discuss on our Forum (bit.ly/3TdlWrN). They conclude that it’s not terrible, and actually gets quite a lot right, if not everything.
To give you a taste, further down this page is a ChatGPT-generated ‘poem’ about abolishing money, and a short essay in response to the request ‘write an article in the style of Pathfinders in the Socialist Standard’. As you’ll see, the poem is a somewhat McGonagall-esque paean to barter systems, while ‘Pathfinders’ is a waffly and generic exercise showing that when it doesn’t know something, it hedges its bets. But still, not terrible. Then the AI does something distinctly odd. It produces an alleged quote from the Socialist Standard which is almost certainly made up, before ending abruptly with a network failure, as if it’s developed a migraine.
In response to the question ‘Do you ever invent quotes?’ ChatGPT gives an interesting answer:
As an AI language model, I do not have the capacity to invent quotes on my own. However, I can generate quotes based on patterns and combinations of words from my training data, which includes a vast collection of texts and speeches from various sources. However, it is important to note that these generated quotes are not necessarily attributed to any specific person or source, and they should not be taken as fact or used as a source of reliable information.
In other words, it does invent quotes, but because it has compiled the words from multiple sources, it does not consider the quote to be an ‘invention’. Moreover, it will mine these sources from anywhere, without bothering about attribution. This point is actually the cause of a huge global controversy. GPT doesn’t care who authored what, it’s all just ‘training data’. In short, it ignores intellectual property rights, a core concept that underpins capitalism.
ChatGPT slurps up vast volumes of information and serves it back up in spaghettified form with no regard for original sources, a process one incensed artist describes as ‘automated intellectual asset-stripping’ (bit.ly/3n2PlZH). Fine artists and graphic designers have good reason to be worried (as an example, see a set of AI paintings done in the style of Van Gogh here – bit.ly/3LzE3Gj). ChatGPT is a language model, so it doesn’t paint pictures, but instead appropriates the output of the entire news and creative media industry, including novelists, writers, bloggers and journalists. How comprehensively these assets are being stripped may be judged by the fact that AI chatbots are predicted to hit a ceiling in 2026 as they run out of training data (bit.ly/3yRE1lU).
Google is currently in a spat with the Canadian government over the state’s insistence that news publishers should be paid for content its services serve up, to which Google has retaliated by censoring news outlets from Canadian search results (bit.ly/40gXuIj). More legal wrangling will undoubtedly ensue globally, but there’s no putting this genie back in the bottle. When disruptive technologies collide with capitalism’s own practices and priorities, it’s the disruptors that usually win. And this one is striking at the beating heart of the system, the sacred institution of private property.
One might be tempted to see this as a spanner that could wreck the machinery of the profit system, but capitalism is nothing if not adaptable. And besides, it’s worth asking whether a creative artist, even in socialism, would be comfortable seeing their own work hijacked, dismembered, and then served up piecemeal without their contribution ever being acknowledged. Democratic common ownership is far preferable to capitalism’s squalid property wars, but still, that surely doesn’t make plagiarism OK.
Correction
A new finding this month overturns what was stated in the March Pathfinders column. A re-examination of the study data reveals that the happiest people do in fact continue to get happier above an income level of $75,000; it’s only the most unhappy high-earners who don’t (bit.ly/40gQqLQ). You’ll certainly be relieved to know that the rich are having a good time after all.
Paddy Shannon