Thursday, June 30, 2022

Life and Times: the climate campaigner (2022)

The Life and Times column from the June 2022 issue of the Socialist Standard

‘Climate Chaos’ was the title of a leaflet I was handed in the main shopping centre of the town I live in. I was given it by a young woman who was also keen to engage me in conversation and let me know about the purpose of the leaflet and the group of people she was part of who were using a loudhailer to put their point across to passers-by.

She told me that fossil fuels were polluting the environment and were the main cause of climate change and the best thing I could do was to get involved in her group’s cause and spread their arguments among the local community. The group was Extinction Rebellion (XR) Cymru.

Barclays and the government
I said I found it admirable that a group of young people cared enough about the state of the world to dedicate their time and energy to trying to improve it, but I wondered whether they were on quite the right track. She asked me what I meant and I pointed to a couple of the things I’d seen in the leaflet, in particular the statement that ‘government and Barclays are both criminally responsible for destroying our future’. I asked her how she thought that different policies by either could really make a significant difference to pollution and climate change.

This could happen in two ways, she said. Firstly Barclays could stop investing in fossil fuels, since they were, in the words of the leaflet, ‘knowingly destroying the world that we depend on’. Secondly the government could ‘use our taxes to create a sustainable future’ and get HMRC to ‘stop banking with Barclays and use an ethical bank’.

I told her I understood the group’s objectives and appreciated their determination to get XR’s message across in such a public way. But I also asked her whether she didn’t think that, even if they succeeded in putting enough pressure on the government and Barclays to get them to change their environmental policies, it would be little more than a drop in the ocean and do little to change the basic situation of ever-increasing degradation of the eco-system.

A different take
I could tell by the look on her face that this didn’t please her and her rather sharp response was to ask me what I was doing about it. This was unexpectedly good for me, because it gave me the opportunity to say what I wanted to say but was worried that, if I simply came out with it unsolicited, it might seem preachy or dismissive of her efforts. And I didn’t want to have that effect, since then she probably wouldn’t listen to me seriously.

So as briefly and in as broad brush a way as possible, I tried to outline the position that the Socialist Party takes on the environment and climate change. I suggested that it wasn’t a freestanding problem but one of a whole range of problems that the profit-driven society we live in creates, meaning that even if we managed to alleviate one of those problems piecemeal, we would not actually solve it (since the need for economic expansion and profit would remain key) and anyway all the other problems implicit in the system (eg poverty, inequality, war, alienation) would remain and continue to torment us. I went on to say that I do my best to persuade my fellow-workers, people just like herself, instead of trying to change bits of the current system, to unite to bring in a completely different kind of world society, based on voluntary work, democratic decision-making and free access to all goods and services – so no money or wages, no buying and selling, no leaders or led, no borders or frontiers. And with a final flourish, I announced that this would only be possible once a majority of us wanted it and were prepared to take democratic action to bring it about.

Baby steps
I didn’t know quite what to expect as a response, so I was relieved to find her nodding and saying something like ‘sounds good, but… but it’s a long way off and we’ve got to do something in the meantime’. So, though I was pleased that any hostility seemed to have melted away, her stock ‘in the meantime’ response was still a barrier which I knew I wasn’t going to be able to overcome in a short discussion, especially as she was now going to want to give out more leaflets and speak to more people. It didn’t seem right to try to detain her either, but I could at least hope that, having implanted a new idea in her mind, once the day’s leafleting and campaigning had finished, she might reflect on that idea and wonder whether it wasn’t worth considering further.

I would have liked to have had a Socialist Standard with me to give her but I didn’t. However, I’d mentioned the name of the Socialist Party of Great Britain, so that might stick in her mind and maybe, who knows, bring her to the Party’s website? And I was encouraged that her parting shot was that she understood what I was saying and realised that what XR were advocating was only ‘baby steps’, but it was surely better than nothing and she didn’t see that their aims and the Socialist Party’s were incompatible. I didn’t think it was a good idea to disagree with that, even though what I would have liked to say was that it was worth thinking about whether XR’s aims, even in the unlikely event they were fulfilled within the current system, would bring us any nearer to the establishment of a socialist world which was the only feasible way to ensure the survival and indeed the flourishing of the natural environment and of all its living creatures.
Howard Moss

Pathfinders: AI: the last invention we ever need to make . . . (2022)

The Pathfinders Column from the June 2022 issue of the Socialist Standard

In this issue we spotlight the rise and rise of Artificial Intelligence, a hot topic that raises fundamental questions about how it should be used, and what happens if it develops in ways we don’t expect and don’t want.

Currently AI is strictly horses for courses, confined within rule-based parameters and master of just one thing at a time, rather than becoming a super-jack of all trades. So, like numerical engines before the era of programmable general-purpose computing, it has been of limited use. But artificial general intelligence (AGI) is without doubt the ultimate goal, and the race is on to achieve it.

With this in mind, and with a chequered history of AI winters behind them, developers are concentrating on the ‘can we do it?’ question rather than the bigger ‘should we do it?’ question. Even less ethically distracted are investors, whose only question is ‘can we make money out of it?’ This is not encouraging, given capitalism’s track record.

One problem with AI is that the more advanced it gets, the less we understand it. AI is increasingly a ‘black-box’ phenomenon, whose inner workings are a mystery to us and whose results are often inexplicable and unverifiable by other means. Nor can we simply treat it like a Delphic oracle, because it has already clocked up embarrassing gaffes such as building racism and sexism into its staff-hiring rationales, or factoring income rather than health conditions into its medical-outcome estimates. And there have been several public relations disasters, with AIs answering enquiries with profanities after reading the online Urban Dictionary, Facebook chatbots creepily inventing their own language that no human can understand, and Amazon’s Alexa laughing demonically at its own joke: ‘Why did the chicken cross the road? Answer – because humans are a fragile species who have no idea what’s coming next’ (bit.ly/3wd4vh6).

Then there is the lack of internationally agreed definitions, paradigms and developmental standards, in the absence of which each developer is left to make up their own rules. Can we expect global agreement when we can’t get states to agree on climate change? In the absence of such a framework, it’s no wonder that people fear the worst.

Frankenstein-anxiety is nothing new in the history of technology, of course, and if we banned every advance that might go wrong we would never have stopped wearing animal skins and woad. It’s uncontroversial to say that the possible advantages to capitalism are huge, and indeed we’re already seeing AI in everything from YouTube preference algorithms to self-drive tractors and military drone swarms. And that’s small potatoes next to the quest for the holy grail of AGI. But while all this promises big profits for capitalists, what are the pros and cons in human terms? What is the long-term effect of the automation of work, for example? Tech pundits including Tesla boss Elon Musk take it for granted that most of us will have no jobs and that the only solution is a Universal Basic Income, a solution we argue is unworkable.

That’s not the worst of it. In 1950 Alan Turing wrote, ‘[T]he machine thinking method […] would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control’. IJ Good, Turing’s colleague at Bletchley Park, helpfully added, ‘The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control’ (bit.ly/3FNCekb). The last thing we ever need, or the last thing we ever do, this side of a Singularity that wipes humans from the Earth?

It’s not so much a question of a Terminator-style Armageddon with machines bent on our annihilation. Even in capitalism it’s hard to imagine anyone investing in developing such a capability, at least not on purpose. But the fear is that it could happen by accident, as in the proposed ‘paperclip apocalypse’, in which a poorly considered instruction to make as many paperclips as possible results in the AI dutifully embarking on the destruction of the entire globe in order to turn everything into paperclips. Musk has similarly argued that AI does not have to be evil to destroy humanity: ‘It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road, and so, goodbye anthill’ (cnb.cx/3yJ7pMl).

Stuart Russell, in his excellent 2021 Reith lectures on AI (see our summary elsewhere in this issue), makes a telling observation about capitalist corporations like the fossil fuel industry, arguing that they operate as uncontrolled superintelligent AIs with fixed objectives which ignore externalities. But why only certain industries? We would go one further and argue that capitalism as a whole works like this. It doesn’t hate humans or the planet, but is currently destroying both in the blind and disinterested quest to build ever greater profits, so goodbye world, to paraphrase its richest beneficiary, one Elon Musk.

Musk is right about one thing, saying ‘the least scary future I can think of is one where we have at least democratized AI because if one company or small group of people manages to develop godlike digital superintelligence, they could take over the world’. It’s rather ironic that, once again, Musk sees himself as part of the solution, not part of the problem.

To democratise AI you would first need to democratise social production, because in capitalism science and tech are sequestered behind barriers of ownership by private investors anxious to avoid any uncontrolled release of potentially profitable knowledge into the environment. AI needs to belong to all humanity, just like all other forms of wealth, which is why socialists advocate post-capitalist common ownership. In such circumstances, a global standardisation of AI development rules becomes genuinely feasible, and as Russell argues, it wouldn’t be that difficult to program AIs not to kill us all in the quest for more paperclips: you simply build in an uncertainty principle, so that the AI understands that the solution it has devised may not be the one humans really want or need. It’s a sensible approach. If only humans used a bit of natural intelligence and adopted it, they’d get rid of capitalism tomorrow.
Paddy Shannon

Artificial Intelligence (2022)

From the June 2022 issue of the Socialist Standard 

We have always been intrigued by mechanical efforts to imitate us. Our technology ostensibly exists to enhance and improve our lives by imitating the labours needed for our existence. As such there’s always an element of emulation in the appearance or behaviour of our technology. From 18th century automata to the robots of today we delight in their ability to imitate us. But, as ever, there’s a flip side to this, as illustrated by the Luddites of the past and the techno-sceptics and environmentalists of today. Many films and novels feature a dystopia caused by technology, and AI in particular (Blade Runner and Terminator come to mind), where the computer becomes ‘self-aware’ or conscious and perceives humanity as a threat to its needs and even its very existence. Quite apart from the reactionary fear that robots will take away all our jobs, does the possibility that Artificial Intelligence may achieve self-awareness present a threat, or the hope of a better life for us both (humans and AI entities)?

Most thought works on the principle of dualities – we define something by what it is not as well as by its innate qualities. The term ‘artificial’ is used to describe something that is not ‘natural’ and is often associated with negative characteristics. This is irrational in some respects since all the components we use in the manufacture of technology are taken from nature. We don’t refer to the intricate structures of a termite mound or a beaver’s dam as ‘unnatural’. We have become alienated from the products of our own labour which can lead us to be suspicious of technology – sometimes with good reason. All labour under capitalism has been alienated from its producers and even the most advanced technology will not escape the iron laws of production for profit for long. Artificial is also a term used for the inauthentic and the fake so when combined with the equally problematic concept of intelligence we have a ready-made topic for intense debate. People have unsuccessfully tried to define intelligence for centuries and possibly the best we can do is compare outcome with intention with the proximity of the two as being some measure of intelligence. Intention, of course, implies some level of purpose and self-awareness that we can call consciousness. At the moment, for computers, this is supplied by human programmers but can there ever arise a possibility that AI might provide its own purposes and intentions? Some see this as the great divide between our type of intelligence and that of AI. However, as so often happens, if we use our technology as a metaphor to understand ourselves in terms of complex machinery we may ask: who or what programs us?

Socialists, like all materialists, are believers in cause and effect as the universal determinant of all observed phenomena. For us then intelligence is determined by the evolution of the brain due to natural selection. As humans have always been a social species our ability to communicate and act communally led to the success of our species. Gradually the complexity of our technology demanded that our childhood would last ever longer so that we might learn from those with experience, and this quality began to replace genetic determinism as a measure of individual success and therefore intelligence. We are ‘programmed’ by the culture into which we are born with the genetic element becoming ever less important. Nurture rather than nature has become responsible for what we are. Ideology has become a dominant feature of our education and it is this that determines our activity in terms of to what degree we reject or embrace the dominant value system into which we are born. Whatever we are we are certainly not capable of ‘free will’ and the concept of the transcendental self is a myth. So if we are also programmed by forces outside of our control, what separates our consciousness from that of AI – could it be that the concept of an artificial intelligence is itself artificial?

In trying to develop technology that possesses intelligence we have inadvertently discovered more about the nature of our own intellect. There still remains, however, a profound distinction between our intelligence and that of a machine in that we are ‘alive’ and a machine is not. The dialectical distinction between life and death and the organic and inorganic remains important and instructive. Although our intelligence is primarily a cultural construct, our biological inheritance still provides us with certain drives and instincts such as the need to survive and to procreate. These drives act on us through our emotions, which can clash with our intellect and cause irrational behaviour. Although most of us would happily do without anger, greed, jealousy and hate, few of us would want to live without love, aesthetic pleasure and empathy. Such feelings are identified with being ‘alive’. Their effect on our ability to reason may be dubious but their effect on our imagination and creativity is indisputable. Can machines ever replicate such a synthesis of reason, logic, anxiety and imagination? We might be tempted to program emotion in, but it would be extremely dangerous to create a powerful technology with an inbuilt possibility of irrational behaviour! But without the fear and anxiety that come with knowledge of death how can machines replicate our intelligence?

If you look up the scientific definition of ‘life’ you’ll see that it runs to several paragraphs and may or may not be dependent on the long chain molecules identified with carbon and its derivatives. Perhaps someone somewhere is attempting to develop an organic computer but there remains the possibility that intelligence may not be dependent on life and that a machine may develop a different type of intelligence than the one we seek to create. It may become conscious but not organically ‘alive’. Such an entity may strike us with horror because of the probability that it will lack any moral values or empathy with which we seek to mitigate the fear and suffering of living with pain and death.

Alan Turing, one of the ‘founding fathers’ of AI, provided us with a test which he thought might help to define the transition between mere mechanical computing and a semblance of human-type intelligence: a computer and a human are placed on one side, and a human tester on the other. If, after a series of questions, the tester cannot tell which candidate is the human and which the computer, the computer has passed the Turing test. To date no computer has passed it. John von Neumann, another of the giants in the field, was optimistic that a self-replicating machine would be developed that would help to enable a technological singularity which would change human life and culture forever. We can only speculate whether this ‘paradigm shift’ would be beneficial or otherwise for our species.

Would such an event aid us in the struggle for socialism? We take pride in the coherence and logic of our case for revolutionary change and so would hope that any such event would not oppose this. But we note that intelligence alone doesn’t seem to be enough, as many of the world’s greatest intellects do indeed oppose the establishment of socialism. The emotional strength and ability needed to conceive of alternatives to the organisation of the world into which we’re born seem to be dependent on other factors which, for the moment, appear exclusively human.

Older readers may recall a TV series called The Prisoner, and one scene in particular in which a super-computer is developed by our hero’s interrogators, which they are confident will finally crack the defiance of ‘Number Six’. Such is their hubris that they allow their adversary to ask one question of this device which, he insists, it will not be able to answer. The question promptly causes the computer to self-destruct. When the shocked interrogators ask as to the nature of the question, Number Six replies that it was simply a one-word enquiry: ‘Why?’ Science and its technologies can provide us with many answers to the questions they continually create, but there are some that may be best left to old-fashioned philosophy and politics.
Wez.

AI – It's your move (2022)

From the June 2022 issue of the Socialist Standard 

Last year’s World Chess Championship in Dubai saw reigning Champion Magnus Carlsen and his challenger Ian Nepomniachtchi draw one game in what has been billed as the most accurate game of championship chess in history. That is, according to the best available computer analysis, neither player made a move that appreciably lost or gave away any advantage. An article on the event noted that the analysis suggests that chess games at the top level have been getting progressively more accurate, and more so since advanced computer analysis became widely available to players (tinyurl.com/2p8x9mb4).
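To make that notion of ‘accuracy’ concrete: engines score positions in centipawns (hundredths of a pawn), and a game can be graded by the average centipawn loss per move, ie, how far each move fell short of the engine’s preferred one. Here is a minimal sketch of such a calculation, assuming the python-chess library and a local Stockfish binary; these are illustrative tool choices of ours, not the analysis pipeline actually used on the championship games.

```python
# Sketch: grade a game by average centipawn loss per move.
# Assumes the python-chess library and a local Stockfish binary;
# illustrative choices only, not the analysts' actual pipeline.
import chess
import chess.engine
import chess.pgn

def average_centipawn_loss(pgn_path, engine_path="stockfish", depth=18):
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)
    with open(pgn_path) as f:
        game = chess.pgn.read_game(f)
    board = game.board()
    losses = []
    for move in game.mainline_moves():
        # Evaluation before the move, from the mover's point of view.
        before = engine.analyse(board, chess.engine.Limit(depth=depth))
        best = before["score"].pov(board.turn).score(mate_score=100000)
        board.push(move)
        # Evaluation after the move, still from the mover's point of view
        # (the opponent is now to move, so flip the perspective back).
        after = engine.analyse(board, chess.engine.Limit(depth=depth))
        actual = after["score"].pov(not board.turn).score(mate_score=100000)
        losses.append(max(0, best - actual))
    engine.quit()
    return sum(losses) / max(1, len(losses))
```

The lower the average loss, the closer the players stayed to the engine’s preferred line – which is roughly what is meant by the ‘most accurate game of championship chess in history’.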

This is an instructive example of computers and humans interacting to provide increasing accuracy and effectiveness. The event which saw then World Champion Garry Kasparov defeated by a computer in 1997 has been seen as a landmark in the development of machine thinking, and the point at which computers became better than humans (even if, at that event, the computer required human adjustment to its programming between games to achieve the feat). In 2016, a computer programme called AlphaGo defeated one of the best human minds at the ancient board game of Go. Go is considered harder for machines to play, since its vast number of possible positions defeats brute-force calculation and play relies far more on intuition, but AlphaGo was trained to play it through machine learning. Again, the expectation is that the best Go players will now improve through using computers to analyse their games for flaws and plan their strategies.

It is now a commonplace in chess circles to talk about ‘computer moves’ that appear utterly unintelligible to the human mind, but which the computer proves make sense, about 14 moves down the line. The computer can now see lines that the human mind just wouldn’t be able to begin to consider (because they often violate the general principles of play in the immediate instance).

Researchers are now trying to train computers to play more like humans. They have trained a neural net to identify players just by their moves and playing style, which will in future enable computers to offer more bespoke assistance. ‘Chess engines play almost an “alien style” that isn’t very instructive for those seeking to learn or improve their skills. They’d do better to tailor their advice to individual players. But first, they’d need to capture a player’s unique form’ (tinyurl.com/263ej48a). Or, as the paper’s abstract puts it: ‘The advent of machine learning models that surpass human decision-making ability in complex domains has initiated a movement towards building AI systems that interact with humans’ (tinyurl.com/2p9a7fwk). There are considerable commercial applications for such a capacity, not to mention the police and security implications of stylometric analysis that goes beyond the chessboard.
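The published models are neural networks trained on millions of games, but the underlying idea of move ‘stylometry’ can be sketched much more crudely: represent each game as a bag of move tokens and train any off-the-shelf classifier to guess who played it. The following is a toy construction of ours (again assuming python-chess, plus scikit-learn), purely to illustrate the principle.

```python
# Toy sketch of move 'stylometry': turn each game into a bag of
# (piece, from-square, to-square) tokens and train a generic text
# classifier to guess the player. The published research uses far
# richer neural models; this only illustrates the idea.
import chess
import chess.pgn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def game_tokens(pgn_file):
    game = chess.pgn.read_game(pgn_file)
    board = game.board()
    tokens = []
    for move in game.mainline_moves():
        piece = board.piece_at(move.from_square).symbol()
        tokens.append(piece + chess.square_name(move.from_square)
                      + chess.square_name(move.to_square))
        board.push(move)
    return " ".join(tokens)

# Hypothetical usage: texts is one token string per game, labels the
# name of whoever played it.
# model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
# model.fit(texts, labels)
# model.predict([game_tokens(open("unknown_game.pgn"))])
```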

In the field of sport, we’ve seen technology appear to improve umpiring decisions, particularly on the ever-controversial leg-before-wicket rule in cricket. The introduction of ball-tracking technology (alongside Hotspot and ‘Snicko’) has not only improved the accuracy of the decisions finally taken but has also influenced the umpires’ decision-making in the first place. What they previously, to the naked eye, might have considered not out has been empirically proven to be a legal dismissal. Umpires have learned to adapt and reduce the chance of being overruled by the machine.

Perhaps the most extreme manifestation of this human-computer interaction was announced in December last year: human brain cells in a Petri dish were trained to play a computer game (tinyurl.com/2p9efekb). That they learnt faster than computer AI is itself intriguing, and is indicative of the issues that have surrounded the development of AI for a long time. While computers have promised a general adaptive intelligence since they were first conceived of, a great deal of research has yet to produce tangible results beyond the very restricted rules-based situations of chess or Go.

The dream of driverless cars, for example, has taken a considerable knock. In practice the cars tend not to respond well to the chaotic environment of a real road (which has led to a number of fatalities during the testing of these machines). The fact that their sensors can be fooled by something resembling the white lines of a road (or, even worse, actually spoofed into thinking something is a road by malicious actors) has seriously dented the idea that there will be a general roll-out of driverless vehicles on our roads any time soon.

And that is before considering even more dangerous possibilities, such as hackers gaining control of automated driving systems and using them to make a car do what they want.

This is itself alarming, as military applications of AI are being increasingly deployed in the real world (and form part of the current military competition between the US and China). Last year, the British government announced that its forces had deployed AI in combat manoeuvres:

‘Through the development of significant automation and smart analytics, the engine is able to rapidly cut through masses of complex data. Providing efficient information regarding the environment and terrain, it enables the Army to plan its appropriate activity and outputs […] In future, the UK armed forces will increasingly use AI to predict adversaries’ behaviour, perform reconnaissance and relay real-time intelligence from the battlefield’ (tinyurl.com/5nrbxv7t).

Of course, some of this is puffery to promote the armed forces but the AI competition between powers is real. It isn’t just in the development labs, though. In the recent war between Azerbaijan and Armenia, the Azerbaijanis used AI-assisted drones to considerable effect:

‘Relatively small Azerbaijani mobile groups of crack infantry with light armor and some Israeli-modernized tanks were supported by Turkish Bayraktar TB2 attack drones, Israeli-produced loitering munitions, and long-range artillery and missiles, […] Their targeting information was supplied by Israeli- and Turkish-made drones, which also provided the Azerbaijani military command with a real-time, accurate picture of the constantly changing battlefield situation’ (tinyurl.com/4cwb37au).

Wikipedia defines a loitering munition as: ‘a weapon system category in which the munition loiters around the target area for some time, searches for targets, and attacks once a target is located’ (tinyurl.com/bdhmmvp7).

The applications are still limited, but given that drone swarm displays have become as common and spectacular as fireworks, it’s clear that the technology exists for serious damage to be done on a wide scale with these devices, and some of the finest minds in the world are looking to make them even more lethal and autonomous to combat potential threats to communication lines.

Tellingly, in parliamentary answers, the British government refuses to back a moratorium on autonomous killing devices: ‘the UK will continue to play an active role[…], working with the international community to agree norms and positive obligations to ensure the safe and responsible use of autonomy’ (tinyurl.com/2p8vbkr5).

The fact is that no-one quite knows where this will end. Futurologists talk of an event called ‘the Singularity’, a point in time beyond which we cannot make meaningful predictions and after which is a completely unrecognisable world. The likeliest cause of an imminent singularity, they claim, is the invention of superhuman intelligence, capable of redesigning and improving itself. This would in turn lead, so they claim, to new innovations coming so fast that they would be obsolete by the time they were implemented.

At present, this remains theoretical, and decades of research into artificial intelligence and machine learning still have not provided even a theoretical route to a machine capable of general intelligence, as opposed to a specific task-focused capability.

‘The common shortcoming across all AI algorithms is the need for predefined representations[…]. Once we discover a problem and can represent it in a computable way, we can create AI algorithms that can solve it, often more efficiently than ourselves. It is, however, the undiscovered and unrepresentable problems that continue to elude us’ (tinyurl.com/26ve3zky).

Computers can work very effectively at what they do, but they lack intention or volition. At www.chess.com/computer-chess-championship computer engines tirelessly and sterilely play each other at chess, endlessly making moves with no love for the game and no pleasure in victory.

We can look forward to the impact of AI on our lives being a co-operative effort between humans and machines. This article, for instance, was written with the assistance of Google’s searching algorithms to bring up relevant articles at a moment’s notice, replacing hours of research in a physical library.

Stuart Russell’s 2021 Reith Lectures on ‘Living with Artificial Intelligence’ (2022)

From the June 2022 issue of the Socialist Standard 

The four lectures are available online, but here’s a quick and sketchy summary.

Lecture 1: The biggest event in human history

Machines don’t have objectives, so the ‘standard model’ of AI is to feed objectives in and let the machine figure out the method. But if the objectives are ill-considered, the machine won’t know that. Machine ‘consciousness’ is an anthropocentric irrelevance.

Artificial General Intelligence (AGI) could herald a Golden Age, in which we could ‘raise the living standard of everyone on Earth in a sustainable way to a respectable level.’

Corporations have been called ‘profit-maximising algorithms’. It’s silly to blame them, because ‘we’re the ones who wrote the rules.’ Russell says we should change the rules.

Lecture 2: AI in warfare

AI features in drone swarms, supersonic missile fighters, self-drive tanks and submarines, and robotics, but he is mainly concerned with Lethal Autonomous Weapons Systems (LAWS) – weapons which locate, select and engage (kill) human targets without human supervision. The entire AI industry is opposed to LAWS on ethical and practical grounds. The biggest drawback is the eventual availability to all actors of cheap LAWS, making conflict and escalation more likely. In 2017 Russell and some students made a scary YouTube video called Slaughterbots to demonstrate the future potential of LAWS. Russian commentators dismissed the technology as 30 years away. A Turkish firm built one 3 weeks later.

Nonetheless he is optimistic about a comprehensive LAWS ban, citing treaties on nuclear, chemical and biological weapons as well as land mines, blinding laser weapons, etc.

Lecture 3: AI in the economy

Most experts think AGI is a ‘plausible outcome within the next few decades’. JM Keynes postulated ‘technological unemployment’, but classical economists dismissed this as a Luddite fantasy. Russell disagrees, illustrating why with an ingenious paintbrush analogy. He describes the ‘wealth effect’ as automation makes things cheaper, but sees AGI pushing ‘virtually all sectors into decreased employment’. He acknowledges that wealth percolates up and doesn’t trickle down, increasing inequality: ‘I don’t know any near-term solution other than redistribution’.

Lecture 4: AI – A Future for Humans

The EU asked him if Asimov’s 3 Laws of Robotics could be made into law. He said they are illogical and unworkable. Instead he proposes three design principles:

1. The sole objective of any AI system must be the realisation of human preferences.
2. The machine can never assume that those preferences are fixed and known; it needs to ask (the uncertainty principle). It must always allow itself to be switched off, in case it is itself the problem. (A toy sketch of this principle follows below.)
3. Machines should rely not just on what some humans say (they may be mistaken, or bad actors) but also on general human behaviour, and written records.
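A toy illustration of the second principle (a sketch of our own, not Russell’s formal model): an agent that keeps a probability distribution over possible readings of its instruction, acts only when it is sufficiently confident, asks the humans otherwise, and never resists being switched off.

```python
# Toy sketch of the 'uncertainty principle' above (our own
# illustration, not Russell's formal model): the machine holds a
# probability distribution over candidate objectives, acts only when
# confident, asks otherwise, and always accepts shutdown.
from dataclasses import dataclass, field

@dataclass
class UncertainAgent:
    # P(objective): candidate readings of the instruction it was given.
    beliefs: dict = field(default_factory=lambda: {
        "make some paperclips": 0.6,
        "convert all available matter into paperclips": 0.4,
    })
    act_threshold: float = 0.95
    switched_off: bool = False

    def step(self):
        if self.switched_off:
            return "halted"  # never resists the off switch
        best, p = max(self.beliefs.items(), key=lambda kv: kv[1])
        if p >= self.act_threshold:
            return "acting on: " + best
        return "unsure (p=%.2f) - asking the humans first" % p

    def observe_answer(self, objective):
        # Human feedback collapses the uncertainty.
        self.beliefs = {k: float(k == objective) for k in self.beliefs}

agent = UncertainAgent()
print(agent.step())                            # asks rather than acts
agent.observe_answer("make some paperclips")
print(agent.step())                            # now acts, modestly
agent.switched_off = True
print(agent.step())                            # halted
```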
Russell thinks businesses will have a strong financial interest in following these principles, to avoid bad press and payouts after a disaster. But they also face a ‘first mover’ incentive to race ahead of competitors, hence the need for defining codes of conduct.

The Economics of AI (2022)

From the June 2022 issue of the Socialist Standard 

AI involves the use of machines to take, on their own, a continuing series of certain ‘intelligent’ decisions currently made by humans, but in principle it is no different from the use of other machines in production. AI is a further extension of the continual mechanisation that has been going on since capitalism started. Competition drives capitalist firms to mechanise in order to reduce their costs of production and stay in the competitive battle for profits. The same economic laws that govern the introduction of machinery under capitalism apply to the application of AI.

These economic laws are not what they might at first be assumed to be – that machines are introduced to reduce the amount of past and present human labour involved in producing something. This is because under capitalism there is a difference between the total amount of human labour required from start to finish to produce something and what it costs a capitalist firm to have it produced.

The total labour required to produce an item of wealth is not just that expended in the last stage of its production, as in the factory from which it emerges as a finished product, but also that expended on the production of the materials re-worked, the energy consumed and the wear and tear of the machines and buildings.

Only the labour expended at the last stage adds new labour and value. Under capitalism this is divided into wages (corresponding to the labour embodied in the workers’ labour-power) and unpaid surplus labour (the source of profit). The past labour embodied in the materials, energy, machines and buildings is transferred to the product without increasing. In the case of machines this is transferred gradually in the form of wear and tear (depreciation) until they need to be replaced. So while machines transfer value to the product it is their own pre-existing value. They do not add any new value and so do not produce surplus value; it is only the labour of those who use them that does.
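In the conventional Marxian shorthand (the symbols are the textbook ones, not used in the article itself), the value W of a product breaks down as:

```latex
\[
  W \;=\; \underbrace{c}_{\substack{\text{past labour transferred:}\\ \text{materials, energy, depreciation}}}
  \;+\; \underbrace{v}_{\substack{\text{wages:}\\ \text{paid new labour}}}
  \;+\; \underbrace{s}_{\substack{\text{surplus value:}\\ \text{unpaid new labour}}}
\]
```

Machines and materials only ever contribute to c; the new value v + s comes solely from the living labour of those who work with them.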

Productivity can be said to increase when less labour (past and present) is required to produce an item of wealth. A machine only increases productivity to the extent that it displaces more labour than needed to produce it. Unless it does this there is no point, as far as increasing productivity is concerned, in installing it.

Under capitalism there is another limit. Machines are only installed to the extent that they replace the paid part of newly-added labour. This places the bar higher than it would be in a society, such as socialism will be, where there is no division of newly-added labour into paid and unpaid parts. It means that machines that could be installed are not, because it is not profitable to do so even though they would increase productivity.
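A worked toy example, with invented numbers, of the two different bars. Suppose a machine takes 100 hours of labour to produce and displaces 150 hours of newly-added labour over its working life, of which 60 hours are paid (wages) and 90 unpaid (surplus labour):

```latex
\[
  \text{Productivity test (any society):}\qquad 150 > 100
  \;\Rightarrow\; \text{worth installing}
\]
\[
  \text{Profitability test (capitalism):}\qquad \underbrace{60}_{\text{paid labour only}} < 100
  \;\Rightarrow\; \text{not installed}
\]
```

The machine would raise productivity, but since it only saves the capitalist the wage bill, it stays on the shelf.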

What is relevant for a capitalist firm when considering whether to mechanise some work is the level of wages, the price of labour-power, compared to the price of the machine. The lower the wages, the less the incentive to install machinery; and wages can vary from place to place and from industry to industry. Marx, writing in the 1860s, gave some interesting historical examples:

‘Hence the invention nowadays in England of machines that are employed only in North America; just as in the sixteenth and seventeenth centuries machines were invented in Germany for use exclusively in Holland, and just as many French inventions of the eighteenth century were exploited only in England… The Yankees have invented a stone-breaking machine. The English do not make use of it because the “wretch” who does this work gets paid for such a small portion of his labour that machinery would increase the cost of production to the capitalist’ (Capital, Vol 1, chapter 15, section 2. Penguin translation).

The same considerations apply to AI machines. AI will only be applied under capitalism when and where it will displace more paid labour than its machines cost, not when it will reduce the total amount of labour expended. Together with the high cost of producing AI machines, this will slow down and limit the extent to which AI will be applied under capitalism. In industries and countries where the labour-power to be replaced is relatively cheaper it won’t be applied at all.

Futurologists who see a more or less rapid spread of AI fail to take this into account. As do those who, following their lead, think that AI will quickly lead to mass unemployment and so to a drastic fall in paying demand, which they propose to remedy by paying everyone a ‘universal basic income’. To compensate for the fall in paying demand this payment, financed out of profits, would have to be at a level that would be incompatible with capitalism. Any lesser amount would have the perverse result of reducing money wages.

The idea of the whole working class being replaced by intelligent robots is also a fantasy. AI equipment, like all machines, does not create any new value (transfer any new labour to the product) and so no surplus value; it just transfers gradually the labour expended from start to finish to make it. If production were fully automated, no surplus value would be produced, so there would be no profits and capitalism would no longer exist. Not that there is any chance of capitalism evolving into a ‘fully automated’ economy. This could only come into being if, at some point in the future after the abolition of capitalism, socialist society were to decide to go down that route (not an evident decision) and establish ‘fully automated luxury communism’. At the present time, given the low level of productivity compared to what it would need to be for that, this is science fiction. Humans are still going to have to have a substantial direct input into production for a long time to come, even after socialism has been established.

A ‘lights-out’ sector of the capitalist economy is another matter. A factory requiring no input of living human labour is not inconceivable, and a few are indeed operational. Although no surplus value would be produced in it, the capital invested there would make a profit due to the averaging of the rate of profit. This is brought about by capitals competing to invest in the most profitable sectors, with the result that all capitals, irrespective of their composition (into past and present labour), tend to get the same rate of return. The source of the profit of a lights-out factory would be a share of the surplus value created in the rest of the capitalist economy.
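A toy illustration of that averaging, with invented figures. Suppose the economy as a whole has 1000 of capital invested and produces 200 of surplus value, and a lights-out factory accounts for 100 of that capital while producing no surplus value of its own:

```latex
\[
  r \;=\; \frac{S_{\text{total}}}{C_{\text{total}}} \;=\; \frac{200}{1000} \;=\; 20\%
  \qquad\Rightarrow\qquad
  \text{lights-out profit} \;\approx\; 100 \times 20\% \;=\; 20
\]
```

That 20 is not created in the automated plant; it is a redistribution of the 200 of surplus value produced by living labour in the rest of the economy.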

Given capitalism, AI will only be introduced gradually as it becomes profitable (replaces more paid labour than its own considerable cost) and much slower than would be technologically possible to increase productivity. The robots are not going to take over any time soon.
Adam Buick