The four lectures are online here, but here's a quick and sketchy summary.
Lecture 1: The biggest event in human history
Machines don’t have objectives, so the ‘standard model’ of AI is to feed objectives in and let the machine figure out the method. But if the objectives are ill-considered, the machine won’t know that. Machine ‘consciousness’ is an anthropocentric irrelevance.
Artificial General Intelligence (AGI) could herald a Golden Age, in which we could ‘raise the living standard of everyone on Earth in a sustainable way to a respectable level.’
Corporations have been called ‘profit-maximising algorithms’. It’s silly to blame them, because ‘we’re the ones who wrote the rules.’ Russell says we should change the rules.
Lecture 2: AI in warfare
AI features in drone swarms, supersonic missiles, self-driving tanks and submarines, and robotics, but he is mainly concerned with Lethal Autonomous Weapons Systems (LAWS) – weapons which locate, select and engage (kill) human targets without human supervision. The entire AI industry is opposed to LAWS on ethical and practical grounds. The biggest drawback is the eventual availability to all actors of cheap LAWS, making conflict and escalation more likely. In 2017 Russell and some students made a scary YouTube video called Slaughterbots to demonstrate the future potential of LAWS. Russian commentators dismissed the technology as 30 years away. A Turkish firm built one 3 weeks later.
Nonetheless he is optimistic about a comprehensive LAWS ban, citing treaties on nuclear, chemical and biological weapons as well as land mines, blinding laser weapons, etc.
Lecture 3: AI in the economy
Most experts think AGI is a ‘plausible outcome within the next few decades’. JM Keynes postulated ‘technological unemployment’, but classical economists dismissed this as a Luddite fantasy. Russell disagrees, illustrating why with an ingenious paintbrush analogy. He describes the ‘wealth effect’ as automation makes things cheaper, but sees AGI pushing ‘virtually all sectors into decreased employment’. He acknowledges that wealth percolates up and doesn’t trickle down, increasing inequality: ‘I don’t know any near-term solution other than redistribution’.
Lecture 4: AI – A Future for Humans
The EU asked him if Asimov’s 3 Laws of Robotics could be made into law. He said they are illogical and unworkable. Instead he proposes three design principles:
The sole objective of any AI system must be the realisation of human preferences.
The machine can never assume that those preferences are fixed and known; it needs to ask (the uncertainty principle). It must always allow itself to be switched off, in case it is the problem.
Machines should rely not just on what some humans say (they may be mistaken, or bad actors) but also on general human behaviour, and written records.
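The uncertainty principle has a neat game-theoretic flavour, which Russell has elsewhere called the "off-switch game". Here is a minimal sketch in Python; the probabilities and payoffs are invented for illustration, not taken from the lectures. A machine unsure whether its action helps (+1) or harms (-1) the human can either act unilaterally or defer and let the human veto it. Deferring is never worse, which is why uncertainty about preferences makes the machine happy to be switched off.

```python
import random

random.seed(0)

def expected_value(p, samples=100_000):
    """Compare two policies when the action helps (+1) with probability p
    and harms (-1) otherwise. Hypothetical payoffs for illustration."""
    acting = 0.0      # machine acts regardless of the human
    deferring = 0.0   # machine asks first; human switches it off if harmful
    for _ in range(samples):
        u = 1.0 if random.random() < p else -1.0
        acting += u
        deferring += max(u, 0.0)  # human vetoes the harmful case
    return acting / samples, deferring / samples

act, defer = expected_value(p=0.6)
# Analytically: acting yields 2p - 1 = 0.2, deferring yields p = 0.6.
# E[max(U, 0)] >= E[U], so deferring (staying switch-off-able) never loses.
```

The inequality holds for any distribution over the human's preferences, which is the point of the principle: the value of deferring comes precisely from the machine's uncertainty.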
Russell thinks businesses will have a strong financial interest in following these principles, to avoid bad press and payouts after a disaster. But they also face a 'first mover' incentive to cut corners, hence the need for defined codes of conduct.