The upward spiral of artificial intelligence looks set to produce machines that are cleverer and more powerful than any human. What happens when machines can themselves create super-intelligent machines? 'The Singularity' is the name science fiction writers gave to this scenario. Philosopher David Chalmers discusses its philosophical implications with Nigel Warburton in this episode of the Philosophy Bites podcast.
Listen to David Chalmers on the Singularity
Philosophy Bites is made in association with The Institute of Philosophy
This episode of Philosophy Bites was recorded at the Uehiro Centre for Practical Ethics at the University of Oxford
The discussion of the Singularity seemed hardly to touch on the most important question. David Chalmers briefly and informally defined intelligence, but the assumption that intelligence can increase indefinitely was never challenged.
I can imagine memory and recall becoming more efficient. I can imagine thoughts being processed faster. A person with better memory and faster thinking might be perceived as more intelligent, but is that all there is to intelligence?
Looked at another way, there is already a great range of intelligence across people. Are the most intelligent ones running the world? Are they making the most money? Are they the best generals? Are they happier? Do they have more descendants?
Or look at intelligence over the centuries. Are we more intelligent now than a thousand, two thousand or five thousand years ago? If you could take a baby from the Roman Empire and bring it up in the 21st century, would it struggle to cope? Presumably not, as 2,000 years is not enough time for intelligence to improve through evolution. Was Einstein more intelligent than Galileo or Shakespeare? How about Craig Venter or Stephen Hawking?
My point is that these examples make me think intelligence has an upper limit. In other words, there is such a thing as 100% intelligence. Maybe the most intelligent people now alive are already close to that limit. Maybe not, but I think the answer is by no means obvious.
A topic for another Philosophy Bites, perhaps.
Posted by: Julian Gall | May 22, 2010 at 10:33 PM
He does discuss diminishing returns and limits to intelligence in the paper. http://consc.net/papers/singularity.pdf
Posted by: Carl Shulman | June 20, 2010 at 01:44 PM