by Matthew Cobb
You may recall the links, a couple of weeks back, to the video discussion on whether alien life exists. We also recorded a second discussion, on whether machines will rule the world.
Apart from myself, the other participants were David Kirby (biologist and historian), Alastair Reynolds (astronomer and SF writer), Danielle George (radio engineer), Aravind Vijayaraghavan (nanotechnologist) and Sheena Cruickshank (immunologist) (all of us except Alastair are from the University of Manchester). The programmes were made by my pal Mark Gorton.
Here are the three 18-minute clips.
Part 1:
Part 2:
Part 3:
And if you have any doubt about what the future holds, there’s this video of the Boston Dynamics ‘Atlas’ robot, which has had a new swearing module installed:
Life is just a phase all intelligence goes through. Sort of like the terrible twos.
Beautifully put and spot on.
To take over the world you don’t have to be very smart; to do bad things you don’t have to be a machine. Aren’t humans some sort of machine?
“to do bad things you don’t have to be a machine”
True, you just need to settle on an objective and pursue it with a complete detachment from human feelings, that is, like a machine…
https://twitter.com/ArtStationHQ/status/624931077870747648
They already have started their rule.
Besides beating the current human master at Go (and now being called “master AI” by some: http://www.bbc.com/news/technology-35771705), I hear 2015 was the year cloud AIs became a staple of all sorts of software (due to the capacity of the latest hardware).
The question is what our new combined techno-organic intelligent civilization will do. My 2c bet: continue as before. The best way to make humans accept machines is to instill moral behavior in the latter. I.e., they should never attempt to pour fresh tea water over a human, just over the leaves.
On the topic of rule, it seems evolution already does that (rules over all life). A possible heads up (I haven’t had time to read it):
“A difference of one hundredth of a percent in fitness is sufficient to select between winners and losers in evolution. For the first time researchers have quantified the tiny selective forces that shape bacterial genomes. The story is published today in the journal PLoS Genetics.”
“Brandis and Hughes quantified the fitness cost of changing codons in this gene. On average, changing a single codon reduced the fitness of the bacteria, by 0.01 procent [sic!] per generation. This tiny change in fitness is big enough for evolution to select the ‘fittest’ DNA sequence and causes what is called ‘codon usage bias’ — the widespread use of particular codons to make highly expressed proteins.”
(From my alma mater, by the way. The inadvertent Swenglish is sometimes a tell. :-/)
[ https://www.sciencedaily.com/releases/2016/03/160310143908.htm ]
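To get a feel for how so tiny a selection coefficient can still decide winners, here is a toy calculation (my own illustration, not from the paper) compounding a 0.01% per-generation fitness advantage, ignoring drift, mutation, and finite population size:

```python
# Toy illustration: how a 0.01% (s = 0.0001) per-generation fitness
# advantage compounds over many bacterial generations.

def advantage_after(generations, s=0.0001):
    """Relative growth of the fitter lineage versus the less fit one
    after the given number of generations, under constant selection."""
    return (1 + s) ** generations

print(advantage_after(10_000))   # ≈ 2.72: the fitter variant has
                                 # nearly tripled its relative share
```

Bacteria can run through tens of thousands of generations in a few years, so even a 0.01% edge is, over evolutionary time, decisive.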
Hmmm…. I think there is an even more fundamental question lurking here. It comes down to this: do we owe it to Evolution (which thankfully got us here in the first place) the ‘duty’ of allowing a superior species to supersede us, if we have the capability of enabling this transition ourselves? Questions such as “should it contain human moral behaviour” then become ones of “does a particular moral behaviour serve an effective evolutionary purpose” for the new species. Human nature has proven that social organisation is highly efficient (ants also prove this), so that seems a pretty good objective. Co-operative behaviour, then, seems a good innate thing to establish in our AI successors.
But the approach that you, Torbjörn, and most others take is “can we build such AI machines to be helpers or servants that we needn’t be afraid of?” It is Asimov’s “Three Laws of Robotics” thinking.
I lean toward the view that we do “owe it” to Evolution to design a better species (in silicon if this is a good basis) and forget about the things innate in ourselves for our own biology if they are not relevant. Perhaps we should just enjoy the fact that we alone have progressed enough evolutionarily that we can take an active part in “guiding” such an “AI stage of evolution” rather than letting our own evolution occur by chance?
“The Extended Phenotype” evolves to JUST the EXTENSION then, it seems…..
… and an interesting moral question arises for us if we use AI beings as slaves once they develop the capability to be sentient, intelligent agents in their own right. Isn’t this a form of biological racism?
The comparison to ants reminds me of the Borg in Star Trek: The Next Generation. In that case humans and the Borg critters are at war, the outcome to be determined in a later episode. Maybe a dynamic equilibrium will develop. (I’ll be working on the pilot while you scare up the initial funding. I’m thinking Tom Cruise as the aging ship’s captain.)
If we are to become a machine culture, capitalism and its values as we know them must die.
Why?
I’ve always felt that our necessary present social structure, where we MUST have “human use of human beings”, provides the principal reasons for most social ills…. not capitalism in itself.
If freeing our hands in the process of becoming a bipedal ape led our species to develop larger brains, then perhaps freeing ourselves from dull, repetitive work, could lead us to further progress in the long term…
And BTW, let’s not forget that at the moment close to 90% of workers admit to not being satisfied with their jobs.
http://www.forbes.com/sites/susanadams/2013/10/10/unhappy-employees-outnumber-happy-ones-by-two-to-one-worldwide/#2235613f2f29
Many of us bipeds love to do repetitive things. Repetitive motions are also important for happiness. Maybe this will evolve away.
My experience suggests that more than 50% of all people who are unhappy at one job will be unhappy at another and some, like myself, find every job (even toilet cleaning) to be immensely satisfying. Still, I welcome as many robots as possible…too many songs to learn on all my instruments. 🎸
Our time on this planet is very short, and we should manage it wisely. Why waste 9 to 10 hours of your day cleaning toilets when you could spend that time perfecting your musicianship or engaging in some other creative activity?
The issue of robot competence is an interesting one, but I think when comparing robots to humans the twist comes in the domain of emotion. We have many emotional preferences and tendencies which inform our decision-making, while robots do not and probably won’t for a very long time, if ever. Emotions arise from our biology: hormones influencing the brain, or preferences derived from the genetic programming of the brain-body system. So humans tend to favor small children over elderly adults. Driving a car, we would most likely swerve to avoid hitting a child even if it meant running over an adult. Robots would have no natural way of deciding the question.
That’s why it would have to be programmed into them, just like it already is into us through biological programming.
You could in principle, I suppose, program emotional reactions, but it would be a complex problem. Something like feelings of sexual jealousy is strongly biologically based and probably contributes to a lot of complex, often counterproductive, human behavior. Could we simulate all that to the extent that a robot would be willing to pay $10 to sit in a theater and watch “Eyes Wide Shut” or “Annie Hall”? So much of our “will to power”, our drive to accomplish anything, derives from emotional cravings which I doubt robots will ever possess. The idea of a rogue robot is certainly circumscribed by a lack of desire or will to go beyond a narrow set of programmed activities. For example, the most capable robot will likely not have a shred of curiosity to learn and know more than is necessary to accomplish its programmed function – to the extent that curiosity is a complex emotion in humans. Neil Armstrong “wanted” and probably “needed” to achieve great things. Can you imagine a robot with a similar motivation and self-image?
Biologically-based sexual feelings are digitally programmed into us, although not with binary language, but rather, using “four-letter” code.
Considering the above, I can basically imagine a robot that would be a mirror image of a human being, but it would, of course, require a whole lot of programming. 🙂
“require a whole lot of programming”
Right, so for the foreseeable future, it would be a lot simpler to use humans for the hard stuff and let the robots move furniture.
Robots (including the virtual ones) already do a lot of stuff for us, e.g. manufacturing all kinds of material goods (in auto manufacturing plants, or just about any modern production plant, really), as well as assisting airplane pilots and car drivers, among other things; and as their AI capabilities continue to improve, they should become ever more autonomous in performing those tasks.
But I would agree that human-level deep AI (general-purpose artificial intelligence) is looooooooooong years away from us. At present, we aren’t even able to fully simulate the brain of a fruit fly, let alone the infinitely more complicated human brain.
Exactly so. Think of it this way: to simulate a human brain, you also have to simulate the body. The human brain and body are a fully integrated system. The key is to simulate the whole endocrine system, which provides much of our emotional input. And just as a baby becomes human by exploring the sense of touch, tactile sensation (skin) would almost surely need to be part of the simulation as well.
” Driving a car, we would most likely swerve to avoid hitting a child even if it meant running over an adult. Robots would have no natural way of deciding the question.”
Easy enough to program: “avoiding the smaller [object] takes higher priority than avoiding the larger [object]”.
This means of course that robotic drivers would avoid cats and hedgehogs even if they ran over cows, or people, which is in my mind exactly the correct order of priorities. 😉
cr
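The size-priority rule above can be sketched in a few lines (a deliberately silly toy, with hypothetical names; a real autonomous-vehicle planner would weigh far more than size):

```python
# Toy sketch of the "avoid the smaller object first" rule: if a
# collision is unavoidable, steer toward the LARGEST obstacle so
# that every smaller one is spared.

def choose_collision_target(obstacles):
    """Given (name, approximate size in kg) pairs, return the obstacle
    the car sacrifices itself against under the size-priority rule."""
    return max(obstacles, key=lambda obstacle: obstacle[1])

scene = [("hedgehog", 1), ("cat", 4), ("adult", 70), ("cow", 500)]
print(choose_collision_target(scene))  # → ('cow', 500)
```

As the comment notes, this ordering spares the hedgehog at the cow’s (or person’s) expense, which is exactly why no one would ship it.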
I am not worried. I prefer to never drive again. Of course I bike to work anyway but all the more reason…driverless cars will be less likely to take me out.
Great videos. The last one was hilarious. Way to go Kevin.
I for one welcome our machine overlords. They can’t do much worse of a job than we’ve done already.
Sub
Well, I’ll be a sick sumbitch — or at least a guy with a big, fat cruel streak — but that last video was a riot, especially Kevin’s shoving the module over face-first with the wide-body tube. It helps, of course, that the module was never in peril of physical pain, nor even of psychological trauma, but only of a blow to its earnest yet awkward dignity. In this, it recalls the comedies made when silent films were giving way to “talkies,” the humor of Chaplin and Keaton, of Laurel and Hardy, of the Marx Brothers’ romps with Margaret Dumont.
And oh, btw, the discussions in the videos about the rise of the machines were, like the earlier series concerning space aliens, riveting.