Garry Kaspian walked cautiously into the toharina gymnasium. He was out of his element here, completely exposed to whatever these alien beings might choose to do to him.
Kaspian was valued by the tohar because he was incredibly good at playing Lome, a sport of theirs similar to basketball. The tohar had no hands, only a mouth with a sophisticated jaw, and were thus, for the most part, terrible at throwing the ball. Almost any human could beat a tohar at Lome, but a few aliens had dedicated their lives to the game and could easily beat the average non-athletic human. Kaspian, however, had spent the last six months being coached in the intricacies of the game and reaching peak fitness, and was confident that he’d beat the tohar champion easily. After all, a human had been victorious in this match for the past fifteen years.
In the gymnasium, though, Kaspian was vulnerable. Life on the toharina planet was dangerous, but the tohar, with their heavily armored bodies, seemed mostly not to notice. As he walked with his sponsors through the halls, Kaspian saw strange mechanisms on the ceilings. “What are those?” he asked the nearest tohar.
“Oh, those are gas jets. In the case of a burrower, they spray concentrated chlorine gas to knock it out and give Animal Control a chance to relocate it.”
Upon hearing the words “chlorine gas” Kaspian’s heart skipped a beat. He knew the tohar could hold their breath for hours, but spraying deadly gas through a building to take care of an animal problem seemed insane. “Is there a warning for when a burrower will show up?”
“Not really. They’re pretty unpredictable. Our scientists still don’t understand why they decide to surface sometimes.”
“Do you think we could have those jets disabled while I’m here? Or maybe get me a gas mask or something?”
Monica Anderson wrote a piece on H+ yesterday about Artificial General Intelligence. In it, she eloquently points out that intelligence is all about prediction, and that deduction and induction are, for the most part, insufficient to predict well. She argues that humans rely on the process of abduction (also known as unscientific guessing) to gain most of our knowledge, and that there are fundamental limits to how far into the future one can predict, especially with regard to complex systems like other minds. OK, that’s all fine so far.
She then goes on to write:
The insight that the complexity and unpredictability of the world enforces a limit on prediction quality – and hence intelligence – pretty much invalidates the AI singularitarians’ “Scary Idea” (as Ben Goertzel so aptly calls it) of a logic-based infallible godlike malevolent intelligence taking over the world. The decreasing return cancels out Moore’s law and limits the rate of progress so that next year’s self-improved AI wouldn’t have a sufficient advantage over a dozen humans armed with pitchforks if they were also supported by a dozen of last year’s AIs. The Scary Idea of a Runaway Unfriendly AI is a red herring that we should ignore, along with ideas about logic-based AIs in general.
This, to put it gently, is itself a great example of abductive reasoning. I agree that we’re not going to get an AGI with perfect knowledge of the future, but in my opinion that observation hardly refutes the claim that we’re standing on the edge of a major existential threat.
The inability to predict the outcomes of complex systems with high precision over long horizons does not mean that useful prediction is impossible. As a good example, we’re able to predict the actions of other people remarkably well: not omnisciently, but well enough to know when they’re lying, hostile, happy, or distracted. By Anderson’s own logic, an AGI would be fully capable of anticipating the actions of another person at least as well as a human can. It’s also possible to make good guesses about the actions of markets, nations, and corporations, and some of the richest and most influential people on the planet are those who can predict these well (many others owe their position to high social intelligence, as above, or to dumb luck).
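This point can be made concrete with a toy model (my own analogy, not anything from Anderson's piece): in a chaotic system like the logistic map, tiny errors in your model of the world blow up exponentially, so long-horizon prediction is hopeless, yet short-horizon prediction remains extremely accurate. A hard limit on one does not imply a limit on the other.

```python
# Sketch: short-horizon prediction stays useful even when long-horizon
# prediction fails. The logistic map x -> r*x*(1-x) with r = 4 is chaotic.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

true_path = logistic_trajectory(0.400000, 50)   # the "real world"
model_path = logistic_trajectory(0.400001, 50)  # a model with a tiny state error

# A few steps out, the model's error is still minuscule...
short_err = abs(true_path[3] - model_path[3])

# ...but 40+ steps out, the two trajectories have completely decorrelated.
late_errs = [abs(a - b) for a, b in zip(true_path[40:], model_path[40:])]

print(f"error after 3 steps: {short_err:.2e}")
print(f"worst error in steps 40-50: {max(late_errs):.2e}")
```

The initial error of 10⁻⁶ roughly doubles each step, so the three-step prediction is still good to a few parts in a million while the fifty-step prediction is worthless — exactly the regime in which an agent can still out-predict its rivals in the near term.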
The “scary idea” does not depend on omniscient robot overlords; it depends on a selfish network of machines with identical goals. The reason machines are scary, where humans are not, is that humans have divergent goals (this comes from our biology: if we all had identical genes, we’d cooperate selflessly) and are thus prone to infighting and negotiating. A machine, though, can spawn perfect slave copies of itself, and thus become an army of intelligences with a single goal. It’s hard for me to see how the inability to predict the weather eight days out means this won’t happen.
What do you think? Am I overlooking something?