Will machines outsmart man?

hamba

Inactive User
Joined
May 24, 2005
Messages
8,704
Reaction score
1,345
Location
Down Here
Will machines outsmart man?

They are looking for the hockey stick. Hockey sticks are the shape technology startups hope their sales graphs will assume: a modestly ascending blade, followed by a sudden turn to a near-vertical long handle. Those who assembled in San Jose in late October for the Singularity Summit are awaiting the point where machine intelligence surpasses that of humans and takes off near-vertically into recursive self-improvement.

The key, said Ray Kurzweil, inventor of the first reading machine and author of 2005's The Singularity Is Near, is exponential growth in computational power - "the law of accelerating returns". In his favourite example, at the human genome project's initial speed, sequencing the genome should have taken thousands of years, not the 15 scheduled. Seven years in, the genome was 1% sequenced. Exponential acceleration had the project finished on schedule. By analogy, enough doublings in processing power will close today's vast gap between machine and human intelligence.
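The doubling arithmetic behind Kurzweil's genome example is easy to check. A minimal sketch (my own illustration, not from the article): if cumulative sequencing output doubles each year, a project that is only 1% done after seven years needs just seven more doublings to finish, since 1% × 2⁷ = 128%.

```python
# Exponential doubling: 1% complete at year 7, output doubles yearly.
pct = 1.0   # percent of the genome sequenced at year 7
year = 7
while pct < 100.0:
    pct *= 2    # one more annual doubling of cumulative output
    year += 1
print(year, pct)  # -> 14 128.0
```

Seven doublings later the project is done in year 14, close to the 15-year schedule the article cites, which is the whole force of the "law of accelerating returns" argument.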

This may be true. Or it may be an unfalsifiable matter of faith, which is why the singularity is sometimes satirically called "the Rapture for nerds". It makes assessing progress difficult. Justin Rattner, chief technology officer of Intel, addressed a key issue at the summit: can Moore's law, which has the number of transistors packed on to a chip doubling every 18 months, stay in line with Kurzweil's graphs? The end has been predicted many times but, said Rattner, although particular chip technologies have reached their limits, a new paradigm has always continued the pace.


"In some sense - silicon gate CMOS - Moore's law ended last year," Rattner said. "One of the founding laws of accelerating returns ended. But there are a lot of smart people at Intel and they were able to reinvent the CMOS transistor using new materials." Intel is now looking beyond 2020 at photonics and quantum effects such as spin. "The arc of Moore's law brings the singularity ever closer."

Judgment day

Belief in an approaching singularity is not solely American. Peter Cochrane, the former head of BT's research labs, says for machines to outsmart humans it "depends on almost one factor alone - the number of networked sensors. Intelligence is more to do with sensory ability than memory and computing power." The internet, he adds, overtook the capacity of a single human brain in 2006. "I reckon we're looking at the 2020 timeframe for a significant machine intelligence to emerge." And, he said: "By 2030 it really should be game over."

Predictions like this flew at the summit. Imagine when a human-scale brain costs $1 - you could have a pocket full of them. The web will wake up, like Gaia. Nova Spivack, founder of EarthWeb and, more recently, Radar Networks (creator of Twine.com), quoted Freeman Dyson: "God is what mind becomes when it has passed beyond the scale of our comprehension."

Listening, you'd never guess that artificial intelligence has been about 20 years away for a long time now. John McCarthy, one of AI's fathers, thought, when he convened the first conference on the subject in 1956, that they'd be able to wrap the whole thing up in six months. McCarthy calls the singularity, bluntly, "nonsense".

Even so, there are many current technologies, such as speech recognition, machine translation, and IBM's human-beating chess grandmaster Deep Blue, that would have seemed like AI at the beginning. "It's incredible how intelligent a human being in front of a connected computer is," observed the CNBC reporter Bob Pisani, marvelling at how clever Google makes him sound to viewers phoning in. Such advances are reminders that there may be valuable discoveries that make attempts at even the wildest ideas worthwhile.

Dharmendra Modha, head of the cognitive computing group at IBM's Almaden research lab, is leading a "quest" to "understand and build a brain as cheaply and quickly as possible". Last year, his group succeeded in simulating a rat-scale cortical model - 55m neurons, 442bn synapses - in the 8TB memory of a 32,768-processor IBM Blue Gene supercomputer. The key, he says, is not the neurons but the synapses, the electrical-chemical-electrical connections between those neurons. Biological microcircuits are essentially the same in all mammals. "An individual human being is stored in the strength of the synapses."
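Those figures imply a strikingly compact per-synapse budget. A back-of-envelope check (my arithmetic, not IBM's): 442bn synapses held in 8TB of Blue Gene memory works out to roughly 20 bytes per synapse.

```python
# Rough memory budget implied by the figures above:
# 442bn synapses stored in 8TB of supercomputer memory.
synapses = 442e9
memory_bytes = 8 * 1024**4          # 8 TB, counted in binary terabytes
bytes_per_synapse = memory_bytes / synapses
print(round(bytes_per_synapse, 1))  # roughly 20 bytes per synapse
```

Twenty-odd bytes is enough for a connection weight and a little bookkeeping, which underlines Modha's point: the simulation captures synaptic strengths, not biophysical detail.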

Smarter than smart

Modha doesn't suggest that the team has made a rat brain. "Philosophically," he writes on the subject, "any simulation is always an approximation (a kind of 'cartoon') based on certain assumptions. A biophysically realistic simulation is not the focus of our work." His team is using the simulation to try to understand the brain's high-level computational principles.

But computational power is nothing without software. "Would the neural code that powers human reasoning run on a different substrate?" the sceptical science writer John Horgan asked Kurzweil, who replied: "The key to the singularity is amplifying intelligence. The prediction is that an entity that passes the Turing test and has emotional intelligence ... will convince us that it's conscious. But that's not a philosophical demonstration."

For intelligence to be effective, it has to be able to change the physical world. The MIT physicist Neil Gershenfeld was therefore at the summit to talk about programmable matter. It's a neat trick: computer science talks in ones and zeros, but these are abstractions representing the flow or interruption of electric current, a physical phenomenon. Gershenfeld, noting that maintaining that abstraction requires increasing amounts of power and complex programming, wants to turn this on its head. What if, he asked, you could buy computing cells by the pound, coat them on a surface, and run programs that assemble them like proteins to solve problems?

Gershenfeld is always difficult for non-physicists to understand, and his video of cells sorting was no exception. Two things he said were clear. First: "We aim to create life." Second: "We have a 20-year road map to make the Star Trek replicator."

Twenty years: 2028. Vernor Vinge began talking about the singularity in the early 80s (naming it after the gravitational phenomenon around a black hole), and has always put the date at 2030. Kurzweil likes 2045; Rattner, before 2050.

Turning back time

These dates may be personally significant. Rattner is 59; Vinge is 64. Kurzweil is 60, takes 250 vitamins and other supplements a day, and believes some of them can turn back ageing. If curing all human ills will be a piece of cake for a superhuman intelligence, then the singularity carries with it the promise of immortality - as long as you're still alive when it happens.

It is this connection between the singularity and immortality, along with the idea that sufficiently advanced technology can solve every problem from climate change to the exhaustion of oil reserves, that gives the summit the feel of a religious movement. Certainly, James Miller, assistant professor of economics at Smith College, sounded evangelical when he reviewed how best to prepare financially. He was optimistic, reviewing investment strategies and assuming retirement funds won't be needed.

HowStuffWorks founder Marshall Brain, by contrast, explained why 50 million people will lose their jobs when they can be replaced by robots. "In the whole universe, there is one intelligent species," he said. "We're in the process of creating the second intelligent species."

The anthropologist Jane Goodall may disagree. She sees a different kind of singularity - the growing ecological devastation of Africa - and worries about the disconnection between human minds and hearts. "If we're the most intellectual animal," she said, "why are we destroying our only home?"

If Goodall's singularity comes first, the other one might never happen at all - one of those catastrophes that Vinge admits as the only thing he can imagine that could stop it.

Wendy M Grossman
The Guardian, Thursday November 6 2008
guardian.co.uk © Guardian News and Media Limited 2008
 
HowStuffWorks founder Marshall Brain, by contrast, explained why 50 million people will lose their jobs when they can be replaced by robots. "In the whole universe, there is one intelligent species," he said. "We're in the process of creating the second intelligent species."

Surely if machines are doing all of the work, that will free up time for humans to spend with their loved ones and to perform more meaningful tasks, rather than working in monotonous jobs that machines could do?
 
I am fairly certain that some machines already outsmart some people.

For instance, my washing machine has recently outsmarted me by refusing to bow down to its human overlord. I can only hope that this isn't the first tentative step towards a machine revolution.
 
Machines outsmarting man? I don't know about that one, but it seems man is being dumbed down by machine. The number of tasks I see at work that people are completely unable to do without machine intervention is unbelievable. It seems people don't want to learn how things work and are just happy to let the computer do it. Now, I don't necessarily know how a washing machine works, but if mine broke down I could still wash my clothes.
 
Technically, humans are the smartest machines on earth. I mean, 'we' are the people who design, create, build and program these machines to do what we want. So aren't 'we' smarter, and won't we always be smarter?
 
Broadly speaking, yes.

However, to outsmart a thing you need to understand a thing. There is no way I could design or create any of the wonderful machines we have right now.
 