FROM: Ruben Safir
SUBJECT: Re: [NYLXS - HANGOUT] Building a Better Mouse Trap
|On Sun, Jul 26, 2009 at 08:23:02AM -0400, Simon Fondrie-Teitler wrote:
> Out of curiosity, what is the source for this? Thanks.
> Simon Fondrie-Teitler
The NY Times
> On Sat, Jul 25, 2009 at 11:00 PM, Ruben Safir wrote:
> > July 26, 2009 Scientists Worry Machines May Outsmart Man By JOHN
> > MARKOFF
> > A robot that can open doors and find electrical outlets to recharge
> > itself. Computer viruses that no one can stop. Predator drones,
> > which, though still controlled remotely by humans, come close to
> > a machine that can kill autonomously.
> > Impressed and alarmed by advances in artificial intelligence, a
> > group of computer scientists is debating whether there should be
> > limits on research that might lead to loss of human control over
> > computer-based systems that carry a growing share of society's
> > workload, from waging war to chatting with customers on the phone.
> > Their concern is that further advances could create profound social
> > disruptions and even have dangerous consequences.
> > As examples, the scientists pointed to a number of technologies as
> > diverse as experimental medical systems that interact with patients
> > to simulate empathy, and computer worms and viruses that defy
> > extermination and could thus be said to have reached a "cockroach"
> > stage of machine intelligence.
> > While the computer scientists agreed that we are a long way from
> > Hal, the computer that took over the spaceship in "2001: A Space
> > Odyssey," they said there was legitimate concern that technological
> > progress would transform the work force by destroying a widening
> > range of jobs, as well as force humans to learn to live with machines
> > that increasingly copy human behaviors.
> > The researchers (leading computer scientists, artificial intelligence
> > researchers and roboticists who met at the Asilomar Conference
> > Grounds on Monterey Bay in California) generally discounted the
> > possibility of highly centralized superintelligences and the idea
> > that intelligence might spring spontaneously from the Internet.
> > But they agreed that robots that can kill autonomously are either
> > already here or will be soon.
> > They focused particular attention on the specter that criminals
> > could exploit artificial intelligence systems as soon as they were
> > developed. What could a criminal do with a speech synthesis system
> > that could masquerade as a human being? What happens if artificial
> > intelligence technology is used to mine personal information from
> > smart phones?
> > The researchers also discussed possible threats to human jobs, like
> > self-driving cars, software-based personal assistants and service
> > robots in the home. Just last month, a service robot developed by
> > Willow Garage in Silicon Valley proved it could navigate the real
> > world.
> > A report from the conference, which took place in private on Feb.
> > 25, is to be issued later this year. Some attendees discussed the
> > meeting for the first time with other scientists this month and in
> > interviews.
> > The conference was organized by the Association for the Advancement
> > of Artificial Intelligence, and in choosing Asilomar for the
> > discussions, the group purposefully evoked a landmark event in the
> > history of science. In 1975, the world's leading biologists also
> > met at Asilomar to discuss the new ability to reshape life by
> > swapping genetic material among organisms. Concerned about possible
> > biohazards and ethical questions, scientists had halted certain
> > experiments. The conference led to guidelines for recombinant DNA
> > research, enabling experimentation to continue.
> > The meeting on the future of artificial intelligence was organized
> > by Eric Horvitz, a Microsoft researcher who is now president of
> > the association.
> > Dr. Horvitz said he believed computer scientists must respond to
> > the notions of superintelligent machines and artificial intelligence
> > systems run amok.
> > The idea of an "intelligence explosion" in which smart machines
> > would design even more intelligent machines was proposed by the
> > mathematician I. J. Good in 1965. Later, in lectures and science
> > fiction novels, the computer scientist Vernor Vinge popularized
> > the notion of a moment when humans will create smarter-than-human
> > machines, causing such rapid change that the "human era will be
> > ended." He called this shift the Singularity.
> > This vision, embraced in movies and literature, is seen as plausible
> > and unnerving by some scientists like William Joy, co-founder of
> > Sun Microsystems. Other technologists, notably Raymond Kurzweil,
> > have extolled the coming of ultrasmart machines, saying they will
> > offer huge advances in life extension and wealth creation.
> > "Something new has taken place in the past five to eight years,"
> > Dr. Horvitz said. "Technologists are replacing religion, and their
> > ideas are resonating in some ways with the same idea of the Rapture."
> > The Kurzweil version of technological utopia has captured imaginations
> > in Silicon Valley. This summer an organization called the Singularity
> > University began offering courses to prepare a "cadre" to shape
> > the advances and help society cope with the ramifications.
> > "My sense was that sooner or later we would have to make some sort
> > of statement or assessment, given the rising voice of the technorati
> > and people very concerned about the rise of intelligent machines,"
> > Dr. Horvitz said.
> > The A.A.A.I. report will try to assess the possibility of "the loss
> > of human control of computer-based intelligences." It will also
> > grapple, Dr. Horvitz said, with socioeconomic, legal and ethical
> > issues, as well as probable changes in human-computer relationships.
> > How would it be, for example, to relate to a machine that is as
> > intelligent as your spouse?
> > Dr. Horvitz said the panel was looking for ways to guide research
> > so that technology improved society rather than moved it toward a
> > technological catastrophe. Some research might, for instance, be
> > conducted in a high-security laboratory.
> > The meeting on artificial intelligence could be pivotal to the
> > future of the field. Paul Berg, who was the organizer of the 1975
> > Asilomar meeting and received a Nobel Prize for chemistry in 1980,
> > said it was important for scientific communities to engage the
> > public before alarm and opposition become unshakable.
> > "If you wait too long and the sides become entrenched like with
> > G.M.O.," he said, referring to genetically modified foods, "then
> > it is very difficult. It's too complex, and people talk right past
> > each other."
> > Tom Mitchell, a professor of artificial intelligence and machine
> > learning at Carnegie Mellon University, said the February meeting
> > had changed his thinking. "I went in very optimistic about the
> > future of A.I. and thinking that Bill Joy and Ray Kurzweil were
> > far off in their predictions," he said. But, he added, "The meeting
> > made me want to be more outspoken about these issues and in particular
> > be outspoken about the vast amounts of data collected about our
> > personal lives."
> > Despite his concerns, Dr. Horvitz said he was hopeful that artificial
> > intelligence research would benefit humans, and perhaps even
> > compensate for human failings. He recently demonstrated a voice-based
> > system that he designed to ask patients about their symptoms and
> > to respond with empathy. When a mother said her child was having
> > diarrhea, the face on the screen said, "Oh no, sorry to hear that."
> > A physician told him afterward that it was wonderful that the system
> > responded to human emotion. "That's a great idea," Dr. Horvitz said
> > he was told. "I have no time for that."