Thinking machines and artificial beings appear
in Greek myths,
such as Talos of Crete, the bronze robot
of Hephaestus,
and Pygmalion's Galatea. Human likenesses believed to have
intelligence were built in every major civilization: animated cult images were
worshiped in Egypt and Greece, and
humanoid automatons were
built by Yan Shi, Hero of Alexandria and Al-Jazari. It
was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By
the 19th and 20th centuries, artificial beings had become a common feature in
fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). Pamela
McCorduck argues that all of these are examples of an
ancient urge, as she describes it, "to forge the gods". Stories of
these creatures and their fates discuss many of the same hopes, fears and ethical concerns that
are presented by artificial intelligence.
Mechanical or "formal"
reasoning has been developed by philosophers and mathematicians
since antiquity. The study of logic led directly to the invention of the programmable
digital electronic computer, based on the work of
mathematician Alan Turing and others. Turing's theory of computation suggested that
a machine, by shuffling symbols as simple as "0" and "1",
could simulate any conceivable act of mathematical deduction. This, along
with concurrent discoveries in neurology, information theory and cybernetics,
inspired a small group of researchers to begin to seriously consider the
possibility of building an electronic brain.
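To make Turing's claim concrete, here is a minimal sketch (in Python, not from the original text) of a machine in his mold: a fixed transition table drives a head that reads and writes only "0" and "1" on a tape. This toy machine merely flips each bit of its input, but the same framework, given a richer table, can carry out any mechanical deduction, which is the substance of his result.

def run_turing_machine(tape, rules, state="start", blank="_"):
    # Run a one-tape Turing machine until it enters the "halt" state.
    # For simplicity, this sketch only grows the tape to the right.
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = rules[(state, symbol)]
        if head == len(tape):
            tape.append(blank)
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Transition table: (state, symbol read) -> (next state, symbol to write, move).
# This table flips every bit, then halts at the first blank cell.
flip_rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("10110", flip_rules))  # prints "01001_"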
The field of AI research was founded at a conference on the campus of Dartmouth
College in the summer of 1956. The attendees,
including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon,
became the leaders of AI research for many decades. They and their
students wrote programs that were, to most people, simply
astonishing: computers were winning at checkers, solving word problems in
algebra, proving logical theorems and speaking English. By the middle of the
1960s, research in the U.S. was heavily funded by the Department of Defense and
laboratories had been established around the world. AI's founders were
profoundly optimistic about the future of the new field: Herbert Simon predicted
that "machines will be capable, within twenty years, of doing any work a
man can do" and Marvin Minsky agreed, writing that
"within a generation ... the problem of creating 'artificial
intelligence' will substantially be solved".
They had failed to recognize the difficulty of some
of the problems they faced. In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure
from the U.S. Congress to fund more productive projects, both the U.S. and
British governments cut off funding for undirected exploratory research in AI. The next
few years would later be called an "AI winter", a
period when funding for AI projects was hard to find.
In the early 1980s, AI research was revived by the
commercial success of expert systems, a form of AI program that
simulated the knowledge and analytical skills of one or more human experts (a rule-based approach sketched below). By
1985 the market for AI had reached over a billion dollars. At the same time,
Japan's fifth generation computer project
inspired the U.S. and British governments to restore funding for academic research
in the field.[32] However,
beginning with the collapse of the Lisp Machine market
in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.
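As an illustration only (the rules and facts below are invented placeholders, not drawn from any real product of the era), the core architecture of an expert system can be sketched in a few lines of Python: an expert's knowledge is encoded as if-then rules, and an inference engine forward-chains over known facts until no rule adds anything new.

# Invented toy rules standing in for an expert's knowledge; a real
# expert system of the 1980s held hundreds or thousands of such rules.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_specialist"),
]

def forward_chain(facts, rules):
    # Fire every rule whose conditions are all known facts, adding its
    # conclusion as a new fact; repeat until nothing changes.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "cough", "short_of_breath"}, rules))
# result includes "flu_suspected" and "refer_to_specialist"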
In the 1990s and early 21st century, AI achieved its
greatest successes, albeit somewhat behind the scenes. Artificial intelligence
came to be used for logistics, data mining, medical
diagnosis and many other areas throughout the technology
industry.[12] The
success was due to several factors: the increasing computational power of
computers (see Moore's law), a greater emphasis on solving
specific subproblems, the creation of new ties between AI and other fields
working on similar problems, and a new commitment by researchers to solid
mathematical methods and rigorous scientific standards.
On 11 May 1997, Deep Blue became
the first computer chess-playing system to beat a reigning world chess champion, Garry
Kasparov. In February 2011, in a Jeopardy! quiz show exhibition
match, IBM's question answering system, Watson, defeated the two
greatest Jeopardy champions, Brad Rutter and Ken Jennings,
by a significant margin. The Kinect, which provides a 3D body–motion interface for
the Xbox 360 and
the Xbox One, uses algorithms that emerged from lengthy AI research, as
do intelligent personal assistants in smartphones.