
Friday 13 March 2015

The future of artificial intelligence!

The field of artificial intelligence may not be able to create a robotic vacuum cleaner that never knocks over a vase, at least not within a couple of years, but intelligent machines will increasingly replace knowledge workers in the near future, a group of AI experts predicted.
An AI machine that can learn the same way humans do, and has the equivalent processing power of a human brain, is still a few years off, the experts said. But AI programs that can reliably assist with medical diagnosis and offer sound investing advice are on the near horizon, said Andrew McAfee, co-founder of the Initiative on the Digital Economy at the Massachusetts Institute of Technology.
For decades, Luddites have mistakenly predicted that automation would create large unemployment problems, but those predictions may finally come true as AI matures in the next few years, McAfee said Monday during a discussion on the future of AI at the Council on Foreign Relations in Washington, D.C.
Innovative companies will increasingly combine human knowledge with AI knowledge to refine results, McAfee said. “What smart companies are doing is buttressing a few brains with a ton of processing power and data,” he said. “The economic consequences of that are going to be profound and are going to come sooner than a lot of us think.”

Rote work will be replaced by machines

Many knowledge workers today get paid to do things that computers will soon be able to do, McAfee predicted. “I don’t think a lot of employers are going to be willing to pay a lot of people for what they’re currently doing,” he said.
Software has already replaced human payroll processors, and AI will increasingly move up the skill ladder to replace U.S. middle-class workers, he said. He used the field of financial advising as an example.
It’s a “bad joke” that humans almost exclusively produce financial advice today, he said. “There’s no way a human can keep on top of all possible financial instruments, analyze their performance in any rigorous way, and assemble them in a portfolio that makes sense for where you are in your life.”
But AI still has many limitations, with AI scientists still not able to “solve the problem of common sense, of endowing a computer with the knowledge that every 5-year-old has,” said Paul Cohen, program manager in the Information Innovation Office at the U.S. Defense Advanced Research Projects Agency (DARPA) and founding director of the University of Arizona School of Information’s science, technology and arts program.
There is, however, a class of problems where AI will do “magnificent things,” by pulling information out of huge data sets to make increasingly specific distinctions, he added. IBM’s recent decision to focus its Watson AI computer on medical diagnostics is a potential “game changer,” he said.

History of Artificial Intelligence!

Thinking machines and artificial beings appear in Greek myths, such as Talos of Crete, the bronze robot of Hephaestus, and Pygmalion's Galatea. Human likenesses believed to have intelligence were built in every major civilization: animated cult images were worshiped in Egypt and Greece, and humanoid automatons were built by Yan Shi, Hero of Alexandria and Al-Jazari. It was also widely believed that artificial beings had been created by Jābir ibn Hayyān, Judah Loew and Paracelsus. By the 19th and 20th centuries, artificial beings had become a common feature in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots). Pamela McCorduck argues that these are all examples of an ancient urge, as she describes it, "to forge the gods". Stories of these creatures and their fates discuss many of the same hopes, fears and ethical concerns that are presented by artificial intelligence.
Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This, along with concurrent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees, including John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, became the leaders of AI research for many decades. They and their students wrote programs that were, to most people, simply astonishing: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were profoundly optimistic about the future of the new field: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off all undirected exploratory research in AI. The next few years would later be called an "AI winter", a period when funding for AI projects was hard to find.
In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of one or more human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research in the field. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting AI winter began.
In the 1990s and early 21st century, AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence is used for logistics, data mining, medical diagnosis and many other areas throughout the technology industry. The success was due to several factors: the increasing computational power of computers (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and a new commitment by researchers to solid mathematical methods and rigorous scientific standards.
On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants in smartphones.

What is Artificial Intelligence?

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also the name of the academic field that studies how to create such intelligence. Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".
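To make the "intelligent agent" definition above concrete, here is a minimal sketch in Python; the class names and the thermostat example are hypothetical, chosen only to illustrate the perceive-then-act idea, not taken from any particular textbook or library.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent perceives its environment and takes actions that
    maximize its chances of achieving its goal."""

    @abstractmethod
    def perceive(self, environment):
        """Return an observation of the current environment state."""

    @abstractmethod
    def act(self, observation):
        """Choose the action expected to best achieve the goal."""

class ThermostatAgent(Agent):
    """Toy agent whose goal is to keep a room near a target temperature."""

    def __init__(self, target=21.0):
        self.target = target

    def perceive(self, environment):
        return environment["temperature"]

    def act(self, observation):
        if observation < self.target - 0.5:
            return "heat"
        if observation > self.target + 0.5:
            return "cool"
        return "idle"

# One step of the perceive-act loop.
agent = ThermostatAgent()
room = {"temperature": 18.2}
print(agent.act(agent.perceive(room)))  # -> heat
```

Even this toy agent follows the pattern the definition describes: it observes the environment, compares the observation against its goal, and picks the action most likely to succeed.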
AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Some of the division is due to social and cultural factors: subfields have grown up around particular institutions and the work of individual researchers. AI research is also divided by several technical issues. Some subfields focus on the solution of specific problems. Others focus on one of several possible approaches, on the use of a particular tool, or on the accomplishment of particular applications.
The central problems (or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate objects. General intelligence is still among the field's long-term goals. Currently popular approaches include statistical methods, computational intelligence and traditional symbolic AI. There are a large number of tools used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics, and many others. The AI field is interdisciplinary, in which a number of sciences and professions converge, including computer science, mathematics, psychology, linguistics, philosophy and neuroscience, as well as other specialized fields such as artificial psychology.
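As one small example of the "search" tool mentioned above, the sketch below implements breadth-first search over a toy state graph in Python; the graph and node names are invented purely for illustration.

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Breadth-first search: explore states level by level and return the
    path with the fewest steps from start to goal, or None if unreachable."""
    frontier = deque([[start]])   # paths still to be extended
    visited = {start}             # states already reached
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

# Toy state graph: which states can be reached from which.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
}
print(bfs_path(graph, "A", "E"))  # -> ['A', 'C', 'E']
```

Search of this kind, usually with heuristics and far larger state spaces, underlies classic AI results such as game playing and automated planning.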
The field was founded on the claim that a central property of humans, intelligence (the sapience of Homo sapiens), "can be so precisely described that a machine can be made to simulate it." This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been addressed by myth, fiction and philosophy since antiquity. Artificial intelligence has been the subject of tremendous optimism but has also suffered stunning setbacks. Today it has become an essential part of the technology industry, providing the heavy lifting for many of the most challenging problems in computer science.