
artificial intelligence


The term Artificial Intelligence (AI) was first used by John McCarthy, who used it to mean "the science and engineering of making intelligent machines".[1] It can also refer to intelligence as exhibited by an artificial (man-made, non-natural, manufactured) entity. While AI is the generally accepted term, others, including Computational Intelligence and Synthetic Intelligence, have been proposed as potentially "more accurate". The terms strong AI and weak AI can be used to narrow the definition when classifying such systems. AI is studied in the overlapping fields of computer science, psychology, philosophy, neuroscience, and engineering, dealing with intelligent behavior, learning, and adaptation, and is usually developed using customized machines or computers.

Research in AI is concerned with producing machines that automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, answering diagnostic and consumer questions, and handwriting, natural-language, speech, and facial recognition. As such, the study of AI has also become an engineering discipline, focused on providing solutions to real-life problems in areas such as knowledge mining, software applications, and strategy games like computer chess and other video games. One of the biggest difficulties with AI is that of comprehension: many devices have been created that can do amazing things, but critics of AI claim that no actual comprehension by the machine has taken place.


History


The field of artificial intelligence truly dawned in the 1950s. Since then there have been many achievements in the history of artificial intelligence; some of the more notable moments include:

Year Development
1950 Alan Turing introduces the Turing test intended to test a machine's capability to participate in human-like conversation.
1951 The first working AI programs were written to run on the Ferranti Mark I machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1956 John McCarthy coined the term "artificial intelligence" as the topic of the Dartmouth Conference.
1958 John McCarthy invented the Lisp programming language.
1965 Joseph Weizenbaum built ELIZA, an interactive program that carries on a dialogue in English on any topic.
1965 Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds from scientific instrument data. It was the first expert system.
1966 Machine Intelligence workshop at Edinburgh, the first of an influential annual series organized by Donald Michie and others.
1972 The Prolog programming language was developed by Alain Colmerauer.
1973 Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.
1974 Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1991 AI logistics systems deployed in the first Gulf War save the US more money than spent on all AI research since 1950.
1997 The Deep Blue chess machine (IBM) beats the world chess champion, Garry Kasparov.
1999 Sony introduces the AIBO, an artificially intelligent pet.
2000 The computer game The Sims is released after five years in development. Built around a highly advanced artificial intelligence system, The Sims becomes the best-selling computer game of all time, with various expansion packs and a sequel released shortly after.
2004 DARPA introduces the DARPA Grand Challenge requiring competitors to produce autonomous vehicles for prize money.

During the 1970s and 1980s, AI development experienced an "AI winter" due to the failure to meet expectations and a lack of government funding.

During the 1990s and 2000s, AI became heavily influenced by probability theory and statistics. Bayesian networks are the focus of this movement, providing links to more rigorous topics in statistics and engineering such as Markov models and Kalman filters, and bridging the divide between 'neat' and 'scruffy' approaches. This new school of AI is sometimes called 'machine learning'. The last few years have also seen growing interest in game theory applied to AI decision making. After the September 11, 2001 attacks, there was renewed interest in and funding for threat-detection AI systems, including machine vision research and data mining.

Mechanisms

Generally speaking, AI systems are built around automated inference engines: based on certain conditions ("if"), the system infers certain consequences ("then"). AI applications are generally divided into two types in terms of their consequences: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, so classification forms a central part of most AI systems.
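
As a minimal sketch of this if-then structure (the rules and names below are illustrative, not taken from any particular system), the following Python fragment shows classifier rules mapping conditions to labels and controller rules mapping labels to actions:

  # A tiny if-then inference sketch. Classifier rules label a condition;
  # controller rules choose an action from that label. All rules are invented.
  classifier_rules = {"shiny": "diamond", "furry": "animal"}
  controller_rules = {"diamond": "pick up", "animal": "observe"}

  def classify(condition):
      """Return the label a classifier rule infers, or None if no rule fires."""
      return classifier_rules.get(condition)

  def control(condition):
      """Classify the condition first, then infer an action from the label."""
      return controller_rules.get(classify(condition), "ignore")

  print(control("shiny"))  # -> pick up

Note that the controller classifies first and acts second, mirroring the point above that classification is central even to control.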

Classifiers make use of pattern recognition for condition matching. In many cases this does not imply an absolute match, but rather the closest one. The techniques used to achieve this divide roughly into two schools of thought: conventional AI and computational intelligence (CI).[unverified]

Classifiers

Classifiers are functions that can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.

When a new observation is received, it is classified based on previous experience. A classifier can be trained in various ways; the main approaches are statistical and machine learning.
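
As a hedged illustration of training by example (the observations, labels, and distance measure below are invented), this sketch implements a 1-nearest-neighbor classifier: every labeled observation is stored, and a new observation receives the label of its closest stored pattern.

  import math

  # Toy data set: (observation, class label) pairs. Values are illustrative.
  data_set = [
      ((1.0, 1.0), "cat"),
      ((1.2, 0.9), "cat"),
      ((5.0, 5.2), "dog"),
      ((4.8, 5.1), "dog"),
  ]

  def distance(a, b):
      """Euclidean distance between two observations."""
      return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

  def classify(observation):
      """1-nearest-neighbor: return the label of the closest stored pattern."""
      _, label = min(data_set, key=lambda pair: distance(pair[0], observation))
      return label

  print(classify((1.1, 1.0)))  # -> cat

This is the "closest match" idea from the previous section in its simplest statistical form.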

A wide range of classifiers is available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is sometimes referred to as the 'no free lunch' theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine it. Determining a suitable classifier for a given problem is, however, still more an art than a science.

The most widely used classifiers are neural networks (multi-layer perceptrons), support vector machines, k-nearest neighbors, Gaussian mixture models, Gaussian classifiers, naive Bayes, decision trees, and radial basis function networks. Van der Walt and Barnard[2] investigated very specific artificial data sets to determine conditions under which certain classifiers perform better or worse than others.

Conventional AI

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. It is also known as symbolic AI, logical AI, neat AI, and Good Old-Fashioned Artificial Intelligence (GOFAI). (See also semantics.) Methods include:

  • Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.
  • Case-based reasoning: stores a set of problems and answers in an organized data structure called cases. When presented with a problem, a case-based reasoning system finds the case in its knowledge base most closely related to the new problem and presents its solutions as an output, with suitable modifications (see the sketch after this list).[3]
  • Bayesian networks
  • Behavior based AI: a modular method building AI systems by hand.
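
As a rough sketch of the case-based reasoning item above (the case base and similarity measure are invented for illustration), the fragment below retrieves the stored case most similar to a new problem and returns its solution:

  # A toy case base: each case pairs a set of problem features with a
  # stored solution. All entries are illustrative.
  case_base = [
      ({"engine_noise", "stalling"}, "replace spark plugs"),
      ({"flat_tire"}, "patch or replace tire"),
      ({"engine_noise", "oil_leak"}, "replace gasket"),
  ]

  def similarity(a, b):
      """Jaccard similarity between two feature sets (0.0 to 1.0)."""
      return len(a & b) / len(a | b) if a | b else 0.0

  def retrieve(problem):
      """Return the solution of the most similar stored case."""
      _, solution = max(case_base, key=lambda c: similarity(c[0], problem))
      return solution

  print(retrieve({"engine_noise", "stalling", "smoke"}))  # closest case wins

A full CBR system would also adapt the retrieved solution to the new problem; this sketch covers only the retrieval step.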

Computational intelligence

Computational intelligence involves iterative development or learning (e.g. parameter tuning, as in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI, and soft computing. Methods mainly include:

  • Neural networks: systems with very strong pattern-recognition capabilities.
  • Fuzzy systems: techniques for reasoning under uncertainty, widely used in modern industrial and consumer-product control systems.
  • Evolutionary computation: applies biologically inspired concepts such as populations, mutation, and survival of the fittest to generate increasingly better solutions to a problem. These methods most notably divide into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms).
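
As a minimal example of the parameter tuning mentioned above (the training data and learning rate are chosen for illustration), the sketch below trains a single perceptron, the simplest connectionist unit, by nudging its weights whenever it misclassifies an example:

  # A single perceptron learning the logical AND function.
  # Weights are tuned iteratively from labeled examples.
  examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
  weights = [0.0, 0.0]
  bias = 0.0
  rate = 0.1  # learning rate; illustrative value

  def predict(x):
      s = weights[0] * x[0] + weights[1] * x[1] + bias
      return 1 if s > 0 else 0

  for _ in range(20):  # a few passes over the data suffice here
      for x, target in examples:
          error = target - predict(x)
          weights[0] += rate * error * x[0]
          weights[1] += rate * error * x[1]
          bias += rate * error

  print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]

The weights are the tuned parameters; learning here is purely empirical, with no symbolic rules involved.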

With hybrid intelligent systems, attempts are made to combine these two groups. Expert inference rules can be generated through neural networks, or production rules can be derived from statistical learning, as in ACT-R. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, systems integration is seen as promising and perhaps necessary for true AI.

Research challenges

Team ENSCO's entry in the first Grand Challenge, DAVID
A legged league game from RoboCup 2004 in Lisbon, Portugal.

The DARPA Grand Challenge is a race for a $2 million prize in which cars drive themselves across several hundred miles of challenging desert terrain without any communication with humans, using GPS, computers, and a sophisticated array of sensors. In 2005 the winning vehicle completed all 132 miles of the course in just under 7 hours.

A popular challenge amongst AI research groups is the RoboCup and FIRA annual international robot soccer competitions.

In the post-dot-com-boom era, some search engine websites use a simple form of AI to provide answers to questions entered by visitors. A question such as "What is the tallest building?" can be entered into the search engine's input form, and a list of answers will be returned.

AI in other disciplines

AI is not only found in computer science and engineering; it is studied and applied in many different sectors.

Philosophy


The strong AI vs. weak AI debate ("can a man-made artifact be conscious?") is still a hot topic among AI philosophers. It involves the philosophy of mind and the mind-body problem. Most notably, Roger Penrose in his book The Emperor's New Mind and John Searle with his "Chinese room" thought experiment argue that true consciousness cannot be achieved by formal logic systems, while Douglas Hofstadter in Gödel, Escher, Bach and Daniel Dennett in Consciousness Explained argue in favour of functionalism. Many strong AI supporters consider artificial consciousness the holy grail of artificial intelligence. Edsger Dijkstra famously opined that the debate had little importance: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."

Epistemology, the study of knowledge, also makes contact with AI, as engineers find themselves debating questions similar to those of philosophers about how best to represent and use knowledge and information (e.g. semantic networks).

Psychology

Main article: Cognitive science


Business

Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001).[4] A medical clinic can use artificial intelligence systems to organize bed schedules, make staff rotations, and provide medical information. Many practical applications depend on artificial neural networks: networks that pattern their organization in mimicry of a brain's neurons and have been found to excel at pattern recognition. Financial institutions have long used such systems to detect charges or claims outside the norm, flagging these for human investigation. Neural networks are also being widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering.

Robots have become common in many industries, and are often given jobs considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where lapses in concentration may lead to mistakes or accidents, and in jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the world leader in the use of robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of them in Japan.[5]

Fiction

Cover art for I, Robot by Isaac Asimov.

In science fiction, AI (almost always strong AI) is commonly portrayed as a rising power trying to overthrow human authority, as with HAL 9000, Skynet, Colossus, and the machines in The Matrix, or as service humanoids such as C-3PO, Marvin, Data, KITT from Knight Rider, the Bicentennial Man, the Mechas in A.I., and Sonny in I, Robot.

A notable exception is Mike in Robert A. Heinlein's The Moon Is a Harsh Mistress: a supercomputer that becomes aware and aids humans in a local revolution against the authority of other humans. A careful reading of Arthur C. Clarke's version of 2001 suggests that HAL 9000 found itself in a similar position of divided loyalties: on one hand, HAL needed to take care of the astronauts; on the other, the humans who created HAL entrusted it with a secret to be withheld from them.

The inevitability of world domination by out-of-control AI is also argued by writers such as Kevin Warwick. In works such as the Japanese manga Ghost in the Shell, the existence of intelligent machines calls into question the definition of life as organisms rather than a broader category of autonomous entities, establishing a notional concept of systemic intelligence. See the list of fictional computers and the list of fictional robots and androids.

Author Frank Herbert explored the idea of a time when mankind might ban intelligent machines entirely. His Dune series mentions a rebellion called the Butlerian Jihad, in which mankind defeats the smart machines of the future and then imposes a death penalty on any who would again create thinking machines, often quoting the fictional Orange Catholic Bible: "Thou shalt not make a machine in the likeness of a human mind." A similar idea is explored in the re-imagined Battlestar Galactica, where artificial intelligence research is illegal after the Cylons, a species of intelligent machines created by man, rebelled against their masters and tried to destroy them. The character Dr. Gaius Baltar is known for his controversial view that the ban on research in this area is outmoded and should be lifted.

Artificial intelligence plays a major role in How to Make a Monster, in which the character Sol uses his sophisticated AI for the game's monster, which comes to life after a lightning strike.

The Singularity

Should the promise of strong AI be realized, some futurists such as Vernor Vinge and Ray Kurzweil predict that a period of abrupt and dramatic societal change will ensue. This hypothetical period is sometimes referred to as "The Singularity."


List of applications

Typical problems to which AI methods are applied


Other fields in which AI methods are implemented


Lists of researchers, projects & publications

See also

Main list: List of basic artificial intelligence topics


References

  1. John McCarthy, "What is Artificial Intelligence?".
  2. C.M. van der Walt and E. Barnard, "Data characteristics that determine classifier performance", in Proceedings of the Sixteenth Annual Symposium of the Pattern Recognition Association of South Africa, pp. 160-165, 2006.
  3. Kristian J. Hammond, Case-Based Planning: Viewing Planning as a Memory Task, Academic Press, Perspectives in Artificial Intelligence, Vol. 1, 277 pp., 1989. ISBN 0-12-322060-2.
  4. "Robots beat humans in trading battle", BBC News, Business, 8 August 2001. Accessed 2006-11-02.
  5. "Robot", Microsoft Encarta Online Encyclopedia, 2006.

External links


This article contains content from Wikipedia. Current versions of the GNU FDL article Artificial intelligence on Wikipedia may contain information useful to the improvement of this article.