Definitions may be organised into four categories.
To pass this test requires:
Example: GPS (General Problem Solver) was an early computer program that attempted to model human thinking. The developers were not so much interested in whether or not GPS solved problems correctly. They were more interested in showing that it solved problems like people, going through the same steps and taking around the same amount of time to perform those steps.
Example: All computers use energy. Using energy always generates heat. Therefore, all computers generate heat.
This style of reasoning initiated the field of logic. Formal logic was developed in the late nineteenth century, and it was the first step toward enabling computer programs to reason logically.
By 1965, programs existed that could, given enough time and memory, take a description of the problem in logical notation and find the solution, if one existed. The logicist tradition in AI hopes to build on such programs to create intelligence.
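The kind of logical reasoning such programs performed can be sketched with a tiny forward-chaining loop over the syllogism from the example above. The encoding (fact and rule names) is invented for illustration and does not correspond to any particular 1960s system:

```python
# Minimal forward chaining over propositional rules (a sketch, not a
# real theorem prover). The rules encode: computers use energy, and
# using energy generates heat.

rules = [
    ({"is_computer"}, "uses_energy"),     # all computers use energy
    ({"uses_energy"}, "generates_heat"),  # using energy generates heat
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises all hold until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"is_computer"}, rules))
# the derived facts include "generates_heat"
```

Given the single fact `is_computer`, the loop derives `uses_energy` and then `generates_heat`, mirroring the two steps of the syllogism.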
There are two main obstacles to this approach. First, it is difficult to make informal knowledge precise enough to use the logicist approach, particularly when there is uncertainty in the knowledge. Second, there is a big difference between being able to solve a problem in principle and doing so in practice.
In the logical approach to AI, the emphasis is on correct inferences. This is often part of being a rational agent because one way to act rationally is to reason logically and then act on one's conclusions. But this is not all of rationality, because agents often find themselves in situations where there is no provably correct thing to do, yet they must do something.
There are also ways to act rationally that do not seem to involve inference, e.g., reflex actions.
The study of AI as rational agent design has two advantages:
Philosophers staked out most of the important ideas of AI, but to move to a formal science requires a level of mathematical formalism in three main areas: computation, logic and probability.
Mathematicians have proved that there exists an algorithm to prove any true statement in first-order logic. However, if one adds the principle of induction required to capture the semantics of the natural numbers, then this is no longer the case. Specifically, Gödel's incompleteness theorem showed that in any language expressive enough to describe the properties of the natural numbers, there are true statements that are undecidable: their truth cannot be established by any algorithm.
Analogously, Turing showed that there are some functions that no Turing machine can compute.
Although undecidability and noncomputability are important in the understanding of computation, the notion of intractability has had much greater impact on computer science and AI. A class of problems is called intractable if the time required to solve instances of the class grows at least exponentially with the size of the instances.
The key distinction is between exponential and polynomial running time; in between lies nondeterministic polynomial time.
Even moderately sized instances of intractable problem classes cannot be solved in reasonable amounts of time. Therefore, one should strive to divide the overall problem of generating intelligent behaviour into tractable subproblems rather than intractable ones.
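The gap between polynomial and exponential growth can be made concrete with a few numbers (the cube and the power of two stand in for a polynomial-time and an exponential-time algorithm respectively):

```python
# Compare the steps needed by a polynomial (n**3) algorithm and an
# exponential (2**n) one as the instance size n grows.

for n in (10, 30, 60):
    print(n, n**3, 2**n)

# At n = 60, 2**60 is about 1.15e18 steps: at a billion steps per
# second that is over 36 years, while n**3 = 216000 steps is instant.
```

This is why "moderately sized" instances of intractable classes are already out of reach: the exponential term dominates long before the instances look large.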
Another important concept from mathematics is problem reduction. A reduction is a general transformation from one class of problems to another such that the solutions to the first class can be found by reducing them to problems in the second class and then solving those.
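A small, everyday instance of reduction (the problems and names here are chosen for illustration, not taken from the text): the "does this list contain duplicates?" problem can be reduced to sorting, after which a linear scan of adjacent elements reads off the answer.

```python
# Reduction in miniature: transform the duplicate-detection problem
# into a sorting problem, solve the sorting problem, then recover the
# answer to the original problem with a linear scan.

def has_duplicates(items):
    s = sorted(items)  # the reduction: solve the sorting problem first
    return any(a == b for a, b in zip(s, s[1:]))

print(has_duplicates([3, 1, 4, 1, 5]))  # True
print(has_duplicates([3, 1, 4, 5]))     # False
```

Because the transformation and the final scan are cheap, the cost of solving duplicate detection this way is essentially the cost of sorting; this "cheap transformation" property is what makes reductions useful for comparing problem difficulty.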
One notion for recognizing intractable problems is that of NP-completeness. NP-complete problems can be solved in nondeterministic polynomial time, and any problem class to which an NP-complete problem can be reduced is likely to be intractable.
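The defining asymmetry of NP can be shown with satisfiability (SAT), the original NP-complete problem: checking a proposed solution is fast, but the obvious search examines all 2**n assignments. The clause encoding below is a hypothetical choice for this sketch:

```python
# NP in miniature: verifying a SAT assignment is polynomial time, but
# brute-force search over assignments is exponential. A clause is a
# list of (variable, sign) literals; sign True means the variable
# itself, sign False its negation.

from itertools import product

def satisfies(assignment, clauses):
    """Polynomial-time check: every clause contains a satisfied literal."""
    return all(any(assignment[v] == sign for v, sign in clause)
               for clause in clauses)

def brute_force_sat(variables, clauses):
    """Exponential search: try all 2**n truth assignments."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if satisfies(assignment, clauses):
            return assignment
    return None

# (x or y) and (not x or y) -- satisfiable, e.g. with y = True
clauses = [[("x", True), ("y", True)], [("x", False), ("y", True)]]
print(brute_force_sat(["x", "y"], clauses))
```

The verifier runs in time proportional to the formula size, while the search doubles in cost with every extra variable; NP-completeness says that, as far as anyone knows, no search strategy avoids that blow-up in the worst case.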
Probability is the principal mathematical tool that we have to represent and reason about uncertainty. Bayes proposed a rule for updating subjective probabilities in the light of new evidence. This rule forms the basis of the modern approach to uncertain reasoning in AI.
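Bayes' rule in action can be shown with a single numeric update, P(H | e) = P(e | H) P(H) / P(e). The disease-test numbers below are invented for illustration:

```python
# One application of Bayes' rule: update the probability of a
# hypothesis H given evidence e. P(e) is obtained by summing over
# H and not-H (the law of total probability).

def bayes_update(prior, likelihood, false_positive_rate):
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Illustrative numbers: P(disease) = 0.01, P(positive | disease) = 0.9,
# P(positive | no disease) = 0.05
posterior = bayes_update(0.01, 0.9, 0.05)
print(round(posterior, 3))  # roughly 0.154
```

Note how the posterior stays well below 0.5 despite the positive test: with a rare hypothesis, most positives come from the large pool of false positives, which is exactly the kind of correction Bayesian updating performs.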
Decision theory combines probability theory with utility theory (which provides a framework for specifying the preferences of an agent) to give a general theory that can distinguish good actions from bad ones.
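The combination of probability and utility can be sketched as an expected-utility calculation: the rational action is the one that maximises probability-weighted utility. The actions, probabilities, and utility values below are invented for illustration:

```python
# Decision theory in miniature: score each action by its expected
# utility and pick the maximum.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Two actions, each with a rain (p = 0.3) and no-rain (p = 0.7) outcome.
actions = {
    "take_umbrella":  [(0.3, 80), (0.7, 60)],
    "leave_umbrella": [(0.3, 0), (0.7, 100)],
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, {a: expected_utility(o) for a, o in actions.items()})
```

With these particular numbers the expected utilities are 66 and 70, so leaving the umbrella is (narrowly) the rational choice; change the rain probability and the decision flips, which is the sense in which the theory "distinguishes good actions from bad ones."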
The principal characteristic of cognitive psychology is the view that the brain possesses and processes information. The claim is that beliefs, goals, and reasoning steps can be useful components of a theory of human behaviour. The knowledge-based agent has three key steps:
Having a theory of how humans successfully process natural language is an AI-complete problem: if we could solve this problem, then we would have created a model of intelligence.
Much of the early work in knowledge representation was done in support of programs that attempted natural language understanding.
The microworlds approach to AI was pioneered in the 1960's and tried to solve problems in limited domains.
The ANALOGY program could solve geometric analogy problems of the kind found in IQ tests.
The most famous microworld is the Blocks World (1970's). A command such as "Pick up the red block" could be used to manipulate the world.
The microworld approach turned out to have problems because the advances made in writing programs for microworlds turned out not to be generalisable.
Early work in the Logicist camp also had problems because of the use of weak methods (general-purpose methods that use only weak information about a domain). However, knowledge-intensive approaches have been more successful. A key development from the Logicist tradition was knowledge-based systems in the 1980's.
In the late 1980's Neural Networks became fashionable again (they had been popular in the 60's) due to improved learning algorithms and faster processors.
AI Time Line
1943 - McCulloch and Pitts propose modelling neurons using on/off devices.
1950's - Claude Shannon and Alan Turing try to write chess playing programs.
1956 - John McCarthy coins the name "Artificial Intelligence".
1960's - Logic Theorist, GPS, microworlds, neural networks.
1971 - NP-Completeness theory (Cook and Karp) casts doubt on general applicability of AI methods.
1970's - Knowledge based systems and expert systems.
1980's - AI techniques in widespread use, neural networks rediscovered.
1990's - Deep Blue wins against world chess champion. Image and Speech recognition become practical.