Symbolic artificial intelligence is the collective name for all methods in artificial intelligence research that are based on high-level "symbolic" (human-readable) representations of problems, logic, and search. Symbolic AI formed the basis of the dominant paradigm of AI research from the mid-1950s to the late 1980s.
In 1985, John Haugeland gave symbolic AI the name GOFAI ("Good Old-Fashioned Artificial Intelligence") in his book Artificial Intelligence: The Very Idea, devoted to philosophical reflection on the implications of artificial intelligence research. [1] In robotics, the analogous term GOFAIR ("Good Old-Fashioned Artificial Intelligence and Robotics") is used.
The most successful form of symbolic AI is the expert system, which uses a network of production rules. Production rules connect symbols in relationships similar to if-then statements. By processing these rules, the expert system draws logical inferences and determines what additional information it needs, that is, which questions to ask, all in terms of human-readable symbols.
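The rule-processing described above can be sketched as a minimal forward-chaining engine, the core inference loop of a classic production-rule expert system. The rules and facts here (animal classification) are illustrative assumptions, not taken from any real system:

```python
# Minimal sketch of forward chaining over production rules.
# Each rule is (premises, conclusion): "IF all premises hold THEN conclusion holds".
# The rule base below is a hypothetical toy example.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all satisfied,
    adding its conclusion as a new fact, until no rule can add
    anything new (a fixed point)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"has_fur", "gives_milk", "eats_meat", "has_stripes"}, rules)
print(sorted(derived))
# chains mammal -> carnivore -> tiger from the initial observations
```

Real expert-system shells also run the rules backward from a goal (backward chaining), which is how a system determines which missing facts to ask the user about.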
History
The symbolic approach to creating artificial intelligence rests on the assumption that many aspects of intelligence can be understood and reproduced through the manipulation of symbols. This idea underlies the physical symbol system hypothesis (also known as the Newell-Simon hypothesis), formulated by Allen Newell and Herbert Simon in 1976. In general terms, the hypothesis holds that any meaningful intelligent action, whether performed by a person or a machine, is realized by some system of symbols. The assumption grew out of the success of the General Problem Solver, a program created by Newell and Simon to simulate human reasoning.
At the time, many AI researchers placed excessively high hopes in this approach. It was believed that by applying formal rules of logic, generative syntax, and a logical language, one could create intelligence comparable to a human's. In practice, however, systems built on these principles, although they worked, coped poorly with complex adaptive tasks. As a result, such concepts came under severe criticism in the 1980s and 90s, and the interest of many researchers shifted to other methods (evolutionary algorithms, artificial neural networks, etc.). [2]
See also
- Knowledge engineering
- Automated theorem proving
Notes
- ↑ Haugeland, J. (1985). Artificial Intelligence: The Very Idea . Cambridge, Mass: MIT Press. ISBN 0-262-08153-9
- ↑ Tina Kataeva. "Game of the Mind" (based on a conversation with Konstantin Anokhin) // In the World of Science, No. 6, 2006.