Default Reasoning
Overview
Default reasoning is a form of reasoning studied in Artificial Intelligence (AI) and Cognitive Science that draws plausible conclusions in the absence of complete information. It is non-monotonic: adding new knowledge can invalidate previously drawn conclusions. This contrasts with classical (monotonic) logic, where adding premises can only enlarge the set of conclusions.
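A minimal sketch of this non-monotonic behaviour, using the classic birds-fly example (the function and fact names here are illustrative, not a standard API):

```python
def conclusions(facts):
    """Derive conclusions from a set of facts, applying one default rule:
    by default, a bird flies unless it is known to be a penguin."""
    derived = set(facts)
    if "bird" in derived and "penguin" not in derived:
        derived.add("flies")  # default assumption, made absent contrary evidence
    return derived

print(conclusions({"bird"}))             # {'bird', 'flies'}
print(conclusions({"bird", "penguin"}))  # {'bird', 'penguin'}: 'flies' is retracted
```

Adding the fact "penguin" removes "flies" from the conclusions, which could never happen in a classical logic system.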
History and Development
Raymond Reiter introduced reasoning by default in 1978 and gave it a full formalization, default logic, in 1980, as a way to capture the common-sense inferences humans make when deciding with incomplete information. This was a significant departure from the traditional symbolic AI approaches of the time, which assumed complete and consistent knowledge bases.
Principles of Default Reasoning
Default reasoning operates on the principle of adopting the most plausible conclusion in the absence of evidence to the contrary. A well-known special case is the closed-world assumption (CWA), under which anything not currently known to be true is assumed to be false.
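A sketch of the closed-world assumption, assuming a simple fact-set knowledge base (the facts and names below are illustrative):

```python
# Under the CWA, absence from the knowledge base means falsehood,
# not merely "unknown".
KNOWN_FACTS = {
    ("departed", "LH123"),
    ("departed", "BA456"),
}

def holds(fact):
    """Return True only if the fact is recorded; everything else is false."""
    return fact in KNOWN_FACTS

print(holds(("departed", "LH123")))  # True: recorded in the knowledge base
print(holds(("departed", "AF789")))  # False under the CWA, even though
                                     # the knowledge base says nothing about it
```

This is the behaviour of a typical database query: a flight not listed as departed is reported as not departed, rather than as unknown.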
Default Logic
Default logic, proposed by Reiter, is a formalism for default reasoning. In default logic, a default is a rule that can be used unless its application would lead to an inconsistency. A default rule has the form α : β / γ, where α is the prerequisite, β is the justification, and γ is the consequent; informally, if α is believed and it is consistent to assume β, then γ may be concluded. The standard example is Bird(x) : Flies(x) / Flies(x): if x is a bird and it is consistent to assume x flies, conclude that x flies.
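The sketch below applies default rules of this form under a deliberately simplified consistency test (a justification β counts as consistent if its negation is not already believed); Reiter's full extension semantics requires consistency with the final belief set, which this toy loop does not capture. All rule and fact names are illustrative.

```python
# Default rules as (prerequisite alpha, justification beta, consequent gamma).
defaults = [
    ("bird(tweety)", "flies(tweety)", "flies(tweety)"),
]

def extend(facts, rules):
    """Repeatedly apply defaults whose prerequisite is believed and whose
    justification's negation is not believed, until nothing changes."""
    beliefs = set(facts)
    changed = True
    while changed:
        changed = False
        for alpha, beta, gamma in rules:
            consistent = ("not " + beta) not in beliefs  # simplified check
            if alpha in beliefs and consistent and gamma not in beliefs:
                beliefs.add(gamma)
                changed = True
    return beliefs

print(extend({"bird(tweety)"}, defaults))
# {'bird(tweety)', 'flies(tweety)'}: the default fires
print(extend({"bird(tweety)", "not flies(tweety)"}, defaults))
# the justification is inconsistent, so the default is blocked
```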
Applications
Default reasoning has found applications in AI, computer science, cognitive science, and philosophy. In AI it is used to model the reasoning of intelligent agents operating with incomplete knowledge; in cognitive science it informs models of human decision-making.
Criticisms and Limitations
Despite its usefulness, default reasoning has been criticized for lacking a quantitative model of uncertainty of the kind probability theory provides. It also faces difficulties when new information introduces contradictions, since previously drawn default conclusions must then be identified and retracted.