
Conditionals and knowledge-base update

Published online by Cambridge University Press:  21 September 2009

Peter Gärdenfors
Affiliation:
Lunds Universitet, Sweden

Summary

Knowledge update has been a matter of concern to two quite separate traditions: one in philosophical logic, and another in artificial intelligence. In this paper we draw on both traditions to develop a theory of update, based on conditional logic, for a kind of knowledge base that has proven to be of interest in artificial intelligence. After motivating and formulating the logic on which our theory is based, we will prove some basic results and show how our logic can be used to describe update in an environment in which knowledge bases can be treated as truth-value assignments in four-valued logic. In keeping with Nuel Belnap's terminology in Belnap (1977a) and Belnap (1977b), we will refer to such truth-value assignments as set-ups or as four-valued set-ups.

Paraconsistency, primeness, and atomistic update

For the moment we will not say exactly what a four-valued set-up is. Instead we will describe informally some conditions under which it would be natural to structure one's knowledge base as a four-valued set-up. One of these conditions has to do with the treatment of inconsistent input; a second has to do with the representation of disjunctive information; the third concerns what kinds of statements can be the content of an update.

Inconsistent input

A logical calculus is paraconsistent if it cannot be used to derive arbitrary conclusions from inconsistent premises. Belnap argues in general terms that paraconsistent reasoning is appropriate in any context where an automated reasoner must operate without any guarantee that its input is consistent, and where nondegenerate performance is desirable even when inconsistency is present. Knowledge bases used in AI applications are cases of this sort.
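To make the idea concrete, Belnap's four values can be represented as the subsets of {T, F} that an atom has been "told": nothing, told true, told false, or told both. The sketch below is only illustrative, and the names (`SetUp`, `tell`, `value`) are our own, not the chapter's notation; but it shows the nondegenerate behavior at issue, since inconsistent input about one atom does not disturb what the set-up says about any other.

```python
# A minimal sketch of a Belnap-style four-valued set-up.
# Each atom is mapped to a subset of {"T", "F"}; the four values are:
#   {}         -> "None"  (told nothing)
#   {"T"}      -> "True"  (told true)
#   {"F"}      -> "False" (told false)
#   {"T", "F"} -> "Both"  (told both, i.e. inconsistent input)

class SetUp:
    def __init__(self):
        self.assignment = {}  # atom -> set of values the set-up has been told

    def tell(self, atom, told_value):
        """Record an atomistic update: the set-up is told `told_value` for `atom`."""
        self.assignment.setdefault(atom, set()).add(told_value)

    def value(self, atom):
        """Return the four-valued truth value currently assigned to `atom`."""
        told = frozenset(self.assignment.get(atom, set()))
        return {frozenset(): "None",
                frozenset({"T"}): "True",
                frozenset({"F"}): "False",
                frozenset({"T", "F"}): "Both"}[told]

s = SetUp()
s.tell("p", "T")
s.tell("p", "F")   # contradictory input about p
s.tell("q", "T")
print(s.value("p"))  # "Both" -- the contradiction is recorded, not explosive
print(s.value("q"))  # "True" -- q is unaffected by the inconsistency about p
print(s.value("r"))  # "None" -- nothing has been told about r
```

Because the contradiction is localized to the value of `p` rather than licensing arbitrary conclusions, reasoning over such set-ups is paraconsistent in the sense just described.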

Type: Chapter
Published in: Belief Revision, pp. 247–275
Publisher: Cambridge University Press
Print publication year: 1992
