Nonmonotonic inferences and neural networks
Abstract
There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to bridge this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and nonmonotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as nonmonotonic inferences, and (b) that there is a strict correspondence between the coding of knowledge in Hopfield networks and the knowledge representation in weight-annotated Poole systems. These results show the usefulness of nonmonotonic logic as a descriptive and analytic tool for analyzing emergent properties of connectionist networks. Assuming an exponential development of the weight function, the present account relates to optimality theory, a general framework that aims to integrate insights from symbolism and connectionism. The paper concludes with some speculations about extending the present ideas.
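
The reading of network activity as inference in (a) can be made concrete with a standard binary Hopfield network. The sketch below is not taken from the paper; it is a minimal illustration, assuming the usual Hebbian storage rule and asynchronous updates, and the names train_hopfield and settle are invented for the example. The attractor the network settles into from a partial cue plays the role of a defeasible conclusion: extending or revising the cue can drive the network into a different attractor, which mirrors the nonmonotonicity of the inference relation.

import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns, zero diagonal."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0.0)
    return w / patterns.shape[0]

def energy(w, s):
    """Hopfield energy; asynchronous updates never increase it."""
    return -0.5 * s @ w @ s

def settle(w, cue, max_sweeps=100, rng=None):
    """Update units asynchronously until a stable state (attractor) is reached."""
    rng = rng or np.random.default_rng(0)
    s = cue.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1 if w[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Toy usage: store two patterns and "infer" the completion of a corrupted cue.
patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [-1, -1, -1, 1, 1, 1]])
w = train_hopfield(patterns)
cue = np.array([1, 1, -1, -1, -1, -1])   # partial/noisy evidence
print(settle(w, cue))                     # settles into the nearest stored pattern

The energy function defined above is the quantity minimized during settling; in the paper's terms, the weights are the locus of the correspondence with weight-annotated default systems, though that mapping itself is not reproduced in this sketch.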
Publication details
Published in:
(2004) Knowledge, Rationality & Action. Synthese 142 (2).
Pages: 143-174
DOI: 10.1007/s11229-004-1929-y
Full citation:
Blutner, Reinhard (2004) "Nonmonotonic inferences and neural networks". Synthese 142 (2), 143–174.