How we learn about things we don’t already understand

Landy, D., & Goldstone, R. L. (2005). How we learn about things we don’t already understand. Journal of Experimental and Theoretical Artificial Intelligence, 17, 343-369.

The computation-as-cognition metaphor requires that all cognitive objects be constructed from a fixed set of basic primitives; prominent models of cognition and perception try to provide that fixed set. Despite this effort, however, no extant computational models can actually generate complex concepts and processes from simple and generic basic sets, and there are good reasons to wonder whether such models will be forthcoming. We suggest that one can have the benefits of computationalism without a commitment to fixed feature sets, by postulating processes that slowly develop special-purpose feature languages, from which knowledge is constructed. This provides an alternative to the fixed-model conception without radical anti-representationalism. Substantial evidence suggests that such feature adaptation actually occurs in the perceptual learning that accompanies category learning. Given the existence of robust methods for novel feature creation, the assumption that a fixed basis set of primitives is psychologically necessary is at best premature. Methods of primitive construction include (a) perceptual sensitization to physical stimuli, (b) unitization and differentiation of existing (non-psychological) stimulus elements into novel psychological primitives, guided by the current set of features, and (c) the intelligent selection of novel inputs, which in turn guides the automatic construction of new primitive concepts. Modeling the grounding of concepts as sensitivity to physical properties reframes the question of concept construction from the generation of an appropriate composition of sensations to the tuning of detectors to appropriate circumstances.
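The closing idea, that concepts are grounded by tuning detectors toward the circumstances that should trigger them rather than by composing fixed sensations, can be sketched as a toy model. This is an illustration only, not the authors' model: the delta-rule update, the function names, and all parameter values are assumptions introduced here.

```python
# Toy sketch (illustrative assumption, not the paper's model): a feature
# "detector" that is not assembled from fixed primitives but is tuned
# toward the physical stimuli it should respond to.

def tune_detector(weights, stimuli, rate=0.1):
    """Nudge the detector's weights toward each stimulus it should fire on
    (a simple delta-rule update)."""
    for stimulus in stimuli:
        weights = [w + rate * (s - w) for w, s in zip(weights, stimulus)]
    return weights

def response(weights, stimulus):
    """Detector response: inner product of tuned weights with a stimulus."""
    return sum(w * s for w, s in zip(weights, stimulus))

# A detector that starts untuned (all zeros) and is repeatedly exposed to
# category members clustered near the point (1.0, 0.0).
w = [0.0, 0.0]
category_members = [[1.0, 0.0], [0.9, 0.1], [1.1, -0.1]] * 20
w = tune_detector(w, category_members)

# After tuning, the detector responds more strongly to a category member
# than to an unrelated stimulus.
print(response(w, [1.0, 0.0]) > response(w, [0.0, 1.0]))  # → True
```

On this picture, a "new primitive" is simply a detector whose weights have drifted toward a recurring region of stimulus space, rather than a new combination drawn from a fixed basis set.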
