Multi Store Model of Memory
Understanding memory is one of the most important parts of understanding psychology as a whole.
The multi store model was first put forward by Atkinson and Shiffrin in 1968 and helps us consider memory as a system of information flowing through a series of ‘stores’.
This article will explain why the multi store model has been so important, its strengths and weaknesses, and the research that has been conducted since it was introduced.
What is Memory?
As memory is such a fundamental concept, it is difficult yet important to define. The general consensus is that “memory is the means by which we draw on our past experiences in order to use this information in the present” (Sternberg, 1999). Or, put more simply, it is “the process of maintaining information over time” (Matlin, 2005).
Without memory, it isn’t just the past that we would lose. None of us would be able to perceive the future (let alone plan for it) and it would be impossible even to function normally in the present. When we refer to ‘memory’, this can be either the physical structure or the processes involved with the storage and subsequent retrieval of information in the brain.
In 1890, William James first distinguished primary from secondary memory in his influential book ‘The Principles of Psychology’. He postulated that primary memory consisted of thoughts held for a short time in consciousness and that secondary memory consisted of a permanent, unconscious store.
His work paved the way for the multi store model, a more complicated three-part explanation of how memory processes work.
What is the Multi Store Model of Memory?
The multi store model was put forward by Richard Atkinson and Richard Shiffrin in 1968 and is therefore also referred to as the Atkinson–Shiffrin model. The model proposes that information enters the brain via the senses as immediate, fleeting impressions held in sensory registers.
From the sensory registers, attended information is encoded into the short-term store, where it can reside for up to 30 seconds without significant rehearsal (Posner, 1966).
In terms of capacity, the short-term store can only hold a limited amount of information before it decays or is displaced. George Miller suggested a capacity of 7±2 items (or ‘chunks’) of information, whereas modern estimates are as low as 4 chunks (Cowan, 2001).
With sufficient rehearsal, however, information can be encoded into the long-term store. Once encoded in the long-term store, information can then be transferred back to the short-term store for manipulation or processing.
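The flow described above — sensory input, a capacity- and duration-limited short-term store, rehearsal into a long-term store, and transfer back for processing — can be sketched as a toy simulation. This is purely illustrative: the class, method names, and the exact displacement rule are assumptions made for the sketch, not part of Atkinson and Shiffrin’s formal model.

```python
from collections import deque

STM_CAPACITY = 7    # Miller's 7±2 items
STM_DURATION = 30.0  # seconds before unrehearsed items are lost

class MultiStoreMemory:
    def __init__(self):
        # deque with maxlen: a new item displaces the oldest when full
        self.short_term = deque(maxlen=STM_CAPACITY)
        self.long_term = set()

    def perceive(self, item, now):
        """Sensory input is encoded into the short-term store (timestamped)."""
        self.short_term.append((item, now))

    def rehearse(self, item, now):
        """Rehearsal refreshes an item's timestamp and encodes it
        into the long-term store."""
        self.short_term = deque(
            ((i, now if i == item else t) for i, t in self.short_term),
            maxlen=STM_CAPACITY,
        )
        self.long_term.add(item)

    def decay(self, now):
        """Unrehearsed items older than ~30 s drop out of the short-term store."""
        self.short_term = deque(
            ((i, t) for i, t in self.short_term if now - t < STM_DURATION),
            maxlen=STM_CAPACITY,
        )

    def recall(self, item, now):
        """Check the short-term store first, then the long-term store;
        a long-term hit is transferred back for further processing."""
        self.decay(now)
        if any(i == item for i, _ in self.short_term):
            return True
        if item in self.long_term:
            self.short_term.append((item, now))  # transfer back to STM
            return True
        return False
```

In this sketch an unrehearsed phone number perceived at `now=0` is still recallable at `now=10` but gone by `now=60`, whereas a rehearsed one survives indefinitely via the long-term store — mirroring the model’s central claim about rehearsal.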
Whilst the multi store model has been influential in the field of psychology, it has been heavily criticised since its publication, as it does not accurately reflect the complexity of the processes involved.
What about other Memory Models?
Driven to describe a more complete picture of how memory works, Alan Baddeley and Graham Hitch proposed a model in 1974 that has come to be known as the working memory model.
The working memory model was based on both a combination of clinical evidence (e.g. Brenda Milner’s case study on Henry Molaison) and experimental evidence (e.g. research on differences between acoustically and semantically similar stimulus material) throughout the late 1960s and early 1970s, which made it an appealing theory.
Baddeley and Hitch used a dual task interference activity to show that there are at least two separate sets of functions within working memory. They examined how participants processed sounds and sequences, and then how visual and kinaesthetic information was processed. Because processing was impaired when the two tasks were performed together, Baddeley and Hitch argued for the existence of the central executive, a flexible cognitive system.
Collating their evidence, the final model can be described as having three separate components that feed into the central executive, the overarching feature that monitors the other components and allocates attention:
i) the phonological loop, which consists of a short-term phonological store with auditory memory traces and an articulatory rehearsal component that can recall those memory traces,
ii) the visuospatial sketchpad, which handles information from the senses or from long-term memory, and
iii) the episodic buffer, which binds the information from the phonological loop and the visuospatial sketchpad.
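The three components and the central executive’s role in allocating attention can be sketched as a toy class. All names and routing rules here are illustrative assumptions for the sketch, not established terminology; the point is the design choice it mirrors — verbal and visuospatial material go to separate stores, so two tasks in different modalities do not compete for the same resource, consistent with the dual-task findings.

```python
class WorkingMemory:
    def __init__(self):
        self.phonological_loop = []       # auditory/verbal traces
        self.visuospatial_sketchpad = []  # visual and spatial traces
        self.episodic_buffer = []         # bound, multi-modal episodes

    def attend(self, stimulus, modality):
        """The central executive allocates each stimulus to the
        appropriate component based on its modality."""
        if modality in ("auditory", "verbal"):
            self.phonological_loop.append(stimulus)
        elif modality in ("visual", "spatial"):
            self.visuospatial_sketchpad.append(stimulus)
        else:
            raise ValueError(f"unknown modality: {modality}")

    def bind(self):
        """The episodic buffer binds verbal and visuospatial traces
        into a single integrated episode."""
        episode = (tuple(self.phonological_loop),
                   tuple(self.visuospatial_sketchpad))
        self.episodic_buffer.append(episode)
        return episode
```

For example, attending to a spoken word and a red square routes them into different stores, and `bind()` then combines both into one episode — a crude stand-in for the episodic buffer’s integrating role.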
In 2000, Bunge et al. performed further dual task interference activities to provide evidence for the central executive. They found that the same parts of participants’ brains were active during reading and recalling tasks, but were more active when participants had to perform two attentional tasks simultaneously than when the tasks were performed sequentially.
However, this research is limited in ecological validity due to its specific clinical setting.
Additionally, a research subject known by the initials KF suffered brain damage in a motorcycle accident that impaired his short-term memory. His impairment was mainly verbal, whereas his visual memory was almost unaffected. This supports the view that there are separate short-term memory components for visual and verbal information (the visuospatial sketchpad vs. the phonological loop).
One of the more common criticisms of the working memory model is the role of the executive function. There is little direct evidence for how it works, and Baddeley himself acknowledges that more work needs to be done in this area.
In a 1980 publication, Lieberman indicates that blind people are capable of high levels of spatial awareness despite never having had any experience of visual information. This is, in effect, a criticism of the visuospatial sketchpad, which implies that spatial information is initially visual. Lieberman suggests separating the visual and spatial aspects of this component.
Whilst a vast improvement on the multi store model of the late 1960s, and generally viewed as the best understanding of memory we currently have, the working memory model is still an incomplete picture of the processes that govern memory.
Many different factors can affect our memory, and as new evidence is collected, our understanding will improve — a new model could be just around the corner.