Describe the Difference Between Elaboration and Visual Imagery
2010; 1: 228.
Semantic Elaboration in Auditory and Visual Spatial Memory
1Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Verdun, QC, Canada
Robert J. Zatorre
2Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
Véronique D. Bohbot
1Department of Psychiatry, Douglas Mental Health University Institute, McGill University, Verdun, QC, Canada
Received 2010 Jul 22; Accepted 2010 Dec 1.
The aim of this study was to investigate the hypothesis that semantic information facilitates auditory and visual spatial learning and memory. An auditory spatial task was administered, whereby healthy participants were placed at the center of a semi-circle that contained an array of speakers, where the locations of nameable and non-nameable sounds were learned. In the visual spatial task, locations of pictures of abstract art intermixed with nameable objects were learned by presenting these items in specific locations on a computer screen. Participants took part in both the auditory and visual spatial tasks, which were counterbalanced for order and were learned at the same rate. Results showed that learning and memory for the spatial locations of nameable sounds and pictures was significantly better than for non-nameable stimuli. Interestingly, there was a cross-modal learning effect such that the auditory task facilitated learning of the visual task and vice versa. In conclusion, our results support the hypotheses that the semantic representation of items, as well as the presentation of items in different modalities, facilitates spatial learning and memory.
audition, vision, hippocampus, spatial memory, cognitive map
Spatial memory is based on the formation of a cognitive map, i.e., a mental representation of the spatial relationships among various elements in the environment. It has been shown to be critically dependent on the hippocampus (Scoville and Milner, 1957; O’Keefe, 1978; Maguire et al., 1998; Bohbot et al., 2004). It is allocentric, meaning that the relationship between environmental elements or landmarks is constructed independently of the position of the observer. However, it is less well understood whether a cognitive map can be formed of abstract sensory features as readily as a cognitive map based on semantically meaningful elements. In other words, does prior semantic knowledge of elements facilitate the learning of their spatial relationships? Here, we ask whether semantic elaboration has an impact on spatial memory based on a cognitive map.
Previous research in the area of spatial memory and cognition has revealed several factors that influence location memory, such as emotional valence (Crawford and Cacioppo, 2002) and generation and mental rehearsal of words (Slamecka and Graf, 1978; Greene, 1992; Mulligan, 2001; Marsh, 2006). Additionally, object location memory was previously found to be better in women (Silverman and Eals, 1992; Eals and Silverman, 1994; James and Kimura, 1997; Barnfield, 1999), but Choi and L’Hirondelle (2005) suggested that the female advantage was due to superior verbal memory ability, and that this effect disappeared when objects were abstract or unfamiliar. The findings of the above-mentioned studies are intriguing and reflect the current state of object location memory research. Here, we extend this research by raising specific questions pertaining to semantic elaboration and object location memory.
Semantic elaboration can be defined as the process of rehearsing a stimulus representation in words. It is well known that for many memory tasks, semantic elaboration during learning leads to better recall than does learning without semantic elaboration (Hyde and Jenkins, 1969; Craik and Tulving, 1975; Belmore, 1981; Mennemeier et al., 1992; Brown and Lloyd-Jones, 2006). This effect is often referred to as the levels-of-processing effect (Craik and Lockhart, 1972) and has been shown in several types of verbal recall tasks, usually involving lists of words. It has also been tested for recalling details about nameable pictures (Marks, 1989) and for the recall of faces (Anderson and Reder, 1979; Bruce and Young, 1986; Schooler et al., 1996; Brown and Lloyd-Jones, 2006), but it has not yet been investigated for auditory stimuli. Nor has it been investigated how merely the ability to name a stimulus, which by definition involves more semantic elaboration than perceiving a stimulus without assigning it a name, may aid in storing and retrieving spatial memories. The purpose of this study is to investigate how naming a sound or visual object might lead to better spatial memory, and if it does, whether the effect is pervasive across modalities.
A study by Marks (1989) examined the degree to which elaborative processing of picture names affects retention of the names and content of the pictures. In this experiment, participants always had initial semantic access by naming the picture, because all the pictures of familiar objects used were easily nameable. It was found that further semantic elaboration, by rehearsal of the name of the picture in a sentence, leads to better name recall and name recognition. Semantic elaboration did not, on the other hand, benefit picture-recognition performance, measured as the participants’ ability to remember specific details about the picture. This suggests that while semantic elaboration facilitates memory for picture names, it does not aid memory for perceptual details of the pictures. This study confirms that semantic elaboration does help recall and suggests that labeled pictures are encoded in two separate ways: semantic access to the picture’s name and perceptual details of the picture itself. The semantic aspect is aided by semantic elaboration, while memory for the perceptual details is not. This implies that visual stimuli that can be named may be easier to retrieve because there are two different routes for encoding.
Klatzky et al. (2002) investigated whether multiple locations of stimuli could be learned from spatial language (a verbal description of the locations) as easily as from auditory and visual perception. The stimuli were all names of objects, either spoken or written. Stimuli were presented sequentially through a head-mounted virtual reality display for the visual condition, in which the object labels appeared on virtual cards in a particular direction relative to the participant. In the auditory condition, stimuli were presented from loudspeakers at target azimuths. In a third condition, the locations of these same stimuli were described using spatial language. Recall of directions was tested in all groups by using objects’ names as probes. The experimenters found that sets of five stimulus locations were learned more slowly using spatial language than using either visual or auditory perceptual cues. The authors suggest that this difference arises because the semantic representation of a place must be converted into a spatial representation. This is different from Marks (1989) because in the latter study, semantic elaboration was used in addition to perceptual representation, whereas in Klatzky et al. (2002) the two representations were presented separately. Nevertheless, results of both studies suggest that there are two possible paths to encoding and recall: the actual stimulus paired with the location (perceptual), and the name of that stimulus paired with the description of the stimulus or the location (semantic). Taken together, these studies suggest that using both pathways is superior to using either one of them, and also that the perceptual pathway alone is better than the semantic pathway alone for spatial learning.
While there is evidence that semantic elaboration plays a role in spatial memory, previous studies assessed it using verbal material. In this study, both verbal and non-verbal material was intermixed within one session in order to directly contrast the two learning conditions. In addition, we replicated our study design with two independent modalities. The aim of this study was to investigate whether naming stimuli facilitates the learning of their locations in both the auditory and visual modalities. We hypothesized that, in both audition and vision, stimuli that are semantically meaningful, i.e., stimuli that can be named in words, would have better spatial encoding and recall than non-semantically meaningful stimuli, i.e., stimuli that cannot be named. We further hypothesized that the advantage of nameable stimuli over non-nameable stimuli would remain despite a practice effect over two sessions with different stimuli in both the auditory and visual modalities.
Materials and Methods
Twenty young healthy participants (12 women, 8 men) with no known vision or hearing problems were recruited. Ages ranged from 20 to 35 (mean = 23.9). Participants were tested in either English or French. Each volunteer participated in two auditory spatial memory sessions and two visual spatial memory sessions, counterbalanced for order of presentation within and across modalities as well as for stimulus set (Table 1). Informed consent was obtained from all participants and the experiment was approved by the local ethics committee.
|First task||Second task|
|Session 1 (set 1)||Session 1 (set 1)|
|Session 2 (set 2)||Session 2 (set 2)|
|Session 1 (set 3)||Session 1 (set 3)|
|Session 2 (set 4)||Session 2 (set 4)|
|Session 1 (set 2)||Session 1 (set 2)|
|Session 2 (set 1)||Session 2 (set 1)|
|Session 1 (set 4)||Session 1 (set 4)|
|Session 2 (set 3)||Session 2 (set 3)|
Materials and apparatus
Non-semantically meaningful sounds
Sounds of 1 s duration were used. They had been previously assessed to not be easily nameable in a pilot study (for example chains grinding, baboon call). Three non-semantic stimuli were used for each session, intermixed with three semantic stimuli. Two out of four sets of three stimuli were used for each participant, and the order of presentation was counterbalanced.
Semantically meaningful sounds
Sounds of familiar objects, 1 s in duration, previously demonstrated in a pilot study to be easily nameable (for example bird call, telephone ring), were used. Three semantic stimuli were used for each session, intermixed with three non-semantic stimuli. Two out of four sets of three stimuli were used for each participant, and the order of presentation was counterbalanced.
Non-semantically meaningful pictures
Rectangular abstract colored pictures previously demonstrated not to be readily nameable were used. Three non-semantic stimuli were used for each session, intermixed with three semantic stimuli. Two out of four sets of three non-semantic stimuli were used for each participant, and the order of presentation was counterbalanced. All three non-semantic pictures per set had similar colors.
Semantically meaningful pictures
Black and white pictures of familiar objects (for example apple, acorn) were used. Three semantic stimuli were used per session, intermixed with three non-semantic stimuli. Two out of four sets of three semantic stimuli were used for each participant, and the order of presentation was counterbalanced.
A large auditory array, 2.3 m in diameter, was used to present the stimuli. The array formed a 180° semi-circle with the listener at the center and the speakers evenly spaced (Figure 1). This array consisted of 13 speakers, one of which was placed in the center and the others 15° apart, arranged in one plane at the level of the listener’s head. The speakers were wired such that any sound played by the computer could be directed through a switchboard to a particular speaker. Only six of the 13 possible speaker locations were used per session. The entire array was covered with black cloth to conceal its three supporting legs and the positions of the speakers from the participants. To indicate both perceived and remembered sound locations, participants pointed with a laser. Location was measured in reference to a discreet paper lining the bottom of the array, approximately at the participants’ shoulder level, with tick marks every 5° and without any other inscriptions of any kind.
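The geometry above fixes the candidate speaker positions and the pointing score. As a small, simplified sketch (the function names are ours; the 180° span, 15° spacing, and the ±5° component of the scoring window are from the text — the study's actual window ran from the perceived to the actual location plus 5° on either side):

```python
def speaker_azimuths(span_deg=180, spacing_deg=15):
    """Azimuths of evenly spaced speakers on a semi-circle, in degrees.
    0 is straight ahead; negative values are left, positive are right."""
    half = span_deg // 2
    return list(range(-half, half + 1, spacing_deg))

def within_tolerance(pointed_deg, target_deg, tolerance_deg=5):
    """Simplified scoring: True if a laser response falls within +/- 5 degrees
    of the target (the study additionally allowed the perceived-to-actual span)."""
    return abs(pointed_deg - target_deg) <= tolerance_deg

azimuths = speaker_azimuths()
print(len(azimuths))               # -> 13 (one speaker at 0, the center)
print(within_tolerance(-42, -45))  # -> True
```

A 180° span at 15° spacing yields 12 intervals and hence 13 speakers, matching the description of the array.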
All pictures were presented on a computer monitor using PowerPoint (Microsoft). For the encoding, the screen was effectively divided into 16 separate sections where pictures might appear. For each stimulus set, a PowerPoint presentation was created in which each slide contained one picture. The first six slides showed six pictures in their specific locations (for example, the apple appeared in the bottom right corner of the screen, Figure 2). The experimenter scrolled through the frames for a stimulus presentation of approximately 1 s per picture. Following an instruction slide, which informed participants that the upcoming trial was a recall trial, six slides were presented containing the same pictures but in a new arrangement, all in the center of the screen. In recall trials, participants used the computer mouse to drag the pictures to the places on the screen where they remembered having seen them during encoding. Encoding slides and recall slides alternated 12 times, always with pictures presented in a different arrangement in each PowerPoint presentation. The two types of trials were separated each time by an instruction slide that indicated whether the following trial would be a “Learning Trial” or “Recall Trial” and that repeated the instructions for that particular type of trial.
Participants first took part in a practice session which included semantic and non-semantic stimuli that were not used in the experimental tasks. In the auditory practice task, participants heard the practice sound in four different locations to get accustomed to localizing sounds, turning their head toward the sounds, and pointing to the locations with the laser. In the visual practice task, participants saw four practice pictures in the center of the screen to get accustomed to localizing the pictures on the monitor and dragging and dropping icons with the mouse. Following the practice session, participants were administered two experimental sessions in one modality followed by two experimental sessions in the other modality. Each session consisted of 12 trials in which participants had to learn the location of the stimuli. Each trial consisted of an encoding and a recall segment. Each session was given with a different set of stimuli. In the auditory-first and visual-first groups, ten participants received stimulus sets 1 and 2 (three non-semantic, three semantic per set) for Sessions 1 and 2. The other ten participants received stimulus sets 3 and 4 (three non-semantic, three semantic per set) for Sessions 1 and 2. Each stimulus set had different stimuli presented at different locations. The order of presentation was counterbalanced. For example, five of the participants receiving stimulus sets 1 and 2 were given set 1 in the first session and set 2 in the second session, and the other five were given set 2 first and then set 1. The same was true for sets 3 and 4.
In the auditory task, the three non-semantic and three semantic sounds, randomly interspersed, were presented to specific speakers in the array, one after the other, with a 1 s inter-stimulus interval. Participants were instructed to try to remember the locations of the sounds as precisely as possible, but not to pay attention to the order of presentation. They turned their heads toward each sound as it was being played and pointed the laser to its location. After each sound, participants returned head and laser to the forward position. Localization of the stimuli was recorded on the first learning trial only and was used to assess recall performance. The rationale for doing so was that the introduction of a localization session within the training session would have introduced further uncontrolled opportunities to encode the location information. Furthermore, should localization have improved with practice, this would have no effect on the learning curves due to the criteria used to assess correct performance, and consequently no effect on the data reported in this paper. In the visual task, both non-semantic and semantic pictures, randomly interspersed, were presented at specific locations on the monitor. Participants were instructed to try to remember the locations of the pictures as precisely as possible and to ignore the order of presentation.
Recall stage
In the auditory task, the same stimuli as in the encoding stage were presented in a different randomized order through headphones. Participants turned their heads and used the laser to indicate the location from which they had previously heard the sound come. In the visual task, the stimuli were presented in a different (randomized) order in the center of the computer screen. Participants indicated remembered locations by clicking with the mouse and dragging the pictures to where they remembered them appearing in the previous encoding trial. Encoding and recall trials were alternated for each participant until recall was correct for all stimuli on two trials in a row or until 12 trials were completed.
After completion of both the visual and auditory components of the experiment, participants were given a questionnaire in which they indicated what kind of strategies they used in order to remember the locations of both auditory and visual stimuli. They were also asked whether they named any stimuli in either modality, and if so, which ones they named. After filling out the questionnaire, participants were debriefed with a written explanation of the experiment and the opportunity to ask any questions they may have had.
Two dependent variables were used as measures of recall: trials to criterion (TTC) and the number of correct locations on trial 3 (T3). Criterion was reached when all stimuli were recalled correctly on two trials in a row (the mean number of TTC was 6), with a maximum of 12 trials per session. The sound locations were judged to be correct if the participant pointed anywhere between the actual location and the perceived location (recorded on trial 1 during the localization of the stimuli), plus 5° on either side. For the pictures, locations were judged to be correct when placed within 0.25′′ of the actual location in any direction on the PowerPoint ruler. The number of correct locations on T3 was used as a dependent variable because it was a mid-point to criterion.
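The two measures can be made concrete with a minimal sketch of how TTC and T3 could be computed from trial-by-trial correctness records (the data layout and function names are ours; the two-consecutive-trials criterion, the 12-trial cap, and the use of trial 3 are from the text):

```python
# Each session is a list of trials; each trial is a list of booleans,
# one per stimulus, True if that stimulus was localized correctly.

def trials_to_criterion(trials, max_trials=12):
    """Trials to criterion (TTC): 1-based index of the trial on which all
    stimuli were correct for the second consecutive time; max_trials if
    criterion was never reached."""
    for i in range(1, len(trials)):
        if all(trials[i - 1]) and all(trials[i]):
            return i + 1  # criterion reached on this (second consecutive) trial
    return max_trials

def correct_on_trial_3(trials):
    """T3: number of correct locations on the third trial."""
    return sum(trials[2])

session = [
    [False, True, False, True, True, False],
    [True, True, True, True, True, True],
    [True, True, True, True, True, True],  # second consecutive all-correct trial
]
print(trials_to_criterion(session))  # -> 3
print(correct_on_trial_3(session))   # -> 6
```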
A mixed model ANOVA was conducted for both the visual and auditory tasks in order to investigate whether participants who started with one modality task or the other differed in performance, taking into account practice and the semantic property of stimuli. Session (Session 1 vs. Session 2) and Semantic Property (semantic vs. non-semantic stimuli) were considered within-subjects factors and First Task (auditory-first vs. visual-first) was considered a between-subjects factor in the analysis. Effect sizes for within- and between-subjects factors were calculated using the within- and between-variances as denominators, respectively, as suggested in Salkind (2010). We will consider semantic elaboration and cross-modal effects separately.
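As a point of reference for the F statistics reported in the Results, partial eta squared can be derived directly from an F value and its degrees of freedom. This sketch shows only the standard conversion; the authors computed η2 with within- and between-variances as denominators (Salkind, 2010), a different convention, so it will not reproduce their reported values:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Standard conversion from an F statistic to partial eta squared:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    num = f_value * df_effect
    return num / (num + df_error)

# Shape of the ANOVAs in this study: 1 effect df, 18 error df, e.g., F(1,18) = 9.43
print(round(partial_eta_squared(9.43, 1, 18), 3))  # -> 0.344
```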
For both auditory and visual modalities, there was a significant difference on both measures of recall for non-semantic vs. semantic stimuli, with recall of semantic stimuli being better (fewer TTC; higher number of correct answers on the third trial). Participants’ performance improved in Session 2 relative to Session 1, whereby fewer TTC were required and the number of correct answers on the third trial increased. Spatial memory for semantic stimuli remained significantly better than spatial memory for non-semantic stimuli in Sessions 1 and 2 in both modalities.
In the auditory task, the main effects showed that participants had better performance in terms of TTC with semantic rather than with non-semantic stimuli, indicating that the location of semantic sounds was easier to remember [F(1,18) = 9.43, p < 0.01, η2 = 0.09]. The same was found in terms of number of correct answers on T3 [F(1,18) = 9.18, p < 0.01, η2 = 0.12]. When looking at practice effects, there was a significant difference in performance in terms of correct responses on T3, where participants performed better in Session 2 than in Session 1 [F(1,18) = 6.13, p < 0.05, η2 = 0.10; Figure 3B]. In addition, participants performed better in terms of TTC in Session 2 compared to Session 1 (Figure 3A), which closely approached statistical significance [F(1,18) = 3.79, p = 0.067, η2 = 0.07]. Thus, participants improved with practice in the auditory task.
Pairwise comparisons using paired samples t-tests were conducted in order to determine the differences in semantic property within each session. In Session 1, there was a significant difference between semantic and non-semantic stimuli in terms of both TTC [t(19) = 2.17, p < 0.05; Figure 3A] and correct answers on T3 [t(19) = −2.70, p < 0.01; Figure 3B]. In Session 2, there was also a significant difference between the two types of stimuli in terms of TTC [t(19) = 1.78, p < 0.05; Figure 3A]. This difference approached significance for correct answers on T3 [t(19) = −1.63, p = 0.06; Figure 3B].
In the visual task, the results paralleled those of the auditory task. The main effects showed that participants had better performance in terms of TTC with semantic rather than with non-semantic stimuli, indicating that the location of semantic images was easier to remember than that of abstract images [F(1,18) = 123.67, p < 0.001, η2 = 0.46]. The same result was found in terms of number of correct answers on T3 [F(1,18) = 40.39, p < 0.001, η2 = 0.30]. Thus, it seems that the locations of semantic stimuli, whether they are sounds or images, are recalled better than those of non-semantic ones. When looking at practice effects, participants performed significantly better in Session 2 compared to Session 1 in terms of both TTC [F(1,18) = 4.67, p < 0.05, η2 = 0.05; Figure 3C] and correct answers on T3 [F(1,18) = 8.10, p < 0.01, η2 = 0.07; Figure 3D].
Pairwise comparisons using paired samples t-tests were conducted in order to look at the effect of semantic property within each session. In Session 1, there was a significant difference between semantic and non-semantic stimuli in terms of both TTC [t(19) = 5.67, p < 0.001] and correct answers on T3 [t(19) = −3.82, p < 0.001; Figures 3C,D]. In Session 2, the same results were found in terms of TTC [t(19) = 6.12, p < 0.001] and correct answers on T3 [t(19) = −3.68, p < 0.001; Figures 3C,D].
Cross-modal task order effects
A comparison of the auditory scores was made between two groups of participants: those who performed the auditory task first and those who performed the auditory task after the visual task. A comparison of the visual scores was also made between two groups of participants: those who performed the visual task first, and those who performed the visual task after the auditory task (Figure 4). In the visual modality, there was a difference in performance between the two groups in both sessions, with a significantly higher number of TTC in the visual-first group than in the visual-second group, showing cross-modal learning effects. Similar findings were observed between the auditory-first group and the auditory-second group. These differences were greater for the visual than the auditory tasks and for the non-semantic stimuli than for the semantic stimuli.
The main effects showed that in the visual modality, participants who performed the visual task after the auditory task required significantly fewer TTC (Figure 4C) and obtained more correct answers on T3 than those who performed the visual task first [TTC: F(1,18) = 8.93, p < 0.01, η2 = 0.33; T3: F(1,18) = 11.70, p < 0.01, η2 = 0.39]. Thus, performing the auditory spatial memory task before the visual spatial memory task significantly improved performance on the visual spatial memory task, indicating a cross-modal learning effect. In addition, there was an interaction between Semantic Property and First Task in terms of TTC [F(1,18) = 11.36, p < 0.01, η2 = 0.04], where individuals who performed the visual task second had a smaller difference in TTC between semantic and non-semantic images, compared to the visual-first group. Thus, practice from the same task in a different modality helped bridge the gap between semantic and non-semantic stimuli. There was also a significant interaction between Session and First Task [F(1,18) = 4.74, p < 0.05, η2 = 0.04], where the visual-second group had a smaller difference in T3 correct answers between Sessions 1 and 2 than the visual-first group. This interaction strongly approached significance in terms of TTC [F(1,18) = 3.86, p = 0.065, η2 = 0.04]. Thus, the visual-second group benefited less from practice than the visual-first group within the visual task, most likely because they had already benefited from the auditory task. This is supported by the fact that the scores of the visual-second group were still better than those of the visual-first group.
t-tests were conducted in order to assess the difference in TTC associated with semantic and non-semantic stimuli between the visual-first and visual-second groups. The two groups were found to be significantly different in TTC for non-semantic stimuli in Session 1 [t(18) = −4.94, p < 0.001; Figure 4C]. This result approached significance for semantic stimuli in Session 1 [t(18) = −1.38, p = 0.092; Figure 4A] and non-semantic stimuli in Session 2 [t(18) = −1.43, p = 0.085; Figure 4C].
In the auditory task, the main effects showed that participants performed better in terms of TTC when they were administered the auditory task after the visual task rather than before, and this effect approached significance [F(1,18) = 3.39, p = 0.082, η2 = 0.16]. However, there was no difference in the number of correct answers on T3, p = 0.842.
t-tests were conducted to assess the difference in TTC related to semantic and non-semantic stimuli between the auditory-first and auditory-second groups. In Session 1, the auditory-second group performed better with non-semantic stimuli than the auditory-first group (Figure 4A), and this effect strongly approached significance [t(18) = 1.62, p = 0.061]. The other comparisons were found to be non-significant, p > 0.05 (Figures 4A,B). This suggests that there is a cross-modal practice effect that transfers from the visual to the auditory task, and that the processing of non-semantic sounds benefits more from this practice than that of semantic sounds.
In their responses to the questionnaire, all 20 participants reported naming the semantically meaningful sounds, and 10 participants reported naming at least one of the non-semantically meaningful sounds. Many of the names for non-semantic sounds that participants came up with were abstract, e.g., “rustling,” “noise,” “scaring.” The names participants assigned to semantic sounds, when listed, were always correct. The names assigned to non-semantic sounds were never correct and showed no agreement among participants, with one exception: two participants reported naming one of the sounds “ocean.”
In response to the visual task, 16 of 20 participants reported naming the semantically meaningful pictures, and seven participants reported naming at least one of the non-semantically meaningful pictures. For those who listed them, names of semantic pictures were always correct. The names assigned to the non-semantic pictures usually related to color, e.g., “good green” and “gross green.” Only one participant reported naming any of the non-semantic pictures using concrete names of objects (“coleslaw” and “guacamole”) rather than color.
Effect of semantic elaboration
The locations of semantically meaningful stimuli were easier to learn and recall than those of non-semantically meaningful stimuli when presented in both the auditory and visual modalities. This finding indicates that naming a stimulus, which involves basic semantic elaboration, leads to better spatial memory for sounds and pictures than does simply perceiving a stimulus. One possible explanation is that the representation of semantic objects can be accessed more readily than that of non-semantic objects. This easier access can then aid retrieval of the location of the object from a cognitive map. Alternatively, the formation of a cognitive map could be facilitated by the pre-existing semantic knowledge of its individual components. This is supported by a study by Hardt and Nadel (2009), in which the authors showed that people build a cognitive map using concrete cues present in the environment but not abstract paintings, in an adaptation of the Morris Water Maze where both types of cues are available. However, they are able to incorporate the abstract paintings into a cognitive map when asked to do so. Thus, people have a tendency to use concrete nameable cues to construct a cognitive representation of an environment, although they can also use abstract cues.
Half of the participants reported producing names for the non-semantic sounds; fewer named the non-semantic pictures. Participants mostly reported naming the non-semantic sounds after they named the semantic sounds. Because familiar objects already have names, they would likely have been learned first. Another possibility is that participants ignored the non-semantic stimuli and began by learning the semantic stimuli because they had pre-existing names. The non-semantic stimuli could be assigned names afterward, rendering them easier to remember than if a mere perceptual representation was used.
Several factors can contribute to the faster learning of the location of nameable stimuli. For example, as suggested by previous research, naming a stimulus likely provides a secondary pathway for encoding, thus supplementing the direct perceptual pathway. Marks (1989) showed that elaborative processing of picture names aids in later retention of the picture names, but not in recall of the perceptual details of the pictures. Jones (1974) showed that pictorial representations of paired associates during encoding led to significant improvements in recall relative to recall for the words alone, supporting the two-pathway hypothesis for better memory. Klatzky et al. (2002) showed that spatial locations of words can be learned perceptually through vision or audition as well as semantically from a verbal description of the locations. The results of Marks (1989), Jones (1974), and Klatzky et al. (2002) suggest that using two pathways to encode memories is superior to using just one in terms of how quickly information about a stimulus is learned and how well it is later remembered. This experiment provides direct support for this concept. The non-semantic stimuli had only one encoding pathway, which was perceptual, at least at the beginning of the learning phase. The semantic stimuli, which were readily nameable, provided the immediate opportunity to use two pathways: the perceptual pathway and the semantic pathway, both of which would converge onto regions critical for spatial learning. The fact that semantic stimuli were easier to learn than the non-semantic stimuli could be explained by the use of two pathways to encode the stimuli, instead of one.
In addition, the difference between the learning rates for the locations of semantic and non-semantic stimuli may be related to the fact that the semantic stimuli are more familiar and consequently may appear more distinct to the participant. Familiarity and distinctiveness have both been shown to lead to a stronger semantic representation and better recognition memory for visual stimuli (Valentine and Bruce, 1986; Gauthier and Tarr, 1997). Conversely, it has also been suggested that elaborative processing leads to more distinctiveness, and that distinctiveness alone produces a better memory trace (Marks, 1989). The semantically meaningful stimuli in this study were nameable, but they were also more familiar and more distinct than the non-semantically meaningful stimuli. It is therefore possible that it is easier to form a spatial map of familiar stimuli than of unfamiliar stimuli, which would lead to faster learning of the semantic pictures and sounds. Although the semantic stimuli were pictures of familiar objects or sounds made by familiar objects, the pictures and sounds themselves were new to each participant at the beginning of each session. Familiarity and distinctiveness were not intended to be dissociated from nameability in this study, and all three factors likely contributed to the difference in the learning rate between non-semantic and semantic stimuli.
Alternatively, research participants may not have formed a spatial map of the stimuli, but may instead have used paired associations between stimulus and location. Verbalizing the location could have made it easier to form a paired association between the name of the location and the name of a stimulus than with a perceptual representation of the stimulus, as both the stimulus and its location would be represented in the same mode. We know from verbal reports that names were sometimes produced for the non-semantic stimuli. The question is whether participants also used a verbal description of the locations. Based on participants’ reports, we found that this was not the case. No participant reported naming the locations in either the auditory or the visual modality. This suggests that verbal labeling was used to develop a semantic representation of the stimuli, not to learn the locations themselves. In fact, all participants reported using purely visual or spatial strategies to remember the locations of the stimuli, and many reported using visual cues on the computer monitor or folds in the curtain covering the auditory array. Several participants reported forming a “spatial map” or “auditory map” in their heads.
Additionally, we asked whether participants could have confused the various non-semantic stimuli. If they had, we would have expected correct memory for locations but incorrect identification of the objects occupying those locations. However, this was not the case: locations of semantic stimuli were learned very precisely with relative ease, whereas locations of non-semantic stimuli remained less precise for a greater number of trials in both modalities. In other words, rather than simply swapping the locations of two non-semantic stimuli, participants were less precise in remembering the locations of non-semantic stimuli than those of semantic stimuli. This is important because it implies that the difference between recall for locations of non-semantic and semantic stimuli is not due to difficulties in recognizing the more abstract, less familiar non-semantic stimuli.
Finally, the performance discrepancy between semantic and non-semantic stimuli was smaller in the auditory task than in the visual task. A potential explanation for this effect is that the non-semantically meaningful sounds, although hard to recognize (as in the birdie call and the chains grinding), are not inherently abstract sounds. The non-semantically meaningful pictures, in contrast, were fully abstract, since they did not represent any kind of object. More participants tried to name non-semantic sounds than non-semantic images. Thus, participants could have identified the non-semantic sounds to a greater extent, which would result in performance closer to that for semantic sounds.
Based on the results of this study, we may conclude that naming stimuli, a simple form of semantic elaboration, can facilitate spatial memory and the formation of a cognitive map. It can thus improve memory for the specific locations of those stimuli and render this memory superior to memory for the locations of non-nameable stimuli.
Cross-modal learning effects
The order of presentation was counterbalanced, so that half of the participants performed the visual task first (visual-first) while the other half performed the visual task after the auditory task (auditory-first). The results in both modalities represent an average of these two groups, but when the groups are examined separately, a cross-modal practice effect emerges.
Performance on the visual task was better for the visual-second group than for the visual-first group in both sessions, especially for the non-semantic stimuli. This indicates that there is an effect of practice from the auditory task, and that the effect transfers from the auditory modality to the visual modality. Similarly, performance on the auditory task was better for the auditory-second group than for the auditory-first group in both sessions, and again there was a greater difference for non-semantic stimuli than for semantic stimuli. This result parallels that of the visual task and implies that there is an effect of practice from the visual task that transfers from the visual to the auditory modality. Thus, this practice effect crosses modalities in both directions: auditory practice extends to the visual task, and visual practice extends to the auditory task. Nevertheless, the cross-modal effect was smaller in the auditory task. This could be explained by the fact that cross-modal practice seems to benefit non-semantic processing more than semantic processing. Taking part in the auditory or visual task second resulted in enhanced spatial localization for non-semantic stimuli, whereas localization accuracy for semantic stimuli closely resembled that of the group performing the task first. As mentioned above, performance for non-semantic sounds was more similar to that for semantic sounds than performance for non-semantic images was to that for semantic images. Thus, in the auditory task there was less room for improvement, since non-semantic sounds were not completely abstract and were more similar to semantic sounds. In summary, the similarity between semantic and non-semantic sounds limited the emergence of a larger cross-modal effect, which affects non-semantic stimuli more than semantic stimuli.
There was also an effect of practice within each task: performance was better in the second session than in the first. Interestingly, the visual-second group benefited less from practicing the visual task from Session 1 to Session 2 than the visual-first group. Since their overall performance was still better than that of the visual-first group, this implies that they initially gained greater practice effects from the auditory task. Thus, by the time they were administered the visual task, their performance had already improved, indicating that the learning curve is steeper early on.
The observed practice effects could be related to non-specific factors, such as an increase in comfort with the testing situation, or perhaps we have tapped into something more interesting. For instance, people may develop a new strategy or technique for performing the task during their very first session, and this strategy may be applicable in subsequent sessions. This is particularly apparent in the large difference between the two groups for the non-semantic stimuli (Figure 4A). The fact that there is little difference between memory for locations of non-semantic and semantic sounds in the auditory-second group implies that people develop a specific technique for remembering non-semantic stimuli.
Another potential explanation for these cross-modality effects involves mental imagery, the process of bringing perceptual information to consciousness. Many neuroimaging and lesion studies have reported that the same brain areas are recruited during perception and mental imagery (Kosslyn et al., 2001), whether in the auditory (Zatorre and Halpern, 1993; Zatorre et al., 1996; Halpern and Zatorre, 1999) or the visual modality (Farah, 1984; Levine et al., 1985; De Vreese, 1991; Young et al., 1994; Chatterjee and Southwood, 1995; O’Craven and Kanwisher, 2000). When remembering the locations of the stimuli, many participants reported using spatial cues and forming maps in their heads, processes that may very well involve imagery. As such, both the auditory and visual tasks may have required mental spatial imagery. A study by Ghaem et al. (1997) supports this hypothesis. In that study, people learned to navigate along landmarks in a real environment. Later, positron emission tomography (PET) was used to image the brain while people imagined walking down the same path, among imagined landmarks. Mental imagery of the navigation experience led to activation of the hippocampus and neighboring medial temporal lobe regions (Ghaem et al., 1997). The brain areas responsible for encoding and retrieving spatial locations of the stimuli, such as the hippocampus, may thus respond better to auditory or visual stimuli owing to triggered imagery (Amedi et al., 2005) and lead to better memory because stored representations of the stimuli are accessed more readily. Mental imagery may also explain why performance for semantic stimuli was better than that for non-semantic stimuli, as imagining concrete nameable objects is easier than imagining abstract non-nameable objects.
Various processes, like object recognition, are inherently multisensory. An object can be characterized by its appearance, its sounds, its texture, its smell, its taste, and so on; information is thus taken from various sensory modalities to provide a unified perception of an object. Various brain areas process information from different sensory inputs, which converge and are integrated to form this unified representation (Amedi et al., 2005). Thus, an object presented in different modalities can activate the same representation in the brain. Integration occurs in high-order processing areas but also at the primary cortical level (Schroeder and Foxe, 2005; Ghazanfar and Schroeder, 2006). Importantly, anatomical connections have been shown to exist between the visual and auditory primary cortices (Falchier et al., 2002; Rockland and Ojima, 2003). Other studies (see Schroeder and Foxe, 2005; Ghazanfar and Schroeder, 2006 for reviews) have found that visual processing sometimes takes place in the auditory cortex and that auditory processing sometimes takes place in the visual cortex (Zangenehpour and Zatorre, 2010). In addition, Schneider et al. (2008) demonstrated that object identification was facilitated by cross-modal priming in the auditory and visual modalities.
Moreover, in the macaque monkey, it was shown that visual, auditory, somatosensory, and multimodal association areas send projections to the entorhinal, perirhinal, and parahippocampal cortices (Jones and Powell, 1970; Van Hoesen and Pandya, 1975; Van Hoesen et al., 1975; Seltzer and Pandya, 1976; Mesulam and Mufson, 1982; Mufson and Mesulam, 1982; Suzuki and Amaral, 1994a). These cortices then relay the sensory inputs to the hippocampus (Van Hoesen and Pandya, 1975; Van Hoesen et al., 1975; Insausti et al., 1987; Suzuki and Amaral, 1994b). In addition, the hippocampus projects back to various association cortices, including the orbitofrontal, medial frontal, anterior temporal, and posterior temporal association cortices (Rosene and Van Hoesen, 1987; Van Hoesen, 1995). Some of these outputs project directly to the association cortices, but most are relayed through the entorhinal cortex (Kosel et al., 1982; Van Hoesen, 1995). These relays of information to and from the hippocampus may therefore prime the hippocampus and make it more sensitive to upcoming stimulation. Since object location activates the hippocampus, it is possible that the presentation of an object location task in one modality primes the hippocampus and accelerates the processing involved in locating objects in a different modality, thereby accounting for the observed cross-modal effects.
Thus, visual imagery, hippocampal activation, and cross-modal processing in unimodal areas and associative cortex may all serve as priming processes that help produce better “what” and “where” representations of the presented stimuli, as well as their conjunction in a cognitive map. This in turn enhances spatial learning and recall.
Naming sounds and pictures, a form of semantic elaboration, makes the locations of sounds and pictures easier to remember than if the stimuli are merely perceived without names. This is true for both the visual and auditory modalities, even after some practice with the task. Additionally, taking part in an object location task in one modality seems to enhance performance of the same task in a different modality, indicating that spatial learning is cross-modal. In summary, these results suggest that the semantic representation of auditory or visual stimuli, in addition to their representation in different modalities, facilitates the formation of a cognitive map. Further research is necessary in order to elucidate the mechanisms involved in the construction of a cognitive map.
Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
This project was funded by NSERC grant number 239920 and FRSQ grant number 3234. We wish to thank Stephen Frey for providing the visual stimuli, Martine Turgeon for theoretical contributions, and Shumita Roy for her assistance with the manuscript.
Amedi A., von Kriegstein K., van Atteveldt N. M., Beauchamp M. S., Naumer M. J. (2005).
Functional imaging of human crossmodal identification and object recognition.
Exp. Brain Res.
166, 559–571 10.1007/s00221-005-2396-5 [PubMed] [CrossRef]
Anderson J. R., Reder L. M. (1979).
“An elaborative processing explanation of depth of processing,”
Levels of Processing in Human Memory, eds Cermak L. S., Craik F. I. M. (Hillsdale, NJ: Lawrence Erlbaum Associates), 385–395
Barnfield A. M. (1999).
Development of sex differences in spatial memory.
Percept. Mot. Skills
89, 339–350 [PubMed]
Belmore S. M. (1981).
Imagery and semantic elaboration in hypermnesia for words.
J. Exp. Psychol. Hum. Learn. Mem.
Bohbot V. D., Iaria G., Petrides M. (2004).
Hippocampal function and spatial memory: evidence from functional neuroimaging in healthy participants and performance of patients with medial temporal lobe resections.
18, 418–425 10.1037/0894-4105.18.3.418 [PubMed] [CrossRef]
Brown C., Lloyd-Jones T. J. (2006).
Beneficial effects of verbalization and visual distinctiveness on remembering and knowing faces.
34, 277–286 [PubMed]
Bruce V., Young A. (1986).
Understanding face recognition.
Br. J. Psychol.
77(Pt 3), 305–327 [PubMed]
Chatterjee A., Southwood M. H. (1995).
Cortical blindness and visual imagery.
45, 2189–2195 [PubMed]
Choi J., L’Hirondelle N. (2005).
Object location memory: a direct test of the verbal memory hypothesis.
Learn. Individ. Differ.
Craik F. I. M., Lockhart R. S. (1972).
Levels of processing: a framework for memory research.
J. Verbal Learn. Verbal Behav.
11, 671–684 10.1016/S0022-5371(72)80001-X [CrossRef]
Craik F. I. M., Tulving E. (1975).
Depth of processing and retention of words in episodic retention.
J. Exp. Psychol.
Crawford L. E., Cacioppo J. T. (2002).
Learning where to look for danger: integrating affective and spatial information.
13, 449–453 [PubMed]
De Vreese L. P. (1991).
Two systems for colour-naming defects: verbal disconnection vs colour imagery disorder.
29, 1–18 10.1016/0028-3932(91)90090-U [PubMed] [CrossRef]
Eals M., Silverman I. (1994).
The hunter–gatherer theory of spatial sex differences: proximate factors mediating the female advantage in recall of object arrays.
15, 95–105 10.1016/0162-3095(94)90020-5 [CrossRef]
Falchier A., Clavagnier S., Barone P., Kennedy H. (2002).
Anatomical evidence of multimodal integration in primate striate cortex.
[PMC free article]
Farah M. J. (1984).
The neurological basis of mental imagery: a componential analysis.
18, 245–272 10.1016/0010-0277(84)90026-X [PubMed] [CrossRef]
Gauthier I., Tarr M. J. (1997).
Becoming a “Greeble” expert: exploring mechanisms for face recognition.
37, 1673–1682 10.1016/S0042-6989(96)00286-6 [PubMed] [CrossRef]
Ghaem O., Mellet E., Crivello F., Tzourio N., Mazoyer B., Berthoz A., Denis M. (1997).
Mental navigation along memorized routes activates the hippocampus, precuneus, and insula.
8, 739–744 10.1097/00001756-199702100-00032 [PubMed] [CrossRef]
Ghazanfar A. A., Schroeder C. E. (2006).
Is neocortex essentially multisensory?
Trends Cogn. Sci.
10, 278–285 [PubMed]
Greene R. L. (1992).
Human Memory: Paradigms and Paradoxes. Hillsdale, NJ: Lawrence Erlbaum Associates
Halpern A. R., Zatorre R. J. (1999).
When that tune runs through your head: a PET investigation of auditory imagery for familiar melodies.
9, 697–704 10.1093/cercor/9.7.697 [PubMed] [CrossRef]
Hardt O., Nadel L. (2009).
Cognitive maps and attention.
Prog. Brain Res.
176, 181–194 [PubMed]
Hyde T. S., Jenkins J. J. (1969).
Differential effects of incidental tasks on the organization of recall of a list of highly associated words.
J. Exp. Psychol.
Insausti R., Amaral D. G., Cowan W. M. (1987).
The entorhinal cortex of the monkey: II. Cortical afferents.
J. Comp. Neurol.
264, 356–395 [PubMed]
James T. W., Kimura D. (1997).
Sex differences in remembering the locations of objects in an array: location-shifts versus location-exchanges.
Evol. Hum. Behav.
Jones E. G., Powell T. P. (1970).
An anatomical study of converging sensory pathways within the cerebral cortex of the monkey.
93, 793–820 10.1093/brain/93.4.793 [PubMed] [CrossRef]
Jones M. K. (1974).
Imagery as a mnemonic aid after left temporal lobectomy: contrast between material-specific and generalized memory disorders.
12, 21–30 10.1016/0028-3932(74)90023-2 [PubMed] [CrossRef]
Klatzky R. L., Lippa Y., Loomis J. M., Golledge R. G. (2002).
Learning directions of objects specified by vision, spatial audition, or auditory spatial language.
9, 364–367 [PubMed]
Kosel K. C., Van Hoesen G. W., Rosene D. L. (1982).
Non-hippocampal cortical projections from the entorhinal cortex in the rat and rhesus monkey.
244, 201–213 10.1016/0006-8993(82)90079-8 [PubMed] [CrossRef]
Kosslyn S. M., Ganis G., Thompson W. L. (2001).
Neural foundations of imagery.
Nat. Rev. Neurosci.
2, 635–642 [PubMed]
Levine D. N., Warach J., Farah M. (1985).
Two visual systems in mental imagery: dissociation of “what” and “where” in imagery disorders due to bilateral posterior cerebral lesions.
35, 1010–1018 [PubMed]
Maguire E. A., Burgess N., Donnett J. G., Frackowiak R. S., Frith C. D., O’Keefe J. (1998).
Knowing where and getting at that place: a human navigation network.
280, 921–924 10.1126/science.280.5365.921 [PubMed] [CrossRef]
Marks West. (1989).
Elaborative processing of pictures in verbal domains.
17, 662–672 [PubMed]
Marsh Due east. J. (2006).
When does generation enhance memory for location?
J. Exp. Psychol. Learn. Mem. Cogn.
32, 1216–1220 [PubMed]
Mennemeier M., Fennell E., Valenstein E., Heilman K. M. (1992).
Contributions of the left intralaminar and medial thalamic nuclei to memory. Comparisons and report of a case.
[PMC free article]
Mesulam M. M., Mufson E. J. (1982).
Insula of the old world monkey. III: efferent cortical output and comments on function.
J. Comp. Neurol.
212, 38–52 [PubMed]
Mufson E. J., Mesulam M. M. (1982).
Insula of the old world monkey. II: afferent cortical input and comments on the claustrum.
J. Comp. Neurol.
212, 23–37 [PubMed]
Mulligan N. W. (2001).
Generation and hypermnesia.
J. Exp. Psychol. Learn. Mem. Cogn.
27, 436–450 [PubMed]
O’Craven K. M., Kanwisher N. (2000).
Mental imagery of faces and places activates corresponding stimulus-specific brain regions.
J. Cogn. Neurosci.
12, 1013–1023 [PubMed]
O’Keefe J., Nadel L. (1978).
The Hippocampus as a Cognitive Map. Oxford: Clarendon
Rockland K. S., Ojima H. (2003).
Multisensory convergence in calcarine visual areas in macaque monkey.
Int. J. Psychophysiol.
50, 19–26 10.1016/S0167-8760(03)00121-1 [PubMed] [CrossRef]
Rosene D. L., Van Hoesen G. W. (1987).
“The hippocampal formation of the primate brain: a review of some comparative aspects of cytoarchitecture and connections,”
Cerebral Cortex. Further Aspects of Cortical Function, Including Hippocampus, Vol.
6, eds Jones E. G., Peters A. (New York: Plenum Press), 345–456
Salkind N. J. (2010).
Encyclopedia of Research Design, 1st Edn, Vol.
Thousand Oaks, CA: Sage Publications
Schneider T. R., Engel A. M., Debener S. (2008).
Multisensory identification of natural objects in a two-way crossmodal priming paradigm.
55, 121–132 [PubMed]
Schooler J. W., Ryan R. S., Reder L. (1996).
“The costs and benefits of verbally rehearsing memory for faces,”
Basic and Applied Memory Research, Vol. 2, Practical Applications, eds Herrmann D. J., McEvoy C., Hertzog C., Hertel P., Johnson M. K. (Mahwah, NJ: Lawrence Erlbaum Associates), 51–65
Schroeder C. E., Foxe J. (2005).
Multisensory contributions to low-level, “unisensory” processing.
Curr. Opin. Neurobiol.
15, 454–458 [PubMed]
Scoville Westward. B., Milner B. (1957).
Loss of recent memory after bilateral hippocampal lesions.
J. Neurol. Neurosurg. Psychiatr.
[PMC free article]
Seltzer B., Pandya D. N. (1976).
Some cortical projections to the parahippocampal area in the rhesus monkey.
50, 146–160 [PubMed]
Silverman I., Eals M. (1992).
“Sex differences in spatial abilities: evolutionary theory and data,”
The Adapted Mind: Evolutionary Psychology and the Generation of Culture, eds Barkow J. H., Cosmides L., Tooby J. (New York: Oxford University Press), 533–549
Slamecka North. J., Graf P. (1978).
The generation effect: delineation of a phenomenon.
J. Exp. Psychol. Hum. Learn.
Suzuki W. A., Amaral D. G. (1994a).
Perirhinal and parahippocampal cortices of the macaque monkey: cortical afferents.
J. Comp. Neurol.
350, 497–533 10.1002/cne.903500402 [PubMed] [CrossRef]
Suzuki W. A., Amaral D. G. (1994b).
Topographic organization of the reciprocal connections between the monkey entorhinal cortex and the perirhinal and parahippocampal cortices.
14(Pt 2), 1856–1877
[PMC free article]
Valentine T., Bruce V. (1986).
The effects of distinctiveness in recognising and classifying faces.
15, 525–535 10.1068/p150525 [PubMed] [CrossRef]
Van Hoesen G., Pandya D. N. (1975).
Some connections of the entorhinal (area 28) and perirhinal (area 35) cortices of the rhesus monkey. I. Temporal lobe afferents.
95, 1–24 10.1016/0006-8993(75)90204-8 [PubMed] [CrossRef]
Van Hoesen G., Pandya D. N., Butters N. (1975).
Some connections of the entorhinal (area 28) and perirhinal (area 35) cortices of the rhesus monkey. II. Frontal lobe afferents.
95, 25–38 10.1016/0006-8993(75)90205-X [PubMed] [CrossRef]
Van Hoesen G. W. (1995).
Anatomy of the medial temporal lobe.
Magn. Reson. Imaging
13, 1047–1055 [PubMed]
Young A. W., Humphreys G. W., Riddoch M. J., Hellawell D. J., de Haan E. H. (1994).
Recognition impairments and face imagery.
32, 693–702 10.1016/0028-3932(94)90029-9 [PubMed] [CrossRef]
Zangenehpour Southward., Zatorre R. J. (2010).
Crossmodal recruitment of primary visual cortex following brief exposure to bimodal audiovisual stimuli.
48, 591–600 10.1016/j.neuropsychologia.2009.10.022 [PubMed] [CrossRef]
Zatorre R. J., Halpern A. R. (1993).
Effect of unilateral temporal-lobe excision on perception and imagery of songs.
31, 221–232 10.1016/0028-3932(93)90086-F [PubMed] [CrossRef]
Zatorre R. J., Halpern A. R., Meyer Due east., Evans A. C. (1996).
Hearing in the mind’s ear: a PET investigation of musical imagery and perception.
J. Cogn. Neurosci.
8, 29–46 [PubMed]
Articles from Frontiers in Psychology are provided here courtesy of Frontiers Media SA