Auditory enhancement of visual searches for event scenes

Files in This Item:
kiyos1_211124.pdf (830.49 kB, PDF)
Please use this identifier to cite or link to this item: http://hdl.handle.net/2115/88087

Title: Auditory enhancement of visual searches for event scenes
Authors: Maezawa, Tomoki
Kiyosawa, Miho
Kawahara, Jun I
Keywords: Crossmodal
Attention
Audiovisual
Auditory enhancement
Visual search
Issue Date: 10-Jan-2022
Publisher: Springer
Journal Title: Attention, Perception, & Psychophysics
Volume: 84
Start Page: 427
End Page: 441
Publisher DOI: 10.3758/s13414-021-02433-8
Abstract: A growing body of research has revealed that uninformative spatial sounds facilitate the early processing of visual stimuli. This study examined crossmodal interactions between semantically congruent stimuli by assessing whether event-related characteristic sounds facilitated or interfered with visual search for pictures of the corresponding event scenes. The search array consisted of four images: one target and three non-target pictures. Auditory stimuli were presented in synchrony with picture onset and were of three types: a sound congruent with the target, a sound congruent with a distractor, or a control sound. The control sound varied across the six experiments: a sound unrelated to the search stimuli, white noise, or no sound. Participants were required to localize the target position as quickly as possible while ignoring the sounds. Localization responses were faster when the sound was semantically related to the target and slower when it was semantically related to a distractor picture. With the distractor-congruent sound, participants incorrectly localized the distractor position more often than chance. These findings were replicated in experiments designed to rule out the possibility that participants learned the picture-sound pairings during the visual tasks (i.e., through brief training within the experiments). Overall, event-related crossmodal interactions are based on semantic representations, and audiovisual associations may develop through long-term experience rather than brief laboratory training.
Rights: This version of the article has been accepted for publication, after peer review (when applicable) and is subject to Springer Nature’s AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements, or any corrections. The Version of Record is available online at: http://dx.doi.org/10.3758/s13414-021-02433-8
Type: article (author version)
URI: http://hdl.handle.net/2115/88087
Appears in Collections:文学院・文学研究院 (Graduate School of Humanities and Human Sciences / Faculty of Humanities and Human Sciences) > 雑誌発表論文等 (Peer-reviewed Journal Articles, etc)
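
Per the note above to use the handle when citing this item, the record's fields can be assembled into a citation entry. The following BibTeX sketch uses only the metadata listed on this page; the entry key (maezawa2022auditory) is an arbitrary choice, not part of the record:

@article{maezawa2022auditory,
  author    = {Maezawa, Tomoki and Kiyosawa, Miho and Kawahara, Jun I.},
  title     = {Auditory enhancement of visual searches for event scenes},
  journal   = {Attention, Perception, \& Psychophysics},
  volume    = {84},
  pages     = {427--441},
  year      = {2022},
  publisher = {Springer},
  doi       = {10.3758/s13414-021-02433-8},
  url       = {http://hdl.handle.net/2115/88087}
}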

Submitter: Kawahara, Jun'ichiro (河原 純一郎)
