Show simple item record

dc.contributor.author: Liu, Jiayue
dc.contributor.author: Stohl, Joshua S.
dc.contributor.author: López Poveda, Enrique A.
dc.contributor.author: Overath, Tobias
dc.date.accessioned: 2025-11-04T13:59:05Z
dc.date.available: 2025-11-04T13:59:05Z
dc.date.issued: 2024-01-30
dc.identifier.citation: Liu, J., Stohl, J., Lopez-Poveda, E. A., & Overath, T. (2024). Quantifying the impact of auditory deafferentation on speech perception. Trends in Hearing, 28, 23312165241227818. https://doi.org/10.1177/23312165241227818
dc.identifier.issn: 2331-2165
dc.identifier.uri: http://hdl.handle.net/10366/167636
dc.description.abstract: The past decade has seen a wealth of research dedicated to determining which and how morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding–decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding–decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
dc.description.sponsorship: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Charles Lafitte Foundation.
dc.language.iso: eng
dc.publisher: Sage Publications
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: auditory deafferentation
dc.subject: cochlear synaptopathy
dc.subject: hidden hearing loss
dc.subject: speech perception
dc.title: Quantifying the Impact of Auditory Deafferentation on Speech Perception
dc.type: info:eu-repo/semantics/article
dc.relation.publishversion: https://doi.org/10.1177/23312165241227818
dc.subject.unesco: 2411.13 Physiology of Hearing
dc.subject.unesco: 2490.01 Neurophysiology
dc.subject.unesco: 6106.12 Sensory Processes
dc.identifier.doi: 10.1177/23312165241227818
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.journal.title: Trends in Hearing
dc.volume.number: 28
dc.type.hasVersion: info:eu-repo/semantics/publishedVersion



Attribution-NonCommercial-NoDerivatives 4.0 International
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International