Journal of Speech, Language, and Hearing Research — Research Article — 1 Feb 2007

Effects of Training on Speech Recognition Performance in Noise Using Lexically Hard Words


    This study examined how repeated presentations of lexically difficult words in background noise affect a listener's ability to understand both trained (lexically difficult) and untrained (lexically easy) words, presented in isolation and within sentences.


    In the 1st experiment, 9 young listeners with normal hearing completed a short-term auditory training protocol (5 hr); in the 2nd experiment, 8 other young listeners with normal hearing completed a similar protocol lasting about 15 hr. All training used multiple talkers and a closed-set response format. Feedback was provided on each trial and consisted of either orthographic feedback alone or combined orthographic and auditory feedback. Performance on both the trained and untrained words, in isolation and within sentences, was measured pre- and posttraining.


    Listeners' performance improved significantly for the trained words in both open-set and closed-set conditions, as well as for the untrained words in the closed-set condition. Although there was no mean improvement in the number of keywords identified within sentences posttraining, 50% of the listeners who completed the long-term training showed improvement beyond the 95% critical difference.


    With enough training on isolated words, individual listeners can generalize the knowledge gained to the recognition of lexically similar words in running speech.


