American Journal of Audiology · Research Note · 10 Jun 2019

The Viability of Media Interviews as Materials for Auditory Training

    Purpose

Rehabilitative auditory training for people with hearing loss faces 2 primary challenges: generalization of learning to novel contexts and user adherence to training goals. We hypothesized that interview excerpts from popular media could positively influence both areas when used as training materials. Interviews contain predictable, structured complexity that promotes perceptual generalization, and they are designed to be engaging for consumers. This study tested the viability of such popular media interviews as training materials, comparing their effectiveness to that of sentence transcription training.

    Method

    Young adults with normal hearing (N = 60) completed 1 hr of transcription training using noise-vocoded materials, simulating acoustic perception through an 8-channel cochlear implant. Participants completed pre- and posttraining assessments of vocoded speech perception in quiet and in noise, along with posttraining high-variability sentence recognition and cued isolated word recognition. Scores on all tests were compared across 4 randomly assigned groups differing in training materials: audiovisual interviews, audio-only interviews, isolated sentences, and undegraded isolated sentences (providing an untrained control comparison group).

    Results

    Recognition in quiet and in noise improved with both types of interview-based training, and interview training groups outperformed the control group on all generalization tests. Participants in the audiovisual interview group also reported significantly higher, more sustained engagement in a retrospective survey.

    Conclusions

    Media interviews appear to be at least as effective as isolated sentences for transcription-based auditory training in simulated hearing loss settings with young adults and may improve engagement and generalization of benefit in auditory training applications.

