
Speech Effort, Speech Production in Noise, and Listening in Noise: Interview with Andrea Pittman, PhD

June 19, 2013

Douglas L. Beck, AuD, spoke with Dr. Pittman, an audiologist, author, and associate professor of speech and hearing science at Arizona State University (ASU), about audiological diagnostics, auditory rehabilitation, digital noise reduction circuits, pediatric applications, frequency compression, and more.

Academy: Good morning, Andrea!

Pittman: Hi, Doug. Good to speak with you this morning!

Academy: Andrea, before we get into audiological issues and as a reminder for me, how large is the AuD program at ASU, and how long have you been there?

Pittman: Currently, I believe we have 45 students enrolled in our four-year audiology doctoral program, and I've been here since 2004.

Academy: And, for those who are geographically challenged, I might add that Arizona State University is located in Tempe, which is smack dab in the middle of the Phoenix Metropolitan area. And your PhD was from the University of Wisconsin at Madison?

Pittman: Yes, my mentor was Terry Wiley and my dissertation was on the perception of speech produced in noise.

Academy: Which I suspect had to do with the fact that speech production in noise is different from speech production in quiet?

Pittman: That's right. Vocal effort increases in noise, as described by the Lombard effect, and as the generated sound pressure level increases, the spectral content changes, too. That is, we speak louder and with a higher pitch when we are in noise. Of course, listening in noise is arguably the major complaint of most people with hearing loss and of most people wearing hearing aids, so it is important to understand these factors and address them when programming modern hearing aid amplification systems.

Academy: Absolutely…and to me, the most important oversight in clinical practice is the lack of measurement of speech-in-noise (SIN) ability. That is, because SIN is the most common complaint (as you mentioned) and because we can measure SIN quite easily and quickly, it seems to me we should measure it on every patient to quantify their complaint pre- and post-hearing aid fitting. If we've solved their problem through amplification, the SIN scores should improve while the patient is wearing amplification.

Pittman: Yes, I agree, although I would add a couple of caveats. First, conventional SIN tests include recordings of speech produced in quiet that are mixed with noise. My dissertation (published in 2001) showed that the spectral shape of speech produced in noise is different than speech produced in quiet. Although there is substantial diagnostic value in SIN measures, those measures should not be thought of as representative of communication in noise. Second, a patient's complaints about noise are not likely to be improved with amplification, at least not without a little help. But that's another topic. What I do like about SIN measures is that they provide information that can be used for both diagnostic and auditory rehabilitative (AR) purposes.

Academy: Yes, that's right. We are excellent diagnosticians, and that's critically important to the patients—but the other side of that same coin is what we are going to do with the information after the diagnosis has been made…and if we hit a home run with SIN issues, game over.

Pittman: Sure, but there's more to listening than perceiving familiar words presented in a background of noise. Recently, research has moved toward an investigation of cognitive issues and abilities, communication and listening strategies, and much more.

Academy: Which gets me to one of my favorite new, old sayings (see Beck and Flexer, Hearing Review, February 2011): "Listening Is Where Hearing Meets Brain."

Pittman: Yes, and that's a great reminder that it's not really a matter of diagnostics or AR; it's both! We would do well to consider the hearing and cognitive abilities of the patient as well as the demands of the environment if we wish to facilitate better listening ability. The bottom line is that diagnostics are critically important for addressing the physical health and ability of the patient, whereas rehabilitative strategies address his or her specific complaint, which is why the patient came to see us in the first place.

Academy: Andrea, I recall one of the very interesting articles you published in 2011, which examined the impact of modern noise reduction circuits as applied to the pediatric population. Would you review that for me?

Pittman: Sure. We were interested to see how digital noise reduction (DNR) might or might not benefit children of different ages as they engaged in the cognitively demanding task of learning new words. Basically, we knew that people prefer digital noise reduction circuits because they make sounds more comfortable—while not interfering too much with speech perception in quiet or noise.

We decided to manipulate the cognitive environment because we know that some tasks are more cognitively demanding than others, such as the difference between listening to familiar words and learning new ones. We had two groups of children, a younger group and an older group, and we examined their rate of word learning in quiet and in noise, with noise reduction on and noise reduction off.

Academy: And, to make it even more interesting, you and your colleagues actually measured the signal-to-noise ratio (SNR) at the hearing aid output, and you selected a hearing aid that performed about in the middle, with respect to SNR issues, when the hearing aid was set to its maximal DNR?

Pittman: Yes, we did that because the output SNR of a hearing aid can vary with the degree and configuration of hearing loss. Additionally, the output SNR can vary considerably across manufacturers. We've found that some hearing aids improve SNR to small degrees (~2 dB) while others have the potential to improve SNR quite a bit (~6 dB) when DNR is activated.

Academy: And to be clear, you looked at instruments from more than half a dozen hearing aid manufacturers?

Pittman: That's correct. Because of the large variation we observed, we made sure to measure the nominal and the effective DNR in each of our research participants using the maximal DNR setting.

Academy: And so, as a reminder, let's start with the fact that because hearing aids are nonlinear, they generally decrease the SNR as sound goes through them. For example, if the speech is 70 dB and the noise is 55 dB at the input microphone of the hearing aid, that would present a 15 dB SNR. However, because the amplification system is nonlinear, and the goal of compression is to amplify the weakest sounds while protecting the ear against sudden loud sounds (and more), the input 15 dB SNR is not going to be preserved at the output. The output SNR will be smaller, depending on factors such as the knee-point, compression ratio, fast versus slow attack and release times, and the DNR system employed.

Pittman: Exactly. Many hearing aids will reduce the SNR (make it worse) as the sound goes through the circuit. But a hearing aid with an active DNR circuit can improve the SNR by as much as 6 dB and may provide the listener with a signal closer to the original and more like what normal-hearing listeners are hearing.
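To make that arithmetic concrete, here is a minimal sketch in Python. The 70/55 dB input example and the roughly 2 to 6 dB DNR benefit come from the discussion above; the 3 dB compression penalty is an illustrative assumption, not a measurement of any particular hearing aid.

```python
# Minimal sketch of the SNR arithmetic discussed above.
# The compression penalty and DNR benefit values are illustrative
# assumptions, not measurements from any specific hearing aid.

def input_snr_db(speech_level_db: float, noise_level_db: float) -> float:
    """SNR at the hearing aid microphone: speech level minus noise level, in dB."""
    return speech_level_db - noise_level_db

def output_snr_db(input_snr: float,
                  compression_penalty_db: float = 3.0,  # assumed SNR loss from nonlinear compression
                  dnr_benefit_db: float = 0.0) -> float:  # ~2-6 dB when DNR is active, per the interview
    """Rough estimate of the SNR at the hearing aid output."""
    return input_snr - compression_penalty_db + dnr_benefit_db

if __name__ == "__main__":
    snr_in = input_snr_db(70.0, 55.0)  # 15 dB SNR at the input, as in the example above
    print(f"Input SNR: {snr_in:.0f} dB")
    print(f"Output SNR, DNR off: {output_snr_db(snr_in):.0f} dB")
    print(f"Output SNR, DNR on (+6 dB): {output_snr_db(snr_in, dnr_benefit_db=6.0):.0f} dB")
```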

Academy: And to make things a little more interesting, what would happen if you applied adaptive directionality in addition to the DNR circuit that adds 6 dB SNR?

Pittman: That's a good question and I should state up front that I'm not an expert on adaptive directionality. But I do know that directional technology is most beneficial when the person wearing the hearing aids physically places himself between the signal of interest (i.e., speech) and the noise source (i.e., background noise). So in some idealized situations, I suspect that adaptive directionality would make things better, but in a noisy restaurant or cocktail party where the noise is originating from all around the listener, and the walls, floor and ceiling are reverberant—the impact of adaptive directionality would be reduced. Even so, the research in this area shows that directional microphone technology is the most effective hearing aid feature we can provide to our patients for dealing with noise.

Academy: Okay, thanks for clarifying…so back to the study. You had two groups of children (older and younger) and two basic hearing aid digital noise reduction settings (on or off). What did you find?

Pittman: First, we found no negative impact from DNR use for older or younger children in noise. Second, we found that the children's learning rate decreased significantly in noise when the DNR was turned off. But when the DNR was activated, we found that the learning rate of the older children improved significantly. In fact, their word learning rate was the same as it was in quiet.

Although we didn't find the same improvement in the younger children, we think it's because the SNR improvement offered by the hearing aid we used (~2 dB) wasn't quite enough for them. We know from a number of previous studies that younger children need higher SNRs than older children to perform at their best. It's quite likely that the learning rate of the younger children in our study might have improved, too, if they had been using hearing aids with a DNR circuit that offered a better SNR improvement.

Academy: And so the take-home message (and I know many pediatric audiologists will argue about this!) is to use the best hearing aids with the best DNR programs, and turn them on and leave them on.

Pittman: Yep.

Academy: Andrea, I know we're already over the time limit…but can you give me a snapshot of your thoughts regarding frequency compression in hearing aid fittings?

Pittman: Sure, and due to time constraints, it'll have to be a generic discussion. Yes, many of us have been known to take new technology and ideas and apply them to our patients prior to having outcomes-based evidence showing that it actually works as we'd like to think it works!

And so with regard to frequency-lowering technology, there is a place for it—but it may be a fairly narrow place. Research conducted at the Center for Audiology at the University of Western Ontario has shown that frequency compression is beneficial to the perception of certain voiceless fricatives, particularly for listeners with severe hearing loss in the high frequencies. They also showed that the technology did not interfere with the perception of speech in general. So there are data supporting the use of frequency compression, and more is on its way.

That said, I believe our job as audiologists is to provide the cleanest and clearest signal possible to the ear. To me, that means the widest bandwidth possible (hearing aids are getting better and better at this) and the most noise reduction, and the least distortion. I think frequency-lowering should be used when the hearing loss does not allow for effective audibility to occur in the high frequencies. In fact, I have seen it work successfully for a number of hearing aid users.

I would add, however, that it is important to understand what the technology is doing to the amplified signal, just like all of the other signal-processing options that we have at our fingertips when we program a hearing aid. I worry that, due to the rapid advances of these technologies and the increasingly complicated nature of hearing aids in general, we may miss one or two settings that the hearing aid user has to live with. If it's one that distorts the amplified signal unnecessarily, it could mean the difference between successful and unsuccessful hearing aid use.
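As a rough illustration of the frequency-lowering idea Dr. Pittman describes above, here is a small sketch of one common nonlinear frequency compression mapping: frequencies above a knee point are compressed toward the knee by a fixed ratio, while lower frequencies pass through unchanged. The knee point and ratio below are hypothetical values chosen for illustration, not settings from any study or product mentioned in the interview.

```python
# Sketch of a simple nonlinear frequency compression mapping.
# Frequencies at or below the knee point are unchanged; frequencies
# above it are compressed toward the knee by a fixed ratio.
# The knee point (2000 Hz) and ratio (2:1) are hypothetical examples.

def compress_frequency(f_in_hz: float, knee_hz: float = 2000.0, ratio: float = 2.0) -> float:
    """Map an input frequency to its frequency-compressed output frequency."""
    if f_in_hz <= knee_hz:
        return f_in_hz
    return knee_hz + (f_in_hz - knee_hz) / ratio

if __name__ == "__main__":
    for f in (500, 2000, 4000, 6000, 8000):
        print(f"{f} Hz -> {compress_frequency(f):.0f} Hz")
```

The practical effect is that high-frequency speech cues (such as voiceless fricatives) are moved into a lower region where the listener may have more usable hearing, which is consistent with the narrow use case described above.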

Academy: Thanks, Andrea. Okay, and so now that we are way over time—I know I have to let you go. It's been a pleasure chatting with you, Andrea! Thanks so much for a fascinating discussion.

Pittman: My pleasure, Doug. Thanks for your interest in my work.

Andrea Pittman, PhD, is an audiologist, author, and associate professor, speech and hearing science, at Arizona State University (ASU), in Tempe, AZ.

Douglas L. Beck, AuD, Board Certified Audiologist, is the Web content editor for the American Academy of Audiology.
