Back to Basics | October 2015 Hearing Review
Most of our training regarding hearing aids for hard of hearing people is based on the characteristics of speech. This makes sense because most of what a person is concerned with is hearing speech in quiet and in noise. But what about those people who need amplification to help them play music, or the rest of our hard of hearing clients who simply want to listen to music on occasion? Does the “music program” need to be that different from a “speech-in-quiet program”? Surprisingly, the answer is no.
Of the many differences between speech and music as inputs to a hearing aid, the two distinguishing factors for fitting hearing aids are: 1) the higher sound levels of music, and 2) the crest factor. Even quiet music can be in excess of 100 dB SPL, whereas the highest sound levels of speech are in the mid-80 dB SPL region. The crest factor is simply the difference between the average (RMS) level of a sound and its peaks. For speech, this is on the order of 12 dB; for music, it can be 18-20 dB. Because musical instruments are not as “damped” as the soft-walled, saliva- and mucous-filled human mouth, the peaks of music tend to be 6-8 dB higher than those of speech. Both of these factors contribute to the higher sound levels of music.
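As an illustrative aside (not from the original article), the crest factor can be computed directly from a waveform as the peak level minus the average (RMS) level, in dB. The short Python sketch below uses synthetic signals purely for illustration: a steady sine has a crest factor of about 3 dB, while adding a sharp, undamped transient, much like a percussive or brass attack, raises the peaks well above the RMS level.

```python
import numpy as np

def crest_factor_db(signal):
    """Crest factor: peak level minus average (RMS) level, in dB."""
    peak = np.max(np.abs(signal))
    rms = np.sqrt(np.mean(signal ** 2))
    return 20 * np.log10(peak / rms)

# Illustrative signals (not measured data):
t = np.linspace(0, 1, 16000, endpoint=False)
steady = np.sin(2 * np.pi * 440 * t)                      # steady tone
percussive = steady + 4 * np.exp(-80 * t) * np.sin(2 * np.pi * 880 * t)

print(round(crest_factor_db(steady), 1))      # ~3 dB
print(round(crest_factor_db(percussive), 1))  # noticeably higher
```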
The greater sound level of music (and its higher-level peaks) means that the hearing aid should be able to handle this higher-level input without distortion. And this is where most hearing aids fall short. Many modern digital hearing aids, primarily because of the analog-to-digital (A/D) converter and other “front-end” characteristics, simply cannot handle these higher-level inputs characteristic of music. This is a “front-end hardware” issue and has nothing to do with the software programming settings that occur later in the hearing aid circuitry. If the input is distorted at this early input stage, then no amount of software manipulation that occurs later on will improve things.
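One way to see why this is a hardware problem is a small simulation. The sketch below is illustrative Python, not any manufacturer's signal path, and the input ceiling is an assumption: a loud input that exceeds the A/D converter's range is hard-clipped at the front end, and turning the gain down afterward only scales the already-distorted waveform, so the clipping-related harmonic distortion is unchanged.

```python
import numpy as np

FS = 16000
t = np.arange(FS) / FS

# A loud, music-like tone that exceeds the assumed front-end ceiling.
# Levels are relative: 1.0 represents the A/D converter's maximum input.
music_peak = 1.8 * np.sin(2 * np.pi * 1000 * t)

# Front-end stage: anything above the converter's range is hard-clipped.
clipped = np.clip(music_peak, -1.0, 1.0)

# Later "software" stage: reducing gain after the fact only scales the
# already-distorted waveform; it cannot restore the flattened peaks.
post_gain = 0.5 * clipped

def thd_percent(x, f0=1000):
    """Rough total harmonic distortion estimate from an FFT (illustrative)."""
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    fundamental = spectrum[f0]
    harmonics = np.sqrt(sum(spectrum[k * f0] ** 2 for k in range(2, 6)))
    return 100 * harmonics / fundamental

print(f"THD after clipping:   {thd_percent(clipped):.1f}%")
print(f"THD after later gain: {thd_percent(post_gain):.1f}%")  # unchanged
```

The only remedy is to keep the signal within the converter's range before it is digitized, which is why this is a front-end hardware issue rather than a programming one.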
So-called “music programs” have limited usefulness unless the front-end issue is taken care of first. However, let us assume that we are dealing with those several hearing aids that have been designed with hardware that can handle the louder inputs of music without distortion. What are some software “programming myths” that we need to be aware of?
- Myth: Greater compression. Although there may be slight differences, the selection of compression has more to do with the sensory/neural damage to the hard of hearing person’s cochlea and only secondarily with the properties of the input stimulus. With modern hearing aids (all of which use a form of average- or RMS-based compression), the compression parameters used in the speech-in-quiet program should be similar to those used in a music program.
- Myth: Greater bandwidth. Many manufacturers suggest that their music programs should have a wider bandwidth than their speech programs. This is based on erroneous logic. The widest possible bandwidth of the amplified signal should always be sought unless there is some cochlear limitation, such as cochlear dead regions. The bandwidth of a speech-in-quiet program should be similar to that of a music program. Rarely is there enough amplification in the higher-frequency region for even a speech-in-quiet program, so, if at all possible, the extra high-frequency amplification should be applied across the board. Bandwidth, like compression, is an individual issue and is based primarily on cochlear function, not on the nature of the input stimulus.
- Myth: Extended low-frequency amplification. While it is true that the left-hand side of the piano keyboard (essentially the notes on the bass clef) lies mostly below the lowest note that is generally amplified for speech, it does not follow that these low-frequency notes need to be amplified. There are three reasons for this: 1) Most hearing aid fittings are non- or semi-occluding and, as such, these low-frequency fundamentals enter through the vent unamplified but still audible; 2) While the fundamental energy of a note may not be amplified, its higher-frequency harmonics are amplified, adding to the appreciation of the music; and 3) It is false to assume that hearing a specific note is what defines its pitch. It is the difference between any two adjacent harmonics that defines the pitch, not the note per se. One does not need to hear the note C with a fundamental at 131 Hz, just the 131 Hz difference that can occur between 1,000 Hz and 1,131 Hz. This is called the missing fundamental (see the sketch after this list). For these reasons, extended low-frequency amplification for music is not required.
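To make the missing fundamental concrete, here is a minimal sketch (illustrative Python, not from the article) that builds a tone from only the upper harmonics of C3 (approximately 131 Hz). The 131 Hz spacing is plainly visible in the spectrum even though there is no energy at 131 Hz itself, and it is that spacing which conveys the pitch.

```python
import numpy as np

FS = 16000
F0 = 131                       # fundamental of C3 (approximately)
t = np.arange(FS) / FS         # one second of signal

# Build a "missing fundamental" tone: harmonics 5 through 10 of 131 Hz
# (655-1310 Hz), with no energy at 131 Hz itself.
tone = sum(np.sin(2 * np.pi * k * F0 * t) for k in range(5, 11))

# The spectrum shows peaks spaced 131 Hz apart, but nothing at 131 Hz.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1 / FS)
peaks = freqs[np.argsort(spectrum)[-6:]]

print(sorted(peaks.astype(int)))   # ~[655, 786, 917, 1048, 1179, 1310]
print(int(spectrum[F0]))           # ~0: no energy at the fundamental
```

Because the harmonic spacing, rather than the fundamental itself, carries the pitch, amplifying only the harmonics preserves the perceived note.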
Marshall Chasin, AuD, is an audiologist and director of research at the Musicians’ Clinics of Canada, Toronto. He has authored five books, including Hearing Loss in Musicians, The CIC Handbook, and Noise Control–A Primer, and serves on the editorial advisory board of HR. Dr Chasin has guest-edited three special editions of HR on music and hearing loss (August 2014, February 2009, and March 2006), as well as a special edition on hearing conservation (March 2008).
Correspondence can be addressed to: [email protected]
Original citation for this article: Chasin M. Back to Basics: Three Myths in Programming Hearing Aids for Music. Hearing Review. 2015;22(10):12.