Verification of critical band hypothesis to explain the discrepancy in the amounts of TTS produced by noise and tones when presented at equal intensity and for equal duration
Abstract
Pure tones are assumed to be more dangerous than octave bands of noise (Anonymous, 1956). This was explained on the basis of the critical band hypothesis, which states that if a given amount of energy were concentrated within a single critical band, it would be more dangerous than if it were spread over several critical bands, e.g. over an octave (Kryter, 1950). This critical band hypothesis is found not to be pertinent in explaining the different amounts of TTS (temporary threshold shift) produced by pure tones and noise of the same intensity level (Ward, 1963); the energy-spreading arithmetic on which the hypothesis rests is sketched below.
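As a minimal sketch of that arithmetic (the band count $N$ and the 1 kHz example are illustrative assumptions, not values taken from the dissertation): if a total power $P$ is spread uniformly over $N$ critical bands, each band receives $P/N$, so the level per critical band falls by

\[
  \Delta L = 10 \log_{10} N \ \mathrm{dB}.
\]

For an octave near 1 kHz covering roughly $N = 3$ critical bands, $\Delta L = 10 \log_{10} 3 \approx 4.8$ dB below the overall level, whereas a pure tone delivers its entire level within a single critical band.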
However, an alternative hypothesis has been reported: the acoustic reflex is responsible for the difference in the amounts of TTS produced by octave bands of noise and pure tones below 2 kHz at the same intensity. The action of the middle ear muscles differs for noise and pure-tone stimuli. For pure tones, the muscles relax rapidly after an initial contraction, so more energy reaches the cochlea and the resulting TTS is greater. Noise, in contrast, produces a more sustained contraction (the reflex of the middle ear muscles is continuously re-aroused, probably because of the random nature of the noise), so less energy reaches the cochlea and the TTS is smaller.
References
2. Tobias, J. V. (Ed.). Foundations of Modern Auditory Theory, Vol. II. New York: Academic Press.