Humans hear sounds from 20 Hz to 20,000 Hz.

 

Test your lower and upper limit with this video:

 

 

 

Related concept:

(from the section: Wave Interference and Beats - Sound Waves and Electromagnetic Waves. Part Ia : Sound Waves – Human hearing range and musical tones)

 

 

The ear is a frequency analyser: different frequencies land at different points along a membrane, depending on its mechanical properties at each point.

 

 

What happens when we hit a drum?

Its surface is set in motion and starts to vibrate, meaning it moves up and down very quickly. The molecules of the surface collide with the air molecules and push them a little bit “forward”. These push their neighbors and then come back to their original position. The air molecules that have been pushed will push their own neighbors and then return to their place, and so on.

 

The air molecules leave their original position and return to it; they are vibrating or oscillating. This creates compressions and rarefactions in the air, which correspond to waves.

(Note that a sound of 20Hz would correspond to 20 compressions/rarefactions per second).
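To get a sense of scale, the standard wave relation λ = v/f (with v ≈ 343 m/s for sound in air at room temperature) gives the wavelength at each end of the audible range:

```python
# Speed of sound in air at ~20 °C (approximate value).
SPEED_OF_SOUND = 343.0  # m/s

def wavelength(frequency_hz: float) -> float:
    """Wavelength (in metres) of a sound wave of the given frequency in air."""
    return SPEED_OF_SOUND / frequency_hz

# The audible range spans three orders of magnitude in wavelength:
print(f"20 Hz     -> {wavelength(20):.2f} m")        # ~17 m
print(f"20,000 Hz -> {wavelength(20000) * 100:.2f} cm")  # ~1.7 cm
```

So a 20 Hz compression pattern is about as long as a bus, while a 20 kHz one is shorter than a fingertip.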

 

As mentioned in this link:

http://www.acs.psu.edu/drussell/Demos/waves/wavemotion.html

“Pick a single particle and watch its motion. The wave is seen as the motion of the compressed region (ie, it is a pressure wave), which moves from left to right.”
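The particle motion described in the quote can be sketched numerically. This is a toy model with arbitrary units (not the linked demo's actual parameters): each particle's position oscillates about its rest position, while the phase term k·x − ω·t makes the compression pattern travel from left to right.

```python
import math

# Toy longitudinal wave: arbitrary amplitude, wavelength, and frequency.
A = 0.2             # oscillation amplitude (arbitrary units)
wl = 4.0            # wavelength (same units as position)
freq = 1.0          # frequency (arbitrary time units)
k = 2 * math.pi / wl        # wavenumber
omega = 2 * math.pi * freq  # angular frequency

def particle_position(rest_x: float, t: float) -> float:
    """Displaced position at time t of the particle whose rest position is rest_x."""
    return rest_x + A * math.sin(k * rest_x - omega * t)

# Watch a single particle (rest position x = 1.0) over one period:
for t in (0.0, 0.25, 0.5, 0.75):
    print(round(particle_position(1.0, t), 3))
# The particle never drifts further than A from its rest position,
# even though the compression pattern itself moves steadily to the right.
```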

 

 

How do we hear the sound of a drum?

Interestingly, our ear has a drum of its own. The Greek word for drum is “tympanon”. It is shown below in dark green.

 

 

"Anatomy of the Human Ear en" by Chittka L, Brockmann - Perception Space—The Final Frontier, A PLoS Biology Vol. 3, No. 4, e137 doi:10.1371/journal.pbio.0030137 (Fig. 1A/Large version), vectorised by Inductiveload. Licensed under CC BY 2.5 via Commons - https://commons.wikimedia.org/wiki/File:Anatomy_of_the_Human_Ear_en.svg#/media/File:Anatomy_of_the_Human_Ear_en.svg

 

 

The vibrating air molecules will push the surface of the ear drum and set it in motion.

 

You hit a drum with a drumstick. Interestingly, our ear drum has a drumstick attached to it. Instead of the stick hitting the drum, the drum hits the stick! Actually, there is a series of three sticks, one transmitting motion to the next: the malleus (hammer), the incus (anvil), and the stapes (stirrup). They are shown above in shades of blue.

 

 

By OpenStax College [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons - modified by removal of indications

 

Adjacent to the stapes is a snail-like chamber called the “cochlea”, after the Greek word for snail (in purple in the first image). It consists of three fluid-filled cavities (termed “scalae”, Latin for staircases) that can be seen in the image above; note that the one in the middle is the scala media. The third stick, the stapes, has a footplate that seals the oval window of the scala vestibuli. When sound arrives at the stapes, it pushes onto the fluid of that scala.

 

A very important structure of the cochlea is the basilar membrane.

Let us imagine the cochlea unwound. We could consider it to be like an airplane runway.

 

 

Image: Frequency Coding in the Cochlea

 

https://commons.wikimedia.org/wiki/File:1408_Frequency_Coding_in_The_Cochlea.jpg

 

 

 

As the sound travels down the basilar membrane, it makes it vibrate.

The vibration at a given point depends on the membrane's material/mechanical properties there (its stiffness and width), as well as on a specific type of cell, termed outer hair cells, which actively modify the response that the mechanical properties alone would produce.

 

As mentioned at this link, the narrowest, stiffest part of the membrane is found at the base, while the widest, most flexible part is at the apex (the tip).

 

The human ear perceives sounds in the frequency range of 20 to 20,000 Hz. We mentioned that sound is transmitted via compressions and rarefactions of air molecules; therefore a 20 Hz sound corresponds to 20 compressions/rarefactions per second.

 

At which part do we expect a low frequency (20 Hz) to land? In other words, at which point of the basilar membrane will we obtain resonance?

 

The answer is at the apex (it is difficult to make a big elastic chunk vibrate quickly).

 

Animation from this link

Basilar membrane traveling wave

 

 

What about the distribution in between?

If we divide the length into three parts, the first would cover 20 to 200 Hz, the second 200 to 2,000 Hz, and the third 2,000 to 20,000 Hz. Quite a difference in range between these three parts! The frequencies are thus distributed on a logarithmic scale.

These are the features of the "tonotopic map", as it is called.
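This tonotopic map is often approximated with Greenwood's place–frequency function, f(x) = A(10^(ax) − k), using the published human constants A = 165.4, a = 2.1, k = 0.88, where x is the fractional distance along the membrane from the apex (0) to the base (1). A quick sketch:

```python
# Greenwood's place–frequency function for the human cochlea:
#   f(x) = A * (10**(a*x) - k)
# Human constants (Greenwood, 1990): A = 165.4, a = 2.1, k = 0.88.
A, a, k = 165.4, 2.1, 0.88

def best_frequency(x: float) -> float:
    """Characteristic frequency (Hz) at fractional position x (0 = apex, 1 = base)."""
    return A * (10 ** (a * x) - k)

print(f"apex (x=0.0): {best_frequency(0.0):8.1f} Hz")  # ~20 Hz
print(f"mid  (x=0.5): {best_frequency(0.5):8.1f} Hz")  # ~1.7 kHz
print(f"base (x=1.0): {best_frequency(1.0):8.1f} Hz")  # ~20.7 kHz
```

Note how the midpoint of the membrane already sits near 1.7 kHz: each of the three decades (20–200, 200–2,000, 2,000–20,000 Hz) occupies a comparable stretch of membrane, which is exactly the logarithmic layout described above.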

 

Interesting image from this link

 

CCC-Piano-Graphics-2011-EN

 

 

Above the basilar membrane, there is an epithelial strip termed the organ of Corti, which includes the hair cells, the sensory receptors of the auditory and vestibular systems. The deflection of their hair-like protrusions opens mechano-sensitive ion channels, leading to the generation of action potentials that are transmitted to the brain.

 

http://www.sciencephoto.com/media/309138/view

 

 

The ear canal acts like a resonator

Link

"The ear canal acts like a resonator, and because it is closed at one end, it acts like a quarter-wave-length resonator whose wavelength is four times the length of the canal (average canal length is 25mm in adults) or approximately 100mm. This equates to a sound of approximately 3.3kHz; therefore, the ear canal amplifies sound around 3.3kHz in adults and higher frequencies (according to ear canal length) in infants and children. The outer ear increases sound level by 10 to 15dB in a frequency range from 1.5 to 7 kHz, and this resonance includes the concha bowl of the pinna which has a resonant frequency around 5kHz."
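The quarter-wave calculation in the quote is easy to reproduce. With v = 343 m/s the result comes out at roughly 3.4 kHz, close to the quoted 3.3 kHz (which corresponds to a slightly lower assumed speed of sound):

```python
# Quarter-wave resonance of a tube closed at one end (the ear canal,
# closed at the eardrum): the fundamental wavelength is 4 x the tube length.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def quarter_wave_resonance(canal_length_m: float) -> float:
    """Fundamental resonant frequency (Hz) of a tube closed at one end."""
    return SPEED_OF_SOUND / (4 * canal_length_m)

adult_canal = 0.025  # average adult ear canal length: 25 mm
print(f"{quarter_wave_resonance(adult_canal):.0f} Hz")  # ~3430 Hz, i.e. ~3.4 kHz
```

A shorter canal (as in infants) raises this resonant frequency, which is why the quote notes that children's ears amplify higher frequencies.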

 

 

 

 

Otoacoustic emissions: a standard auditory test 

 

Human sonar: The ear not only receives sound but also emits faint sounds at frequencies that are characteristic for each person (biometric feature)

 

https://en.wikipedia.org/wiki/Otoacoustic_emission

Excerpts:

Broadly speaking, there are two types of otoacoustic emissions: spontaneous otoacoustic emissions (SOAEs), which can occur without external stimulation, and evoked otoacoustic emissions (EOAEs), which require an evoking stimulus.

 

Clinical importance

Otoacoustic emissions are clinically important because they are the basis of a simple, non-invasive test for hearing defects in newborn babies and in children who are too young to cooperate in conventional hearing tests.

 

Biometric importance

In 2009, Stephen Beeby of The University of Southampton led research into utilizing otoacoustic emissions for biometric identification. Devices equipped with a microphone could detect these subsonic emissions and potentially identify an individual, thereby providing access to the device, without the need of a traditional password.[*15*] It is speculated, however, that colds, medication, trimming one's ear hair, or recording and playing back a signal to the microphone could subvert the identification process.[16]

 

(End of excerpts).

pdf link to S. Beeby paper: http://eprints.soton.ac.uk/260480/1/suitoto1.pdf

 

Note that reference 15, which cannot be found on Wikipedia, has been marked by information-book as [*15*].

 

 

Google book

"Animal Sonar: Processes and Performance"

Link (with excerpt related to human)

 

 

Binaural Listening/Processing

 

http://www.ircam.fr/article/detail/ecoute-binaurale/