phantom ears

Started by maestro October 26, 2006
MUSIC FOR THE DEAF (copied, without the background music, from )

Having checked out and improved some of the ideas expressed below, I have
started working on the development of music for people with cochlear
implants.  Deaf "listeners" will be able to appreciate subliminal
effects, such as evoked phantom extra ears (created by structuring the
form of the sound information which the implant sends to the brain),
which are not available to advertisers and composers of sound effects
and other music created for people who can hear (our closest equivalent
to "phantom ears" is probably the "phantom voices" that some commercial
adverts produce).  The "phantom ears" effect enables an implant
recipient to hear and understand more than one strand of information at
a time.  Signals from the cochlear implant corresponding to sounds
beyond the normal frequency range of hearing people are also expected
to be useful in such music.

It is not possible for people with normal hearing to perceive effects
such as "phantom ears" properly, through either loudspeakers or
headphones, because our ears are a physical barrier which prevents any
sort of direct 'programming' of the auditory nerve as though it were a
data bus in a computer; a cochlear implant, by contrast, completely
bypasses the (usually profoundly deaf) ear's natural workings and
defenses.  The music being developed consists of data files containing
patterns in the same format as the usual cochlear implant control
signals sent to the inner ear, to be listened to by people who are
already fitted with re-programmable implants.

In due course, I expect to present some of my compositions to
manufacturers of cochlear implants, in the hope that the currently
'stuck' market for cochlear implants and similar products can be
revived: at present, no major improvements in hearing technology are
expected other than biological regrowth of the inner-ear hair cells,
and manufacturers are unwilling to admit that the design of implants
already in use is based on an out-of-date and incorrect model of
hearing (the "Place Theory").  Cochlear implants should be the
technology of choice for people who want to interface their brains
directly with computers, so that they can use high-speed learning tools
and similar products.  Deaf people who already have programmable
cochlear implants might expect to earn income as co-operating
experimental subjects during product development within the industry.

Phantom ears

The term 'phantom' is also used by doctors in a medical context when
referring to phantom limbs and similar phenomena, which may have some
bearing on the effect I am describing:

It is possible to multiplex the input data to the ear and send two or
more separate sound streams, using a join technique which helps the
brain pattern-match the signals and identify them as two separate
things.  The data consists of the cochlear firings corresponding to the
pressure-differential information that programmable cilia would respond
to.  At any 'splice point', the normal ear holds the informational
equivalent of a rolling average of the most recent sound (the last 30th
of a second or so), contained within the cilia which are recharging;
this is a low-quality, delayed form of the information which has just
been sent along the auditory nerve.  In a cochlear implant, the
expected rolling average can be replaced at the splice point with a
different rolling average, generated from a different sound.  Using
this splicing technique every 40th of a second or so, two separate
sound streams can be welded together in a way that the brain does not
usually ever hear (except perhaps in dreams).  When hearing the
multiplexed sound, the brain has several options: it can understand
both sounds separately with no problems (as easily as it already can
with two normal ears); it can understand the sound as a mixture of the
two; or it can understand one properly and mask the other.  "Phantom
ears" can presumably be evoked in deaf people in much the same way that
"phantom voice" effects are already produced, in the brains of normal
listeners, by sound engineers working in the recording industry.
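The splice-based multiplexing described above can be sketched in code.
This is a minimal illustration under my own assumptions - simple
time-division interleaving of two sample streams in frames of 1/40th of
a second, with the names multiplex, rate and splice_hz invented for the
example; the 'rolling average' join technique itself is not modelled:

```python
def multiplex(stream_a, stream_b, rate=8000, splice_hz=40):
    """Interleave two sample streams, switching source at every splice
    point (every 1/40th of a second by default).  Hypothetical sketch
    only: a real implant signal would also have to splice in a
    replacement 'rolling average' at each join, so that the brain can
    pattern-match the two strands as separate sounds."""
    frame = rate // splice_hz          # samples per splice interval
    out = []
    for i in range(0, min(len(stream_a), len(stream_b)), frame):
        # alternate which stream supplies each frame
        src = stream_a if (i // frame) % 2 == 0 else stream_b
        out.extend(src[i:i + frame])
    return out
```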

From Bekesy to Cochlear Implants

Because of the work of Bekesy and others, it is known that the inner
ear responds to different frequencies at different places along the
length of the basilar membrane, by sending electrical impulses along
the auditory nerve into the brain.  This principle is the basis of
'cochlear implant' technology, which is used to help deaf people in
situations where the ear is severely damaged but the auditory nerve is
unimpaired.  A cochlear implant is in effect an 'artificial ear', which
replaces the functionality of most of the ear and the cochlea: a
microphone earpiece transforms sounds into electrical signals
corresponding to the frequency bandwidths of the 'formants' of human
speech, and a thin wire inserted into the cochlea stimulates the
basilar membrane at places along its length corresponding to these
frequency bandwidths.

For some reason, the performance of cochlear implants has been
disappointing.  Recipients often report that the implants only help
them distinguish between sounds and silences: if the implants are
're-tuned', sounds seem higher or lower for a while, but the effect
soon wears off.  Although recipients are encouraged to persevere, on
the principle that their brains will gradually learn to make sense of
the noisy input, many of them prefer to leave their implants switched
off most of the time because they are too noisy.

One could suggest that these patients were unlucky - perhaps their
auditory nerves were too damaged for cochlear implants to work for
them.  However, one rarely reads believable reports of lucky successes
in cochlear implant operations, in which previously deaf patients have
become able to hear speech relatively clearly - many reasonably
intelligent adults who have become profoundly deaf can lip-read and
speak with remarkable accuracy even without cochlear implants, and one
would expect genuine improvements to be more widely known.  The
popularity of cochlear implants these days may be related to the
success of laser surgery for cataracts and other almost miraculous and
relatively cheap 'cures for blindness'; unfortunately, the equivalent
technology for the ears provides none of the hoped-for miracles, and
cochlear implants have already managed to acquire a bad name for
themselves amongst the deaf community.

The idea behind cochlear implants is correct (from Bekesy); today's
technology is certainly capable of the speed and precision necessary to
transform sounds into the required electrical signals (a normal cochlea
sends only a few thousand signals per second along the auditory nerve);
and technicians can tune the frequency response from different places
along the cochlear implant to a very high accuracy for any particular
recipient - sufficient that one might expect the technology to be
adequate not only for human speech (the focus of cochlear implant
research these days), but also for sounds as varied as thunder,
Beethoven symphonies, and birdsong.  So why aren't cochlear implants
more successful?  The answer must be that the 'place theory' model of
hearing is incomplete.

My interest in the subject arose from my work as a composer of music,
with normal hearing.  I was trying to construct a filter which would
stimulate the inner ear in patterns unlikely to occur in nature, in the
hope that interesting and unusual sounds would result, and in doing so
I developed a computational model of the ear which I think can be
applied to cochlear implants for deaf people, and which, unlike the
'place theory', has yet to be falsified.

The model is as follows:

Sounds are caused solely by variations in air pressure over a period of
time, and computationally, the ear/brain connection is modelled as a
large number of pressure-sensitive synapses, equivalent to the cilia
inside the cochlea.

These synapses are like miniature batteries, which are charged up by
either a decrease or an increase in pressure (on the outer hair cells
of the inner ear, the tiny stereocilia which happen to be facing any
oncoming pressure increase bend backwards because of their hydrodynamic
shape, contributing an amount of energy to the OHC synaptic "battery";
the stereocilia on the same OHC facing in the opposite direction
respond similarly to a decrease in pressure).  When a battery is fully
charged, it discharges its contents (fires) onto the auditory nerve,
and then starts to recharge according to the subsequent sound-pressure
variations.  The sense of hearing comes from the brain's statistical
interpretation of the firing patterns sent from the cilia.

The synapses behave like thousands of individual primitive ears, each
sending its own coarse interpretation of the sound-pressure variations
along the auditory nerve and into the brain, and together they form
electrical impulse patterns which the brain attempts to interpret.  It
is important to realise that these patterns contain temporal
information in the timing between firings of different synapses, so the
brain can deduce frequency information from the intervals between the
electrical impulses corresponding to these firings.  Amplitude
information is deduced from the quantity of firings occurring for a
particular pattern.  While the impulse patterns are travelling in the
brain, they are matched against patterns from previously heard sounds
which have been stored in the brain's auditory memory.  During this
process, neural pathways in the memory are strengthened, and the brain
decides what the sound 'is'.
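The model above can be sketched as a simulation.  Everything here is my
own illustrative assumption - the class name, and the threshold and
refractory values, are invented for the example, not measured - but the
mechanism is the one just described: each synapse accumulates charge
from the absolute pressure change per sample, fires when full, and then
cannot fire again until it has recharged:

```python
class Synapse:
    """A 'miniature battery': charges up from pressure changes of
    either sign, fires onto the auditory nerve when full, then cannot
    fire again until its refractory period has elapsed."""
    def __init__(self, threshold, refractory):
        self.threshold = threshold    # charge needed before firing
        self.refractory = refractory  # seconds between firings
        self.charge = 0.0
        self.ready_at = 0.0

    def step(self, t, delta_p):
        if t < self.ready_at:         # still 'recharging'
            return False
        self.charge += abs(delta_p)   # increases AND decreases both charge it
        if self.charge >= self.threshold:
            self.charge = 0.0
            self.ready_at = t + self.refractory
            return True               # fire an impulse
        return False

def simulate(pressure, rate, synapses):
    """Feed a sampled pressure waveform to a population of synapses
    and collect (time, synapse_index) firing events."""
    spikes = []
    prev = pressure[0]
    for n in range(1, len(pressure)):
        t = n / rate
        for i, s in enumerate(synapses):
            if s.step(t, pressure[n] - prev):
                spikes.append((t, i))
        prev = pressure[n]
    return spikes
```

The brain's 'statistical interpretation' would then operate on the
resulting spike pattern; varying the thresholds across the population
stands in for cilia with differing sensitivities.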

I could go into considerably more detail, but I assume that anyone who
is familiar with the problem will understand what I am getting at, and
will be able to adapt or add to the model where required.

Allow me to put the problem into perspective.  The fundamental idea
behind cochlear implants, that different parts of the cochlea are
sensitive to different frequencies, cannot be ignored.  Direct readings
of the electrical output from different parts of the basilar membrane,
when stimulated by sinewaves of different frequencies, show that for
each frequency, there is a maximum output at a corresponding part of
the basilar membrane.

Unfortunately, there is today no credible 'causal explanation' for the
above fact.  Bekesy (1961) gave an explanation, based on vibrations of
the basilar membrane, which satisfied the scientists of his time.  The
reason for Bekesy's basilar-membrane conjecture was that individual
cilia are so simple biologically that they cannot possibly be some sort
of 'sound frequency' detectors which send a signal to the brain
whenever they detect a particular sinewave frequency in the pressure
variations of the cochlear fluid.  If individual cilia could achieve
such a miraculous feat, why would nature ever have needed to evolve
intelligent brains at all?

Since then, the mathematics behind Bekesy's explanation was found to be
too simplistic in the area of 'frequency dispersion', and Lighthill et
al. produced an improved mathematical model which corrected the fault.
More recently, however, it was discovered that the cochleas of patients
whose basilar membranes had been 'solidified' by a fungal disease - and
so could not be vibrating - still produced the frequency-dispersion
characteristics of the Bekesy/Lighthill model.  From this it has been
deduced that movements of the basilar membrane are irrelevant, and
that, no matter how unlikely it might seem, the hairlike cilia must be
the elements in the ear which detect frequencies, as they are the only
other candidate for frequency detection within the ear.

When experimenters test the cochlear response to validate the 'place
theory' (that each cilium responds to a specific frequency), they input
a pure sinewave of a particular frequency to the ear, and then observe
the place in the cochlea which gives the maximum output.  My model
explains the place-theory observations as a resonance effect caused by
the experiment itself.

One of the more surprising things about the ear is the small amount of
electrical impulse information it sends along the auditory nerve,
compared to the 40,000 16-bit words per second required for digital
audio.  The several thousand cilia each have a minimum interval between
firings of between 50 and 150 ms, so that during loud sounds most of
them are incapable of firing more than half the time because they are
'recharging their batteries', and typically only a few thousand impulse
signals (all of much the same magnitude) are sent per second into the
brain.
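The arithmetic behind that comparison can be made explicit.  The round
figures below are my own reading of the text ('several thousand' cilia
taken as 3,000, fastest refiring interval taken as 50 ms):

```python
# Digital audio data rate: 40,000 sixteen-bit words per second.
digital_audio_bits_per_s = 40_000 * 16                # 640,000 bits/s

# Auditory nerve: assumed ~3,000 cilia, each unable to re-fire
# within 50 ms (the fastest refiring interval mentioned above).
cilia = 3_000
fastest_refire_s = 0.050
ceiling_spikes_per_s = int(cilia / fastest_refire_s)  # absolute ceiling

# Only a few thousand roughly equal-sized impulses per second are
# typically observed - far below both this ceiling and the digital
# audio rate, since most cilia are recharging during loud sounds.
```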

> maestro wrote:
> > A cochlear implant is in effect an 'artificial ear', which replaces
> > the functionality of most of the ear and the cochlea: a microphone
> > earpiece transforms sounds into electrical signals corresponding to
> > the frequency bandwidths of the 'formants' of human speech, and a
> > thin wire inserted into the cochlea stimulates the basilar membrane
> > at places along its length corresponding to these frequency
> > bandwidths.
>
> A micromechanical cochlear implant:
> Active sound amplification in the ear:
> Signal pathways in hearing:
> Growing hair cells:
> Sensory cells, stereocilia:
> Ear evolution:
> Miscellaneous:
Perhaps I should explain better.  I am suggesting that, in the design
of cochlear implants, one can ignore frequency and amplitude
considerations.  Given that the travelling-wave theory is known to be
incorrect, and that therefore the synapse firings associated with the
cilia alone must encode the information contained in the stream of
rapidly varying pressures presented to the ear, the suggestion is that
these synapses behave like miniature batteries whose electrical
potential increases when pressure is applied; they discharge an impulse
to the brain when a sufficient amount of varying pressure has been
applied over a period not less than their maximum repeat firing
interval, which (researchers have shown) varies between about 1/40th of
a second for the cilia with the fastest response and 1/12th of a second
for the slowest.

Cochlear implants can be programmed with a model which simulates the
behaviour of several thousand of these pressure-sensitive 'synapses'
when given a typical sound signal.  The electrical signal which
attempts to cause the firings predicted by the model should be fed
directly to all parts of the cochlea.  The implant should not perform
formant analysis on the input sound and stimulate (i.e. trigger) only
those parts of the cochlea which correspond to the 'place theory'
positions of the formants' frequency bands, because that does not use
the cochlea as efficiently as normal hearing does.  If the
formant-based (speech-sensitised) programming of cochlear implants
worked as advertised, the ordinary sounds of birds, traffic and the
weather would tend to sound like a constant stream of voices, and
complaints similar to the 'hearing voices' symptoms of schizophrenia
would be expected from deaf people with implants.  And why have the
manufacturers given so little thought to the appreciation of music and
natural sounds other than speech?
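To make the suggestion concrete: given firing events from a synapse
simulation of the kind proposed, the implant controller would only need
to bucket them into stimulation frames for its electrodes.  This is a
hypothetical sketch - the function name, the 1 ms frame length, and the
linear synapse-to-channel mapping are all my own assumptions - which
also shows why thousands of model synapses must be collapsed onto a
22-channel device:

```python
def spikes_to_channels(spikes, n_synapses, n_channels=22, frame_s=0.001):
    """Map (time, synapse_index) firing events onto an n-channel
    implant: each model synapse is assigned linearly to one electrode
    channel, and firings are bucketed into fixed stimulation frames.
    Returns {frame_number: set of channels to trigger}."""
    frames = {}
    for t, idx in spikes:
        channel = idx * n_channels // n_synapses   # collapse many onto few
        slot = int(t / frame_s)                    # which stimulation frame
        frames.setdefault(slot, set()).add(channel)
    return frames
```

With n_channels=1, the same routine describes the one-channel implant
discussed below: every model firing becomes a pulse on the single
electrode, i.e. the 'single perfect cilium'.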
Perhaps the reason that the suggested method is not already used is
simply that manufacturing technology is not yet capable of the
miniaturisation and precision necessary to stimulate individual nerve
cells electrically - there are thousands of cilia but only 22 channels
in most cochlear implants.  However, it is not known that as many as 22
channels are necessary, and it may be that one channel per ear is
sufficient, if the cochlear implant can behave like a single perfect
cilium - or like cilia with a perfectly even response throughout the
ear - by sending a uniquely noise-free signal (the same from all parts
of the implant, instead of the statistically interpreted, fuzzy signal
that the brain usually receives from the ears) along the auditory nerve
to the brain.  Such one-channel cochlear implants should also be useful
for the partially deaf, if they can replace damaged parts of the
cochlea without sending a signal to the brain which conflicts with the
information from any remaining working parts of the inner ear (as the
technology used in present-day 22-channel cochlear implants would).

A typical manufacturer's description of the workings of the ear and of
cochlear implants can be found online: Bekesy's "Place Theory" is
explained in layman's terms, and the website declares that our ears are
natural Fourier analysers and that the cilia are like the notes on a
piano.  Continuing their analogy, my own explanation for the Place
Theory observations is that they measure the vibrational effects of
hitting the wooden casing of a piano with a drumroll at various
frequencies, and so give only indirect information about the workings
of the vibrating strings contained within the body of the instrument.
Most laymen are unaware that the physics of sound production from a
piano is itself a subject of much debate and disagreement amongst
professionals, and there is certainly room for improvement in the
technology.