Different listening devices are available for individuals with hearing loss. One such amplification device is the hearing aid. Hearing aids can also be coupled with FM units, which can provide a greater degree of “loudness” and clarity.

Hearing Aids

As Martin (1997) stated, a “hearing aid may be thought of as a miniature, personalized public address system” (p. 436) which consists of four basic components: a microphone, an amplifier, a power source, and a receiver (Bentler, 1996; Johnson, 1984; Martin, 1997). First, sound is picked up by the microphone, which converts the sound into an electrical signal (Bentler, 1996). The electrical signal is then amplified by the amplifier (Johnson, 1984). The receiver converts this electrical signal into an audible signal, which then travels through the ear (Bentler, 1996). Hearing aids are powered by batteries (Martin & Noble, 1990). There are three primary categories of hearing aids available to those who have hearing loss: analog, programmable, and digital.
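
As a rough illustration of this four-part chain, the sketch below models each stage as a function. The function names and the fixed gain value are illustrative assumptions, not details drawn from the cited sources.

```python
# Hypothetical sketch of the four-part signal chain described above:
# microphone -> amplifier -> receiver, with a battery powering the amplifier.

def microphone(sound_pressure):
    """Convert acoustic input into an electrical signal (here, a float)."""
    return sound_pressure * 0.5  # transduction is illustrative, not measured

def amplifier(electrical_signal, gain=20.0):
    """Boost the electrical signal; this is the stage the battery powers."""
    return electrical_signal * gain

def receiver(amplified_signal):
    """Convert the amplified electrical signal back into audible sound."""
    return amplified_signal  # delivered into the ear canal

audible = receiver(amplifier(microphone(0.01)))
```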

 

Analog Hearing Aids

Analog hearing aids are the more traditional type of hearing aid, amplifying all sound regardless of its intensity (Leavitt, 1996). Analog hearing aids process sound by “modifying a continuous electrical signal” (Martin, 1997) but are not very flexible when adjustment is needed (Leavitt, 1996). The internal components of these hearing aids can be adjusted by either the manufacturer or the audiologist (Leavitt, 1996).

 

Programmable Hearing Aids

Advances in technology have also affected the hearing aid. One newer type of hearing aid is the programmable hearing aid, which the wearer can adjust to his or her liking (Leavitt, 1996). A programmable hearing aid contains a digital controller (Chinook Hearing Clinics, 2000). A computer is typically used to program the hearing aid, and once programmed, the hearing aid can function in the same manner as an analog hearing aid (Cudahy & Levitt, 1994). The hearing aid, for instance, can be programmed to reduce the volume when a loud sound signal is present or to reduce the amount of background noise in extremely noisy situations (Leavitt, 1996). The user may also have an external component, such as a remote control, so that the wearer can choose which program he or she would like to utilize in specific situations (Bentler, 1996).

 

Digital Hearing Aids

Digital hearing aids are “self-adaptive” (Cudahy & Levitt, 1994, p. 6) listening devices which convert sound signals into binary codes (Martin, 1997). After the sound signal is converted into binary codes, this information is stored in the digital hearing aid’s memory, similar to the way a computer stores information (Martin, 1997). The binary codes are then converted back into an analog format so that the signal can be heard by the hearing aid user (Cudahy & Levitt, 1994). Based on the binary codes stored in the hearing aid, the digital hearing aid filters out “unwanted” sound signals (Martin, 1997), and the fidelity of the sound is improved (Chinook Hearing Clinics, 2000). That is, the hearing aid is programmed to filter out sound so that the signal has more clarity. The adjustments the hearing aid makes are automatic (Martin, 1997) and nearly instantaneous (Cudahy & Levitt, 1994).
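
The convert-filter-reconvert cycle described above can be sketched in a few lines. This is a minimal illustration assuming 16-bit quantization and a simple moving-average filter standing in for the far more sophisticated filtering a real digital hearing aid performs; all names are illustrative.

```python
# A minimal sketch of the digital hearing aid cycle: analog-to-digital
# conversion, digital filtering, and digital-to-analog reconstruction.

def analog_to_digital(samples, bits=16):
    """Convert continuous samples (-1.0..1.0) into integer (binary) codes."""
    full_scale = 2 ** (bits - 1) - 1
    return [round(s * full_scale) for s in samples]

def filter_unwanted(codes, window=3):
    """Smooth the stored codes; a crude stand-in for noise filtering."""
    out = []
    for i in range(len(codes)):
        chunk = codes[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) // len(chunk))
    return out

def digital_to_analog(codes, bits=16):
    """Convert the binary codes back into an analog-style signal."""
    full_scale = 2 ** (bits - 1) - 1
    return [c / full_scale for c in codes]

cleaned = digital_to_analog(filter_unwanted(analog_to_digital([0.1, 0.9, 0.2])))
```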

 

Assistive Listening Devices

Hearing aids may not be enough for a person with hearing loss, as he or she may also need additional amplification to supplement the hearing aids or cochlear implant that he or she wears. Since hearing aids amplify noise as well as speech, it is sometimes difficult to understand the speaker (Martin, 1997). As the signal-to-noise ratio, or “the difference in decibels between a signal (such as speech) and a noise presented to the same ear or both ears” (Martin, 1997, p. 144), decreases, the intelligibility of the speech signal decreases as well (Flexer, 1996). To further explain, the louder the background noise, the more difficult it is to hear or understand the speaker. Using an FM unit improves the signal-to-noise ratio by increasing the loudness of the speaker’s voice for the individual with hearing loss.
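
Because the signal-to-noise ratio is simply a difference of decibel levels, its effect can be shown with a short worked example. The decibel values below are invented for illustration.

```python
# Signal-to-noise ratio per the definition above: the speech level
# minus the noise level, in decibels.

def snr_db(speech_db, noise_db):
    """Signal-to-noise ratio: speech level minus noise level, in dB."""
    return speech_db - noise_db

quiet_room = snr_db(speech_db=65, noise_db=45)  # +20 dB: easy listening
noisy_room = snr_db(speech_db=65, noise_db=62)  # +3 dB: speech hard to follow
# An FM unit raises the effective speech level at the listener's ear,
# improving the SNR even though the room noise itself is unchanged.
```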

 

Personal FM Systems

In order to make it easier to hear the speaker, frequency modulation (FM) devices are sometimes used. This particular device consists of two parts: a transmitter and a receiver (Thibodeau, 1996). One unit consists of a microphone and transmitter (Martin, 1997), which is placed approximately six inches from the speaker’s mouth (Thibodeau, 1996). The speech signal is then transmitted over a radio frequency carrier wave (Martin, 1997). The radio frequency carrier wave is picked up by the receiver, which is worn by the individual with hearing loss (Flexer, 1996a). The individual with hearing loss wears an earpiece which connects to the receiver, a wire or loop around his or her neck, or a wire that plugs the receiver directly into the hearing aid (Flexer, 1996a). The signal sent over the radio frequency wave is amplified for the listener, and this increases the speech intelligibility of the signal because the signal-to-noise ratio has been improved (Thibodeau, 1996).

 

Soundfield FM Systems

Some individuals make use of a soundfield FM system instead of a personal FM system. In this type of system, the speaker wears a microphone (Palmer, 1996), and when he or she speaks, the speech signal is transmitted to loudspeakers placed in different areas of the room (Palmer, 1996; Thibodeau, 1996). This improves the signal-to-noise ratio because the signal is now approximately 15 dB louder than the noise in the background (Palmer, 1997, p. 215).

 

Cochlear Implants

Cochlear implants “replicate the function of the outer and middle ear and the hair cells within the cochlea” (Med-El, n.d.). To further explain, cochlear implants bypass the conductive components of the ear and stimulate the cochlea directly. Cochlear implants have become another option for individuals with severe-to-profound or profound hearing loss.

 

History of Cochlear Implants

In the 1800s, Alessandro Volta placed rods, charged by a battery, into a patient’s ear. The patient then reported having heard a noise (Beiter & Shallop, 1998). After Volta’s discovery, scientists hoped that electricity could be used to restore hearing (Schindler, 1999). In the 1950s, two researchers, Djourno and Eyries, directly stimulated a patient’s auditory nerve, and as a result, the patient reportedly heard a sound (Schindler, 1999). After this research was completed, different surgeons began working to develop a device that would use electrical stimulation to help individuals with hearing loss hear (Beiter & Shallop, 1998).

Two surgeons, William House and James Doyle, developed a cochlear implant device; this device was unsuccessful because some of the materials used in its construction proved toxic. However, the patients who used these devices reported hearing some environmental sounds as well as speech patterns (Schindler, 1999). In the following years, different researchers attempted to construct implantable cochlear prostheses, and in 1973, William House introduced the first widely used cochlear implant (Schindler, 1999). This particular device was a single-channel implant known as the 3M/House device and gave information on sound intensity, environmental sound, and the timing of sound presentation (Beiter & Shallop, 1998). In the 1980s, work began on creating a multi-channel cochlear implant (Nevins & Chute, 1996). The Nucleus device was undergoing clinical trials during this time period (Nevins & Chute, 1996). In subsequent years, different corporations developed the various types of multi-channel cochlear implants which are used today.

 

Components of the Cochlear Implant

The cochlear implant consists of external and internal components. The external components of the cochlear implant are a microphone, a transmitter, a cable, and a speech processor (Beiter & Shallop, 1998; Nevins & Chute, 1996). The internal components consist of an electrode array, or “circuitry” (Nevins & Chute, 1996), and an internal receiver/stimulator, which contains a receiving coil, stimulator circuits, and a small magnet (Kelsay & Tyler, 1996).

 

How a Cochlear Implant Works

The internal components of the cochlear implant are surgically implanted (Cochlear Corporation, 1996). The receiver/stimulator is placed in a shallow well which has been made in the skull behind the patient’s ear (Kelsay & Tyler, 1996). The electrode array is carefully inserted into the cochlea (Cochlear Corporation, 1996). After undergoing this surgical procedure, the patient waits approximately four to six weeks before receiving the external components (Kelsay & Tyler, 1996).

After the speech processor is programmed, the cochlear implant can transmit sound information to the cochlea (Kelsay & Tyler, 1996). The microphone, which may be worn at ear level, picks up the sound information, which is then sent to the speech processor (Cochlear Corporation, 1996). The speech processor then converts the sound signals into digital signals “to make them appropriate for electrical stimulation of the auditory system” (Dorman, 1998, p. 6). Next, these modified signals are sent through the cable to the transmitter, which sends the signals across the skin to the receiver via an FM carrier wave (Advanced Bionics, 1999). After this occurs, the receiver/stimulator “delivers the correct amount of electrical stimulation to the appropriate electrodes on the array” (Cochlear Corporation, 1996, p. 7). These electrical signals then stimulate selected nerve fibers in the cochlea (Kelsay & Tyler, 1996). Since the cochlea is frequency-specific (Brownell, 1999) and each electrode stimulates certain areas of the cochlea (Beiter & Shallop, 1998), the cochlear implant provides a wide range of frequencies that can be processed and “heard” (Cochlear Corporation, 1996).
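
The frequency-specific assignment described above can be sketched as a filterbank that gives each electrode its own frequency band. The band edges and the electrode count below are illustrative assumptions, not any manufacturer’s actual frequency allocation.

```python
# Hypothetical sketch: split the speech frequency range into one
# logarithmically spaced band per electrode, mirroring the cochlea's
# frequency-specific (tonotopic) organization.

def electrode_bands(low_hz=200.0, high_hz=8000.0, n_electrodes=22):
    """Return one (band_low, band_high) frequency band per electrode."""
    ratio = (high_hz / low_hz) ** (1.0 / n_electrodes)
    edges = [low_hz * ratio ** i for i in range(n_electrodes + 1)]
    return list(zip(edges[:-1], edges[1:]))

for electrode, (lo, hi) in enumerate(electrode_bands(), start=1):
    print(f"electrode {electrode}: {lo:.0f}-{hi:.0f} Hz")
```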

 

Programming and Mapping of the Cochlear Implant

Beiter and Shallop (1998) stated, “cochlear implants are individualized to the specific requirements of each recipient” (p. 15). This individualization of the cochlear implant occurs at mapping, or programming, sessions. The first mapping session occurs approximately four to six weeks after the surgery (Advanced Bionics, 1997). During these mapping sessions, the cochlear implant is connected to a computer with the software necessary for creating the mapping strategy the individual will use (Kelsay & Tyler, 1996). Next, the computer presents sounds to each channel, or electrode pair, until the patient detects the tone (Advanced Bionics, 1997). The computer also presents louder sounds, and the patient chooses levels of sound presentation that are loud but comfortable (Advanced Bionics, 1997). Follow-up appointments are then made to maintain the appropriate levels of sound information (Kelsay & Tyler, 1996).
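
A map produced by such a session can be pictured as a small record per channel: the softest level the patient detects and the loud-but-comfortable level the patient chooses. The field names and the clinical-unit values below are hypothetical.

```python
# Hypothetical sketch of what a "map" records for each channel
# (electrode pair) during a programming session.

from dataclasses import dataclass

@dataclass
class ChannelMap:
    channel: int          # electrode pair being programmed
    threshold_level: int  # softest stimulation the patient detects
    comfort_level: int    # loud but comfortable stimulation level

# One entry per channel, built up over the mapping session.
patient_map = [
    ChannelMap(channel=1, threshold_level=110, comfort_level=180),
    ChannelMap(channel=2, threshold_level=105, comfort_level=175),
]
```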

 

Criteria for Cochlear Implantation

In order to receive a cochlear implant, the patient must fulfill certain general criteria. Both the Nucleus 22 and the Clarion devices require that those who receive cochlear implants have a bilateral, profound or severe-to-profound, sensorineural hearing loss and have scored 30 percent or less on open-set tests (Advanced Bionics, 1997; Cochlear Corporation, 1996). Additionally, candidacy for the Nucleus 22 and the Clarion is limited to individuals who are postlingually deafened (Advanced Bionics, 1997; Cochlear Corporation, 1996).

However, different candidacy requirements apply to the Nucleus 24 device. While recipients of this particular device must still have a bilateral, profound or severe-to-profound, sensorineural hearing loss, the Nucleus 24 is recommended not just for those who have postlingual hearing loss, but also for individuals who have prelingual hearing loss (Cochlear Corporation, 1999). Additionally, those who score 40 percent or less on sentence recognition tests are also eligible to receive the Nucleus 24 device (Cochlear Corporation, 1999). There are other specific criteria stipulated by various cochlear implant teams, including no medical difficulties, “a desire to be a part of the hearing world” (Cochlear Corporation, 1996, p. 3), high motivation and appropriate expectations, availability of a support team, and the willingness to work on integrating new sound information into one’s life (Cochlear Corporation, 1996, p. 3).
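
As a rough summary, the general criteria above reduce to a few simple checks. The sketch below is hypothetical and deliberately omits the many additional clinical criteria implant teams apply; the parameter names are illustrative.

```python
# Hypothetical summary of the general candidacy criteria described above.

def eligible_nucleus22_or_clarion(bilateral_snhl, severity,
                                  open_set_score, postlingual):
    """General criteria cited for the Nucleus 22 and Clarion devices."""
    return (bilateral_snhl
            and severity in ("severe-to-profound", "profound")
            and open_set_score <= 30   # percent correct, open-set tests
            and postlingual)

def eligible_nucleus24(bilateral_snhl, severity, sentence_score):
    """The Nucleus 24 also accepts prelingual losses and scores up to 40%."""
    return (bilateral_snhl
            and severity in ("severe-to-profound", "profound")
            and sentence_score <= 40)  # percent correct, sentence recognition
```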

 

Available Multi-Channel Cochlear Implants

Currently, there are four different cochlear implant devices available to individuals who fulfill the criteria for cochlear implantation. Though these devices have the same ultimate function, they differ in design and other features.

 

Nucleus 22

Cochlear Corporation’s device, the Nucleus 22, was granted premarket approval status by the Food and Drug Administration (FDA) in 1984 for the postlingually deafened population. The device was next granted premarket approval status in 1989 for the pediatric population (Nevins & Chute, 1996). There are currently two different speech processors available for the Nucleus 22. One is the body-worn processor, which is approximately the size of a pocket calculator and can be worn on a belt, in a pocket, in a pouch, or underneath clothing (Cochlear Corporation, 1996). A more recent speech processor is the ESPrit 22, a behind-the-ear version which can also hold two different programs (Cochlear Corporation, 2000). The electrode array of the Nucleus 22 consists of 22 electrodes which are inserted into the cochlea (Cochlear Corporation, 1996).

Currently, the SPEAK coding strategy is used in the Nucleus 22 devices (Cochlear Corporation, 1996). SPEAK uses pulsed signals to stimulate the individual electrodes, which, in turn, stimulate the nerve fibers in the cochlea (Cochlear Corporation, 1996). Cochlear Corporation (1996) also stated that the “strongest features” of the sound that is present are selected by the speech processor (p. 9). Nevins and Chute (1996) stated that the speech waveform is analyzed and then separated into six different peaks. These six peaks determine which electrodes are stimulated (Nevins & Chute, 1996), and the peaks with the largest amplitudes are presented to the corresponding electrodes in the electrode array (Cochlear Corporation, 1996). The Nucleus 22 is no longer the preferred device for cochlear implantation because Cochlear Corporation has developed a newer cochlear implant model, the Nucleus 24.
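
The peak-picking behavior attributed to SPEAK above can be sketched as selecting the largest-amplitude channels from one analysis frame. The channel amplitudes here are invented for illustration.

```python
# Hypothetical sketch of SPEAK-style peak picking: keep the six spectral
# peaks with the largest amplitudes and stimulate their electrodes.

def pick_peaks(channel_amplitudes, n_peaks=6):
    """Return (electrode, amplitude) for the n largest-amplitude channels."""
    ranked = sorted(enumerate(channel_amplitudes, start=1),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:n_peaks]

amplitudes = [0.1, 0.7, 0.3, 0.9, 0.2, 0.8, 0.05, 0.6, 0.4, 0.15]
for electrode, amp in pick_peaks(amplitudes):
    print(f"stimulate electrode {electrode} at amplitude {amp}")
```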

 

Nucleus 24

The Nucleus 24, much like the Nucleus 22, stimulates 22 electrodes in the cochlea (Cochlear Corporation, 1999). Some differences in the internal components, however, exist between the Nucleus 24 and its predecessor. For instance, a removable magnet is built into the internal receiver because it may become necessary to remove the magnet should the patient need to undergo a magnetic resonance imaging (MRI) session (Cochlear Corporation, 2000). Additionally, two extracochlear electrodes are implanted. These electrodes serve two functions: they permit power-efficient, monopolar stimulation (Cochlear Corporation, 2000), and they allow for neural response telemetry, which permits audiologists to “test the functioning of the auditory nerve at 22 sites along the cochlea” (Cochlear Corporation, 1999, p. 7).

Not only does the Nucleus 24 device have a body-worn processor, the SPrint, but there is also a behind-the-ear processor, the ESPrit (Cochlear Corporation, 1999). There are three different coding strategies that can be used with the SPrint speech processor: SPEAK, Continuous Interleaved Sampling (CIS), and ACE. CIS is a mapping strategy that stimulates a fixed number of electrodes at a high rate and in a continuous manner (Cochlear Corporation, 2000). ACE combines the SPEAK and CIS strategies; information about pitch is provided by the SPEAK strategy, while timing information is provided by the CIS strategy (Cochlear Corporation, 2000).
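
The fixed-electrode, continuous pulsing that characterizes CIS can be sketched as a rapid, repeating rotation in which only one electrode fires at a time. The electrode count and number of pulses below are illustrative assumptions.

```python
# Hypothetical sketch of a CIS-style pulse schedule: a fixed set of
# electrodes pulsed one after another in a continuous, repeating cycle.

from itertools import cycle, islice

def cis_schedule(electrodes, n_pulses):
    """Yield the order in which electrodes receive their interleaved pulses."""
    return list(islice(cycle(electrodes), n_pulses))

# Four electrodes pulsed in strict rotation; no two fire simultaneously.
print(cis_schedule(electrodes=[1, 2, 3, 4], n_pulses=10))
# [1, 2, 3, 4, 1, 2, 3, 4, 1, 2]
```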

 

Clarion

In 1996, the distribution of the Clarion device was approved by the FDA for adults (Kessler, 1999). The electrode array consists of 16 electrodes arranged in eight pairs (Nevins & Chute, 1996), coiled to “comfortably fit the natural curve of the cochlea” (Advanced Bionics, 1999). The shape of the electrode array serves to protect the cochlea after surgery (Kessler, 1999).

Currently, the body-worn Clarion speech processor is available, and clinical trials for the behind-the-ear processor began at the end of 2000 (Advanced Bionics, 2000). There are different speech coding strategies available to users of the Clarion device, including CIS, Simultaneous Analog Strategy (SAS), and Paired Pulsatile Sampler (PPS). SAS uses seven of the eight electrode pairs that are available (Kessler, 1999). SAS digitizes the auditory input, converts the digital signal back into analog signals, and then delivers these signals simultaneously to all the active electrodes in the array (Kessler, 1999). PPS is a partially simultaneous strategy; like CIS, it is continuous, but instead of one electrode being activated at a time, two electrodes are activated at a time (Armstrong-Bednall, Goodrum-Clarke, Stollwerck, Nunn, Wei, & Boyle, 1999). The PPS strategy appears to be more popular with CIS users and those who have been recently implanted (Armstrong-Bednall et al., 1999).
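
The difference between simultaneous, interleaved, and paired delivery can be sketched by grouping which electrodes fire together in one cycle. The electrode numbering here is illustrative, not the Clarion’s actual channel assignment.

```python
# Hypothetical grouping of electrode activations for the three
# delivery patterns described above.

electrodes = [1, 2, 3, 4, 5, 6, 7, 8]

# SAS: all active electrodes receive their analog signals at the same time.
sas_groups = [tuple(electrodes)]                # one simultaneous group

# CIS: strictly one electrode at a time, cycled at a high rate.
cis_groups = [(e,) for e in electrodes]         # eight sequential groups

# PPS: partially simultaneous; electrodes are activated two at a time.
pps_groups = [tuple(electrodes[i:i + 2]) for i in range(0, len(electrodes), 2)]

print(sas_groups, cis_groups, pps_groups, sep="\n")
```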

 

COMBI 40+

Currently, the Med-El COMBI 40+ cochlear implant device is under investigation in the United States; as a result, only selected cochlear implant centers can offer this particular device to their patients (Beiter & Shallop, 1998). The COMBI 40+ system has the capability to process sound information at the high rate of 18,000 sequential pulses per second (Med-El, n.d.). The standard electrode array for the Med-El device has 24 electrodes (Beiter & Shallop, 1998). Not only does Med-El have a standard electrode array, but the company has also manufactured a short electrode array as well as a split electrode array for special cases, such as patients who have a malformed cochlea or bony growth within the cochlea (Med-El, n.d.).

There are several different speech processors available to recipients of the Med-El COMBI 40+ system. Some patients have the CIS PRO+ speech processor, a belt-worn processor that can be programmed to hold three different mapping strategies (Med-El, n.d.). The CIS PRO+ also includes a headset, which has a behind-the-ear microphone and a transmitter connected to the cable (Med-El, n.d.).

Another speech processor, the TEMPO+, is also available. This particular speech processor is smaller and can be worn as a behind-the-ear device or can be separated into two parts, with the control unit placed behind the ear and the battery pack placed elsewhere (Med-El, n.d.).

The COMBI 40+ device uses the Continuous Interleaved Sampling (CIS) speech processing strategy. In addition to this processing strategy, Med-El is currently investigating the Jitter CIS+ strategy, which randomizes the timing of the stimulation (Med-El, n.d.). A “High-Rate n-of-m” strategy is another programming strategy available to users of the Med-El cochlear implant. The High-Rate n-of-m program stimulates only the electrodes whose channels contain the most energy, instead of representing all of the different frequencies (Med-El, n.d.).