Cochlear Implant

Description[edit]

The cochlear implant is a device that restores hearing through an electronic mechanism surgically inserted into the patient's inner ear. As Fig. 1 shows, an external component is also required to capture ambient audio and convert it into digital signals. A processing device then translates these signals into electrical energy, triggering the implanted electrodes to stimulate the auditory nerve. Recently, there has been interest in leveraging smartphones to shrink the components external to the body and to reduce the number of devices the patient must carry. In such a scenario, the smartphone records the audio stream, performs the audio processing, and sends the converted signals to the implant. For this hybrid system to be useful, the audio processing must handle sound samples every 8 ms.

Fig. 1 Cochlear Implant Device

We separate the application logic into two distinct components: a user interface that controls volume, noise reduction, etc., and a sound processing component that converts a fixed number of audio stream frames into signals and transmits them to the implanted device every 8 ms. The audio recording and processing tasks are best modeled as real-time periodic services that exchange data through RTDroid's real-time channels. The UI is naturally modeled as a non-real-time activity, since it allows the user to modify application settings. These components must be able to communicate with each other. In our example, if the user requests a volume change, a volume event receiver must pass the request to the audio processing service, which then adjusts its signal processing accordingly and notifies the activity so it can update the on-screen volume indicator.
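RTDroid's actual service and channel APIs are not reproduced here; the following plain-Java sketch (all names are ours) only illustrates the structure just described: a task scheduled every 8 ms that drains audio frames from a shared buffer, plus an entry point through which the non-real-time UI can request a volume change.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative stand-in for the sound processing component. */
public class ProcessingTask implements Runnable {
    private final BlockingQueue<short[]> inputBuffer;            // filled by the recording task
    private final AtomicInteger volume = new AtomicInteger(50);  // last value requested by the UI

    public ProcessingTask(BlockingQueue<short[]> inputBuffer) {
        this.inputBuffer = inputBuffer;
    }

    /** Called from the non-real-time UI when the user changes the volume. */
    public void requestVolumeChange(int newVolume) {
        volume.set(newVolume);
    }

    @Override
    public void run() {
        // One period: take the next batch of frames from the shared buffer.
        // In the RTDroid version, a real-time channel plays this queue's role.
        short[] frames = inputBuffer.poll();
        if (frames == null) {
            return; // no data arrived this period
        }
        int v = volume.get();
        // ... convert the frames into electrode signals, scaled by v,
        // and transmit them to the implanted device ...
    }

    public static void main(String[] args) {
        BlockingQueue<short[]> buffer = new ArrayBlockingQueue<>(16);
        ProcessingTask task = new ProcessingTask(buffer);
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(task, 0, 8, TimeUnit.MILLISECONDS); // 8 ms period
    }
}
```

A plain `ScheduledExecutorService` gives no deadline guarantees; that gap is exactly what RTDroid's real-time periodic services are meant to close.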

Performance Comparison[edit]

For the experiment, we implement our cochlear implant application with two real-time services for audio recording and processing (RecordingService and ProcessingService) and a real-time receiver for output error checking (ResultsReceiver). The RecordingService reads audio samples from the sound device and sends them to a shared buffer for the ProcessingService to process. The ProcessingService acquires 128 audio samples from the shared buffer, processes them, and sends the processed output to the ResultsReceiver. We measure the audio processing duration by taking timestamps: the duration is the difference between the time when the audio samples are inserted into the shared buffer and the time when the audio output is received by the ResultsReceiver. Per the timing requirement mentioned above, this processing duration is expected to stay within 8 ms.
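The measurement itself is a simple timestamp difference. A minimal sketch, assuming each batch of samples carries a sequence number (the class and method names are ours, not the benchmark's actual instrumentation):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Illustrative timestamping of the audio processing pipeline. */
public class LatencyProbe {
    // Maps a batch sequence number to the time its samples entered the shared buffer.
    private final ConcurrentMap<Long, Long> insertedAt = new ConcurrentHashMap<>();

    /** Called by RecordingService right after inserting a batch of 128 samples. */
    public void onInserted(long seq) {
        insertedAt.put(seq, System.nanoTime());
    }

    /** Called by ResultsReceiver when the processed output for that batch arrives. */
    public void onReceived(long seq) {
        Long start = insertedAt.remove(seq);
        if (start == null) {
            return; // no matching insertion recorded
        }
        long latencyMs = (System.nanoTime() - start) / 1_000_000;
        // A latency above 8 ms counts as a missed deadline release.
        System.out.println("batch " + seq + ": " + latencyMs + " ms"
                + (latencyMs > 8 ? " (deadline miss)" : ""));
    }
}
```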

The experiment uses a Nexus 5 running stock Android M (v6.0) with the Android Linux kernel (v3.4.0). The Nexus 5 is equipped with a quad-core ARM processor. For precise timing, we enable only one core and disable the remaining cores during the experiments.
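The text does not say how the cores were disabled; on a rooted Android device this is typically done through the standard Linux CPU-hotplug sysfs interface. A minimal sketch under that assumption (requires root):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/** Takes cores 1..3 offline, leaving only cpu0 enabled (requires root). */
public class SingleCore {
    public static void main(String[] args) throws IOException {
        for (int cpu = 1; cpu <= 3; cpu++) {
            Path online = Paths.get("/sys/devices/system/cpu/cpu" + cpu + "/online");
            Files.write(online, "0".getBytes()); // "0" = offline, "1" = online
        }
    }
}
```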

During the experiments, we collect 4,000 audio processing latencies per execution. We run the experiment 7 times, keep only the last 5 executions, and plot their results. The results of the first two executions are discarded to allow the VM to warm up. Fig. 2 shows the aggregated results, plotting the frequencies of the processing durations over the 5 executions. The red line shows the processing duration of the RTDroid application, with zero missed deadline releases. In contrast, the blue line shows the processing duration of the Android application, with 1,122 missed deadline releases. Fig. 3 compares the Android application with all cores enabled against the single-core configuration; we do not observe any significant difference between the two configurations.

Fig. 2 Android vs. RTDroid

Fig. 3 Multi-core Processor vs. Single-core Processor

To better understand the large latencies on Android, we performed a breakdown analysis of the Android application. We also measure the communication cost in the cochlear application, which includes one message passing from the RecordingService to the ProcessingService and one intent broadcast from the ProcessingService to the ResultsReceiver. Fig. 4 shows two types of latency: the overall latency is the entire duration of the audio processing procedure, and the communication latency is the sum of the costs of message passing and intent broadcasting. It shows that the communication cost dominates the processing procedure, as both message passing and intent broadcasting can trigger context switches and task scheduling. Thus, the application's tasks can be preempted by other tasks from system services or other applications. Fig. 5 shows the detailed task scheduling trace for a missed deadline release of the ProcessingService.
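Under the same timestamping scheme, the communication share can be isolated by stamping each hop separately. A sketch of the breakdown (field names are ours; the original instrumentation is not shown):

```java
/** Per-batch timestamps (nanoseconds) used for the latency breakdown. */
public class BatchTimestamps {
    long inserted;        // samples placed in the shared buffer by RecordingService
    long processingStart; // ProcessingService dequeues the batch
    long processingEnd;   // ProcessingService finishes signal conversion
    long received;        // ResultsReceiver gets the broadcast output

    /** Message passing (buffer -> service) plus intent broadcast (service -> receiver). */
    long communicationNs() {
        return (processingStart - inserted) + (received - processingEnd);
    }

    /** Entire audio processing procedure, as plotted in Fig. 4. */
    long overallNs() {
        return received - inserted;
    }
}
```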

Fig. 4 Audio Processing Latency

Fig. 5 Zoom-in View of a Missed Deadline Release for the Audio Processing Task