Ebook

Chapter 4

AI to Aid Treatment of Diseases and Health Conditions


The ability of AI systems to ingest and analyze large volumes of data and produce an analysis in a very short time makes them powerful tools for aiding in the treatment of diseases and health conditions. For example, incorporating AI into medical devices that integrate multiple sensors could expedite early detection of a clinical problem or provide insights that improve the quality of treatment. Using AI, the vast and complex physiological data generated by the human body could potentially be interpreted more quickly and accurately to formulate a medical intervention.

A hand grasping a cup and pouring contents into a glass. The person's wrist is wrapped in a sleeve of electrodes.

An AI-based brain-machine interface enables a man with a paralyzed arm to pour items into a cup. (Image credit: Battelle)

Challenge

For patients with advanced amyotrophic lateral sclerosis (ALS), communication becomes increasingly difficult as the disease progresses. In many cases, ALS (also known as Lou Gehrig’s disease) leads to locked-in syndrome, in which a patient is completely paralyzed but remains cognitively intact. Eye-tracking devices and, more recently, electroencephalogram (EEG)-based brain-computer interfaces (BCIs) enable ALS patients to communicate by spelling phrases letter by letter, but it can take several minutes to communicate even a simple message.

Solution

Researchers at The University of Texas at Austin developed a noninvasive technology that uses wavelets, machine learning, and deep learning neural networks to decode magnetoencephalography (MEG) signals and detect entire phrases as the patient imagines speaking them. The algorithm performs in near real time: when the patient imagines a phrase, it appears almost immediately.

  • With Wavelet Toolbox™, they denoised the MEG signals and decomposed them into specific neural oscillation bands (high gamma, gamma, beta, alpha, theta, and delta brain waves) using wavelet multiresolution analysis techniques.
  • The researchers then extracted features from the signals, using Statistics and Machine Learning Toolbox™ to calculate a variety of statistical features. They used the extracted features to train a support vector machine (SVM) classifier and a shallow artificial neural network (ANN) classifier on neural signals corresponding to five phrases. This approach yielded an accuracy of about 80% and served as the accuracy baseline.
  • Next, the team computed wavelet scalograms of the MEG signals to represent richer features and used them as inputs to fine-tune three pretrained deep convolutional neural networks—AlexNet, ResNet, and Inception-ResNet—for decoding speech from MEG signals. Combining wavelets with deep learning techniques boosted the overall accuracy to 96%.
  • To speed up training, the team conducted the training on a seven-GPU parallel computing server using Parallel Computing Toolbox™.
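Wavelet multiresolution analysis of this kind splits a signal into progressively coarser frequency bands. The team worked in MATLAB with Wavelet Toolbox™; as a rough illustration of the underlying idea only, here is a minimal Python sketch using a hand-rolled Haar wavelet (the function names and the toy signal are ours, not the team's):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform.
    Returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse of one Haar DWT level."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def multiresolution(x, levels):
    """Split a signal into `levels` detail bands plus a final
    approximation; the first band holds the highest frequencies."""
    bands = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        bands.append(detail)   # each extra level isolates a lower band
    bands.append(approx)
    return bands

# Example: a 512-sample signal mixing a slow and a fast oscillation
t = np.arange(512)
signal = np.sin(2 * np.pi * t / 64) + 0.5 * np.sin(2 * np.pi * t / 4)
bands = multiresolution(signal, 4)

# Reconstruct from the bands and check the decomposition is lossless
approx = bands[-1]
for detail in reversed(bands[:-1]):
    approx = haar_idwt(approx, detail)
print(np.allclose(approx, signal))  # True
```

In practice, a production MEG pipeline would use a smoother wavelet family and map the decomposition levels to the named oscillation bands (delta through high gamma) based on the sampling rate; the Haar wavelet is used here only because it is easy to implement from scratch.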
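The baseline step pairs simple statistical features with a conventional classifier. The sketch below is an illustrative Python analogue, not the team's MATLAB code: a handful of summary statistics and a nearest-centroid classifier stand in for the toolbox feature set and the SVM/shallow ANN, applied to synthetic two-class "trials":

```python
import numpy as np

def stat_features(trial):
    """Summary statistics of one signal trial (a simplified stand-in
    for a richer feature set): mean, std, RMS, and peak-to-peak."""
    trial = np.asarray(trial, dtype=float)
    return np.array([trial.mean(), trial.std(),
                     np.sqrt(np.mean(trial**2)), np.ptp(trial)])

class NearestCentroid:
    """Minimal classifier: assign each trial to the class whose
    mean feature vector is closest in Euclidean distance."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.classes_])
        return self

    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic stand-ins for neural trials of two imagined phrases
rng = np.random.default_rng(0)
low = [np.sin(np.linspace(0, 8 * np.pi, 200))
       + 0.1 * rng.standard_normal(200) for _ in range(20)]
high = [3 * np.sin(np.linspace(0, 8 * np.pi, 200))
        + 0.1 * rng.standard_normal(200) for _ in range(20)]
X = np.array([stat_features(t) for t in low + high])
y = np.array([0] * 20 + [1] * 20)

clf = NearestCentroid().fit(X, y)
acc = (clf.predict(X) == y).mean()
print(f"baseline accuracy: {acc:.2f}")
```

The point of a baseline like this is the workflow, not the model: once feature extraction and evaluation are in place, swapping in an SVM or shallow ANN is a small change.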
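A scalogram is the magnitude of a continuous wavelet transform: a 2-D time-frequency image that can be fed to an image-classification CNN such as AlexNet. Assuming a complex Morlet wavelet (a common choice; the team's exact settings are not given), a bare-bones Python version might look like:

```python
import numpy as np

def morlet(n, scale, w0=6.0):
    """Complex Morlet wavelet of length n at the given scale."""
    t = (np.arange(n) - n // 2) / scale
    wav = np.exp(1j * w0 * t) * np.exp(-t**2 / 2)
    return wav / np.sqrt(scale)

def scalogram(x, scales, wavelet_len=256):
    """|CWT| of x at the given scales, computed by direct convolution
    with a complex Morlet wavelet. Returns a 2-D time-frequency image
    of shape (len(scales), len(x))."""
    x = np.asarray(x, dtype=float)
    rows = [np.abs(np.convolve(x, morlet(wavelet_len, s), mode="same"))
            for s in scales]
    return np.array(rows)

# A signal that switches frequency halfway through
t = np.arange(1024)
sig = (np.sin(2 * np.pi * t / 16) * (t < 512)
       + np.sin(2 * np.pi * t / 64) * (t >= 512))
img = scalogram(sig, scales=np.geomspace(2, 64, 32))
print(img.shape)  # (32, 1024)
```

Each row corresponds to one wavelet scale (roughly, one frequency), so the image shows where in time each frequency carries energy; to feed a pretrained CNN, the image would then be resized to the network's input dimensions (e.g., 227×227 for AlexNet).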

Results

Using MATLAB, the team was able to iterate quickly among the different feature extraction methods and train several machine learning and deep learning models, achieving an overall MEG speech decoding accuracy of 96%. MATLAB allowed them to combine wavelet techniques with deep learning in a matter of minutes, significantly faster than would have been possible in other programming languages. In addition, the team was able to switch to training on multiple GPUs by changing only one line of code. Using Parallel Computing Toolbox™ and a server with seven GPUs trained the networks about 10 times faster.

A four-step left-to-right process shows MEG data collection, data processing into a scalogram, neural network data interpretation, and an output of decoded speech.

Converting brain MEG data into word phrases. (Image credit: UT Austin)

Challenge

Paralysis is the loss of the ability to move some or all of the body, typically due to damage to the brain or spinal cord.

Solution

Researchers at Battelle's NeuroLife group and The Ohio State University developed a brain-computer interface (BCI) that records and analyzes signals from the brain and sends them as commands to a device to perform an action. In this case, the team designed a system to help a patient regain conscious control of his fingers, hand, and wrist.

  • They used MATLAB to analyze electroencephalogram (EEG) signals from the brain and trained machine learning algorithms to detect and decode the brain’s subperceptual touch signals.
  • When the patient using this BCI touched an object, these algorithms teased apart the motor and sensory signals, transmitting touch feedback to a vibrotactile band and motor signals to an electrode sleeve.
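As a conceptual sketch only (the frequency bands, thresholds, and device names below are illustrative assumptions, not Battelle's published pipeline), the decode-and-route step could be outlined in Python like this:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Average power of x in the [lo, hi) Hz band via the FFT."""
    x = np.asarray(x, dtype=float)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    power = np.abs(np.fft.rfft(x))**2 / len(x)
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

def route(eeg_window, fs=250.0):
    """Toy router: treat beta-band (13-30 Hz) power as a motor cue
    for the electrode sleeve, and slower 1-8 Hz activity as a sensory
    (touch) cue for the vibrotactile band. The thresholds are
    arbitrary illustrative values."""
    commands = {}
    if band_power(eeg_window, fs, 13, 30) > 0.05:
        commands["sleeve"] = "stimulate_grip"
    if band_power(eeg_window, fs, 1, 8) > 0.05:
        commands["vibrotactile_band"] = "touch_feedback"
    return commands

# A one-second window dominated by a 20 Hz (motor-range) oscillation
t = np.arange(0, 1, 1 / 250.0)
window = (np.sin(2 * np.pi * 20 * t)
          + 0.05 * np.random.default_rng(1).standard_normal(len(t)))
print(route(window))  # → {'sleeve': 'stimulate_grip'}
```

The real system uses trained machine learning decoders rather than fixed band-power thresholds, but the architecture is the same: one decoded stream drives motor stimulation, the other drives touch feedback, closing the loop between brain and hand.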

Five steps in the BCI system form a loop, connecting residual touch signals received by the brain to restored touch sense and motor control via a sleeve.

Steps in the BCI system. (Image credit: Battelle)

Results

The patient can now pick up objects without looking at them. The system is too large to use at home but could evolve into a system that improves the day-to-day lives of paralyzed people.

Challenge

Diabetic neuropathy is a type of nerve damage that occurs mostly in the legs and feet. It is caused by chronically elevated blood glucose levels.

Solution

With MATLAB, Siddarth Nair, the founder of Xfinito Biodesigns, created software called Xeuron.ai to govern a wearable shoe that treats diabetic neuropathy. Xeuron.ai records information from the shoe, such as pressure, temperature, response to stimuli, and motion. Deep learning algorithms process the data through hybrid computing, which Nair says brings down both costs and computing time. In response to the data, the system delivers personalized therapy through electric or magnetic stimuli, vibration, or thermal or light pulses.
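The sensor-to-therapy mapping can be pictured as a decision layer over the recorded signals. The Python sketch below is purely illustrative: the field names, units, and thresholds are invented for the example, and a simple rule set stands in for the deep learning model described above:

```python
import numpy as np

# Hypothetical sensor reading from the shoe insole (names illustrative)
reading = {
    "pressure_kpa": np.array([30.0, 85.0, 40.0, 90.0]),  # four zones
    "temperature_c": 34.5,
    "stimulus_response_ms": 420.0,  # reaction time to a test stimulus
}

def select_therapy(r, pressure_limit=80.0, response_limit=350.0):
    """Toy decision layer standing in for the deep learning model:
    map sensor readings to the therapy modes named in the text."""
    therapies = []
    if (r["pressure_kpa"] > pressure_limit).any():
        therapies.append("vibration")          # offload high-pressure zones
    if r["stimulus_response_ms"] > response_limit:
        therapies.append("electric_stimuli")   # slowed nerve response
    if r["temperature_c"] < 30.0:
        therapies.append("thermal_pulses")     # poor-circulation proxy
    return therapies

print(select_therapy(reading))  # → ['vibration', 'electric_stimuli']
```

In the actual system a trained model, not hand-set thresholds, maps the multichannel sensor history to a personalized therapy plan; the sketch only shows the shape of that input-to-therapy mapping.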

A shoe with an insert that contains sensors to measure pressure, temperature, pronation, and foot strike plus a connection to a smartphone monitor.

Xeuron footwear system. (Image credit: Xfinito Biodesigns)