Mark Cook, MD, of the University of Melbourne, and Brian Litt, MD, of the Penn Epilepsy Center and the Center for Neuroengineering and Therapeutics at the University of Pennsylvania, discuss how new devices, big data, and massive online competitions are enabling prediction of seizures before they start.
On December 6, 2014, at the presidential session of the 68th annual meeting of the American Epilepsy Society (AES) in Seattle, Washington, Mark Cook, MD, of the University of Melbourne, discussed the results of the Neurovista study, which tested the efficacy of an implanted device used to forecast seizure onset.
A device capable of reliably forecasting seizures before they occur would help reduce the risk for seizure-related injuries and the fear of experiencing a seizure. However, it is important to recognize the limitations of such systems. Cook prefers the term “forecasting” to “prediction” because, he noted, “prediction implies being able to give very specific details, whereas a forecast implies more of a range of outcomes.”
Scientists have been attempting to forecast seizures for many years. In 1975, Viglione and Walsh succeeded in predicting seizures seconds before onset by monitoring for 3-Hz spike-wave discharges. By the 1980s, researchers were using nonlinear techniques to forecast seizures, although success with these early attempts was limited to a few patients and was difficult to replicate on a large scale. Over the past 10 years, larger epilepsy monitoring unit datasets have become available, and more sophisticated algorithms are under development.
The Neurovista study attempted to overcome the past limitation of small datasets by obtaining long-term data in a small group of patients. In the 15-patient Neurovista study, investigators surgically implanted subdural electrodes connected to a device placed just below the clavicle. Each device was accompanied by a handheld unit that gave patients an indication of the likelihood that a seizure would occur in the coming minutes to hours. This unit had 3 lights: a blue light indicated the lowest risk for seizures, a white light indicated intermediate risk, and a red light indicated high risk.
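As a rough illustration of how such an advisory display can be driven, the short Python sketch below maps an estimated seizure likelihood to the 3 lights. The thresholds are hypothetical; the actual Neurovista cutoffs were tuned for individual patients and are not reported here.

# Illustrative sketch only: the cutoffs below are hypothetical, not the
# patient-specific thresholds used by the Neurovista system.
def advisory_light(seizure_likelihood, low_cutoff=0.2, high_cutoff=0.6):
    """Map an estimated seizure likelihood (0 to 1) to the handheld unit's lights."""
    if seizure_likelihood >= high_cutoff:
        return "red"    # high risk
    if seizure_likelihood >= low_cutoff:
        return "white"  # intermediate risk
    return "blue"       # lowest risk

print(advisory_light(0.75))  # prints "red"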
All patients enrolled in the study had failed to attain seizure control with at least 2 agents, and had a seizure frequency of 2 to 12 events per month. Patients involved in the study were allowed to take anticonvulsant medications, with the exception of a single man aged 62 years, who took no anticonvulsant drugs.
The Neurovista device collected data across 16 implanted electrodes and began recording as soon as seizure-like activity was detected. By recording the background audio during potential seizure events, researchers could later review the sound files to help determine if the person fell or received help.
After the first 4 months of the trial yielded no useful predictions of seizure activity, the trial was nearly terminated. Once that 4-month period had elapsed, however, the device's ability to forecast seizures improved, not because the predictive algorithm had changed, but because a large enough body of data had been collected to train the algorithm to detect seizures more effectively.
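To illustrate why accumulating data matters, the Python sketch below trains a generic classifier on synthetic features while the training set grows and scores it on a fixed held-out set. It is not the Neurovista algorithm, whose details are not described here; the features, numbers, and classifier are invented purely to show the principle.

# Hypothetical illustration of patient-specific training on accumulating data;
# features and labels are synthetic, not Neurovista recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_eeg_features(n_windows, n_features=16):
    """Stand-in for features computed from 16 intracranial channels."""
    X = rng.normal(size=(n_windows, n_features))
    y = rng.binomial(1, 0.1, size=n_windows)   # 1 = window preceding a seizure
    X[y == 1] += 0.5                           # make pre-seizure windows separable
    return X, y

X_test, y_test = synthetic_eeg_features(2000)  # fixed evaluation set
for months, n_train in [(1, 500), (4, 2000), (8, 4000)]:
    X_train, y_train = synthetic_eeg_features(n_train)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"~{months} months of training data: AUC = {auc:.2f}")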
Not all patients in the study experienced good results with the Neurovista device. For instance, 1 patient experienced no seizures over the course of a month, while the device produced 120 false-positive detections of ictal events. In addition, 2 patients had to have the device removed due to adverse events: 1 patient experienced infection, and the other patient experienced fibrosis and banding around electrode sites in the neck.
Despite these setbacks, for many patients the Neurovista device provided a good indication of when seizures would occur. When periods of high risk and moderate risk were compared, the device successfully forecast seizures with likelihood ratios ranging from 0.8 to 14.3, and in 2 patients, every seizure that occurred over the course of the study was correctly predicted by the implanted system. By the end of the trial, the device met performance criteria for 11 of 15 subjects.
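For context, a positive likelihood ratio of this kind is conventionally computed as sensitivity divided by (1 minus specificity). The brief sketch below shows the arithmetic with made-up numbers; the figures are not taken from the trial.

# Hypothetical numbers solely to illustrate the likelihood ratio calculation.
def positive_likelihood_ratio(sensitivity, specificity):
    return sensitivity / (1.0 - specificity)

# e.g., flagging 70% of seizures as high risk while flagging only 10% of
# seizure-free periods would give a likelihood ratio of 7.0
print(positive_likelihood_ratio(sensitivity=0.70, specificity=0.90))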
On average, when the device predicted seizures, patients received approximately 90 minutes of warning, ranging from 8 minutes to 4 hours before the event. According to Cook, “This was genuinely useful to patients. Patients at work could go home, and be in a safe place, and could administer acute medication.”
Acknowledging the limitations of the Neurovista device, Cook continued, “Much work remains to be done in algorithm design and performance measures.”
Addressing this limitation, Brian Litt, MD, of the Penn Epilepsy Center and the Center for Neuroengineering and Therapeutics at the University of Pennsylvania, discussed the work of designing more sophisticated algorithms for seizure prediction.
Litt and colleagues, in an initiative funded by the National Institutes of Health (NIH), took the novel approach of conducting an online competition to predict the onset of epileptic seizures using sophisticated computer algorithms applied to a large set of human and animal electroencephalogram (EEG) data. By sharing information through an online portal at IEEG.org and running a prize-driven, massive worldwide competition, the AES has stimulated innovation in the area of seizure prediction.
Monitoring the risk for seizures is limited by the enormous complexity of the brain. To monitor all brain activity, according to Litt, would require “750-thousand FiOS cables coming out of your head.” In the face of this complexity, the NIH’s BRAIN initiative is requesting studies evaluating clinically relevant linkages between behavior and recordings at the cellular level.
Given the large amount of data that can be collected from the brain, it is important for investigators to share data. Until the founding of IEEG.org, researchers did not have a place to share the EEG data and algorithms needed to improve seizure forecasting. The IEEG.org website, for the first time, enables researchers throughout the world to share data. To date, 618 active users have registered with the website, and those researchers have made approximately 1300 datasets available.
Building on this data sharing, 504 teams from over 30 countries entered the prediction contest. The teams analyzed animal recordings (6 dogs with epilepsy monitored over the course of 18 months) and human recordings from various research centers to create computer algorithms that predict when seizures will occur and detect EEG patterns indicating seizure activity.
Ultimately, the contest achieved 97% detection accuracy for seizures (as measured by the area under the receiver operating characteristic curve, which reflects sensitivity and specificity), with the detection challenge won by Michael Hills of Melbourne, Australia, and 84% accuracy in seizure prediction, with the prediction challenge won by a collaboration of epilepsy researchers, programmers, and machine learning experts in Australia and California.
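The area under the receiver operating characteristic curve is computed from the algorithm's score for each data clip. The minimal sketch below uses synthetic labels and scores, not contest data, to show how the metric is obtained.

# Synthetic example of the contest metric; labels and scores are made up.
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 0, 1, 1, 0, 1]                    # 1 = clip contains (or precedes) a seizure
y_score = [0.1, 0.3, 0.8, 0.65, 0.9, 0.6, 0.4, 0.7]   # algorithm's confidence for each clip

print(roc_auc_score(y_true, y_score))                 # 0.9375; 1.0 is perfect, 0.5 is chance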
The use of prizes to drive innovation dramatically cuts down on the cost of research. An NIH and EU collaboration, with $40 million over 15 years, attempted to develop a seizure prediction algorithm, and achieved better-than-random results. In the challenge described here, in 3 months, with a $25,000 prize, researchers achieved 84% predictive accuracy and 97% detection accuracy. Further evaluation will reveal whether these new algorithms are clinically valid and useful, but this project demonstrates the principle of using big data and competitions to drive innovation in medical science.