Researchers at the UW Networks & Mobile Systems Lab have developed smartphone and smart speaker technology to monitor the breathing patterns of people at risk for cardiac arrest.
Many cardiac arrest victims do not survive because they are alone in their homes with no one to witness the event; in North America alone, nearly 300,000 people die each year from such unwitnessed attacks.
Agonal breathing, characterized by distinct gasping sounds, is a marker of cardiac arrest and occurs in about 50% of cardiac arrest victims. Immediate CPR can greatly increase the chance of survival; however, a second person has to be present to detect the breathing and call emergency medical services or someone else who can administer CPR.
"There is this proliferation of smart speakers and smart phones around the world and what we wanted to develop was a passive non-contact technology that leverages these devices to audibly detect agonal breathing and connect them to CPR support," computer science doctoral candidate Justin Chan said.
One of the most common settings for out-of-hospital cardiac arrest is the bedroom, so the team created an algorithm to monitor sleeping people for agonal breathing.
The data for the algorithm comes from 162 emergency call recordings from Seattle’s emergency medical services taken over eight years. The witnesses calling in held their phones close to the patients’ chests so the dispatchers could listen and decide if immediate CPR was needed.
The team re-recorded the calls on several smart devices: an Amazon Alexa speaker, an iPhone 5s, and a Samsung Galaxy S4.
They played the recordings from different distances to simulate users in various parts of a bedroom, and added interfering sounds commonly found in the home, like pet noises, air conditioning, and cars honking.
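The noise-injection step resembles a standard audio data augmentation technique. As a rough illustration only, not the team's actual pipeline, mixing an interfering sound into a recording at a chosen signal-to-noise ratio might look like this (the function name and parameters are illustrative):

```python
import numpy as np

def mix_noise(recording: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add interfering noise to a recording at a target signal-to-noise ratio (dB)."""
    # Loop or trim the noise so it matches the recording's length.
    noise = np.resize(noise, recording.shape)
    signal_power = np.mean(recording ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so that 10*log10(signal_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return recording + scale * noise
```

Training and testing on such mixtures is one common way to make an audio classifier robust to household sounds like those the researchers used.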
Their algorithm correctly identified agonal breathing up to 20 feet away 97% of the time.
The team also tested instances where there was no agonal breathing. They used 83 hours of audio data from sleep studies which included sounds like snoring and breathing from sleep apnea. Their algorithm misidentified sleep noises as agonal breathing only 0.14% of the time.
All the agonal recordings in this study come from the Seattle community, and the dataset is limited, containing only 10 minutes of clearly captured agonal breathing.
The researchers envision the algorithm running as a phone app or a smart speaker skill while the user sleeps.
Chan mentioned that user privacy was an important factor in the design process. Audio is not saved or sent to the cloud for processing; the algorithm is efficient enough to run locally on a single device and stores audio no longer than the few seconds needed for processing.
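The privacy design described here, keeping only a short window of audio in memory, can be sketched with a fixed-size ring buffer. This is a hypothetical illustration, not the team's implementation:

```python
from collections import deque

class RollingAudioBuffer:
    """Hold only the most recent few seconds of audio samples; older samples
    are discarded automatically, so nothing persists or leaves the device."""

    def __init__(self, sample_rate: int, window_seconds: float):
        # maxlen bounds the deque: pushing beyond it evicts the oldest samples.
        self._buf = deque(maxlen=int(sample_rate * window_seconds))

    def push(self, chunk):
        # Append newly captured samples; the deque silently drops the oldest.
        self._buf.extend(chunk)

    def window(self):
        # A classifier would consume this snapshot; the underlying audio is
        # gone as soon as newer samples overwrite it.
        return list(self._buf)
```

A detection loop would call `push()` on each microphone chunk and run the classifier on `window()`; because the buffer is bounded, the device never holds more than the configured few seconds of audio.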
Chan and his team want to generalize the technology to more settings where cardiac arrests are likely to occur.
To accomplish that goal, more rigorous testing will be needed for unpredictable environments like elder care facilities or hospital wards. These environments can be noisy and filled with multiple voices, which would make the agonal breathing detection more difficult.
Despite the progress the team has already made, a larger dataset is needed to make the algorithm more accurate and to generalize it to different variations of agonal breathing and different settings.
Reach reporter Tiasha Datta at email@example.com. Twitter: @TiashaDatta2