
A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Cardiac Analysis
Qijia Huang, Huanrui Yang, Eric Zeng, Yiran Chen
Duke University

Corresponding Author: [email protected]

Abstract

The need for telehealth and home-based monitoring has surged during the COVID-19 pandemic. Building on recent advances in wearable sensors that capture concurrent electrocardiogram (ECG) and phonocardiogram (PCG) signals, this paper proposes a novel framework for synchronized ECG and PCG analysis for cardiac function monitoring. Our system jointly performs R-peak detection on the ECG, fundamental heart sound segmentation on the PCG, and cardiac condition classification. First, we propose a recurrent-neural-network-based R-peak detection algorithm with a new labeling method. The new labeling strategy uses a regression objective to resolve the class-imbalance problem of earlier classification-based formulations. Second, we propose a 1D U-Net architecture for PCG segmentation within a single heartbeat length. We further exploit the multi-modality of the signals through contrastive learning to enhance model performance. Finally, we extract 20 features from our signal labeling algorithms and apply them to two real-world problems: snore detection during sleep and COVID-19 detection. The proposed method achieves state-of-the-art performance on multiple benchmarks built from two public datasets, MIT-BIH and PhysioNet 2016, and provides a cost-effective alternative to labor-intensive manual segmentation, with more accurate segmentation than existing methods. On a dataset collected by Bayland Scientific that includes synchronized ECG and PCG signals, the proposed system achieves end-to-end R-peak detection with an F1 score of 99.84%, heart sound segmentation with an F1 score of 91.25%, and snore and COVID-19 detection with accuracies of 96.30% and 95.06%, respectively.
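To make the regression-style labeling concrete, the following is a minimal sketch, not the authors' exact design: the Gaussian target shape and the width parameter `sigma` are illustrative assumptions. It shows how sparse R-peak annotations can be converted into dense per-sample regression targets, sidestepping the class imbalance that arises when only one sample per heartbeat is labeled positive.

```python
import numpy as np

def rpeak_regression_targets(signal_len, rpeak_indices, sigma=8):
    """Turn sparse R-peak annotations into dense regression targets.

    Plain 0/1 classification labels give one positive sample per
    heartbeat, which is heavily imbalanced; smearing each annotated
    peak into a Gaussian bump gives every nearby sample a graded
    target value that a recurrent network can regress against.
    """
    positions = np.arange(signal_len, dtype=np.float32)
    targets = np.zeros(signal_len, dtype=np.float32)
    for r in rpeak_indices:
        bump = np.exp(-0.5 * ((positions - r) / sigma) ** 2)
        targets = np.maximum(targets, bump)  # overlapping bumps: keep the max
    return targets

# Example: a 2000-sample ECG window with annotated R peaks.
y = rpeak_regression_targets(2000, [400, 1100, 1800])
```

At inference time, predicted R-peak locations can then be recovered as local maxima of the regressed curve above a threshold.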
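Similarly, here is a minimal PyTorch sketch of a 1D U-Net for per-sample heart sound segmentation. The channel widths, the two-level depth, and the four-state output (S1 / systole / S2 / diastole) are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ConvBlock1d(nn.Module):
    """Two 1D convolutions, each followed by batch norm and ReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch), nn.ReLU(inplace=True),
            nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm1d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class UNet1d(nn.Module):
    """Encoder-decoder with skip connections over a 1D PCG segment,
    producing per-sample logits over heart-sound states."""
    def __init__(self, in_ch=1, n_classes=4, base=16):
        super().__init__()
        self.enc1 = ConvBlock1d(in_ch, base)
        self.enc2 = ConvBlock1d(base, base * 2)
        self.bottom = ConvBlock1d(base * 2, base * 4)
        self.pool = nn.MaxPool1d(2)
        self.up2 = nn.ConvTranspose1d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = ConvBlock1d(base * 4, base * 2)
        self.up1 = nn.ConvTranspose1d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = ConvBlock1d(base * 2, base)
        self.head = nn.Conv1d(base, n_classes, kernel_size=1)

    def forward(self, x):                    # x: (batch, 1, length)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                 # (batch, n_classes, length)

# Example: segment a 1024-sample, heartbeat-length PCG window.
logits = UNet1d()(torch.randn(2, 1, 1024))   # -> shape (2, 4, 1024)
```

The skip connections let the decoder combine coarse rhythm context with fine temporal detail, which is the usual motivation for a U-Net over a plain encoder-decoder.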
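The abstract does not specify the exact contrastive objective used to exploit the multi-modality of the signals; one common formulation under that description is a symmetric InfoNCE loss over synchronized ECG/PCG embedding pairs, sketched below (the encoders, embedding dimension, and temperature are all assumptions).

```python
import torch
import torch.nn.functional as F

def multimodal_info_nce(ecg_emb, pcg_emb, temperature=0.1):
    """Contrastive (InfoNCE) loss pairing ECG and PCG embeddings.

    Embeddings from the same synchronized heartbeat are treated as
    positives; all other pairs in the batch serve as negatives.
    """
    ecg = F.normalize(ecg_emb, dim=1)
    pcg = F.normalize(pcg_emb, dim=1)
    logits = ecg @ pcg.t() / temperature          # (batch, batch) similarities
    labels = torch.arange(ecg.size(0), device=ecg.device)
    # Symmetric cross-entropy: match each ECG to its PCG and vice versa.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```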