Discussion
Our study describes the diagnostic concordance of two artificial intelligence-based applications and a visual discrimination algorithm for identifying the manufacturers of implantable cardiac devices, assessed differentially across four levels of medical training.
The use of rhythm control devices is clearly increasing; however, the manufacturer is often unknown. In 2011, the CaRDIA-X® algorithm manual was created; it seeks to identify five manufacturers (Medtronic, St. Jude Medical, Boston Scientific, Biotronik, and Sorin) based on the unique morphological characteristics of each manufacturer's devices observed on chest radiographs. However, it requires demanding training, and up to 80% of physicians report difficulty applying it (3).
To address this, two applications based on artificial intelligence were created, achieving faster, simpler, and more accurate identification. Howard et al. developed PPMnn®, which achieved an accuracy of 72% (62.2%–88.9%) in identifying the manufacturer; the best agreement was between two electrophysiologists, but neither could identify the model. Subsequently, Weinreich et al. (4) developed PID® (available on the web and on mobile phones), which identifies four manufacturers on chest X-ray and correctly classified 95% of devices. The application returns the probability assigned to each candidate manufacturer.
In 2020, these applications and the CaRDIA-X® algorithm were compared, showing 93% and 86% agreement, respectively (13). This information comes from a poster presented at the American College of Cardiology 2020 meeting (ACC 2020); the study lacked a sample size calculation and was performed by the app developers at a single institution.
Regarding our results, the three applications based on artificial intelligence performed well, with agreement percentages above 80%. The highest concordance was achieved with PIDa® (agreement 90.69%, kappa 0.63). The PPMnn® and PIDw® applications had the lowest concordance, at 82% and 81.2%, respectively. These results are similar to those of recent studies, such as those by Chudow (PIDa® 89%, PIDw® 73%, and PPMnn® 71%) (13) and Sabbotke (PIDa® 87.5%) (14). This finding has been attributed to web-based applications being the most dependent on photograph quality; changes in the capture angle, as well as electromagnetic interference from the screen, have been shown to substantially affect image interpretation (12).
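The contrast between a 90.69% raw agreement and a kappa of 0.63 reflects how kappa discounts the agreement expected by chance. As a minimal illustration, the sketch below computes Cohen's kappa from a confusion matrix of app labels versus reference labels; the counts are hypothetical, not data from this study.

```python
# Cohen's kappa from a square agreement matrix.
# All counts below are illustrative only, not the study's data.

def cohens_kappa(matrix):
    """matrix[i][j] = number of devices the app labeled manufacturer i
    while the reference standard was manufacturer j."""
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    # Observed agreement: proportion of cases on the diagonal.
    p_o = sum(matrix[i][i] for i in range(k)) / n
    row_totals = [sum(matrix[i]) for i in range(k)]
    col_totals = [sum(matrix[i][j] for i in range(k)) for j in range(k)]
    # Chance agreement: product of the marginal proportions, summed.
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical two-manufacturer example with 90% observed agreement:
example = [[45, 5],
           [5, 45]]
print(round(cohens_kappa(example), 2))  # 0.8
```

With imbalanced manufacturer frequencies, chance agreement rises and kappa falls even at the same raw agreement, which is why the two measures are reported together.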
The mean agreement of the CaRDIA-X® algorithm in our study (91%) is higher than that reported in the literature: Chudow et al. described 85% agreement (13), and Shams et al. reported 61% concordance using the mobile version of the algorithm (7). The lowest concordance was found in the medical student (73.8%), which is explained by their more limited experience with patients with implantable cardiac devices. The three levels of medical specialization showed concordance above 95%, requiring only a short training period.
This is the first study to report higher concordance with the visual algorithm than with applications based on artificial intelligence. This may be because the most common St. Jude Medical® models have the "St. Jude dot," which facilitates identification with the visual algorithm. Artificial intelligence-based applications are fast; however, the web-based versions may be less accurate. In our study, the mean time to apply the CaRDIA-X® algorithm was approximately 1 min per radiograph at the end of training, with a concordance greater than 90%, which makes these reading strategies complementary rather than mutually exclusive. Combined analysis studies are needed to determine whether using two or more strategies in the same patient can improve diagnostic discrimination.