Introduction
More than 3 million people worldwide use some type of cardiac electronic device (1,2). These devices usually require interrogation; however, each manufacturer has its own programmer and software, and patients often do not remember or do not know the manufacturer of their device, which leads to delays in medical care (2).
As a solution, a visual algorithm called CaRDIA-X® was created in 2011 that uses chest X-rays to identify the type of device and its manufacturer (3), but it requires a training period. To facilitate identification, two applications based on artificial intelligence and machine learning, specifically deep neural networks, were created (4,5): Pacemaker ID®, available as a mobile app (PIDa®) and a web page (PIDw®), and Pacemaker Identification with Neural Networks® (PMMnn®), available as a web page; both are free.
The use of such applications is becoming more widespread; they generally require no training and are easy to use. However, machine learning models carry a documented risk of "overfitting," in which the neural network is excellent at recognizing images similar to those it was trained on but is less accurate with real-world examples (6,7). In Colombia and Latin America, to date, there are no data on the accuracy of this type of artificial intelligence algorithm or of the CaRDIA-X® visual algorithm. This study aims to describe the diagnostic agreement in manufacturer identification between two applications in their web and mobile versions (PIDa®, PIDw®, and PMMnn®) (4,5) and the CaRDIA-X® visual algorithm, as applied by evaluators with different levels of medical training.