The transparency of machine learning models is central to good practice when they are applied in high-stakes applications. Recent developments make this feasible for tabular data, which is prevalent in risk modelling and computer-based decision support across multiple domains, including healthcare. Important motivating factors for interpretability are outlined and practical approaches are summarised, signposting the main methods available, with pointers to the supporting literature. A key finding is that any black-box classifier making probabilistic predictions of class membership from data in tabular form can be represented by a globally interpretable model without loss of performance.
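The idea of representing a black-box probabilistic classifier with a globally interpretable model can be illustrated by surrogate distillation. The sketch below is an assumption for illustration only (random forest as the black box, a shallow decision tree as the surrogate); it is not the specific method the abstract refers to, but it shows the general pattern of fitting an interpretable model to the black box's own predictions and measuring fidelity.

```python
# Illustrative sketch: distilling a black-box classifier into a globally
# interpretable surrogate on tabular data. Model choices are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# 1. Train an opaque model that outputs class-membership probabilities.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
p = black_box.predict_proba(X)[:, 1]
bb_labels = (p >= 0.5).astype(int)

# 2. Fit an interpretable surrogate to the black box's *predictions*
#    (not the original labels), so the surrogate mimics the black box.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X, bb_labels)

# 3. Measure fidelity: how often the surrogate agrees with the black box.
agreement = np.mean(surrogate.predict(X) == bb_labels)
print(f"surrogate fidelity: {agreement:.3f}")
```

In practice the fidelity check would be run on held-out data, and the surrogate family (rules, additive models, shallow trees) would be chosen for the audience reading the explanation.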
In this study, an extended version of the Technology Acceptance Model (TAM) was used to understand the factors that might influence patient behaviour. In addition to the "typical" TAM variables (such as positive attitude and technological readiness), this study examines the role of social and individual benefits and of COVID-19 anxiety on willingness to try, intention to use, actual use, and satisfaction in a country in the Central and Eastern European region. This extension and the chosen region add novelty to the research. The results of linear and logit regression analyses based on an online questionnaire show that individual benefits and positive attitudes have a strong effect on willingness to try and to use, whereas perceived social benefits have no significant effect. These results underline the importance of awareness campaigns that emphasise the personal benefits of e-health and address the general mistrust of the technology.
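The logit regression step can be sketched in code. Everything below is an illustrative assumption: the variable names, the 5-point Likert coding, and the simulated data are not the study's actual questionnaire items; the simulation is merely constructed to mirror the reported pattern (attitude and individual benefit matter, social benefit does not).

```python
# Illustrative sketch of a TAM-style logit regression: predicting intention
# to use e-health from attitude and perceived benefits. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
attitude = rng.integers(1, 6, n)            # assumed 5-point Likert item
individual_benefit = rng.integers(1, 6, n)  # assumed 5-point Likert item
social_benefit = rng.integers(1, 6, n)      # assumed 5-point Likert item

# Simulate an outcome driven only by attitude and individual benefit,
# mirroring the abstract's finding that social benefit was not significant.
logit = -4 + 0.6 * attitude + 0.5 * individual_benefit
intention = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([attitude, individual_benefit, social_benefit])
model = LogisticRegression().fit(X, intention)
coefs = dict(zip(
    ["attitude", "individual_benefit", "social_benefit"], model.coef_[0]))
print("coefficients:", coefs)
```

A study of this kind would typically report odds ratios (`exp` of the coefficients) with p-values from a statistics package such as `statsmodels`, rather than raw scikit-learn coefficients.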