Despite the small sample size in our study, the performance of our classifier is comparable to that of previously reported machine learning approaches.  Hong et al. reported a sensitivity of 74\% and a specificity of 100\% for the detection of MR-negative, histologically proven FCDs, although all lesions were visible on retrospective review of the images (Hong 2014).  Ahmed et al. (2015) reported 58\% sensitivity in a similar patient population, compared with a 37\% detection rate using univariate methods.  Adler et al. reported 73\% sensitivity for the detection of radiologically diagnosed FCD lesions but did not report specificity; an adaptation of this method by Jin et al. in a larger pediatric population (30\% initially MR negative, all with lesions visible on subsequent expert re-review) achieved 72.7\% sensitivity and 90\% specificity.  It is thus encouraging that our results are comparable given our small normative and patient training sets.
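For reference, the sensitivity and specificity figures compared above follow the standard definitions in terms of true positive (TP), false positive (FP), true negative (TN), and false negative (FN) counts; the counts shown here are generic and are not taken from any of the cited studies:
\[
\text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}.
\]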
Several challenges prevent the widespread adoption of these techniques in standard clinical practice, including 1) variability in MR scanning protocols, hardware, and software, which may impair the generalization of any "learning" across centers, 2) the limited availability of large patient and normative training sets at single institutions, and 3) the technical expertise required to implement these approaches. Efforts such as the MELD Project (\url{https://meldproject.github.io/}) aim to create open-access, generalizable tools and have been implemented at multiple centers internationally, although results may not yet be available to clinicians.  Jin et al. implemented a similar approach and demonstrated robust performance across centers, suggesting that the variability introduced by combining data across centers was more than offset by the utility of larger training sets (Jin 2018).

Conclusions

Acknowledgements

We are indebted to all patients and their families who have selflessly volunteered their time to participate in this study.