Shirui Luo

\section{Introduction}

Deep learning (DL) has emerged as a powerful tool for solving a variety of complex problems that have been difficult to solve with traditional methods. However, domain experts attempting to apply DL methodology have had to learn to code in order to use it. Numerous frameworks, such as TensorFlow and PyTorch, simplify the task of building and training complex DL models, yet their efficient use requires a good working knowledge of the Python language. Consequently, a variety of easier-to-use tools have been developed, ranging from the Keras API built on top of TensorFlow, which still requires coding, to tools such as H2O that provide a point-and-click web-based interface for configuring and training pre-built models. Among this new breed of tools, IBM Visual Insights (formerly IBM PowerAI Vision) \cite{120} and Google's AutoML Vision \cite{AutoML.Vision} take this concept further by providing a web-based graphical user interface (GUI) for configuring and training a variety of models, as well as tools and APIs for deploying those models on a variety of platforms. Both IBM Visual Insights and Google AutoML Vision implement complex workflows that connect many services and computational resources to deliver functionality that until recently required a substantial coding effort. The functionality of these tools is still limited to a few pre-arranged models that work well only for specific problems, and users can tweak only some model parameters while leaving the majority of the decisions to the computer. Nevertheless, these tools democratize access to complex DL models and empower domain scientists to take advantage of this new methodology. In this short article, we walk through an example of using IBM Visual Insights to train a DL model on a chest X-ray image dataset. Google AutoML Vision's applicability to medical problems has been discussed in \cite{FAES2019e232}.