Cluster-viz: A Tractography QC Tool

Cluster-viz is a web application that provides a platform for cluster-based, interactive quality control of tractography algorithm outputs. The tool facilitates the creation of white matter fascicle models by employing a cluster-based approach that lets the user select streamline bundles for inclusion in, or exclusion from, the final fascicle model. This project was started at the 2016 Neurohackweek and BrainHack events and is still under development. We welcome contributions to the Cluster-viz GitHub repository (https://github.com/kesshijordan/Cluster-viz).

Introduction

When tractography algorithms are used to create an anatomically constrained model of a fascicle, the output can contain many streamlines that are not part of the bundle-of-interest. Methods that leverage High Angular Resolution Diffusion Imaging (HARDI) datasets by employing models such as Constrained Spherical Deconvolution (Tournier 2007, Tournier 2004) or Q-ball (Tuch 2004, Tuch 2003, Berman 2008) increase the sensitivity of tractography compared to the simpler tensor model, but generate many more streamlines that must be excluded. Automatic classification methods have been developed (Yoo 2015, Yeatman 2012), but pathologies (e.g. tumors) present in patient populations can cause them to fail. Furthermore, until these methods have been sufficiently developed and validated, clinical applications such as neurosurgical planning still require an expert human quality-control step (Duffau 2014). The typical way to select the streamlines belonging to the bundle-of-interest is to use a tractography output viewer, such as TrackVis (Ruopeng Wang 2007), to manually place regions-of-interest (ROIs) that include or exclude streamlines. These methods, however, raise many reproducibility concerns (Wakana 2007, Feigl 2014).

We propose a cluster-based approach as an alternative to manual ROI placement for isolating fascicle models from tractography output. This approach minimizes the variability of manual streamline selection by reducing the output to discrete clusters that require a limited number of inclusion decisions, instead of relying on the placement of ROIs in continuous space. It also provides a framework for training a classifier that could be tailored to the data type and goals of a particular application. This is an important consideration, as tractography output can vary widely depending on a variety of parameters (stopping condition, maximum turning angle, etc.) (Chamberland 2014), and there may not be a consensus on which sub-bundles should be included in tractography models for a given application.
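For comparison, the ROI-based selection described above reduces to a point-in-mask test per streamline. The sketch below is purely illustrative (the function name is ours, and we assume streamline points are already in voxel coordinates; real viewers apply the image affine first):

```python
import numpy as np

def streamline_hits_roi(streamline, roi_mask):
    """Return True if any point of the streamline falls inside the
    binary ROI mask. Points are assumed to already be in voxel
    coordinates (illustrative simplification)."""
    idx = np.round(streamline).astype(int)
    # keep only points that land within the mask's bounds
    in_bounds = np.all((idx >= 0) & (idx < roi_mask.shape), axis=1)
    if not in_bounds.any():
        return False
    return bool(roi_mask[tuple(idx[in_bounds].T)].any())
```

An inclusion ROI keeps streamlines for which this test is True; an exclusion ROI discards them. The reproducibility concern is that the outcome depends on exactly where, in continuous space, the operator draws the ROI.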

\label{flowchart}Flowchart of the Cluster-viz web application

\label{step0}The connectivity of an ROI placed on the coronal plane over the external/extreme capsules at the level of the anterior commissure is shown (tractography method: (Caverzasi 2016)). Each color is a cluster, as generated by the QuickBundles algorithm (Garyfallidis 2012).

Description

This viewer \ref{step0} enables the user to select streamlines at the cluster level. The QuickBundles algorithm (Garyfallidis 2012), implemented in dipy (http://nipy.org/dipy/) (Garyfallidis 2014), can quickly cluster a set of streamlines into sub-bundles. The main design requirement for this interactive tool was to minimize the computing time spent re-clustering between iterative steps of cluster selection. QuickBundles does not spend the computational time needed to cluster streamlines optimally; rather, it prioritizes speed in order to reduce the dimensionality of the classification problem (Garyfallidis 2012). The user can select all of the sub-bundles that include parts of the target bundle-of-interest \ref{step1}. The user can alternate between the selected and deselected streamline bundles by clicking the "Toggle Choice" button to study the rejected streamlines more closely. The selected sub-bundles are re-clustered into finer sub-bundles when the user pushes the "Finer" button, and the desired components of the bundle-of-interest can be further refined by selecting a subset of the re-clustered bundles \ref{step2}.
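The core assignment rule of QuickBundles can be sketched in a few lines of NumPy. This is a simplified, illustrative re-implementation of the published algorithm (streamlines are assumed to be pre-resampled to the same number of points), not the dipy code the tool actually calls:

```python
import numpy as np

def mdf(a, b):
    """Minimum average direct-flip (MDF) distance between two
    streamlines with the same number of points: average pointwise
    distance, computed both directly and with one streamline
    reversed, taking the smaller of the two."""
    direct = np.mean(np.linalg.norm(a - b, axis=1))
    flipped = np.mean(np.linalg.norm(a - b[::-1], axis=1))
    return min(direct, flipped)

def quickbundles_sketch(streamlines, threshold):
    """One-pass greedy clustering: assign each streamline to the
    nearest centroid if its MDF distance is below `threshold` (mm),
    otherwise start a new cluster. Centroids are running means of
    their (orientation-aligned) member streamlines."""
    centroids, members = [], []
    for s in streamlines:
        if centroids:
            d = [mdf(s, c) for c in centroids]
            i = int(np.argmin(d))
            if d[i] < threshold:
                # flip s if the reversed orientation matches better
                if np.mean(np.linalg.norm(s - centroids[i], axis=1)) > \
                   np.mean(np.linalg.norm(s[::-1] - centroids[i], axis=1)):
                    s = s[::-1]
                n = len(members[i])
                centroids[i] = (centroids[i] * n + s) / (n + 1)
                members[i].append(s)
                continue
        centroids.append(s.astype(float).copy())
        members.append([s])
    return centroids, members
```

Because the pass is greedy and linear in the number of streamlines, re-clustering a selected subset at a smaller threshold is fast enough for the interactive "Finer" step.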

\label{step1}The user selected two sub-bundles that contain streamlines representing a tractography model of the Uncinate Fasciculus.

\label{step2}Sub-bundles that the user judged were part of an Uncinate Fasciculus tractography model are re-clustered so that the user can further refine the model.

Results

This cluster-based streamline tool (https://github.com/kesshijordan/Cluster-viz) was implemented as a web-based viewer with a Python backend using CherryPy (http://cherrypy.org/) \ref{flowchart}. The code from the AFQ-Browser (https://github.com/yeatmanlab/AFQ-Browser) was used as an interface skeleton and adapted for this project. The user can upload and download tractography streamline data, select streamline bundles, and initiate finer clustering using QuickBundles (Garyfallidis 2012). The viewer presents all of the streamlines to the user and allows them to select a subset of the ten sub-bundles either by clicking on the streamlines themselves or by clicking on the menu. This tool is a work-in-progress: in the future, the selected sub-bundles will be clustered further upon user request. The transparent cortical surface is for orientation only; it is not in the patient space. In-progress developments include patient-specific anatomical reference in both slice and surface representations and iterative clustering functionality.
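The server-side bookkeeping behind "Toggle Choice" reduces to partitioning the cluster list by the user's selection; the sketch below is illustrative (the function and variable names are ours, not the actual backend API):

```python
def split_selection(clusters, selected_ids):
    """Partition sub-bundles into the user's selection and the rest.
    `clusters` maps cluster id -> list of streamlines; `selected_ids`
    is the set of ids the user clicked. "Toggle Choice" simply swaps
    which of the two groups is displayed."""
    kept = [s for cid in sorted(selected_ids)
            for s in clusters[cid]]
    rejected = [s for cid in sorted(clusters) if cid not in selected_ids
                for s in clusters[cid]]
    return kept, rejected
```

Pressing "Finer" would then feed the kept streamlines back into QuickBundles with a smaller distance threshold, splitting the selection into finer sub-bundles.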

Conclusions/Future Directions

This method is advantageous over the traditional ROI-based approach because binary decisions made on discrete clusters are less variable than manual placement of ROIs in continuous space. In theory, this should improve reproducibility across human operators, as well as create a more tractable training set for machine learning applications. Ideally, the Cluster-viz tool would learn from the user as they interact with the viewer and suggest bundle classifications that the user could approve. Over time, this learning element could greatly increase the efficiency of the user and, perhaps, eventually replace the human.

Acknowledgements

This work was completed during Neurohackweek 2016 in Seattle, WA and the BrainHack 2016 in Los Angeles, CA. We would like to thank Dr. Ariel Rokem and Dr. Jason Yeatman for their help during Neurohackweek and Dr. Jeremy Maitin-Shepard for his help during BrainHack LA. We would also like to thank all of the Neurohackweek and BrainHack organizers and mentors.

References

  1. Jeffrey I. Berman, SungWon Chung, Pratik Mukherjee, Christopher P. Hess, Eric T. Han, Roland G. Henry. Probabilistic streamline q-ball tractography using the residual bootstrap. NeuroImage 39, 215–222 Elsevier BV, 2008. Link

  2. Eduardo Caverzasi, Shawn L. Hervey-Jumper, Kesshi M. Jordan, Iryna V. Lobach, Jing Li, Valentina Panara, Caroline A. Racine, Vanitha Sankaranarayanan, Bagrat Amirbekian, Nico Papinutto, Mitchel S. Berger, Roland G. Henry. Identifying preoperative language tracts and predicting postoperative functional recovery using HARDI q-ball fiber tractography in patients with gliomas. Journal of Neurosurgery 125, 33–45 Journal of Neurosurgery Publishing Group (JNSPG), 2016. Link

  3. Maxime Chamberland, Kevin Whittingstall, David Fortin, David Mathieu, Maxime Descoteaux. Real-time multi-peak tractography for instantaneous connectivity display. Frontiers in Neuroinformatics 8 Frontiers Media SA, 2014. Link

  4. Hugues Duffau. The Dangers of Magnetic Resonance Imaging Diffusion Tensor Tractography in Brain Surgery. World Neurosurgery 81, 56–58 Elsevier BV, 2014. Link

  5. Guenther C. Feigl, Wolfgang Hiergeist, Claudia Fellner, Karl-Michael M. Schebesch, Christian Doenitz, Thomas Finkenzeller, Alexander Brawanski, Juergen Schlaier. Magnetic Resonance Imaging Diffusion Tensor Tractography: Evaluation of Anatomic Accuracy of Different Fiber Tracking Software Packages. World Neurosurgery 81, 144–150 Elsevier BV, 2014. Link

  6. Eleftherios Garyfallidis, Matthew Brett, Marta Morgado Correia, Guy B. Williams, Ian Nimmo-Smith. QuickBundles, a Method for Tractography Simplification. Frontiers in Neuroscience 6 Frontiers Media SA, 2012. Link

  7. Eleftherios Garyfallidis, Matthew Brett, Bagrat Amirbekian, Ariel Rokem, Stefan van der Walt, Maxime Descoteaux, Ian Nimmo-Smith. Dipy, a library for the analysis of diffusion MRI data. Frontiers in Neuroinformatics 8 Frontiers Media SA, 2014. Link

  8. Ruopeng Wang, Van J. Wedeen. TrackVis. (2007).

  9. J-Donald Tournier, Fernando Calamante, Alan Connelly. Robust determination of the fibre orientation distribution in diffusion MRI: Non-negativity constrained super-resolved spherical deconvolution. NeuroImage 35, 1459–1472 Elsevier BV, 2007. Link

  10. J.-Donald Tournier, Fernando Calamante, David G. Gadian, Alan Connelly. Direct estimation of the fiber orientation density function from diffusion-weighted MRI data using spherical deconvolution. NeuroImage 23, 1176–1185 Elsevier BV, 2004. Link

  11. David S. Tuch. Q-ball imaging. Magnetic Resonance in Medicine 52, 1358–1372 Wiley-Blackwell, 2004. Link

  12. David S. Tuch, Timothy G. Reese, Mette R. Wiegell, Van J. Wedeen. Diffusion MRI of Complex Neural Architecture. Neuron 40, 885–895 Elsevier BV, 2003. Link

  13. Setsu Wakana, Arvind Caprihan, Martina M. Panzenboeck, James H. Fallon, Michele Perry, Randy L. Gollub, Kegang Hua, Jiangyang Zhang, Hangyi Jiang, Prachi Dubey, Ari Blitz, Peter van Zijl, Susumu Mori. Reproducibility of quantitative tractography methods applied to cerebral white matter. NeuroImage 36, 630–644 Elsevier BV, 2007. Link

  14. Jason D. Yeatman, Robert F. Dougherty, Nathaniel J. Myall, Brian A. Wandell, Heidi M. Feldman. Tract Profiles of White Matter Properties: Automating Fiber-Tract Quantification. PLoS ONE 7, e49790 Public Library of Science (PLoS), 2012. Link

  15. Sang Wook Yoo, Pamela Guevara, Yong Jeong, Kwangsun Yoo, Joseph S. Shin, Jean-Francois Mangin, Joon-Kyung Seong. An Example-Based Multi-Atlas Approach to Automatic Labeling of White Matter Tracts. PLOS ONE 10, e0133337 Public Library of Science (PLoS), 2015. Link
