Hanene Ben Yedder et al.

Identifying breast cancer lesions with a portable diffuse optical tomography (DOT) device can improve early detection while avoiding otherwise unnecessary referral to invasive, ionizing, and more expensive modalities such as CT, and can make pre-screening more efficient. Critical to this capability is not just the identification of lesions but the more complex problem of discriminating between malignant and benign lesions. To accurately capture the highly heterogeneous tissue of a cancer lesion embedded in healthy breast tissue with non-invasive DOT, multiple frequencies can be combined to optimize signal penetration and reduce sensitivity to noise. However, these frequency responses can overlap, capture common information, and correlate, potentially confounding reconstruction and downstream end tasks. We show that an orthogonal fusion loss for multi-frequency DOT improves reconstruction. More importantly, the orthogonal fusion leads to more accurate end-to-end identification of malignant versus benign lesions, illustrating its regularization properties in the multi-frequency input space. While the deployment of portable DOT probes imposes a severely constrained computational budget, we show that our raw-to-task model, which predicts the end task directly from the raw signal, significantly reduces computational complexity without sacrificing accuracy, enabling the high real-time throughput desired in medical settings. Furthermore, our results indicate that image reconstruction is not necessary for unbiased classification of lesions: the raw-to-task model achieves a balanced accuracy of 77% and 66% on the synthetic and clinical datasets, respectively. Code is available at https://github.com/sfu-mial/FuseNet
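
To make the orthogonal fusion idea concrete, the following PyTorch sketch is our own minimal illustration; the module names, architecture, and the 0.1 penalty weight are assumptions, not the released FuseNet implementation. Each frequency is encoded by its own branch, and a penalty on the pairwise cosine similarity of the per-frequency embeddings pushes the branches toward complementary, de-correlated representations before a raw-to-task classification head.

```python
# Minimal sketch (PyTorch) of an orthogonal multi-frequency fusion loss.
# Module/variable names are illustrative, not the released FuseNet API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RawToTaskNet(nn.Module):
    """Encode each modulation frequency separately, fuse, and classify
    malignant vs. benign directly from raw DOT measurements."""

    def __init__(self, n_freqs: int, n_detectors: int, embed_dim: int = 64):
        super().__init__()
        # One lightweight encoder per frequency (assumption: 1-D raw signal per frequency).
        self.encoders = nn.ModuleList(
            nn.Sequential(
                nn.Linear(n_detectors, 128), nn.ReLU(),
                nn.Linear(128, embed_dim),
            )
            for _ in range(n_freqs)
        )
        self.classifier = nn.Sequential(
            nn.Linear(n_freqs * embed_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),  # benign vs. malignant logits
        )

    def forward(self, x):
        # x: (batch, n_freqs, n_detectors) raw measurements.
        feats = [enc(x[:, i]) for i, enc in enumerate(self.encoders)]
        logits = self.classifier(torch.cat(feats, dim=1))
        return logits, feats


def orthogonal_fusion_loss(feats):
    """Penalize overlap between per-frequency embeddings so each frequency
    contributes complementary, de-correlated information."""
    loss = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            # Squared cosine similarity -> 0 when embeddings are orthogonal.
            loss = loss + F.cosine_similarity(feats[i], feats[j], dim=1).pow(2).mean()
    return loss


# Usage sketch: combine the end-task loss with the orthogonality penalty.
model = RawToTaskNet(n_freqs=3, n_detectors=128)
x = torch.randn(8, 3, 128)             # dummy raw multi-frequency signals
y = torch.randint(0, 2, (8,))          # dummy benign/malignant labels
logits, feats = model(x)
loss = F.cross_entropy(logits, y) + 0.1 * orthogonal_fusion_loss(feats)
loss.backward()
```

Because the penalty acts on learned embeddings rather than on raw signals, it regularizes the multi-frequency input space directly, which is compatible with skipping the intermediate tomogram altogether in the raw-to-task setting.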


Hanene Ben Yedder et al.

Diffuse optical tomography (DOT) leverages near-infrared light propagation through tissue to assess its optical properties and identify abnormalities. DOT image reconstruction from limited-angle data acquisition is severely ill-posed due to the highly scattered photons in the medium and the relatively small number of collected projections. Reconstructions are thus commonly marred by artifacts, making it difficult to accurately recover target objects, e.g., malignant lesions, and even a good reconstruction does not always ensure good localization of small lesions. Furthermore, conventional optimization-based reconstruction methods are computationally expensive, rendering them too slow for real-time imaging applications. Our goal is to develop a fast and accurate image reconstruction method using deep learning, where multitask learning ensures accurate lesion localization in addition to improved reconstruction. We apply spatial-wise attention and a distance-transform-based loss function in a novel multitask learning formulation to improve localization and reconstruction compared to single-task optimized methods. Given the scarcity of real-world sensor-image pairs required for training supervised deep learning models, we leverage physics-based simulation to generate synthetic datasets and use a transfer learning module to align the sensor-domain distributions of in silico and real-world data, while taking advantage of cross-domain learning. Both quantitative and qualitative results on phantom and real data indicate the superiority of our multitask method in the reconstruction and localization of lesions in tissue compared to state-of-the-art methods, demonstrating that multitask learning provides sharper and more accurate reconstruction.
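
A loose sketch of how a distance-transform-based term can enter such a multitask objective is given below (PyTorch + SciPy). The loss weights, the binary-cross-entropy localization term, and all function names are our own illustrative assumptions rather than the paper's exact formulation.

```python
# Rough sketch of a multitask objective coupling reconstruction with a
# distance-transform-based localization term (PyTorch + SciPy).
# Weights and loss choices are illustrative assumptions, not the paper's exact setup.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


def distance_map(lesion_mask: np.ndarray) -> np.ndarray:
    """Euclidean distance of every pixel to the nearest lesion pixel."""
    return distance_transform_edt(lesion_mask == 0)


def multitask_loss(pred_img, gt_img, pred_mask_logits, gt_mask, alpha=1.0, beta=0.5):
    # Task 1: image reconstruction (pixel-wise MSE).
    recon = F.mse_loss(pred_img, gt_img)
    # Task 2: lesion localization, weighted by the distance transform so that
    # spurious responses far from the true lesion are penalized more heavily.
    dist = torch.stack([
        torch.from_numpy(distance_map(m.cpu().numpy())).float()
        for m in gt_mask
    ]).to(pred_mask_logits.device)
    bce = F.binary_cross_entropy_with_logits(
        pred_mask_logits, gt_mask.float(), reduction="none")
    loc = (bce * (1.0 + dist)).mean()
    return alpha * recon + beta * loc


# Usage with dummy tensors (batch of 4 single-channel 32x32 tomograms).
pred_img = torch.rand(4, 1, 32, 32, requires_grad=True)
gt_img = torch.rand(4, 1, 32, 32)
pred_mask_logits = torch.randn(4, 1, 32, 32, requires_grad=True)
gt_mask = (torch.rand(4, 1, 32, 32) > 0.9).long()
loss = multitask_loss(pred_img, gt_img, pred_mask_logits, gt_mask)
loss.backward()
```

The design intuition is that weighting localization errors by their distance to the true lesion discourages responses far from the lesion, which is the kind of regularization that a distance-transform-based term can add on top of a plain reconstruction loss in a multitask setting.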