FIGURE 8 Top row: US images of subcutaneous tumors of three
different mice. Row beginning with (d): images corrupted with Gaussian
white noise, fed to the denoising U-Net to generate the corresponding
outputs in the row beginning with (g). Row beginning with (j): images
corrupted with S&P noise, fed to the denoising U-Net to generate the
corresponding outputs in the row beginning with (m).
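The two corruption types applied in Figure 8 can be reproduced with a minimal NumPy sketch. The noise levels and the input image below are illustrative assumptions, not the exact experimental settings used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=0.1):
    """Add zero-mean Gaussian white noise, clipping back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_pepper_noise(img, amount=0.05):
    """Flip a random fraction of pixels to 0 (pepper) or 1 (salt)."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0.0       # pepper pixels
    out[mask > 1 - amount / 2] = 1.0   # salt pixels
    return out

# Illustrative input; a real pipeline would load a normalized US/PA frame here.
img = rng.random((64, 64))
noisy_gaussian = add_gaussian_noise(img)
noisy_sp = add_salt_pepper_noise(img)
```

Note that Gaussian noise perturbs every pixel slightly, whereas S&P noise replaces a sparse subset of pixels outright, which is one intuition for why a denoiser robust to the former can still fail on the latter.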
4 | CONCLUSION
In this study, we developed a simple, easily trainable, and generalizable
deep learning U-Net model to denoise LED-based photoacoustic images
obtained with a low number of frame averages. Portable and
cost-effective LED-based systems have already shown tremendous promise
in the preclinical and clinical arenas despite challenges such as low
fluence, wide pulse width, and non-tunable wavelengths. In particular,
the low illumination energy is typically compensated by averaging a high
number of frames, at the expense of temporal resolution. Our U-Net
architecture achieves high SNR in LED-based photoacoustic imaging with a
low number of frames, enabling real-time implementation. The present
study also discussed two prominent downsides of the U-Net framework:
it can produce blurry outcomes while denoising the images, and it
falls prey to S&P noise while remaining robust to Gaussian white noise.
In the current study, we acquired images with only one type of
transducer, namely one operating at 7 MHz. Our future studies
will involve testing the architecture on data acquired with transducers
of different frequencies and with different LED array configurations.
Furthermore, our future work will incorporate transfer learning and
network downsizing, along with building a proper training database,
which can eventually promote a more generalized version of our network
for different tumor types and in vivo applications.
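The trade-off between frame averaging and temporal resolution noted above follows from basic noise statistics: averaging N frames of zero-mean noise suppresses the noise standard deviation by roughly sqrt(N), i.e., about 10·log10(N) dB of SNR gain. A minimal sketch (the 1-D signal and noise level are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "photoacoustic" signal; purely illustrative.
signal = np.sin(np.linspace(0, 2 * np.pi, 1000))

def averaged_snr(n_frames, noise_std=1.0):
    """SNR (dB) of the signal after averaging n_frames noisy acquisitions."""
    frames = signal + rng.normal(0.0, noise_std, size=(n_frames, signal.size))
    avg = frames.mean(axis=0)
    residual = avg - signal
    return 10 * np.log10(signal.var() / residual.var())

# SNR improves by roughly +10*log10(N) dB with N averaged frames,
# which is exactly the temporal-resolution cost the denoiser avoids.
for n in (1, 10, 100):
    print(n, round(averaged_snr(n), 1))
```

This is the motivation for the denoising network: recovering comparable SNR from few frames instead of paying the acquisition-time cost of large N.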
ACKNOWLEDGMENTS
The authors would like to acknowledge support from the Tufts School of
Engineering, a Tufts Data Intensive Science Center Pilot grant, an NIH grant,
and subcontract funds on NIH grant 5R01CA231606. The authors would also
like to acknowledge Mr. Marvin Xavierselvan for help with tumor
implantation, Ms. Allison Sweeney for handling animal care, Mr.
Christopher Nguyen for proofreading the manuscript, and Ms. Sahanvi
Pothamsetty for the sketches used in Fig. 1.
AUTHOR CONTRIBUTIONS
A.P. and S.M. were involved in conceptualization, investigation,
writing—original draft, writing—review and editing. S.M. secured the
funds for the project.
CONFLICT OF INTEREST
The authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT
The data and associated code will be made available upon reasonable request.
SUPPORTING INFORMATION
Additional Supporting Information may be found online in the supporting
information tab for this article.