Cholecystectomy Surgical Instrument Detection Using Variants of YOLOv8
  • Muhammad Adil Raja,
  • Roisin Loughran,
  • Fergal Mc Caffery
Abstract

As algorithms improve in accuracy and computational efficiency, the scientific communities that rely on them naturally ask whether they can benefit from newer versions. Computer vision is one such domain that has seen rapid algorithmic advancement. The advent of deep learning was itself a catalyst for innovation, and coupled with rapid improvements in object detection algorithms, the pace of progress has been tremendous. Because many engineering and scientific disciplines leverage object detection, any improvement in the latter has a ripple effect on the former. Computer Aided Laparoscopy (CAL) has come a long way thanks to object detection algorithms based on deep learning. Yet new algorithms are released regularly, and it is tempting to see how each one affects tool detection accuracy and efficiency in CAL. Recently, version 8 of the well-known You Only Look Once (YOLO) algorithm was released. As with past releases, it is claimed that this version offers better detection accuracy as well as computational efficiency. This paper examines the performance of YOLOv8 at tool detection in a CAL context, using a well-known laparoscopy tool detection benchmark dataset. The models obtained in this research are superior in terms of both detection accuracy and inference speed, and they are ready to be deployed in a production environment. The results reported in this paper are useful not only for the surgical community but also for benchmarking the YOLO algorithm.
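For readers unfamiliar with how such an evaluation is typically set up, the sketch below shows a minimal workflow for fine-tuning a YOLOv8 variant on a tool-detection dataset and measuring accuracy and inference speed. It is an illustrative assumption, not the authors' exact pipeline: it uses the `ultralytics` Python package, and the dataset YAML path and sample image name are placeholders.

```python
# Minimal sketch: fine-tune and time a YOLOv8 variant on a tool-detection dataset.
# Assumes the `ultralytics` package; "laparoscopy_tools.yaml" and "sample_frame.jpg"
# are hypothetical placeholders, not artifacts from the paper.
from ultralytics import YOLO

# Load a pretrained YOLOv8 variant (n/s/m/l/x trade accuracy against speed).
model = YOLO("yolov8n.pt")

# Fine-tune on a laparoscopic tool-detection dataset described by a YOLO-format YAML file.
model.train(data="laparoscopy_tools.yaml", epochs=100, imgsz=640)

# Evaluate detection accuracy (mAP) on the validation split.
metrics = model.val()
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50

# Run inference on a sample frame to gauge per-image latency.
results = model.predict("sample_frame.jpg")
print(results[0].speed)   # preprocess / inference / postprocess times in ms

# Export the model (e.g., to ONNX) for production deployment.
model.export(format="onnx")
```

Swapping "yolov8n.pt" for the larger variants (yolov8s, yolov8m, yolov8l, yolov8x) is how one would compare the accuracy-versus-speed trade-off across the family of models.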