
Increasing the Throughput of Annotation Tasks Across Scales of Plant Phenotyping Experiments
  • Hudanyun Sheng,
  • Jorge Gutierrez,
  • Haley Schuhl,
  • Katherine M. Murphy,
  • Lucia Acosta-Gamboa,
  • Malia Gehan,
  • Noah Fahlgren
Corresponding Author: [email protected]

Abstract

PlantCV is an open-source, open-development image analysis software package for plant phenotyping, written in Python, that has been actively developed since 2014. A new major version of PlantCV was recently released. The major goals of the version 4 release were to 1) simplify the process of developing workflows by reducing the amount of coding needed; 2) broaden the set of supported data types; and 3) introduce interactive annotation tools that can be used directly in PlantCV workflow notebooks. Here we highlight the use of point annotations, which can be used to quickly collect sets of points for parameterizing functions such as regions of interest or for identifying landmark points. Another application of point annotations is image annotation, which is a major bottleneck in plant phenomics. For example, we have used point annotations to analyze microscopy images for the measurement of quinoa salt bladders, the number and size of stomata, and the scoring of pollen germination. These tasks have traditionally been low throughput and have required manual scoring, but our point annotation tools can be used along with traditional segmentation methods to semi-automatically detect and annotate objects in images. The PlantCV point annotation tools also allow users to correct semi-automated detection results before classification (e.g., germinated vs. non-germinated pollen) and extraction of size and color traits per object. Once images are annotated, the results can be analyzed directly or potentially used as labeled data in supervised learning methods.
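As a minimal sketch of how collected point annotations might feed into a workflow, the example below uses a hand-typed list of clicked coordinates to parameterize a custom region of interest. Only `pcv.readimage` and `pcv.roi.custom` are standard PlantCV calls here; the image filename and the coordinate values are placeholders, and in practice the points would come from the interactive annotation tools inside a workflow notebook rather than being typed by hand.

```python
# A minimal sketch, assuming point coordinates have already been collected
# interactively in a Jupyter notebook with the point annotation tools.
from plantcv import plantcv as pcv

# Read an example image (the filename is a placeholder)
img, path, filename = pcv.readimage(filename="quinoa_leaf.png")

# Coordinates clicked by the user around an object of interest
# (placeholder values for illustration)
clicked_points = [[150, 200], [400, 210], [420, 500], [160, 480]]

# Use the collected points to parameterize a custom region of interest
roi = pcv.roi.custom(img=img, vertices=clicked_points)
```

The same kind of point list can also seed the semi-automated detection and correction steps described above, for example marking objects to count or flagging detections to remove before traits are extracted.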
19 Oct 2023: Submitted to NAPPN 2024
19 Oct 2023: Published in NAPPN 2024