Comparison of Multi-Organ Segmentation Tools for Whole-Body [18F]FDG-PET/CT Clinical Imaging
DOI: https://doi.org/10.2218/piwjournal.10874

Abstract
Typically, delineation of volumes of interest (VOIs) within clinical PET/CT imaging is performed manually. This is time-consuming [1] and highly vulnerable to inter- and intra-operator variability, resulting in differing delineations for the same patient [1]. Automated segmentation aims to address these issues in the VOI delineation process, increasing image throughput and reducing the stochasticity of manual annotation. Moreover, automation of the segmentation process has the potential to galvanise downstream analyses, including network analysis [2]. Among the various automated multi-organ segmentation approaches, two methodologies, Multiple-Organ Objective Segmentation (MOOSE) [3] and TotalSegmentator (TS) [4], have emerged as state-of-the-art (SOTA). Both leverage the nnU-Net framework as their underlying architecture; however, they differ in training dataset, nnU-Net configuration, and weights. Recently, Julie et al. [5] compared MOOSE and TotalSegmentator on a metastatic breast cancer dataset. However, gold-standard labels were not generated for that study, so a comparison against a verified gold standard is not available; instead, the authors focus on the degree of agreement and differences in feature values. Concurrent with Julie et al. [5], we compare MOOSE and TS on a dataset of clinical stage IIB/III non-small cell lung carcinoma (NSCLC). We compare both methods against gold-standard manual delineation and evaluate using current technical segmentation metrics alongside PET/CT outcome metrics (e.g. Hounsfield units (HU) and standardised uptake values (SUV)) to assess whether the automated methods introduce quantitative bias relative to manual annotations.
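The evaluation described above pairs a geometric overlap metric with quantitative PET/CT readouts from the same VOIs. A minimal sketch of that comparison, assuming binary NumPy masks on a shared voxel grid (the function names are illustrative and not taken from MOOSE or TS):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def mean_uptake_bias(image: np.ndarray,
                     manual_mask: np.ndarray,
                     auto_mask: np.ndarray) -> float:
    """Difference in mean voxel value (e.g. SUV or HU) between the
    automated and manual VOIs (automated minus manual)."""
    return (image[auto_mask.astype(bool)].mean()
            - image[manual_mask.astype(bool)].mean())
```

A per-organ loop over such functions yields both the technical segmentation metric (Dice) and the quantitative-bias estimate described in the abstract.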
License
Copyright (c) 2025 Cameron Wheeler, Phyo H. Khaing, Eleonora D'Arnese, Adriana A.S. Tavares

This work is licensed under a Creative Commons Attribution 4.0 International License.


