Abstract
Over the past two decades, a significant number of OCT segmentation approaches have been proposed in the literature. However, each methodology has been conceived for and/or evaluated using specific datasets that do not reflect the complexity of retinal features commonly observed in clinical settings, and no appropriate OCT dataset with ground truth that captures this clinical reality exists. While the need for unbiased performance evaluation of automated segmentation algorithms is clear, validation has usually been performed against manual labelings produced within each individual study, and a common ground truth has been lacking. As a consequence, a performance comparison of different algorithms using the same ground truth has never been performed. This paper reviews research‐oriented software tools for automated segmentation of retinal tissue in OCT images, and evaluates and compares their performance using a common ground truth.
Reliable data analysis and proper diagnosis of various retinal diseases require a ground truth dataset that reflects the retinal features routinely observed in clinical practice. The major intent of this review is to highlight a number of important issues surrounding the lack of a representative and practical dataset that could serve as a common ground truth for evaluating the performance of quantitative OCT methods.