Abstract
This dissertation presents the development of a Machine Learning Based Computer Vision Pipeline (MLCVP) that interprets microscopic images of rotifer samples, estimates rotifer density and fertilization rate, evaluates rotifer motility, and recognizes the presence of ciliates. The MLCVP integrates a background subtraction (BGS) object detection algorithm, convolutional neural networks (CNNs), an object tracking algorithm, and a neural-network-based sequential framework to facilitate image processing and interpretation. This study designed, implemented, and comprehensively analyzed the efficacy of each component of the MLCVP. The BGS algorithms effectively detected moving objects in rotifer samples, and CNNs were developed to infer the labels of the detected objects. Among the CNN architectures explored, a 32-layer ResNet achieved the best classification performance, with an error rate of 8.04% ± 1.74%. The object labels predicted by the CNNs were used for rotifer density estimation, achieving a mean absolute error (MAE) of 1.82% ± 1.47%. However, estimating the rotifer fertilization rate from the object labels of a single frame was biased (32.71% ± 10.83% MAE). To address this problem, this research proposed a sequence-data pipeline that tracks moving objects and generates object sequences, together with a neural-network-based sequential framework, NeuraRoti, for sequence interpretation. NeuraRoti achieved a classification error rate of 3.77% ± 0.40%, and estimating the fertilization rate from the sequence labels it predicted yielded an MAE of 9.08% ± 3.46%. Additionally, this research proposed a horizontal-swimming filtering mechanism that automatically analyzes the trajectories generated by the sequence pipeline to evaluate rotifer motility. The predicted sequence labels and the estimated swimming speeds were further used to facilitate the recognition of ciliates in rotifer samples.