A Comparison of the Performance of Human and Algorithmic Segmentations on Low-Contrast Martian Rock Images

  • Jessica Charlton

Student thesis: Doctoral Thesis, Doctor of Philosophy

Abstract

The Martian environment is a difficult one to navigate. Current exploration is limited by the slow transfer of data and by the need for human intervention when choosing and directing a rover toward its next target. While edge-finding and segmentation algorithms exist that could enable a rover to choose targets autonomously, their performance suffers in conditions such as those on Mars, where a fine dust covering makes all objects appear much the same colour.

A comparison of the annotation responses of two groups (Computer Scientists and Geographers) was undertaken to assess the differences between the two groups, and to compare both against existing segmentation algorithms. The algorithms tested were: AutoWEKA, A1DE, Bagging, Best-First PART, CHIRP, CSForest (with 10 and 50 trees), Dual Perturb and Combine, ExtRaTrees, Fast Random Forest, ForestPA, FURIA, Consolidated Trees, LIBLINEAR, Multilayer Perceptrons (with n = 100 and 200), Multi-Objective Evolutionary Fuzzy Classifier, SPAARC, SysFor (with 10 and 50 trees), WiSARD, and UNet.

Two metrics were employed to compare the segmentations. The first (the Local and Global Consistency Errors, LCE/GCE) was designed to compare human segmentations and is therefore highly tolerant of refinements. The second (the Object-Level Consistency Error, OCE) was designed to be far less tolerant of under- and over-segmentation.
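As context for the first metric, the following is a minimal sketch of the Local and Global Consistency Errors in the standard formulation (per-pixel refinement error E(a, b, p) = |R_a(p) \ R_b(p)| / |R_a(p)|, with GCE taking the minimum over the two directed sums and LCE the per-pixel minimum). The function names and the NumPy-based implementation are illustrative, not taken from the thesis:

```python
import numpy as np

def local_refinement_error(a, b):
    """Per-pixel directed refinement error E(a, b, p) for two label maps.

    a, b: integer label arrays of the same shape, labels >= 0.
    Returns a flat array with |R_a(p) \ R_b(p)| / |R_a(p)| for each pixel p,
    where R_a(p) is the region of segmentation a containing p.
    """
    a, b = a.ravel(), b.ravel()
    # |R_a(p)| for each pixel: size of the a-region the pixel belongs to.
    _, a_inv, a_cnt = np.unique(a, return_inverse=True, return_counts=True)
    # |R_a(p) ∩ R_b(p)|: pixels sharing the same (a, b) label pair.
    pair = a.astype(np.int64) * (int(b.max()) + 1) + b
    _, p_inv, p_cnt = np.unique(pair, return_inverse=True, return_counts=True)
    return (a_cnt[a_inv] - p_cnt[p_inv]) / a_cnt[a_inv]

def gce_lce(a, b):
    """Global and Local Consistency Errors between two segmentations."""
    e_ab = local_refinement_error(a, b)
    e_ba = local_refinement_error(b, a)
    n = a.size
    gce = min(e_ab.sum(), e_ba.sum()) / n   # one segmentation must refine the other
    lce = np.minimum(e_ab, e_ba).sum() / n  # refinement direction may vary per pixel
    return gce, lce
```

Because both measures score zero whenever one segmentation is a strict refinement of the other, they illustrate the tolerance noted above: splitting a rock into finer fragments incurs no penalty, which is why the OCE was also needed.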

Under the LCE/GCE most differences between algorithms were not statistically significant, and under the OCE no algorithm stood out from the pack; however, both WiSARD and the Multi-Objective Evolutionary Fuzzy Classifier showed significantly worse performance than many of the other algorithms. The more state-of-the-art UNet performed better than BFTrees, CHIRP, the Multi-Objective Evolutionary Fuzzy Classifier, SPAARC, and WiSARD, though it did not show statistically significant differences from the remaining algorithms. While segmentation times were for the most part not large, they would be significantly extended on the limited Martian rover hardware, ruling out real-time usage.

Training was also performed for CSForest, Fast Random Forest, Multilayer Perceptrons, SPAARC, and UNet across two sets of two images and one set of four images, to assess the impact of additional training images. While the results from training on four images showed an improvement over those from training on one image, there was still wide variation in performance (some outputs were more consistent with the human segmentations, while others included large areas of background), suggesting that for some images the extra training data improves performance while for others it leads to confusion. Furthermore, the four-image results showed improved performance over one of the two-image training sets, but not the other, suggesting that the precise images used for training play a significant role.

Date of Award: 2024
Original language: English
Awarding Institution
  • Aberystwyth University
Supervisors: Reyer Zwiggelaar (Supervisor) & Laurence Tyler (Supervisor)

Keywords

  • Mars imaging
  • segmentation
  • machine learning
