
Comparison of image annotation data generated by multiple investigators for benthic ecology

Durden, Jennifer M. (ORCID: https://orcid.org/0000-0002-6529-9109); Bett, Brian J. (ORCID: https://orcid.org/0000-0003-4977-9361); Schoening, Timm; Morris, Kirsty J.; Nattkemper, Tim W.; Ruhl, Henry A. 2016. Comparison of image annotation data generated by multiple investigators for benthic ecology. Marine Ecology Progress Series, 552, 61-70. https://doi.org/10.3354/meps11775

Files

Open Access paper (main): m552p061.pdf - Published Version. Available under a Creative Commons Attribution licence. Download (429 kB).
Open Access paper (supplementary material): m552p061_supp.pdf - Published Version. Available under a Creative Commons Attribution licence. Download (208 kB).
Postprint: MEPS201512030 Postprint version.pdf - Accepted Version. Download (1 MB).

Abstract/Summary

Multiple investigators often generate data from seabed images within a single image set to reduce the time burden, particularly with the large photographic surveys now available to ecological studies. These data (annotations) are known to vary as a result of differences in investigator opinion on specimen classification, and of human factors such as fatigue and cognition. These variations, and their impacts on derived ecological metrics (density, diversity, composition), are rarely recorded or quantified. We compared the annotations made by three investigators for 73 megafaunal morphotypes across ~28,000 images, including 650 images annotated in common. Successful annotation was defined as both detecting and correctly classifying a specimen. Estimated specimen detection success was 77% and classification success was 95%, giving an annotation success rate of 73%. Specimen detection success varied substantially by morphotype (12-100%). Variation in the detection of common taxa resulted in significant differences in apparent faunal density and community composition among investigators. Such bias has the potential to produce spurious ecological interpretations if not appropriately controlled or accounted for. We recommend that photographic studies document the use of multiple annotators and quantify potential inter-investigator bias. Randomisation of the sampling unit (photograph or video clip) among annotators is clearly critical to the effective removal of human annotation bias in multiple-annotator studies (and indeed in single-annotator work).
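
The headline figures in the abstract combine in a simple way: a specimen counts as successfully annotated only if it is both detected and correctly classified, so the two rates multiply (77% x 95% = ~73%). The short Python sketch below illustrates that arithmetic only; the function and variable names are assumptions for illustration and are not code from the study.

    # Illustrative sketch (not from the paper): overall annotation success when a
    # specimen must be both detected and correctly classified, so the rates multiply.
    def annotation_success(detection_rate: float, classification_rate: float) -> float:
        """Combined success rate for detection followed by correct classification."""
        return detection_rate * classification_rate

    detection = 0.77       # estimated specimen detection success (from the abstract)
    classification = 0.95  # classification success for detected specimens (from the abstract)
    print(f"Annotation success: {annotation_success(detection, classification):.0%}")
    # prints: Annotation success: 73%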

Item Type: Publication - Article
Digital Object Identifier (DOI): https://doi.org/10.3354/meps11775
ISSN: 0171-8630
Additional Keywords: Expert knowledge; Scoring; Visual imaging; Multiple investigators; Data quality; Quality assurance/quality control
NORA Subject Terms: Marine Sciences
Date made live: 20 May 2016 14:26 (UTC)
URI: https://nora.nerc.ac.uk/id/eprint/513656
