Title: MAELab: a framework to automatize landmark estimation
Authors: Le Van, Linh
Beurton-Aimar, Marie
Krahenbuhl, Adrien
Parisey, Nicolas
Source document citation: WSCG '2017: short communications proceedings: The 25th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision 2017 in co-operation with EUROGRAPHICS: University of West Bohemia, Plzen, Czech Republic, May 29 - June 2, 2017, p. 153-158.
Issue date: 2017
Publisher: Václav Skala - UNION Agency
Document type: conference paper (conferenceObject)
URI: wscg.zcu.cz/WSCG2017/!!_CSRN-2702.pdf
http://hdl.handle.net/11025/29746
ISBN: 978-80-86943-45-9
ISSN: 2464-4617
Keywords: morphology; image registration; SIFT descriptor; beetle; mandible
Abstract: In biology, morphometric analysis is widely used to analyze variation between organisms; it allows classifying organisms and determining the evolution of an organism's family. Morphometric methods consider features such as the shape, structure, color, or size of the studied objects. In previous work [8], we analyzed beetle mandibles using the centroid as a feature in order to classify the beetles, and we showed that the Probabilistic Hough Transform (PHT) is an efficient unsupervised method to compute the centroid. This paper proposes a new approach to precisely estimate the positions of landmarks, points of interest defined by biologists on the mandible contours. In order to automatically register the landmarks on different mandibles, we define patches around the manual landmarks of the reference image; each patch is described by its SIFT descriptor. Given a query image, we first apply a registration step, performed by an Iterative Principal Component Analysis, which identifies the rotation and translation parameters. The corresponding patches in the query image are then located and their SIFT descriptors computed. The biologists collected 293 beetles to provide two sets of mandible images, separated into left and right sides. The experiments show that, depending on the position of the landmark on the mandible contour, up to 98% of the landmarks are correctly detected. The complete workflow is implemented in the MAELab framework, freely available as a library on GitHub.
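
The abstract describes a two-step pipeline: a rigid registration (rotation and translation estimated by an Iterative Principal Component Analysis) followed by SIFT descriptor matching between patches defined around the manual landmarks of a reference image. The snippet below is only a minimal sketch of that idea, not the MAELab implementation: it assumes Python with OpenCV and NumPy, replaces the iterative PCA by a single PCA alignment of two Otsu-thresholded shapes, and uses hypothetical patch and search-window sizes.

```python
# Illustrative sketch only (not the MAELab code): landmark transfer by a single
# PCA-based rigid alignment followed by SIFT patch matching.
# Assumes 8-bit grayscale images, OpenCV with SIFT, and NumPy; patch size,
# search window, and thresholding choices are hypothetical.
import cv2
import numpy as np

def pca_rigid_transform(ref_mask, qry_mask):
    """Rotation R and translation t mapping reference coordinates to query
    coordinates, from the centroids and principal axes of the two shapes."""
    ref_pts = np.argwhere(ref_mask > 0)[:, ::-1].astype(float)  # (x, y) pairs
    qry_pts = np.argwhere(qry_mask > 0)[:, ::-1].astype(float)
    ref_c, qry_c = ref_pts.mean(axis=0), qry_pts.mean(axis=0)
    ref_axes = np.linalg.eigh(np.cov((ref_pts - ref_c).T))[1]   # columns = axes
    qry_axes = np.linalg.eigh(np.cov((qry_pts - qry_c).T))[1]
    R = qry_axes @ ref_axes.T        # axis sign/reflection ambiguity ignored here;
    t = qry_c - R @ ref_c            # the paper resolves it with an iterative PCA
    return R, t

def transfer_landmarks(ref_gray, qry_gray, ref_landmarks, patch=32, search=60):
    """Predict landmark positions in the query image: rigid transfer of the
    manual reference landmarks, then refinement by SIFT descriptor matching."""
    _, ref_mask = cv2.threshold(ref_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, qry_mask = cv2.threshold(qry_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    R, t = pca_rigid_transform(ref_mask, qry_mask)
    sift = cv2.SIFT_create()
    predictions = []
    for (x, y) in ref_landmarks:
        # SIFT descriptor of the patch centred on the manual landmark
        _, ref_desc = sift.compute(ref_gray, [cv2.KeyPoint(float(x), float(y), patch)])
        # Rigid prediction of where this landmark falls in the query image
        px, py = R @ np.array([x, y], dtype=float) + t
        # Candidate keypoints inside a search window around the prediction
        x0, y0 = max(int(px) - search, 0), max(int(py) - search, 0)
        roi = qry_gray[y0:int(py) + search, x0:int(px) + search]
        kps, descs = sift.detectAndCompute(roi, None)
        if descs is None:
            predictions.append((px, py))     # fall back to the rigid estimate
            continue
        # Keep the candidate whose descriptor is closest to the reference patch
        best = int(np.argmin(np.linalg.norm(descs - ref_desc, axis=1)))
        predictions.append((kps[best].pt[0] + x0, kps[best].pt[1] + y0))
    return predictions
```

In this sketch the positions predicted by the rigid transform are only refined when SIFT finds candidate keypoints inside the search window; otherwise the rigid estimate is kept, which mirrors the role of the registration step as a coarse initialisation for the patch matching.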
Rights: © Václav Skala - UNION Agency
Appears in collections: WSCG '2017: Short Papers Proceedings

Files attached to this record:
File      Description   Size      Format
Van.pdf   Full text     4.67 MB   Adobe PDF


Use this identifier to cite or link to this record: http://hdl.handle.net/11025/29746
