Full metadata record
DC Field | Value | Language
dc.contributor.author | Ben Abdallah, Fatma |
dc.contributor.author | Feki, Ghada |
dc.contributor.author | Ben Ammar, Anis |
dc.contributor.author | Ben Amar, Chokri |
dc.contributor.editor | Skala, Václav |
dc.date.accessioned | 2019-05-10T10:11:50Z | -
dc.date.available | 2019-05-10T10:11:50Z | -
dc.date.issued | 2018 |
dc.identifier.citation | WSCG 2018: poster papers proceedings: 26th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 8-17. | en
dc.identifier.isbn | 978-80-86943-42-8 |
dc.identifier.issn | 2464-4617 |
dc.identifier.uri | wscg.zcu.cz/WSCG2018/!!_CSRN-2803.pdf |
dc.identifier.uri | http://hdl.handle.net/11025/34632 |
dc.description.abstract | Nowadays, taking photos and recording our lives are daily tasks for the majority of people. The recorded information has helped to build several applications such as self-monitoring of activities, memory assistance and long-term assisted living. This trend, called lifelogging, attracts interest from many research communities such as computer vision, machine learning, human-computer interaction, pervasive computing and multimedia. Great effort has been made in the acquisition and storage of captured data, but there are still challenges in managing, analyzing, indexing, retrieving, summarizing and visualizing these data. In this work, we present a new model-driven architecture for deep learning-based multimodal lifelog retrieval, summarization and visualization. Our proposed approach is based on different models integrated in an architecture organized into four phases. Based on a Convolutional Neural Network, the first phase consists of data preprocessing to discard noisy images. In a second step, we extract several features to enhance the data description. Then, we generate a semantic segmentation to limit the search area in order to better control the runtime and the complexity. The second phase consists in analyzing the query. The third phase, which is based on a Relational Network, aims at retrieving the data matching the query. The final phase treats diversity-based summarization with k-means, which offers the lifelogger a key-frame, concept and context selection-based visualization. | en
dc.format | 10 s. | cs
dc.format.mimetype | application/pdf |
dc.language.iso | en | en
dc.publisher | Václav Skala - UNION Agency | en
dc.relation.ispartofseries | WSCG 2018: poster papers proceedings | en
dc.rights | © Václav Skala - Union Agency | cs
dc.subject | multimodalita | cs
dc.subject | vyhledávání | cs
dc.subject | shrnutí | cs
dc.subject | vizualizace | cs
dc.subject | konvoluční neuronová síť | cs
dc.subject | relační síť | cs
dc.title | A new model driven architecture for deep learning-based multimodal lifelog retrieval | en
dc.type | konferenční příspěvek | cs
dc.type | conferenceObject | en
dc.rights.access | openAccess | en
dc.type.version | publishedVersion | en
dc.subject.translated | multimodality | en
dc.subject.translated | retrieval | en
dc.subject.translated | summarization | en
dc.subject.translated | visualization | en
dc.subject.translated | convolutional neural network | en
dc.subject.translated | relational network | en
dc.identifier.doi | https://doi.org/10.24132/CSRN.2018.2803.2 |
dc.type.status | Peer-reviewed | en
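
The abstract above describes a final phase that performs diversity-based summarization with k-means to offer the lifelogger a key-frame selection. The following is a minimal, hedged sketch of that idea, not the authors' implementation: it assumes precomputed CNN feature vectors per image and uses scikit-learn's KMeans; the function name select_keyframes, the n_keyframes parameter and the 512-dimensional toy features are illustrative assumptions.

# Hedged sketch (not the authors' code): diversity-based summarization of
# lifelog image features with k-means, returning one representative
# key-frame per cluster, in the spirit of the abstract's final phase.
import numpy as np
from sklearn.cluster import KMeans

def select_keyframes(features: np.ndarray, n_keyframes: int = 5) -> np.ndarray:
    """Cluster image feature vectors and return, for each cluster, the index
    of the frame closest to the cluster centre (one key-frame per cluster)."""
    kmeans = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0)
    labels = kmeans.fit_predict(features)
    keyframe_ids = []
    for c in range(n_keyframes):
        members = np.where(labels == c)[0]
        # Distance of every cluster member to its centre; keep the closest frame.
        dists = np.linalg.norm(features[members] - kmeans.cluster_centers_[c], axis=1)
        keyframe_ids.append(members[np.argmin(dists)])
    return np.array(sorted(keyframe_ids))

if __name__ == "__main__":
    # Toy example: 200 images described by hypothetical 512-d CNN features.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(200, 512))
    print(select_keyframes(feats, n_keyframes=5))

Picking the member nearest each cluster centre keeps the summary diverse (one frame per visual cluster) while staying representative; a real pipeline would feed in the features extracted in the first phase rather than random vectors.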
Appears in Collections: WSCG 2018: Poster Papers Proceedings

Files in This Item:
File | Description | Size | Format
Abdallah.pdf | Full text | 1,33 MB | Adobe PDF


Please use this identifier to cite or link to this item: http://hdl.handle.net/11025/34632

All items in DSpace are protected by copyright, with all rights reserved.