Title: | Human action recognition based on 3D convolution neural networks from RGBD videos |
Authors: | Al-Akam, Rawya; Paulus, Dietrich; Gharabaghi, Darius |
Source document citation: | WSCG 2018: Poster Papers Proceedings: 26th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, in co-operation with the EUROGRAPHICS Association, pp. 18-26. |
Issue date: | 2018 |
Publisher: | Václav Skala - UNION Agency |
Document type: | conference paper (conferenceObject) |
URI: | wscg.zcu.cz/WSCG2018/!!_CSRN-2803.pdf http://hdl.handle.net/11025/34633 |
ISBN: | 978-80-86943-42-8 |
ISSN: | 2464-4617 |
Keywords: | action recognition;RGBD videos;optical flow;3D convolutional neural network;support vector machine |
Keywords in another language: | action recognition;RGBD videos;optical flow;3D convolutional neural network;support vector machines |
Abstract: | Human action recognition with color and depth sensors has received increasing attention in image processing and computer vision. The aim of this paper is to develop a novel deep model for recognizing human actions from fused RGB-D videos using a Convolutional Neural Network. This work proposes a novel 3D Convolutional Neural Network architecture that implicitly captures motion information between adjacent frames, in two main steps: First, optical flow is used to extract motion information from the spatio-temporal domain of the different RGB-D video actions; this information is used to compute the feature vector values of the deep 3D CNN model. Second, a 3D CNN is trained and evaluated on three input channels of the video sequences (RGB, depth, and the combination of both, RGB-D) to obtain a feature representation. To evaluate accuracy, Convolutional Neural Networks are trained on the different data channels; in addition, features extracted from the 3D Convolutional Neural Network are classified with a support vector machine to improve human action recognition. With these methods, we demonstrate that the test results from the combined RGB-D channels are better than the results from each channel trained separately by a baseline Convolutional Neural Network, and outperform the state of the art on the same public datasets. |
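The pipeline the abstract describes (motion extraction over adjacent frames, then a spatio-temporal 3D convolution over fused RGB-D clips) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the frame-difference motion channel is a crude stand-in for optical flow, the naive averaging fusion and the single random filter are illustrative assumptions, and a real model would stack many learned filters with nonlinearities and feed the resulting features to an SVM.

```python
import numpy as np

def conv3d(volume, kernel):
    # Valid-mode 3D convolution over a (T, H, W) volume: the core
    # spatio-temporal operation a 3D CNN layer applies to a video clip.
    T, H, W = volume.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

def motion_channel(volume):
    # Stand-in for optical flow: absolute temporal frame differences.
    return np.abs(np.diff(volume, axis=0))

rng = np.random.default_rng(0)
rgb = rng.random((8, 16, 16))                 # grayscale-reduced RGB clip, 8 frames
depth = rng.random((8, 16, 16))               # aligned depth clip
fused = np.stack([rgb, depth]).mean(axis=0)   # naive RGB-D fusion (assumption)

kernel = rng.standard_normal((3, 3, 3))       # one 3x3x3 spatio-temporal filter
features = conv3d(motion_channel(fused), kernel)
print(features.shape)  # → (5, 14, 14)
```

Differencing 8 frames yields 7 motion frames; a temporal kernel of depth 3 then produces 5 output frames, showing how the 3D convolution mixes information across adjacent frames rather than treating each frame independently.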
Rights: | © Václav Skala - Union Agency |
Appears in collections: | WSCG 2018: Poster Papers Proceedings |
Files attached to this record:
File | Description | Size | Format | |
---|---|---|---|---|
Al-Akam.pdf | Full text | 1.3 MB | Adobe PDF | View/Open |
Use this identifier to cite or link to this record:
http://hdl.handle.net/11025/34633
All items in DSpace are protected by copyright, with all rights reserved.