CHOR, Maxime (2018) Multimodal Representation Learning. PRE - Research Project, ENSTA.


Abstract

With the remarkable progress of technologies for processing speech, language, emotion, and facial expression, interaction between humans and computers increasingly involves multimodal data, and human-computer interaction (HCI) interfaces play an important role in daily life. This work focuses on a novel application of deep learning to learning feature representations over multiple modalities. We present several experiments and show how to train a deep autoencoder to learn a good joint representation of multiple feature sets. Our experimental results are validated on the RECOLA dataset from the AVEC 2015 research challenge on emotion recognition, and they demonstrate that the proposed approach is an efficient and robust way to produce a useful shared representation.
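To make the idea concrete, below is a minimal sketch of a multimodal deep autoencoder of the kind the abstract describes: modality-specific encoders are fused into one shared code, which is then decoded back to each modality. The modality names, dimensions, layer sizes, and training step are illustrative assumptions, not values taken from the report itself.

```python
# Minimal sketch of a multimodal deep autoencoder (PyTorch).
# All dimensions and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    def __init__(self, audio_dim=88, video_dim=316, shared_dim=64):
        super().__init__()
        # Modality-specific encoders map each input to a hidden code.
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
        self.video_enc = nn.Sequential(nn.Linear(video_dim, 128), nn.ReLU())
        # Shared layer fuses both hidden codes into one representation.
        self.shared = nn.Sequential(nn.Linear(256, shared_dim), nn.ReLU())
        # Decoders reconstruct each modality from the shared code.
        self.audio_dec = nn.Linear(shared_dim, audio_dim)
        self.video_dec = nn.Linear(shared_dim, video_dim)

    def forward(self, audio, video):
        h = torch.cat([self.audio_enc(audio), self.video_enc(video)], dim=1)
        z = self.shared(h)  # shared multimodal representation
        return self.audio_dec(z), self.video_dec(z), z

model = MultimodalAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a random mini-batch (placeholder data).
audio = torch.randn(32, 88)
video = torch.randn(32, 316)
audio_hat, video_hat, _ = model(audio, video)
loss = loss_fn(audio_hat, audio) + loss_fn(video_hat, video)
opt.zero_grad()
loss.backward()
opt.step()
```

After training, the bottleneck activation `z` can serve as the shared feature representation for downstream emotion-recognition models; the reconstruction loss over both modalities encourages the code to retain information from each.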

Item Type: Thesis (PRE - Research Project)
Subjects: Information and Communication Sciences and Technologies
ID Code: 7150
Deposited By: Maxime Chor
Deposited On: 12 June 2019 11:04
Last Modified: 12 June 2019 11:04
