End Date: 01/01/2011
Project Leader: Koutsombogera Maria
The main objective of the Action was to develop an advanced acoustical, perceptual and psychological analysis of the verbal and non-verbal communication signals that arise in spontaneous face-to-face interaction, in order to identify algorithms and automatic procedures capable of recognizing human emotional states. Several key aspects were considered, such as the integration of the developed algorithms and procedures into telecommunication applications and into the recognition of emotional states, gestures, speech and facial expressions, in anticipation of intelligent avatars and interactive dialogue systems that could be exploited to improve user access to future telecommunication services.
The results of the Action were threefold:
- It contributed to the establishment of quantitative and qualitative features describing both verbal and non-verbal modalities.
- It advanced technological support for the development of improved multimodal systems, i.e. systems exploiting signals from several modalities.
- It contributed to new theories to clarify the role of verbal and non-verbal modalities in communication, and their exploitation in telecommunication services.