Large-scale human behavioral data is needed to accelerate human-centered AI research and provide benchmarks for the field. To address this need, Hume AI is working with researchers and organizations around the world to co-organize machine learning competitions using our novel datasets. Our upcoming competitions center on understanding emotional expression using multi-modal, multi-task, generative, and few-shot learning methods.

Each competition will center on a new dataset capturing an understudied modality or context for human emotional behavior (e.g. voice, face, gesture, multi-modal, self-reported experience, social interaction).

We welcome both academic and industry teams to participate.

 

The 6th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) will be held in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR), 2024. The ABAW Workshop and Competition is a continuation of the respective Workshops and Competitions held at the IEEE CVPR 2023, ECCV 2022, IEEE CVPR 2022, ICCV 2021, IEEE FG 2020 (a), IEEE FG 2020 (b), and IEEE CVPR 2017 conferences.

The ABAW Workshop and Competition has a unique aspect of fostering cross-pollination of different disciplines, bringing together experts (from academia, industry, and government) and researchers of mobile and ubiquitous computing, computer vision and pattern recognition, artificial intelligence and machine learning, multimedia, robotics, HCI, ambient intelligence and psychology. The diversity of human behavior, the richness of multi-modal data that arises from its analysis, and the multitude of applications that demand rapid progress in this area ensure that our events provide a timely and relevant discussion and dissemination platform. 

 

ACM Multimedia 2023 Computational Paralinguistics Challenge (ComParE)

The ACM Multimedia 2023 Computational Paralinguistics ChallengE (ComParE) is an open Grand Challenge dealing with states and traits of speakers as manifested in their speech signal’s properties and beyond. There have so far been 14 consecutive Challenges, held annually at INTERSPEECH 2009–2021 and at ACM Multimedia 2022. This year introduces two new tasks: the Emotion Share Sub-Challenge and the Requests Sub-Challenge.

The ACM Multimedia 2023 Computational Paralinguistics Challenge (ComParE) shall help bridge the gap between excellent research on paralinguistic information in audio and other modalities and the low compatibility of results. The results of the Challenge will be presented at ACM Multimedia 2023 in Ottawa between 29 October and 3 November 2023. Prizes will be awarded to the Sub-Challenge winners.

 

The 5th Workshop and Competition on Affective Behavior Analysis in-the-wild (ABAW) will be held in conjunction with the IEEE Computer Vision and Pattern Recognition Conference (CVPR), 2023. The event will take place in the morning on 19 June. The ABAW Workshop and Competition is a continuation of the respective Workshops and Competitions held at the ECCV 2022, IEEE CVPR 2022, ICCV 2021, IEEE FG 2020 (a), IEEE FG 2020 (b), and IEEE CVPR 2017 conferences.


 

The 2022 ACII Affective Vocal Burst Workshop & Competition (A-VB) is a workshop-based competition that introduces the problem of understanding emotion in vocal bursts – a wide range of non-verbal vocalizations that includes laughs, grunts, gasps, and much more. With affective states informing both mental and physical wellbeing, the core focus of the A-VB workshop is the broader discussion of current strategies in affective computing for modeling emotional behavior.

Within this first iteration of the A-VB Challenge, participants will be presented with four emotion-focused sub-challenges that utilize the large-scale, “in-the-wild” Hume-VB dataset. The dataset and the four tracks draw attention to innovations in emotion science as it pertains to vocal expression, addressing low- and high-dimensional theories of emotional expression, cultural variation, and “call types” (laugh, cry, sigh, etc.).

 

Cosponsored by Hume AI, Mila, and the National Film Board of Canada, the ICML Expressive Vocalizations (ExVo) Workshop & Competition is focused on the machine learning problem of understanding and generating vocal bursts – a wide range of expressive non-verbal vocalizations.

Participants of ExVo will be presented with three tasks that utilize a single dataset. The dataset and three tasks draw attention to innovations in emotion science and capture 10 dimensions of emotion reliably perceived in distinct vocal bursts [1]: Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness, and Surprise.

These tasks highlight the need for advanced machine learning techniques for the recognition, generation, and personalization of non-verbal communication. With studies of vocal emotional expression often relying on significantly smaller datasets insufficient to apply the latest machine learning innovations [2], the ExVo competition and workshop will provide an unprecedented platform for the development of novel strategies for understanding vocal bursts and will enable unique forms of collaborations by leading researchers from diverse disciplines.
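Recognition over these ten emotion dimensions amounts to multi-output regression, and work in this area is commonly scored with the concordance correlation coefficient (CCC) averaged across dimensions. As a minimal sketch of that metric (the exact scoring used by the challenge may differ; the function names here are illustrative):

```python
import numpy as np

EMOTIONS = ["Awe", "Excitement", "Amusement", "Awkwardness", "Fear",
            "Horror", "Distress", "Triumph", "Sadness", "Surprise"]

def ccc(y_true, y_pred):
    """Concordance correlation coefficient between two 1-D arrays.

    Equals 1 for perfect agreement; penalizes both scale and
    location shifts, unlike plain Pearson correlation.
    """
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

def mean_ccc(Y_true, Y_pred):
    """Average CCC over the emotion columns of (N, 10) arrays."""
    return float(np.mean([ccc(Y_true[:, k], Y_pred[:, k])
                          for k in range(Y_true.shape[1])]))
```

A model's predictions for N vocal bursts would be evaluated as `mean_ccc(labels, predictions)` on two `(N, 10)` arrays ordered as in `EMOTIONS`.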

[1] Cowen, A. S., Elfenbein, H. A., Laukka, P., & Keltner, D. (2019). Mapping 24 emotions conveyed by brief human vocalization. American Psychologist, 74(6), 698–712.

[2] Batliner, A., Hantke, S., & Schuller, B. W. (2020). Ethics and good practice in computational paralinguistics. IEEE Transactions on Affective Computing.


 

The MuSe 2022 Workshop and Competition is dedicated to multimodal sentiment and emotion understanding. Hume is contributing to this year's challenge by releasing the Hume-Reaction dataset, which includes more than 70 hours of video of 2,222 individuals spontaneously reacting to 1,841 evocative video elicitors, each annotated with seven self-reported emotions and their intensities on a 1-100 scale.

MuSe 2022 is seeking participants from the communities of audio-visual emotion understanding, health informatics, and symbolic sentiment analysis. Using newly introduced datasets, MuSe 2022 addresses three problems central to affective computing. The MuSe-Reaction track focuses on the core problem of inferring self-reported emotion from multimodal expression, using multi-output regression to predict fine-grained self-report annotations of seven ‘in-the-wild’ emotional experiences.

The baseline white paper for MuSe 2022 has now been released, describing both the datasets and the feature sets extracted from them. A recurrent neural network with LSTM-cells is used to establish a competitive baseline for each track. For more information on the competition, please visit the MuSe-2022 webpage.
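The shape of such an LSTM-based multi-output regression baseline can be sketched as follows. This is a minimal NumPy illustration, not the organizers' baseline code; the feature dimension, hidden size, initialization, and the sigmoid mapping of the seven intensity outputs to [0, 1] are all assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gate pre-activations stacked as [i, f, o, g]."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])        # input gate
    f = sigmoid(z[H:2*H])      # forget gate
    o = sigmoid(z[2*H:3*H])    # output gate
    g = np.tanh(z[3*H:4*H])    # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def predict_intensities(features, params):
    """Run the LSTM over a (T, D) feature sequence and regress seven
    emotion intensities in [0, 1] from the final hidden state."""
    W, U, b, W_out, b_out = params
    H = U.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    for x in features:
        h, c = lstm_step(x, h, c, W, U, b)
    return sigmoid(W_out @ h + b_out)

rng = np.random.default_rng(0)
D, H, K = 40, 16, 7                       # feature dim, hidden size, seven emotions (assumed)
params = (rng.normal(0, 0.1, (4*H, D)),   # input weights W
          rng.normal(0, 0.1, (4*H, H)),   # recurrent weights U
          np.zeros(4*H),                  # gate biases b
          rng.normal(0, 0.1, (K, H)),     # output head weights
          np.zeros(K))                    # output head bias
seq = rng.normal(size=(50, D))            # e.g. 50 frames of extracted features
y = predict_intensities(seq, params)      # seven intensity predictions
```

In practice the weights would be trained against the 1-100 self-report annotations (rescaled to [0, 1]) with a regression loss; a framework such as PyTorch would replace this hand-rolled cell.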

For access to the Hume-Reaction data please email competitions@hume.ai.