The BLEMORE workshop and competition on multimodal blended emotion recognition will be held at the 13th International Conference on Affective Computing and Intelligent Interaction (ACII 2025) in Canberra, Australia. BLEMORE aims to spur further work on the still under-represented topic of blended emotion recognition. By combining a competition with a workshop format, we hope that BLEMORE will lead to concrete and measurable technical advances while also providing a forum for reflection and exchange between researchers working in the field. The workshop will take place on 11 October 2025.
Mailing list
To stay up-to-date, please sign up to our mailing list here: https://www.dfki.de/mailman/cgi-bin/listinfo/blemore-workshop.
Please be aware that you need to confirm your list membership via email. We will distribute any news and updates regarding the workshop (e.g., clarifications of the challenge rules in case of uncertainties) via the mailing list.
News
Citable draft paper released (2025-06-24) along with audio and multimodal baselines
We have released a draft version of our paper, which provides an overview of the BLEMORE workshop and competition, including the dataset, evaluation metrics, and baselines for visual, audio, and multimodal emotion recognition. The paper is available as a technical report on GitHub.
To cite the paper, please follow the instructions on GitHub.
Test Dataset Released (2025-06-15)
We’ve now published the test partition of the BlEmoRe dataset on Zenodo. The new Zenodo version includes a set of test videos and a submission template.
Participants are invited to submit their predictions in the template format to Petri Laukka (petri.laukka@psyk.uu.se).
We will evaluate them on our server. The evaluation will be based on the two metrics defined in the challenge: ACC_presence and ACC_salience.
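For orientation, below is a minimal, hypothetical sketch of what a presence-style accuracy could look like if predictions and annotations are treated as sets of emotion labels per clip. The official definitions of ACC_presence and ACC_salience are given in the draft paper and challenge materials; the function name, label sets, and exact-match criterion here are illustrative assumptions, not the official evaluation code.

```python
# Hypothetical illustration only: an exact-match, set-based accuracy.
# The official ACC_presence and ACC_salience definitions are in the
# challenge paper; the labels below are invented for the example.
def presence_accuracy(y_true, y_pred):
    """Fraction of clips whose predicted emotion set exactly matches the annotation."""
    assert len(y_true) == len(y_pred)
    hits = sum(set(t) == set(p) for t, p in zip(y_true, y_pred))
    return hits / len(y_true)

# Two blended-emotion clips and one single-emotion clip (invented labels).
y_true = [{"happiness", "surprise"}, {"anger"}, {"fear", "sadness"}]
y_pred = [{"happiness", "surprise"}, {"anger"}, {"fear"}]
print(presence_accuracy(y_true, y_pred))  # 2/3 ≈ 0.667
```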
New Baselines Released (2025-06-09)
New baselines for the BLEMORE Challenge are now available.
The baselines include models using features extracted from pre-trained video and image encoders:
- CLIP
- ImageBind
- VideoMAEv2
- Video Swin Transformer
Features are either aggregated into video-level representations or subsampled from short clips. Lightweight feedforward models (Linear, MLP) are trained to predict emotion presence and salience following the challenge evaluation protocol.
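As a rough illustration of this setup (not the official baseline code), the sketch below trains a small MLP head on pre-extracted, video-level features. The feature dimension, number of emotion classes, and the multi-label binary presence target are assumptions made for the example.

```python
# Minimal sketch, assuming features (e.g., mean-pooled CLIP embeddings) have
# already been extracted per video. Dimensions and targets are illustrative.
import torch
import torch.nn as nn

class MLPHead(nn.Module):
    def __init__(self, feat_dim=512, num_emotions=5, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_emotions),
        )

    def forward(self, x):
        # x: (batch, feat_dim) video-level feature vectors
        return self.net(x)  # one logit per emotion

model = MLPHead()
criterion = nn.BCEWithLogitsLoss()          # multi-label presence prediction
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for pre-extracted features and presence labels.
features = torch.randn(8, 512)
labels = torch.randint(0, 2, (8, 5)).float()

optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()
```

Keeping the trainable head lightweight lets the frozen pre-trained encoders carry most of the representational load; the actual baselines and feature extraction scripts are in the repository linked below.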
Code and instructions are available in the updated GitHub repository.
Paper Submission (2025-06-09)
Submissions to the BLEMORE Workshop are now open!
Please use the following link to submit your paper. We invite submissions in two tracks:
1. General Track: We invite submissions concerned with the general problem of blended emotion recognition.
2. Challenge Track: In this track, we invite submissions which utilize the challenge dataset and pre-defined evaluation metrics.
See the Call for Papers for more details.
Basic Code for Baselines released (2025-04-08)
Available on GitHub.
Call for Papers released (2025-04-08)
The call for papers for the BLEMORE workshop and competition is now available: Call for Papers
Training dataset released (2025-03-27)
The training dataset for the BLEMORE workshop and competition is now available on Zenodo. Please feel free to contact us with any questions concerning the dataset: petri.laukka@psyk.uu.se