Eyetracked Multi-Modal Translation (EMMT) is a simultaneous eye-tracking, EEG and audio corpus for multi-modal reading and translation scenarios. It contains monocular eye movement recordings, audio data and 4-electrode wearable electroencephalogram (EEG) data recorded from 43 participants engaged in sight translation supported by an image.
Details about the experiment and the dataset can be found in the README file.
Funding:
- Czech Science Foundation, grant 19-26934X: Neural Representations in Multi-modal and Multi-lingual Modelling (national funds)
- European Union, grant EC/H2020/825303: Bergamot - Browser-based Multilingual Translation (EU funds; info:eu-repo/grantAgreement/EC/H2020/825303)
- Ministry of Education, Youth and Sports of the Czech Republic, grant LM2018101: LINDAT/CLARIAH-CZ: Digital Research Infrastructure for Language Technologies, Arts and Humanities (national funds)