Data
-------
Bengali Visual Genome (BVG for short) 1.0 has similar goals to Hindi Visual Genome (HVG) 1.1: to support the Bengali language. Bengali Visual Genome 1.0 is a multimodal dataset in Bengali for machine translation and image captioning, consisting of text and images suitable for English-to-Bengali multimodal machine translation tasks and multimodal research. We follow the same selection of short English segments (captions) and the associated images from Visual Genome as HVG 1.1. For BVG, we manually translated these captions from English to Bengali, taking the associated images into account. The manual translation was performed by native Bengali speakers without referring to any machine translation system.
The training set contains 29K segments. Further 1K and 1.6K segments are provided in development and test sets, respectively, which follow the same (random) sampling from the original Hindi Visual Genome. A third test set is
called the ``challenge test set'' and consists of 1.4K segments. The challenge test set was created for the WAT2019 multi-modal task by searching for (particularly) ambiguous English words based on the embedding similarity and
manually selecting those where the image helps to resolve the ambiguity. However, the surrounding words in the sentence also often include sufficient cues to identify the correct meaning of the ambiguous word.
Dataset Formats
---------------
The multimodal dataset contains both text and images.
The text parts of the dataset (train and test sets) are in simple tab-delimited plain text files.
All the text files have seven columns as follows:
Column1 - image_id
Column2 - X
Column3 - Y
Column4 - Width
Column5 - Height
Column6 - English Text
Column7 - Bengali Text
The image part contains the full images with the corresponding image_id as the file name. The X, Y, Width and Height columns indicate the rectangular region in the image described by the caption.
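For illustration, the following minimal Python sketch loads the text part and parses each caption record; the file name used here is a placeholder for the actual train/dev/test file shipped in the release.

    import csv

    # Placeholder file name; substitute the actual train/dev/test file from the release.
    PATH = "bengali-visual-genome-train.txt"

    with open(PATH, encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for image_id, x, y, width, height, english, bengali in reader:
            # (x, y, width, height) delimit the image region described by the caption.
            region = (int(x), int(y), int(width), int(height))
            print(image_id, region, english, bengali)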
Data Statistics
---------------
The statistics of the current release are given below.
Parallel Corpus Statistics
--------------------------
Dataset          Segments   English Words   Bengali Words
---------------  --------   -------------   -------------
Train               28930          143115          113978
Dev                   998            4922            3936
Test                 1595            7853            6408
Challenge Test       1400            8186            6657
---------------  --------   -------------   -------------
Total               32923          164076          130979
The word counts are approximate, prior to tokenization.
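As a rough illustration of how such approximate counts can be reproduced (a sketch only, not necessarily the exact script used for this release), whitespace splitting of one text column suffices:

    def count_words(tsv_path, column):
        """Approximate word count of one text column: whitespace splitting, no tokenization."""
        total = 0
        with open(tsv_path, encoding="utf-8") as f:
            for line in f:
                fields = line.rstrip("\n").split("\t")
                total += len(fields[column].split())
        return total

    # Columns are 0-based here: 5 = English Text, 6 = Bengali Text.
    # count_words("bengali-visual-genome-train.txt", 5)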
Citation
--------
If you use this corpus, please cite the following paper:
@inproceedings{bengali-visual-genome:2022,
title= "{Bengali Visual Genome: A Multimodal Dataset for Machine Translation and Image Captioning}",
author={Sen, Arghyadeep
and Parida, Shantipriya
and Kotwal, Ketan
and Panda, Subhadarshi
and Bojar, Ond{\v{r}}ej
and Dash, Satya Ranjan},
editor={Satapathy, Suresh Chandra
and Peer, Peter
and Tang, Jinshan
and Bhateja, Vikrant
and Ghosh, Anumoy},
booktitle= {Intelligent Data Engineering and Analytics},
publisher= {Springer Nature Singapore},
address= {Singapore},
pages = {63--70},
isbn = {978-981-16-6624-7},
doi = {10.1007/978-981-16-6624-7_7},
}
CUBBITT En-Cs translation models, exported via TensorFlow Serving, available in the Lindat translation service (https://lindat.mff.cuni.cz/services/translation/).
Models are compatible with Tensor2tensor version 1.6.6.
For details about the model training (data, model hyper-parameters), please contact the archive maintainer.
Evaluation on newstest2014 (BLEU):
en->cs: 27.6
cs->en: 34.4
(Evaluated using multeval: https://github.com/jhclark/multeval)
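For quick use of the deployed models, the Lindat translation service also exposes a REST API; the endpoint path and parameter names in the sketch below are assumptions and should be checked against the API documentation on the service page.

    import requests

    # Assumed REST endpoint of the Lindat translation service; verify before use.
    URL = "https://lindat.mff.cuni.cz/services/translation/api/v2/languages/"

    def translate(text, src="en", tgt="cs"):
        response = requests.post(URL, params={"src": src, "tgt": tgt},
                                 data={"input_text": text})
        response.raise_for_status()
        return response.text.strip()

    print(translate("This is a sample sentence."))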
CUBBITT En-Fr translation models, exported via TensorFlow Serving, available in the Lindat translation service (https://lindat.mff.cuni.cz/services/translation/).
Models are compatible with Tensor2tensor version 1.6.6.
For details about the model training (data, model hyper-parameters), please contact the archive maintainer.
Evaluation on newstest2014 (BLEU):
en->fr: 38.2
fr->en: 36.7
(Evaluated using multeval: https://github.com/jhclark/multeval)
CUBBITT En-Pl translation models, exported via TensorFlow Serving, available in the Lindat translation service (https://lindat.mff.cuni.cz/services/translation/).
Models are compatible with Tensor2tensor version 1.6.6.
For details about the model training (data, model hyper-parameters), please contact the archive maintainer.
Evaluation on newstest2020 (BLEU):
en->pl: 12.3
pl->en: 20.0
(Evaluated using multeval: https://github.com/jhclark/multeval)
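To reproduce such scores with multeval, the evaluation can be driven from Python as in the sketch below; the file names are placeholders, and both hypothesis and reference files are assumed to be tokenized with one segment per line.

    import subprocess

    # Placeholder file names; hypotheses and references must be aligned line by line.
    subprocess.run(
        [
            "./multeval.sh", "eval",
            "--refs", "newstest2020.ref.en",           # reference translations
            "--hyps-baseline", "newstest2020.hyp.en",  # system output (pl->en here)
            "--meteor.language", "en",
        ],
        check=True,
    )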
Data
-------
Hausa Visual Genome 1.0 is a multimodal dataset consisting of text and images suitable for English-to-Hausa multimodal machine translation tasks and multimodal research. We follow the same selection of short English segments (captions) and the associated images from Visual Genome as Hindi Visual Genome 1.1. We automatically translated the English captions to Hausa and manually post-edited them, taking the associated images into account.
The training set contains 29K segments. Further 1K and 1.6K segments are provided in development and test sets, respectively, which follow the same (random) sampling from the original Hindi Visual Genome.
Additionally, a challenge test set of 1400 segments is available for the multi-modal task. This challenge test set was created in Hindi Visual Genome by searching for (particularly) ambiguous English words based on the embedding similarity and manually selecting those where the image helps to resolve the ambiguity.
Dataset Formats
-----------------------
The multimodal dataset contains both text and images.
The text parts of the dataset (train and test sets) are in simple tab-delimited plain text files.
All the text files have seven columns as follows:
Column1 - image_id
Column2 - X
Column3 - Y
Column4 - Width
Column5 - Height
Column6 - English Text
Column7 - Hausa Text
The image part contains the full images with the corresponding image_id as the file name. The X, Y, Width, and Height columns indicate the rectangular region in the image described by the caption.
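Since each caption describes only a rectangular region of the full image, it is often useful to crop that region before pairing it with the text. A minimal sketch using Pillow follows; the directory layout and the .jpg extension are assumptions about this release.

    from PIL import Image  # pip install pillow

    def crop_region(image_dir, image_id, x, y, width, height):
        """Crop the region described by one caption from the full image."""
        # Full images are named by image_id; the file extension is assumed here.
        img = Image.open(f"{image_dir}/{image_id}.jpg")
        return img.crop((x, y, x + width, y + height))

    # Example call with placeholder values taken from one TSV row:
    # crop_region("images", "placeholder_image_id", 10, 20, 100, 80)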
Data Statistics
--------------------
The statistics of the current release are given below.
Parallel Corpus Statistics
-----------------------------------
Dataset          Segments   English Words   Hausa Words
---------------  --------   -------------   -----------
Train               28930          143106        140981
Dev                   998            4922          4857
Test                 1595            7853          7736
Challenge Test       1400            8186          8752
---------------  --------   -------------   -----------
Total               32923          164067        162326
The word counts are approximate, prior to tokenization.
Citation
-----------
If you use this corpus, please cite the following paper:
@InProceedings{abdulmumin-EtAl:2022:LREC,
author = {Abdulmumin, Idris
and Dash, Satya Ranjan
and Dawud, Musa Abdullahi
and Parida, Shantipriya
and Muhammad, Shamsuddeen
and Ahmad, Ibrahim Sa'id
and Panda, Subhadarshi
and Bojar, Ond{\v{r}}ej
and Galadanci, Bashir Shehu
and Bello, Bello Shehu},
title = "{Hausa Visual Genome: A Dataset for Multi-Modal English to Hausa Machine Translation}",
booktitle = {Proceedings of the Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {6471--6479},
url = {https://aclanthology.org/2022.lrec-1.694}
}
Data
----
Hindi Visual Genome 1.1 is an updated version of Hindi Visual Genome 1.0. The update primarily concerns the text part of Hindi Visual Genome, fixing translation issues reported during the WAT 2019 multi-modal task. In the image part, only one segment and thus one image were removed from the dataset.
Hindi Visual Genome 1.1 serves in "WAT 2020 Multi-Modal Machine Translation Task".
Hindi Visual Genome is a multimodal dataset consisting of text and images suitable for English-to-Hindi multimodal machine translation tasks and multimodal research. We have selected short English segments (captions) from Visual Genome along with the associated images and automatically translated them to Hindi with manual post-editing, taking the associated images into account.
The training set contains 29K segments. Further 1K and 1.6K segments are provided in development and test sets, respectively, which follow the same (random) sampling from the original Hindi Visual Genome.
A third test set, called the ``challenge test set'', consists of 1.4K segments and was released for the WAT2019 multi-modal task. The challenge test set was created by searching for (particularly) ambiguous English words based on the embedding similarity and manually selecting those where the image helps to resolve the ambiguity. However, the surrounding words in the sentence also often include sufficient cues to identify the correct meaning of the ambiguous word.
Dataset Formats
--------------
The multimodal dataset contains both text and images.
The text parts of the dataset (train and test sets) are in simple
tab-delimited plain text files.
All the text files have seven columns as follows:
Column1 - image_id
Column2 - X
Column3 - Y
Column4 - Width
Column5 - Height
Column6 - English Text
Column7 - Hindi Text
The image part contains the full images with the corresponding image_id as the file name. The X, Y, Width and Height columns indicate the rectangular region in the image described by the caption.
Data Statistics
----------------
The statistics of the current release are given below.
Parallel Corpus Statistics
---------------------------
Dataset          Segments   English Words   Hindi Words
---------------  --------   -------------   -----------
Train               28930          143164        145448
Dev                   998            4922          4978
Test                 1595            7853          7852
Challenge Test       1400            8186          8639
---------------  --------   -------------   -----------
Total               32923          164125        166917
The word counts are approximate, prior to tokenization.
Citation
--------
If you use this corpus, please cite the following paper:
@article{hindi-visual-genome:2019,
title={{Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation}},
author={Parida, Shantipriya and Bojar, Ond{\v{r}}ej and Dash, Satya Ranjan},
journal={Computaci{\'o}n y Sistemas},
volume={23},
number={4},
pages={1499--1505},
year={2019}
}
Data
-------
Malayalam Visual Genome (MVG for short) 1.0 has similar goals as Hindi Visual Genome (HVG) 1.1: to support the Malayalam language. Malayalam Visual Genome 1.0 is the first multi-modal dataset in Malayalam for machine translation and image captioning.
Malayalam Visual Genome 1.0 serves in "WAT 2021 Multi-Modal Machine Translation Task".
Malayalam Visual Genome is a multimodal dataset consisting of text and images suitable for English-to-Malayalam multimodal machine translation tasks and multimodal research. We follow the same selection of short English segments (captions) and the associated images from Visual Genome as HVG 1.1. For MVG, we automatically translated these captions from English to Malayalam and manually corrected them, taking the associated images into account.
The training set contains 29K segments. Further 1K and 1.6K segments are provided in development and test sets, respectively, which follow the same (random) sampling from the original Hindi Visual Genome.
A third test set is called the ``challenge test set'' and consists of 1.4K segments. The challenge test set was created for the WAT2019 multi-modal task by searching for (particularly) ambiguous English words based on the embedding similarity and manually selecting those where the image helps to resolve the ambiguity. However, the surrounding words in the sentence also often include sufficient cues to identify the correct meaning of the ambiguous word. For MVG, we simply translated the English side of the test sets to Malayalam, again utilizing machine translation to speed up the process.
Dataset Formats
----------------------
The multimodal dataset contains both text and images.
The text parts of the dataset (train and test sets) are in simple tab-delimited plain text files.
All the text files have seven columns as follows:
Column1 - image_id
Column2 - X
Column3 - Y
Column4 - Width
Column5 - Height
Column6 - English Text
Column7 - Malayalam Text
The image part contains the full images with the corresponding image_id as the file name. The X, Y, Width and Height columns indicate the rectangular region in the image described by the caption.
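For a text-only MT baseline, the two text columns can also be extracted into a conventional parallel corpus; below is a sketch under the assumption of the tab-separated layout described above (file names are placeholders).

    # Extract plain parallel text (one segment per line) from the TSV.
    with open("malayalam-visual-genome-train.txt", encoding="utf-8") as tsv, \
         open("train.en", "w", encoding="utf-8") as en_out, \
         open("train.ml", "w", encoding="utf-8") as ml_out:
        for line in tsv:
            fields = line.rstrip("\n").split("\t")
            en_out.write(fields[5] + "\n")  # Column6 - English Text
            ml_out.write(fields[6] + "\n")  # Column7 - Malayalam Text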
Data Statistics
-------------------
The statistics of the current release are given below.
Parallel Corpus Statistics
---------------------------------
Dataset          Segments   English Words   Malayalam Words
---------------  --------   -------------   ---------------
Train               28930          143112            107126
Dev                   998            4922              3619
Test                 1595            7853              5689
Challenge Test       1400            8186              6044
---------------  --------   -------------   ---------------
Total               32923          164073            122478
The word counts are approximate, prior to tokenization.
Citation
-----------
If you use this corpus, please cite the following paper:
@article{hindi-visual-genome:2019,
  title={{Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation}},
  author={Parida, Shantipriya and Bojar, Ond{\v{r}}ej and Dash, Satya Ranjan},
  journal={Computaci{\'o}n y Sistemas},
  volume={23},
  number={4},
  pages={1499--1505},
  year={2019}
}
En-De translation models, exported via TensorFlow Serving, available in the Lindat translation service (https://lindat.mff.cuni.cz/services/translation/).
The models were trained using the MCSQ social surveys dataset (available at https://repo.clarino.uib.no/xmlui/bitstream/handle/11509/142/mcsq_v3.zip).
Their main use should be in-domain translation of social surveys.
Models are compatible with Tensor2tensor version 1.6.6.
For details about the model training (data, model hyper-parameters), please contact the archive maintainer.
Evaluation on MCSQ test set (BLEU):
en->de: 67.5 (train: genuine in-domain MCSQ data only)
de->en: 75.0 (train: additional in-domain backtranslated MCSQ data)
(Evaluated using multeval: https://github.com/jhclark/multeval)
En-Ru translation models, exported via TensorFlow Serving, available in the Lindat translation service (https://lindat.mff.cuni.cz/services/translation/).
The models were trained using the MCSQ social surveys dataset (available at https://repo.clarino.uib.no/xmlui/bitstream/handle/11509/142/mcsq_v3.zip).
Their main use should be in-domain translation of social surveys.
Models are compatible with Tensor2tensor version 1.6.6.
For details about the model training (data, model hyper-parameters), please contact the archive maintainer.
Evaluation on MCSQ test set (BLEU):
en->ru: 64.3 (train: genuine in-domain MCSQ data)
ru->en: 74.7 (train: additional backtranslated in-domain MCSQ data)
(Evaluated using multeval: https://github.com/jhclark/multeval)
This submission contains Dockerfile for creating a Docker image with compiled Tensor2tensor backend with compatible (TensorFlow Serving) models available in the Lindat Translation service (https://lindat.mff.cuni.cz/services/transformer/). Additionally, the submission contains a web frontend for simple in-browser access to the dockerized backend service.
Tensor2Tensor (https://github.com/tensorflow/tensor2tensor) is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
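As a quick smoke test of the dockerized backend, TensorFlow Serving's model-status REST endpoint can be polled; the host, port, and model name in the sketch below are placeholders that depend on how the container is started.

    import requests

    # 8501 is TensorFlow Serving's default REST port; "en-cs" is a placeholder model name.
    STATUS_URL = "http://localhost:8501/v1/models/en-cs"

    resp = requests.get(STATUS_URL, timeout=10)
    resp.raise_for_status()
    print(resp.json())  # lists the model versions and their state (e.g. "AVAILABLE")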