The collection consists of queries and documents provided by the Qwant search engine (https://www.qwant.com). The queries, issued by users of Qwant, are based on selected trending topics. The documents in the collection were selected with respect to these queries using the Qwant click model; in addition to the documents selected this way, the collection also contains randomly selected documents from the Qwant index. All data were collected during June 2022. In total, the collection contains 672 train queries with 9,656 corresponding assessments derived from the Qwant click model, and 98 held-out queries. The document set consists of 1,570,734 downloaded, cleaned and filtered web pages. Apart from the original French versions, the collection also contains English translations of the web pages and queries. The collection serves as the official training collection for the 2023 LongEval Information Retrieval Lab (https://clef-longeval.github.io/) organised at CLEF.
Document-level test suite for the evaluation of gender translation consistency.
Our document-level test set consists of selected English documents from the WMT21 newstest annotated with gender information. Unannotated Czech references are also included for convenience.
We semi-automatically annotated person names and pronouns to identify the gender of these elements as well as their coreferences.
Our proposed annotation consists of three elements: (1) an ID, (2) an element class, and (3) gender.
The ID identifies a person's name and its occurrences (name and pronouns).
The element class identifies whether the tag refers to a name or a pronoun.
Finally, the gender information defines whether the element is masculine or feminine.
We applied a series of NLP techniques to automatically identify person names and coreferences.
This initial process resulted in a set of 45 documents for manual annotation.
We then manually annotated these documents to make sure they are correctly tagged.
See README.md for more details.
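As a minimal sketch of the three-element annotation scheme described above (the class name and string values here are illustrative placeholders; the actual tag syntax is documented in the README.md):

```python
from dataclasses import dataclass

# Illustrative record for one annotated element; not the test set's actual tag syntax.
@dataclass
class GenderAnnotation:
    ref_id: int    # ID shared by a person's name and all its pronoun occurrences
    element: str   # element class: "name" or "pronoun"
    gender: str    # "masculine" or "feminine"

# One coreference chain: a feminine name and a pronoun referring back to it.
chain = [
    GenderAnnotation(ref_id=1, element="name", gender="feminine"),
    GenderAnnotation(ref_id=1, element="pronoun", gender="feminine"),
]

# Consistency check: all mentions in a chain must agree in gender.
assert len({a.gender for a in chain}) == 1
```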
The Moroccan Dialect Electronic Dictionary (MDED) is an electronic lexicon containing almost 15,000 MSA entries. They are written in Arabic script and translated into the Moroccan Arabic dialect. In addition, MDED entries are annotated with useful metadata such as POS, origin and root. MDED can be useful in advanced NLP applications such as machine translation and morphological analysis.
Source code of the first full, running version of the Malach Center User Interface; it does not contain data or metadata for the digital objects and resources.
Data
-------
Malayalam Visual Genome (MVG for short) 1.0 has similar goals to Hindi Visual Genome (HVG) 1.1: to support the Malayalam language. Malayalam Visual Genome 1.0 is the first multi-modal dataset in Malayalam for machine translation and image captioning.
Malayalam Visual Genome 1.0 serves as the data for the "WAT 2021 Multi-Modal Machine Translation Task".
Malayalam Visual Genome is a multimodal dataset consisting of text and images suitable for the English-to-Malayalam multimodal machine translation task and for multimodal research. We follow the same selection of short English segments (captions) and the associated images from Visual Genome as HVG 1.1. For MVG, we automatically translated these captions from English to Malayalam and manually corrected them, taking the associated images into account.
The training set contains 29K segments. A further 1K and 1.6K segments are provided in the development and test sets, respectively, which follow the same (random) sampling as the original Hindi Visual Genome.
A third test set, called the "challenge test set", consists of 1.4K segments. The challenge test set was created for the WAT2019 multi-modal task by searching for (particularly) ambiguous English words based on embedding similarity and manually selecting those where the image helps to resolve the ambiguity. The surrounding words in the sentence, however, often also provide sufficient cues to identify the correct meaning of the ambiguous word. For MVG, we simply translated the English side of the test sets into Malayalam, again using machine translation to speed up the process.
Dataset Formats
----------------------
The multimodal dataset contains both text and images.
The text parts of the dataset (train and test sets) are in simple tab-delimited plain text files.
All the text files have seven columns as follows:
Column1 - image_id
Column2 - X
Column3 - Y
Column4 - Width
Column5 - Height
Column6 - English Text
Column7 - Malayalam Text
The image part contains the full images with the corresponding image_id as the file name. The X, Y, Width and Height columns indicate the rectangular region in the image described by the caption.
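As a minimal sketch of reading one split file in the seven-column layout above (the `Segment` record and function name are my own, not part of the release):

```python
from typing import Iterator, NamedTuple

class Segment(NamedTuple):
    image_id: str
    x: int
    y: int
    width: int
    height: int
    english: str
    malayalam: str

def read_segments(path: str) -> Iterator[Segment]:
    """Parse one tab-delimited split file into the seven columns listed above."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            image_id, x, y, w, h, en, ml = line.rstrip("\n").split("\t")
            yield Segment(image_id, int(x), int(y), int(w), int(h), en, ml)
```

The (X, Y, Width, Height) fields can then be used directly to crop the captioned rectangular region from the image file named after `image_id`.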
Data Statistics
-------------------
The statistics of the current release are given below.
Parallel Corpus Statistics
---------------------------------
Dataset          Segments   English Words   Malayalam Words
---------------  ---------  --------------  ----------------
Train               28930          143112            107126
Dev                   998            4922              3619
Test                 1595            7853              5689
Challenge Test       1400            8186              6044
---------------  ---------  --------------  ----------------
Total               32923          164073            122478
The word counts are approximate, prior to tokenization.
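The totals can be cross-checked from the per-split counts:

```python
# (segments, English words, Malayalam words) per split, from the table above
splits = {
    "Train":          (28930, 143112, 107126),
    "Dev":            (998, 4922, 3619),
    "Test":           (1595, 7853, 5689),
    "Challenge Test": (1400, 8186, 6044),
}

segments, en_words, ml_words = (sum(col) for col in zip(*splits.values()))
print(segments, en_words, ml_words)  # 32923 164073 122478
```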
Citation
-----------
If you use this corpus, please cite the following paper:
@article{hindi-visual-genome:2019,
  title   = {{Hindi Visual Genome: A Dataset for Multimodal English-to-Hindi Machine Translation}},
  author  = {Parida, Shantipriya and Bojar, Ond{\v{r}}ej and Dash, Satya Ranjan},
  journal = {Computaci{\'o}n y Sistemas},
  volume  = {23},
  number  = {4},
  pages   = {1499--1505},
  year    = {2019}
}
The file represents a text corpus for Arabic spell checking: a group of persons edited different files, and all spelling errors committed by these persons were recorded. The group was chosen to comprehensively cover a range of profiles: male, female, old-aged, middle-aged, young-aged, users with high and low computer usage, etc. Through this work, we aim to help researchers and those interested in Arabic NLP by providing them with an Arabic spell-checking corpus that is ready and open to exploitation and interpretation. This study also enabled an inventory of the most common spelling mistakes made by editors of Arabic texts. The file contains the following sections (tags): the people, the documents they typed, the types of possible errors, and the errors they made. Each section (tag) contains data that explains its details and content, which helps researchers extract research-oriented results. The people section contains basic information about each person and their computer usage, while the documents section lists all sentences in each document, numbered so that they can be referenced from the errors section. We also add a "types of errors" section in which we list all possible errors with a description in Arabic and an illustrative example.
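As a sketch of how the four sections could be joined in an analysis (the element and attribute names below are illustrative placeholders, not the corpus's actual tags):

```python
import xml.etree.ElementTree as ET

# Hypothetical skeleton mirroring the four sections described above.
sample = """
<corpus>
  <people>
    <person id="p1" gender="female" age="young" computer_usage="high"/>
  </people>
  <documents>
    <document id="d1"><sentence id="s1">...</sentence></document>
  </documents>
  <error_types>
    <type id="t1" description="..." example="..."/>
  </error_types>
  <errors>
    <error person="p1" sentence="s1" type="t1"/>
  </errors>
</corpus>
"""

root = ET.fromstring(sample)
# Join each recorded error back to the profile of the person who made it.
people = {p.get("id"): p for p in root.find("people")}
for err in root.find("errors"):
    editor = people[err.get("person")]
    print(err.get("type"), editor.get("gender"), editor.get("age"))
```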
This data set contains four types of manual annotation of translation quality, focusing on the comparison of human and machine translation quality (also known as human parity). The machine translation system used is the English-Czech CUNI Transformer (CUBBITT). The annotations distinguish adequacy, fluency and overall quality. One of the annotation types is a Translation Turing test: detecting whether annotators can distinguish human from machine translation.
All sentences are taken from the English-Czech test set newstest2018 (WMT2018 News translation shared task, www.statmt.org/wmt18/translation-task.html), but only from the half with originally English sentences translated into Czech by a professional agency.
Manual classification of errors in Czech-Slovak translation according to the classification introduced by Vilar et al. [1]. The first 50 sentences from the WMT 2010 test set [2] were translated by 5 MT systems (Česílko, Česílko2, Google Translate and two Moses setups), and MT errors were manually marked and classified. The classification was applied in an MT systems comparison [3]. Reference translations are included.
References:
[1] David Vilar, Jia Xu, Luis Fernando D’Haro and Hermann Ney. Error Analysis of Machine Translation Output. In International Conference on Language Resources and Evaluation, pages 697-702. Genoa, Italy, May 2006.
[2] http://matrix.statmt.org/test_sets/list
[3] Ondřej Bojar, Petra Galuščáková, and Miroslav Týnovský. Evaluating Quality of Machine Translation from Czech to Slovak. In Markéta Lopatková, editor, Information Technologies - Applications and Theory, pages 3-9, September 2011.
This work has been supported by the grants EuroMatrixPlus (FP7-ICT-2007-3-231720) of the EU and 7E09003 of the Czech Republic.
Manual classification of errors in English-Slovak translation according to the classification introduced by Vilar et al. [1]. 50 sentences randomly selected from the WMT 2011 test set [2] were translated by 3 MT systems described in [3], and MT errors were manually marked and classified. Reference translations are included.
References:
[1] David Vilar, Jia Xu, Luis Fernando D’Haro and Hermann Ney. Error Analysis of Machine Translation Output. In International Conference on Language Resources and Evaluation, pages 697-702. Genoa, Italy, May 2006.
[2] http://www.statmt.org/wmt11/evaluation-task.html
[3] Petra Galuščáková and Ondřej Bojar. Improving SMT by Using Parallel Data of a Closely Related Language. In Human Language Technologies - The Baltic Perspective - Proceedings of the Fifth International Conference Baltic HLT 2012, volume 247 of Frontiers in AI and Applications, pages 58-65, Amsterdam, Netherlands, October 2012. IOS Press.
This work has been supported by the grants EuroMatrixPlus (FP7-ICT-2007-3-231720) of the EU and 7E09003 of the Czech Republic.
Manually ranked outputs of Czech-Slovak translations. Three annotators manually ranked the outputs of five MT systems (Česílko, Česílko2, Google Translate and two Moses setups) on three data sets (100 sentences randomly selected from books, 100 sentences randomly selected from the Acquis corpus, and the first 50 sentences from the WMT 2010 test set). The ranking was applied in an MT systems comparison [1].
References:
[1] Ondřej Bojar, Petra Galuščáková, and Miroslav Týnovský. Evaluating Quality of Machine Translation from Czech to Slovak. In Markéta Lopatková, editor, Information Technologies - Applications and Theory, pages 3-9, September 2011.
This work has been supported by the grants EuroMatrixPlus (FP7-ICT-2007-3-231720) of the EU and 7E09003 of the Czech Republic.