CzEng is a sentence-parallel Czech-English corpus compiled at the Institute of Formal and Applied Linguistics (ÚFAL). While the full CzEng 2.0 is freely available for non-commercial research purposes from the project website (https://ufal.mff.cuni.cz/czeng), this release contains only the original monolingual parts of the news text (csmono, 53M sentences, and enmono, 79M sentences) together with their automatic (synthetic) translations by CUBBITT.
See the attached README for additional details such as the file format.
Tamil Dependency Treebank version 0.1 (TamilTB.v0.1) is an attempt to develop a syntactically annotated corpus for Tamil. TamilTB.v0.1 contains 600 sentences enriched with manual annotation of morphology and dependency syntax in the style of the Prague Dependency Treebank. TamilTB.v0.1 was created at the Institute of Formal and Applied Linguistics, Charles University in Prague.
The presented data and metadata include responses to a questionnaire on the experience of teaching practicums and their role in the practical preparation of English language teachers at the Faculty of Arts, Charles University, together with a basic quantitative analysis of the responses.
The analysis of the questionnaires shows that trainees are, in most cases, well prepared for their teaching practicum both in their subject and in terms of pedagogy and psychology, and that reflective teaching methods prove very useful. The main benefits of the teaching practicum include getting to know the real situation of teaching in secondary schools, working with a larger group of pupils, getting to know oneself as a teacher, gaining self-confidence, and becoming aware of one's own limits and areas for improvement. The main downsides of the current system of teaching practice are the low time allocation, the poor integration of the practicum into the curriculum, the limited involvement of the trainee in the daily running of the school (administrative work, supervision, meetings), and the lack of quality feedback from the faculty teacher.
This submission contains a Dockerfile for building a Docker image with a compiled Tensor2Tensor backend and the compatible (TensorFlow Serving) models available in the Lindat Translation service (https://lindat.mff.cuni.cz/services/transformer/). Additionally, the submission contains a web frontend for simple in-browser access to the dockerized backend service.
Tensor2Tensor (https://github.com/tensorflow/tensor2tensor) is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
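Once such a container is running, clients typically talk to it over TensorFlow Serving's documented REST interface (`/v1/models/<name>:predict`, JSON body with an "instances" list). The sketch below only builds the URL and request body for such a call; the host, port, and model name are placeholders, not values taken from this release.

```python
import json

def predict_request(host, model_name, instances, port=8501):
    """Build the URL and JSON body for a TF Serving :predict call.

    8501 is TF Serving's default REST port; adjust to match the
    container's actual port mapping.
    """
    url = "http://%s:%d/v1/models/%s:predict" % (host, port, model_name)
    body = json.dumps({"instances": instances})
    return url, body

# Hypothetical model name for illustration only:
url, body = predict_request("localhost", "en-cs", ["Hello world ."])
```

The returned URL and body can then be sent with any HTTP client (e.g. `urllib.request` or `curl`).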
A simple way of browsing CoNLL format files in your terminal. Fast and text-based.
To open a CoNLL file, simply run: ./view_conll sample.conll
The output is piped through less, so you can use the usual less commands to
navigate the file. By default, less is set up to search for sentence
beginnings, so press "n" to jump to the next sentence and "N" to go back to
the previous one. Quit with "q". Trees with many non-projective edges may be
difficult to read, as I have not found a good way of displaying them
intelligibly.
If you are on Windows and don't have less (but have Python), run like this: python view_conll.py sample.conll
For complete instructions, see the README file.
You need Python 2 to run the viewer.
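For readers unfamiliar with the input format: CoNLL files conventionally hold one token per line with tab-separated columns, a blank line between sentences, and (in CoNLL-U) comment lines starting with "#". A minimal sketch of the sentence-splitting step, independent of the viewer's own code:

```python
def read_conll_sentences(path):
    """Yield sentences from a CoNLL-style file.

    Each sentence is a list of rows; each row is the list of
    tab-separated column values for one token.
    """
    sentence = []
    with open(path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                # Blank line terminates the current sentence.
                if sentence:
                    yield sentence
                    sentence = []
            elif line.startswith("#"):
                continue  # comment/metadata line (CoNLL-U style)
            else:
                sentence.append(line.split("\t"))
    if sentence:  # file may lack a trailing blank line
        yield sentence
```

This is only an illustration of the format, not code from the viewer itself; the column inventory differs between CoNLL variants (CoNLL-X, CoNLL-U, ...), so downstream code should not assume a fixed column count.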
Test data for the WMT 2017 Automatic post-editing task (the same used for the Sentence-level Quality Estimation task). It consists of German-English pairs (source and target) belonging to the pharmacological domain and already tokenized. The test set contains 2,000 pairs. All data is provided by the EU project QT21 (http://www.qt21.eu/).
Test data for the WMT 2017 Automatic post-editing task (the same used for the Sentence-level Quality Estimation task). It consists of 2,000 English-German pairs (source and target) belonging to the IT domain and already tokenized. All data is provided by the EU project QT21 (http://www.qt21.eu/).
Test data for the WMT 2018 Automatic post-editing task. It consists of English-German pairs (source and target) belonging to the information technology domain and already tokenized. The test set contains 1,023 pairs. A neural machine translation system was used to generate the target segments. All data is provided by the EU project QT21 (http://www.qt21.eu/).
Test data for the WMT 2018 Automatic post-editing task. It consists of English-German pairs (source and target) belonging to the information technology domain and already tokenized. The test set contains 2,000 pairs. A phrase-based machine translation system was used to generate the target segments. This test set is sampled from the same dataset used for the 2016 and 2017 APE shared task editions. All data is provided by the EU project QT21 (http://www.qt21.eu/).
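Such source/target test sets are typically distributed as parallel plain-text files with one tokenized segment per line. A hedged sketch of pairing them up, assuming that convention; the file names here are placeholders, not the actual names used in the release:

```python
def read_parallel(src_path, tgt_path):
    """Return a list of (source, target) segment pairs.

    Assumes one segment per line and line-by-line alignment
    between the two files.
    """
    with open(src_path, encoding="utf-8") as src, \
         open(tgt_path, encoding="utf-8") as tgt:
        pairs = [(s.rstrip("\n"), t.rstrip("\n"))
                 for s, t in zip(src, tgt)]
    return pairs

# Hypothetical file names for illustration:
# pairs = read_parallel("test.src", "test.mt")
```

Since `zip` silently stops at the shorter file, production code should also check that both files contain the same number of lines.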