Statistical component of Chimera, a state-of-the-art MT system. Supported by Project DF12P01OVV022 of the Ministry of Culture of the Czech Republic (NAKI -- Amalach).
Wikipedia plain text data obtained from Wikipedia dumps with WikiExtractor in February 2018.
The data come from all Wikipedias for which dumps could be downloaded at [https://dumps.wikimedia.org/]. This amounts to 297 Wikipedias, usually corresponding to individual languages and identified by their ISO codes. Several special Wikipedias are included, most notably "simple" (Simple English Wikipedia) and "incubator" (tiny hatching Wikipedias in various languages).
For a list of all the Wikipedias, see [https://meta.wikimedia.org/wiki/List_of_Wikipedias].
A script for obtaining a new version of the data is included. Note, however, that Wikipedia throttles the download speed when many dumps are fetched, so downloading all of them takes a few days (one or a few dumps can be downloaded quickly).
Also, the format of the dumps changes from time to time, so the script will probably stop working eventually.
The WikiExtractor tool [http://medialab.di.unipi.it/wiki/Wikipedia_Extractor] used to extract text from the Wikipedia dumps is not my own work; I only modified it slightly to produce plain-text outputs [https://github.com/ptakopysk/wikiextractor].
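To give a feel for what the extraction step does, here is a minimal, self-contained sketch of stripping a few common MediaWiki constructs from wikitext. It is a toy stand-in for WikiExtractor, not its actual implementation; the regular expressions cover only templates, internal links, and bold/italic quotes.

```python
import re

def strip_wiki_markup(text):
    """Toy illustration of wikitext-to-plaintext conversion."""
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                     # templates like {{cite ...}}
    text = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", text)  # [[target|label]] -> label
    text = re.sub(r"'{2,}", "", text)                              # bold/italic quote runs
    return text

sample = "'''Polička''' is a [[town]] in the [[Czech Republic|Czech]] lands.{{citation needed}}"
print(strip_wiki_markup(sample))  # Polička is a town in the Czech lands.
```

The real tool additionally handles tables, references, nested templates, and document segmentation, which is why the third-party WikiExtractor is used instead of ad-hoc regexes.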
System for querying annotated treebanks in the PML format. The querying uses its own query language with a graphical representation. It has two different implementations (SQL and Perl) and several clients (TrEd, browser-based, command-line interface).
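Since PML is an XML-based format, the flavour of query the system answers can be sketched with plain XML traversal. The snippet below is not the system's own query language; it runs an analogous query ("find all verb nodes") over a miniature tree whose element and attribute names are invented for illustration, not the real PML schema.

```python
import xml.etree.ElementTree as ET

# Miniature, hypothetical PML-like tree (names invented for illustration).
doc = """
<tree>
  <node form="Peter" pos="NOUN">
    <node form="runs" pos="VERB"/>
  </node>
</tree>
"""

root = ET.fromstring(doc)
# Analogue of a simple treebank query: every node whose POS tag is VERB.
verbs = [n.get("form") for n in root.iter("node") if n.get("pos") == "VERB"]
print(verbs)  # ['runs']
```

The dedicated query language adds what such ad-hoc traversal lacks: graphical query construction, cross-layer constraints, and server-side evaluation over large treebanks.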
The segment of Československý zvukový týdeník Aktualita (Czechoslovak Aktualita Sound Newsreel), 1938, issue no. 42B shows the town of Polička before the German annexation on 10 October 1938. The footage shows a munitions factory, the main square with the town hall, a cemetery, and the removal of chronicles from the local archive on the square. Images of street traffic are combined with shots showing Czech refugees.
This package contains polysemy graphs constructed on the basis of different sense chaining algorithms (representing different polysemy theories: prototype, exemplar and radial). The detailed description of all files is contained in the README.md file.
Model trained for Czech POS tagging and lemmatization using RobeCzech, a Czech version of the BERT model. The model is trained on data from the Prague Dependency Treebank 3.5. It is part of the master's thesis Czech NLP with Contextualized Embeddings and achieved state-of-the-art performance at the time the thesis was submitted.
A demo Jupyter notebook is available on the project's GitHub.
Contains linguistically annotated data from the online forum PC Games (https://forum.pcgames.de), which is dedicated to gaming. All posts (approx. 2.4 million) were scraped in April 2019 (for details see Kissling 2019), resulting in 120 million tokens from almost 70,000 authors. The data are stored in a SQL database and can be accessed using, e.g., pg_restore. The database itself and its tables contain detailed self-descriptions.
The database contains tokenized, part-of-speech-tagged and partly lemmatized information for every token in the forum, together with its metadata (usernames and their location in the forum structure, e.g. which post(s), thread, and subforum it belongs to). The order of the words in a post cannot be reconstructed from this corpus. Usernames were replaced with author_ids to protect the personal rights of the post authors.
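The kind of query the described structure supports can be sketched with an in-memory SQLite database. The table and column names below are hypothetical, invented for illustration only; the authoritative schema is given by the self-descriptions inside the real database.

```python
import sqlite3

# Hypothetical miniature of the forum database (invented schema).
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tokens (token TEXT, pos TEXT, lemma TEXT, author_id INTEGER, post_id INTEGER)"
)
con.executemany(
    "INSERT INTO tokens VALUES (?, ?, ?, ?, ?)",
    [("downloadet", "VVFIN", "downloaden", 1, 10),
     ("Spiel", "NN", "Spiel", 2, 10)],
)
# Example query: all verb lemmas of one (pseudonymised) author.
rows = con.execute(
    "SELECT lemma FROM tokens WHERE pos LIKE 'VV%' AND author_id = 1"
).fetchall()
print(rows)  # [('downloaden',)]
```

Because tokens carry author and post identifiers but no position within a post, per-author and per-subforum counts of this kind are possible while the original running text is not recoverable.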
Additional information:
As this corpus was analyzed in terms of productivity and language contact between German and English (Kissling 2020), it includes additional information about German base forms found in present-day English, mainly focusing on the formula "German_verb_stem + -en = English verb infinitive". For this purpose, the API of the Oxford Dictionary of English was used; the results of the API requests are found in the table infinitives. The corpus can also be used without this information.
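The formula above can be made concrete with a short sketch. A small hand-made word set stands in for the Oxford Dictionary of English API used in the actual study; the function name and word list are invented for illustration.

```python
# Toy stand-in lexicon replacing the real dictionary API lookup.
ENGLISH_WORDS = {"download", "post", "level"}

def english_base(german_infinitive):
    """Apply the formula German_verb_stem + -en = English verb infinitive:
    strip the German -en ending and check the stem against the lexicon."""
    if not german_infinitive.endswith("en"):
        return None
    stem = german_infinitive[:-2]
    return stem if stem in ENGLISH_WORDS else None

print(english_base("downloaden"))  # download  (stem is an English verb)
print(english_base("spielen"))     # None      (stem "spiel" is not English)
```

In the released database, the outcome of this check for each candidate is what the table infinitives records.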
Calculations were performed on 2019-09-10 at sciCORE (http://scicore.unibas.ch/), the scientific computing core facility of the University of Basel. This database contains the entire primary corpus of Kissling (2020).
Sources:
Kissling, J. (2019). Computerunterstütztes Verfahren zur Erhebung eigener Textkorpus-Daten. Methodenentwicklung und Anwendung auf 2.4 Mio. Posts des Forums PC Games.de [certification thesis]. Universität Basel.
Kissling, J. (2020). Produktivität englischer Verben im Deutschen [master thesis]. Universität Basel.
The scraper used is available on GitHub: https://github.com/vizzerdrix55/web-scraping-vBulletin-forum
The PADT project can be summarized as an open-ended activity of the Center for Computational Linguistics, the Institute of Formal and Applied Linguistics, and the Institute of Comparative Linguistics, Charles University in Prague, consisting in multi-level annotation of Arabic language resources in the light of the theory of Functional Generative Description (Sgall et al., 1986; Hajičová and Sgall, 2003).