The segment of Československý zvukový týdeník Aktualita (Czechoslovak Aktualita Sound Newsreel) 1938, issue no. 27, reports on the 16th International Congress of PEN Clubs, held in Prague from 26 to 30 June 1938. Among those present are President Edvard Beneš and his wife Hana, the writers Karel Čapek, H. G. Wells and Olga Scheinpflugová, and Vojtěch Mastný, the Czechoslovak envoy to Berlin.
Poet Petr Bezruč with a group of unidentified people. A view of his house in Ostravice. Bezruč among mining apprentices in the Mining Vocational School Residence Hall. Footage from segments of Československý filmový týdeník (Czechoslovak Film Weekly Newsreel) 1958, issue no. 9, and Týden ve filmu (Week in Film) 1945, issue no. 23.
Studio recordings of spontaneous Estonian, segmented phonetically at the word, sound, and other linguistic levels. The current size is about 22 hours of speech and 155,000 words. An online search engine lets you query word-level segments and returns matching two-second sequences of sound together with their segmentation.
4th edition, 1857-1865; word-exact page concordance with the printed edition; according to the claim stated in its subtitle, an "encyclopedic dictionary".
Statistical component of Chimera, a state-of-the-art MT system. Supported by Project DF12P01OVV022 of the Ministry of Culture of the Czech Republic (NAKI -- Amalach).
Wikipedia plain text data obtained from Wikipedia dumps with WikiExtractor in February 2018.
The data come from all Wikipedias for which dumps could be downloaded from [https://dumps.wikimedia.org/]. This amounts to 297 Wikipedias, usually corresponding to individual languages and identified by their ISO codes. Several special Wikipedias are included, most notably "simple" (Simple English Wikipedia) and "incubator" (tiny hatching Wikipedias in various languages).
For a list of all the Wikipedias, see [https://meta.wikimedia.org/wiki/List_of_Wikipedias].
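As an illustration, the following Python sketch shows how per-wiki dump URLs are conventionally formed from the wiki codes on that list. The code subset and the URL pattern are assumptions based on the usual layout of dumps.wikimedia.org; the script actually included with the data may construct the URLs differently.

    # Hypothetical sketch: build "latest pages-articles" dump URLs for a few wiki codes.
    wiki_codes = ["en", "cs", "simple", "incubator"]  # example subset of the 297 wikis

    def dump_url(code: str) -> str:
        # Dump names replace hyphens with underscores: "en" -> "enwiki", "zh-yue" -> "zh_yuewiki".
        wiki = code.replace("-", "_") + "wiki"
        return (f"https://dumps.wikimedia.org/{wiki}/latest/"
                f"{wiki}-latest-pages-articles.xml.bz2")

    for code in wiki_codes:
        print(code, dump_url(code))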
A script that can be used to obtain a new version of the data is included, but note that Wikipedia limits the download speed when fetching many dumps, so downloading all of them takes a few days (a single dump, or a few, can be downloaded quickly).
Also, the format of the dumps changes from time to time, so the script will probably stop working at some point.
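A minimal sketch of a sequential, throttle-friendly downloader is shown below. This is not the included script, only an illustration (under the URL pattern assumed above) of why fetching all 297 dumps one after another can take days.

    # Minimal sketch of a polite sequential downloader (not the included script).
    import time
    import urllib.request

    def download(url: str, target: str, retries: int = 3) -> None:
        for attempt in range(retries):
            try:
                urllib.request.urlretrieve(url, target)  # Wikipedia caps per-client speed
                return
            except OSError:
                time.sleep(60 * (attempt + 1))           # back off before retrying
        raise RuntimeError(f"could not download {url}")

    # Hypothetical subset of wikis; a full run over all 297 dumps takes days.
    for wiki in ["enwiki", "cswiki", "simplewiki"]:
        url = (f"https://dumps.wikimedia.org/{wiki}/latest/"
               f"{wiki}-latest-pages-articles.xml.bz2")
        download(url, f"{wiki}-latest-pages-articles.xml.bz2")
        time.sleep(5)                                    # pause briefly between dumps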
The WikiExtractor tool [http://medialab.di.unipi.it/wiki/Wikipedia_Extractor] used to extract text from the Wikipedia dumps is not mine; I only modified it slightly to produce plain text output [https://github.com/ptakopysk/wikiextractor].
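For reference, a typical invocation of WikiExtractor on one downloaded dump might look like the sketch below. The -o (output directory) flag follows upstream WikiExtractor; the plaintext-producing fork linked above may add or change options, so this is an assumption rather than its documented interface.

    # Hypothetical invocation of the (modified) WikiExtractor on a single dump.
    import subprocess

    subprocess.run(
        [
            "python", "WikiExtractor.py",
            "-o", "extracted/cswiki",                # output directory for extracted text
            "cswiki-latest-pages-articles.xml.bz2",  # previously downloaded dump
        ],
        check=True,
    )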