Extracting translation data from the Wiktionary project

Wiktionary is a project for creating a multilingual, web-based free dictionary of all words in all languages. Like its sister project Wikipedia, it has been subject since its inception to criticism of its lexicographic approaches as well as of its reliability, content, procedures, and community operation (see Lepore 2006, Fuertes-Olivera 2009, Meyer 2012). Faults have also been pointed out in its structure, which is confusing for newcomers, with parallel and unaligned information spread among the various language dictionaries and differences in accuracy and depth across languages. Nonetheless, data from Wiktionary is routinely and successfully employed in natural language processing and, occasionally, in linguistic research (see Otte 2011, Schlippe 2012, Medero 2009, Li 2012), as it constitutes by far the largest free multilingual lexical resource.

The Wikimedia Foundation, the organization managing the project, releases automatically generated “dumps” of the data for free and anonymous download. However, such files cannot be used in linguistic research without a pre-processing (“parsing”) stage, as they constitute more a backup than a data release: in essence, they are XML files enclosing the textual information of the dictionary articles (pages potentially holding information for more than one word and more than one language), encoded in the MediaWiki markup syntax (a context-sensitive language that is notoriously difficult to parse). Data extraction is further complicated by the fact that the rendered HTML pages include information computed by functions of general and linguistic scope that are only available inside an environment running the Wiktionary server, and by the fact that Wiktionary collaborators do not always follow the project’s guidelines and specifications. Many projects have started to tackle these problems and the difficulties of reusing the data, including a new initiative by Wikidata.
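To give a concrete idea of what such a dump looks like programmatically, here is a minimal sketch, using only Python’s standard library, of how one could stream pages out of the XML file. This is a hypothetical illustration rather than the extraction code behind the dataset described below, and the file name and XML namespace are assumptions that vary between dump releases.

import xml.etree.ElementTree as ET

DUMP = "enwiktionary-20180601-pages-articles.xml"  # hypothetical local file name
NS = "{http://www.mediawiki.org/xml/export-0.10/}"  # namespace differs between dump versions

def iter_pages(path):
    """Yield (title, wikitext) pairs one page at a time, without loading the full dump."""
    for _, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == NS + "page":
            title = elem.findtext(NS + "title")
            text = elem.findtext(f"{NS}revision/{NS}text") or ""
            yield title, text
            elem.clear()  # release the processed page to keep memory usage low

for title, text in iter_pages(DUMP):
    if "==English==" in text:  # keep only pages with an English section
        pass  # hand the raw wikitext over to whatever extraction routine follows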

As such, there is no standard method for extracting information from Wiktionary, and most solutions are project-specific. An investigation of parsing tools on GitHub revealed two main approaches: parsing the XML files and manipulating the entire textual fields, or parsing the individually rendered HTML pages (fetched either from a local server or over the Internet). We decided to test a simpler approach: parsing the dumps as regular text files, reading them line by line while building an internal structured version of the information, and processing each line with regular expressions or simple string-searching methods. The first experiment, whose results are presented here, involved extracting the parallel translations of English words found in the English Wiktionary.
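The sketch below gives a rough idea of this line-by-line strategy; it is not the actual extraction script. It relies on the convention that translation tables in the English Wiktionary are delimited by {{trans-top|...}} and {{trans-bottom}} templates and list entries with {{t|...}} or {{t+|...}}, but, as noted above, contributors do not always follow the guidelines, so real code needs many more safeguards.

import re

# Translation tables open with {{trans-top|gloss}} and close with {{trans-bottom}};
# individual translations use {{t|language|term|...}} or {{t+|language|term|...}}.
TRANS_TOP = re.compile(r"\{\{trans-top\|(?P<gloss>[^}]*)\}\}")
T_TEMPLATE = re.compile(r"\{\{t\+?\|(?P<lang>[^|}]+)\|(?P<term>[^|}]+)")

def extract_translations(lines):
    """Yield (gloss, language_code, translation) tuples from raw wikitext lines."""
    gloss = None
    for line in lines:
        top = TRANS_TOP.search(line)
        if top:
            gloss = top.group("gloss")
        elif "{{trans-bottom}}" in line:
            gloss = None
        elif gloss is not None:
            for match in T_TEMPLATE.finditer(line):
                yield gloss, match.group("lang"), match.group("term")

# Hypothetical usage: iterate over the dump (or any wikitext) as plain text lines.
with open("enwiktionary-20180601-pages-articles.xml", encoding="utf-8") as handle:
    for gloss, lang, term in extract_translations(handle):
        print(gloss, lang, term)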

The data, based on the dump of 2018-06-01, includes 2,169,063 different entries from the translation of 149,530 English words and expressions into 2,358 languages (with much variation in vocabulary size among languages: 931 languages have only one entry, while German, the largest language after English, has 97,091 entries). Data is offered in a tabular textual format, and all entries include (a) a unique ID, (b) a concept ID referring to the source English word, (c) a description string with the English source and a short definition (such as “dictionary/publication that explains the meanings of an ordered list of words”), (d) a language ID from the Glottolog catalog, (e) the text of the translation as given in Wiktionary, and (f) an extra field holding complementary information, when available (such as a phonetic transcription of the text, noun gender, etc.). Data is also offered as a set of files (tabular textual files, BibTeX sources, and JSON metadata) following the Cross-Linguistic Data Formats (CLDF), a specification designed to allow the exchange of cross-linguistic data. The code for data extraction is available on GitHub and the data is available on Zenodo as “Parallel Translations from the English Wiktionary” (DOI: 10.5281/zenodo.1286991). Many thanks to Johann-Mattis List and Christoph Rzymski for their help with this work.
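As a hypothetical illustration of how the tabular release could be loaded, the snippet below uses pandas; the file name and column labels are assumptions derived from the field list above and may not match the released files exactly.

import pandas as pd

# Hypothetical file name and column labels (see the Zenodo release for the actual ones).
translations = pd.read_csv(
    "wiktionary_translations.tsv",
    sep="\t",
    names=["ID", "CONCEPT_ID", "DESCRIPTION", "LANGUAGE_ID", "VALUE", "EXTRA"],
    header=0,
)

# For example, list the Glottolog language codes with the most entries.
print(translations.groupby("LANGUAGE_ID").size().sort_values(ascending=False).head(10))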

References

Fuertes-Olivera, Pedro A. (2009). “The function theory of lexicography and electronic dictionaries: Wiktionary as a prototype of collective free multiple-language internet dictionary”. In H. Bergenholtz, S. Nielsen, and S. Tarp (eds), Lexicography at a Crossroads: Dictionaries and Encyclopedias Today, Lexicographical Tools Tomorrow. Linguistic Insights: Studies in Language and Communication 90, 99–134. Bern: Peter Lang.

Lepore, Jill (2006). “Noah’s Mark”. The New Yorker, November 6, 2006 issue.

Li, Shen; Graça, João V.; Taskar, Ben (2012). “Wiki-ly supervised part-of-speech tagging”. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Jeju Island, Korea: Association for Computational Linguistics, 1389–1398.

Medero, Julie; Ostendorf, Mari (2009). “Analysis of vocabulary difficulty using Wiktionary”. Proceedings of the SLaTE Workshop.

Meyer, Christian M.; Gurevych, Iryna (2012). “Wiktionary: A new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography”. In Granger, Sylviane; Paquot, Magali (eds), Electronic Lexicography. Oxford: Oxford University Press.

Otte, Pim; Tyers, Francis M. (2011). “Rapid rule-based machine translation between Dutch and Afrikaans”. In Forcada, Mikel L.; Depraetere, Heidi; Vandeghinste, Vincent (eds), Proceedings of the 16th Annual Conference of the European Association for Machine Translation (EAMT 2011). Leuven, Belgium, 153–160.

Schlippe, Tim; Ochs, Sebastian; Schultz, Tanja (2012). “Grapheme-to-phoneme model generation for Indo-European languages”. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Kyoto, Japan, 4801–4804.

Cite this article as: Tiago Tresoldi, "Extracting translation data from the Wiktionary project," in Computer-Assisted Language Comparison in Practice, 11/06/2018, https://calc.hypotheses.org/32.

Let the Games Begin!

By comparing the languages of the world, we gain invaluable insights into human prehistory, predating the appearance of written records by thousands of years. The traditional methods for language comparison are based on manual data inspection, and with more and more data available they are reaching their practical limits. Computer applications, however, are not capable of replacing experts’ experience and intuition. In a situation where computers cannot replace experts and experts do not have enough time to analyse the massive amounts of data, a new framework, neither completely computer-driven nor ignorant of the help computers provide, becomes urgent. Such frameworks are well established in biology and translation, where computational tools cannot provide the accuracy needed to arrive at convincing results, but do assist humans in digesting large data sets.

After one month of preparation, during which our team was busy teaching each other, members of our seminar at Friedrich Schiller University Jena, and colleagues in our department how to code, we are ready to launch the first posts over the coming weeks.

I will refrain from promising too much at this stage, but I recommend those interested in topics as diverse as coding (in Python and R), data curation and analysis, the theory of diversity linguistics, and the methodology of historical language comparison to keep an eye on this blog. Our core team of four to five authors will try to publish at least one new blog post per month, and we will try to steadily broaden our range of authors by inviting colleagues from our Department of Linguistic and Cultural Evolution and from other institutions to present their questions, ideas, and approaches related to computer-based and computer-assisted historical language comparison and beyond.

Our team is currently preparing the first blog posts for this month. I won’t tell you too much about the concrete content yet, but if you are interested in computer-assisted language comparison and empirical approaches to diversity linguistics, I recommend you keep an eye on our weblog.

Cite this article as: Johann-Mattis List, "Let the Games Begin!," in Computer-Assisted Language Comparison in Practice, 06/06/2018, https://calc.hypotheses.org/22.