Merging datasets with LingPy and the CLDF curation framework

Imagine you have two different datasets, both covering approximately the same concepts, but with slightly different numbers of columns and, more importantly, potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other spreadsheet software and then try to fix by hand all the problems that arise in the process.

A better idea is simply to use LingPy and our CLDF curation framework.
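The full post shows how this is done properly; the following is only a rough sketch of the underlying idea in plain Python (not the LingPy/CLDF workflow described in the post, and with made-up file names): concatenate the two tables on their shared columns while assigning fresh identifiers.

import csv

def read_tsv(path):
    with open(path, encoding="utf-8") as f:
        rows = list(csv.reader(f, delimiter="\t"))
    return rows[0], rows[1:]

# hypothetical input files
header_a, rows_a = read_tsv("list-a.tsv")
header_b, rows_b = read_tsv("list-b.tsv")

# keep only the columns present in both files, in the order of the first file,
# and skip the original ID column, since the IDs may clash
shared = [c for c in header_a if c in header_b and c.upper() != "ID"]

with open("merged.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["ID"] + shared)
    new_id = 0
    for header, rows in ((header_a, rows_a), (header_b, rows_b)):
        idx = [header.index(c) for c in shared]
        for row in rows:
            new_id += 1
            writer.writerow([new_id] + [row[i] for i in idx])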

From Fieldwork to Trees 2: Cognate coding

In a previous post (Kaiping 2018), I described how to convert matrix-shaped word lists given in Excel into the long format that LingPy and other software can work with. My motivation for this was to give my colleague Yunus Sulistyono a good way to compare the lexicon of his Alorese [alor1247] dialects and to understand the relationship between them. In this post, the data is automatically cognate-coded and converted into CLDF.
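The full post describes the pipeline in detail; the snippet below is only a minimal sketch of automatic cognate detection with LingPy, assuming a long-format word list in a hypothetical file wordlist.tsv with the columns LingPy expects (doculect, concept, and a transcription column) and an arbitrarily chosen threshold.

from lingpy import LexStat

# a minimal sketch, not the exact pipeline from the post:
# load a long-format word list (hypothetical file name)
lex = LexStat("wordlist.tsv")

# simple SCA-based cognate detection; the threshold of 0.45 is an assumption
lex.cluster(method="sca", threshold=0.45, ref="cogid")

# write the word list, now including the COGID column, back to disk
lex.output("tsv", filename="wordlist-cognates", ignore="all", prettify=False)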

Inferring consonant clusters from CLICS data with LingPy

LingPy (List et al. 2017) offers a wide range of functions for string manipulation. Although most of these functions are well documented (see lingpy.org for details), and the basic ideas have also been described in my dissertation (List 2014), it seems that not many users are aware of the additional possibilities the library offers.

In the following, I want to illustrate how we can use LingPy to learn something about the consonant clusters occurring in the data underlying the CLICS database (List et al. 2018, clics.clld.org). In an earlier post, I illustrated how one can use the CLICS software API to cook up one's own CLICS application. I will thus assume that you know how to install CLICS and the data underlying it (following the instructions on our GitHub page).
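As a toy illustration of the kind of string manipulation involved (not the code from the post, and with invented example forms instead of the CLICS data), one can segment IPA strings with ipa2tokens, map the segments to consonant/vowel classes with tokens2class, and count sequences of adjacent consonants.

from collections import Counter
from lingpy.sequence.sound_classes import ipa2tokens, tokens2class

forms = ["tʰɔxtər", "straŋk", "platsə"]  # made-up example forms

clusters = Counter()
for form in forms:
    tokens = ipa2tokens(form)            # segment the IPA string
    classes = tokens2class(tokens, "cv") # map segments to C/V classes
    cluster = []
    for token, cls in zip(tokens, classes):
        if cls == "C":
            cluster.append(token)
        else:
            if len(cluster) > 1:
                clusters[" ".join(cluster)] += 1
            cluster = []
    if len(cluster) > 1:
        clusters[" ".join(cluster)] += 1

print(clusters.most_common(10))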


Representing Structural Data in CLDF

The Cross-Linguistic Data Formats initiative (CLDF, https://cldf.clld.org, Forkel et al. 2018) has helped a lot in preparing the CLICS² database of cross-linguistic colexifications (https://clics.lingpy.org, List et al. 2018): linking our data to Concepticon (https://concepticon.clld.org, List et al. 2016) and Glottolog (https://glottolog.org, Hammarström et al. 2018) has been of invaluable help in merging the different datasets into one large comparative dataset.

CLDF, however, is not restricted to lexical data; it can also be used to store structural data, although, due to the nature of structural data, comparing different datasets is much more difficult.
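For illustration, here is a minimal sketch (based on the pycldf package, not taken from the post; all IDs and values are invented) of how structural data can be written as a CLDF StructureDataset, with a ValueTable linking languages to features.

from pycldf import StructureDataset

# create a dataset with default metadata in a (possibly empty) directory
ds = StructureDataset.in_dir("structural-cldf")

# write a ValueTable; the rows below are invented examples
ds.write(ValueTable=[
    {
        "ID": "1",
        "Language_ID": "stan1295",    # hypothetical Glottocode
        "Parameter_ID": "word-order", # hypothetical feature ID
        "Value": "SVO",
    },
    {
        "ID": "2",
        "Language_ID": "russ1263",
        "Parameter_ID": "word-order",
        "Value": "SVO",
    },
])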


Cooking with CLICS

Robert Forkel has just published a very nice cookbook example for our CLICS database (List et al. 2018f, http://clics.clld.org), which shows how to manipulate the data further, beyond just installing the database and running it to replicate our analyses.

This cookbook explains how the underlying SQLite database is structured and how, after installing CLICS and the respective packages, you can access the data to conduct studies of your own.
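As a quick way to get oriented, here is a generic sketch (the database file name is an assumption and depends on how you loaded the data) that opens the SQLite file with Python's standard library and lists its tables and columns.

import sqlite3

conn = sqlite3.connect("clics.sqlite")  # hypothetical file name
cur = conn.cursor()

# list all tables, then inspect the columns of each one
cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
for (table,) in cur.fetchall():
    cur.execute("PRAGMA table_info(%s)" % table)
    columns = [row[1] for row in cur.fetchall()]
    print(table, columns)

conn.close()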

As a little example of what you can do with the new CLICS API, let me illustrate in this post how we can use the old CLICS data (underlying version 1.0 by List et al. 2014, http://clics.lingpy.org), available from here, in the new application, specifically the standalone that we provide.


Extracting translation data from the Wiktionary project

Wiktionary is a project for creating a multilingual, web-based, free dictionary of all words in all languages. Like its sister project Wikipedia, it has been subject to criticism since its inception, both in terms of its lexicographic approaches and in terms of reliability, content, procedures, and community operation (see Lepore 2006, Fuertes-Olivera 2009, Meyer 2012). Faults have also been pointed out regarding its structure, which is confusing for newcomers, with parallel and unaligned information spread across the various language dictionaries and with differences in accuracy and depth among languages. Notwithstanding these issues, data from Wiktionary is routinely employed with successful results in natural language processing and, occasionally, in linguistic research (see Otte 2011, Schlippe 2012, Medero 2009, Li 2012), as it constitutes by far the largest free multilingual lexical source.

The Wikimedia Foundation, the organization managing the project, releases automatically generated “dumps” of the data for free and anonymous download. However, such files cannot be used in linguistic research without a pre-processing (“parsing”) stage, as they constitute more of a backup than a data release: in essence, they are XML files enclosing the textual information of the dictionary articles (pages potentially holding information for more than one word and more than one language), which is encoded in the MediaWiki markup syntax (a context-sensitive language that is notoriously difficult to parse). Data extraction is further complicated by the fact that the rendered HTML pages include information computed by functions of general and linguistic scope that are only available inside an environment running the Wiktionary server, and by the fact that Wiktionary collaborators do not always follow the project’s guidelines and specifications. Many projects have started to tackle these problems and the difficulties in reusing the data, including a brand-new initiative by Wikidata.

As such, no standard method for extracting information from Wiktionary exists, and most solutions are project-specific. An investigation of parsing tools on GitHub revealed two main approaches: parsing the XML files and manipulating the entire textual fields, or parsing the individually rendered HTML pages (fetched either from a local server or over the Internet). We decided to test a simpler approach: parsing the dumps as regular text files, reading them line by line while building an internal structured version of the information, and processing each line with regular expressions or simple string-searching methods. The first experiment, whose results are presented here, involved extracting the parallel translations of English words found in the English Wiktionary.
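The following is a heavily simplified sketch of this line-by-line approach (not the actual extraction code, and with the dump file name given only as an example): track the current page title while scanning the dump and collect translation templates of the form {{t|language code|translation}} or {{t+|...}}.

import re

TITLE = re.compile(r"<title>([^<]+)</title>")
# translation templates look like {{t|de|Wörterbuch|n}} or {{t+|fr|dictionnaire|m}}
TRANS = re.compile(r"\{\{t\+?\|([a-z-]+)\|([^}|]+)")

title = None
translations = []  # (English word, language code, translation)
with open("enwiktionary-pages-articles.xml", encoding="utf-8") as dump:
    for line in dump:
        match = TITLE.search(line)
        if match:
            title = match.group(1)  # remember the current page title
            continue
        for code, form in TRANS.findall(line):
            translations.append((title, code, form))

print(translations[:10])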

The data, based on the dump of 2018-06-01, includes 2,169,063 different entries, covering translations of 149,530 English words and expressions into 2,358 languages (with much variation in vocabulary size among languages: 931 languages have only one entry, while German, the largest language after English, has 97,091 entries). The data is offered in a tabular textual format, and all entries include (a) a unique ID, (b) a concept ID referring to the source English word, (c) a description string with the English source and a short definition (such as “dictionary/publication that explains the meanings of an ordered list of words”), (d) a language ID from the Glottolog catalog, (e) the text of the translation as given in Wiktionary, and (f) an extra field holding complementary information, when available (such as a phonetic transcription of the text, noun gender, etc.). The data is also offered as a set of files (tabular textual files, BibTeX sources, and JSON metadata) following the Cross-Linguistic Data Formats (CLDF), a specification designed to allow the exchange of cross-linguistic data. The code for data extraction is available on GitHub and the data is available on Zenodo as “Parallel Translations from the English Wiktionary” (DOI: 10.5281/zenodo.1286991). Many thanks to Johann-Mattis List and to Christoph Rzymski for their help with this work.
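For readers who want to explore the tabular release, a minimal sketch follows; the file name and column headers used here are assumptions and should be checked against the released files.

import csv
from collections import Counter

# hypothetical file name and column header; adjust to the actual release
with open("translations.tsv", encoding="utf-8") as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

# e.g., count how many translations each Glottolog language has
per_language = Counter(row["LANGUAGE_ID"] for row in rows)
print(per_language.most_common(5))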

References

Fuertes-Olivera, Pedro A. (2009). “The function theory of lexicography and electronic dictionaries: Wiktionary as a prototype of collective free multiple-language internet dictionary”. In H. Bergenholtz, S. Nielsen, and S. Tarp (eds.), Lexicography at a Crossroads: Dictionaries and Encyclopedias Today, Lexicographical Tools Tomorrow (Linguistic Insights: Studies in Language and Communication 90), 99–134. Bern: Peter Lang.

Lepore, Jill (2006). “Noah’s Mark”. The New Yorker, November 6, 2006.

Li, Shen; Graça, João V.; Taskar, Ben (2012). “Wiki-ly supervised part-of-speech tagging”. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 1389–1398. Jeju Island, Korea: Association for Computational Linguistics.

Medero, Julie; Ostendorf, Mari (2009). “Analysis of vocabulary difficulty using Wiktionary”. In Proceedings of the SLaTE Workshop.

Meyer, Christian M.; Gurevych, Iryna (2012). “Wiktionary: A new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography”. In Granger, Sylviane; Paquot, Magali (eds.), Electronic Lexicography. Oxford: Oxford University Press.

Otte, Pim; Tyers, Francis M. (2011). “Rapid rule-based machine translation between Dutch and Afrikaans”. In Forcada, Mikel L.; Depraetere, Heidi; Vandeghinste, Vincent (eds.), Proceedings of the 16th Annual Conference of the European Association for Machine Translation (EAMT11), 153–160. Leuven, Belgium.

Schlippe, Tim; Ochs, Sebastian; Schultz, Tanja (2012). “Grapheme-to-phoneme model generation for Indo-European languages”. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2012), 4801–4804. Kyoto, Japan.

Cite this article as: Tiago Tresoldi, "Extracting translation data from the Wiktionary project," in Computer-Assisted Language Comparison in Practice, 11/06/2018, https://calc.hypotheses.org/32.