From Fieldwork to Trees 1: Data preparation

A colleague of mine has recently returned from his fieldwork, where he collected data on the dialectal variation of the Alorese language of Alor and Pantar in the East Nusa Tenggara province of Indonesia. He collected data on 13 Alorese varieties, including word lists. One obvious step for comparing the dialects is to mark which forms are obviously cognate and then use a standard tree (or network) construction algorithm to display the shared signal in the data. With standard tools and a bit of Python glue, this is an easy task. A script covering all three steps can be found in my repository on GitHub. In this first part, I will describe how to get an Excel file into a format that LingPy can deal with.
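To give an impression of the kind of Python glue involved, the following minimal sketch converts a wide spreadsheet (one concept column plus one column per variety) into the long, tab-separated wordlist format that LingPy reads. The file names and the column layout are invented for the example and do not reflect the actual structure of the fieldwork data:

```python
# Minimal sketch: convert a wide spreadsheet (concepts in rows, varieties in
# columns) into the long tab-separated wordlist format that LingPy reads.
# File names and column layout are hypothetical.
from openpyxl import load_workbook

wb = load_workbook("alorese_wordlists.xlsx")   # hypothetical file name
sheet = wb.active

rows = list(sheet.iter_rows(values_only=True))
doculects = rows[0][1:]                        # first row: variety names (after the concept header)

with open("alorese.tsv", "w", encoding="utf-8") as f:
    f.write("ID\tDOCULECT\tCONCEPT\tIPA\n")
    idx = 1
    for row in rows[1:]:                       # remaining rows: one concept each
        concept, forms = row[0], row[1:]
        for doculect, form in zip(doculects, forms):
            if form:                           # skip empty cells
                f.write(f"{idx}\t{doculect}\t{concept}\t{form}\n")
                idx += 1
```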

Continue reading “From Fieldwork to Trees 1: Data preparation”

Inferring consonant clusters from CLICS data with LingPy

LingPy (List et al. 2017) offers a wide range of functions for string manipulation. Although most of these functions are well documented (see lingpy.org for details), and the basic ideas have also been described in my dissertation (List 2014), it seems that not many users are aware of these additional possibilities that the library offers.

In the following, I want to illustrate how we can use LingPy to learn something about consonant clusters occurring in the data underlying the CLICS database (List et al. 2018, clics.clld.org). I have illustrated in an earlier post how one can use the CLICS software API to cook one’s own CLICS application. I will thus assume that you know how to install CLICS (following the instructions on our GitHub page) and the data underlying it.
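To give a first impression of the string-manipulation functions involved, here is a minimal, self-contained sketch (with invented forms rather than the actual CLICS data) that uses LingPy's ipa2tokens and tokens2class functions to extract consonant clusters from IPA strings:

```python
# Minimal sketch: find consonant clusters in IPA strings with LingPy.
# The example forms are invented; in the post, the forms come from the CLICS data.
from lingpy import ipa2tokens, tokens2class

forms = ["straːt", "knixt", "pfluːk"]           # invented example forms

for form in forms:
    tokens = ipa2tokens(form)                   # segment the IPA string
    classes = tokens2class(tokens, "cv")        # map each segment to C (consonant) or V (vowel)
    # collect maximal runs of two or more consonants
    clusters, current = [], []
    for token, cls in zip(tokens, classes):
        if cls == "C":
            current.append(token)
        else:
            if len(current) > 1:
                clusters.append("".join(current))
            current = []
    if len(current) > 1:
        clusters.append("".join(current))
    print(form, clusters)
```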

Continue reading “Inferring consonant clusters from CLICS data with LingPy”

Enhancing morphological annotation for internal language comparison

In language comparison, there is a long history of using concept-based wordlists to get insights into the degree of similarity between languages, going back at least to Morris Swadesh (Swadesh 1950). For these purposes, words from different languages that share the same meaning are compared, either manually or with computational methods. The latter have the advantages of being both faster and more consistent. However, there are also limits to what computer-based methods can detect for the time being.

One of the biggest problems in this context is that none of the currently available methods for automatic cognate detection can infer partial cognates directly if no information on morpheme boundaries is provided by the user. As a result, if morpheme boundaries are missing and morphological differences are frequent in the data one wants to investigate, automatic cognate detection can be seriously hampered (List, Greenhill, and Gray 2017).
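To make this concrete, morpheme boundaries can be marked directly in the segmented forms, conventionally with a "+" symbol, which LingPy's partial cognate detection can then exploit. The following is only a rough sketch with invented data; the column layout and the threshold are illustrative, so check the LingPy documentation on the Partial class for the exact usage:

```python
# Rough sketch: partial cognate detection in LingPy with explicit morpheme
# boundaries ("+") in the segmented forms. Data and threshold are invented.
from lingpy.compare.partial import Partial

data = {
    0: ["doculect", "concept", "ipa", "tokens"],
    1: ["LanguageA", "sun", "tʰaujitʰ", "tʰ a u + j i tʰ".split()],
    2: ["LanguageB", "sun", "tʰau", "tʰ a u".split()],
    3: ["LanguageA", "moon", "jitʰ", "j i tʰ".split()],
}

part = Partial(data)
# cluster partial cognates with the SCA method (threshold is illustrative)
part.partial_cluster(method="sca", threshold=0.45, ref="cogids")

# print the inferred partial cognate sets per morpheme
for idx, doculect, concept, cogids in part.iter_rows("doculect", "concept", "cogids"):
    print(doculect, concept, cogids)
```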

Continue reading “Enhancing morphological annotation for internal language comparison”

A fast implementation of the Consonant Class Matching method for automatic cognate detection in LingPy

LingPy’s LexStat class for cognate detection confuses those who want to apply it, since the name of the Python class is the same as the name of one of the methods the class provides, although the class can be used for other types of cognate detection as well. I recommend that all users of LingPy read our most recent tutorial on LingPy’s cognate detection methods (List et al. 2018), since the three most important methods are discussed there in detail: the edit distance method for cognate detection, which makes use of the simple, normalized edit distance, the SCA method, based on the Sound-Class-Based Alignment algorithm (List 2014), and the LexStat method (ibid.). Applying these methods in LingPy is fairly simple and described in detail in our aforementioned tutorial. But LingPy offers an additional method for cognate detection that has the advantage of being extremely fast and thus especially suitable for exploratory analyses of very large datasets. This method is called turchin in LingPy, named after the first author of a paper presenting the method (Turchin et al. 2010), but the method itself, which Turchin et al. call the “Consonant Class Matching” method, goes back originally to Dolgopolsky (1964) and has long since been implemented as part of the STARLING software package (http://starling.rinet.ru/program.php).

Continue reading “A fast implementation of the Consonant Class Matching method for automatic cognate detection in LingPy”
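To illustrate how the turchin method is invoked in practice, here is a minimal sketch; the input file name is invented, and the exact keyword arguments should be checked against the LingPy documentation:

```python
# Minimal sketch: running the Consonant Class Matching ("turchin") method in
# LingPy. The wordlist file name is hypothetical.
from lingpy import LexStat

lex = LexStat("wordlist.tsv")                   # hypothetical input wordlist
lex.cluster(method="turchin", ref="turchinid")  # CCM does not require threshold tuning
lex.output("tsv", filename="wordlist-ccm", ignore="all", prettify=False)
```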

Representing Structural Data in CLDF

The Cross-Linguistic Data Formats initiative (CLDF, https://cldf.clld.org, Forkel et al. 2018) has helped a lot in preparing the CLICS² database of cross-linguistic colexifications (https://clics.lingpy.org, List et al. 2018), since linking our data to Concepticon (https://concepticon.clld.org, List et al. 2016) and Glottolog (https://glottolog.org, Hammarström et al. 2018) has provided incredible help in merging the different datasets into a big comparative dataset.

CLDF, however, is not restricted to lexical data, but can also be successfully used to store structural data, although — due to the nature of structural data — it is much more difficult to compare different datasets.
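For readers who want to experiment with this, the pycldf package provides a StructureDataset class. The following minimal sketch (with invented languages, features, and values) writes a tiny structural dataset to disk; it is meant as an illustration of the general idea rather than a recipe, so check the pycldf documentation for details:

```python
# Minimal sketch: writing a tiny structural dataset in CLDF with pycldf.
# The language, feature, and value codes are invented.
from pycldf import StructureDataset

ds = StructureDataset.in_dir("structural-cldf")   # create a dataset skeleton in this directory
ds.add_component("LanguageTable")                 # add optional language metadata table
ds.add_component("ParameterTable")                # add the feature (parameter) table

ds.write(
    LanguageTable=[
        {"ID": "alor1247", "Name": "Alorese", "Glottocode": "alor1247"},
    ],
    ParameterTable=[
        {"ID": "word-order", "Name": "Basic word order"},
    ],
    ValueTable=[
        {"ID": "1", "Language_ID": "alor1247", "Parameter_ID": "word-order", "Value": "SVO"},
    ],
)
```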

Continue reading “Representing Structural Data in CLDF”

Cooking with CLICS

Robert Forkel just published a very nice cookbook example for our CLICS database (List et al. 2018f, http://clics.clld.org), where you can find out how to manipulate the data further, apart from just installing it and running it to replicate our analyses.

This cookbook tells you how the underlying SQLite database is structured and how you can, after installing CLICS and the respective packages, access the data to conduct studies of your own.

As a little example of what you can do with the new CLICS API, let me illustrate in this post how we can use the old CLICS data (underlying version 1.0 by List et al. 2014, http://clics.lingpy.org), available from here, in the new application, specifically in the standalone version that we provide.
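Just to give a flavour of what accessing the data looks like in practice, once the SQLite database has been created, its structure can be inspected with nothing more than Python's standard library; the database file name below is hypothetical:

```python
# Minimal sketch: inspect the structure of the CLICS SQLite database with the
# Python standard library. The database file name is hypothetical.
import sqlite3

conn = sqlite3.connect("clics.sqlite")
cursor = conn.cursor()

# list all tables defined in the database
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
for (table,) in cursor.fetchall():
    # count the rows per table to get a rough idea of its size
    count = cursor.execute(f"SELECT COUNT(*) FROM '{table}'").fetchone()[0]
    print(f"{table}: {count} rows")

conn.close()
```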

Continue reading “Cooking with CLICS”

Exporting Sublists from a Wordlist with LingPy and Concepticon

When dealing with linguistic datasets, we may often want to export only a small part of our data, for example, only vocabulary in a certain range, such as the Swadesh list of 200 items or the list of 35 items by Yakhontov (originally published in Starostin 1991). Thanks to the pyconcepticon API and LingPy’s built-in export functions for wordlists, this task can be done in just a few lines of code, as we will see below. If you prefer to see the raw code instead of the step-by-step explanation below, you can find a GitHub Gist here.
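In outline, the approach looks like the following sketch. The concept list identifier, the file names, and the assumption that the wordlist uses Concepticon glosses as concept labels are all illustrative; check the Concepticon catalogue and the step-by-step explanation for the exact details:

```python
# Sketch: export only the concepts of a given Concepticon concept list from a
# LingPy wordlist. List ID, paths, and file names are illustrative.
from pyconcepticon import Concepticon
from lingpy import Wordlist

con = Concepticon("path/to/concepticon-data")    # path to a local clone (hypothetical)
# glosses of the concepts in the chosen list (list ID is illustrative)
concepts = [
    c.concepticon_gloss
    for c in con.conceptlists["Yakhontov-1991-35"].concepts.values()
]

wl = Wordlist("wordlist.tsv")                    # hypothetical input wordlist
# write only those rows whose concept label occurs in the sublist
wl.output("tsv", filename="sublist", subset=True,
          rows={"concept": "in " + str(concepts)})
```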

Continue reading “Exporting Sublists from a Wordlist with LingPy and Concepticon”

Extracting translation data from the Wiktionary project

Wiktionary is a project for creating a multilingual, web-based free dictionary of all words in all languages. Like its sister project Wikipedia, since its inception it has been subject to criticism both in terms of its lexicographic approaches and in terms of reliability, content, procedures, and community operation (see Lepore 2006, Fuertes-Olivera 2009, Meyer 2012). Faults have also been pointed out in terms of its structure, which is confusing for newcomers, with parallel and unaligned information spread among the various language dictionaries, and differences in accuracy and depth among languages. Nevertheless, data from Wiktionary is routinely employed with successful results in natural language processing and, occasionally, in linguistic research (see Otte 2011, Schlippe 2012, Medero 2009, Li 2012), as it constitutes by far the largest free multilingual lexical source.

The Wikimedia Foundation, the organization managing the project, releases automatically generated “dumps” of the data for free and anonymous download. However, such files cannot be used in linguistic research without a pre-processing (“parsing”) stage, as they constitute more a backup than a data release: in essence, they are XML files which enclose the textual information of the dictionary articles (pages potentially holding information for more than one word and more than one language), which are encoded in the MediaWiki markup syntax (a context-sensitive language that is notoriously difficult to parse). Data extraction is further complicated by the fact that the rendered HTML pages include information computed by functions of general and linguistic scope only available inside an environment running the Wiktionary server, as well as by Wiktionary collaborators not always following the project’s guidelines and specifications. Many projects have started to tackle such problems and the difficulties in reusing the data, including a brand new initiative by Wikidata.

As such, no standard method for extracting Wiktionary information exists, only mostly project-specific solutions. An investigation of parsing tools on GitHub revealed that two main approaches are used: parsing the XML files and manipulating the entire textual fields, or parsing the individually rendered HTML pages (fetched either from a local server or over the Internet). We decided to test a simpler approach of parsing the dumps as regular text files, reading them line by line while building an internal structured version of the information, and processing lines with regular expressions or simple string-searching methods. The first experiment, whose results are presented here, involved extracting the parallel translations for English words found in the English Wiktionary.
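To make the approach more tangible, the sketch below shows the general pattern of such line-by-line processing. The regular expression only covers the most common form of the translation templates ({{t|...}} and {{t+|...}}) and is a strong simplification of what the actual extraction code does; the dump file name is given only as an example:

```python
# Simplified sketch of line-by-line extraction of translation templates
# ({{t|lang|word}} and {{t+|lang|word}}) from an English Wiktionary XML dump.
# This only illustrates the general pattern; the real extraction code handles
# many more cases (sections, languages, multi-word templates, etc.).
import re

TITLE_RE = re.compile(r"<title>([^<]+)</title>")
TRANS_RE = re.compile(r"\{\{t\+?\|([a-z-]+)\|([^}|]+)")

def extract_translations(path):
    """Yield (english_entry, language_code, translation) tuples."""
    current_title = None
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            title_match = TITLE_RE.search(line)
            if title_match:
                current_title = title_match.group(1)
                continue
            for lang, word in TRANS_RE.findall(line):
                yield current_title, lang, word

# Example usage (the dump file name is hypothetical):
# for entry, lang, word in extract_translations("enwiktionary-20180601-pages-articles.xml"):
#     print(entry, lang, word)
```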

The data, based on the dump of 2018-06-01, includes 2,169,063 different entries from the translation of 149,530 English words and expressions into 2,358 languages (with much variation in vocabulary size among languages: 931 languages have only one entry, and German, the largest language after English, has 97,091 entries). The data is offered in a tabular textual format, and all entries include (a) a unique ID, (b) a concept ID referring to the source English word, (c) a description string with the English source and a short definition (such as “dictionary/publication that explains the meanings of an ordered list of words”), (d) a language ID from the Glottolog catalog, (e) the text of the translation as given in the Wiktionary, and (f) an extra field holding complementary information, when available (such as a phonetic transcription of the text, noun gender, etc.). The data is also offered in a set of files (tabular textual files, BibTeX sources, and JSON metadata) following the Cross-Linguistic Data Formats (CLDF), a specification designed to allow the exchange of cross-linguistic data. The code for data extraction is available on GitHub and the data is available on Zenodo as “Parallel Translations from the English Wiktionary” (DOI: 10.5281/zenodo.1286991). Many thanks to Johann-Mattis List and to Christoph Rzymski for their help with this work.

References

Fuertes-Olivera, Pedro A. (2009). “The function theory of lexicography and electronic dictionaries: Wiktionary as a prototype of collective free multiple-language internet dictionary”. In H. Bergenholtz, S. Nielsen, and S. Tarp (eds), Lexicography at a Crossroads: Dictionaries and Encyclopedias Today, Lexicographical Tools Tomorrow. Linguistic Insights: Studies in Language and Communication 90, 99–134. Bern: Peter Lang.

Lepore, Jill (2006). “Noah’s Mark”. The New Yorker, November 6, 2006 issue.

Li, Shen; Graça, João V.; Taskar, Ben (2012). “Wiki-ly supervised part-of-speech tagging”. Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Jeju Island, Korea: Association for Computational Linguistics. pp. 1389–1398.

Medero, Julie; Ostendorf, Mari (2009). “Analysis of vocabulary difficulty using Wiktionary”. Proceedings of the SLaTE Workshop.

Meyer, Christian M.; Gurevych, Iryna (2012). “Wiktionary: A new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography”. In Granger, Sylviane; Paquot, Magali (eds), Electronic Lexicography. Oxford: Oxford University Press.

Otte, Pim; Tyers, Francis M. (2011). “Rapid rule-based machine translation between Dutch and Afrikaans”. In Forcada, Mikel L.; Depraetere, Heidi; Vandeghinste, Vincent (eds), 16th Annual Conference of the European Association of Machine Translation, EAMT11. Leuven, Belgium. pp. 153–160.

Schlippe, Tim; Ochs, Sebastian; Schultz, Tanja (2012). “Grapheme-to-phoneme model generation for Indo-European languages”. Acoustics, Speech and Signal Processing (ICASSP). Kyoto, Japan. pp. 4801–4804.

Cite this article as: Tiago Tresoldi, "Extracting translation data from the Wiktionary project," in Computer-Assisted Language Comparison in Practice, 11/06/2018, https://calc.hypotheses.org/32.

Let the Games Begin!

By comparing the languages of the world, we gain invaluable insights into human prehistory, predating the appearance of written records by thousands of years. The traditional methods for language comparison are based on manual data inspection. With more and more data available, they reach their practical limits. Computer applications, however, are not capable of replacing experts’ experience and intuition. In a situation where computers cannot replace experts and experts do not have enough time to analyse the massive amounts of data, a new framework, neither completely computer-driven, nor ignorant of the help computers provide, becomes urgent. Such frameworks are well-established in biology and translation, where computational tools cannot provide the accuracy needed to arrive at convincing results, but do assist humans to digest large data sets.

After one month of preparation, during which our team was busy teaching each other, the members of our seminar at Friedrich Schiller University Jena, and colleagues in our department how to code, we are ready to launch the first posts over the coming weeks.

I will refrain from promising too much at this stage, but I recommend those interested in learning more about topics as diverse as coding (in Python and R), data curation and analysis, the theory of diversity linguistics, and the methodology of historical language comparison to keep an eye on this blog. Our core team of four to five authors will try to publish at least one new blog post per month, and we will try to constantly expand our range of authors by inviting colleagues from our Department of Linguistic and Cultural Evolution and from other institutions to present their questions, ideas, and approaches related to computer-based and computer-assisted historical language comparison and beyond.

Our team is currently preparing the first blog posts for this month. I won’t tell you too much about the concrete content yet, but if you are interested in computer-assisted language comparison and empirical approaches to diversity linguistics, I recommend that you keep an eye on our weblog.

Cite this article as: Johann-Mattis List, "Let the Games Begin!," in Computer-Assisted Language Comparison in Practice, 06/06/2018, https://calc.hypotheses.org/22.