Category Archives: Dataset

Towards a refined wordlist of German in the Intercontinental Dictionary Series

For a long time, I have been wondering about the origin of the German wordlist in the Intercontinental Dictionary Series (Key and Comrie 2016). Not only are many of the words given as translations for the large concept list of 1310 items archaic variants that are no longer in use, we also find many annoying problems, such as unusual spellings (consistently avoiding the letter “ß”, which is still in use, even if some people think differently), wrong translations, and, of course, no phonetic transcriptions. Already during my doctoral studies, I therefore started to work on a refined list, but I soon had so many other things on my plate that I never managed to finish this work. Recently, however, I realized that the work I had done years ago was far more complete than I had thought, and that I had even added information on potential borrowings, extracted from Kluge’s (2002) etymological dictionary. Given that this list can come in handy in various ways, I decided to finish the work and publish the list officially in a first version.

Annotating Rhyme Judgments for a Complex Corpus of Manuscript Sources: Making Sense of the «Cang Jie pian 蒼頡篇»

Establishing a standardized annotation framework for communicating rhyme judgments identified in historical texts will both ease the use of computational tools for rhyme analysis and hopefully inspire greater collaboration amongst scholars interested in historical linguistics. The framework we have proposed (List, Hill & Foster 2019) was designed with simplicity, exhaustiveness, and flexibility in mind (p. 30), with the intention of eventual inclusion in the Cross-Linguistic Data Formats initiative (https://cldf.clld.org). Further testing of the framework is needed to demonstrate its utility and to identify areas requiring refinement. This study presents such a test on rhyming in the Cang Jie pian 蒼頡篇, an ancient Chinese scribal treatise only recently reconstructed from a complex corpus of surviving manuscript fragments. In a follow-up study, the proposal will be formally evaluated by providing code to test the annotations.

A list of 171 body part concepts

The body of most human beings consists of similar parts, such as a head, arms, and legs. Many body parts also occur in animals. While the shapes and functions of body parts are universal across cultures, speakers of different languages choose to categorize the body differently. For example, Vietnamese has a single word (tay) for the concepts HAND and ARM. The universality of the human body and its varying categorization into parts have attracted attention across research areas such as lexical typology and cognitive science. I therefore present a comprehensive list of human and animal body part terms, based on German, which I mapped to the concepts in the Concepticon (List et al. 2020). The list is intended for investigations of cross-linguistic naming patterns of body parts.
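
For readers who want to look up such concepts programmatically, a minimal sketch with the pyconcepticon API might look as follows; the path to a local clone of the concepticon-data repository and the choice of glosses are assumptions for illustration:

```python
from pyconcepticon import Concepticon

# assumes a local clone of the concepticon-data repository (path is made up)
api = Concepticon("path/to/concepticon-data")

# find the concept sets whose glosses match our target body parts
for conceptset in api.conceptsets.values():
    if conceptset.gloss in ("HAND", "ARM"):
        print(conceptset.id, conceptset.gloss, conceptset.definition)
```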

A model of distinctive features for computer-assisted language comparison

This post introduces a model of segmental/distinctive features for the symbolic representation of sounds, covering almost 600 segments from CLTS (List et al., 2019) mapped to unique sets of bivalent features. It is designed as an alternative input to vectors of presence/absence built from BIPA descriptors, analogous to other feature matrices such as the one by Phoible (Moran & McCloy, 2019). While still under development, it can already be used both for training machine-learning and statistical models, notably decision trees, and for bootstrapping language- and process-specific models, aided by a “universal” and concise reference. The complete matrix is available on Zenodo. A supporting Python library, distfeat, is available on PyPI.
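
To give an impression of what such a bivalent representation looks like, here is a toy sketch in plain Python; the segments, feature names, and values are illustrative simplifications, not the actual matrix or the distfeat API:

```python
# toy bivalent feature matrix; the real model covers almost 600 CLTS segments
FEATURES = {
    "p": {"consonantal": True, "voiced": False, "labial": True},
    "b": {"consonantal": True, "voiced": True, "labial": True},
    "a": {"consonantal": False, "voiced": True, "labial": False},
}

def to_vector(segment, names=("consonantal", "voiced", "labial")):
    """Turn a segment into a presence/absence vector, e.g. for decision trees."""
    return [int(FEATURES[segment][name]) for name in names]

print(to_vector("b"))  # [1, 1, 1]
```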

Concept Similarity in STARLING

STARLING is a software package, originally created by Sergej A. Starostin, which is designed for historical linguists who want to build their own etymological dictionaries. It is not only a database system that allows its users to set up a very straightforward relational database structure, but also a package full of surprises, since it contains many methods that are supposed to automate specific tasks in historical linguistics. These range from phylogenetic tree reconstruction and the preliminary identification of sound correspondences to the comparison of elicitation glosses for their semantic similarity. While phylogenetic reconstruction and sound correspondences are now quite successfully handled in alternative software packages, I thought it would be interesting to discuss the routine for assessing concept similarity in more detail, since it offers interesting possibilities for those who practice historical language comparison.
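
To make the general idea of comparing elicitation glosses more concrete, here is a toy similarity measure over the word sets of two glosses; this is merely an illustration of the task, not STARLING’s actual routine:

```python
import re

def gloss_similarity(gloss_a, gloss_b):
    """Dice coefficient over the word sets of two elicitation glosses."""
    words_a = set(re.findall(r"\w+", gloss_a.lower()))
    words_b = set(re.findall(r"\w+", gloss_b.lower()))
    if not (words_a and words_b):
        return 0.0
    return 2 * len(words_a & words_b) / (len(words_a) + len(words_b))

print(gloss_similarity("the hand", "hand (of a person)"))  # 0.33...
```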

New Lexical Data for the Kusunda Language

Endangered language documentation and endangered language revitalisation have been two hot topics in recent years. For instance, the United Nations Educational, Scientific and Cultural Organization (UNESCO) declared the year 2019 the International Year of Indigenous Languages. However, although UNESCO and many other organizations (e.g., the Endangered Languages Documentation Programme or SIL International) urge the public to be aware of the rapidly decreasing number of languages in the world, this awareness has not slowed down the annual rate of language loss. For example, the number of speakers of the Kusunda language, a moribund language spoken in Nepal, decreased to only one person in 2020.

New Kusunda data: A list of 250 concepts

This is a joint post by Uday Raj Aaley (independent researcher, Dang, Nepal) and Timotheus A. Bodt.

Between 29 July and 12 August 2019, we invited the then remaining two speakers of the Kusunda language to Kathmandu, where we interviewed them. One of these speakers, Gyani Maiya Sen Kusunda, unfortunately passed away in early 2020. At the time of writing, there is only one Kusunda speaker left, Kamala Sen Kusunda.

Illustrating linguistic data reuse: a modest database for semantic distance

Besides new algorithms and tools that facilitate established workflows, one change prompted by computer-assisted approaches to language comparison is a different relationship between scientists and their data. A critical part of our work, and perhaps the one with the most lasting impact, is to promote an approach in which the data life-cycle is not constrained within the limits of planning and publishing a single study. Data are organized and planned for reuse in investigations perhaps not even conceived during collection, with the output of one project becoming the input of another.

Merging datasets with LingPy and the CLDF curation framework

Imagine you have two different datasets, both containing approximately the same concepts, but with slightly different numbers of columns and, more importantly, potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other kind of spreadsheet software and then try to fix manually all the problems that occur during this process.

A better idea is to just use LingPy and our CLDF curation framework. The sketch below illustrates the most basic step, re-keying identifiers so that the two datasets can be safely concatenated.
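
This is a minimal plain-Python sketch under the assumption that both files are tab-separated wordlists with an “ID” column; the file names are made up:

```python
import csv

def merge_wordlists(path_a, path_b, out_path):
    """Concatenate two wordlist TSVs, prefixing IDs to avoid collisions."""
    rows = []
    for prefix, path in [("A", path_a), ("B", path_b)]:
        with open(path, encoding="utf-8") as f:
            for row in csv.DictReader(f, delimiter="\t"):
                row["ID"] = f"{prefix}-{row['ID']}"
                rows.append(row)
    # take the union of all columns, so differing headers still merge
    fields = sorted({key for row in rows for key in row})
    with open(out_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields, delimiter="\t")
        writer.writeheader()
        writer.writerows(rows)

merge_wordlists("dataset-a.tsv", "dataset-b.tsv", "merged.tsv")
```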

From Fieldwork to Trees 2: Cognate coding

In a previous post (Kaiping 2018), I described how to convert matrix-shaped word lists given in Excel into the long format that LingPy and other software can work with. My motivation was to provide my colleague Yunus Sulistyono with a good way to compare the lexicon of his Alorese [alor1247] dialects, and to understand the relationship between them. In this post, the data is automatically cognate coded and converted into CLDF.
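
As a rough sketch of how such automated cognate coding works in LingPy, one can run the LexStat class over a long-format wordlist; the file name here is made up, and the post itself uses its own pipeline:

```python
from lingpy import LexStat

# assumes a long-format wordlist with DOCULECT, CONCEPT, and IPA columns
lex = LexStat("alorese-wordlist.tsv")
# SCA-based clustering is a fast baseline; method="lexstat" is more
# accurate, but requires permutation tests and thus more data and time
lex.cluster(method="sca", threshold=0.45, ref="cogid")
lex.output("tsv", filename="alorese-cognates", ignore="all")
```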

Inferring consonant clusters from CLICS data with LingPy

LingPy (List et al. 2017) offers a large number of functions for string manipulation. Although most of these functions are well documented (see lingpy.org for details), and the basic ideas have also been described in my dissertation (List 2014), it seems that not many users are aware of the additional possibilities which the library offers.

In the following, I want to illustrate how we can use LingPy to learn something about the consonant clusters occurring in the data underlying the CLICS database (List et al. 2018, clics.clld.org). I have illustrated in an earlier post how one can use the CLICS software API to cook one’s own CLICS application. I will thus assume that you know how to install CLICS (following the instructions on our GitHub page) along with the data underlying it.
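
Before turning to the CLICS data itself, here is a minimal sketch of the core LingPy operations involved, extracting the consonant cluster at the beginning of an IPA string; the example form is made up:

```python
from lingpy import ipa2tokens, tokens2class

def initial_cluster(form):
    """Return the consonant cluster at the start of an IPA string."""
    tokens = ipa2tokens(form)
    # map each token to C (consonant), V (vowel), or T (tone)
    classes = tokens2class(tokens, "cv")
    cluster = []
    for token, cls in zip(tokens, classes):
        if cls != "C":
            break
        cluster.append(token)
    return cluster

print(initial_cluster("pʰlaŋ"))  # ['pʰ', 'l']
```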

Representing Structural Data in CLDF

The Cross-Linguistic Data Formats initiative (CLDF, https://cldf.clld.org, Forkel et al. 2018) has helped a lot in preparing the CLICS² database of cross-linguistic colexifications (https://clics.lingpy.org, List et al. 2018), since linking our data to Concepticon (https://concepticon.clld.org, List et al. 2016) and Glottolog (https://glottolog.org, Hammarström et al. 2018) provided invaluable help in merging the different datasets into one big comparative dataset.

CLDF, however, is not restricted to lexical data, but can also be successfully used to store structural data, although, due to the nature of structural data, it is much more difficult to compare different datasets.
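
As an illustration of what writing a small structural dataset with the pycldf package can look like, here is a minimal sketch; the directory name, language identifier, and feature are made up:

```python
from pycldf import StructureDataset

# create a CLDF structure dataset in a fresh directory (name is made up)
ds = StructureDataset.in_dir("my-structural-cldf")
ds.write(ValueTable=[
    {
        "ID": "1",
        "Language_ID": "stan1295",    # e.g. a Glottocode
        "Parameter_ID": "word-order",
        "Value": "SVO",
    },
])
```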

Cooking with CLICS

Robert Forkel just published a very nice cookbook example for our CLICS database (List et al. 2018, http://clics.clld.org), where you can find out how to manipulate the data further, apart from just installing the database and running it to replicate our analyses.

This cookbook tells you how the underlying SQLite database is structured and how you can, after installing CLICS and the respective packages, access the data to conduct studies of your own.
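
If you want to take a first look at that structure yourself, a generic SQLite session is already enough; the file name of the database is an assumption that depends on where the CLICS data was loaded:

```python
import sqlite3

# path to the CLICS sqlite file is an assumption; adjust to your setup
con = sqlite3.connect("clics.sqlite")
cur = con.cursor()
# list all tables to explore the structure without knowing the schema
for (name,) in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    print(name)
con.close()
```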

As a little example of what you can do with the new CLICS API, let me illustrate in this post how we can use the old CLICS data (underlying version 1.0 by List et al. 2014, http://clics.lingpy.org), available from here, in the new application, specifically in the standalone application that we provide.

Extracting translation data from the Wiktionary project

Wiktionary is a project for creating a multilingual, web-based, free dictionary of all words in all languages. Like its sister project Wikipedia, it has been subject to criticism since its inception, both in terms of its lexicographic approaches and in terms of reliability, content, procedures, and community operation (see Lepore 2006, Fuertes-Olivera 2009, Meyer 2012). Faults have also been pointed out in its structure, which is confusing for newcomers, with parallel and unaligned information shared among the various language dictionaries, and differences in accuracy and depth between languages. Nevertheless, data from Wiktionary is routinely employed with successful results in natural language processing and, occasionally, in linguistic research (see Otte 2011, Schlippe 2012, Medero 2009, Li 2012), as it constitutes by far the largest free multilingual lexical resource.
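
To give an impression of what the extraction involves, here is a toy sketch that pulls language codes and terms out of Wiktionary’s {{t}} and {{t+}} translation templates; processing a real dump requires considerably more care:

```python
import re

# Wiktionary marks translations with templates such as {{t|de|Hund}} or
# {{t+|fr|chien|m}}; this toy pattern grabs the language code and the term
T_TEMPLATE = re.compile(r"\{\{t\+?\|([a-z-]+)\|([^|}]+)")

wikitext = "* German: {{t+|de|Hund|m}}\n* French: {{t+|fr|chien|m}}"
for code, term in T_TEMPLATE.findall(wikitext):
    print(code, term)
# de Hund
# fr chien
```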
