Using pyconcepticon to map concept lists (II)

Mapping a given concept list to Concepticon can be done in a straightforward way, even if the automatic mappings need manual refinement. But what can we do when we have to deal with larger datasets, say, a dictionary from which we want to extract specific concepts, such as the ones in the classical Swadesh list of 100 items (Swadesh 1955)?
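
One way to approach this, sketched below in Python under the assumption of a local clone of the concepticon-data repository and an already mapped dictionary (the file mapped_dictionary.tsv and its columns are made up for illustration): pull the Concepticon IDs of Swadesh’s 100 items and filter the dictionary by them.

```python
# A minimal sketch: collect the Concepticon IDs of Swadesh's 100-item
# list and use them to filter an already mapped concept list.
import csv
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")
swadesh = {
    concept.concepticon_id
    for concept in api.conceptlists["Swadesh-1955-100"].concepts.values()
}

with open("mapped_dictionary.tsv", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        if row["CONCEPTICON_ID"] in swadesh:
            print(row["CONCEPTICON_ID"], row["GLOSS"])
```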

Continue reading “Using pyconcepticon to map concept lists (II)”

Using pyconcepticon to map concept lists

A major problem for data reuse in computer-assisted historical linguistics, especially when employing data collected with no computational workflows in mind, is linking datasets in terms of the meanings of the words (or, technically, “forms”) they carry. Just as linking languages across different datasets is not as straightforward as one might naively assume, demanding a complex reference catalog such as Glottolog, linking the concepts used in a wordlist (a “concept list”) to our Concepticon project may well be the most labor-intensive task in preparing a dataset for cross-linguistic studies.
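
To give an idea of what this linking amounts to, here is a minimal sketch, assuming a local clone of concepticon-data: it naively matches the glosses of a toy wordlist against the glosses of Concepticon’s concept sets. Real mappings require fuzzier matching and manual refinement, which is exactly what pyconcepticon supports.

```python
# A minimal sketch of the linking task: exact gloss lookup against
# Concepticon's concept sets. Anything not found exactly ("?") is
# where the real mapping work begins.
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")
by_gloss = {cs.gloss: cs.id for cs in api.conceptsets.values()}

for gloss in ["HAND", "TREE", "WATERFALL"]:
    print(gloss, "->", by_gloss.get(gloss, "?"))
```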

Continue reading “Using pyconcepticon to map concept lists”

A Primer on Automatic Inference of Sound Correspondence Patterns (3): Extended Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands

Having illustrated how a quick correspondence pattern analysis can be carried out with the help of readily formatted data and the EDICTOR tool alone, it is now time to show how we can use the LingRex package to carry out a full-fledged correspondence pattern analysis. While EDICTOR uses a simple algorithm based on sorting the patterns, the Python algorithm for correspondence pattern detection, described in detail in List (2019), uses a greedy approach inspired by the Welsh-Powell algorithm for graph coloring (Welsh and Powell 1967) to partition all alignment sites in the data into clusters that are compatible with each other.
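
To make the idea concrete, here is a schematic re-implementation of the greedy clustering step — not the published algorithm itself, just its core intuition: treat alignment sites as tuples of values over languages, sort them by coverage, and assign each site to the first cluster it is compatible with.

```python
def compatible(site_a, site_b):
    # Two sites are compatible if they never disagree in a language
    # where both actually have a value ("-" marks a missing value).
    return all(a == b for a, b in zip(site_a, site_b) if "-" not in (a, b))

def cluster_sites(sites):
    clusters = []
    # Sort sites by coverage (fewest gaps first), in analogy to
    # Welsh-Powell's sorting of nodes by degree.
    for site in sorted(sites, key=lambda s: s.count("-")):
        for cluster in clusters:
            if all(compatible(site, other) for other in cluster):
                cluster.append(site)
                break
        else:
            clusters.append([site])
    return clusters

# Three alignment sites over four languages: the two t/d sites are
# compatible with each other and end up in the same cluster.
sites = [("t", "t", "-", "d"), ("t", "-", "t", "d"), ("k", "k", "g", "g")]
print(cluster_sites(sites))
```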

Continue reading “A Primer on Automatic Inference of Sound Correspondence Patterns (3): Extended Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands”

A Primer on Automatic Inference of Sound Correspondence Patterns (2): Initial Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands

Following up on my announcement that I would present in more detail how the algorithms for automatic correspondence pattern detection can be applied, this post introduces the preliminary preparations needed to run a first experiment with aligned data. To avoid having to align a dataset completely from scratch, we make use of already aligned data from the Tableaux Phonétiques des Patois Suisses Romands by Gauchat et al. (1925), which were originally aligned for the study in List (2014) and later published as part of the Benchmark Database of Phonetic Alignments (List and Prokić 2014). In this post, I will show how we can harvest the alignments from this dataset with the help of LingPy, in order to analyze them later with the help of the sound correspondence pattern algorithms.
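
As a foretaste, a minimal sketch of the harvesting step, assuming one of the MSA files of the benchmark database on disk (the file name is hypothetical):

```python
# Read one multiple alignment with LingPy and print it language by
# language; msa.taxa and msa.alignment hold the doculects and the
# aligned segments.
from lingpy import MSA

msa = MSA("french_aller.msa")  # hypothetical file from the benchmark data
for taxon, alignment in zip(msa.taxa, msa.alignment):
    print("{0:20}".format(taxon), "\t".join(alignment))
```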

Continue reading “A Primer on Automatic Inference of Sound Correspondence Patterns (2): Initial Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands”

A Primer on Automatic Inference of Sound Correspondence Patterns (1): Introduction

After about three years of work on the matter, I have managed (with the assistance of many colleagues who helped with testing) to develop a first approach for the automatic inference of sound correspondence patterns, which will soon be published in Computational Linguistics (List 2019). The key task this algorithm solves is to take aligned data as input and to compute explicit sound correspondence patterns from the alignments.
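
To make the notion concrete before the detailed posts: each column of an alignment is an alignment site, and sites recurring across cognate sets constitute a correspondence pattern. A toy example:

```python
# Two aligned cognate sets in three languages; each column (site) is
# extracted by transposing the alignment.
alignments = [
    [["t", "o", "x"], ["t", "o", "x"], ["d", "o", "-"]],  # cognate set 1
    [["t", "a", "l"], ["t", "a", "l"], ["d", "a", "l"]],  # cognate set 2
]
for alignment in alignments:
    for site in zip(*alignment):
        print(site)
# The site ('t', 't', 'd') recurs in both sets: a candidate pattern.
```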

Continue reading “A Primer on Automatic Inference of Sound Correspondence Patterns (1): Introduction”

From Fieldwork to Trees 3: CLDF recipes

In the previous two posts (Part 1, Part 2), I took you from a matrix of word lists from fieldwork to a LingPy-compatible CLDF Wordlist with cognate codes and alignments. We can now feed this dataset into existing tools and recipes for visualizing and analyzing CLDF Wordlists.
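
For instance, once the data is in CLDF, any tool that speaks the format can pick it up; a minimal sketch with pycldf, assuming the metadata file sits at the conventional location:

```python
# Load a CLDF Wordlist from its metadata file and iterate over the forms.
from pycldf import Wordlist

dataset = Wordlist.from_metadata("cldf/cldf-metadata.json")
for row in dataset["FormTable"]:
    print(row["Language_ID"], row["Parameter_ID"], row["Form"])
```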

Continue reading “From Fieldwork to Trees 3: CLDF recipes”

Merging datasets with LingPy and the CLDF curation framework

Imagine you have two different datasets, both containing approximately the same concepts, but with slightly different numbers of columns and — more importantly — potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other spreadsheet software and then to try to fix by hand all the problems that occur in the process.

A better idea is to just use LingPy and our CLDF curation framework.

Continue reading “Merging datasets with LingPy and the CLDF curation framework”
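
For readers who want to see what the core of a safe merge looks like before diving into the post, here is a minimal sketch in plain Python — not the LingPy/CLDF workflow itself — that unions the columns of two TSV wordlists and reassigns fresh identifiers so that clashing IDs cannot do any harm (file names are hypothetical):

```python
import csv

rows, columns = [], []
for path in ["dataset1.tsv", "dataset2.tsv"]:
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            row.pop("ID", None)  # drop the potentially clashing identifier
            rows.append(row)
            columns += [c for c in row if c not in columns]

with open("merged.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, ["ID"] + columns, delimiter="\t", restval="")
    writer.writeheader()
    for new_id, row in enumerate(rows, 1):
        writer.writerow(dict(row, ID=str(new_id)))  # fresh numeric IDs
```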

From Fieldwork to Trees 2: Cognate coding

In a previous post (Kaiping 2018), I described how to convert matrix-shaped word lists given in Excel into the long format that LingPy and other software can work with. My motivation was to provide my colleague Yunus Sulistyono with a good way to compare the lexicon of his Alorese [alor1247] dialects, and to understand the relationship between them. In this post, the data is automatically cognate coded and converted into CLDF.

Continue reading “From Fieldwork to Trees 2: Cognate coding”
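
A minimal sketch of what the automatic cognate coding step boils down to in LingPy — the file name and threshold are illustrative, not the exact settings used for the Alorese data:

```python
# Cluster forms into cognate sets with LingPy's sound-class-based
# method and write the coded wordlist back to disk.
from lingpy import LexStat

lex = LexStat("alorese-wordlist.tsv")
lex.cluster(method="sca", threshold=0.45, ref="cogid")
lex.output("tsv", filename="alorese-cognates", ignore="all", prettify=False)
```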

Semantic promiscuity as a factor of productivity in word formation

This blog post introduces ideas discussed in our project on how to take a closer look at word formation from a semantic (or semasiological) point of view. Since this so-far underinvestigated approach to word formation processes lacks proper terminology, a new term denoting the central research question of concept-based type frequency is introduced and contrasted with related established terminology.

Continue reading “Semantic promiscuity as a factor of productivity in word formation”

From Fieldwork to Trees 1: Data preparation

A colleague of mine has recently returned from his fieldwork, where he collected data on the dialectal variation of the Alorese language of Alor and Pantar in the East Nusa Tenggara province of Indonesia. He collected data on 13 Alorese varieties, including word list data. One obvious step for comparing the dialects is to mark which forms are obviously cognate and then to use a standard tree (or network) construction algorithm to display the shared signal in the data. With standard tools and a bit of Python glue, this is an easy task. A script covering all three steps can be found in my repository on GitHub. In this first part, I will describe how to get an Excel file into a format LingPy can deal with.
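
A minimal sketch of this first step, assuming a sheet with concepts in the first column and one variety per additional column (file name and layout are hypothetical):

```python
# Convert a matrix-shaped Excel word list into the long format LingPy
# expects: one row per form, with a running numeric ID.
from openpyxl import load_workbook

sheet = load_workbook("alorese.xlsx").active
rows = sheet.iter_rows(values_only=True)
doculects = next(rows)[1:]  # header row: first cell is the concept label

with open("alorese.tsv", "w", encoding="utf-8") as f:
    print("ID", "DOCULECT", "CONCEPT", "FORM", sep="\t", file=f)
    idx = 1
    for concept, *forms in rows:
        for doculect, form in zip(doculects, forms):
            if form:  # skip empty cells
                print(idx, doculect, concept, form, sep="\t", file=f)
                idx += 1
```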

Continue reading “From Fieldwork to Trees 1: Data preparation”

Inferring consonant clusters from CLICS data with LingPy

LingPy (List et al. 2017) offers a great number of functions for string manipulation. Although most of these functions are well documented (see lingpy.org for details), and the basic ideas have also been described in my dissertation (List 2014), it seems that not many users are aware of the additional possibilities the library offers.

In the following, I want to illustrate how we can use LingPy to learn something about consonant clusters occurring in the data underlying the CLICS database (List et al. 2018, clics.clld.org). I have illustrated in an earlier post how one can use the CLICS software API to cook one’s own CLICS application. I will thus assume that you know how to install CLICS (following the instructions on our GitHub page) and the data underlying it.
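
As a first taste, a minimal sketch of the basic string operations involved: segment a form given in IPA, convert the segments into consonant/vowel classes, and read off the word-initial consonant cluster. Running this over all forms behind CLICS is then just a loop over the data; the example words here are illustrative.

```python
from lingpy import ipa2tokens, tokens2class

for word in ["planta", "stormur", "knabo"]:
    tokens = ipa2tokens(word)           # segment the IPA string
    classes = tokens2class(tokens, "cv")  # map segments to C/V classes
    cluster = []
    for token, sound_class in zip(tokens, classes):
        if sound_class != "C":
            break
        cluster.append(token)
    print(word, "->", cluster)
```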

Continue reading “Inferring consonant clusters from CLICS data with LingPy”

Enhancing morphological annotation for internal language comparison

In language comparison, there is a long history of using concept-based wordlists to get insights into the degree of similarity between languages, going back at least to Morris Swadesh (Swadesh 1950). For these purposes, words from different languages that share the same meaning are compared, either manually or with computational methods. The latter have the advantages of being both faster and more consistent. However, there are also limits to what computer-based methods can detect for the time being.

One of the biggest problems in this context is that none of the currently available methods for automatic cognate detection can infer partial cognates directly if no information on morpheme boundaries is provided by the user. As a result, if morpheme boundaries are missing and morphological differences are frequent in the data one wants to investigate, automatic cognate detection can be seriously hampered (List, Greenhill, and Gray 2017).
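
For illustration, a minimal sketch of partial cognate detection with LingPy’s Partial class (List, Greenhill, and Gray 2017), assuming a wordlist whose tokens column marks morpheme boundaries with “+”, e.g. “m a u + k w i”; the file name and threshold are illustrative:

```python
# Cluster morphemes (rather than whole words) into partial cognate
# sets, stored in the "cogids" column.
from lingpy.compare.partial import Partial

part = Partial("wordlist-with-morphemes.tsv", segments="tokens")
part.partial_cluster(method="sca", threshold=0.45, ref="cogids")
part.output("tsv", filename="wordlist-partial-cognates", prettify=False)
```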

Continue reading “Enhancing morphological annotation for internal language comparison”

A fast implementation of the Consonant Class Matching method for automatic cognate detection in LingPy

LingPy’s LexStat class for cognate detection confuses those who want to apply it, since the name of the Python class is the same as the name of one of the methods the class provides, although the class can be used for other types of cognate detection as well. I recommend that all users of LingPy read our most recent tutorial on LingPy’s cognate detection methods (List et al. 2018), since the three most important methods are discussed there in detail: the edit distance method for cognate detection, which makes use of the simple, normalized edit distance; the SCA method, based on the Sound-Class-Based Alignment algorithm (List 2014); and the LexStat method (ibid.). Applying these methods in LingPy is fairly simple and described in detail in the aforementioned tutorial.

But LingPy offers an additional method for cognate detection that has the advantage of being extremely fast and thus especially suitable for exploratory analyses of very large datasets. This method is called turchin in LingPy, named after the first author of a paper presenting the method (Turchin et al. 2010), but the method itself, which Turchin et al. call the “Consonant Class Matching” method, goes back to Dolgopolsky (1964) and has long since been implemented as part of the STARLING software package (http://starling.rinet.ru/program.php). Continue reading “A fast implementation of the Consonant Class Matching method for automatic cognate detection in LingPy”
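
To show how simple the core idea is, here is a sketch that re-implements the matching key outside of LingPy: two words are cognate candidates if the Dolgopolsky classes of their first two consonants agree (details of the original method, such as the treatment of tones, are glossed over here). In LingPy itself you would simply call LexStat.cluster(method="turchin").

```python
from lingpy import ipa2tokens, tokens2class

def ccm_key(word):
    # Map segments to Dolgopolsky sound classes and keep the first
    # two consonant classes ("V" is the vowel class).
    classes = tokens2class(ipa2tokens(word), "dolgo")
    consonants = [c for c in classes if c != "V"]
    return tuple(consonants[:2])

print(ccm_key("hand"), ccm_key("hunda"))  # same key -> cognate candidate
```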

Representing Structural Data in CLDF

The Cross-Linguistic Data Formats initiative (CLDF, https://cldf.clld.org, Forkel et al. 2018) helped a lot in preparing the CLICS² database of cross-linguistic colexifications (https://clics.lingpy.org, List et al. 2018), since linking our data to Concepticon (https://concepticon.clld.org, List et al. 2016) and Glottolog (https://glottolog.org, Hammarström et al. 2018) greatly facilitated merging the different datasets into one large comparative dataset.

CLDF, however, is not restricted to lexical data, but can also be successfully used to store structural data, although — due to the nature of structural data — it is much more difficult to compare different datasets.
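
A minimal sketch, following pycldf’s API, of what writing structural data as a CLDF StructureDataset looks like; the identifiers and the feature are made up for illustration:

```python
# Create a StructureDataset in a directory and write one value of one
# typological feature for one language.
from pycldf import StructureDataset

dataset = StructureDataset.in_dir("mydataset")
dataset.write(ValueTable=[{
    "ID": "1",
    "Language_ID": "stan1295",     # a Glottocode
    "Parameter_ID": "word-order",  # hypothetical feature ID
    "Value": "SVO",
}])
```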

Continue reading “Representing Structural Data in CLDF”

Cooking with CLICS

Robert Forkel just published a very nice cookbook example for our CLICS database (List et al. 2018f, http://clics.clld.org), showing how you can manipulate the data further, beyond just installing it and running it to replicate our analyses.

This cookbook explains how the underlying SQLite database is structured and how you can, after installing CLICS and the respective packages, access the data to conduct studies of your own.

As a little example of what you can do with the new CLICS API, let me illustrate in this post how we can use the old CLICS data (underlying version 1.0 by List et al. 2014, http://clics.lingpy.org), available from here, in the new application, specifically in the standalone that we provide.
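
And if you just want to peek at the database before reading the cookbook, here is a minimal sketch that lists the tables of the SQLite file (its path is hypothetical), so you can inspect the structure before writing real queries:

```python
import sqlite3

conn = sqlite3.connect("clics.sqlite")
for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(row[0])  # the actual table names, as documented in the cookbook
```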

Continue reading “Cooking with CLICS”