Biological metaphors and methods in historical linguistics (2): Words and genes

As mentioned in the introduction to this series of blog posts, both species and languages are often presented in a tree model. In biology, a tree is created for each individual gene in order to account for horizontal transmission and other processes in which the history of a gene differs from the general history of its genome. From the sum of these gene trees, the species tree is then derived, a method called gene tree reconciliation (Nakhleh 2013). In linguistics, on the other hand, phylogenetic trees are normally built on cognate sets of related words, from which the most likely tree of the languages is calculated. A closer equivalent, however, would be to describe the history of each individual word or word form, including regular sound change, irregular changes to its form, semantic changes, borrowings, and processes of word formation, and to derive the language tree from the sum of these word histories (Gray, Greenhill, and Ross 2007, 15). Unlike its biological equivalent, this is normally done manually.

Continue reading

Feature-Based Alignment Analyses with LingPy and CLTS (2)

Having seen in last month's post how we can obtain a simple scorer derived from the feature system in CLTS (List et al. 2019), what is missing now, in order to use the scorer for alignment analyses, is an alignment function that can take the scorer as an argument. If one does not have higher ambitions with respect to the alignment function itself, this step can be achieved in a very straightforward way with the help of LingPy's (List et al. 2018) nw_align() or sw_align() methods. As can be seen from the documentation, these methods take as input two sequences (i.e., lists of sounds), along with a scoring function. All we need to do now is create our specific scorer based on the CLTS features and then pass this scoring function, along with our sequences, to the alignment function.
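The following minimal sketch illustrates the idea. The feature sets are made up for illustration; in a real application they would be derived from the CLTS data (e.g., via the pyclts package):

```python
from lingpy.align.pairwise import nw_align

# Toy feature sets standing in for the CLTS feature system; in a real
# application they would be derived from CLTS (e.g., via pyclts).
FEATURES = {
    "t": {"consonant", "alveolar", "stop", "voiceless"},
    "d": {"consonant", "alveolar", "stop", "voiced"},
    "a": {"vowel", "unrounded", "open", "front"},
    "o": {"vowel", "rounded", "mid", "back"},
}

def score(a, b):
    """Score two sounds by the proportion of features they share."""
    fa, fb = FEATURES[a], FEATURES[b]
    return 2 * len(fa & fb) / (len(fa) + len(fb))

seqA, seqB = ["t", "a", "t", "o"], ["d", "a", "o"]

# nw_align accepts a scorer that is queried with pairs of segments, so
# we precompute the scores for all relevant segment pairs in a dict.
scorer = {(a, b): score(a, b) for a in seqA for b in seqB}
almA, almB, sim = nw_align(seqA, seqB, scorer=scorer, gap=-1)
print(almA, almB, sim)
```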

Continue reading

Feature-Based Alignment Analyses with LingPy and CLTS (1)

In the past, people have repeatedly asked me how they could use their own scoring functions in combination with LingPy's alignment algorithms. Their major concern was that the sound-class-based scoring systems we use in LingPy might fail to reflect the true phonetic similarity of sounds, especially since they are not informed by classical ideas about distinctive features in phonology. As described in detail in List (2014), LingPy converts sounds in phonetic transcription to an internal alphabet of fewer than 30 letters, to which the alignment algorithms are then applied in a second stage.
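To get an impression of what this conversion looks like, here is a minimal example using LingPy's tokens2class function with the SCA sound-class model, applied to a toy IPA string:

```python
from lingpy import ipa2tokens, tokens2class

# Segment an IPA string into sound tokens, then convert the tokens to
# LingPy's internal SCA sound classes.
tokens = ipa2tokens("t͡sɔyɡə")
print(tokens)                        # ['t͡s', 'ɔy', 'ɡ', 'ə']
print(tokens2class(tokens, "sca"))   # ['C', 'U', 'K', 'E']
```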

Continue reading

Using the Waterman-Eggert algorithm for sentence alignment

During the 24th International Conference of Historical Linguistics, a colleague asked me whether I knew a good way to align and score sentences available in the form of phonetic transcriptions. While one can compare sequences rather easily by aligning them and calculating, for example, the edit distance between them, the task of sentence alignment can clearly be handled in a somewhat more subtle way.
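LingPy ships an implementation of the Waterman-Eggert algorithm, which returns not only the best local alignment of two sequences but also the sub-optimal ones. A minimal sketch, with toy sequences standing in for tokenized phonetic transcriptions:

```python
from lingpy.align.pairwise import we_align

# Two toy sound sequences standing in for transcribed sentences; real
# input would be tokenized phonetic transcriptions.
seqA = list("katisnotdok")
seqB = list("dokisnotkat")

# The Waterman-Eggert algorithm yields local alignments, including
# sub-optimal ones, each with its own score.
for almA, almB, score in we_align(seqA, seqB):
    print(" ".join(almA), "|", " ".join(almB), "|", score)
```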

Continue reading

Behind the Sino-Tibetan Database of Lexical Cognates: Concept selection

One of the crucial steps in creating a database of lexical cognates is the selection of concepts one wants to use for a given study. Many scholars use the classical Swadesh list of 200 items (Swadesh 1952) for this purpose, or the combined list of 207 items, in which the former has been merged with Swadesh's updated list of 100 items (Swadesh 1955). The combined list is often mistakenly attributed to Swadesh himself, although the first official reference seems to be Comrie (1977). Whichever list one starts from, it is useful to give the selection of concepts some more thought initially.

Continue reading

Rooting MADness

Rooting phylogenetic trees is an important task, not only in evolutionary biology, but also in historical linguistics. So far, however, different rooting methods have not been sufficiently tested on linguistic data. Given that a new method for the automatic rooting of phylogenetic trees has recently been presented in biology, this seemed a good occasion to test in detail how well the new method works in comparison with alternative methods.
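As a point of reference for such a comparison, the following sketch shows one of the classical alternatives, midpoint rooting, using the ete3 library (the Newick tree is made up for illustration):

```python
from ete3 import Tree

# A made-up Newick tree with branch lengths; real input would be a
# phylogeny inferred from linguistic data.
tree = Tree("((A:0.3,B:0.2):0.1,(C:0.4,D:0.5):0.2);")

# Midpoint rooting places the root halfway along the longest
# leaf-to-leaf path in the tree.
tree.set_outgroup(tree.get_midpoint_outgroup())
print(tree)
```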

Continue reading

Biological metaphors and methods in historical linguistics (1): Introduction

Evolutionary biology and historical linguistics share a long history of scientific exchange, reflected not only in the sharing and transfer of metaphors but, more recently, also in the transfer of methods. Charles Darwin already claimed that both species and languages evolve in tree-like patterns, and linguists, like Wilhelm Meyer-Lübke, used terms like “sprachliche[n] Biologie” (‘linguistic biology’) when referring to the history of languages (Meyer-Lübke 1890, x). While the discipline of historical-comparative linguistics allowed biologists to defend evolutionary thought against religious dogma in the late 19th century (Wells 1987, 54), it was biological applications that opened up the possibility of large quantitative studies in linguistics based on computational approaches (Geisler and List 2013, 111).

Continue reading

Behind the Sino-Tibetan Database of Lexical Cognates: Introductory remarks

One of the major efforts behind our recently published paper on the origin and spread of the Sino-Tibetan languages (Sagart et al. 2019) was the creation of a database of lexical cognates, which was used to run the phylogenetic analyses. The creation of this database started about four years ago, when I joined the Centre de Recherches Linguistiques sur l'Asie Orientale in Paris as a research fellow in January 2015, and Guillaume Jacques and Laurent Sagart approached me with the idea of carrying out a phylogenetic study of Sino-Tibetan languages. In December 2017, almost three years after we had started, our database consisted of 180 concepts translated into 50 different languages. Since creating the database was not straightforward from the beginning, and there were quite a few situations in which we realized we had to re-arrange the data or the procedure, I thought it might be useful to share our experience in a series of blog posts, as it might be interesting for scholars who wish to create their own database.

Continue reading

Using pyconcepticon to map concept lists (II)

Mapping a given concept list to Concepticon can be done in a straightforward way, even if automatic mappings need manual refinement. But what can we do when we have to deal with larger datasets, say a dictionary from which we want to extract specific concepts, such as the ones in the classical Swadesh list of 100 items (Swadesh 1955)?
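A minimal sketch of the first step, retrieving the target concepts from Concepticon with the pyconcepticon API (the path to the Concepticon data is an assumption and needs to be adjusted):

```python
from pyconcepticon import Concepticon

# Assumes a local clone of the concepticon-data repository; the path
# needs to be adjusted accordingly.
api = Concepticon("path/to/concepticon-data")

# Retrieve the Swadesh 100-item list by its Concepticon identifier and
# collect the Concepticon IDs and glosses of its concepts.
swadesh = api.conceptlists["Swadesh-1955-100"]
targets = {
    concept.concepticon_id: concept.concepticon_gloss
    for concept in swadesh.concepts.values()
}
print(len(targets), "concepts to extract from the dictionary")
```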

Continue reading

Using pyconcepticon to map concept lists

A major problem for data reuse in computer-assisted historical linguistics, especially when employing data collected with no computational workflows in mind, is linking datasets in terms of the meanings of the words (or, technically, “forms”) they carry. Just as linking languages across different datasets is not as straightforward as one might naively assume, demanding a complex reference catalog such as Glottolog, linking the concepts used in a wordlist (a “concept list”) to our Concepticon project may well be the most labor-intensive task in preparing a dataset for cross-linguistic studies.
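A minimal sketch of what such a mapping looks like in code, assuming pyconcepticon's lookup helper for fuzzy gloss matching (the path to the Concepticon data is an assumption and needs to be adjusted):

```python
from pyconcepticon import Concepticon

# Assumes a local clone of the concepticon-data repository; the path
# needs to be adjusted accordingly.
api = Concepticon("path/to/concepticon-data")

# Fuzzy-match a handful of English glosses against Concepticon's
# concept sets; each gloss yields a set of candidate matches that
# still needs manual refinement.
glosses = ["hand", "fire", "to sleep"]
for gloss, matches in zip(glosses, api.lookup(glosses, language="en")):
    print(gloss, "->", matches)
```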

Continue reading

A Primer on Automatic Inference of Sound Correspondence Patterns (3): Extended Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands

Having illustrated how a quick correspondence pattern analysis can be done with the help of readily formatted data and the EDICTOR tool alone, it is now time to show how we can use the LingRex package to carry out a full-fledged correspondence pattern analysis. While EDICTOR uses a simple algorithm based on sorting the patterns, the Python algorithm for correspondence pattern detection, described in detail in List (2019), uses a greedy approach inspired by the Welsh-Powell algorithm for graph coloring (Welsh and Powell 1967) to partition all alignment sites in the data into clusters that are compatible with each other.
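A sketch of the typical workflow with LingRex's CoPaR class is given below; the file name and the cogid column are assumptions, standing in for a LingPy/EDICTOR-style wordlist with cognate sets and alignments:

```python
from lingrex.copar import CoPaR

# "wordlist.tsv" stands for a LingPy/EDICTOR wordlist with alignments
# and a "cogid" column holding the cognate sets.
cop = CoPaR("wordlist.tsv", ref="cogid")

cop.get_sites()         # collect the alignment sites from the data
cop.cluster_sites()     # greedily cluster compatible sites into patterns
cop.sites_to_pattern()  # assign each site to its correspondence pattern
cop.write_patterns("patterns.tsv")  # export the inferred patterns
```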

Continue reading

A Primer on Automatic Inference of Sound Correspondence Patterns (2): Initial Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands

Following up on my announcement to present in more detail how the algorithms for automatic correspondence pattern detection can be applied, this post introduces the preliminary preparations needed to run a first experiment with aligned data. To avoid having to align a dataset completely from scratch, we make use of already aligned data from the Tableaux Phonétiques des Patois Suisses Romands by Gauchat et al. (1925), which were originally aligned for the study in List (2014) and later published as part of the Benchmark Database of Phonetic Alignments (List and Prokić 2014). In this post, I will show how we can harvest the alignments from this dataset with the help of LingPy, in order to analyze them later with the sound correspondence pattern algorithms.
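A minimal sketch of the harvesting step, assuming the alignments are available as MSA files in a local folder (the path is an assumption):

```python
from glob import glob

from lingpy import MSA

# Harvest aligned data from a folder of MSA files; "msa/*.msa" stands
# for a local copy of the benchmark database's dialect alignments.
for filename in sorted(glob("msa/*.msa")):
    msa = MSA(filename)
    print(filename, msa.seq_id)  # the concept behind this alignment
    for taxon, alm in zip(msa.taxa, msa.alignment):
        print("{0:20}".format(taxon), "\t".join(alm))
```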

Continue reading

A Primer on Automatic Inference of Sound Correspondence Patterns (1): Introduction

After about three years of work on the matter, I have managed (with the help of many colleagues who assisted in testing) to develop a first approach for the automatic inference of sound correspondence patterns, which will soon be published in Computational Linguistics (List 2019). The key task this algorithm solves is to take aligned data as input and to compute explicit sound correspondence patterns from the alignments.

Continue reading

From Fieldwork to Trees 3: CLDF recipes

In the previous two posts (Part 1, Part 2), I took you from a matrix of word lists from fieldwork to a LingPy-compatible CLDF Wordlist with cognate codes and alignments. We can now feed this dataset into existing tools and recipes for visualizing and analyzing CLDF Wordlists.
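As a first taste, here is a minimal sketch of loading such a dataset with the pycldf package (the metadata path is an assumption and depends on where the CLDF dataset was written):

```python
from pycldf import Wordlist

# Load the CLDF Wordlist via its metadata file; the path is an
# assumption and depends on where the dataset was written.
dataset = Wordlist.from_metadata("cldf/Wordlist-metadata.json")

# Iterate over the forms, printing language, concept, and form
for row in dataset["FormTable"]:
    print(row["Language_ID"], row["Parameter_ID"], row["Form"])
```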

Continue reading

Merging datasets with LingPy and the CLDF curation framework

Imagine you have two different datasets, both containing approximately the same concepts, but with slightly different numbers of columns and, more importantly, potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other kind of spreadsheet software and then try to fix manually all the problems that might occur in the process.

A better idea is to just use LingPy and our CLDF curation framework.
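The core idea can be sketched in plain Python: read both files, re-assign unique identifiers, and write out the union of the columns (file names and the ID column are assumptions for illustration):

```python
import csv

# Minimal sketch: read two wordlist TSVs and re-assign unique
# identifiers, so that the merged file has no clashing IDs. File
# names and column inventories are made up for illustration.
merged, new_id = [], 1
for filename in ["dataset1.tsv", "dataset2.tsv"]:
    with open(filename, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            row["ID"] = str(new_id)  # fresh ID instead of the original
            merged.append(row)
            new_id += 1

# Write the union of all columns, so differing headers are accommodated
fields = sorted({key for row in merged for key in row})
with open("merged.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fields, delimiter="\t")
    writer.writeheader()
    writer.writerows(merged)
```

Continue reading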