Recently, I was asked by a colleague how one could query only those datasets in the Lexibank repository which come with cognate sets annotated by humans. While I first thought this could be done in a very straightforward way, I found out, when trying it myself, that the code still requires some workarounds. As a result, I thought it would be best to share the solution I came up with in a blog post, in order to make it accessible to colleagues who might be interested in inspecting and using the data provided by the Lexibank repository in more detail.
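As a rough impression of what such a query could look like, here is a minimal sketch in Python, assuming local clones of the individual Lexibank datasets and using pycldf to read their CLDF metadata. The directory layout and the idea of filtering on the Cognate_Detection_Method column are assumptions for illustration only, not necessarily the workaround described in the post itself.

```python
from pathlib import Path
from pycldf import Dataset

# Assumed layout: one cloned Lexibank dataset per sub-folder, each
# providing a cldf/cldf-metadata.json file (the path is hypothetical).
LEXIBANK_DIR = Path("lexibank")

for metadata in sorted(LEXIBANK_DIR.glob("*/cldf/cldf-metadata.json")):
    ds = Dataset.from_metadata(metadata)
    try:
        cognates = list(ds.iter_rows("CognateTable"))
    except (KeyError, ValueError):
        continue  # dataset ships no cognate judgements at all
    # Assumption: expert judgements are flagged in Cognate_Detection_Method;
    # datasets may use different labels, so treat this as a heuristic.
    methods = {row.get("Cognate_Detection_Method") for row in cognates}
    if any(m and "expert" in str(m).lower() for m in methods):
        print(metadata.parent.parent.name, len(cognates))
```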
Category Archives: Primer
Blog Post Style Guide
The Computer-Assisted Language Comparison in Practice blog has grown steadily over the past few years. Each year, our colleagues have contributed a number of blog posts on various topics. With the introduction of a PDF copy for every blog post and the publication of an annual volume, these blog posts can be treated like publications in a non-peer-reviewed journal. We hope that our blog will continue to be a valuable publication venue for our colleagues, not only those working in our research group and department, but also outside collaborators and scholars who are generally interested in the topic and want to share their ideas. To maintain a coherent structure for all posts, this blog post provides a style guide for all future contributions.
Comparing NoRaRe data sets: Calculation of correlations and creation of plots in R
In a recent blog post, I introduced the Database of Cross-Linguistic Norms, Ratings, and Relations for Words and Concepts (NoRaRe) and demonstrated how to add new data sets (Tjuka 2021). The database currently includes 65 unique word and concept properties based on 98 different data sets across 40 languages (NoRaRe v0.2, Tjuka et al. 2021a) and can easily be expanded further. But what can we do with the data? The article presenting the NoRaRe database already included three case studies that illustrate the application of the database (Tjuka et al. 2021b). This blog post, therefore, provides a tutorial on how to compare NoRaRe data sets in R by conducting a new case study that correlates ratings on arousal in English and Dutch.
Adding data sets to NoRaRe: A guide for beginners
There has been much discussion about the reproducibility of research and how it can be improved (Munafò et al. 2017). One antidote to the “reproducibility crisis” seems obvious: data sharing. However, one additional point that is not mentioned as often is the standardization of shared data to make them comparable. Especially for cross-linguistic studies, this is an important step that needs to take place so that we can conduct reproducible studies in different languages. The NoRaRe database (Tjuka et al. 2021a) is a resource that provides standardized cross-linguistic data on norms, ratings, and relations published in psychology and linguistics. In this blog post, I describe a beginner’s guide to adding data sets to NoRaRe.
Adding concept lists to Concepticon: A guide for beginners
Scientific data should be openly accessible. This includes databases which are designed for collaborative work. However, in most cases, these databases are only extended by a team of experts. If a database is truly collaborative, the workflows need to be accessible for everybody. The Concepticon database (List et al., 2019) invites contributors to include their own data sets. This requires a transparent description of the contributing process.
Behind the Sino-Tibetan Database of Lexical Cognates: Concept selection
One of the crucial steps in creating a database of lexical cognates is the selection of concepts one wants to use for a given study. Many scholars use the classical Swadesh list of 200 items (Swadesh 1952) for this purpose, or the combined list of 207 items, in which the former has been merged with Swadesh’s updated list of 100 items (Swadesh 1955) and which is often mistakenly attributed to Swadesh himself, although the first official reference seems to be Comrie (1977). Nevertheless, it is useful to give the selection of concepts some more thought initially.
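As a small illustration of how such concept lists can be inspected programmatically, the following sketch uses the pyconcepticon API with a local clone of the concepticon-data repository (the path is an assumption) to retrieve the two Swadesh lists under their Concepticon identifiers and to check the size of the combined list.

```python
from pyconcepticon import Concepticon

# Assumes a local clone of the concepticon-data repository; adapt the path.
api = Concepticon("path/to/concepticon-data")

swadesh_200 = api.conceptlists["Swadesh-1952-200"]
swadesh_100 = api.conceptlists["Swadesh-1955-100"]

# Map both lists to Concepticon glosses to make them comparable,
# skipping concepts without a Concepticon mapping (if any).
glosses_200 = {c.concepticon_gloss for c in swadesh_200.concepts.values()
               if c.concepticon_id}
glosses_100 = {c.concepticon_gloss for c in swadesh_100.concepts.values()
               if c.concepticon_id}

print(len(glosses_200), len(glosses_100))
print(len(glosses_200 | glosses_100))  # size of the combined list
```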
Behind the Sino-Tibetan Database of Lexical Cognates: Introductory remarks
One of the major efforts behind our recently published paper on the origin and spread of the Sino-Tibetan languages (Sagart et al. 2019) was the creation of a database of lexical cognates which was used to run the phylogenetic analyses. The creation of this database started about four years ago, when I joined the Centre de Recherches Linguistiques sur l’Asie Orientale in Paris as a research fellow in January 2015, and Guillaume Jacques and Laurent Sagart approached me with the idea of making a phylogenetic study of Sino-Tibetan languages. In December 2017, almost three years after having started, our database consisted of 180 concepts translated into 50 different languages. Since creating the database was not entirely straightforward from the beginning, with quite a few situations in which we realized we had to re-arrange the data or the procedure, I thought it might be useful to share our experience in a series of blog posts, as it might be interesting for scholars who wish to create their own database.
A Primer on Automatic Inference of Sound Correspondence Patterns (3): Extended Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands
Having illustrated how a quick correspondence pattern analysis can be done with the help of readily formatted data and the EDICTOR tool alone, it is now time to show how we can use the LingRex package in order to carry out a full-fledged correspondence pattern analysis. While EDICTOR uses a simple algorithm that is based on sorting the patterns, the Python algorithm for correspondence pattern detection, which is described in detail in List (2019), uses a greedy approach inspired by the Welsh-Powell algorithm for graph coloring (Welsh and Powell 1967) in order to cluster all alignment sites in the data into groups of mutually compatible sites.
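To convey the basic idea, here is a deliberately simplified toy sketch of such a greedy compatibility clustering; it is not the LingRex implementation. Alignment sites (one sound per language, None for a missing reflex) are processed in order of decreasing coverage, roughly in the spirit of Welsh-Powell, and each site is assigned to the first cluster whose members it does not contradict.

```python
def compatible(site_a, site_b):
    """Two alignment sites are compatible if they never disagree on a
    language for which both have a reflex (None marks a missing reflex)."""
    return all(a == b for a, b in zip(site_a, site_b)
               if a is not None and b is not None)

def greedy_patterns(sites):
    """Greedily cluster alignment sites into groups of mutually
    compatible sites, seeding clusters with densely attested sites."""
    clusters = []
    for site in sorted(sites, key=lambda s: sum(x is not None for x in s),
                       reverse=True):
        for cluster in clusters:
            if all(compatible(site, other) for other in cluster):
                cluster.append(site)
                break
        else:
            clusters.append([site])
    return clusters

# Toy data: each site lists one sound per language (A, B, C); None = missing.
sites = [
    ("p", "p", "f"),
    ("p", None, "f"),
    ("t", "t", None),
    ("p", "p", None),
    ("t", "t", "θ"),
]
for i, cluster in enumerate(greedy_patterns(sites), 1):
    print("pattern", i, cluster)
```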
A Primer on Automatic Inference of Sound Correspondence Patterns (2): Initial Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands
Following up on my announcement to present in more detail how the algorithms for automatic correspondence pattern detection can be applied, this post introduces the preliminary preparations needed to run a first experiment with aligned data. To avoid having to align a dataset completely from scratch, we make use of already aligned data from the Tableaux Phonétiques des Patois Suisses Romands by Gauchat et al. (1925), which were originally aligned for the study in List (2014) and later published as part of the Benchmark Database of Phonetic Alignments (List and Prokić 2014). In this post, I will introduce how we can harvest the alignments from this dataset with the help of LingPy, and later analyze them with the help of the sound correspondence pattern algorithms.
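As a rough impression of what harvesting alignments with LingPy can look like, the following sketch loads a wordlist with pre-computed alignments into an Alignments object and iterates over the alignment sites of one cognate set. The file name and column names are assumptions here; the actual preparation of the data is described in the post itself.

```python
from lingpy import Alignments

# Hypothetical input: a wordlist in LingPy's tab-separated format with a
# "cogid" column for cognate sets and an "alignment" column with aligned
# segments, e.g. exported from the Benchmark Database of Phonetic Alignments.
alms = Alignments("tppsr.tsv", ref="cogid")

# Iterate over the multiple alignments, one per cognate set.
for cogid, msa in alms.msa["cogid"].items():
    languages = msa["taxa"]
    matrix = msa["alignment"]
    # Each column of the alignment matrix is one alignment site.
    for site in zip(*matrix):
        print(cogid, languages, site)
    break  # only show the first cognate set
```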
A Primer on Automatic Inference of Sound Correspondence Patterns (1): Introduction
After about three years of work on the matter, I have managed (with the help of many colleagues who assisted in testing) to develop a first approach for the automatic inference of sound correspondence patterns, which will soon be published in Computational Linguistics (List 2019). The key task which this algorithm solves is to take aligned data as input and to compute explicit sound correspondence patterns from the alignments.