For a long time, I have been wondering about the origin of the German wordlist in the Intercontinental Dictionary Series (Key and Comrie 2016). Not only are many of the words given as translations for the large concept list of 1310 items very archaic variants that are no longer in use; we also find many annoying problems, such as unusual spellings (consistently avoiding the letter “ß”, which is still in use, even if some people think differently), wrong translations, and, of course, no phonetic transcriptions. Already during my doctoral studies, I therefore started to work on a refined list, but I soon had so many other things on my plate that I never really managed to finish this work. Recently, however, I realized that the work I had done years ago was far more complete than I had thought, and that I had even added information on potential borrowings, extracted from Kluge’s (2002) etymological dictionary. Given that this list can come in handy in various ways, I decided to finish the work and officially publish a very first version of the list.
How to write an initial review for a journal in linguistics? (How to do X in linguistics 1)
Writing reviews for a journal is one of those things which most scientists never actively learn. For laypeople, this may be surprising, given how often the scientific method, with its rigorous peer review procedure, is mentioned in the news nowadays. How can it be, one may ask, that this procedure, which is usually presented as the core principle of scientific reasoning, is never actively taught? If review by experts is the core of the scientific method and decides about the acceptance of an article, how can it be that scientists never take a course on article reviewing, and how can it be that reviewers are (as I have previously discussed in a German blog post) themselves never reviewed or graded?
How to do X in linguistics? A new series of blog posts
I cannot remember when I decided to become a linguist. I cannot even remember when I first called myself a linguist (as opposed to a student, a Sinologist, or a scientist). But I can remember when I wrote my first review for a linguistics journal, and I also remember that it came close to a catastrophe: I maintained a very hostile tone, I didn’t like the paper, I thought the authors were badly informed, and I didn’t want to allow the paper to be published.
Concept Similarity in STARLING
STARLING is a software package, originally created by Sergej A. Starostin, which is designed for historical linguists who want to build their own etymological dictionaries. It is not only a database system that allows its users to set up a very straightforward relational database structure, but also a package full of surprises, since it contains many methods that are supposed to automate specific tasks in historical linguistics. These range from phylogenetic tree reconstruction via the preliminary identification of sound correspondences to the comparison of elicitation glosses with respect to their semantic similarity. While phylogenetic reconstruction and sound correspondences are now handled quite successfully by alternative software packages, I thought it would be interesting to discuss the routine for assessing concept similarity in more detail, since it offers interesting possibilities for those who practice historical language comparison.
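To give a first impression of what is at stake, here is a toy sketch. It is emphatically not Starostin’s actual routine, just a minimal illustration of how two elicitation glosses could be normalized and scored for overlap (the function names and the Dice measure are my own choices):

```python
# A toy gloss comparison (NOT STARLING's actual routine): normalize two
# elicitation glosses and score their token overlap with the Dice measure.
import re

def normalize(gloss):
    # lower-case, strip bracketed comments, and split into word tokens
    gloss = re.sub(r"\(.*?\)", "", gloss.lower())
    return set(re.findall(r"\w+", gloss))

def gloss_similarity(gloss_a, gloss_b):
    tokens_a, tokens_b = normalize(gloss_a), normalize(gloss_b)
    if not tokens_a or not tokens_b:
        return 0.0
    return 2 * len(tokens_a & tokens_b) / (len(tokens_a) + len(tokens_b))

print(gloss_similarity("hand (of a person)", "hand"))  # 1.0
print(gloss_similarity("to go", "to walk (go)"))       # 0.5
```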
RhyAnT: A web-based tool for interactive rhyme annotation
In times when working from home is an obligation rather than an option, I have finally found the time to create a first draft version of a web-based tool for interactive rhyme annotation. The tool is written in plain JavaScript, without any additional libraries, and supports the inline rhyme annotation format which we proposed in an earlier study. It allows for an efficient and safe annotation of poems for their rhyme structure and will hopefully help us to assemble larger samples of rhyme patterns across genres, languages, times, and cultures.
Making an annotated concept list from the data in CLICS
The CLICS database in its current format makes direct use of the data assembled by the Concepticon project in order to aggregate lexical data from different sources. At the same time, the CLICS database itself can be seen as an interesting concept list, providing information on concept polysemy and semantic similarity.
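As a rough illustration of how such a concept list could be harvested, consider the following sketch. It assumes that one of the CLDF datasets aggregated by CLICS is available locally (the metadata path is hypothetical) and that the pycldf package is installed:

```python
# A minimal sketch for extracting an annotated concept list from a CLDF
# dataset of the kind aggregated in CLICS (the path is hypothetical).
from collections import Counter
from pycldf import Dataset

ds = Dataset.from_metadata("dataset/cldf/cldf-metadata.json")

# count how many lexical forms are attested per concept
forms_per_concept = Counter(
    row["parameterReference"]
    for row in ds.iter_rows("FormTable", "parameterReference"))

# list each concept with its Concepticon link and its attestation count
for row in ds.iter_rows("ParameterTable", "id", "name", "concepticonReference"):
    print(row["name"], row["concepticonReference"], forms_per_concept[row["id"]])
```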
Automated Mapping of Metadata to Concepticon
While the core of the Concepticon project (https://concepticon.clld.org, List et al. 2019) consists of the numerous concept lists which are constantly being added by a growing list of contributors, we have, from the beginning of the project with its first version (List et al. 2016), also tried to collect various kinds of concept metadata for all our concept sets.
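For readers who wonder what the automated part looks like in practice, here is a minimal sketch using the lookup method of the pyconcepticon API. The path to the data repository is hypothetical, and the exact structure of the returned match tuples should be checked against the documentation:

```python
# A minimal sketch of automated gloss-to-concept-set mapping with
# pyconcepticon (the repository path is hypothetical).
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")
for matches in api.lookup(["hand", "tree"], language="en"):
    for match in matches:
        # each match links an input gloss to a Concepticon concept set,
        # together with a similarity rank; see the docs for the format
        print(match)
```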
Feature-Based Alignment Analyses with LingPy and CLTS (2)
Having seen how we can obtain a simple scorer derived from the feature system in CLTS (List et al. 2019) in last month’s post, what is missing now, in order to use the scorer for alignment analyses, is an alignment function which can take the scorer as an argument. If one does not have higher ambitions with respect to the alignment function itself, this step can be achieved in a very straightforward way with the help of LingPy’s (List et al. 2018) nw_align() or sw_align() methods. As can be seen from the documentation, these methods take as input two sequences (i.e., lists of sounds), along with a scoring function. Obviously, all we need to do now is to create our specific scorer based on the CLTS features, and then pass this scoring function, along with our sequences, to the function.
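To make this concrete, here is a minimal sketch with a toy feature system standing in for the CLTS features, assuming that nw_align accepts the scorer as a dictionary of pairwise segment scores:

```python
# A minimal sketch: derive a pairwise scorer from toy feature sets (a
# stand-in for the CLTS features) and pass it to LingPy's nw_align.
from itertools import product
from lingpy.align.pairwise import nw_align

features = {  # toy feature vectors, NOT the real CLTS features
    "t": {"voiceless", "alveolar", "stop"},
    "d": {"voiced", "alveolar", "stop"},
    "o": {"rounded", "back", "mid", "vowel"},
    "u": {"rounded", "back", "close", "vowel"},
}

def score(a, b):
    # proportion of shared features, scaled to the range [-1, 1]
    shared = len(features[a] & features[b]) / len(features[a] | features[b])
    return 2 * shared - 1

scorer = {(a, b): score(a, b) for a, b in product(features, repeat=2)}
alm_a, alm_b, sim = nw_align(list("tot"), list("dud"), scorer=scorer)
print(alm_a, alm_b, sim)
```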
Feature-Based Alignment Analyses with LingPy and CLTS (1)
In the past, people have repeatedly asked me how they could use their own scoring functions in combination with LingPy’s alignment algorithms. Their major concern was that the sound-class-based scoring systems we use in LingPy might fail to reflect the true phonetic similarity of sounds, especially since they are not informed by classical ideas about distinctive features in phonology. As described in detail in List (2014), LingPy converts sounds in phonetic transcription to an internal alphabet of fewer than 30 letters, to which the alignment algorithms are then applied in a second stage.
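This conversion step can be inspected directly; a minimal sketch (the transcription is made up):

```python
# A minimal sketch of LingPy's sound-class conversion: a segmented
# transcription is mapped to the internal SCA alphabet before alignment.
from lingpy.sequence.sound_classes import tokens2class

tokens = "m a t ə r".split()
print(tokens)
print(tokens2class(tokens, "sca"))  # one class symbol per segment
```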
Using the Waterman-Eggert algorithm for sentence alignment
During the 24th International Conference of Historical Linguistics, a colleague asked me whether I knew a good way to align and score sentences available in the form of phonetic transcriptions. While one can roughly compare the difference between sequences rather easily by aligning them and calculating, for example, the edit distance between them, the task of sentence alignment can clearly be handled in a somewhat more subtle way.
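A minimal sketch of both approaches, assuming LingPy’s pairwise module and made-up transcriptions; we_align is LingPy’s implementation of the Waterman-Eggert algorithm, which returns several local alignments instead of only the single best one:

```python
# Compare two transcribed "sentences": globally, via the normalized edit
# distance, and locally, via the Waterman-Eggert algorithm.
from lingpy.align.pairwise import edit_dist, we_align

sent_a = "ð ə s ʌ n ʃ aɪ n z".split()
sent_b = "d i z ɔ n ə ʃ aɪ n t".split()

print(edit_dist(sent_a, sent_b, normalized=True))

for alm_a, alm_b, score in we_align(sent_a, sent_b):
    print(" ".join(alm_a), "/", " ".join(alm_b), score)
```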
Behind the Sino-Tibetan Database of Lexical Cognates: Concept selection
One of the crucial steps in creating a database of lexical cognates is the selection of concepts one wants to use for a given study. Many scholars use the classical Swadesh list of 200 items (Swadesh 1952) for this purpose, or the combined list of 207 items, in which the former has been merged with Swadesh’s updated list of 100 items (Swadesh 1955); the combined list is often mistakenly attributed to Swadesh himself, although the first official reference seems to be Comrie (1977). Nevertheless, it is useful to give the selection of concepts some more thought initially.
Behind the Sino-Tibetan Database of Lexical Cognates: Introductory remarks
One of the major efforts behind our recently published paper on the origin and spread of the Sino-Tibetan languages (Sagart et al. 2019) was the creation of a database of lexical cognates which was used to run the phylogenetic analyses. The creation of this database started about four years ago, when I joined the Centre de Recherches Linguistiques sur l’Asie Orientale in Paris as a research fellow in January 2015, and Guillaume Jacques and Laurent Sagart approached me with the idea of carrying out a phylogenetic study of Sino-Tibetan languages. In December 2017, almost three years after we had started, our database consisted of 180 concepts translated into 50 different languages. Since creating the database was not entirely straightforward from the beginning, with quite a few situations in which we realized we had to re-arrange the data or the procedure, I thought it might be useful to share our experience in a series of blog posts, as it might be interesting for scholars who wish to create their own database.
A Primer on Automatic Inference of Sound Correspondence Patterns (3): Extended Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands
Having illustrated how a quick correspondence pattern analysis can be carried out with the help of readily formatted data and the EDICTOR tool alone, it is now time to show how we can use the LingRex package in order to carry out a full-fledged correspondence pattern analysis. While EDICTOR uses a simple algorithm based on sorting the patterns, the Python algorithm for correspondence pattern detection, which is described in detail in List (2019), uses a greedy approach inspired by the Welsh-Powell algorithm for graph coloring (Welsh and Powell 1967) in order to cluster all alignment sites in the data into clusters which are compatible with each other.
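For orientation, the basic workflow looks roughly like the following sketch; the file name and the column names are hypothetical, and the method names follow the LingRex documentation as I understand it:

```python
# A minimal sketch of a correspondence pattern analysis with the CoPaR
# class from the lingrex package (the input file is hypothetical).
from lingrex.copar import CoPaR

cop = CoPaR("alignments.tsv", ref="cogid")
cop.get_sites()          # collect the individual alignment sites
cop.cluster_sites()      # greedily cluster compatible sites
cop.sites_to_pattern()   # turn the clusters into explicit patterns
cop.output("tsv", filename="patterns")  # write the results to disk
```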
A Primer on Automatic Inference of Sound Correspondence Patterns (2): Initial Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands
Following up on my announcement to present in more detail how the algorithms for automatic correspondence pattern detection can be applied, this post introduces the preliminary preparations needed to run a first experiment with aligned data. In order to avoid having to align a dataset completely from scratch, we make use of already aligned data from the Tableaux Phonétiques des Patois Suisses Romands by Gauchat et al. (1925), which were originally aligned for the study in List (2014) and later published as part of the Benchmark Database of Phonetic Alignments (List and Prokić 2014). In this post, I will introduce how we can harvest the alignments from this dataset with the help of LingPy, in order to analyze them later with the sound correspondence pattern algorithms.
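The harvesting step can be sketched as follows, assuming the aligned data are available as a LingPy wordlist file (the file name is hypothetical):

```python
# A minimal sketch for harvesting existing alignments with LingPy.
from lingpy import Alignments

alms = Alignments("patois.tsv", ref="cogid")
for cogid, msa in alms.msa["cogid"].items():
    print("Cognate set", cogid)
    for taxon, alignment in zip(msa["taxa"], msa["alignment"]):
        print("{0:20}".format(taxon), " ".join(alignment))
```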
A Primer on Automatic Inference of Sound Correspondence Patterns (1): Introduction
After about three years of work on the matter, I have managed (with the help of many colleagues who assisted in testing) to develop a first approach for the automatic inference of sound correspondence patterns, which will soon be published in Computational Linguistics (List 2019). The key task which this algorithm solves is to take aligned data as input and to compute explicit sound correspondence patterns from the alignments.
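To see what this task amounts to, consider a toy illustration (this is not the published algorithm): alignment sites are columns across languages, with missing reflexes marked as None, and two sites can only belong to the same correspondence pattern if they agree wherever both are attested:

```python
# A toy illustration of pattern compatibility between alignment sites.
def compatible(site_a, site_b):
    return all(a == b for a, b in zip(site_a, site_b)
               if a is not None and b is not None)

# columns for (German, English, Dutch), e.g. from cognate sets such as
# Dorf/thorp/dorp and Ding/thing/ding
sites = [
    ("d", "θ", "d"),
    ("d", None, "d"),
    ("t", "t", None),
]
print(compatible(sites[0], sites[1]))  # True: the sites can share a pattern
print(compatible(sites[0], sites[2]))  # False: conflicting reflexes
```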