The CLICS database in its current format makes direct use of the data assembled by the Concepticon project in order to aggregate lexical data from different sources. At the same time, the CLICS database itself can be seen as an interesting conceptlist, providing information on concept polysemy and semantic similarity.
While the core of the Concepticon project (https://concepticon.clld.org, List et al. 2019) consists of the numerous concept lists that are constantly being added by a growing list of contributors, we have tried from the beginning of the project, starting with its first version (List et al. 2016), to collect various kinds of concept metadata for all our concept sets.
Having seen in last month's post how we can obtain a simple scorer derived from the feature system in CLTS (List et al. 2019), what is missing now, in order to use the scorer for alignment analyses, is an alignment function that can take the scorer as an argument. If one does not have higher ambitions with respect to the alignment function itself, this step can be achieved in a very straightforward way with the help of LingPy's (List et al. 2018) sw_align() method. As can be seen from the documentation, this method takes as input two sequences (i.e., lists of sounds), along with a scoring function. All we need to do now is create our specific scorer based on the CLTS features and then pass this scoring function, along with our sequences, to the method.
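As a minimal sketch of how this can look, the following snippet passes a toy scorer to sw_align(). The scorer here is a plain dictionary keyed by sound pairs, which is an assumption about the expected format; it stands in for a real CLTS-derived scorer and rewards identity only:

```python
from lingpy.align.pairwise import sw_align

# toy scorer: identical sounds score +1, all other pairs -1
# (a real scorer would be derived from the CLTS feature system)
sounds = "t d a e n m".split()
scorer = {(a, b): 1 if a == b else -1 for a in sounds for b in sounds}

seqA = "t a n t a".split()
seqB = "t e n d a".split()

# sw_align carries out a local (Smith-Waterman) alignment and returns
# the aligned sequences together with a similarity score
result = sw_align(seqA, seqB, scorer)
print(result)
```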
In the past, people have repeatedly asked me how they could use their own scoring functions in combination with LingPy's alignment algorithms. Their major concern was that the sound-class-based scoring systems we use in LingPy might fail to reflect the true phonetic similarity of sounds, not least because they are not informed by classical ideas about distinctive features in phonology. As described in detail in List (2014), LingPy converts sounds in phonetic transcription to an internal alphabet of fewer than 30 symbols, to which the alignment algorithms are then applied in a second stage.
During the 24th International Conference of Historical Linguistics, a colleague asked me whether I knew a good way to align and score sentences available in the form of phonetic transcriptions. While one can roughly compare the difference between sequences rather easily by aligning them and calculating, for example, the edit distance between them, the task of sentence alignment could clearly be handled in a somewhat more subtle way.
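For a rough comparison of this kind, a normalized edit distance can be computed directly with LingPy. The following sketch treats each transcribed sentence simply as a list of segments; the example transcriptions are made up for illustration:

```python
from lingpy import edit_dist

# two made-up phonetic transcriptions, segmented character by character
sentA = list("ðəkætsslip")
sentB = list("ðəkætslips")

# normalized edit distance: 0 means identical, 1 means maximally different
print(edit_dist(sentA, sentB, normalized=True))
```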
One of the crucial steps in creating a database of lexical cognates is the selection of concepts one wants to use for a given study. While many scholars use the classical Swadesh list of 200 items (Swadesh 1952) for this purpose, or the combined list of 207 items, in which the former has been merged with Swadesh's updated list of 100 items (Swadesh 1955), and which is often mistakenly attributed to Swadesh himself (the first official reference seems to be Comrie 1977), it is useful to give the selection of concepts some more thought initially.
One of the major efforts behind our recently published paper on the origin and spread of the Sino-Tibetan languages (Sagart et al. 2019) was the creation of a database of lexical cognates which was used to run the phylogenetic analyses. The creation of this database started about four years ago, when I joined the Centre de Recherches Linguistiques sur l'Asie Orientale in Paris as a research fellow in January 2015, and Guillaume Jacques and Laurent Sagart approached me with the idea of carrying out a phylogenetic study of Sino-Tibetan languages. In December 2017, almost three years after we had started, our database consisted of 180 concepts translated into 50 different languages. Since creating the database was not straightforward from the beginning, and there were quite a few situations in which we realized we had to re-arrange the data or the procedure, I thought it might be useful to share our experience in a series of blog posts, as it might be interesting for scholars who wish to create their own database.
Having illustrated how a quick correspondence pattern analysis can be done with the help of readily formatted data and the EDICTOR tool alone, it is now time to show how we can use the LingRex package to carry out a full-fledged correspondence pattern analysis. While EDICTOR uses a simple algorithm based on sorting the patterns, the Python algorithm for correspondence pattern detection, described in detail in List (2019), uses a greedy approach inspired by the Welsh-Powell algorithm for graph coloring (Welsh and Powell 1967) in order to partition all alignment sites in the data into clusters of mutually compatible sites.
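To give an impression of the underlying idea, here is a minimal, self-contained sketch of Welsh-Powell-style greedy coloring (not the LingRex implementation itself): if nodes are alignment sites and edges connect incompatible sites, every color class ends up being a set of pairwise compatible sites.

```python
# Welsh-Powell greedy coloring: visit nodes in order of decreasing degree
# and give each node the smallest color not used by its neighbors.
def welsh_powell(nodes, edges):
    neighbors = {n: set() for n in nodes}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    coloring = {}
    for node in sorted(nodes, key=lambda n: len(neighbors[n]), reverse=True):
        used = {coloring[m] for m in neighbors[node] if m in coloring}
        color = 0
        while color in used:
            color += 1
        coloring[node] = color
    return coloring

# toy example: a triangle plus a pendant node needs three colors
print(welsh_powell(["a", "b", "c", "d"],
                   [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]))
```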
Following up on my announcement to present in more detail how the algorithms for automatic correspondence pattern detection can be applied, this post introduces the preliminary preparations needed to run a first experiment with aligned data. To avoid having to align a dataset completely from scratch, we make use of already aligned data from the Tableaux Phonétiques des Patois Suisses Romands by Gauchat et al. (1925), which were originally aligned for the study in List (2014) and later published as part of the Benchmark Database of Phonetic Alignments (List and Prokić 2014). In this post, I will introduce how we can harvest the alignments from this dataset with the help of LingPy, in order to analyze them later with the help of the sound correspondence pattern algorithms.
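As a rough sketch of what such harvesting can look like, the snippet below loads a wordlist that already contains an alignment column with LingPy's Alignments class and iterates over the aligned cognate sets. The file name and column names are placeholders, not the actual benchmark files:

```python
from lingpy import Alignments

# load a wordlist with "cogid" and "alignment" columns (placeholder file)
alms = Alignments("data.tsv", ref="cogid")

# iterate over the multiple sequence alignments, one per cognate set
for cogid, msa in alms.msa["cogid"].items():
    print(cogid)
    for taxon, row in zip(msa["taxa"], msa["alignment"]):
        print(taxon, "\t", " ".join(row))
```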
After about three years of work on the matter, I have managed (with the help of many colleagues who assisted in testing) to develop a first approach for the automatic inference of sound correspondence patterns, which will soon be published in Computational Linguistics (List 2019). The key task this algorithm solves is to take aligned data as input and to compute explicit sound correspondence patterns from the alignments.
Imagine you have two different datasets, both containing approximately the same concepts, but with slightly different numbers of columns and, more importantly, potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other kind of spreadsheet software and then try to manually fix all the problems that occur in the process.
A better idea is to just use LingPy and our CLDF curation framework.
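As a minimal sketch of the LingPy route (file names and the list of shared columns are placeholder assumptions), one can load both datasets as wordlists and rebuild a combined wordlist with fresh numeric identifiers, so that clashing IDs in the two sources cannot collide:

```python
from lingpy import Wordlist

wl1 = Wordlist("dataset1.tsv")  # placeholder file names
wl2 = Wordlist("dataset2.tsv")

# columns shared by both datasets (an assumption for this sketch)
columns = ["doculect", "concept", "ipa"]

# rebuild a single wordlist, assigning new unique row IDs as we go
merged, idx = {0: columns}, 1
for wl in (wl1, wl2):
    for old_id in wl:
        merged[idx] = [wl[old_id, col] for col in columns]
        idx += 1

combined = Wordlist(merged)
print(combined.height, "concepts,", combined.width, "doculects")
```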
LingPy (List et al. 2017) offers a great number of functions for string manipulation. Although most of these functions are well documented (see lingpy.org for details), and the basic ideas have also been described in my dissertation (List 2014), it seems that not many users are aware of these additional possibilities the library offers.
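To give a first impression, the following snippet uses three of these functions to segment an IPA string into tokens, convert the tokens to SCA sound classes, and syllabify them (the outputs in the comments are indicative, not verbatim):

```python
from lingpy.sequence.sound_classes import ipa2tokens, tokens2class, syllabify

tokens = ipa2tokens("tʰɔxtər")      # segment the IPA string into sounds
print(tokens)                        # e.g. ['tʰ', 'ɔ', 'x', 't', 'ə', 'r']
print(tokens2class(tokens, "sca"))   # convert each sound to its SCA class
print(syllabify(tokens))             # insert syllable-boundary markers
```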
In the following, I want to illustrate how we can use LingPy to learn something about the consonant clusters occurring in the data underlying the CLICS database (List et al. 2018, clics.clld.org). In an earlier post, I illustrated how one can use the CLICS software API to cook one's own CLICS application. I will thus assume that you know how to install CLICS (following the instructions on our GitHub page) and obtain the data underlying it.
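Independently of the CLICS data itself, the core idea can be sketched in a few lines: segment a form with LingPy, map the segments to a consonant-vowel template, and collect runs of two or more consonants. The function name and the example form are mine, for illustration only:

```python
from lingpy.sequence.sound_classes import ipa2tokens, tokens2class

def consonant_clusters(form):
    """Return all runs of two or more consecutive consonants in a form."""
    tokens = ipa2tokens(form)
    cv = tokens2class(tokens, "cv")  # "C" for consonants, "V" for vowels
    clusters, current = [], []
    for token, cls in zip(tokens, cv):
        if cls == "C":
            current.append(token)
        else:
            if len(current) > 1:
                clusters.append(current)
            current = []
    if len(current) > 1:
        clusters.append(current)
    return clusters

print(consonant_clusters("strand"))  # [['s', 't', 'r'], ['n', 'd']]
```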
LingPy's LexStat class for cognate detection confuses those who want to apply it, since the name of the Python class is the same as the name of one of the methods the class provides, although the class can be used for other types of cognate detection as well. I recommend that all users of LingPy read our most recent tutorial on LingPy's cognate detection methods (List et al. 2018), since the three most important methods are discussed there in detail: the edit distance method for cognate detection, which makes use of the simple, normalized edit distance, the SCA method, based on the Sound-Class-Based Alignment algorithm (List 2014), and the LexStat method (ibid.). Applying these methods in LingPy is fairly simple and described in detail in our aforementioned tutorial. But LingPy offers an additional method for cognate detection that has the advantage of being extremely fast and is thus especially suitable for exploratory analyses of very large datasets. This method is called turchin in LingPy, named after the first author of a paper presenting it (Turchin et al. 2010), but the method itself, which Turchin et al. call the "Consonant Class Matching" method, goes back to Dolgopolsky (1964) and has long since been implemented as part of the STARLING software package (http://starling.rinet.ru/program.php).
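As a minimal sketch ("wordlist.tsv" being a placeholder for any LingPy-readable wordlist), running this method looks roughly as follows:

```python
from lingpy import LexStat

# load a wordlist (placeholder file) and run the fast consonant-class-
# matching method, writing the resulting cognate IDs to a "cogid" column
lex = LexStat("wordlist.tsv")
lex.cluster(method="turchin", ref="cogid")

# inspect the inferred cognate sets
etd = lex.get_etymdict(ref="cogid")
print(len(etd), "cognate sets inferred")
```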
The Cross-Linguistic Data Formats initiative (CLDF, https://cldf.clld.org, Forkel et al. 2018) has helped a lot in preparing the CLICS² database of cross-linguistic colexifications (https://clics.lingpy.org, List et al. 2018), since linking our data to Concepticon (https://concepticon.clld.org, List et al. 2016) and Glottolog (https://glottolog.org, Hammarström et al. 2018) proved invaluable when merging the different datasets into one large comparative dataset.
CLDF, however, is not restricted to lexical data, but can also be successfully used to store structural data, although, due to the nature of structural data, comparing different datasets becomes much more difficult in this case.
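To make this concrete, here is a minimal sketch using the pycldf package to create a CLDF structure dataset with a single value; the directory name, the IDs, and the feature are invented for illustration:

```python
from pycldf import StructureDataset

# create a new CLDF structure dataset in a fresh directory (placeholder name)
ds = StructureDataset.in_dir("my-structure-data")

# write a single value: language and parameter IDs are invented examples
ds.write(ValueTable=[{
    "ID": "1",
    "Language_ID": "stan1295",
    "Parameter_ID": "word-order",
    "Value": "SVO",
}])
```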