A Primer on Automatic Inference of Sound Correspondence Patterns (3): Extended Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands

Having illustrated how a quick correspondence pattern analysis can be done with the help of readily formatted data and the EDICTOR tool alone, it is now time to show how we can use the LingRex package to carry out a full-fledged correspondence pattern analysis. While EDICTOR uses a simple algorithm based on sorting the patterns, the Python algorithm for correspondence pattern detection, which is described in detail in List (2019), uses a greedy approach inspired by the Welsh-Powell algorithm for graph coloring (Welsh and Powell 1967) to partition all alignment sites in the data into clusters of mutually compatible sites.
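The core idea behind this clustering can be sketched in a few lines: two alignment sites count as compatible if they agree in every position in which neither of them has missing data, and each site is greedily assigned to the first cluster whose members are all compatible with it. The following snippet is only a minimal illustration of this idea, not the actual LingRex implementation; both helper functions are hypothetical and written solely for this post.

def compatible(site_a, site_b, missing='Ø'):
    # two sites are compatible if they agree in all positions
    # in which neither of them shows missing data
    return all(a == b or missing in (a, b)
               for a, b in zip(site_a, site_b))

def greedy_cluster(sites):
    # assign each site to the first cluster in which it is
    # compatible with all members, or open a new cluster
    clusters = []
    for site in sites:
        for cluster in clusters:
            if all(compatible(site, other) for other in cluster):
                cluster.append(site)
                break
        else:
            clusters.append([site])
    return clusters

In analogy to Welsh-Powell graph coloring, the order in which the sites are processed matters, which is why it makes sense to sort them, for example by the number of attested reflexes, before clustering.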

In the following, I will demonstrate how the LingRex package, which I plan to include in LingPy once sufficient tests have been written, can be used to apply the correspondence pattern detection algorithm, and how the results can again be investigated with the help of EDICTOR. To get started, you should make sure to install the package along with its dependencies. The easiest way to do so is to download the package from GitHub or clone it with Git, and to install the dependencies with the help of pip.

$ git clone https://github.com/lingpy/lingrex.git
$ cd lingrex
$ pip install -r pip-requirements.txt
$ python setup.py develop

As test data, we will use the same dataset of 70 alignments taken from the Tableaux Phonétiques des Patois Suisses Romands (Gauchat et al. 1925), which were included as alignments in the Benchmark Database for Phonetic Alignments (http://alignments.lingpy.org, List and Prokić 2014). Since the data has already been prepared as a wordlist that can be accessed by LingPy and EDICTOR, we can access it directly from Python. Because LingRex uses LingPy’s data structures, it expects the same data as input. In fact, the major class that we will use to carry out the correspondence pattern detection is an extension of LingPy’s Alignments class, which handles alignments in wordlists. That means that all the functions available as part of the Alignments class in LingPy are also available as part of the CoPaR class in LingRex. We thus start by loading the data.

from lingrex.copar import CoPaR

# load the wordlist; "cogid" points to the cognate identifiers,
# "tokens" to the segmented transcriptions
cop = CoPaR(
    'tppsr-bdpa.tsv',
    ref='cogid',
    segments='tokens'
)
# print the number of concepts, varieties, and words in the data
print('{0} / {1} / {2}'.format(
    cop.height,
    cop.width,
    len(cop)
))

Now that we have imported the data, we need to add prosodic information to all sequences. This information serves as a kind of initial clustering of the data, based on the prosodic environment of a given alignment site. In its simplest form, we treat all sites alike. But since we know, for example, that our alignments never place a vowel and a consonant in the same column, we can already use this information to make the task a little easier for the algorithm. This is done by adding what is called a structure in the LingRex package.

cop.add_structure(model='cv', structure='structure')

This method adds another column to our data, in which each sound sequence is characterized by a string indicating for each segment whether it is a vowel or a consonant. The result can be seen in the first ten entries of our wordlist, as shown in the table below.

ID   DOCULECT            TOKENS     STRUCTURE
1    Ormont-Dessus       ʃ e        C V
2    Champéry            ʃ i        C V
3    Lourtier            ʃ e        C V
4    Grimentz            ʃ aː       C V
5    Hermance            s e        C V
6    Courtepin           ʃ eː       C V
7    Dompierre           s e        C V
8    Boudry              s aː       C V
9    Côte-aux-Fées       s a        C V
10   Cerneux-Péquignot   s aː v u   C V C V
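If you want to check what the cv model does with a given sound sequence, you can call LingPy’s tokens2class function directly; the following quick check assumes that LingPy ships the cv sound-class model used here.

from lingpy import tokens2class

# convert two of the segmented forms from the table into CV templates
for tokens in [['ʃ', 'e'], ['s', 'aː', 'v', 'u']]:
    print(' '.join(tokens), '->', ' '.join(tokens2class(tokens, 'cv')))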

Now that we have added the STRUCTURE to our data, we can start with the real analysis. We begin by retrieving the alignment sites with all relevant information, restricting the sites we consider to those with at least two reflexes (indicated by the minrefs keyword).

cop.get_sites(minrefs=2, structure='structure')

This method adds a new attribute to our CoPaR object, called sites. The sites are organized as a dictionary, with tuples of the cognate identifier and the position in the alignment as keys, and the structure (whether the site is a consonant or a vowel, in our case) along with the concrete alignment site as values. If a site contains missing entries, this is by default represented with the symbol Ø, which we use to denote missing data (in contrast to -, which denotes a gap in an alignment). The following table illustrates this for the cognate sets 31 and 63 in the data, which contain reflexes for the concepts le chasseur and la hache, with the latter being reflected in only 8 out of 12 varieties in our dataset. The alignment site column shows the reflex for each variety, in alphabetical order by variety name.

COGID   POSITION   STRUCTURE   ALIGNMENT SITE
31      0          C           ʦ ʧ ʦ ʧ ʦ ʦ ʦ ʦ θ ʦ ʦ ʧ
31      1          V           a ɛ a - a a a a ɛ a a -
31      2          C           s s ç s ç ʃ ç s f ç - s
31      3          V           œː u œː u aː œ aː ou œ ɔː au u
31      4          C           r - - - - - - - - - - -
63      0          C           ʦ ʧ Ø ʧ ʦ ʦ ʦ Ø θ Ø Ø ʧ
63      1          V           - ɔ Ø ɑ ɛ - ɛ Ø õ Ø Ø a
63      2          C           - t Ø t t - t Ø - Ø Ø t
63      3          V           - ɛ Ø - a - a Ø - Ø Ø -
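We can confirm this directly in code by printing the two cognate sets from the table; the snippet assumes the key-value layout just described, with a (cognate identifier, position) tuple as key and the structure symbol paired with the site as value.

# print the alignment sites of cognate sets 31 and 63
for (cogid, position), (structure, site) in sorted(cop.sites.items()):
    if cogid in (31, 63):
        print(cogid, position, structure, site)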

Now that we have stored the alignment sites, we can start clustering them.

cop.cluster_sites()

This analysis adds another attribute to our CoPaR object, called clusters. This is again a Python dictionary, with the structure segment and the pattern as keys, and the alignment sites, each represented by cognate identifier and position, as values. If we now check the alignment sites for cognate sets 31 and 63, we can see that the initials of both cognate sets are assigned to the same pattern (as they are compatible), and that the last site in cognate set 31 is clustered with the last site in cognate set 50 (not shown here), while the remaining sites are singletons which do not recur anywhere else in the data. That we find many singletons in the data is not surprising, given the very limited number of cognate sets.
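Assuming the key-value layout just described, a small loop suffices to confirm that the two initial sites end up in the same cluster.

# find the pattern to which the initials of cognate sets 31 and 63
# were both assigned
for (structure, pattern), sites in cop.clusters.items():
    if (31, 0) in sites and (63, 0) in sites:
        print(structure, pattern, sorted(sites))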

To analyze the clusters further, we can now carry out a secondary analysis, in which we compare all patterns inferred in the first run with each alignment site a second time, this time assigning each alignment site to all patterns with which it is compatible. This may result in a fuzzy clustering, as one alignment site can easily be compatible with two or more patterns, provided it contains enough missing data.

cop.sites_to_pattern()

The results of this analysis are stored in the patterns attribute of the CoPaR object, again in the form of a Python dictionary, with the cognate set and the position as the key for each alignment site, and the patterns to which the site was assigned as the value, provided as a list in which each pattern is represented by its size, its structure, and the pattern itself. In our analysis, only three sites occur which could be assigned to more than one pattern. One of these is the second column of the alignment of cognate set 24, m’appeler, which is reflected in only three varieties. Given the large amount of missing data in this alignment site, it is compatible with seven different patterns in the data, as shown in the table below: the first row gives the site itself, and the following rows give the seven patterns.

Ø Ø Ø Ø Ø Ø Ø r r Ø r Ø
r r r r r r r r r r r r
l r r r r r r r r r r r
r r r r r r r r r r
r r r r r r r r ʁ r r
r r Ø r r r r r r r Ø
r r r r r r Ø r Ø Ø r
Ø Ø r Ø Ø Ø r r Ø ʁ Ø Ø

Since it is difficult to spot the differences between these patterns, it helps to focus on the sounds which differ from a plain r, namely the missing-data symbols Ø, the initial l, and the ʁ sounds. We can see that the site is indeed compatible with all seven patterns, and that the seven patterns themselves are incompatible with each other. We can, however, also see that the differences are minor. There are different possibilities to explain them: they could be due to errors in the data or the alignments, to borrowing, to individual irregular sound changes, or to some specific phonetic environment that triggered a certain sound change in individual varieties. Which explanation holds in a given case needs to be checked qualitatively, by investigating the varieties in detail.
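To retrieve such fuzzy sites programmatically, we can search the patterns attribute for all sites that were assigned to more than one pattern; the snippet again assumes the value layout described above.

# list all sites which are compatible with more than one pattern
for (cogid, position), assigned in sorted(cop.patterns.items()):
    if len(assigned) > 1:
        print(cogid, position, 'is compatible with', len(assigned), 'patterns')
        for size, structure, pattern in assigned:
            print('  ', size, structure, pattern)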

As a final step, we add the patterns to our wordlist and write both the patterns and the wordlist to new files.

cop.add_patterns(ref='patterns')
cop.output('tsv', filename='tppsr-copped')
cop.write_patterns('patterns.tsv')

This results in two new files: tppsr-copped.tsv is a wordlist which we can inspect in EDICTOR, and patterns.tsv is a spreadsheet that shows all patterns for each language variety along with the reflexes and the alignment sites. If you now open the file tppsr-copped.tsv with EDICTOR and inspect the correspondence patterns, as shown in the previous post, you will find that EDICTOR displays the patterns inferred with the Python algorithm instead of using its own simple method. In this way, you can conveniently investigate the results with the interactive EDICTOR application, which facilitates the detailed inspection of correspondence pattern data.
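For a quick look at the exported patterns without EDICTOR, the file can also be read with Python’s csv module; this is just a generic peek at a tab-separated file and makes no assumptions about its exact columns beyond a header row.

import csv

# count the exported correspondence patterns
with open('patterns.tsv', encoding='utf-8') as f:
    rows = list(csv.reader(f, delimiter='\t'))
print('columns:', rows[0])
print(len(rows) - 1, 'pattern rows')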

Instead of investigating the data further, I will end here and leave it to interested readers to check the data and the results themselves. A script to run the analysis, along with the data, is available in the form of a GitHub Gist.

References

Gauchat, Louis and Jeanjaquet, Jules and Tappolet, Ernest (1925): Tableaux phonétiques des patois suisses romands. Relevés comparatifs d’environ 500 mots dans 62 patois-types. Publiés avec introduction, notes, carte et répertoires. Neuchâtel: Attinger.

List, Johann-Mattis and Prokić, Jelena (2014): A benchmark database of phonetic alignments in historical linguistics and dialectology. In: Proceedings of the Ninth International Conference on Language Resources and Evaluation. 288-294.

List, Johann-Mattis (2019): Automatic inference of sound correspondence patterns across multiple languages. Computational Linguistics 45.1. 137-161.

Welsh, D. J. A. and Powell, M. B. (1967): An upper bound for the chromatic number of a graph and its application to timetabling problems. The Computer Journal 10.1. 85-86.
