A Primer on Automatic Inference of Sound Correspondence Patterns (2): Initial Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands

Following up on my announcement to present in more detail how the algorithms for automatic correspondence pattern detection can be applied, this post introduces the preliminary preparations needed to run a first experiment with aligned data. To avoid having to align a dataset completely from scratch, we make use of already aligned data from the Tableaux Phonétiques des Patois Suisses Romands by Gauchat et al. (1925), which were originally aligned for the study in List (2014) and later published as part of the Benchmark Database of Phonetic Alignments (List and Prokić 2014). In this post, I will show how we can harvest the alignments from this dataset with the help of LingPy, in order to analyze them later with the sound correspondence pattern algorithms.

The Tableaux Phonétiques des Patois Suisses Romands (Gauchat et al. 1925) is a large collection of data on the French dialects spoken in Switzerland. Originally collected by Gauchat and colleagues, it was digitized in a project by Hans Geisler (Heinrich Heine Universität Düsseldorf), but could not be published officially at the time due to copyright restrictions. Fortunately, however, I could use parts of the data for my dissertation (List 2014), where I aligned 76 of the charts for as many as 62 dialect points. While the data in the form used back then can still be interactively searched on the website offering all supplementary material accompanying my dissertation, I figured later that it would be better to share it officially as part of a larger benchmark database of phonetic alignments, which I published together with Jelena Prokić, who contributed alignments for Bulgarian dialects to that sample (List and Prokić 2014). This Benchmark Database of Phonetic Alignments (BDPA) offers a potentially more convenient way of browsing and inspecting alignment data, although the data is not necessarily offered in a form that is convenient to reuse.

As of now, the original alignment data underlying the BDPA has also been submitted to Zenodo, from where it can still be downloaded. The data in the Zenodo repository has a very simple structure, containing a zip-folder for each of the different datasets from which we harvested the alignments, along with two redundant master-folders containing all 750 multiple sequence alignments. Each of the folders contains two sub-folders: one called msa, containing the multiple alignments in the so-called msa-format, which can be readily imported and processed by LingPy, and a psa-folder, containing all corresponding pairwise phonetic alignments (i.e., pairwise alignments automatically derived from the multiple alignments by extracting all possible pairs).

The msa-format is basically outdated, and we don’t use it anymore, although LingPy still parses it. For testing purposes, this is quite useful, although we nowadays tend to store alignments exclusively in wordlists, where we have more consistent ways of handling the data and can also carry out more interesting analyses. The msa-format is described in detail on the LingPy website, so I will spare the readers and myself a closer description here. What is important to know is that we want to convert those files in the BDPA that correspond to the TPPSR, as provided in msa-format on Zenodo, to the “normal” wordlist-format, as it is used by the LingPy package (see List et al. 2018 for a closer description) and also required in order to compute correspondence patterns with the help of LingRex and the correspondence pattern recognition algorithm (List 2019).
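For orientation, though, a rough sketch may help: an msa-file is a plain text file in which the first line gives the dataset name, the second line the sequence identifier, and each further line one variety along with its aligned, tab-separated segments. The following illustration is purely schematic, with invented forms that do not come from the actual data (segments are separated by spaces here for readability):

French
*clef* (< Latin *clāvem*)
Boudry      k   l   e   -
Champéry    k   l   a   f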

Assuming that you have downloaded the data from Zenodo and unpacked the multiple.zip folder, placing the msa-folder in your current working directory (or cd-ing into it), we can now get started converting the data. Our goal is to select some 15 representative dialects from the data, extract their alignments, and store them in the wordlist-format, so that we can later analyze the correspondence patterns in the data. We thus start by setting up our Python script, importing the required packages:

from lingpy import *
from glob import glob
import re
import tqdm
from lingpy.align.sca import normalize_alignment

We use glob to retrieve the paths of the files, and tqdm to have a status bar that informs us about the progress. We further need the re module to retrieve some information about the data, lingpy in general, and the normalize_alignment function in particular. This latter function deletes all columns from an alignment that consist only of gaps. Such columns can arise when taking only a small selection of language varieties from a larger collection of aligned words.
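To illustrate what normalization means, consider the following made-up example, in which the third column consists only of gaps and is therefore removed:

# made-up example: the third column consists only of gaps
alm = [['k', 'l', '-', 'e'],
       ['k', 'l', '-', 'a']]
print(normalize_alignment(alm))
# [['k', 'l', 'e'], ['k', 'l', 'a']]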

We can now retrieve all files with the help of glob.

files = glob('msa/*.msa')

To make sure that we find the varieties we want, I made a manual pre-selection, which we represent as a Python dictionary:

selection = {'Boudry': '46',
 'Cerneux-Péquignot': '53',
 'Champéry': '18',
 'Courtedoux': '62',
 'Courtepin': '41',
 'Côte-aux-Fées': '50',
 'Dompierre': '42',
 'Evolène': '30',
 'Grimentz': '31',
 'Hermance': '36',
 'Lourtier': '22',
 'L’Auberson': '3',
 'Ormont-Dessus': '15',
 'Plagne': '56',
 }

We also represent our wordlist as a Python dictionary, where the 0-key represents the column header.

D = {0: [
    'doculect',
    'language_id',
    'concept',
    'latin',
    'french',
    'form',
    'tokens',
    'alignment',
    'cogid'
    ]}

Since we want to fill the wordlist with consecutive identifiers for the words themselves and for the cognate sets, we now initialize these as variables.

idx, cogid = 1, 1

We also define a very lazy converter, since we want to replace all underscores in the alignments with the + symbol, which is now the standard marker for morpheme boundaries that we settled on during the last year (older versions still use the underscore _ as a marker for word boundaries).

converter = {
        '_': '+'
        }

We can now start looping over all files and extracting the relevant data. In this loop, we open each of the msa-files with the help of LingPy and check its dataset property. If the dataset is French, we keep the file and try to process it further. We use a regular expression to parse the HTML-like coding of the so-called sequence identifier of the alignment, which gives the counterpart of the aligned word forms in French along with its projected ancestral form in Latin. Since the msa-format does not specify how the dataset or the sequence identifier should be structured, the formats are pretty free, and back then, I used HTML-like tags for convenience (see comment 1 in the code below).
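To illustrate the parsing step, consider the following invented sequence identifier (the real identifiers contain different markup, but the regular expression works the same way, simply extracting whatever is enclosed in asterisks):

# invented sequence identifier, for illustration only
seq_id = '<b>*clef*</b> (&lt; Latin <i>*clāvem*</i>)'
french, latin = re.findall(r'\*(.*?)\*', seq_id)
print(french, latin)  # clef clāvem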

Once we have extracted the dataset, made sure it is French, and also extracted the Latin and the French word forms, we can extract the aligned data from the MSA object, stored in the variable msa. Here, we iterate over its properties msa.taxa and msa.alignment and check if the name of the variety occurs in our dictionary of pre-selected varieties. If this is the case, we retrieve the unaligned but segmented form (called tokens) by stripping off all dashes from the alignment, and we also retrieve the raw, unsegmented form by additionally deleting the spaces that would otherwise indicate the boundaries between the sound segments.

We can now add all data to our wordlist (or rather our dictionary; see comment 3 in the code), but we should not forget to increment the index for the word identifier (idx) and the cognate set identifier (cogid).

for f in tqdm.tqdm(files):
    # (1: initiate MSA object)
    msa = MSA(f)
    if msa.dataset == 'French':
        french, latin = re.findall(
                r'\*(.*?)\*',
                msa.seq_id
                )
        # (2: extract alignments)
        for taxon, alm in zip(msa.taxa, 
                msa.alignment):
            taxon_id = selection.get(taxon, '')
            if taxon_id:
                # apply the converter to the alignment as well, so that
                # tokens and alignment use the same boundary marker
                alm = [converter.get(x, x) for x in alm]
                tokens = [x for x in alm if x != '-']
                form = ''.join(tokens)
                # (3: add data to wordlist)
                D[idx] = [
                        taxon,
                        taxon_id,
                        french,
                        latin,
                        french,
                        form,
                        tokens,
                        alm,
                        cogid
                        ]
                idx += 1
        cogid += 1
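To give an idea of the result, each row in D follows the structure of the header defined in D[0]; with invented values, a row might look as follows:

# invented example row, following the header in D[0]
# D[1] == ['Boudry', '46', 'clef', 'clāvem', 'clef',
#          'kle', ['k', 'l', 'e'], ['k', 'l', 'e', '-'], 1]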

Now that we have assembled the data, we can load it with the help of LingPy’s Alignments class, which we call by passing the dictionary along with cogid as the keyword for the reference (ref) from which we construct our alignments.

alms = Alignments(D, ref='cogid')

We now need to make sure that all alignments are “normalized”, i.e., that they do not contain columns consisting only of gaps. We do this by iterating over all alignments in the Alignments object, which we can access as a dictionary in which the key is the cognate identifier and the value is a dictionary identical with the data needed to construct an MSA object. The normalization itself is straightforward, and we re-write all individual alignments attached to the individual word forms to make sure that the result is readily stored.

for cogid, msa in alms.msa['cogid'].items():
    for idx, alm in zip(
            msa['ID'],
            normalize_alignment(
                msa['alignment']
                )
            ):
        alms[idx, 'alignment'] = alm

We can now write the data to file. By passing the prettify keyword and setting it to False, we make sure that the data will be written to plain TSV format.

alms.output(
        'tsv', 
        filename='tppsr-bdpa',
        prettify=False
        )
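As an optional sanity check (not part of the original workflow), we can reload the resulting file as a plain wordlist and inspect its dimensions:

# optional check: reload the TSV file and report its size
wl = Wordlist('tppsr-bdpa.tsv')
print(wl.width, 'varieties,', wl.height, 'concepts')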

How we can use this file to calculate correspondence patterns with the help of the correspondence pattern recognition algorithm described in List (2019) is something I will describe in more detail in a follow-up post. But what we can already do with the data now is load it into EDICTOR (List 2017) and use the built-in tool for correspondence pattern analyses. This tool is less accurate than the Python implementation in the LingRex package, since it is written in plain JavaScript. All it does is try to sort the sound correspondences in a rather smart way, using JavaScript’s rather convenient ways of sorting arrays. This works — at least to some degree — surprisingly well, as you can see for yourself when opening the EDICTOR at http://edictor.digling.org and then either dragging the file tppsr-bdpa.tsv onto the BROWSE button or selecting it by clicking on this button. You then click on ANALYZE in the menu and select CORRESPONDENCE PATTERNS from there. Select “full cognates”, as we are not dealing with partial cognates here, and press OK.

Correspondence pattern display in EDICTOR

What you will see is a collection of all correspondence patterns which EDICTOR’s simple method extracts from the alignments. The arrangement is by language variety in the columns and by cognate set (or alignment) in each row. By clicking on a given value, EDICTOR will show you the full word for that entry.

Viewing original words for a given correspondence pattern

Clicking on the cognate ID on the very left will show you the alignment.

Alignment popups in EDICTOR’s correspondence pattern viewer

It would take too long to describe all search possibilities here, so I recommend those interested in exploring this further to simply download the dataset prepared with the script described here from the GitHub Gist accompanying this tutorial, and see for yourself what can be done. If you have ideas on how the tool could be enhanced, I would furthermore be happy about any kind of feedback, questions, or suggestions, which you should ideally share via our project page on GitHub (digling/edictor).

References

Gauchat, Louis and Jeanjaquet, Jules and Tappolet, Ernest (1925): Tableaux phonétiques des patois suisses romands. Relevés comparatifs d’environ 500 mots dans 62 patois-types. Publiés avec introduction, notes, carte et répertoires. Neuchâtel: Attinger.

List, Johann-Mattis (2014): Sequence comparison in historical linguistics. Düsseldorf: Düsseldorf University Press.

List, Johann-Mattis and Prokić, Jelena (2014): A benchmark database of phonetic alignments in historical linguistics and dialectology. In: Proceedings of the Ninth International Conference on Language Resources and Evaluation. 288-294.

List, Johann-Mattis (2017): A web-based interactive tool for creating, inspecting, editing, and publishing etymological datasets. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. System Demonstrations. 9-12.

List, Johann-Mattis and Walworth, Mary and Greenhill, Simon J. and Tresoldi, Tiago and Forkel, Robert (2018): Sequence comparison in computational historical linguistics. Journal of Language Evolution 3.2. 130–144.

List, Johann-Mattis (2019): Automatic inference of sound correspondence patterns across multiple languages. Computational Linguistics 45.1. 1-24.

 

