Using the Waterman-Eggert algorithm for sentence alignment

During the 24th International Conference on Historical Linguistics, a colleague asked me whether I knew a good way to align and score sentences available in the form of phonetic transcriptions. While one can roughly compare sequences rather easily by aligning them and calculating, for example, the edit distance between them, the task of sentence alignment can be approached in a somewhat more subtle way.

The first obstacle that we may meet when trying to align sentences is that a completely linear alignment may cause problems, since sentences may contain the same or similar words but differ with respect to word order. The first alignment algorithm that comes to mind when trying to deal with this question is the so-called Waterman-Eggert algorithm, first proposed by Waterman and Eggert (1987), after whom it is named.

Quite some time ago, I provided an implementation of this algorithm as part of the LingPy package. The basic idea, compared to traditional alignment algorithms, is to expand upon local alignment when searching for optimal subsequences between two strings. While local alignment stops after having identified the highest-scoring similar subsequence between two strings, the Waterman-Eggert algorithm does not stop, but instead continues searching for further similar subsequences in the data until all of them have been identified.

This is not the place to go into the algorithmic details. For those interested in the topic, I recommend having a look at List (2014), where the major algorithms for alignment analyses are described and discussed with respect to their usability in linguistic applications. To get a brief glimpse of the crucial difference between the Waterman-Eggert algorithm and the famous local alignment algorithm by Smith and Waterman (1981), let us open a terminal and test the Waterman-Eggert algorithm in a short LingPy session (List et al. 2018).

>>> from lingpy import *
>>> for a, b, c in we_align('abcd', 'cdab'):
...     print(' '.join(a))
...     print(' '.join(b))
...     print('-')
a b
a b
-
c d
c d
-

What should be clear from this illustration is that the algorithm does not simply align the strings locally, but instead searches for multiple substring matches. Internally, this is done by identifying the highest-scoring subsequence in the alignment matrix first, and then searching again for the next-highest-scoring subsequence, until none are left.
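
To illustrate this iterative procedure, here is a minimal sketch in plain Python. Note that this is not LingPy's implementation, but a simplified variant: it runs a bare-bones Smith-Waterman pass, reports the best-scoring local match, masks the matched positions with sentinel symbols so that they cannot be used again, and repeats until nothing scores above zero. Waterman and Eggert instead recompute the affected cells of the matrix in place, which is more efficient, but on small examples the results are the same.

def sw_once(seqA, seqB, match=1, mismatch=-1, gap=-1):
    """One Smith-Waterman pass: best local score and its matched positions."""
    n, m = len(seqA), len(seqB)
    M = [[0] * (m + 1) for _ in range(n + 1)]
    best, end = 0, (0, 0)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if seqA[i - 1] == seqB[j - 1] else mismatch
            M[i][j] = max(0, M[i - 1][j - 1] + s,
                          M[i - 1][j] + gap, M[i][j - 1] + gap)
            if M[i][j] > best:
                best, end = M[i][j], (i, j)
    # trace back from the best cell, collecting diagonal (pairing) steps
    path, (i, j) = [], end
    while i > 0 and j > 0 and M[i][j] > 0:
        s = match if seqA[i - 1] == seqB[j - 1] else mismatch
        if M[i][j] == M[i - 1][j - 1] + s:
            path.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif M[i][j] == M[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return best, path[::-1]

def we_sketch(strA, strB):
    """Yield all non-overlapping local matches, best first."""
    seqA, seqB = list(strA), list(strB)
    while True:
        score, path = sw_once(seqA, seqB)
        if score <= 0:
            break
        yield ''.join(seqA[i] for i, _ in path), score
        for i, j in path:  # mask the used positions with sentinels
            seqA[i], seqB[j] = '\0', '\1'

for segment, score in we_sketch('abcd', 'cdab'):
    print(segment, score)

Applied to the toy example from above, this prints the two blocks ab and cd, each with a score of 2.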

To use this for the alignment of sentences provided in phonetic transcription, we can write a wrapper function that makes use of the Waterman-Eggert implementation given in LingPy. In addition to simply aligning two sentences, however, we would also like to score them, and we will soon see how this can be done. We first start by loading a patch for the Waterman-Eggert algorithm, since I realized that the current LingPy implementation contains a bug. This patch is provided along with the code in a GitHub Gist accompanying this post.

We start our script by loading the libraries: LingPy itself, the patch (which will be included in LingPy with the next update), and the product function from itertools, which we need to populate our scoring dictionary.

from lingpy import *
from patch_we_align import we_align
from itertools import product

We now define our function. It has a rather simple call signature with a few parameters. The first two parameters are the sentences, which should be given in IPA transcription without any punctuation marks. The gap parameter is the classical gap penalty passed to the alignment algorithm. The model parameter refers to the sound class model, i.e., it determines how sounds are represented internally in our function; the default is the sca model widely used in LingPy. The limit argument tells the function which subsequences it should still accept as matches. For initial tests, subsequences of length two seem like a good start.

def salign(
        senA, 
        senB, 
        gap=-1, 
        model='sca',
        limit=2
        ):
    """Align and score two sentences"""

In the next step, we convert the sentences into sound classes, using the sound class model. To do so, we first tokenize them, i.e., we determine what should count as a single sound. The tokenizer treats combinations of base letters and diacritics as one sound. If you do not want to use it, just provide your data in space-segmented form already, and it will leave the data unchanged. We then convert the segmented data (a Python list) into sound class representations.

    # retrieve the sound class model: if a string like 'sca' is passed,
    # convert it to a Model object (Model instances provide a scorer)
    if not hasattr(model, 'scorer'):
        model = Model(model)
    # convert the sentences to tokens, marking spaces as boundaries
    tokA, tokB = list(map(
        lambda x: ipa2tokens(
            x.replace(' ', '_')),
        [senA, senB]))

    # assume segmentation by underscore
    clsA, clsB = list(map(
        lambda x: tokens2class(
            x, 
            model
            ),
        [tokA, tokB]
        ))
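
As an aside, outside of our function, we can quickly check in an interactive session what these two conversion steps produce, here for the word hɛrt͡s, which we will meet again in the example below. Note how the affricate t͡s is kept together as one segment and mapped to the sound class C:

>>> toks = ipa2tokens('hɛrt͡s')
>>> toks
['h', 'ɛ', 'r', 't͡s']
>>> tokens2class(toks, Model('sca'))
['H', 'E', 'R', 'C']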

Before we can align the data with the Waterman-Eggert algorithm, we need to create a scorer that tells the algorithm how segments should be compared with each other. Here, we simply construct a flat scorer ourselves, giving 1 to identical sound classes and -1 to all other pairs, and setting the score for matching _ with itself to 0, because we do not want to encourage the algorithm to align too many boundary markers with each other.

    scorer = {(a, b): -1 if \
            (a != b or '_' in (a, b)) \
            else 1 for a, b in product(
            set(clsA+clsB),
            set(clsA+clsB))
        }
    # make sure boundaries don't score
    scorer['_', '_'] = 0
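
To make the behaviour of this flat scorer concrete, here are the values it assigns to some pairs of sound classes from the example discussed below (the values follow directly from the comprehension above):

scorer['H', 'H']  # 1: identical sound classes match
scorer['H', 'E']  # -1: different sound classes mismatch
scorer['_', '_']  # 0: boundary markers are neutral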

We can now compute the alignments. While doing so, we iterate over each identified block and store its similarity score in a list, so that we can later use the scores to quantify the similarity of the two sentences. Our patch has the advantage of returning, in addition to the original call signature, the indices of the elements that have been aligned, which one could use to map the aligned sound classes back to the original transcriptions.

    out, scores = [], []
    for a, b, score, iA, iB in we_align(
            clsA, clsB, scorer=scorer, gap=gap):
        # discard blocks shorter than the limit
        if len(a) >= limit:
            out += [(a, b, score)]
            scores += [score]
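
As an aside, the indices iA and iB could also be used to replace the aligned sound classes in the output by the original IPA tokens. The following line is only a hypothetical sketch of this idea, assuming that iA is a list of positions pointing into the tokenized first sentence:

# hypothetical: recover the IPA tokens behind an aligned block,
# assuming iA lists the token positions covered by the block
origA = [tokA[i] for i in iA]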

While we could already output the data as is, it is useful to score the data as well, in order to have a general distance measure between the two sentences. To do so, we follow Downey et al. (2008) in computing a normalized distance score, comparing the similarity scores that we retrieve for each aligned subsequence with the scores of the alignments of each sentence with itself.
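
Expressed as a formula, with S_A and S_B denoting the score of each sentence aligned with itself, and S_AB denoting the average score of the aligned blocks, the distance is computed as:

d(A, B) = 1 - \frac{2 \cdot S_{AB}}{S_A + S_B}

Roughly speaking, the closer the similarity of the aligned blocks comes to the self-similarity of the two sentences, the closer the distance gets to 0.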

    # compute self-scores; if no block passed the limit,
    # we return the maximal distance to avoid dividing by zero
    if not scores:
        return out, 1.0
    sA = sum([scorer[a, a] for a in clsA])
    sB = sum([scorer[a, a] for a in clsB])
    sAB = sum(scores) / len(scores)

    score = 1 - (2 * sAB / (sA + sB))
    return out, score

Equipped with this code, we can now carry out our first sentence alignment. Let us compare two short German sentences with each other, in which some words have been deliberately rearranged.

alms, dist = salign(
        'mainəʁuː.ɪsthɪnsainhɛrt͡s',
        'hɛrt͡smainʁuː.əsain',
        gap=-2
        )
for a, b, score in alms:
    print(' '.join(a))
    print(' '.join(b))
    print('{0:.2f}'.format(score))
print('{0:.2f}'.format(dist))

This comparison (the first sentence is inspired by Goethe’s “Meine Ruh’ ist hin”) yields the following output:

H E R C
H E R C
4.00
S A N
S A N
3.00
M A N E R Y
M A N - R Y
3.00
0.38

In total, the algorithm yields three blocks, which correspond to four words in the original data, with the third block (corresponding to “meine Ruh”) showing the same word order in both test sequences. The last score, 0.38, is the overall distance between the two sequences, which is rather low, given the high number of words occurring in the sequences.

It is clear that more could be done to arrive at a good sentence alignment. This post was not meant to present a complete solution to the problem; rather, it was intended to illustrate how we can use the tools offered in libraries such as LingPy to carry out quick tests on new topics that have not yet been thoroughly discussed in the field of comparative linguistics. Code and patch are available in the form of a GitHub Gist, which you can download from here.

References

Downey, Sean S. and Hallmark, Brian and Cox, Murray P. and Norquest, Peter and Lansing, Stephen (2008): Computational feature-sensitive reconstruction of language relationships: developing the ALINE distance for comparative historical linguistic reconstruction. Journal of Quantitative Linguistics 15.4. 340-369.

List, Johann-Mattis (2014): Sequence comparison in historical linguistics. Düsseldorf: Düsseldorf University Press.

List, Johann-Mattis and Greenhill, Simon and Tresoldi, Tiago and Forkel, Robert (2018): LingPy. A Python library for quantitative tasks in historical linguistics. Jena: Max Planck Institute for the Science of Human History. URL: http://lingpy.org.

Smith, T. F. and Waterman, M. S. (1981): Identification of common molecular subsequences. Journal of Molecular Biology 147.1. 195-197.

Waterman, M. S. and Eggert, M. (1987): A new algorithm for best subsequence alignments with application to tRNA-rRNA comparisons. Journal of Molecular Biology 197. 723-728.
