In the past, people have repeatedly asked me how they could use their own scoring functions in combination with LingPy’s alignment algorithms. Their major concern was that the sound-class-based scoring systems we use in LingPy might fail to reflect the true phonetic similarity of sounds, not least because they are not informed by classical ideas about distinctive features in phonology. As described in detail in List (2014), LingPy converts sounds in phonetic transcription to an internal alphabet of less than 30 letters, to which the alignment algorithms are then applied in a second stage.
What my colleagues wanted to do instead was to test feature-based scoring systems (e.g. Chomsky and Halle 1968) and to use these features to derive distance or similarity scores between phonetic segments, which would then be used to carry out the alignment analysis. In addition, some colleagues had created their own similarity matrices and wanted to apply them more flexibly to other datasets (see, for example, Jäger 2015). My answer was usually that this could be done in principle, but that it would require some amount of data preprocessing, depending on what data one wants to align. Not many colleagues asked me a second time, maybe because they thought it would be too complicated to use their own distance or similarity scores in LingPy.
When I recently started working on a modified alignment algorithm myself, I had to address the same question again, and I figured that it is in fact not that difficult to use a custom scoring function in LingPy. Most of the things one needs to use LingPy along with custom scoring functions are already in place. Furthermore, with our recently published pyclts package, underlying the Database of Cross-Linguistic Transcription Systems (see Anderson et al. 2018 for an overview), there is now even a large collection of sounds for which we have distinctive feature sets. The major reason why I had never tested feature-based alignment methods before was that I could not find a feature system that would offer features for all the sounds I usually find in linguistic datasets. Even a dataset like Phoible (Moran et al. 2014), which offers huge amounts of transcriptions, barely scratches the surface of the variation one encounters when working with real transcriptions.
Luckily, our CLTS system has proven to be very robust so far. It understands some 8000 different phonetic segments, including clicks, tones, diphthongs, and certain consonant clusters, and we have developed first approaches to convert a given dataset into the form of IPA that CLTS accepts. For the conversion, we now use orthography profiles (Moran and Cysouw 2018), which I have presented in past tutorials (List 2017), and for which I recently also wrote an implementation in JavaScript, called SegmentsJS.
I therefore think that it is time to get back to my colleagues’ request and illustrate how one can first create a custom, feature-based scoring function with the help of CLTS, and then use this scoring function to carry out pairwise alignments of phonetic sequences. Given my limited time for writing tutorial blog posts, I have decided to split the discussion into two (potentially three) posts: the creation of the scoring function is covered in this post, while I will get back to the question of how to use LingPy’s methods for sequence comparison in one or two follow-up posts.
Before we start with the actual code, it is important to understand how scoring functions work. Basically, there are two different kinds of scoring functions: those based on distances between segments, and those based on similarities between segments. One may intuitively think that there is no real difference between using similarities or distances. When looking more closely into the algorithmics underlying the sequence alignment problem, however, this assumption turns out to be incorrect, since only similarity-based scoring functions allow for both semi-global and local alignment analyses (for details on these problems, compare List 2014, where all algorithms are described in detail).
For this reason, it is not useful to start from the computation of distance scores for phonetic segments. Instead, I recommend everybody who wishes to work seriously on phonetic alignment algorithms to work with those algorithms that make use of similarities between segments. Distance scores are admittedly easy to understand, since we can think of them as distances in space, along pathways, or in trees: the distance between identical objects is always 0, and the more dissimilar two objects become, the higher the distance score between them. Similarity scores, on the other hand, assign some high score to the identity of objects, which may vary depending on the objects in question, and smaller scores to dissimilar objects, with scores below zero being reserved for those objects that have almost nothing in common.
In order to derive a distance score between two sound segments which are defined by a given feature system, the easiest and seemingly most straightforward way (recommended and defended by some colleagues in personal communication) is to compute the so-called Hamming distance (Hamming 1950). This distance reflects the proportion of features in which two segments differ. Turning the Hamming distance into a similarity score is straightforward, since we only need to calculate the proportion of features in which two segments are identical.
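To make the relation between the two measures concrete, here is a minimal sketch with made-up feature vectors (the five features and their values are invented for illustration and are not the actual CLTS features):

```python
# hypothetical five-feature vectors for two sounds (illustration only)
vec_p = ('bilabial', 'voiceless', 'stop', '-', '-')
vec_b = ('bilabial', 'voiced', 'stop', '-', '-')

# Hamming distance: proportion of features in which the vectors differ
dist = sum(1 for x, y in zip(vec_p, vec_b) if x != y) / len(vec_p)

# Hamming similarity: proportion of features in which they agree
sim = 1 - dist

print(dist, sim)  # 0.2 0.8: the vectors differ in one out of five features
```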
If we look at the CLTS feature system, we find 37 features in total, covering the three major classes of sounds (consonants, vowels, and tones), which themselves may vary with respect to the features that define them. The features themselves can be binary or have multiple values. Retrieving the features of a given sound linked to the CLTS system is straightforward: they can be found in the `featuredict` attribute of a `Sound` object.
```python
from pyclts import TranscriptionSystem

bipa = TranscriptionSystem('bipa')
sound = bipa['ts']
for i, (k, v) in enumerate(sorted(sound.featuredict.items())):
    print('{0:5} | {1:22} | {2:10}'.format(i + 1, k, v or '-'))
```
When you type this code into your command line (provided you have installed the pyclts package, e.g., with the help of `pip`, by typing `pip install pyclts`), you will receive the output shown in the following table.
| No. | Feature | Value |
| --- | --- | --- |
| 1 | articulation | – |
| 2 | aspiration | – |
| 3 | breathiness | – |
| 4 | creakiness | – |
| 5 | duration | – |
| 6 | ejection | – |
| 7 | glottalization | – |
| 8 | labialization | – |
| 9 | laminality | – |
| 10 | laterality | – |
| 11 | manner | affricate |
| 12 | nasalization | – |
| 13 | palatalization | – |
| 14 | pharyngealization | – |
| 15 | phonation | voiceless |
| 16 | place | alveolar |
| 17 | preceding | – |
| 18 | raising | – |
| 19 | relative_articulation | – |
| 20 | release | – |
| 21 | sibilancy | sibilant |
| 22 | syllabicity | – |
| 23 | velarization | – |
| 24 | voicing | – |
We can see from the output that CLTS currently uses as many as 24 different features to define consonants, which mostly reflect the traditional way in which sounds are defined by the International Phonetic Alphabet (IPA 1999). You can apply the same procedure to check the vowel and tone features used by CLTS, by replacing the line `sound = bipa['ts']` with `sound = bipa['a']` or `sound = bipa['⁵⁵']`, respectively.
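As a quick shortcut, one can also count the entries in the `featuredict` of one dummy sound per sound class (the consonant count of 24 matches the table above):

```python
from pyclts import TranscriptionSystem

bipa = TranscriptionSystem('bipa')
# one dummy sound per sound class: consonant, vowel, tone
for grapheme in ['t', 'a', '⁵⁵']:
    sound = bipa[grapheme]
    print('{0:10} {1}'.format(sound.type, len(sound.featuredict)))
```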
With our feature vectors and a system like CLTS which offers feature values, it is straightforward to write a short function that allows us to compare different sounds transcribed in the Broad IPA (`bipa`) system offered by the CLTS database via their Hamming similarity.
We start by defining our function for scoring two sound segments. This function accepts two parameters (the sound segments, passed as strings) and three keywords: `bipa` (referring to the `bipa` transcription system offered by the `pyclts` package), `classes` (a dictionary that defines to which score we want to normalize our Hamming similarities), and `features` (the feature vectors, as offered per sound class in our transcription system).
```python
from pyclts.transcriptionsystem import TranscriptionSystem
from itertools import combinations


def score_sounds(a, b, features=None, classes=None, bipa=None):
    """
    Score sounds with Hamming distance from feature system.
    """
    # load bipa object
    bipa = bipa or TranscriptionSystem('bipa')

    # define the features
    features = features or {
        "consonant": list(bipa['t'].featuredict),
        "vowel": list(bipa['a'].featuredict),
        "tone": list(bipa['⁵⁵'].featuredict)
    }

    # define base score for the classes
    classes = classes or {
        "consonant": 1,
        "vowel": 1,
        "tone": 1
    }
```
You can see from this first code block that I was lazy and did not spell out the feature systems, but retrieved them from dummy sounds. As we will see later, however, we will usually create our feature vectors beforehand and pass them to the function via the keywords, also to avoid that the Broad IPA system is loaded on every call, which would drastically reduce the performance of this method.
Our next step consists in converting our sounds to our `TranscriptionSystem` in CLTS, and in checking for diphthongs or clusters. Diphthongs and clusters are not defined by their own features in CLTS, but rather have separate features for their first and their second sound, which can be accessed through the attributes `from_sound` and `to_sound`. To keep the demonstration simple for the time being, we will for now simply represent these complex sounds by their first sound.
```python
    # convert sounds to transcription system
    sA, sB = bipa(a + ' ' + b)

    # check for diphthongs or clusters
    if hasattr(sA, 'from_sound'):
        sA = sA.from_sound
    if hasattr(sB, 'from_sound'):
        sB = sB.from_sound
```
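As a brief aside, this is what the `hasattr` check reacts to: complex sounds expose their two parts as separate sounds. A small sketch, reusing the `bipa` object from above and assuming that [au] is parsed as a diphthong:

```python
# diphthongs and clusters carry no feature vector of their own in CLTS;
# instead, their parts are accessible as separate sounds
dip = bipa['au']
print(hasattr(dip, 'from_sound'))  # True for complex sounds
print(dip.from_sound.type)         # the first part should be a plain vowel
```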
Now we make sure to return a high negative value in those cases where the base classes of the two sounds turn out to be different. This score is in fact arbitrary, as long as it is low enough, since we want to prevent, no matter what, that alignments mix the sound classes (some colleagues were not happy with this decision in the past, but my experience clearly shows that not separating the classes leads to unexpected alignment behavior that is extremely difficult to control).
```python
    # return -10 if sound classes don't match
    if sA.type != sB.type:
        return -10
```
Now we are almost done with our function that calculates segment similarities based on distinctive feature systems. In order to compute the Hamming similarity, we first define the base similarity, which reflects the size of the feature vector, and then calculate the factor needed to normalize the data consistently. According to the defaults we wrote into the function, the highest similarity score for all sound classes is 1, so normalization will yield scores between 0 and 1 for our Hamming similarities.
```python
    # base score is the number of features
    sim = len(features[sA.type])

    # normalization factor
    normalize = classes[sA.type] / sim

    # return in case of identity
    if a == b:
        return sim * normalize

    # reduce similarity in case of feature mismatch
    for feature in features[sA.type]:
        if sA.featuredict[feature] != sB.featuredict[feature]:
            sim -= 1

    return sim * normalize
```
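With the function now complete, a quick sanity check (the expected values follow from the similarity matrix shown further below):

```python
print(round(score_sounds('p', 'p'), 2))  # 1.0: identical sounds
print(round(score_sounds('p', 'b'), 2))  # 0.96: one out of 24 features differs
print(score_sounds('p', 'a'))            # -10: consonant vs. vowel
```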
Now that we can compute our Hamming similarities, we can test this function directly by feeding it different sounds. In order to get a full-fledged scoring dictionary, however, which we will need to carry out a phonetic alignment analysis, it is useful to write another function that takes a couple of sounds as input and returns a scoring dictionary in which all sounds are compared with each other. This function takes a list of letters as its parameter, along with the three keywords we already defined for the `score_sounds` function.
```python
def get_scorer(letters, bipa=None, classes=None, features=None):
    """
    Retrieve a scoring dictionary for alignment algorithms.
    """
    # load bipa object
    bipa = bipa or TranscriptionSystem('bipa')

    # define the features
    features = features or {
        "consonant": list(bipa['t'].featuredict),
        "vowel": list(bipa['a'].featuredict),
        "tone": list(bipa['⁵⁵'].featuredict)
    }

    # define base score for the classes
    classes = classes or {
        "consonant": 1,
        "vowel": 1,
        "tone": 1
    }
```
After this boring part, in which we repeat the same code (this could definitely be enhanced further, but for demonstration purposes it is surely enough), we can now compute the scorer. This can be done in a very straightforward way with the help of the `combinations` function offered by the `itertools` module, which is part of Python.
```python
    # compare all combinations of sounds and score them
    scorer = {}
    for a, b in combinations(letters, r=2):
        scorer[a, b] = scorer[b, a] = score_sounds(
            a, b, bipa=bipa, classes=classes, features=features)
        scorer[a, a] = score_sounds(
            a, a, bipa=bipa, classes=classes, features=features)
        scorer[b, b] = score_sounds(
            b, b, bipa=bipa, classes=classes, features=features)
    return scorer
```
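As announced above, one would usually create the `bipa` object and the feature vectors only once and pass them in via the keywords, so that the Broad IPA system is not reloaded on every call. A usage sketch:

```python
# load the transcription system and derive the feature vectors only once
bipa = TranscriptionSystem('bipa')
features = {
    "consonant": list(bipa['t'].featuredict),
    "vowel": list(bipa['a'].featuredict),
    "tone": list(bipa['⁵⁵'].featuredict)
}
classes = {"consonant": 1, "vowel": 1, "tone": 1}

scorer = get_scorer(
    ['p', 'b', 'a'], bipa=bipa, classes=classes, features=features)
```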
Now we can finally see how well this approach works. For convenience, I use the tabulate package for printing tables, and define a couple of sounds whose similarity I want to investigate.
```python
from tabulate import tabulate

cons = ['p', 't', 'b', 'd', 'pʰ', 'tʰ']
vows = ['a', 'e', 'i', 'o', 'u']
scorer = get_scorer(cons + vows)
```
By retrieving the similarity scores from the scorer, we can easily construct a matrix which illustrates the differences between the sounds in our sample.
```python
matrix = [[1 for x in cons] for y in cons]
for (i, a), (j, b) in combinations(enumerate(cons), r=2):
    matrix[i][j] = matrix[j][i] = round(scorer[a, b], 2)
for i, (c, r) in enumerate(zip(cons, matrix)):
    matrix[i] = [c] + r
print(tabulate(matrix, headers=cons, tablefmt='pipe'))
```
The matrix for the consonants in our sample is shown in the following table.
|    | p    | t    | b    | d    | pʰ   | tʰ   |
|----|------|------|------|------|------|------|
| p  | 1    | 0.96 | 0.96 | 0.92 | 0.96 | 0.92 |
| t  | 0.96 | 1    | 0.92 | 0.96 | 0.92 | 0.96 |
| b  | 0.96 | 0.92 | 1    | 0.96 | 0.92 | 0.88 |
| d  | 0.92 | 0.96 | 0.96 | 1    | 0.88 | 0.92 |
| pʰ | 0.96 | 0.92 | 0.92 | 0.88 | 1    | 0.96 |
| tʰ | 0.92 | 0.96 | 0.88 | 0.92 | 0.96 | 1    |
By replacing the `cons` variable with `vows`, we can produce the same matrix for our vowels.
|   | a    | e    | i    | o    | u    |
|---|------|------|------|------|------|
| a | 1    | 0.95 | 0.95 | 0.85 | 0.85 |
| e | 0.95 | 1    | 0.95 | 0.9  | 0.85 |
| i | 0.95 | 0.95 | 1    | 0.85 | 0.9  |
| o | 0.85 | 0.9  | 0.85 | 1    | 0.95 |
| u | 0.85 | 0.85 | 0.9  | 0.95 | 1    |
Those experienced in phonetics, distinctive features, and historical sound change may find the results strange, especially those for the consonants. We see that [p] is as similar to [t] as it is to [b], although most scholars would usually say that [p] and [b] are much closer to each other. The problem, and this is also one of the challenges of using feature systems for alignment analyses, is that all features are given the same weight by our Hamming similarity approach. When comparing the features by which [p], [t], and [b] differ according to the CLTS feature system, we can see that [p] and [t] differ by one feature (“place”), and [p] and [b] differ by another feature (“voicing”). Since the similarity measure presented here does not allow for a differential weighting of features and feature values, it yields the same similarity of 0.96 in both cases, since each pair differs in one out of 24 features.
References
Anderson, Cormac and Tresoldi, Tiago and Chacon, Thiago Costa and Fehn, Anne-Maria and Walworth, Mary and Forkel, Robert and List, Johann-Mattis (2018): A Cross-Linguistic Database of Phonetic Transcription Systems. Yearbook of the Poznań Linguistic Meeting 4.1. 21-53.
Chomsky, Noam and Halle, Morris (1968): The sound pattern of English. New York, Evanston, and London: Harper and Row.
Hamming, Richard W. (1950): Error detecting and error correcting codes. Bell System Technical Journal 29.2. 147–160.
International Phonetic Association (1999): Handbook of the International Phonetic Association. A guide to the use of the International Phonetic Alphabet. Cambridge: Cambridge University Press.
Jäger, Gerhard (2015): Support for linguistic macrofamilies from weighted alignment. Proceedings of the National Academy of Sciences 112.41. 12752–12757.
List, Johann-Mattis (2014): Sequence comparison in historical linguistics. Düsseldorf: Düsseldorf University Press.
List, Johann-Mattis (2017): Historical Language Comparison with LingPy and EDICTOR [Historischer Sprachvergleich mit LingPy und EDICTOR]. Department of Linguistic and Cultural Evolution: Max-Planck Institute for the Science of Human History.
Moran, Steven and McCloy, Daniel and Wright, Richard (eds.) (2014): PHOIBLE Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. http://phoible.org/.
Moran, Steven and Cysouw, Michael (2018): The Unicode Cookbook for Linguists: Managing writing systems using orthography profiles. Berlin: Language Science Press.
Supplementary Information
The code demonstrated here can be found on this GitHub Gist.