Extracting Transparent Compounds from Lexibank

Many languages make use of transparent compounding processes in order to express certain words in their lexicon. Over time, these processes can lose their transparency, making them hard to detect automatically. With large data collections, simple tests can be designed to detect transparent compounds and investigate their distribution. This study illustrates how a very rudimentary analysis of cross-linguistically recurring transparent compound patterns can be applied to Lexibank data with a few lines of Python code.

1 Introduction

Lexical motivation is a term commonly used to denote the semantic principles by which new word forms are coined from existing words in a language’s lexicon (Koch 2001). Thus, a compound like German Regenbogen “rainbow” can be motivated as a combination of the words Regen “rain” and Bogen “bow”, referring to a bow-like structure that emerges on rainy days. The new word formed from the two original words can be explained semantically, insofar as the combination of the two meanings of the original words (which can be further specified by certain semantic relations dictated by the grammar of the language exhibiting the word formation) relates to the meaning of the resulting word. However, the relation between the original meanings and the derived meaning does not logically dictate the new meaning; it merely motivates it, making the resulting meaning in some sense transparent for humans (see Sperber 1923, 11f, on examples like German Schneider “tailor”, which would literally mean “cutter” but never existed as an intermediate meaning for the word Schneider).

Motivation patterns themselves vary from language to language and from word form to word form. Often, historical changes make it difficult to detect them. It is therefore not easy to study cross-linguistic patterns of lexical motivation directly from existing data collections. Studies carried out so far have been based on the close investigation of carefully selected individual words across several dozen languages (Pepper 2020). Due to the need to annotate individual words manually, such studies usually have a limited scope, concentrating on a small selection of concepts and a small selection of languages.

With the Lexibank collection, whose second installment was recently published (Blum et al. 2025), we now have a rather large collection of wordlists that are semantically aligned (linking individual elicitation glosses to the Concepticon catalog, List et al. 2025) and standardized with respect to their transcription systems (using the technique of orthography profiles by Moran and Cysouw 2018 to convert original transcriptions to the CLTS standard, List et al. 2023). Along with the new Lexibank version, we have also introduced new techniques to search for partial colexifications in the Lexibank data collection. Under partial colexification (introduced in this form by List 2023), we understand automatically inferred (or inferable) meaning pairs in which a word expressing one meaning A recurs in another word expressing another meaning B (compare Regen “rain” and Regenbogen “rainbow” in German).

While it may be interesting to study partial colexifications derived from large comparative wordlists directly (see for example Rubehn and List 2025), it may also be beneficial to see how many truly transparent binomial compounds can be inferred from Lexibank. In the following, I will illustrate how this analysis can be carried out with just a few lines of code.

2 Materials

The starting point is the list of partial (affix) colexifications that can be computed from Lexibank with the help of the commands shared in the study presenting Lexibank 2 (Blum et al. 2025, query partial-colex.sql). The data comes in the form of a CSV file that consists of the ID of the language for which the affix colexification was observed, the first and the second concept, as well as the first and the second word form. For convenience, the data file is shared in the supplement accompanying this study (see the link provided at the end of this study). We could do the same analysis with the entire Lexibank dataset, but for the purpose of illustrating the basic idea here, it is easier to restrict the application to those word forms for which partial colexifications were already identified before.
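To give a concrete impression of the input, the following line shows what a single row of the CSV file could look like. All values are hypothetical and merely follow the column layout assumed by the code in the next section (language ID, language name, a column labelled None in the code, dataset ID, the two concepts, the two segmented word forms, and two form indices):

stan1295,German,,hypothetical-dataset,RAIN,RAINBOW,r eː g ə n,r eː g ə n b oː g ə n,1234,5678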

3 Methods

The method I propose here is quite simple. In a first pass, we extract and store individual wordlists for all languages in the data. After this merely technical operation, we iterate over all languages, create a lookup table (a dictionary in Python, also called a hash table) mapping each word to its corresponding concepts, and then split all words systematically into all possible combinations of two parts, checking each time whether both parts occur as single words in our lookup table. If both parts recur in the lookup table, we store the concept of the original word as well as the concepts of the word parts, counting how often they occur in the data. This yields triples of two concepts that motivate the meaning of a third concept, along with the number of examples we find in the data.
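To make the splitting strategy concrete, here is a minimal sketch with two hypothetical segmented forms standing in for Regen and Bogen (real Lexibank forms are CLTS-based transcriptions; the strings here are simplified for illustration):

lookup = {"r eː g ə n": ["RAIN"], "b oː g ə n": ["BOW"]}
tokens = "r eː g ə n b oː g ə n".split(" ")
for i in range(2, len(tokens) - 2):
    part_a, part_b = " ".join(tokens[:i]), " ".join(tokens[i:])
    if part_a in lookup and part_b in lookup:
        print(lookup[part_a], "+", lookup[part_b])

The split at index 5 recovers both constituents, so the sketch prints ['RAIN'] + ['BOW']; this is exactly the check that the full implementation below performs for every word in every variety.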

The code that I wrote to implement this strategy has only one external dependency, the tabulate package, which is used for conveniently printing tabular data. The other two imported modules (csv and collections) are part of Python’s standard library. All you have to do to run the code is therefore to install this one package via pip.

$ pip install tabulate

You should also download the CSV file with the partial colexifications from the supplement and place it in the same folder where you run the script. The following lines of code load the libraries, read the CSV file, and create individual wordlists (stored in the form of a list per language in a dictionary) for each language variety in the Lexibank collection. The last two lines make sure that words are not duplicated in the resulting data.

import csv
import collections
from tabulate import tabulate

# wordlists per variety: Language_ID -> [(concept, segmented form), ...]
data = collections.defaultdict(list)
header = [
        "Language_ID", 
        "Language", 
        "None", 
        "Dataset", 
        "Concept_A",
        "Concept_B", 
        "Segments_A", 
        "Segments_B", 
        "IDX_A", 
        "IDX_B"
        ]
with open("partial-colexifications.csv") as f:
    for row in csv.reader(f, delimiter=','):
        dpoint = dict(zip(header, row))
        data[dpoint["Language_ID"]] += [
                (
                    dpoint["Concept_A"],
                    dpoint["Segments_A"]
                    )]
        data[dpoint["Language_ID"]] += [
                (
                    dpoint["Concept_B"],
                    dpoint["Segments_B"]
                    )]
for language, words in data.items():
    data[language] = list(set(words))
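
As a quick optional check (not part of the original workflow), one can inspect how much data was read before moving on:

print(len(data), "language varieties")
print(sum(len(words) for words in data.values()), "concept-form pairs")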

The iteration over all individual language varieties is shown in the following lines of code. Here, we first create the lookup table using a defaultdict. Afterwards, we iterate over all concept-word pairs in a given wordlist and split each word systematically into all possible combinations of two parts, starting with index 2 on the left and ending (exclusively) with the index that is two items lower than the length of the word. Since words consist of sounds, segmented by spaces in the original data, we split them into lists first. If a hit is detected, we store it in a local hits list, from which we later remove potential duplicates by turning it into a set. The resulting triples are then appended to the master list of hits.

all_hits = collections.defaultdict(list)

for language, words in data.items():
    # lookup table mapping each segmented form to the concepts it expresses
    lookup = collections.defaultdict(list)
    for concept, word in words:
        lookup[word] += [concept]
    hits = []
    for concept, word in words:
        tokens = word.split(" ")
        # try all binary splits of the sound sequence
        for i in range(2, len(tokens) - 2):
            prt_a, prt_b = (
                    " ".join(tokens[:i]),
                    " ".join(tokens[i:])
                    )
            # both parts must occur as words in their own right
            if prt_a in lookup and prt_b in lookup:
                for concept_a in lookup[prt_a]:
                    for concept_b in lookup[prt_b]:
                        # order the part concepts consistently so that
                        # duplicates can be removed below
                        if concept_a < concept_b:
                            hits += [(
                                concept,
                                concept_b,
                                concept_a
                                )]
                        else:
                            hits += [(
                                concept,
                                concept_a,
                                concept_b
                                )]
    for hit in set(hits):
        all_hits[hit] += [language]

To inspect the hits for transparent binomial compounds, we first sort the data in reverse order, starting from those compounds that recur most frequently in the data. Afterwards, we write all data to file (using the csv writer from Python’s standard library). Finally, we print out the 20 most frequently recurring compounds.

best_hits = sorted(
        all_hits.items(), 
        key=lambda x: len(x[1]), 
        reverse=True
        )
table = []
with open("hits.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow([
        "Concept_AB", 
        "Concept_A", 
        "Concept_B", 
        "HITS"])
    for k, v in best_hits:
        writer.writerow(list(k) + [len(v)])
        table += [list(k) + [len(v)]]

print(
    tabulate(
        table[:20], 
        tablefmt="pipe", 
        headers=[
            "New Concept", 
            "Concept A", 
            "Concept B", 
            "Hits"]
        )
    )

4 Results

With this little script of fewer than 100 lines of code, we extract 52694 different motivation patterns from the data. Most of these patterns, however, occur only once and can safely be regarded as coincidental. Only 6124 patterns occur at least 2 times, and if we restrict the analysis to those patterns that recur at least 5 times, we are left with 931 motivation patterns. The overwhelming majority of the 52694 patterns are thus singletons in our data.
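These counts can be recomputed directly from the all_hits dictionary created above; the following sketch simply reuses the variable names from the code in the previous section:

sizes = [len(languages) for languages in all_hits.values()]
print("all patterns:   ", len(sizes))
print("at least 2 hits:", sum(1 for s in sizes if s >= 2))
print("at least 5 hits:", sum(1 for s in sizes if s >= 5))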

Judging from the frequently recurring patterns shown in Table 1, we find no real surprises, but can rather see very typical motivation patterns for very typical compounds, including several numerals (ELEVEN = TEN + ONE), kinship terms (PARENTS = MOTHER + FATHER), and animals of a particular gender or age (STALLION = MALE (OF ANIMAL) + HORSE).

Table 1: The 20 most frequently recurring transparent compounds.

| New Concept      | Concept A        | Concept B          | Hits |
|------------------|------------------|--------------------|------|
| BARK             | TREE             | SKIN               | 121  |
| TWELVE           | TWO              | TEN                | 118  |
| TWENTY           | TWO              | TEN                | 91   |
| ELEVEN           | TEN              | ONE                | 91   |
| PARENTS          | MOTHER           | FATHER             | 88   |
| FIFTEEN          | TEN              | FIVE               | 82   |
| TEAR (OF EYE)    | WATER            | EYE                | 54   |
| SOW (FEMALE PIG) | PIG              | FEMALE (OF ANIMAL) | 54   |
| MARE             | HORSE            | FEMALE (OF ANIMAL) | 53   |
| NOSTRIL          | NOSE             | HOLE               | 50   |
| FIFTY            | TEN              | FIVE               | 47   |
| SKULL            | HEAD             | BONE               | 44   |
| EWE              | SHEEP            | FEMALE (OF ANIMAL) | 44   |
| STALLION         | MALE (OF ANIMAL) | HORSE              | 43   |
| NIT              | LOUSE            | EGG                | 43   |
| SNAKE            | LONG             | ANIMAL             | 42   |
| MALE GOAT        | MALE (OF ANIMAL) | GOAT               | 41   |
| ADJUDICATE       | DO OR MAKE       | COURT              | 40   |
| LIP              | SKIN             | MOUTH              | 39   |
| EYELID           | SKIN             | EYE                | 39   |

When looking a bit further down the list, concentrating on cases that are less frequently reflected in the data, we still find several interesting patterns. Concentrating, for example, on the verb BRING, we can check the patterns that motivate this verb in the data. The relevant cases can be easily extracted with the following lines of code.

bring = [row for row in table if
         row[0] == 'BRING' and
         row[-1] > 1]

print(
    tabulate(
        bring, 
        tablefmt="pipe", 
        headers=[
            "New Concept", 
            "Concept A", 
            "Concept B", 
            "Hits"]
        )
    )

The results are shown in Table 2 below. That we find several instances where COME is one of the parts of the compound can easily be explained by the fact that COME is a verb often involved in grammaticalization processes where its original meaning becomes more abstract, expressing that an action has been accomplished or is desired (compare constructions with venir in French and Spanish, such as je viens de dire “I just said” in French, or vengo a decir “I have to say” in Spanish).

Table 2: Motivation patterns for BRING that recur at least twice in the data.

| New Concept | Concept A    | Concept B     | Hits |
|-------------|--------------|---------------|------|
| BRING       | COME         | CARRY         | 16   |
| BRING       | TAKE         | COME          | 14   |
| BRING       | LEAD (GUIDE) | COME          | 8    |
| BRING       | COME         | CARRY IN HAND | 5    |
| BRING       | CARRY        | ARRIVE        | 3    |
| BRING       | COME BACK    | CARRY         | 2    |

However, what we can also see from this table is that we face a semantic problem when the concept list approach in Lexibank is applied to lexical motivation patterns. The problem is that we base our analysis on individual pairs of concepts and words (allowing words to recur multiple times if they pair with different concepts in our data). Each motivation pattern thus presents only one possible instance of motivation, but this instance need not be the one actually attested in the individual languages from which we draw the examples. Thus, if a language colexifies COME BACK and COME, as well as CARRY and CARRY IN HAND, and motivates BRING as a compound of COME and CARRY, our approach will record four individual motivation patterns: COME BACK + CARRY and COME BACK + CARRY IN HAND, as well as COME + CARRY and COME + CARRY IN HAND.
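This multiplication of patterns can be illustrated with a small sketch that mirrors the two nested loops in the code above, using the concept labels from the example just given:

from itertools import product

# concepts colexified by the two parts of a hypothetical word for BRING
concepts_a = ["COME", "COME BACK"]
concepts_b = ["CARRY", "CARRY IN HAND"]

# the nested iteration over the lookup entries yields all four combinations
for concept_a, concept_b in product(concepts_a, concepts_b):
    print("BRING <-", concept_a, "+", concept_b)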

It is not trivial to solve this problem. We cannot simply select one out of several solutions, since we do not know which should be taken as the most appropriate. It seems important to annotate our data manually, indicating detailed semantic relations between all individual concepts in Lexibank. But even this annotation would not help us if we are missing broader concepts in the individual wordlists, or if wordlists exhibit gaps (see Bocklage et al. 2024). In the fourth installment of the Database of Cross-Linguistic Colexifications (Tjuka et al. 2025), we have furthermore made sure to represent words coded for overarching concepts like ARM OR HAND by two individual words, one representing ARM and one representing HAND. This step further unifies the concepts underlying Lexibank and may also prove helpful in searching for transparent compounds.

5 Conclusion

In this little study, I have tried to show how we can search for transparent binomial compounds in Lexibank with just a few lines of code. The study yields some interesting results, but it also points to problems that we must address in future studies. One problem for which I do not yet have a solution is the proper handling of language-specific meanings involved in lexical motivation, as opposed to the cross-linguistic concepts defined by the Concepticon catalog. The problem here is that a strict concept-based (onomasiological) approach seems to conflict with the identification of language-specific motivation patterns. The future will hopefully show how we can handle this problem consistently.

References

Blum, Frederic and Barrientos, Carlos and Englisch, Johannes and Forkel, Robert and Greenhill, Simon J. and Rzymski, Christoph and List, Johann-Mattis (2025): Lexibank 2: pre-computed features for large-scale lexical data [version 1; peer review: awaiting peer review]. Open Research Europe 5.126. 1-19. https://doi.org/10.12688/openreseurope.20216.1

Bocklage, Katja and Di Natale, Anna and Tjuka, Annika and List, Johann-Mattis (2024): Directional Tendencies in Semantic Change. Humanities Commons 2024.2. 1-28. [Preprint, under review, not peer-reviewed] https://doi.org/10.17613/0y0r-f341

List, Johann-Mattis and Anderson, Cormac and Tresoldi, Tiago and Forkel, Robert (2023): Cross-Linguistic Transcription Systems [Dataset, Version 2.3.0]. Leipzig: Max Planck Institute for Evolutionary Anthropology. https://clts.clld.org

List, Johann-Mattis and Tjuka, Annika and Blum, Frederic and Kučerová, Alžběta and Barrientos Ugarte, Carlos and Rzymski, Christoph and Greenhill, Simon J. and Forkel, Robert (2025): CLLD Concepticon [Dataset, Version 3.4.0]. Leipzig: Max Planck Institute for Evolutionary Anthropology. https://concepticon.clld.org

Koch, Peter (2001): Lexical typology from a cognitive and linguistic point of view. In: Martin Haspelmath and Ekkehard König and Wulf Oesterreicher and Wolfgang Raible (eds.): Language typology and language universals. 20.2. Berlin and New York: de Gruyter. 1142-1178.

List, Johann-Mattis (2023): Inference of partial colexifications from multilingual wordlists. Frontiers in Psychology 14.1156540. 1-10. https://doi.org/10.3389/fpsyg.2023.1156540

Moran, Steven and Cysouw, Michael (2018): The Unicode Cookbook for Linguists: Managing writing systems using orthography profiles. Berlin: Language Science Press.

Pepper, Steve (2020): The typology and semantics of binominal lexemes: noun compounds and their functional equivalents. PhD thesis. Oslo: University of Oslo.

Rubehn, Arne and List, Johann-Mattis (2025): Partial Colexifications Improve Concept Embeddings. In: Proceedings of the Association for Computational Linguistics 2025. 1-15. https://doi.org/10.48550/arXiv.2502.09743

Sperber, Hans (1923): Einführung in die Bedeutungslehre. Bonn and Leipzig: Kurt Schroeder.

Tjuka, Annika and Forkel, Robert and Rzymski, Christoph and List, Johann-Mattis (2025): Advancing the Database of Cross-Linguistic Colexifications with New Workflows and Data. arXiv 2503.11377. 1-14. [Preprint, under review, not peer-reviewed] https://doi.org/10.48550/arXiv.2503.11377

Cite this article as: List, Johann-Mattis (2025): “Extracting Transparent Compounds from Lexibank,” in Computer-Assisted Language Comparison in Practice, 8.1: 39-47 [first published on 26/05/2025], URL: https://calc.hypotheses.org/8526, DOI: 10.15475/calcip.2025.1.5.

Download the article as PDF: calcip-08-1-5.pdf

Copyright information: This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Supplementary Materials: Code and data are archived with Zenodo (https://doi.org/10.5281/zenodo.15478974).

Funding Information: This project has received funding from the European Research Council (ERC) under the European Union’s Horizon Europe research and innovation programme (Grant agreement No. 101044282). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.



