Exporting Sublists from a Wordlist with LingPy and Concepticon

When dealing with linguistic datasets, we may often want to export only a small part of our data, for example, only vocabulary in a certain range, such as the Swadesh list of 200 items or the list of 35 items by Yakhontov (originally published in Starostin 1991). Thanks to the pyconcepticon API and LingPy’s built-in export functions for wordlists, this task can be done in just a few lines of code, as we will see below. If you prefer to see the raw code instead of the step-by-step explanation, you can find a GitHub Gist here.

In order to get started, I work with a dataset that was originally published along with a paper by Kolipakam et al. (2018). This dataset can be downloaded from the supplemental material accompanying the paper; the important file is a zip-folder called SI_robustness_cognate_coding.zip, from which you have to extract the file DravLex.tsv. I take this dataset because it is published, because it is easy to obtain a copy of the data, and because the data has already been linked to Concepticon. The link is not yet official in Version 1.1, but you can already retrieve the concept list from our GitHub repository, where it is labelled Kolipakam-2018-100, following our Concepticon naming convention, which combines the first author, the year of publication, and the number of items in order to create a stable identifier for a concept list.

Before we start, you need to make sure that you have the pyconcepticon API installed in its most recent version. You will find all important instructions for this on the GitHub repository of the Concepticon project (see clld/concepticon-data). You also need to have LingPy installed in its current version 2.6 (ideally, take the most recent version from our GitHub repository: lingpy/lingpy). Equipped with this information and the dataset, open a terminal in the folder in which you have placed the file DravLex.tsv, and start by loading LingPy and the pyconcepticon API.

[IN ]: from pyconcepticon.api import Concepticon
[IN ]: from lingpy import *

This is of course easy, but now we will process the concept list of Kolipakam et al. and extract those items which we can also find in the much shorter list of 35 items by Yakhontov. In order to do so, you need to understand how Concepticon stores data in the Python API. This may be somewhat confusing at first sight, since the class hierarchy is designed to directly reflect the online version of the Concepticon and makes extensive use of dictionaries (more precisely, OrderedDict). But let’s proceed step by step and first simply load the API:

[IN ]: CNC = Concepticon()
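
Since the conceptlists attribute of this object behaves like a Python dictionary with concept list identifiers as keys, you can quickly check what is available (the exact number of lists depends on the version of the Concepticon data you have installed, so no output is shown here):

[IN ]: print(len(CNC.conceptlists))
[IN ]: print(sorted(CNC.conceptlists)[:3])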

And now we load the concept lists by Yakhontov and Kolipakam:

[IN ]: yakhontov = CNC.conceptlists['Yakhontov-1991-35']
[IN ]: kolipakam = CNC.conceptlists['Kolipakam-2018-100']

What is important here is the structure of the Conceptlist objects that we just loaded. Each of them has an attribute concepts, which is itself a dictionary with the identifier of a given concept as key and a Concept object as value. The Concept object in turn has several attributes, the most important ones for our purposes being concepticon_id and english. We need the english attribute of the data by Kolipakam et al. to determine the Concepticon identifiers in the dataset, since the dataset itself does not officially link its entries to Concepticon, and we use the Concepticon identifiers to check which of the concept sets in the Dravidian data also occur in Yakhontov’s list. To get a feel for these objects, we can first inspect a single concept (what exactly is printed depends on the version of the Concepticon data):
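
[IN ]: first = list(kolipakam.concepts.values())[0]
[IN ]: print(first.concepticon_id, first.english)

With this structure in mind, we slightly modify our two lists (yakhontov and kolipakam) and extract a sublist of English glosses from them: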

[IN ]: yakhontov = [c.concepticon_id for c in yakhontov.concepts.values()]
[IN ]: kolipakam = [(c.concepticon_id, c.english) for c in kolipakam.concepts.values()]
[IN ]: sublist = [english for concepticon_id, english in kolipakam if concepticon_id in yakhontov]
[IN ]: print('Overlap shows {0} items in common.'.format(len(sublist))) 
[OUT]: Overlap shows 31 items in common.
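As a sanity check, the same overlap can also be computed as a set intersection of the Concepticon identifiers, which should again report 31 shared items (a minimal alternative formulation, not required for the workflow):

[IN ]: print(len(set(yakhontov).intersection(cid for cid, _ in kolipakam)))
[OUT]: 31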

We can now load the DravLex.tsv file and write it back to disk in such a way that only the subset of concepts which occur in our sublist is exported. For this, we use the output method of LingPy’s Wordlist class, but in contrast to the normal output procedure, we specify a subset along with a condition. Since the condition is evaluated internally as Python code passed to LingPy in the form of a string, this looks a bit ugly: we define a dictionary whose keys are columns that occur in the wordlist and whose values are code snippets that are checked against each column’s content. But it is the fastest way to accomplish this task in LingPy:

[IN ]: wl = Wordlist('DravLex.tsv')
[IN ]: wl.output('tsv', filename='DravLex-sublist', subset=True, rows={"concept": "in "+str(sublist)})
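
Note that the condition string is checked against the content of the specified column for each row, so the same mechanism works for other columns as well. For example, one could restrict the export to a couple of languages (a sketch only: the doculect names here are hypothetical and need to match those in your data):

[IN ]: wl.output('tsv', filename='DravLex-two-languages', subset=True, rows={"doculect": "in ['Tamil', 'Telugu']"})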

That’s all we have to do. In order to verify that we really exported only a part of the wordlist, we can reload it and count its basic parameters (number of languages, concepts, and words):

[IN ]: wl2 = Wordlist('DravLex-sublist.tsv') 
[IN ]: print('{0}:{1} languages, {2}:{3} concepts, and {4}:{5} words'.format(wl.width, wl2.width, wl.height, wl2.height, len(wl), len(wl2))) 
[OUT]: 20:20 languages, 100:31 concepts, and 2114:660 words

Another way to achieve this goal, which has the advantage of allowing you to avoid writing the data to file and reloading it, is to create a new wordlist object from the original wordlist object. In order to do so, we create a dictionary with integers as keys, where the key 0 holds the header of the wordlist object and all other keys link to lists of values, just as we know it from normal wordlist objects.

[IN ]: D = {0: [c for c in wl.columns]}

We can fill this dictionary, which so far contains only the header, by iterating over all entries in our original wordlist and checking whether each concept occurs in our sublist:

[IN ]: for idx, concept in wl.iter_rows('concept'): 
......     if concept in sublist:
......         D[idx] = [entry for entry in wl[idx]] 

This dictionary can then directly be passed to the Wordlist class and loaded in the same way in which we would load a normal wordlist.

[IN ]: wl2 = Wordlist(D)

We can again quickly verify that this yields the same expected output.

[IN ]: print('{0}:{1} languages, {2}:{3} concepts, and {4}:{5} words'.format(wl.width, wl2.width, wl.height, wl2.height, len(wl), len(wl2))) 
[OUT]: 20:20 languages, 100:31 concepts, and 2114:660 words

Which of the two methods to use depends on personal preference and also on the task at hand. It may be preferable to create a sublist on the fly and manipulate it further in LingPy, or it may be useful to save it to file, which is faster with the subset option in LingPy’s output function for wordlists. By playing a bit with the conditions, many more things can be done to manipulate wordlist objects within Python, without having to edit them manually or to write wordlist content to other data types in order to handle them with additional libraries, like, for example, Pandas. The only problem, at least in my experience, is that users seem to find it difficult to grasp the major concepts behind this practice in LingPy, as they were developed long before tools like Pandas became common ways of manipulating arrays and tabular data.
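
To illustrate the last point, here is a minimal sketch of how the dictionary representation created above could be handed over to Pandas (assuming you have pandas installed; key 0 holds the header, all other keys hold the rows):

[IN ]: import pandas as pd
[IN ]: df = pd.DataFrame([D[idx] for idx in D if idx > 0], columns=D[0])
[IN ]: print(df.shape)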

References

Kolipakam, V., F. Jordan, M. Dunn, S. Greenhill, R. Bouckaert, R. Gray, and A. Verkerk (2018): A Bayesian phylogenetic study of the Dravidian language family. Royal Society Open Science 5.171504. 1-17.

Starostin, S. (1991): Altajskaja problema i proischoždenije japonskogo jazyka [The Altaic problem and the origin of the Japanese language]. Nauka: Moscow.



About Johann-Mattis List

Since the beginning of 2023, I have been heading the Chair of Multilingual Computational Linguistics in Passau. In my research, I generally take a data-based, empirical, and quantitative perspective on language change and language history, with a special focus on South-East Asian languages. In contrast to purely computer-based approaches, however, I try to keep my research closely aligned with traditional historical linguistics and linguistic theory, which is why I pursue a computer-assisted rather than a purely computer-based approach.
