Tag Archives: code example

A New Python Library for the Manipulation and Annotation of Linguistic Sequences

The Python package linse (https://pypi.org/project/linse) offers various methods for the manipulation and annotation of linguistic sequences. In this short overview, we summarize its major functionalities, provide some information on its background, and explain how we intend to develop it further.
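
For a first impression, here is a minimal sketch of what sequence annotation with linse might look like. The module path linse.annotate and the soundclass function are taken from the package documentation as we recall it, but the exact names, signatures, and return values should be verified there before use:

```python
# A minimal sketch (to be checked against the linse documentation): we assume
# that linse.annotate provides a soundclass() function that converts a
# segmented transcription into sound classes of a given model.
from linse.annotate import soundclass

# a segmented transcription of German "Mutter" (mother)
segments = "m ʊ t ə r".split()

# convert the segments to SCA sound classes
classes = soundclass(segments, "sca")
print(list(zip(segments, classes)))
```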

Continue reading

How to Map Concepts with the PySem Library

Mapping concepts to common concept identifiers across resources has become an important task in the aggregation of lexical data from different sources. With the Concepticon, this task is facilitated by a dedicated mapping algorithm by which a concept list can be automatically mapped to the concept sets in the Concepticon reference catalogue and later refined manually. PySem offers an additional possibility to map concepts to the Concepticon, but in contrast to the algorithm used in the Concepticon workflow, the PySem approach can be accessed from within Python applications.
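
For illustration, the following sketch shows how such a mapping might be invoked from Python. The module path pysem.glosses, the to_concepticon helper, and the structure of its input and output are assumptions here and should be verified against the PySem documentation:

```python
# A minimal sketch, assuming PySem exposes a Concepticon-mapping helper roughly
# along these lines; the exact module path, argument structure, and return
# value need to be checked against the PySem documentation.
from pysem.glosses import to_concepticon

# map two English glosses to candidate Concepticon concept sets
# (assumed return value: a dict keyed by gloss, with lists of candidates)
mappings = to_concepticon([{"gloss": "hand"}, {"gloss": "arm"}], language="en")
for gloss, candidates in mappings.items():
    print(gloss, candidates)
```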

Continue reading

Mapping Multi-SimLex to Concepticon

Multi-SimLex (https://multisimlex.com) is a multilingual resource which provides user ratings for word pairs translated into different languages. The data are important for the evaluation of methods that derive word embeddings from large corpora. While it is desirable to link such a large dataset to Concepticon, it is difficult to do so in practice, given that the dataset represents word similarity ratings without any clear reference to concepts. In this post, I will show how the data can nevertheless be linked to Concepticon, and how the original Multi-SimLex data can be represented, without losing any information, in the form of a Concepticon concept list.

Continue reading

How to handle semantic data with tables (How to do X in linguistics 3)

Semantic data are notoriously difficult to handle. In contrast to the form part of the linguistic sign, meanings are not organized sequentially but rather network-like (List 2014: 34f). As a result, we often encounter problems when trying to model complex relations between different meanings, specifically in those cases where we have only tables as our base material. This blog post summarizes how major types of semantic data are handled in the Concepticon project and how they can be accessed in code.
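
As a small illustration, the following sketch shows how concept sets and their relations could be inspected with the pyconcepticon API. A local clone of the concepticon-data repository is assumed, and the relations attribute used here is an assumption that may be named differently in the released API:

```python
# A minimal sketch, assuming a local clone of concepticon-data; the
# api.relations attribute for network-like concept relations is an assumption.
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")  # path to a local clone

# look up a concept set and print its gloss and definition
cs = api.conceptsets["1277"]  # "1277" is just an example identifier
print(cs.gloss, "-", cs.definition)

# inspect relations (e.g. "broader", "narrower") attached to this concept set
for target, relation in api.relations.get(cs.id, {}).items():
    print(cs.gloss, relation, target)
```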

Continue reading

Behind the Sino-Tibetan Database of Lexical Cognates: Concept selection

One of the crucial steps in creating a database of lexical cognates is the selection of concepts one wants to use for a given study. Many scholars use the classical Swadesh list of 200 items (Swadesh 1952) for this purpose, or the combined list of 207 items, in which the former has been merged with Swadesh's updated list of 100 items (Swadesh 1955). The combined list is often mistakenly attributed to Swadesh himself, although the first official reference seems to be Comrie (1977). Nevertheless, it is useful to give the selection of concepts some more thought initially.

Continue reading

Using pyconcepticon to map concept lists (II)

Mapping a given concept list to Concepticon can be done in a straightforward way, even if the automatic mappings need manual refinement. But what can we do when we have to deal with larger datasets, say, a dictionary from which we want to extract specific concepts, such as the ones in the classical Swadesh list of 100 items (Swadesh 1955)?
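
To make the idea more concrete, here is a sketch of how the concepts of Swadesh's 100-item list could be retrieved with pyconcepticon and used to filter a larger, already mapped dataset. The dictionary data shown here is purely hypothetical, and a local clone of concepticon-data is assumed:

```python
# A minimal sketch: "Swadesh-1955-100" is the Concepticon identifier of
# Swadesh's 100-item list; the dictionary data below is hypothetical and
# assumed to consist of (form, concepticon_id) pairs mapped beforehand.
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")
swadesh = api.conceptlists["Swadesh-1955-100"]

# the Concepticon IDs of all concept sets in the Swadesh-100 list
target_ids = {
    c.concepticon_id for c in swadesh.concepts.values() if c.concepticon_id
}

# hypothetical dictionary data: (form, concepticon_id) pairs
dictionary = [("mano", "1277"), ("correr", "1519"), ("mesa", "640")]

# keep only the entries whose concepts occur in the Swadesh-100 list
subset = [row for row in dictionary if row[1] in target_ids]
print(subset)
```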

Continue reading

From Fieldwork to Trees 2: Cognate coding

In a previous post (Kaiping 2018), I described how to convert matrix-shaped word lists given in Excel into the long format that LingPy and other software can work with. My motivation was to provide my colleague Yunus Sulistyono with a good way to compare the lexicon of his Alorese [alor1247] dialects and to understand the relationships between them. In this post, the data is automatically cognate-coded and converted into CLDF.

Continue reading
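
As a rough illustration of the cognate-coding step (not the original script of the post), LingPy's LexStat class can be used along the following lines; the file name and the clustering threshold are only placeholder values:

```python
# A minimal sketch of automatic cognate detection with LingPy's LexStat class;
# "wordlist.tsv" stands for a wordlist in LingPy's long format with DOCULECT,
# CONCEPT, and IPA/TOKENS columns.
from lingpy import LexStat

lex = LexStat("wordlist.tsv")

# cluster words into cognate sets with the SCA method and write the
# assignments to a new "cogid" column
lex.cluster(method="sca", threshold=0.45, ref="cogid")

# write the annotated wordlist back to disk (conversion to CLDF is a separate step)
lex.output("tsv", filename="wordlist-cognates")
```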

From Fieldwork to Trees 1: Data preparation

A colleague of mine has recently returned from his fieldwork, where he collected data on the dialectal variation of the Alorese language of Alor and Pantar in the East Nusa Tenggara province of Indonesia. His data cover 13 Alorese varieties, including word lists. One obvious step for comparing the dialects is to mark which forms are obviously cognate and then use a standard tree (or network) construction algorithm to display the shared signal in the data. With standard tools and a bit of Python glue, this is an easy task. A script covering the three steps can be found in my repository on GitHub. In this first part, I will describe how to get an Excel file into a format LingPy can deal with.
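
As a rough illustration of this conversion step (not the original script), a matrix-shaped Excel sheet can be reshaped into LingPy's long format with pandas along the following lines; the file names and the column layout are assumptions:

```python
# A minimal sketch: convert a matrix-shaped Excel sheet, with concepts as rows
# and varieties as columns, into the long format expected by LingPy.
import pandas as pd

matrix = pd.read_excel("wordlist.xlsx", index_col=0)  # rows: concepts, columns: varieties

rows = []
for concept, forms in matrix.iterrows():
    for doculect, form in forms.items():
        if pd.notna(form):  # skip empty cells
            rows.append((len(rows) + 1, doculect, concept, form))

long = pd.DataFrame(rows, columns=["ID", "DOCULECT", "CONCEPT", "IPA"])
long.to_csv("wordlist.tsv", sep="\t", index=False)
```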

Continue reading

Exporting Sublists from a Wordlist with LingPy and Concepticon

When dealing with linguistic datasets, we may often want to export only a small part of our data, for example, only vocabulary in a certain range, such as the Swadesh list of 200 items or the list of 35 items by Yakhontov (originally published in Starostin 1991). Thanks to the pyconcepticon API and LingPy's built-in export functions for wordlists, this task can be done in just a few lines of code, as we will see below. If you prefer to see the raw code instead of the step-by-step explanation below, you can find a GitHub Gist here.
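
As a preview, the following sketch combines pyconcepticon and LingPy for this task. The file names are placeholders, the wordlist is assumed to use Concepticon glosses in its CONCEPT column, and the exact condition syntax of LingPy's subset export should be checked against the step-by-step explanation and the LingPy documentation:

```python
# A minimal sketch, assuming a local concepticon-data clone; "Swadesh-1952-200"
# is the Concepticon identifier of Swadesh's 200-item list, and the rows/subset
# mechanism of Wordlist.output() is used as described in the LingPy docs.
from lingpy import Wordlist
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")

# collect the Concepticon glosses of all concepts in Swadesh's 200-item list
swadesh200 = [
    c.concepticon_gloss
    for c in api.conceptlists["Swadesh-1952-200"].concepts.values()
    if c.concepticon_gloss
]

wl = Wordlist("wordlist.tsv")

# export only those rows whose concept occurs in the Swadesh-200 list
wl.output(
    "tsv",
    filename="wordlist-swadesh200",
    subset=True,
    rows={"concept": "in " + str(swadesh200)},
)
```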

Continue reading