Category Archives: Code

Mapping Multi-SimLex to Concepticon

Multi-SimLex (https://multisimlex.com) is a multilingual resource which provides user ratings for word pairs translated into different languages. The data is important for the evaluation of methods that derive word embeddings from large corpora. While it is on the one hand desirable to link such a large dataset to Concepticon, it is difficult to do so in concrete terms, given that the dataset represents word similarity ratings without any clear reference to concepts. In this post, I will show how the data can nevertheless be linked to Concepticon, and how the original Multi-SimLex data can be represented, without losing any information, in the form of a Concepticon Concept List.
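As a rough sketch of the basic idea (the file name and column labels below are invented for illustration; the actual conversion described in the post is more elaborate), each word pair and its rating can be written to one row of a concept-list-like table, so that nothing from the original data is lost:

```python
# Hypothetical sketch: turn Multi-SimLex word pairs into a concept-list-like table.
import csv

rows = []
with open("multisimlex-eng.csv", encoding="utf-8") as f:          # placeholder file
    for i, row in enumerate(csv.DictReader(f), start=1):
        rows.append({
            "NUMBER": i,                  # running number, as in a concept list
            "ENGLISH_A": row["word1"],    # first word of the pair (column name assumed)
            "ENGLISH_B": row["word2"],    # second word of the pair (column name assumed)
            "SIMILARITY": row["score"],   # the original rating is kept as an extra column
        })

with open("multisimlex-conceptlist.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```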

Continue reading

How to work with WALS data in CLDF (How to do X in linguistics 5)

With an increasing amount of data being available in Cross-Linguistic Data Formats, it is becoming more and more important to know the basics underlying the Python packages designed by the CLDF initiative, in order to allow interested users quick access to the data. This very short tutorial illustrates how the CLDF data underlying the World Atlas of Language Structures (WALS) can be accessed and written to a table in which the values for all WALS parameters are listed in one row for each language variety.
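A minimal sketch of this step, assuming a local copy of the WALS CLDF dataset and a recent version of pycldf (the path is a placeholder):

```python
# Collect one {parameter: value} mapping per language variety from the WALS CLDF data.
from collections import defaultdict
from pycldf import Dataset

wals = Dataset.from_metadata("wals/cldf/StructureDataset-metadata.json")  # placeholder path

values = defaultdict(dict)
for row in wals.iter_rows("ValueTable", "languageReference", "parameterReference", "value"):
    values[row["languageReference"]][row["parameterReference"]] = row["value"]

# each entry of `values` corresponds to one row of the final table
print(len(values), "language varieties")
```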

Continue reading

How to handle semantic data with tables (How to do X in linguistics 3)

Semantic data are notoriously difficult to handle. In contrast to the form part of the linguistic sign, meanings are not organized sequentially, but rather network-like (List 2014: 34f). As a result, we often encounter problems when trying to model complex relations between different meanings, specifically in those cases where we have only tables as our base material. This blog post tries to summarize how major types of semantic data are handled in the Concepticon project and how they can be accessed in code.
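To give a rough idea of what such access looks like, here is a minimal, hedged sketch using the pyconcepticon API (it assumes a local clone of the concepticon-data repository; attribute names follow the columns of Concepticon's main table):

```python
# Access Concepticon's concept sets via pyconcepticon (sketch, not the post's code).
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")  # placeholder path to a local clone

# index the concept sets of Concepticon's central table by their gloss
by_gloss = {cs.gloss: cs for cs in api.conceptsets.values()}

tree = by_gloss["TREE"]
print(tree.id, tree.semanticfield, tree.definition)
```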

Continue reading

Computing colexification statistics for individual languages in CLICS

In the last two weeks we had a renewed interest in colexifications, especially in the third generation of the “Database of Cross-Linguistic Colexifications” (Rzymski, Tresoldi, et al. 2020). The attention was due to two different and independent requests within a few days. For those unfamiliar, the concept of “colexification” (François 2008) refers to instances in which a language uses the same lexeme to express more than one comparable concept (e.g., Russian де́рево, which can mean both “tree” and “wood”). The CLICS project, first developed by List et al. (2014), is an offspring of the transparent approaches to standardization, aggregation, and curation of linguistic data that have been promoted within the CLDF framework (Forkel et al. 2018). It uses standardized lexical databases to identify “colexification networks”.
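The basic intuition can be sketched in a few lines of plain Python (the forms below are made up for illustration; this is not the CLICS code itself):

```python
# Toy sketch: two concepts are colexified in a language if they share the same form.
from collections import defaultdict
from itertools import combinations

forms = {
    "TREE": "дерево",
    "WOOD": "дерево",
    "WATER": "вода",
}

by_form = defaultdict(set)
for concept, form in forms.items():
    by_form[form].add(concept)

colexifications = [
    pair for concepts in by_form.values() if len(concepts) > 1
    for pair in combinations(sorted(concepts), 2)
]
print(colexifications)  # [('TREE', 'WOOD')]
```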

Continue reading

Concept Similarity in STARLING

STARLING is a software package, originally created by Sergej A. Starostin, which is designed for historical linguists who want to build their own etymological dictionaries. It is not only a database system that allows its users to set up a very straightforward relational database structure, but also a package full of surprises, since it contains many methods that are supposed to automate specific tasks in historical linguistics. These range from phylogenetic tree reconstruction via the preliminary identification of sound correspondences to the comparison of elicitation glosses for their semantic similarity. While phylogenetic reconstruction and sound correspondences are now quite successfully handled in alternative software packages, I thought it would be interesting to discuss the routine for assessing concept similarity in more detail, since it offers interesting possibilities for those who practice historical language comparison.

Continue reading

Illustrating linguistic data reuse: a modest database for semantic distance

Besides new algorithms and tools that facilitate established workflows, one change prompted by computer-assisted approaches to language comparison is a distinct relationship between scientists and their data. A critical part of our work, and perhaps the one with the most lasting impact, is to promote an approach in which the data life-cycle is not constrained within the limits of planning and publishing a study. Data are organized and planned for reuse in investigations perhaps not even considered during collection, with the output of one project becoming the input of another.

Continue reading

Feature-Based Alignment Analyses with LingPy and CLTS (2)

Having seen how we can obtain a simple scorer derived from the feature system in CLTS (List et al. 2019) in last month’s post, what is missing now, in order to use the scorer for alignment analyses, is an alignment function which can take the scorer as an argument. If one does not have higher ambitions with respect to the alignment function itself, this step can be achieved in a very straightforward way with the help of LingPy’s (List et al. 2018) nw_align() or sw_align() methods. As can be seen from the documentation, these methods take as input two sequences (i.e., lists of sounds), along with a scoring function. Obviously, all we need to do now is to create our specific scorer based on the CLTS features and then pass this scoring function, along with our sequences, to the function.
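As a minimal sketch of that step (not the exact code from the post, and with a toy identity-based scorer standing in for the CLTS-based one), the scoring function can be materialized as a pairwise lookup and handed to nw_align(); check the LingPy documentation for the exact form the scorer argument should take:

```python
# Align two sound sequences with LingPy's nw_align and a custom scorer (sketch).
from itertools import product
from lingpy.align.pairwise import nw_align

def score(a, b):
    # placeholder scoring function: identity scores high, everything else low
    return 2 if a == b else -1

seqA = ["kʰ", "a", "n", "d"]
seqB = ["h", "a", "n", "t"]

# materialize the scoring function as a dictionary over all relevant segment pairs
scorer = {(a, b): score(a, b) for a, b in product(set(seqA), set(seqB))}
scorer.update({(b, a): s for (a, b), s in scorer.items()})

almA, almB, sim = nw_align(seqA, seqB, scorer=scorer, gap=-1)
print(almA, almB, sim)
```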

Continue reading

Feature-Based Alignment Analyses with LingPy and CLTS (1)

In the past, people have repeatedly asked me how they could use their own scoring functions in combination with LingPy’s alignment algorithms. Their major concern was that the sound-class-based scoring systems we use in LingPy might fail to reflect the true phonetic similarity of sounds, specifically because they are not informed by classical ideas about distinctive features in phonology. As described in detail in List (2014), LingPy converts sounds in phonetic transcription to an internal alphabet of less than 30 letters, to which the alignment algorithms are then applied in a second stage.
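For illustration, this conversion can be carried out directly with LingPy's helper functions (a minimal example, assuming a recent LingPy version):

```python
# Segment a transcription and map each segment to LingPy's SCA sound classes.
from lingpy import ipa2tokens, tokens2class

tokens = ipa2tokens("kʰandə")          # segment the transcription into sounds
classes = tokens2class(tokens, "sca")  # convert each sound to its SCA class letter
print(tokens, classes)
```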

Continue reading

Using the Waterman-Eggert algorithm for sentence alignment

During the 24th International Conference of Historical Linguistics, I was asked by a colleague whether I would know a good way to align and score sentences available in the form of phonetic transcriptions. While one can roughly compare the difference between sequences rather easily by aligning them and calculating, for example, the edit distance between them, it is clear that the task of sentence alignment could be handled in a somewhat more subtle way.
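A small, hedged sketch of that contrast, assuming LingPy exposes the Waterman-Eggert routine as we_align alongside edit_dist:

```python
# Compare a plain edit distance with Waterman-Eggert local alignments (sketch).
from lingpy import ipa2tokens
from lingpy.align.pairwise import edit_dist, we_align

sentA = ipa2tokens("ðisizəhaus")   # made-up "sentences" in phonetic transcription
sentB = ipa2tokens("ðætizəhaus")

print(edit_dist(sentA, sentB))     # a single global distance
for almA, almB, score in we_align(sentA, sentB):
    print(almA, almB, score)       # one line per local alignment
```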

Continue reading

Using pyconcepticon to map concept lists (II)

Mapping a given concept list to Concepticon can be done in a straightforward way, even if automatic mappings need manual refinement. But what can we do when having to deal with larger datasets, say, a dictionary, from which we want to extract specific concepts, such as the ones in the classical Swadesh list of 100 items (Swadesh 1955)?
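A condensed sketch of the idea (assuming a local clone of concepticon-data; the second concept list below merely stands in for a larger, already mapped dataset):

```python
# Filter a mapped concept list down to the concepts of Swadesh's 100-item list.
from pyconcepticon import Concepticon

api = Concepticon("path/to/concepticon-data")  # placeholder path to a local clone

swadesh_ids = {
    c.concepticon_id
    for c in api.conceptlists["Swadesh-1955-100"].concepts.values()
}

# `big_list` stands in for any mapped concept list, e.g. one extracted from a dictionary
big_list = api.conceptlists["Swadesh-1952-200"]
hits = [c for c in big_list.concepts.values() if c.concepticon_id in swadesh_ids]
print(len(hits), "of", len(big_list.concepts), "entries are Swadesh-100 concepts")
```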

Continue reading

A Primer on Automatic Inference of Sound Correspondence Patterns (3): Extended Experiments with Alignments from the Tableaux Phonétiques des Patois Suisses Romands

Having illustrated how a quick correspondence pattern analysis can be done with the help of readily formatted data and the EDICTOR tool alone, it is now time to show how we can use the LingRex package in order to carry out a full-fledged correspondence pattern analysis. While EDICTOR uses a simple algorithm that is based on sorting the patterns, the Python algorithm for correspondence pattern detection, which is described in detail in List (2019), uses a greedy approach inspired by the Welsh-Powell algorithm for graph coloring (Welsh and Powell 1967) in order to group all alignment sites in the data into clusters which are compatible with each other.
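The clustering idea can be illustrated with a small toy example (this is not the algorithm of List 2019 or the LingRex code, just the underlying intuition): alignment sites are tuples with one reflex per language (None marks missing data), two sites are compatible if they agree wherever both have data, and sites are greedily assigned to the first compatible cluster, processing better attested sites first.

```python
# Toy illustration of greedy, compatibility-based clustering of alignment sites.
sites = [
    ("t", "d", None),
    ("t", None, "t"),
    ("t", "d", "t"),
    ("k", "g", None),
]

def compatible(a, b):
    # two sites are compatible if they agree in all positions where both have data
    return all(x == y for x, y in zip(a, b) if x is not None and y is not None)

clusters = []
for site in sorted(sites, key=lambda s: sum(x is not None for x in s), reverse=True):
    for cluster in clusters:
        if all(compatible(site, other) for other in cluster):
            cluster.append(site)
            break
    else:
        clusters.append([site])

print(clusters)  # two clusters: the t/d/t pattern and the k/g pattern
```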

Continue reading

From Fieldwork to Trees 3: CLDF recipes

In the previous two posts (Part 1, Part 2), I took you from a matrix of word lists from fieldwork to a LingPy-compatible CLDF Wordlist with cognate codes and alignments. We can now feed this dataset into existing tools and recipes for visualizing and analyzing CLDF Wordlists.
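For instance, the hand-over to LingPy boils down to a single call (a minimal sketch; the metadata path is a placeholder for the dataset built in the previous posts):

```python
# Load a CLDF Wordlist into LingPy for further analysis.
from lingpy import Wordlist

wl = Wordlist.from_cldf("cldf/Wordlist-metadata.json")  # placeholder path
print(wl.height, "concepts and", wl.width, "language varieties")
```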

Continue reading

Merging datasets with LingPy and the CLDF curation framework

Imagine you have two different datasets, both containing approximately the same concepts, but with slightly different numbers of columns and — more importantly — potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other kind of spreadsheet software and then to try to manually adjust all problems that might occur during this process.

A better idea is to just use LingPy and our CLDF curation framework.

Continue reading
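As a rough sketch of the core idea (plain Python only, not the full LingPy/CLDF recipe from the post; file names are placeholders), the rows of the two tables can simply be re-numbered while taking the union of the columns:

```python
# Merge two wordlist tables with overlapping IDs by re-numbering rows (sketch).
import csv

merged, header, next_id = [], [], 1
for filename in ("dataset-a.tsv", "dataset-b.tsv"):   # placeholder file names
    with open(filename, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            row["ID"] = next_id                        # avoid identical identifiers
            next_id += 1
            header += [c for c in row if c not in header]
            merged.append(row)

with open("merged.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=header, delimiter="\t", restval="")
    writer.writeheader()
    writer.writerows(merged)
```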