Category Archives: Code

Thanks to the cldfbench Python package, CLDF datasets compiled with CLDFbench come with a rich command-line utility that can easily be extended with custom commands (Forkel et al. 2018, Forkel and List 2020). Taking as an example the creation of a Nexus file for phylogenetic analysis from an existing Lexibank wordlist (List et al. 2022), this tutorial guides you through the steps needed to write a script that can be used as a CLDFbench command. You will then be able to apply the same workflow to other scripts you may want to use with a given repository.
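To give a first impression, here is a minimal sketch of what such a command can look like. The register(parser)/run(args) layout follows CLDFbench's command convention, but the metadata path, the module placement, and the matrix construction are illustrative assumptions, not the tutorial's actual code:

```python
"""
nexus.py -- a hypothetical custom command, to be placed in the dataset's
commands sub-package (the package name depends on your repository).
"""
from collections import defaultdict

from pycldf import Dataset


def register(parser):
    # arguments of the custom command
    parser.add_argument("output", help="path of the Nexus file to write")


def run(args):
    # load the dataset's CLDF wordlist; the metadata path is an assumption
    ds = Dataset.from_metadata("cldf/cldf-metadata.json")

    # map every form to its language, then collect cognate sets per language
    form2lang = {
        row["id"]: row["languageReference"]
        for row in ds.iter_rows("FormTable", "id", "languageReference")
    }
    cogsets, all_sets = defaultdict(set), []
    for cog in ds.iter_rows("CognateTable", "formReference", "cognatesetReference"):
        cs = cog["cognatesetReference"]
        cogsets[form2lang[cog["formReference"]]].add(cs)
        if cs not in all_sets:
            all_sets.append(cs)

    # write a binary presence/absence matrix in Nexus format
    with open(args.output, "w", encoding="utf-8") as f:
        f.write("#NEXUS\n\nBEGIN DATA;\n")
        f.write("DIMENSIONS NTAX={} NCHAR={};\n".format(len(cogsets), len(all_sets)))
        f.write('FORMAT DATATYPE=STANDARD SYMBOLS="01" MISSING=?;\nMATRIX\n')
        for language, sets in sorted(cogsets.items()):
            row = "".join("1" if cs in sets else "0" for cs in all_sets)
            f.write("{} {}\n".format(language, row))
        f.write(";\nEND;\n")
```

Saved in the dataset's commands sub-package, such a command would be invoked roughly as cldfbench mydataset.nexus output.nex, with mydataset standing in for your dataset's identifier.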
PyEDICTOR: A Small Python Package that Integrates LingPy, EDICTOR, and CLDF
With the introduction of CLDF as a major format for data exchange, there is an increased need for handy solutions to convert CLDF to the formats required by computer-assisted tools like LingPy and EDICTOR, which make it possible to preprocess data automatically or to curate data by adding detailed annotations. With the publication of PyEDICTOR, there is now a very lightweight software package that is intended to provide first solutions for a successful integration of CLDF with these existing tools for computer-assisted language comparison.
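To illustrate the underlying idea (not PyEDICTOR's own API), LingPy can already read a CLDF wordlist directly, and the resulting TSV file is the format that EDICTOR opens; the metadata path below is an assumption:

```python
# A minimal sketch of the CLDF-to-wordlist round trip using LingPy alone;
# PyEDICTOR wraps this kind of conversion in dedicated helpers.
from lingpy import Wordlist

# read the CLDF dataset (path is an assumption for your repository)
wl = Wordlist.from_cldf("cldf/cldf-metadata.json")

# write a plain TSV wordlist that EDICTOR can open for manual annotation
wl.output("tsv", filename="wordlist", ignore="all", prettify=False)
```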
How to Map Concepts with the PySem Library
Mapping concepts to common concept identifiers across resources has become an important task for the aggregation of lexical data from different sources. With the Concepticon, this task has been facilitated by a dedicated mapping algorithm with which a concept list can be automatically mapped to the concept sets in the Concepticon reference catalogue, to be refined manually later. PySem offers an additional possibility to map concepts to the Concepticon, but in contrast to the algorithm used in the Concepticon workflow, the PySem approach can be accessed from within Python applications.
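A minimal sketch, assuming the to_concepticon helper from pysem.glosses behaves as described in the package documentation (a list of dictionaries with a gloss key goes in, candidate Concepticon mappings come out):

```python
# Assumption: pysem's to_concepticon helper, as documented in the README.
from pysem.glosses import to_concepticon

concepts = [{"gloss": "Fuß"}, {"gloss": "Hand"}]
for gloss, matches in to_concepticon(concepts, language="de").items():
    # each match is a candidate Concepticon concept set with a similarity score
    print(gloss, matches)
```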
Mapping Multi-SimLex to Concepticon
Multi-SimLex (https://multisimlex.com) is a multilingual resource that provides user ratings for word pairs translated into different languages. The data are important for the evaluation of methods that derive word embeddings from large corpora. While it is desirable to link such a large dataset to Concepticon, it is difficult to do so in practice, given that the dataset represents word similarity ratings without any clear reference to concepts. In this post, I will show how the data can nevertheless be linked to Concepticon, and how the original Multi-SimLex data can be represented, without losing any information, in the form of a Concepticon concept list.
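One generic way to keep the pair structure without losing the ratings (a sketch only, not necessarily the representation chosen in the post; the input file and its columns are made up) is to give each member of a rated pair its own row while carrying the pair identifier and the rating along as extra columns:

```python
# Unfold pair-based similarity ratings into one row per word, keeping the
# pair ID and rating, so no information is lost. File and columns are
# hypothetical.
import csv

with open("multisimlex-pairs.csv") as f, open("conceptlist.tsv", "w") as out:
    out.write("NUMBER\tPAIR_ID\tENGLISH\tRATING\n")
    number = 0
    for row in csv.DictReader(f):
        for word in (row["word1"], row["word2"]):
            number += 1
            out.write(f"{number}\t{row['pair_id']}\t{word}\t{row['rating']}\n")
```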
How to work with WALS data in CLDF (How to do X in linguistics 5)
With an increasing amount of data being available in Cross-Linguistic Data Formats, it is becoming more and more important to know the basics underlying the Python packages designed by the CLDF initiative, in order to give interested users quick access to the data. This very short tutorial illustrates how the CLDF data underlying the World Atlas of Language Structures can be accessed and written to a table in which the values of all WALS parameters appear in a single row for each language variety.
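A minimal sketch of this kind of pivot with pycldf, assuming a local clone of the WALS CLDF data (https://github.com/cldf-datasets/wals); the metadata path and output file name are assumptions:

```python
# Pivot WALS values into a wide table: one row per language, one column per
# parameter. Path assumes a local clone of cldf-datasets/wals.
from collections import defaultdict

from pycldf import Dataset

wals = Dataset.from_metadata("wals/cldf/StructureDataset-metadata.json")

params = [row["id"] for row in wals.iter_rows("ParameterTable", "id")]
table = defaultdict(dict)
for v in wals.iter_rows("ValueTable", "languageReference", "parameterReference", "value"):
    table[v["languageReference"]][v["parameterReference"]] = v["value"]

with open("wals-wide.tsv", "w", encoding="utf-8") as f:
    f.write("Language\t" + "\t".join(params) + "\n")
    for language, values in sorted(table.items()):
        f.write(language + "\t" + "\t".join(values.get(p, "") for p in params) + "\n")
```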
How to handle semantic data with tables (How to do X in linguistics 3)
Semantic data are notoriously difficult to handle. In contrast to the form part of the linguistic sign, meanings are not organized sequentially, but rather network-like (List 2014: 34f). As a result, we often encounter problems when trying to model complex relations between different meanings, specifically in those cases where we have only tables as our base material. This blog post summarizes how major types of semantic data are handled in the Concepticon project and how they can be accessed in code.
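As a generic illustration of the table-versus-network tension (a sketch with a hypothetical relations file, not Concepticon's actual layout), a tabular file of labeled concept relations can be loaded into a directed graph and then queried network-style:

```python
# Turn a tabular relation file (SOURCE, RELATION, TARGET columns, tab-
# separated; hypothetical) into a directed graph for network-like queries.
import csv

import networkx as nx

G = nx.DiGraph()
with open("relations.tsv") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        G.add_edge(row["SOURCE"], row["TARGET"], relation=row["RELATION"])

# all relations that start at the concept "ARM" (hypothetical identifier)
for _, target, data in G.out_edges("ARM", data=True):
    print("ARM", data["relation"], target)
```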
Computing colexification statistics for individual languages in CLICS
In the last two weeks we saw a renewed interest in colexifications, especially in the third generation of the “Database of Cross-Linguistic Colexifications” (Rzymski, Tresoldi, et al. 2020), due to two independent requests arriving within a few days of each other. For those unfamiliar, the concept of “colexification” (François 2008) refers to instances in which a language uses the same lexeme to express more than one comparable concept (e.g., Russian де́рево, which can mean both “tree” and “wood”). The CLICS project, first developed by List et al. (2014), is an offspring of the transparent approaches to standardization, aggregation, and curation of linguistic data that have been promoted within the CLDF framework (Forkel et al. 2018). It uses standardized lexical databases to identify “colexification networks”.
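The core idea is simple enough to show in a toy sketch (made-up data, not the CLICS codebase): within one language, concepts expressed by an identical form count as colexified.

```python
# Toy sketch: detect colexifications in a single language variety.
from collections import defaultdict

# (concept, form) pairs for one language
words = [
    ("TREE", "dʲerʲevo"),
    ("WOOD", "dʲerʲevo"),
    ("HAND", "ruka"),
    ("ARM", "ruka"),
    ("FOOT", "noga"),
]

by_form = defaultdict(set)
for concept, form in words:
    by_form[form].add(concept)

# every form expressing two or more concepts yields colexified pairs
for form, concepts in by_form.items():
    if len(concepts) > 1:
        print(form, "colexifies", " and ".join(sorted(concepts)))
```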
Concept Similarity in STARLING
STARLING is a software package, originally created by Sergej A. Starostin, which is designed for historical linguists who want to build their own etymological dictionaries. It is not only a database system that allows its users to set up a very straightforward relational database structure, but also a package full of surprises, since it contains many methods that are supposed to automate specific tasks in historical linguistics. These range from phylogenetic tree reconstruction and the preliminary identification of sound correspondences to the comparison of elicitation glosses with respect to their semantic similarity. While phylogenetic reconstruction and sound correspondences are now handled quite successfully in alternative software packages, I thought it would be interesting to discuss the routine for assessing concept similarity in more detail, since it offers interesting possibilities for those who practice historical language comparison.
Making an annotated concept list from the data in CLICS
The CLICS database in its current format makes direct use of the data assembled by the Concepticon project in order to aggregate lexical data from different sources. At the same time, the CLICS database itself can be seen as an interesting concept list, providing information on concept polysemy and semantic similarity.
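To give a flavor of the idea (a toy sketch with made-up data, not the CLICS code), one can count, for every concept, how many distinct concepts it is colexified with and use that count as a simple polysemy annotation in a concept list:

```python
# Toy sketch: derive a polysemy-annotated concept list from colexifications.
from collections import defaultdict

# colexified concept pairs per language (hypothetical data)
colexifications = {
    "Russian": [("TREE", "WOOD"), ("HAND", "ARM")],
    "German": [("TREE", "WOOD")],
}

partners = defaultdict(set)
for pairs in colexifications.values():
    for a, b in pairs:
        partners[a].add(b)
        partners[b].add(a)

# a simple annotated concept list: concept, degree, and colexified partners
for concept, linked in sorted(partners.items()):
    print(concept, len(linked), ", ".join(sorted(linked)))
```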
Automated Mapping of Metadata to Concepticon
While the core of the Concepticon project (https://concepticon.clld.org, List et al. 2019) consists of the numerous concept lists that are constantly being added by a growing list of contributors, we have tried from the beginning of the project, starting with the first version (List et al. 2016), to collect various kinds of concept metadata for all our concept sets.
Illustrating linguistic data reuse: a modest database for semantic distance
Besides new algorithms and tools that facilitate established workflows, one change prompted by computer-assisted approaches to language comparison is a distinct relationship between scientists and their data. A critical part of our work, and perhaps the one with the most lasting impact, is to promote an approach in which the data life-cycle is not constrained within the limits of planning and publishing a study. Data are organized and planned for reuse in investigations perhaps not even considered during collection, with the output of one project becoming the input of another.
Feature-Based Alignment Analyses with LingPy and CLTS (2)
Having seen how we can obtain a simple scorer derived from the feature system in CLTS (List et al. 2019) in last month’s post, what is missing now, in order to use the scorer for alignment analyses, is an alignment function that can take the scorer as an argument. If one does not have higher ambitions with respect to the alignment function itself, this step can be achieved in a very straightforward way with the help of LingPy’s (List et al. 2018) sw_align() method. As can be seen from the documentation, this method takes as input two sequences (i.e., lists of sounds) along with a scoring function. All we need to do, then, is create our specific scorer based on the CLTS features and pass it along with our sequences to the function.
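A hedged sketch of the whole pipeline: the scorer below measures feature overlap between CLTS sounds, scaled so that complete mismatches are penalized, and is passed to sw_align as a dictionary keyed by sound pairs. The CLTS data path is an assumption (a local clone of https://github.com/cldf-clts/clts), and the scoring scheme is one possible choice, not the post's exact code:

```python
# Feature-based local alignment: a Jaccard-style scorer over CLTS feature
# sets, used with LingPy's Smith-Waterman alignment.
from lingpy.align.pairwise import sw_align
from pyclts import CLTS

bipa = CLTS("./clts").bipa  # path to a local copy of the CLTS data


def make_scorer(*sequences):
    # score every sound pair by feature overlap, scaled to [-1, 1];
    # assumes all sounds are recognized by BIPA
    sounds = {s for seq in sequences for s in seq}
    scorer = {}
    for a in sounds:
        for b in sounds:
            fa, fb = bipa[a].featureset, bipa[b].featureset
            scorer[a, b] = 2 * len(fa & fb) / len(fa | fb) - 1
    return scorer


seqA = "tʰ ɔ x t ɐ".split()
seqB = "d ɔ t ə r".split()
alm = sw_align(seqA, seqB, make_scorer(seqA, seqB))
print(alm)  # the locally aligned subsequences and the alignment score
```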
Feature-Based Alignment Analyses with LingPy and CLTS (1)
In the past, people have repeatedly asked me how they could use their own scoring functions in combination with LingPy’s alignment algorithms. Their major concern was that the sound-class-based scoring systems we use in LingPy might fail to reflect the true phonetic similarity of sounds, not least because they are not informed by classical ideas about distinctive features in phonology. As described in detail in List (2014), LingPy converts sounds in phonetic transcription to an internal alphabet of fewer than 30 letters, to which the alignment algorithms are then applied in a second stage.
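This conversion can be inspected directly with LingPy's tokens2class function; the transcription below is just a made-up example:

```python
# Convert IPA tokens to LingPy's internal SCA sound classes.
from lingpy import tokens2class

tokens = "tʰ ɔ x t ɐ".split()
print(tokens2class(tokens, "sca"))  # one sound-class letter per token
```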
Using the Waterman-Eggert algorithm for sentence alignment
During the 24th International Conference of Historical Linguistics, a colleague asked me whether I would know a good way to align and score sentences available in the form of phonetic transcriptions. While one can roughly compare the difference between sequences rather easily by aligning them and calculating, for example, the edit distance between them, the task of sentence alignment can be handled in a somewhat more subtle way.
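LingPy implements the Waterman-Eggert extension of local alignment in lingpy.align.pairwise.we_align, which returns not only the best local alignment but also sub-optimal, non-overlapping ones; this is what makes it attractive for whole sentences, where several stretches may match independently. A minimal sketch with made-up transcriptions and the default scorer:

```python
# Waterman-Eggert alignment of two "sentences" given as sound sequences;
# the transcriptions are hypothetical examples.
from lingpy.align.pairwise import we_align

sentA = "v iː f iː l k ɔ s t ə t d a s".split()
sentB = "v a s k ɔ s t ə t d aː s".split()

# each result is one local alignment: aligned parts of A and B plus a score
for almA, almB, score in we_align(sentA, sentB):
    print(almA, almB, score)
```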