Thanks to the CLDFBench Python package, CLDF datasets compiled with CLDFBench come with a rich command-line utility that can easily be extended with custom commands (Forkel et al. 2018, Forkel and List 2020). Taking the creation of a Nexus file for phylogenetic analysis from an existing Lexibank wordlist (List et al. 2022) as an example, this tutorial guides you through the steps necessary to write a script that can be used as a CLDFBench command. You will then be able to apply the same workflow to other scripts you may want to use with a certain repository.
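To give a first impression before walking through the steps in detail, here is a minimal sketch of such a command module. It assumes CLDFBench's `register`/`run` convention for custom commands and a very simplified binary cognate-set coding; the module path, output name, and the way forms and cognates are joined are simplifications of my own, not the exact code of the tutorial.

```python
"""Nexus export as a custom CLDFBench command.

Minimal sketch; assumes the module lives at a path like
`mycommands/nexus.py` registered via the `cldfbench.commands` entry point.
"""
from cldfbench.cli_util import add_dataset_spec, get_dataset


def register(parser):
    add_dataset_spec(parser)  # let the user point the command at a dataset
    parser.add_argument("--output", default="wordlist.nex")


def run(args):
    cldf = get_dataset(args).cldf_reader()
    # map forms to languages, then record which languages attest each cognate set
    form2lang = {
        f["id"]: f["languageReference"]
        for f in cldf.iter_rows("FormTable", "id", "languageReference")}
    languages, cogsets = set(), {}
    for cog in cldf.iter_rows("CognateTable", "formReference", "cognatesetReference"):
        language = form2lang[cog["formReference"]]
        languages.add(language)
        cogsets.setdefault(cog["cognatesetReference"], set()).add(language)
    # write a simple binary presence/absence matrix in Nexus format
    taxa, chars = sorted(languages), sorted(cogsets)
    with open(args.output, "w", encoding="utf-8") as f:
        f.write("#NEXUS\nBEGIN DATA;\n")
        f.write("DIMENSIONS NTAX={0} NCHAR={1};\n".format(len(taxa), len(chars)))
        f.write('FORMAT DATATYPE=STANDARD SYMBOLS="01" MISSING=?;\nMATRIX\n')
        for taxon in taxa:
            f.write("{0} {1}\n".format(
                taxon, "".join("1" if taxon in cogsets[c] else "0" for c in chars)))
        f.write(";\nEND;\n")
```

Once registered, a command like this can be invoked in the same way as the built-in ones, roughly as `cldfbench nexus <dataset>`.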
Recently, I was asked by a colleague how one could query only those datasets in the Lexibank repository which come with cognate sets annotated by humans. While I first thought this could be done in a very straightforward way, I figeured out, when trying it myself, that the code still needs some workarounds. As a result, I thought it is best to share the solution I came up with in a blog post in order to make it accessible to colleagues who might be interested in inspecting and using the data provided by the Lexibank repository in more detail.
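As a first taste of the solution, the sketch below merely checks which datasets ship cognate judgments at all; telling human-annotated from automatically computed cognate sets is precisely where the workarounds come in, so treat this as the first step only. The directory layout is an assumption.

```python
from pathlib import Path

from pycldf import Dataset


def has_cognates(metadata_path):
    """Return True if the dataset ships a non-empty CognateTable (sketch)."""
    ds = Dataset.from_metadata(metadata_path)
    try:
        ds["CognateTable"]
    except KeyError:
        return False  # the dataset has no cognate judgments at all
    return any(True for _ in ds.iter_rows("CognateTable"))


# assumes the Lexibank datasets have been checked out under `datasets/`
for metadata in sorted(Path("datasets").glob("*/cldf/cldf-metadata.json")):
    if has_cognates(metadata):
        print(metadata.parent.parent.name)
```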
With the introduction of CLDF as a major format for data exchange, there is an increased need in handy solutions for the conversion of CLDF to the formats required by computer-assisted tools like LingPy and EDICTOR, which allow to preprocess data automatically or to curate data by adding detailed annotations. With the publication PyEDICTOR, there is now a very lightweight software package that is supposed to provide first solutions for a successful integration of CLDF with these existing tools for computer-assisted language comparison.
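To make the kind of conversion PyEDICTOR targets concrete, here is a minimal sketch done directly with LingPy's `Wordlist.from_cldf`; the metadata path and the column selection are assumptions.

```python
from lingpy import Wordlist

# read a CLDF wordlist into LingPy's long table format
wl = Wordlist.from_cldf(
    "cldf/cldf-metadata.json",
    columns=("parameter_id", "concept_name", "language_id",
             "language_name", "value", "form", "segments"),
)
# write a plain TSV file that can be loaded into EDICTOR for manual curation
wl.output("tsv", filename="wordlist", prettify=False, ignore="all")
```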
Colleagues often ask us how they can retrieve more detailed information on specific languages and colexifications in the CLICS database. With the publication of the CL Toolkit package, which makes it possible to merge several CLDF datasets on the fly, carrying out analyses on certain parts of the data underlying the CLICS database has become much easier than before. To illustrate this, this tutorial shows how colexifications for a selected number of languages can be computed from two distinct datasets that are included in CLICS (Version 3).
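A minimal sketch of this workflow, assuming CL Toolkit's `Wordlist` wrapper around pycldf datasets; the paths are placeholders, and the attribute names follow my reading of the CL Toolkit data model:

```python
from collections import defaultdict

from cltoolkit import Wordlist
from pycldf import Dataset

# load two CLDF datasets at once (paths are placeholders)
wl = Wordlist(datasets=[
    Dataset.from_metadata("dataset-a/cldf/cldf-metadata.json"),
    Dataset.from_metadata("dataset-b/cldf/cldf-metadata.json"),
])

for language in wl.languages:
    colexified = defaultdict(set)
    for form in language.forms:
        if form.concept:  # only count forms linked to Concepticon
            colexified[form.form].add(form.concept.id)
    for value, concepts in sorted(colexified.items()):
        if len(concepts) > 1:  # one form expressing several concepts
            print(language.name, value, sorted(concepts))
```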
I recently converted the Gothic dictionary written by Wilhelm Streitberg to a CLDF wordlist. Since I was using Windows, I ran into some difficulties during the conversion process which Unix users may not have to deal with. I thought it would be useful to share my experience here and point out that users of Windows operating systems should be aware of certain aspects when converting data to CLDF.
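One typical example of such an aspect (a general Windows pitfall rather than a step from my specific workflow) is file encoding: Windows does not use UTF-8 as its default encoding, so it should be stated explicitly whenever files are read or written.

```python
# explicitly request UTF-8, since Windows' default encoding differs;
# the file name is a placeholder
with open("streitberg.tsv", encoding="utf-8") as infile:
    rows = [line.rstrip("\n").split("\t") for line in infile]
```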
A few days ago, Sidwell and Alwes submitted a very nice dataset on Vietic languages to Zenodo (10.5281/zenodo.5263194). When inspecting the data, I realized that it could easily be converted to CLDF, following our new Lexibank standards. Since both authors explicitly invited discussion of the data and testing of the results, I thought it would be even better to quickly illustrate the CLDF conversion in a blog post, as this may enable colleagues to do the same with their datasets in the future.
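The heart of such a conversion is a Lexibank dataset module. Here is a minimal sketch in the style pylexibank expects; the dataset identifier, the raw file name, and the `iter_raw_rows` reader are placeholders, not the actual conversion code.

```python
import pathlib

from clldutils.misc import slug
from pylexibank import Dataset as BaseDataset


def iter_raw_rows(raw_dir):
    # placeholder reader: yield (language, concept, value) triples from a raw TSV
    lines = raw_dir.joinpath("vietic.tsv").read_text(encoding="utf-8").splitlines()
    for line in lines[1:]:
        if line.strip():
            yield line.split("\t")[:3]


class Dataset(BaseDataset):
    dir = pathlib.Path(__file__).parent
    id = "sidwellvietic"  # placeholder identifier

    def cmd_makecldf(self, args):
        # register bibliographic sources, then add one lexeme per raw record
        args.writer.add_sources()
        for language, concept, value in iter_raw_rows(self.raw_dir):
            args.writer.add_forms_from_value(
                Language_ID=slug(language),
                Parameter_ID=slug(concept),
                Value=value,
            )
```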
When sharing data and code for a paper submitted to a journal, you need to make sure that the reviewers can test and inspect your data as conveniently as possible. Between reviews, you should also be maximally transparent about any changes that have been made to the data or the code underlying your study. When using data that was published elsewhere, this means you should pay specific attention to the versions you have used and make sure they are readily accessible.
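One way to achieve this, sketched here with made-up version numbers and a hypothetical dataset repository, is to pin every dependency, including published datasets, in a pip requirements file:

```
# requirements.txt (illustrative): pin code and data to exact versions,
# so that reviewers install precisely what the study used
lingpy==2.6.9
pycldf==1.24.0
-e git+https://github.com/lexibank/somedataset.git@v1.0#egg=lexibank_somedataset
```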
In this study, we discuss the little-studied Gelong language of Hainan Island and its affiliation with the Hlai languages. Our work is based on Andy Chin's article "The Gelong Language in the Multilingual Hub of Hainan". We extracted Chin's data and processed it with the help of various computer-assisted methods in order to make it more accessible, machine-readable, and comparable with other datasets.
In the summer of 2018, I set out to collect data for my master’s thesis (Tjuka 2019). The goal was to elicit body part terms that can also refer to object and landscape features. This was BC (before COVID-19), so I was able to meet with informants living in Berlin at the time and conduct my urban fieldwork study. The informants who participated in the study spoke one of 13 different languages (e.g., Wolof, Vietnamese, Czech). As a first task, I elicited 28 body part terms to get a sense of the naming patterns in each language. This blog post provides background information and introduces the resulting multilingual body part concept list.
While searching for the topic of a small research project on the linguistic history of South America, I realized that a lot of the data crucial for assessing central arguments is not openly available, and new data is difficult to come by these days. And when data is available, it is usually not presented in a format that allows for easy reuse. Guided by these thoughts, I decided to turn to the upcycling of previously published data (also called retro-standardization; see, for example, Geisler et al. (forthcoming) on the upcycling of the TPPSR dataset, https://tppsr.clld.org). The dataset I chose was previously published by Adolfo Constenla Umaña (2005). In this article, the author comparatively investigated the long-claimed genealogical relationship of three families of Central and South America: Chibchan, Lencan, and Misumalpam (Lehmann 1920).
With an increasing amount of data available in Cross-Linguistic Data Formats, it is becoming more and more important to know the basics of the Python packages designed by the CLDF initiative, in order to give interested users quick access to the data. This very short tutorial illustrates how the CLDF data underlying the World Atlas of Language Structures (WALS) can be accessed and written to a table in which the values for all WALS parameters are given in a single row for each language variety.
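A minimal sketch of such a conversion with pycldf, assuming a local clone of the WALS CLDF dataset (the metadata path and output file name may differ):

```python
from collections import defaultdict

from pycldf import Dataset

# assumes a local clone of the WALS CLDF dataset
wals = Dataset.from_metadata("wals/cldf/StructureDataset-metadata.json")

# collect all parameter IDs, then group values by language
params = [p["id"] for p in wals.iter_rows("ParameterTable", "id")]
values = defaultdict(dict)
for v in wals.iter_rows("ValueTable", "languageReference",
                        "parameterReference", "value"):
    values[v["languageReference"]][v["parameterReference"]] = v["value"]

# one row per language variety, one column per WALS parameter
with open("wals-wide.tsv", "w", encoding="utf-8") as f:
    f.write("Language\t" + "\t".join(params) + "\n")
    for language, by_param in sorted(values.items()):
        cells = [by_param.get(p, "") for p in params]
        f.write(language + "\t" + "\t".join(cells) + "\n")
```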
Endangered language documentation and endangered language revitalization have been two hot topics in recent years. For instance, the United Nations Educational, Scientific and Cultural Organization (UNESCO) declared 2019 the International Year of Indigenous Languages. However, although UNESCO and many other organizations (e.g., the Endangered Languages Documentation Programme or SIL International) urge the public to be aware of the rapidly decreasing number of languages in the world, this has not slowed down the annual rate of language loss. For example, the total number of speakers of Kusunda, a moribund language spoken in Nepal, decreased to only one person in 2020.
Imagine you have two different datasets, both containing approximately the same concepts, but with slightly different numbers of columns and, more importantly, potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other kind of spreadsheet software and then try to manually fix all the problems that occur during this process.
A better idea is to just use LingPy and our CLDF curation framework.
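Before turning to LingPy itself, the core of the problem, clashing identifiers and diverging columns, can be illustrated with a few lines of plain Python. This is a minimal sketch under the assumption that both files are TSV wordlists with an `ID` column; the file names are placeholders.

```python
import csv


def merge_wordlists(paths, outfile="merged.tsv"):
    """Merge TSV wordlists, re-assigning IDs so that identical ones cannot clash."""
    rows, columns = [], []
    for path in paths:
        with open(path, encoding="utf-8", newline="") as f:
            for row in csv.DictReader(f, delimiter="\t"):
                rows.append(row)
                # keep the union of all columns, in order of first appearance
                for column in row:
                    if column not in columns:
                        columns.append(column)
    if "ID" not in columns:
        columns.insert(0, "ID")
    with open(outfile, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns, delimiter="\t", restval="")
        writer.writeheader()
        for new_id, row in enumerate(rows, 1):
            row["ID"] = str(new_id)  # fresh identifier for the merged file
            writer.writerow(row)


merge_wordlists(["dataset1.tsv", "dataset2.tsv"])  # file names are placeholders
```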
In a previous post (Kaiping 2018), I described how to convert matrix-shaped word lists given in Excel into the long format that LingPy and other software can work with. My motivation was to provide my colleague Yunus Sulistyono with a good way to compare the lexicon of his Alorese [alor1247] dialects and to understand the relationships between them. In this post, the data is automatically cognate-coded and converted into CLDF.
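For the cognate-coding step, a minimal sketch with LingPy's LexStat method could look as follows; the file name and the threshold are assumptions, not the exact settings used for the Alorese data. The resulting TSV file can then serve as input for the CLDF conversion.

```python
from lingpy import LexStat

# load the long-format wordlist produced in the previous post (name assumed)
lex = LexStat("alorese.tsv")
lex.get_scorer(runs=1000)  # infer language-specific sound correspondences
lex.cluster(method="lexstat", threshold=0.55, ref="cogid")
lex.output("tsv", filename="alorese-coded", prettify=False)
```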
The Cross-Linguistic Data Formats initiative (CLDF, https://cldf.clld.org, Forkel et al. 2018) has helped a lot in preparing the CLICS² database of cross-linguistic colexifications (https://clics.lingpy.org, List et al. 2018), since linking our data to Concepticon (https://concepticon.clld.org, List et al. 2016) and Glottolog (https://glottolog.org, Hammarström et al. 2018) made it much easier to merge the different datasets into one big comparative dataset.
CLDF, however, is not restricted to lexical data, but can also be successfully used to store structural data, although — due to the nature of structural data — it is much more difficult to compare different datasets.