With the introduction of CLDF as a major format for data exchange, there is an increased need for handy solutions that convert CLDF to the formats required by computer-assisted tools such as LingPy and EDICTOR, which allow users to preprocess data automatically or to curate data by adding detailed annotations. With the publication of PyEDICTOR, there is now a very lightweight software package that provides first solutions for integrating CLDF with these existing tools for computer-assisted language comparison.
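To give a first impression of the underlying conversion task, the following sketch shows how a CLDF wordlist can be loaded directly into LingPy for further preprocessing. The metadata path is hypothetical, and this is not PyEDICTOR's own interface; PyEDICTOR wraps workflows of this kind, so consult its documentation for the package's own commands.

```python
from lingpy import Wordlist

# Load a CLDF wordlist into a LingPy Wordlist object, selecting the
# columns needed for later preprocessing (the path is hypothetical).
wl = Wordlist.from_cldf(
    "cldf/cldf-metadata.json",
    columns=["language_id", "concept_name", "value", "form", "segments"],
)
print("{0} concepts across {1} varieties".format(wl.height, wl.width))
```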
A colexification network is a network whose nodes represent concepts and whose weighted edges indicate how often the concepts they connect are colexified across a given sample of languages. Having seen how individual colexifications can be computed with the help of the CL Toolkit package in an earlier blog post, we will now see how this code needs to be extended in order to compute colexification networks.
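As a rough preview of the extension discussed in the post, the following sketch (with toy data and hypothetical labels) counts how often two concepts share a form within each variety and stores the counts as edge weights in a networkx graph.

```python
from itertools import combinations

import networkx as nx

# Toy input (hypothetical): for each variety, forms mapped to the set
# of concepts they express.
data = {
    "LanguageA": {"fanga": {"HAND", "ARM"}, "kutu": {"LOUSE"}},
    "LanguageB": {"mano": {"HAND", "ARM"}, "pede": {"FOOT", "LEG"}},
}

graph = nx.Graph()
for language, forms in data.items():
    counted = set()
    for form, concepts in forms.items():
        for c1, c2 in combinations(sorted(concepts), 2):
            # Count each colexification at most once per language.
            if (c1, c2) not in counted:
                counted.add((c1, c2))
                if graph.has_edge(c1, c2):
                    graph[c1][c2]["weight"] += 1
                else:
                    graph.add_edge(c1, c2, weight=1)

for c1, c2, attrs in graph.edges(data=True):
    print(c1, "--", c2, attrs["weight"])
```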
Colleagues often ask us how they can obtain more detailed information on specific languages and colexifications in the CLICS database. With the publication of the CL Toolkit package, which makes it possible to merge several CLDF datasets on the fly, carrying out analyses on selected parts of the data underlying the CLICS database is now much easier than before. To illustrate this, this tutorial shows how colexifications for a selected number of languages can be computed from two distinct datasets that are included in CLICS (Version 3).
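A minimal sketch of this workflow might look as follows, assuming local copies of two CLDF datasets; the paths are hypothetical, and the attribute names follow the CL Toolkit documentation.

```python
from collections import defaultdict

from pycldf import Dataset
from cltoolkit import Wordlist

# Merge two CLDF datasets on the fly (paths are hypothetical).
wl = Wordlist([
    Dataset.from_metadata("ids/cldf/cldf-metadata.json"),
    Dataset.from_metadata("northeuralex/cldf/cldf-metadata.json"),
])

# For each variety, group concepts by form to detect full colexifications.
for language in wl.languages:
    by_form = defaultdict(set)
    for form in language.forms:
        if form.concept:  # some forms may lack a Concepticon mapping
            by_form[form.form].add(form.concept.concepticon_gloss)
    colexified = {f: c for f, c in by_form.items() if len(c) > 1}
    print(language.name, len(colexified))
```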
Mapping concepts to common concept identifiers across resources has become an important task in the aggregation of lexical data from different sources. With the Concepticon, this task has been facilitated by a dedicated mapping algorithm with which a concept list can be automatically mapped to the concept sets in the Concepticon reference catalogue and then manually refined. PySem offers an additional way of mapping concepts to the Concepticon, but in contrast to the algorithm used in the Concepticon workflow, the PySem approach can be accessed from within Python applications.
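A minimal sketch of this usage is given below; note that the helper name and call signature are quoted from memory of the package's documentation and should be checked against the current PySem release before use.

```python
from pysem.glosses import to_concepticon

# Map a few English glosses to Concepticon concept sets from within
# Python (function name and signature as documented for PySem at the
# time of writing; verify against the current release).
words = [{"gloss": "hand"}, {"gloss": "foot"}, {"gloss": "tree"}]
for gloss, matches in to_concepticon(words, language="en").items():
    print(gloss, matches[:1])  # best-ranked Concepticon candidate
```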
Writing a term paper requires the same scrutiny as writing an article for a journal. As a result, the techniques that apply to writing term papers are very similar to those that apply to writing a journal article, and students should feel encouraged to take the task as seriously as an article that scientists send off to peer review. In the following, I will briefly introduce major techniques that help to structure one’s work when writing a term paper and that also help one to interact well with one’s supervisor during the writing process.
A few days ago, Sidwell and Alwes submitted a very nice dataset on Vietic languages to Zenodo (10.5281/zenodo.5263194). When inspecting the data, I realized that this dataset could easily be converted to CLDF following our new Lexibank standards. Since both authors explicitly invited discussion of the data and testing of the results, I thought it would be even better to quickly illustrate the CLDF conversion in a blog post, as this may enable colleagues to do the same with their own datasets in the future.
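For readers who have never set up a Lexibank dataset, the skeleton below sketches the central piece of such a conversion, a Dataset class with a cmd_makecldf method; the raw file name and column labels are hypothetical, and a full conversion additionally requires metadata and etc/ mapping files as described in the pylexibank documentation.

```python
import pathlib

from pylexibank import Dataset as BaseDataset


class Dataset(BaseDataset):
    """Skeleton of a Lexibank dataset (raw file layout is hypothetical)."""

    dir = pathlib.Path(__file__).parent
    id = "sidwellvietic"

    def cmd_makecldf(self, args):
        # Register sources, languages, and concepts from the etc/ files.
        args.writer.add_sources()
        args.writer.add_languages()
        args.writer.add_concepts()
        # Add one lexeme per row of the raw data table.
        for row in self.raw_dir.read_csv("data.csv", dicts=True):
            args.writer.add_forms_from_value(
                Language_ID=row["Language"],
                Parameter_ID=row["Concept"],
                Value=row["Form"],
            )
```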
When submitting a paper along with data and code to a journal, you need to make sure that the reviewers can test and inspect your data as conveniently as possible. Between review rounds, you should also be maximally transparent about any changes that have been made to the data or the code underlying your study. When using data that was published elsewhere, this means you should pay specific attention to the versions you have used and make sure they are readily accessible.
The scientific culture in linguistics has been changing recently, and more and more papers are published with accompanying code and data. What is still often forgotten, however, is that code and data should also be shared with the reviewers during the first submission of a paper, in order to guarantee a maximally transparent review process that also includes a thorough inspection of the data and the code. This calls for attention from two sides: reviewers should make sure that they receive data and code if these are needed to replicate the results reported in a paper, while authors should make sure to submit them in a form that reviewers can easily inspect. In this new blog post series, I want to summarize what authors should keep in mind when preparing their data and code for submission to a journal. On the one hand, I hope that this post will increase awareness among colleagues that data and code should be shared upon submission. On the other hand, I hope it also provides active help to all colleagues who plan to submit an article to a journal and are not sure how to share their data in the best form.
With the recent publication of the new version of the EDICTOR application for the curation and creation of etymological dictionaries, several new features were introduced which specifically target the annotation of language-internal word families, as opposed to cross-linguistic cognates. While working on the EDICTOR update, I carried out intensive tests of the new features by annotating a German wordlist for language-internal cognates. In this post, I will quickly discuss some of the new features in EDICTOR 2.0 by showing some examples from the freshly annotated wordlist for German.
Multi-SimLex (https://multisimlex.com) is a multilingual resource which provides user ratings for word pairs translated into different languages. The data is important for the evaluation of methods that derive word embeddings from large corpora. While it is desirable to link such a large dataset to Concepticon, it is difficult to do so in practice, given that the dataset represents word similarity ratings without any clear reference to concepts. In this post, I will show how the data can nevertheless be linked to Concepticon, and how the original Multi-SimLex data can be represented, without losing any information, in the form of a Concepticon concept list.
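To illustrate the basic idea, the following sketch stores toy word-pair ratings in a tabular, concept-list-like form in which both members of a pair point to a shared pair identifier, so that the ratings survive the conversion. This is only one possible representation and not necessarily identical to the published concept list.

```python
import csv

# Toy Multi-SimLex-like ratings (hypothetical values).
pairs = [
    ("car", "automobile", 5.8),
    ("arm", "leg", 2.1),
]

# One row per word, with the pair ID and the rating repeated, so that
# no information from the original pair ratings is lost.
with open("multisimlex-conceptlist.tsv", "w", newline="", encoding="utf8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["NUMBER", "ENGLISH", "PAIR_ID", "SIMILARITY"])
    for number, (word1, word2, score) in enumerate(pairs, start=1):
        writer.writerow(["{0}a".format(number), word1, number, score])
        writer.writerow(["{0}b".format(number), word2, number, score])
```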
With an increasing amount of data being available in Cross-Linguistic Data Formats (CLDF), it is becoming more and more important to know the basics of the Python packages designed by the CLDF initiative, in order to allow interested users quick access to the data. This very short tutorial illustrates how the CLDF data underlying the World Atlas of Language Structures (WALS) can be accessed and written to a table that lists the values for all WALS parameters in one row per language variety.
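A minimal sketch of this procedure with pycldf is shown below; the path to the WALS metadata file is hypothetical and depends on where you have downloaded the CLDF release.

```python
from collections import defaultdict

from pycldf import Dataset

# Load WALS from its CLDF metadata file (path is hypothetical).
wals = Dataset.from_metadata("wals/cldf/StructureDataset-metadata.json")

# Collect one row per language variety, one column per WALS parameter.
table = defaultdict(dict)
for value in wals.iter_rows(
        "ValueTable", "languageReference", "parameterReference", "value"):
    table[value["languageReference"]][value["parameterReference"]] = value["value"]

# Write the pivoted table, leaving cells empty for missing parameters.
params = sorted({p for row in table.values() for p in row})
print("Language", *params, sep="\t")
for language, row in sorted(table.items()):
    print(language, *[row.get(p, "") for p in params], sep="\t")
```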
Semantic data are notoriously difficult to handle. In contrast to the form part of the linguistic sign, meanings are not organized sequentially, but rather in a network-like fashion (List 2014: 34f). As a result, we often encounter problems when trying to model complex relations between different meanings, specifically in those cases where we have only tables as our base material. This blog post summarizes how major types of semantic data are handled in the Concepticon project and how they can be accessed in code.
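As a first taste of the code-based access discussed in the post, the following sketch loads a local clone of the Concepticon data with the pyconcepticon API and inspects a single concept set; the path and the example identifier are placeholders.

```python
from pyconcepticon import Concepticon

# Path to a local clone of concepticon-data (placeholder).
api = Concepticon("path/to/concepticon-data")

# Concept sets are accessible by their Concepticon ID and carry glosses
# and definitions; relations between concept sets are distributed with
# the data and can be explored in a similar way.
conceptset = api.conceptsets["1277"]
print(conceptset.gloss)
print(conceptset.definition)
```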
For a long time, I have been wondering about the origin of the German wordlist in the Intercontinental Dictionary Series (Key and Comrie 2016). Not only are many of the words given as translations for the large concept list of 1310 items very archaic variants that are no longer in use; we also find many annoying problems, such as unusual spellings (consistently avoiding the letter “ß”, which is still in use, even if some people think differently), wrong translations, and, of course, no phonetic transcriptions. Already during my doctoral studies, I therefore started to work on a refined list, but I soon had so many other things on my plate that I never really managed to finish this work. Recently, however, I realized that the work I had done years ago was far more complete than I had thought, and that I had even added information on potential borrowings, extracted from Kluge’s (2002) etymological dictionary. Given that this list can come in handy in various ways, I decided to finish the work and publish the list officially in a very first version.