Tag Archives: tutorial

The ability to visualize data in an intelligible way is an important skill for scientists. In linguistics, especially in lexical semantics, data are often visualized as graphs, i.e., networks. For example, in the web app of the Database of Cross-Linguistic Colexifications (CLICS), we use networks to illustrate that a lexical form refers to two different concepts by connecting the two concepts (i.e., nodes) with a line (i.e., an edge). As colexifications between concepts are identified across a large number of languages, the network grows, and a tool for visualizing many data points becomes necessary. Here, I present a tutorial on the first steps of visualizing a colexification network with Cytoscape. The tutorial is intended for beginners who want to learn how the tool works, and it serves as a starting point for further skill development.
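The tutorial itself works in Cytoscape's graphical interface, but to give an idea of the kind of data involved, here is a minimal Python sketch (not part of the tutorial) that builds a toy colexification network with networkx and exports it as GraphML, a format Cytoscape can import; the concepts and language counts are invented.

```python
# Toy colexification network: nodes are concepts, an edge means that at least
# one language expresses both concepts with the same word form.
import networkx as nx

G = nx.Graph()

# (concept A, concept B, number of colexifying languages); values are invented.
colexifications = [
    ("ARM", "HAND", 12),
    ("HAND", "FIVE", 3),
    ("SKIN", "BARK", 7),
]
for concept_a, concept_b, n_languages in colexifications:
    G.add_edge(concept_a, concept_b, weight=n_languages)

# GraphML preserves the edge weights, which can later be mapped onto edge width
# in Cytoscape after importing the file as a network.
nx.write_graphml(G, "colexifications.graphml")
```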
Creating Custom Commands in CLDF: From Lexibank to Nexus Files
Thanks to the CLDFBench Python package, CLDF datasets compiled with CLDFBench have a rich command-line utility that can easily be extended with custom commands (Forkel et al. 2018, Forkel and List 2020). Taking as an example the creation of a Nexus file for phylogenetic analyses from an existing Lexibank Wordlist (List et al. 2022), this tutorial will guide you through the steps necessary to write a script that can be used as a CLDFBench command. You will then be able to apply the same workflow to other scripts you may want to use for a certain repository.
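As a rough preview of what such a command can look like (not the tutorial's actual script), the sketch below follows the CLDFBench convention of a module in the commands/ sub-package that provides register() and run() functions; it loads the CLDF data with pycldf and writes a very simple binary Nexus matrix from the cognate sets. The argument names, the conventional Lexibank column names, and the naive 0/1 coding are assumptions for illustration.

```python
"""Sketch of a custom command, e.g. saved as <package>/commands/make_nexus.py."""
from collections import defaultdict

from pycldf import Dataset


def register(parser):
    # Assumed arguments: path to the CLDF metadata file and the Nexus output file.
    parser.add_argument("--cldf", default="cldf/cldf-metadata.json")
    parser.add_argument("--output", default="data.nex")


def run(args):
    ds = Dataset.from_metadata(args.cldf)

    # Map form IDs to languages, so cognate sets can be assigned to taxa.
    form_to_language = {
        form["ID"]: form["Language_ID"] for form in ds["FormTable"].iterdicts()
    }

    # Collect, for every language, the cognate sets in which it occurs.
    by_language = defaultdict(set)
    for cognate in ds["CognateTable"].iterdicts():
        language = form_to_language[cognate["Form_ID"]]
        by_language[language].add(cognate["Cognateset_ID"])

    # One binary character per cognate set: 1 if the language has a reflex.
    cogsets = sorted(set.union(*by_language.values()))
    with open(args.output, "w", encoding="utf8") as nexus:
        nexus.write("#NEXUS\nBEGIN DATA;\n")
        nexus.write("DIMENSIONS NTAX={0} NCHAR={1};\n".format(
            len(by_language), len(cogsets)))
        nexus.write('FORMAT DATATYPE=STANDARD SYMBOLS="01" MISSING=?;\nMATRIX\n')
        for language, present in sorted(by_language.items()):
            row = "".join("1" if c in present else "0" for c in cogsets)
            nexus.write("{0} {1}\n".format(language, row))
        nexus.write(";\nEND;\n")
```

In a real repository the command would then be available through the CLDFBench command-line interface, and the resulting file can be read by phylogenetic software such as MrBayes or BEAST.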
Converting Streitberg’s Gothic Dictionary to a CLDF Wordlist on a Windows System
I recently converted the Gothic dictionary written by Wilhelm Streitberg to a CLDF wordlist. Since I was using Windows, I ran into some difficulties during the conversion process that Unix users may not have to deal with. I thought it would be useful to share my experience here and point out certain aspects that users of Windows operating systems should be aware of when converting data to CLDF.
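The post itself spells out the pitfalls; purely as an illustration of the kind of difference between Windows and Unix systems (and not necessarily one of the issues discussed in the post), Python's open() falls back to the system code page on Windows unless an encoding is given, so it is safer to state the encoding explicitly. The file names below are hypothetical.

```python
# On Windows, open() may default to a legacy code page (e.g. cp1252) instead of
# UTF-8, which can garble special characters in dictionary data.
with open("gothic_dictionary.txt", encoding="utf-8") as infile:   # hypothetical input file
    lines = infile.readlines()

with open("wordlist.tsv", "w", encoding="utf-8") as outfile:      # hypothetical output file
    outfile.write("ID\tForm\tGloss\n")
```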
Comparing NoRaRe data sets: Calculation of correlations and creation of plots in R
In a recent blog post, I introduced the Database of Cross-Linguistic Norms, Ratings, and Relations for Words and Concepts (NoRaRe) and demonstrated how to add new data sets (Tjuka 2021). The database currently includes 65 unique word and concept properties based on 98 different data sets across 40 languages (NoRaRe v0.2, Tjuka et al. 2021a) and can easily be expanded further. But what can we do with the data? The article presenting the NoRaRe database already included three case studies that illustrate its application (Tjuka et al. 2021b). This blog post therefore provides a tutorial on how to compare NoRaRe data sets in R by conducting a new case study that correlates ratings on arousal in English and Dutch.
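The tutorial carries out the analysis in R; purely to sketch the shape of such a comparison (and not the post's actual workflow), the following Python lines merge two hypothetical arousal-rating exports on their Concepticon IDs and correlate the ratings, with invented file and column names.

```python
# Sketch only: the post's analysis is done in R; files and columns are invented.
import pandas as pd
from scipy.stats import spearmanr

english = pd.read_csv("english_arousal.tsv", sep="\t")  # hypothetical NoRaRe export
dutch = pd.read_csv("dutch_arousal.tsv", sep="\t")      # hypothetical NoRaRe export

# Ratings are matched via the concept sets they are linked to.
merged = english.merge(dutch, on="CONCEPTICON_ID", suffixes=("_en", "_nl"))
rho, p = spearmanr(merged["AROUSAL_en"], merged["AROUSAL_nl"])
print(f"Spearman's rho = {rho:.2f} (p = {p:.3g}, n = {len(merged)})")
```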
Adding data sets to NoRaRe: A guide for beginners
There has been much discussion about the reproducibility of research and how it can be improved (Munafò et al. 2017). One antidote to the “reproducibility crisis” seems obvious: data sharing. However, an additional point that is mentioned less often is the standardization of shared data to make them comparable. Especially for cross-linguistic studies, this is an important step toward conducting reproducible studies in different languages. The NoRaRe database (Tjuka et al. 2021a) is a resource that provides standardized cross-linguistic data on norms, ratings, and relations published in psychology and linguistics. In this blog post, I provide a beginner’s guide to adding data sets to NoRaRe.
Adding concept lists to Concepticon: A guide for beginners
Scientific data should be openly accessible. This includes databases that are designed for collaborative work. In most cases, however, such databases are extended only by a team of experts. If a database is to be truly collaborative, its workflows need to be accessible to everybody. The Concepticon database (List et al. 2019) invites contributors to include their own data sets. This requires a transparent description of the contribution process.
Automated Mapping of Metadata to Concepticon
While the core of the Concepticon project (https://concepticon.clld.org, List et al. 2019) consists of the numerous concept lists that are constantly being added by a growing list of contributors, we have also tried, from the beginning of the project with its first version (List et al. 2016), to collect various kinds of concept metadata for all our concept sets.
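Mapping here means linking glosses from external resources to Concepticon concept sets. As a bare-bones sketch of the idea (the project's own tooling is considerably more sophisticated and also handles fuzzy matches), the following lines build a gloss index from concepticon.tsv and look up glosses by exact match; the file path and the ID and GLOSS column names are assumptions based on the layout of the concepticon-data repository.

```python
# Bare-bones gloss-to-concept-set mapping by normalized exact match.
import csv

def load_gloss_index(path="concepticondata/concepticon.tsv"):  # assumed path
    index = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            index[row["GLOSS"].strip().upper()] = row["ID"]
    return index

def map_glosses(glosses, index):
    # Return (gloss, concept set ID or None if no exact match) pairs.
    return [(gloss, index.get(gloss.strip().upper())) for gloss in glosses]

index = load_gloss_index()
print(map_glosses(["hand", "arm", "quicksilver"], index))
```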
A Primer on Automatic Inference of Sound Correspondence Patterns (1): Introduction
After about three years of work on the matter, I have managed (with the help of many colleagues who assisted in testing) to develop a first approach for the automatic inference of sound correspondence patterns, which will soon be published in Computational Linguistics (List 2019). The key task this algorithm solves is to take aligned data as input and to compute explicit sound correspondence patterns from the alignments.
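To illustrate what computing correspondence patterns from alignments means (a toy example, not the published algorithm), one can record, for each alignment site, which sound each language shows and then group identical columns into patterns:

```python
# Toy illustration: alignment columns that show the same sounds across the same
# languages are grouped into one correspondence pattern.
from collections import defaultdict

# Hypothetical aligned cognate sets: language -> aligned segments ("-" = gap).
alignments = [
    {"German": ["t", "o", "x", "t", "ə", "r"], "English": ["d", "ɔː", "-", "t", "ə", "r"]},
    {"German": ["t", "iː", "f"], "English": ["d", "iː", "p"]},
]

patterns = defaultdict(list)
for idx, alignment in enumerate(alignments):
    languages = sorted(alignment)
    for site in range(len(alignment[languages[0]])):
        # The column of sounds at this alignment site, in fixed language order.
        column = tuple(alignment[language][site] for language in languages)
        patterns[column].append((idx, site))

# Each key is a pattern; e.g. English d : German t recurs in both cognate sets.
for column, sites in patterns.items():
    print(column, "occurs at (cognate set, site):", sites)
```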