The Dogon and Bangime Linguistics project (https://dogonlanguages.org) offers a large comparative spreadsheet in which translational equivalents for a huge number of concepts are provided for various Dogon languages. Due to its enormous size, no attempts have been made so far to integrate the spreadsheet with the lexical resources that were compiled as part of the CLDF initiative in order to populate the Lexibank repository. Here, we report a first attempt to circumvent the problems resulting from the size of the spreadsheet by converting not all but parts of it to CLDF Wordlist standards, which allows us to integrate these parts of the data with other resources in Lexibank.
Creating a Standardized Comparative Wordlist of Newari Varieties
Newari is one of the few ancient Sino-Tibetan languages attested in written texts. Since previous studies on the phylogeny of Sino-Tibetan did not take Newari data into account, we felt it was important to close this gap by providing an up-to-date comparative wordlist of Newari varieties. This wordlist has now been finalized in a first version, which has additionally been standardized following the recommendations of the Cross-Linguistic Data Formats initiative.
A New Dataset with Phonological Reconstructions in CLDF
Data in historical linguistics is typically presented in non-machine-readable formats, such as text-based supplementary material or even handwritten manuscripts. Many annotations and important facts are given in prose or remain within linguists’ heads. These problems make it difficult for non-experts in the specific field to understand the data and to reproduce and replicate the results, and they also limit the exposure that linguists receive for their hard work. Similar to previous blog posts on retro-standardizing data, we present the digitization of a dataset that includes phonological reconstructions. By representing this kind of data in CLDF, we can apply a variety of computer-assisted methods to assess the quality of the reconstructions.
Leveraging JavaScript, jQuery, and ChatGPT for Data Extraction from Web Tables
This case study explores the combined use of JavaScript, jQuery, and the AI-driven tool ChatGPT to efficiently extract data from HTML tables, specifically focusing on a website containing Middle Chinese and Old Chinese readings of characters. The study provides a step-by-step guide for accessing the website, utilizing browser developer tools, implementing JavaScript and jQuery code, and leveraging ChatGPT to refine the extraction process. By employing this methodology, the extraction of Chinese characters and their corresponding readings from an HTML table was automated, saving time and effort. The resulting data was then imported into a Google Sheets document for further analysis. This case study highlights the potential of AI-driven tools to enhance web development tasks and streamline data extraction processes, demonstrating their value for both technical and non-technical users.
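The extraction in the post itself runs as jQuery code inside the browser console. For readers who prefer a scripted route outside the browser, a comparable step can be sketched in Python instead; note that this is not the post’s approach, and that the URL and column layout below are placeholders rather than the site discussed in the study.

```python
# Rough Python sketch (not the jQuery approach from the post): pull
# character/reading columns out of an HTML table and save them as CSV.
# The URL and the assumed three-column layout are placeholders.
import csv
import urllib.request

from bs4 import BeautifulSoup  # pip install beautifulsoup4

url = "https://example.org/readings.html"  # placeholder URL
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, "html.parser")

rows = []
for tr in soup.select("table tr"):
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) >= 3:  # e.g. character, Middle Chinese, Old Chinese
        rows.append(cells[:3])

with open("readings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Character", "Middle_Chinese", "Old_Chinese"])
    writer.writerows(rows)
```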
The Release of Concepticon 3.1
In the Concepticon project, we regularly add new concept lists, improve the links to concept sets, and discuss ambiguous cases. With every new release, the Concepticon is updated and the changes are published. Here, I discuss the improvements we integrated into the newest version, Concepticon 3.1. After covering the new lists that were added to Concepticon in this release, I describe the process of refining concept relations and mappings. I also demonstrate how we deal with inconsistencies in the data, using the example of a wordlist that proved to be inconsistent. The aim is to give an overview of the processes and discussions that led to the new version of Concepticon.
How to Organize Literature and Notes in Zotero (How to do X in Linguistics 13)
Reading, summarizing, and citing literature is an essential part of every student’s and researcher’s life. While there are many books and workshops on academic writing, there is far less information on how to organize literature efficiently. For this reason, I offer an overview of how to organize literature and notes in Zotero. Specifically, I illustrate some of my own workflows for organizing the literature for my dissertation. I also discuss general features of Zotero. Readers who want to learn more will benefit from the related links.
Cross-Linguistic Colexifications with Body Concepts: Metaphor, Metonymy, Analogy
Colexification describes the relation between two meanings that are expressed with the same form in a given language. A colexification is established on the basis of a linguistic analysis of word meanings in the same language. While the term is a cover term for different semantic relations (i.e., vagueness, polysemy, homophony), the discussion of particular types of colexifications is often connected to linguistic terminology such as metaphor or metonymy. This is not only because there are prominent linguistic theories that argue for the pervasiveness of metaphor (and metonymy) in everyday life, but also because semantic relations are assumed to mirror conceptual relations. The linguistic analysis of metaphor and metonymy thus provides insights into the human mind. However, one needs to be careful when making claims about cognitive mechanisms based solely on linguistic evidence. It is therefore important to also consider frameworks from psychology, such as analogical reasoning, in order to explain the processes behind a linguistic phenomenon. In the following, I discuss ideas on metaphor and metonymy from linguistics that highlight the cognitive underpinnings of both notions, as well as a proposal for how analogical reasoning can explain their processing.
The Origins of Cross-Linguistic Colexifications
In recent years, studies exploring the phenomenon of colexification across languages have steadily increased in number. Colexification occurs when a word has multiple meanings, regardless of whether the meanings are related (dish ‘plate; meal’) or unrelated (bank ‘financial institution; part of a river’). The investigation of cross-linguistic colexifications yields many interesting findings that are important for different research fields. Psychologists and cognitive scientists are interested in the overarching principles that establish a connection between meanings and in how speakers categorize the environment around them. Historical linguists are concerned with the diachronic processes that lead to semantic shifts and with what these can tell us about language evolution. Typologists engage in the study of language contact scenarios and how linguistic areas are formed. All these processes are entwined with one another, and disentangling them is a challenge. This blog post takes a first step toward a deeper exploration of the origins of cross-linguistic colexifications and discusses the four processes underlying this phenomenon.
The Small Bang
The Small Bang is a recently launched ERC-funded project dedicated to uncovering the origins of the Bangime language and the Bangande people. The language and its speakers are of particular interest to West African research, as Bangime is one of the only isolates spoken in the region, and the origins of the Bangande are likewise unknown. Using the latest computer-assisted technologies, the Bang team is amassing linguistic and genetic data and comparing them with a hitherto unexplored set of languages and peoples in search of a hidden history. Preliminary hypotheses suggest geographic isolation that has led to a bottleneck of at least 9,000 years. The question of the origins of the people and their language remains open.
Creating Custom Commands in CLDF: From Lexibank to Nexus Files
Thanks to the CLDFBench Python package, CLDF datasets compiled with CLDFBench have a rich command-line utility that can easily be extended with custom commands (Forkel et al. 2018, Forkel and List 2020). Taking as an example the creation of a Nexus file for phylogenetic analysis from an existing Lexibank Wordlist (List et al. 2022), this tutorial guides you through the steps necessary to write a script that can be used as a CLDFBench command. You will then be able to apply the same workflow to other scripts you may want to use for a certain repository.
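In outline, such a command is a Python module exposing the two functions CLDFBench looks for, register(parser) and run(args). The following is only a minimal sketch, not the tutorial’s actual script: the dataset module cldfbench_mydataset is a placeholder, and the Nexus writer simply encodes presence or absence of cognate sets per language.

```python
"""Hypothetical CLDFBench command converting a CLDF wordlist to Nexus."""
from collections import defaultdict

from cldfbench_mydataset import Dataset  # placeholder dataset module


def register(parser):
    parser.add_argument("--output", default="data.nex", help="Nexus output file")


def run(args):
    cldf = Dataset().cldf_reader()
    # Look up the language of every form by its form ID.
    form2lang = {
        row["id"]: row["languageReference"]
        for row in cldf.iter_rows("FormTable", "id", "languageReference")
    }
    # Record which cognate sets are attested in which language.
    matrix = defaultdict(set)
    for cog in cldf.iter_rows("CognateTable", "formReference", "cognatesetReference"):
        matrix[form2lang[cog["formReference"]]].add(cog["cognatesetReference"])
    cogsets = sorted({c for sets in matrix.values() for c in sets})
    # Write a binary presence/absence matrix in Nexus format.
    with open(args.output, "w", encoding="utf-8") as f:
        f.write("#NEXUS\nBEGIN DATA;\n")
        f.write("DIMENSIONS NTAX={} NCHAR={};\n".format(len(matrix), len(cogsets)))
        f.write('FORMAT DATATYPE=STANDARD SYMBOLS="01" MISSING=?;\nMATRIX\n')
        for language, sets in sorted(matrix.items()):
            f.write("{} {}\n".format(
                language, "".join("1" if c in sets else "0" for c in cogsets)))
        f.write(";\nEND;\n")
```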
Querying Datasets with Cognates in the Lexibank Repository
Recently, I was asked by a colleague how one could query only those datasets in the Lexibank repository that come with cognate sets annotated by humans. While I first thought this could be done in a very straightforward way, I figured out when trying it myself that the code still needs some workarounds. As a result, I thought it best to share the solution I came up with in a blog post, in order to make it accessible to colleagues who might be interested in inspecting and using the data provided by the Lexibank repository in more detail.
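A stripped-down version of the kind of check involved might look as follows. This is a sketch, not the post’s full solution: the directory layout is an assumption about where the Lexibank datasets have been downloaded, and the mere presence of a CognateTable is only a first approximation of “annotated by humans”.

```python
# Sketch: list downloaded CLDF datasets that contain cognate judgments.
# The glob pattern assumes datasets cloned into a 'lexibank-data' folder.
from pathlib import Path

from pycldf import Dataset

for path in sorted(Path("lexibank-data").glob("*/cldf/cldf-metadata.json")):
    ds = Dataset.from_metadata(path)
    try:
        count = sum(1 for _ in ds.iter_rows("CognateTable"))
    except KeyError:  # dataset has no CognateTable component
        count = 0
    if count:
        print("{}: {} cognate judgments".format(path.parts[-3], count))
```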
PyEDICTOR: A Small Python Package that Integrates LingPy, EDICTOR, and CLDF
With the introduction of CLDF as a major format for data exchange, there is an increased need for handy solutions to convert CLDF to the formats required by computer-assisted tools like LingPy and EDICTOR, which make it possible to preprocess data automatically or to curate data by adding detailed annotations. With the publication of PyEDICTOR, there is now a very lightweight software package that provides first solutions for a successful integration of CLDF with these existing tools for computer-assisted language comparison.
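As a rough illustration of what such an integration involves, the following sketch uses LingPy’s own CLDF loader rather than PyEDICTOR itself; the metadata path is a placeholder for a locally downloaded dataset.

```python
# Load a CLDF wordlist into LingPy and dump it as TSV for EDICTOR.
# The metadata path below is a placeholder.
from lingpy import Wordlist

wl = Wordlist.from_cldf("cldf/cldf-metadata.json")
print(wl.height, "concepts,", wl.width, "languages")
wl.output("tsv", filename="wordlist")  # a file EDICTOR can open
```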
How to Visualize Colexification Networks with JavaScript and D3 (How to do X in Linguistics 12)
Having seen how colexifications can be inferred and how colexification networks can be computed in previous posts, this post concludes our mini-series by showing how computed colexification networks can be visualized interactively, using a JavaScript application based on the popular visualization library D3.
How to Compute Colexification Networks with CL Toolkit (How to do X in Linguistics 11)
A colexification network is a network whose nodes are concepts and whose weighted edges indicate how often two concepts colexify across the data in a given sample of languages. Having seen how individual colexifications can be computed with the help of the CL Toolkit package in an earlier blog post, we will now see how this code needs to be extended in order to compute colexification networks.
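The aggregation step can be sketched independently of the data loading. In the snippet below, the toy dictionary stands in for colexifications extracted with CL Toolkit, and networkx is used to accumulate the edge weights; this is an illustration of the idea, not the tutorial’s code.

```python
# Sketch: aggregate per-language colexifications into a weighted network.
# 'colex_by_language' is toy data standing in for counts derived from CLDF.
import networkx as nx

colex_by_language = {
    "LanguageA": [("ARM", "HAND")],
    "LanguageB": [("ARM", "HAND"), ("BARK", "SKIN")],
}

graph = nx.Graph()
for language, pairs in colex_by_language.items():
    for concept_a, concept_b in set(pairs):  # count each pair once per language
        if graph.has_edge(concept_a, concept_b):
            graph[concept_a][concept_b]["weight"] += 1
        else:
            graph.add_edge(concept_a, concept_b, weight=1)

for node_a, node_b, data in graph.edges(data=True):
    print(node_a, "--", node_b, "weight:", data["weight"])
```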
How to Compute Colexifications with CL Toolkit (How to do X in Linguistics 10)
Colleagues often ask us how they could obtain more detailed information on specific languages and colexifications in the CLICS database. With the publication of the CL Toolkit package, which makes it possible to merge several CLDF datasets on the fly, carrying out analyses on certain parts of the data underlying the CLICS database is now much easier than before. To illustrate this, this tutorial shows how colexifications for a selected number of languages can be computed from two distinct datasets included in CLICS (Version 3).
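In outline, the approach can be sketched as follows; this is a minimal sketch, not the tutorial’s code, with the metadata path standing in for a locally downloaded dataset and attribute names following CL Toolkit’s data model.

```python
# Sketch: find colexifications in one CLDF dataset with CL Toolkit.
# The metadata path is a placeholder for a locally downloaded dataset.
from collections import defaultdict

from cltoolkit import Wordlist
from pycldf import Dataset

wl = Wordlist([Dataset.from_metadata("cldf/cldf-metadata.json")])
for language in wl.languages:
    concepts_by_form = defaultdict(set)
    for form in language.forms:
        if form.concept:  # only forms mapped to Concepticon concepts
            concepts_by_form[form.form].add(form.concept.id)
    for value, concepts in concepts_by_form.items():
        if len(concepts) > 1:  # one form expressing several concepts
            print(language.name, value, sorted(concepts))
```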