Comparative wordlists are a fundamental tool for tracing language history, allowing us to see how languages are related, much like biologists use DNA to infer phylogenies of species. When linguists compile data from different sources, however, they often find that the lexical data have been coded with individual transcription systems that cannot be directly compared with each other. To make such data comparable, these transcription systems must be unified to reflect a common standard. This study illustrates how such unification can be carried out, taking a particularly diverse dataset on Semitic languages as an example and showing how transcriptions for individual language varieties can be harmonized as part of the general standardization workflow proposed by the Cross-Linguistic Data Formats initiative.
Manipulating Lexical Forms with the PyLexibank FormSpec
Multilingual lexical data is typically stored in a wide variety of forms, based on many idiosyncratic decisions that vary from dataset to dataset. Here, a simple but efficient solution for the manipulation of lexical data in multilingual wordlists is introduced. This solution, the PyLexibank FormSpec, was originally developed for the conversion of various kinds of lexical data to Cross-Linguistic Data Formats, but it can also be used as a standalone tool. This study offers a basic tutorial that illustrates how the FormSpec can be put to concrete use.
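To give a first impression of what this looks like in practice, here is a minimal sketch of the FormSpec used as a standalone tool. It assumes a recent version of pylexibank; the attribute values largely restate common defaults, the sample row and value are invented, and the exact signature of split() should be checked against the installed version.

```python
from pylexibank import FormSpec

# A FormSpec declares how raw values from a source are cleaned up and
# split into individual word forms.
spec = FormSpec(
    brackets={"(": ")"},      # bracketed comments like "(archaic)" are stripped
    separators=";/,",         # cells with several variants are split here
    missing_data=("?", "-"),  # these values are treated as missing data
)

# split() takes the row being processed (used for error reporting)
# and the raw value, and returns a list of cleaned forms.
print(spec.split({}, "hand (body part); arm"))
# expected output: ['hand', 'arm']
```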
Integrating Semantic Embeddings into NoRaRe
This study illustrates how semantic embeddings can be added to and retrieved from NoRaRe. In doing so, it provides a template for handling vector data and makes popular methods from semantic modeling available for cross-linguistic comparison.
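As a rough illustration of what handling such vector data involves, the sketch below loads word vectors from a tab-separated file and compares two of them. The file name, its layout, and the sample words are assumptions made for this example and do not reflect how NoRaRe actually stores embeddings.

```python
import numpy as np

def load_embeddings(path):
    """Read one word per line, followed by its tab-separated vector components."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            cells = line.strip().split("\t")
            vectors[cells[0]] = np.array([float(x) for x in cells[1:]])
    return vectors

def cosine(a, b):
    """Cosine similarity, the standard comparison measure for embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = load_embeddings("embeddings.tsv")  # hypothetical file
print(cosine(embeddings["moon"], embeddings["sun"]))
```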
Illustrating Data Curation in NoRaRe with the Help of Templates
This study introduces a collection of templates that can be used to contribute data to the Database of Norms, Ratings, and Relations of Words and Concepts (NoRaRe). The templates are intended to facilitate the process of dataset conversion and serve as a starting point for those who are interested in contributing data to the catalog. A first template structure with two sample datasets is introduced and discussed in more detail, pointing to those aspects of data curation that may lead to confusion among users who contribute to the NoRaRe database for the first time.
Digitizing Legacy Lexical Data of Muishaung for Computer-Assisted Language Comparison
This study describes the process of digitizing legacy materials into a computer-readable format for the purposes of computational typology and computer-assisted historical reconstruction. It presents a comparative wordlist that is made available in the formats recommended by the Cross-Linguistic Data Formats initiative.
Making a Lexibank Dataset from Lee’s “Phonological Features of Caijia” from 2023
Caijia is a very interesting Sino-Tibetan language variety that has been documented only recently. It seems to belong to the Sinitic branch of Sino-Tibetan, but it shows some archaic features that have led to controversies among scholars regarding its proper affiliation, and detailed comparative analyses of the language against other Sino-Tibetan languages are still in their infancy. This little study demonstrates how a first published wordlist of Caijia (Lee 2023) can be prepared for inclusion in the Lexibank repository.
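The concrete conversion code is in the study itself; as a hedged sketch, a Lexibank dataset is typically defined through a pylexibank Dataset class along the following lines, where the identifier leecaijia, the raw file wordlist.csv, and its two-column layout are all invented for illustration.

```python
from pathlib import Path
from pylexibank import Dataset as BaseDataset

class Dataset(BaseDataset):
    dir = Path(__file__).parent
    id = "leecaijia"  # hypothetical dataset identifier

    def cmd_makecldf(self, args):
        # Register concepts and languages from the files in etc/.
        args.writer.add_concepts()
        args.writer.add_languages()
        # Add forms from each row of the (invented) raw wordlist,
        # letting the dataset's FormSpec clean and split the values.
        for concept, value in self.raw_dir.read_csv("wordlist.csv")[1:]:
            args.writer.add_forms_from_value(
                Language_ID="Caijia",
                Parameter_ID=concept,
                Value=value,
            )
```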
Extracting Transparent Compounds from Lexibank
Many languages make use of transparent compounding processes to express certain words in their lexicon. Over time, such compounds can lose their transparency, making them hard to detect automatically. With large data collections, simple tests can be designed to detect transparent compounds and investigate their distribution. This study illustrates how a very rudimentary analysis of cross-linguistically recurring transparent compound patterns can be applied to Lexibank data with a few lines of Python code.
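To make the idea concrete, here is a toy version of such a test with invented data: a form counts as a candidate transparent compound if it equals the concatenation of the forms expressed by two other concepts in the same language. A real analysis would run over segmented forms from Lexibank rather than plain strings.

```python
from itertools import permutations

# Toy wordlist for a single language: concept -> form. The forms are
# invented for illustration, loosely inspired by Mandarin Chinese.
wordlist = {
    "moon": "yuèliang",
    "month": "yuè",
    "bright": "liang",
}

def transparent_compounds(forms):
    """Yield (target, part1, part2) where the form for `target` is the
    concatenation of the forms of two other concepts."""
    for target, part1, part2 in permutations(forms, 3):
        if forms[target] == forms[part1] + forms[part2]:
            yield target, part1, part2

for target, p1, p2 in transparent_compounds(wordlist):
    print(f"{target} = {p1} + {p2}")
# prints: moon = month + bright
```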
PyLexibench — Generating Data for Lexibench with a Python Package
With PyLexibench, we introduce a small Python package that can be used to populate Lexibench, the benchmark for computational historical linguistics, with data. Here, we present the package and show how it helps to access and expand Lexibench. We also introduce new data for character matrices in various forms and formats and lay out how we intend to use the package to manage Lexibench releases in the future.
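The package's own interface is documented in the study; as an illustration of what one of the character-matrix formats amounts to, independent of PyLexibench's actual API, the following sketch turns invented cognate-set assignments into the kind of binary presence/absence matrix used by phylogenetic software.

```python
# Cognate-set assignments per concept: language -> cognate set ID
# (invented sample data).
cognates = {
    "hand": {"German": 1, "English": 1, "French": 2},
    "head": {"German": 3, "English": 3, "French": 4},
}
languages = ["German", "English", "French"]

# Build a binary character matrix: one column per cognate set,
# 1 if the language has a word in that set, 0 otherwise.
matrix = {lg: [] for lg in languages}
for concept, sets in sorted(cognates.items()):
    for cogid in sorted(set(sets.values())):
        for lg in languages:
            matrix[lg].append(1 if sets.get(lg) == cogid else 0)

for lg in languages:
    print(lg, "".join(str(cell) for cell in matrix[lg]))
# German  1010
# English 1010
# French  0101
```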
Handling Non-Standard Datasets in NoRaRe: A Practical Guide
NoRaRe, the Database of Cross-Linguistic Norms, Ratings, and Relations, is a resource that curates multiple datasets containing information on various properties of words and concepts. When researchers contribute their data, the format and structure can vary widely, presenting challenges for seamless integration. Here, I offer practical guidance for addressing common issues such as data being placed in different sheets, headers in unexpected rows, or datasets shipped as zip files. The strategies shared here offer a foundational approach to understanding NoRaRe’s flexibility and adapting it to accommodate the idiosyncrasies of each dataset.
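Two of these issues can be handled with a few lines of pandas; in the sketch below, the file names, sheet name, and header position are invented stand-ins for whatever a contributed dataset actually uses.

```python
import zipfile

import pandas as pd

# Issue 1: the relevant table sits on a named sheet, and the real header
# is in the third row (the rows above hold titles or notes).
ratings = pd.read_excel("norms.xlsx", sheet_name="ratings", header=2)

# Issue 2: the dataset ships as a CSV inside a zip archive.
with zipfile.ZipFile("dataset.zip") as zf:
    norms = pd.read_csv(zf.open("norms.csv"))
```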
Lexibench: Towards an Improved Collection of Benchmark Data for Computational Historical Linguistics
Computational approaches in historical linguistics have made great progress during the past two decades. By now, it is much more common to propose subgroupings based on phylogenetic analyses than on traditional considerations of shared innovations. We have also seen a drastic increase in openly available datasets that share cognate judgments for various language families. Thanks to new standardization efforts providing easier access to several dozen comparative wordlists, it seems about time to work on improved benchmarks of manually annotated cognates in computational historical linguistics. In this study, a first effort of this kind is undertaken by presenting Lexibench, a preliminary gold standard for computational historical linguistics. Lexibench builds on the Lexibank repository to extract 63 multilingual wordlists, all manually annotated for cognacy, which can be used to assess the quality of cognate detection and phylogenetic reconstruction methods in computational historical linguistics.
How to Run EDICTOR 3 Locally
EDICTOR 3 offers many ways of comparing language data with computer-assisted methods. This study offers a short overview of how to run EDICTOR 3 locally, without the need to upload data to a server or to connect to the internet, while maintaining all functionalities. In a first step, we show how one can download a Lexibank dataset and create different types of files for use with EDICTOR. We then show how an EDICTOR server can be run locally in order to edit the dataset that one has downloaded.
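As a sketch of the first step, the following assumes the lingpy package and a Lexibank dataset that has already been downloaded; the path to the CLDF metadata file is hypothetical, and the column selection may need adjusting to the dataset at hand.

```python
from lingpy import Wordlist

# Load the downloaded Lexibank dataset from its CLDF metadata file
# (hypothetical path), keeping the columns EDICTOR typically expects.
wl = Wordlist.from_cldf(
    "lexibank-dataset/cldf/cldf-metadata.json",
    columns=("parameter_id", "concept_name", "language_id",
             "language_name", "value", "form", "segments"),
)

# Write a TSV file that can then be opened and edited in EDICTOR.
wl.output("tsv", filename="wordlist", ignore="all", prettify=False)
```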
Using CLDFBench and PyLexibank on Windows
Using tools such as CLDFBench and PyLexibank, datasets can be converted into Cross-Linguistic Data Formats (CLDF), offering a standardized and interoperable representation of linguistic data. While these tools are powerful, lifting datasets to CLDF can present unique challenges for Windows users due to idiosyncrasies in the Windows operating system. Although CLDFBench and PyLexibank are compatible with Windows, certain workarounds may be necessary to address system-specific issues. This guide aims to demonstrate how CLDFBench and PyLexibank can be effectively installed and used on a Windows computer to lift a dataset to CLDF.
Typing Special Characters as a Key Skill for Linguists
Most linguists regularly have to type special characters that are not available on an ordinary keyboard. Reflecting on the general problems involved in typing special characters, I review different solutions and argue that linguists should not only be able to type special characters on their computers, but should also have some basic knowledge of the technical aspects involved and know how to expand and customize their input solutions. In order to improve the training of young scholars, it is important to discuss the typing of special characters more openly in linguistics, especially in the classroom and with doctoral students, and to share individual solutions with each other.
Preparing Acoustic Pitch Data for Computational Analysis and Presentation
Pitch plays an important role in many linguistic systems. It is the primary set of features determining vowel quality distinctions, and it forms the basis for intonation and contrastive tone systems. Unfortunately, much of the literature has relied on approaches to presenting and analysing pitch data that can result in a lack of data transparency, reproducibility, and analytical robustness. These issues are easily solved through the selection of a more appropriate scale for pitch values. This study presents the issues with using raw pitch data as Hertz values, some historical efforts to resolve these issues, and two solutions that are more appropriate than some of the more widely used systems, together with a way to easily calculate these alternative scales in a short Python script.
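The study's own script is the place to look for details; as a hedged illustration, the following converts raw Hertz values to two commonly recommended alternative scales, semitones relative to a reference frequency and ERB-numbers, using the standard formulas. The choice of these two scales and the 100 Hz reference are assumptions of this sketch.

```python
import math

def hz_to_semitones(f, reference=100.0):
    """Convert a frequency in Hz to semitones relative to a reference
    frequency (100 Hz here, an arbitrary choice)."""
    return 12 * math.log2(f / reference)

def hz_to_erb(f):
    """Convert a frequency in Hz to the ERB-number scale
    (Glasberg & Moore 1990)."""
    return 21.4 * math.log10(1 + 0.00437 * f)

for hz in (100, 150, 200, 300):
    print(f"{hz:4d} Hz -> {hz_to_semitones(hz):6.2f} st, {hz_to_erb(hz):5.2f} ERB")
```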
Adding Standardized Transcriptions to Panoan and Tacanan Languages in the Intercontinental Dictionary Series
In this study, we illustrate how standardized phonetic transcriptions can be added to the data for Panoan and Tacanan languages provided by the Intercontinental Dictionary Series. The result is presented as a new dataset that keeps reference to the original data and adds phonetic transcriptions for each word form in Panoan languages, Tacanan languages, as well as Spanish and Portuguese.