Author Archives: Johann-Mattis List

About Johann-Mattis List

Since the beginning of 2023, I have been head of the Chair of Multilingual Computational Linguistics in Passau. In my research, I generally take a data-driven, empirical, and quantitative perspective on language change and language history, with a special focus on South-East Asian languages. In contrast to purely computer-based approaches, however, I try to keep my research close to traditional historical linguistics and linguistic theory, which is why I pursue a computer-assisted as opposed to a purely computer-based approach.

Transparent Application of Text Generation Tools in Scientific Research

In this opinion piece, I share my view on the application of language models and text generation services in scientific research. In my opinion, scientific research that lives up to the promises of open science must provide full documentation of all prompts and exchanges that were used to create a given study. A mere mention that AI tools have been used in study design, writing, or coding is not enough.

Continue reading

Towards a Unified Conversion Table for Semitic Transcriptions and Transliterations

In this study we present a preliminary conversion table that can be used for transcriptions and transliterations across different Semitic languages. We introduce the basic idea behind the table, show how it can be used, and explain how we hope to expand it in the future.
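A minimal sketch of the underlying idea in Python, with a purely hypothetical three-symbol mapping (the actual table in the study is far more comprehensive):

CONVERSION = {
    "š": "ʃ",   # transliteration symbol mapped to IPA
    "ṭ": "tˤ",  # emphatic t rendered as pharyngealized stop
    "ʿ": "ʕ",   # ayin
}

def convert(word):
    """Convert a transliterated word symbol by symbol."""
    return "".join(CONVERSION.get(char, char) for char in word)

print(convert("šarṭ"))  # -> "ʃartˤ"

Note that symbol-by-symbol replacement only works for single-codepoint graphemes; multi-character graphemes require longest-match segmentation, for example with orthography profiles.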

Continue reading

Manipulating Lexical Forms with the PyLexibank FormSpec

Multilingual lexical data is typically stored in a wide variety of forms, based on many idiosyncratic decisions that vary from dataset to dataset. Here, a simple but efficient solution for the manipulation of lexical data in multilingual wordlists is introduced. This solution, the PyLexibank FormSpec, was originally developed for the conversion of various kinds of lexical data to Cross-Linguistic Data Formats, but it can also be used as a standalone tool. This study offers a basic tutorial that illustrates how the FormSpec can be put to concrete use.
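The following sketch shows what a FormSpec configuration might look like; the attribute names follow the pylexibank documentation, but exact signatures may differ between versions, so this should be read as an illustration rather than as the code from the tutorial:

from pylexibank import FormSpec

# Declare explicitly how brackets, separators, missing data markers,
# and plain string replacements should be handled.
spec = FormSpec(
    brackets={"(": ")"},        # bracketed comments are stripped
    separators=";/,",           # variants are split into separate forms
    missing_data=("?", "-"),    # values treated as missing
    replacements=[(" ", "_")],  # plain string replacements
)

# split() takes a row object (used for logging, here None) and the raw
# value, and returns the list of cleaned forms.
print(spec.split(None, "hand; arm (arch.)"))  # roughly: ['hand', 'arm']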

Continue reading

Illustrating Data Curation in NoRaRe with the Help of Templates

This study introduces a collection of templates that can be used to contribute data to the Database of Norms, Ratings, and Relations (NoRaRe) of words and concepts. The templates are intended to facilitate the process of dataset conversion and serve as a starting point for those who are interested in contributing data to the catalog. A first template structure with two sample datasets is introduced and discussed in more detail, pointing to those aspects of data curation that may lead to confusion among users who contribute to the NoRaRe database for the first time.
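Since the templates are essentially spreadsheets, a contributed dataset can be inspected with a few lines of Python; the column names below are illustrative assumptions, not the actual template specification:

import csv

# Read a template-style TSV file and inspect the linked concepts and
# their ratings (hypothetical column names).
with open("template.tsv", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        print(row["CONCEPTICON_GLOSS"], row["RATING"])
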
Continue reading

Making a Lexibank Dataset from Lee’s “Phonological Features of Caijia” from 2023

Caijia is a very interesting Sino-Tibetan language variety. It has been documented only recently and seems to belong to the Sinitic branch of Sino-Tibetan, but it shows some archaic features that have led to controversies among scholars regarding its proper affiliation, and detailed comparative analyses of the language against other Sino-Tibetan languages are still in their infancy. This little study demonstrates how a first published wordlist of Caijia (Lee 2023) can be prepared for inclusion in the Lexibank repository.
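For readers unfamiliar with Lexibank datasets, the following is a minimal skeleton of the kind of dataset class that pylexibank expects; the identifier and the conversion logic are placeholders, not the actual code from the study:

from pathlib import Path
from pylexibank import Dataset as BaseDataset

class Dataset(BaseDataset):
    dir = Path(__file__).parent
    id = "leecaijia"  # hypothetical dataset identifier

    def cmd_makecldf(self, args):
        # Add languages, concepts, and sources here, then write the
        # forms, typically via args.writer.add_forms_from_value(...).
        pass
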
Continue reading

Extracting Transparent Compounds from Lexibank

Many languages make use of transparent compounding processes in order to express certain words in their lexicon. With time, these processes can lose their transparency, making them hard to detect automatically. With large data collections, simple tests can be designed to detect transparent compounds and investigate their distribution. This study illustrates how a very rudimentary analysis of cross-linguistically recurring transparent compound patterns can be applied to Lexibank data with a few lines of Python code.
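A rudimentary version of such a test can be sketched in pure Python: a form counts as a candidate compound if it equals the concatenation of the forms for two other concepts in the same language. The toy wordlist below is illustrative only:

wordlist = {
    "German": {"fire": "feuer", "mountain": "berg", "volcano": "feuerberg"},
}

for language, forms in wordlist.items():
    for concept, form in forms.items():
        for c1, f1 in forms.items():
            for c2, f2 in forms.items():
                if concept not in (c1, c2) and f1 + f2 == form:
                    print(f"{language}: {concept} = {c1} + {c2}")
# -> German: volcano = fire + mountain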

Continue reading

Lexibench: Towards an Improved Collection of Benchmark Data for Computational Historical Linguistics

Computational approaches in historical linguistics have made great progress during the past two decades. As of now, it is much more common to propose subgroupings based on phylogenetic analyses than on traditional considerations using shared innovations. We have also seen a drastic increase in openly available datasets that share cognate judgments for various language families. Thanks to new standardization efforts providing facilitated access to several dozen comparative wordlists, it seems about time to work on improved benchmarks of manually annotated cognates in computational historical linguistics. In this study, a first effort of this kind is undertaken by presenting Lexibench, a preliminary gold standard for computational historical linguistics. Lexibench builds on the Lexibank repository to extract 63 multilingual wordlists, all manually annotated for cognacy, that can be used to assess the quality of cognate detection and phylogenetic reconstruction methods in computational historical linguistics.
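To illustrate the kind of evaluation such a benchmark enables, the following sketch runs automatic cognate detection with LingPy on a wordlist that contains gold-standard cognate sets and scores the result with B-Cubed measures; the file name and column names are assumptions:

from lingpy import LexStat
from lingpy.evaluate.acd import bcubes

# Cluster words into cognate sets with the SCA method and compare the
# result against the gold standard stored in the "cogid" column.
lex = LexStat("wordlist.tsv")
lex.cluster(method="sca", threshold=0.45, ref="scaid")
precision, recall, fscore = bcubes(lex, "cogid", "scaid")
print(f"B-Cubed F-score: {fscore:.2f}")
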
Continue reading

Typing Special Characters as a Key Skill for Linguists

Most linguists regularly have to type special characters that are not available on an ordinary keyboard. Reflecting on the general problems involved in typing special characters, I review different solutions and argue that linguists should not only be able to type special characters on their computers, but should also have some basic knowledge of the technical aspects involved and know how to expand and customize their typing solutions. In order to improve the training of young scholars, it is important to discuss special character typing more openly in linguistics, especially in the classroom and with doctoral students, sharing individual solutions openly.
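One small piece of such technical knowledge is that every special character has a Unicode code point and name, which can easily be inspected, for instance in Python:

import unicodedata

# Print code point and Unicode name for some characters linguists
# frequently need to type.
for char in "ə ʃ ŋ".split():
    print(f"U+{ord(char):04X}", unicodedata.name(char), sep="\t")
# U+0259  LATIN SMALL LETTER SCHWA
# U+0283  LATIN SMALL LETTER ESH
# U+014B  LATIN SMALL LETTER ENG
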
Continue reading

Adding Standardized Transcriptions to Panoan and Tacanan Languages in the Intercontinental Dictionary Series

In this study, we illustrate how standardized phonetic transcriptions can be added to the data for Panoan and Tacanan languages provided by the Intercontinental Dictionary Series. The result is presented as a new dataset that keeps reference to the original data and adds phonetic transcriptions for each word form in the Panoan and Tacanan languages as well as in Spanish and Portuguese.
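Such standardized transcriptions are typically produced with orthography profiles; the following sketch uses the segments package with a purely hypothetical profile to show the principle (the actual profiles of the study are much larger):

from segments import Profile, Tokenizer

# Map graphemes of the source orthography to IPA via a profile.
profile = Profile(
    {"Grapheme": "qu", "IPA": "k"},
    {"Grapheme": "i", "IPA": "i"},
    {"Grapheme": "n", "IPA": "n"},
    {"Grapheme": "a", "IPA": "a"},
)
tokenizer = Tokenizer(profile=profile)
print(tokenizer("quina", column="IPA"))  # -> "k i n a"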

Continue reading

Converting an Artificial Proto-Language into Data for Testing Computational Approaches in Historical Linguistics

This small study shows how data for an artificially created language that was supposed to reflect features of “proto-languages”, predating modern languages by several thousand years, can be used in testing computational approaches in historical linguistics. In order to do so, a computational workflow is described that retrieves the data automatically, creates a comparative wordlist compatible in format with software tools for historical linguistics, and then uses a baseline method for automatic cognate detection to compare the artificial language against a sample of Indo-European languages. The results show that artificial languages might help to fill a gap in testing that has so far been ignored in the literature.
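As a sketch of what the baseline step could look like with LingPy, assuming the wordlist has been written to a file named "wordlist.tsv":

from lingpy import LexStat

# Cluster words into cognate sets with the SCA baseline method and
# write the result to a new wordlist file.
lex = LexStat("wordlist.tsv")
lex.cluster(method="sca", threshold=0.45, ref="cogid")
lex.output("tsv", filename="wordlist-cognates", prettify=False)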

Continue reading

A New Python Library for the Manipulation and Annotation of Linguistic Sequences

The Python package linse (https://pypi.org/project/linse) offers various methods for the manipulation and annotation of sequences. In this short overview, we summarize its major functionalities and provide some information on its background and how we intend to develop it further in the future.
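As a first impression, the following shows the kind of annotation linse offers, converting a segmented IPA sequence to sound classes; the function and model names follow the package documentation, but the installed version should be checked for the exact API:

from linse.annotate import soundclass

# Convert a segmented IPA sequence (German "Tochter") to SCA sound
# classes.
print(soundclass(["tʰ", "ɔ", "x", "t", "ə", "r"], model="sca"))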

Continue reading

Past and Future of Computer-Assisted Language Comparison in Practice

Our blog “Computer-Assisted Language Comparison in Practice” is going into its seventh year. We reflect on the role the blog has played in the past and present new goals and concrete ideas for the future. The most drastic innovation we initiated is to turn the blog into an open journal, which means that all future and successively also past contributions will be archived in PDF format with digital object identifiers.

Continue reading

Parsing IPA Transcriptions with CLTS

The Cross-Linguistic Transcription Systems project (CLTS, https://clts.clld.org) serves as a reference catalogue for speech sounds. At the core of the project is a generative method that parses existing IPA transcriptions (or transcriptions in other supported transcription systems) and checks whether they conform to the principles and components laid out in the reference catalogue. As a result, Cross-Linguistic Transcription Systems is much more than a simple list of possible speech sounds transcribed in the International Phonetic Alphabet: it is a system that makes it possible to generate possible speech sounds and to check whether sounds provided in various transcription systems contain problems. This study gives a short overview of the basic ideas that led to the creation of the database and the parsing method, and provides some examples showing how it can be employed in practice.
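In practice, the parsing method can be accessed via the pyclts package; the sketch below assumes a local copy of the CLTS data (e.g. cloned from https://github.com/cldf-clts/clts), whose path is passed to the CLTS class:

from pyclts import CLTS

# Look up a sound in the Broad IPA transcription system and let the
# parser generate its feature-based name.
bipa = CLTS("path/to/clts").bipa
sound = bipa["kʰ"]
print(sound.name)  # "aspirated voiceless velar stop consonant"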

Continue reading

Sequence Manipulation with Orthography Profiles in JavaScript

Orthography profiles allow for the explicit simultaneous segmentation and conversion of sequences from one orthography to another. They play a crucial role in the standardization workflows developed as part of the Cross-Linguistic Data Formats initiative, where they are used to convert original orthographies used for language documentation to a strict version of the International Phonetic Alphabet. Given that the basic algorithm by which orthography profiles can be used to segment and convert sequences across orthographies is very straightforward, it can be easily implemented in JavaScript.
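The post itself implements the procedure in JavaScript; for consistency with the other examples on this page, the following renders the same greedy longest-match idea as a Python sketch with a hypothetical two-grapheme profile:

def segment(word, profile):
    """Segment and convert a word with an orthography profile by
    always matching the longest grapheme first."""
    result, i = [], 0
    max_len = max(len(grapheme) for grapheme in profile)
    while i < len(word):
        for length in range(max_len, 0, -1):  # longest match first
            chunk = word[i:i + length]
            if chunk in profile:
                result.append(profile[chunk])
                i += len(chunk)
                break
        else:  # no grapheme matched: mark as unknown
            result.append("�")
            i += 1
    return " ".join(result)

print(segment("tschau", {"tsch": "tʃ", "au": "au̯"}))  # -> "tʃ au̯"
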
Continue reading

Creating a CLDF Wordlist from Heath et al.’s Dogon Comparative Wordlist

The Dogon and Bangime Linguistics project (https://dogonlanguages.org) offers a large comparative spreadsheet in which translational equivalents for a huge number of concepts are translated into various Dogon languages. Due to its enormous size, no attempts have been made so far to integrate the spreadsheet with the lexical resources that were compiled as part of the CLDF initiative in order to populate the Lexibank repository. Here, we report a first attempt to circumvent the problems resulting from the size of the spreadsheet by converting not all but only parts of the spreadsheet to CLDF Wordlist standards, which allows us to integrate parts of the data with other resources in Lexibank.
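The general strategy can be sketched in a few lines: rather than loading the whole spreadsheet at once, read it row by row and keep only a selection of columns; the file and column names below are assumptions for illustration:

import csv

# Keep only the English gloss and one Dogon language variety.
KEEP = ["gloss_English", "Toro_Tegu"]

with open("dogon-wordlist.csv", encoding="utf-8") as infile:
    rows = [{key: row[key] for key in KEEP} for row in csv.DictReader(infile)]

print(f"Extracted {len(rows)} rows with {len(KEEP)} columns.")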

Continue reading