Author Archives: Johann-Mattis List

About Johann-Mattis List

Since the beginning of 2023, I have been heading the Chair of Multilingual Computational Linguistics in Passau. In my research, I generally take a data-driven, empirical, and quantitative perspective on language change and language history, with a specific focus on South-East Asian languages. In contrast to purely computer-based approaches, however, I try to keep my research closely aligned with traditional historical linguistics and linguistic theory, which is why I pursue a computer-assisted rather than a purely computer-based approach.

Creating a CLDF Wordlist from Heath et al.’s Dogon Comparative Wordlist

The Dogon and Bangime Linguistics project (https://dogonlanguages.org) offers a large comparative spreadsheet in which a huge number of concepts is translated into various Dogon languages. Due to its enormous size, no attempts have been made so far to integrate the spreadsheet with the lexical resources that were compiled as part of the CLDF initiative in order to populate the Lexibank repository. Here, we report a first attempt to circumvent the problems resulting from the size of the spreadsheet by converting not all but only parts of the spreadsheet to the CLDF Wordlist standard, which allows us to integrate parts of the data with other resources in Lexibank.
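
To give a rough idea of the first step, the following sketch extracts a manageable subset of language columns from the big spreadsheet before any CLDF conversion takes place. The file name and column names are made up for illustration; the full post works with the actual spreadsheet layout.

```python
# A minimal sketch: keep only a few language columns of the large
# comparative spreadsheet. File and column names are hypothetical.
import csv

SELECTED = ["English", "Toro_Tegu", "Ben_Tey"]  # hypothetical column names

with open("dogon_comparative_wordlist.csv", encoding="utf-8") as infile, \
        open("dogon_subset.csv", "w", encoding="utf-8", newline="") as outfile:
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(outfile, fieldnames=SELECTED)
    writer.writeheader()
    for row in reader:
        writer.writerow({col: row.get(col, "") for col in SELECTED})
```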

Continue reading

Creating a Standardized Comparative Wordlist of Newari Varieties

Newari is one of the few ancient Sino-Tibetan languages attested in written texts. Since previous studies on the phylogeny of Sino-Tibetan did not take Newari data into account, we felt it was important to close this gap by providing an up-to-date comparative wordlist of Newari varieties. This wordlist has now been finalized in a first version that has additionally been standardized following the recommendations of the Cross-Linguistic Data Formats initiative.
Continue reading

Querying Datasets with Cognates in the Lexibank Repository

Recently, I was asked by a colleague how one could query only those datasets in the Lexibank repository which come with cognate sets annotated by humans. While I first thought this could be done in a very straightforward way, I figured out, when trying it myself, that the code still needs some workarounds. As a result, I thought it best to share the solution I came up with in a blog post, in order to make it accessible to colleagues who might be interested in inspecting and using the data provided by the Lexibank repository in more detail.
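
As a teaser, the following sketch shows the general idea with pycldf: loop over locally downloaded Lexibank datasets and keep those whose CognateTable is present and non-empty. The directory layout is an assumption, and the post discusses the additional workarounds needed in practice.

```python
# A rough sketch, assuming the datasets have been downloaded into a
# local "lexibank" folder with the usual <dataset>/cldf layout.
from pathlib import Path
from pycldf import Dataset

for metadata in Path("lexibank").glob("*/cldf/cldf-metadata.json"):
    ds = Dataset.from_metadata(metadata)
    try:
        cognates = list(ds.iter_rows("CognateTable"))
    except KeyError:
        continue  # dataset ships no cognate annotations at all
    if cognates:
        print(metadata.parent.parent.name, len(cognates))
```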

Continue reading

PyEDICTOR: A Small Python Package that Integrates LingPy, EDICTOR, and CLDF

With the introduction of CLDF as a major format for data exchange, there is an increased need for handy solutions to convert CLDF to the formats required by computer-assisted tools like LingPy and EDICTOR, which make it possible to preprocess data automatically or to curate data by adding detailed annotations. With the publication of PyEDICTOR, there is now a very lightweight software package that aims to provide first solutions for a successful integration of CLDF with these existing tools for computer-assisted language comparison.
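
As a minimal sketch of the kind of round trip involved, the following snippet uses LingPy's generic CLDF loader (not PyEDICTOR's own helpers, which the post introduces) to turn a CLDF dataset into a plain TSV file that EDICTOR can open; the metadata path and column selection are assumptions.

```python
# Load a CLDF wordlist with LingPy and write a TSV file for EDICTOR.
from lingpy import Wordlist

wl = Wordlist.from_cldf(
    "cldf/cldf-metadata.json",  # path assumed
    columns=["language_id", "concept_name", "value", "form", "segments"],
)
# Write a plain TSV file that EDICTOR can open for manual annotation.
wl.output("tsv", filename="wordlist", ignore="all", prettify=False)
```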

Continue reading

How to Visualize Colexification Networks with JavaScript and D3 (How to do X in Linguistics 12)

Having seen how colexifications can be inferred and how colexification networks can be computed in previous posts, this post concludes our mini-series by showing how computed colexification networks can be visualized interactively, using a JavaScript application based on the popular visualization library D3.
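
The visualization itself is written in JavaScript, but the data handed to D3 can conveniently be prepared in Python. As a small illustration with a toy graph, networkx can dump a weighted graph in node-link form (nodes plus links), which is close to the input that D3 force layouts typically consume.

```python
# Export a toy colexification graph as node-link JSON for D3.
import json

from networkx import Graph
from networkx.readwrite import json_graph

graph = Graph()
graph.add_edge("ARM", "HAND", weight=15)
graph.add_edge("HAND", "FIVE", weight=3)

with open("colexifications.json", "w", encoding="utf-8") as f:
    json.dump(json_graph.node_link_data(graph), f, indent=2)
```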

Continue reading

How to Compute Colexification Networks with CL Toolkit (How to do X in Linguistics 11)

A colexification network is a network whose nodes are concepts and whose weighted edges indicate how often two concepts colexify across the data in a given sample of languages. Having seen how individual colexifications can be computed with the help of the CL Toolkit package in an earlier blog post, we will now see how this code needs to be extended in order to compute colexification networks.
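
A condensed sketch of the extension looks as follows: per language, group forms by their form string, and count concept pairs that share a form as edge weights. The metadata path is an assumption, and the CL Toolkit attribute names follow my recollection and should be checked against the full post.

```python
# Build a weighted colexification network across languages.
from collections import defaultdict
from itertools import combinations

from cltoolkit import Wordlist
from networkx import Graph
from pycldf import Dataset

wl = Wordlist([Dataset.from_metadata("cldf/cldf-metadata.json")])  # path assumed
graph = Graph()
for language in wl.languages:
    by_form = defaultdict(set)
    for form in language.forms:
        if form.concept:
            by_form[form.form].add(form.concept.concepticon_gloss)
    for concepts in by_form.values():
        for c1, c2 in combinations(sorted(concepts), 2):
            if not graph.has_edge(c1, c2):
                graph.add_edge(c1, c2, weight=0)
            graph[c1][c2]["weight"] += 1
```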

Continue reading

How to Compute Colexifications with CL Toolkit (How to do X in Linguistics 10)

Colleagues often ask us how they can retrieve more detailed information on specific languages and colexifications in the CLICS database. With the publication of the CL Toolkit package, which makes it possible to merge several CLDF datasets on the fly, carrying out analyses on certain parts of the data underlying the CLICS database is now much easier than before. To illustrate this, this tutorial shows how colexifications for a selected number of languages can be computed from two distinct datasets that are included in CLICS (Version 3).
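
As a first approximation of the procedure, the following hedged sketch groups the forms of each language by their form string and reports concepts that share a form; attribute names are assumptions based on CL Toolkit as I recall it, and the tutorial itself shows the worked-out version on two CLICS datasets.

```python
# List colexified concept pairs per language in a single CLDF dataset.
from collections import defaultdict

from cltoolkit import Wordlist
from pycldf import Dataset

wl = Wordlist([Dataset.from_metadata("cldf/cldf-metadata.json")])  # path assumed
for language in wl.languages:
    colexified = defaultdict(set)
    for form in language.forms:
        if form.concept:
            colexified[form.form].add(form.concept.concepticon_gloss)
    for form, concepts in colexified.items():
        if len(concepts) > 1:
            print(language.name, form, sorted(concepts))
```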

Continue reading

An Animated Demo of the Wagner-Fischer Algorithm for Sequence Alignment

A long time ago I prepared an animated demo of the Wagner-Fischer algorithm for pairwise sequence alignment. Having used the demo to teach phonetic alignment in class, I thought it might be useful to share it officially, as it may also be interesting for colleagues who teach phonetic alignment or rudimentary JavaScript programming.
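
For readers who prefer code to animation, here is a compact, self-contained sketch of the algorithm: fill the dynamic-programming matrix of edit distances, then backtrace one optimal path to obtain an alignment, with "-" marking gaps.

```python
# Wagner-Fischer edit distance with traceback to one optimal alignment.
def wagner_fischer(a, b):
    m, n = len(a), len(b)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dist[i][0] = i
    for j in range(1, n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # (mis)match
    # Backtrace one optimal path to recover the alignment.
    alm_a, alm_b, i, j = [], [], m, n
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dist[i][j] == dist[i - 1][j - 1] + (a[i - 1] != b[j - 1]):
            alm_a.append(a[i - 1]); alm_b.append(b[j - 1]); i, j = i - 1, j - 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            alm_a.append(a[i - 1]); alm_b.append("-"); i -= 1
        else:
            alm_a.append("-"); alm_b.append(b[j - 1]); j -= 1
    return dist[m][n], alm_a[::-1], alm_b[::-1]

print(wagner_fischer("vater", "water"))
```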

Continue reading

How to Map Concepts with the PySem Library

Mapping concepts to common concept identifiers across resources has become an important task for the aggregation of lexical data from different sources. With the Concepticon, this task has been facilitated by a specific mapping algorithm with which a concept list can be automatically mapped to the concept sets in the Concepticon reference catalogue and then manually refined. PySem offers an additional possibility to map concepts to the Concepticon, but in contrast to the algorithm used in the Concepticon workflow, the PySem approach can be accessed from within Python applications.
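
A rough illustration of the mapping call follows; I sketch the function name and signature from memory here, so both should be verified against the PySem documentation and the full post before use.

```python
# A hedged sketch: map English glosses to Concepticon concept sets.
# The to_concepticon signature follows my recollection of PySem.
from pysem import to_concepticon

mappings = to_concepticon([("hand", "noun"), ("arm", "noun")], language="en")
for gloss, candidates in mappings.items():
    print(gloss, candidates)
```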

Continue reading

How to write a term paper in linguistics (How to do X in linguistics 9)

Writing a term paper requires the same scrutiny as writing an article for a journal. As a result, the techniques which apply when writing term papers are very similar to those which apply when writing a journal article, and students should feel encouraged to take the task as seriously as a journal article that scientists send off to peer review. In the following, I will briefly introduce major techniques that help to structure one’s work when writing a term paper and which also help to interact well with one’s supervisor during the writing process.

Continue reading

Converting the Vietic Dataset by Sidwell and Alwes from 2021 to CLDF

A few days ago, Sidwell and Alwes submitted a very nice dataset on Vietic languages to Zenodo (DOI: 10.5281/zenodo.5263194). When inspecting the data, I realized that this dataset could easily be converted to our CLDF formats following our new Lexibank standards. Since both authors explicitly invited discussion of the data and testing of the results, I thought it would be even better to quickly illustrate the CLDF conversion in a blog post, as this may enable colleagues to do the same with their datasets in the future.
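
To give an impression of what such a conversion looks like, here is the bare skeleton of a lexibank dataset module following the generic pylexibank pattern; the identifier is a plausible placeholder, and all dataset-specific logic, which the post walks through step by step, is omitted.

```python
# Skeleton of a lexibank conversion module (dataset-specific logic omitted).
from pathlib import Path

import pylexibank


class Dataset(pylexibank.Dataset):
    dir = Path(__file__).parent
    id = "sidwellvietic"  # placeholder identifier

    def cmd_makecldf(self, args):
        # Add concepts, languages, and forms from the raw data here;
        # args.writer provides the add_* helpers for the CLDF tables.
        args.writer.add_sources()
```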

Continue reading

How to Share Data and Code when Submitting Papers to a Journal: Transparent Data (How to do X in Linguistics 8)

When sharing data and code upon submitting a paper to a journal, you need to make sure that the reviewers can test and inspect your data as conveniently as possible. Between review rounds, you should also be maximally transparent about any changes that have been made to the data or the code underlying your study. When using data that was published elsewhere, this means you should pay specific attention to the versions you have used and make sure they are readily accessible.

Continue reading

How to Share Data and Code when Submitting Papers to a Journal: Practical Questions (How to do X in Linguistics 7)

The scientific culture in linguistics has been changing recently, and more and more papers are published with code and data accompanying them. What is still often forgotten, however, is that code and data should also be shared with the reviewers during the first submission of a paper in order to guarantee a maximally transparent review process that also includes a thorough inspection of the data and the code. This calls for attention from two sides: reviewers should make sure that they receive data and code if these are needed to replicate the results reported in a paper, while authors should make sure to submit them in a way that the reviewers can easily inspect. In this new blog post series, I want to summarize what authors should keep in mind when preparing their data and code for submission to a journal. On the one hand, I hope that this post will increase awareness among colleagues that data and code should be shared upon submission. On the other hand, I hope it also provides active help to all colleagues who plan to submit an article to a journal and are not sure how to share their data in the best form.

Continue reading

Using EDICTOR 2.0 to Annotate Language-Internal Cognates in a German Wordlist

With the recent publication of the new version of the EDICTOR application for the curation and creation of etymological dictionaries, several new features were introduced which specifically target the annotation of language-internal word families as opposed to cross-linguistic cognates. While working on the EDICTOR update, I carried out intensive tests of the new features by annotating a German wordlist for language-internal cognates. In this post, I will quickly discuss some of the new features in EDICTOR 2.0 by showing some examples from the freshly annotated wordlist for German.

Continue reading

Mapping Multi-SimLex to Concepticon

Multi-SimLex (https://multisimlex.com) is a multilingual resource which provides user ratings for word pairs translated into different languages. The data is important for the evaluation of methods that derive word embeddings from large corpora. While it is on the one hand desirable to link such a large dataset to Concepticon, it is difficult to do so in practice, given that the dataset represents word similarity ratings without any clear reference to concepts. In this post, I will show how the data can nevertheless be linked to Concepticon, and how the original Multi-SimLex data can be represented in the form of a Concepticon concept list without losing any information.
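
A toy sketch of the representation idea: store each rated word pair as one row that links both words to Concepticon concept sets, so none of the original rating information is lost. All column names and ID values below are made up for illustration; the post derives the actual concept-list format.

```python
# Write a concept-list-style TSV with one rated word pair per row.
# Column names and Concepticon IDs are illustrative placeholders.
import csv

rows = [
    {"WORD_A": "arm", "WORD_B": "hand", "CONCEPTICON_A": "1673",
     "CONCEPTICON_B": "1277", "SIMILARITY": 4.2},
]
with open("multisimlex.tsv", "w", encoding="utf-8", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)
```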

Continue reading