Most linguists regularly have to type special characters that are not available on an ordinary keyboard. Reflecting on the general problems involved in typing special characters, I review different solutions and argue that linguists should not only be able to type special characters on their computers, but that they should also have some basic knowledge of the technical aspects involved and know how to expand and customize their setups. In order to improve the training of young scholars, it is important to discuss the typing of special characters more openly in linguistics, especially in the classroom and with doctoral students, and to share individual solutions.
Adding Standardized Transcriptions to Panoan and Tacanan Languages in the Intercontinental Dictionary Series
In this study, we illustrate how standardized phonetic transcriptions can be added to the data for Panoan and Tacanan languages provided by the Intercontinental Dictionary Series. The result is presented as a new dataset that retains references to the original data and adds phonetic transcriptions for each word form in the Panoan and Tacanan languages as well as in Spanish and Portuguese.
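The conversion from source orthographies to standardized transcriptions in workflows of this kind typically relies on orthography profiles. The following is a minimal sketch using the segments package; the profile entries are made up for illustration.

```python
from segments import Profile, Tokenizer

# A toy orthography profile mapping graphemes to IPA values.
profile = Profile(
    {"Grapheme": "ch", "IPA": "tʃ"},
    {"Grapheme": "a", "IPA": "a"},
    {"Grapheme": "o", "IPA": "o"},
)
tokenizer = Tokenizer(profile=profile)

# Segment the form and convert each grapheme to its IPA value.
print(tokenizer("chacho", column="IPA"))  # tʃ a tʃ o
```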
Converting an Artificial Proto-Language into Data for Testing Computational Approaches in Historical Linguistics
This small study shows how data for an artificially created language, designed to reflect features of “proto-languages” predating modern languages by several thousand years, can be used to test computational approaches in historical linguistics. To this end, a computational workflow is described that retrieves the data automatically, creates a comparative wordlist compatible in format with software tools for historical linguistics, and then uses a baseline method for automatic cognate detection to compare the artificial language against a sample of Indo-European languages. The results show that artificial languages might help to fill a gap in testing that has so far been ignored in the literature.
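To illustrate what such a baseline might look like, the following sketch runs LingPy's SCA-based cognate clustering on a wordlist; the file name and threshold are assumptions for illustration, not the settings of the study.

```python
from lingpy import LexStat

# Load a wordlist in LingPy's TSV format (file name is hypothetical).
lex = LexStat("wordlist.tsv")

# SCA-based clustering serves as a common baseline for cognate detection.
lex.cluster(method="sca", threshold=0.45, ref="scaid")

# Write the wordlist with the inferred cognate sets to disk.
lex.output("tsv", filename="wordlist-cognates")
```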
A New Python Library for the Manipulation and Annotation of Linguistic Sequences
The Python package linse (https://pypi.org/project/linse) offers various methods for the manipulation and annotation of sequences. In this short overview, we summarize its major functionalities and provide some information on its background and on how we intend to develop it further in the future.
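As a small taste of the package, the following sketch converts a segmented IPA sequence to sound classes; the function name and signature follow the package's documentation at the time of writing and should be checked against the current API.

```python
from linse.annotate import soundclass

# Convert a segmented IPA sequence to SCA sound classes (one per segment).
tokens = ["t", "o", "x", "t", "ə", "r"]
print(soundclass(tokens, "sca"))
```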
Past and Future of Computer-Assisted Language Comparison in Practice
Our blog “Computer-Assisted Language Comparison in Practice” goes into its seventh year. We reflect on the role the blog has played in the past and present new goals and concrete ideas for the future. The most drastic innovation we have initiated is to turn the blog into an open journal, which means that all future and, successively, also all past contributions will be archived in PDF format with digital object identifiers.
Parsing IPA Transcriptions with CLTS
The Cross-Linguistic Transcription Systems (CLTS, https://clts.clld.org) project serves as a reference catalogue for speech sounds. At the core of the project is a generative method that parses existing IPA transcriptions (or transcriptions in other supported transcription systems) and checks whether they conform to the principles and components laid out in the reference catalogue. As a result, Cross-Linguistic Transcription Systems is much more than a simple list of possible speech sounds transcribed in the International Phonetic Alphabet: it is a system that makes it possible to generate possible speech sounds and to check whether sounds provided in various transcription systems are problematic. This study gives a short overview of the basic ideas that led to the creation of the database and the parsing method and provides some examples showing how it can be employed in practice.
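In practice, the parsing method can be accessed through the pyclts package. A minimal sketch, assuming a local copy of the CLTS reference data (the path is hypothetical):

```python
from pyclts import CLTS

# Load the Broad IPA transcription system from a local CLTS data clone.
bipa = CLTS("path/to/clts").bipa

# Parsing a transcription yields a sound object whose name is generated
# from the features laid out in the reference catalogue.
sound = bipa["tsʰ"]
print(sound.name)
```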
Sequence Manipulation with Orthography Profiles in JavaScript
Orthography profiles allow for the explicit simultaneous segmentation and conversion of sequences from one orthography to another. They play a crucial role in the standardization workflows developed as part of the Cross-Linguistic Data Formats initiative, where they are used to convert original orthographies used for language documentation to a strict version of the International Phonetic Alphabet. Given that the basic algorithm by which orthography profiles can be used to segment and convert sequences across orthographies is very straightforward, it can be easily implemented in JavaScript.
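For consistency with the other examples in this collection, here is a from-scratch sketch of the greedy longest-match idea in Python rather than JavaScript; the profile is a toy example.

```python
def convert(word, profile):
    """Segment a word greedily into its longest matching graphemes
    and convert each grapheme according to the profile."""
    graphemes = sorted(profile, key=len, reverse=True)
    result, i = [], 0
    while i < len(word):
        for grapheme in graphemes:
            if word.startswith(grapheme, i):
                result.append(profile[grapheme])
                i += len(grapheme)
                break
        else:  # no grapheme matches: keep the character unconverted
            result.append(word[i])
            i += 1
    return result

print(convert("chacho", {"ch": "tʃ", "a": "a", "o": "o"}))
# ['tʃ', 'a', 'tʃ', 'o']
```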
Creating a CLDF Wordlist from Heath et al.’s Dogon Comparative Wordlist
The Dogon and Bangime Linguistics project (https://dogonlanguages.org) offers a large comparative spreadsheet in which a huge number of concepts are translated into various Dogon languages. Due to its enormous size, no attempts have been made so far to integrate the spreadsheet with the lexical resources that were compiled as part of the CLDF initiative in order to populate the Lexibank repository. Here, we report a first attempt to circumvent the problems resulting from the size of the spreadsheet by converting not all but parts of the spreadsheet to CLDF Wordlist standards, which allows us to integrate parts of the data with other resources in Lexibank.
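The following sketch shows the general shape of such a conversion with the pycldf package; the IDs and form values are made up for illustration.

```python
from pycldf import Wordlist

# Create a CLDF wordlist dataset in the directory "cldf".
dataset = Wordlist.in_dir("cldf")

# Write one (hypothetical) form row; in the real conversion, the rows
# are extracted from the comparative spreadsheet.
dataset.write(FormTable=[
    {
        "ID": "variety1-hand-1",
        "Language_ID": "variety1",
        "Parameter_ID": "hand",
        "Form": "numo",
    },
])
```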
Creating a Standardized Comparative Wordlist of Newari Varieties
Newari is one of the few ancient Sino-Tibetan languages attested in written texts. Since previous studies on the phylogeny of Sino-Tibetan did not take Newari data into account, we felt it was important to close this gap by providing an up-to-date comparative wordlist of Newari varieties. This wordlist has now been finalized in a first version that has additionally been standardized following the recommendations of the Cross-Linguistic Data Formats initiative.
Querying Datasets with Cognates in the Lexibank Repository
Recently, I was asked by a colleague how one could query only those datasets in the Lexibank repository which come with cognate sets annotated by humans. While I first thought this could be done in a very straightforward way, I figured out when trying it myself that the code still needs some workarounds. As a result, I thought it would be best to share the solution I came up with in a blog post, in order to make it accessible to colleagues who might be interested in inspecting and using the data provided by the Lexibank repository in more detail.
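The full workarounds are discussed in the post itself; as a rough sketch of the basic idea, one can load each dataset's CLDF metadata and test whether a CognateTable is present (the paths are hypothetical):

```python
from pathlib import Path
from pycldf import Dataset

# Assume the Lexibank datasets have been downloaded into "lexibank-data".
for metadata in sorted(Path("lexibank-data").glob("*/cldf/cldf-metadata.json")):
    dataset = Dataset.from_metadata(metadata)
    try:
        cognates = list(dataset.iter_rows("CognateTable"))
    except KeyError:  # the dataset has no cognate annotations
        continue
    if cognates:
        print(metadata.parent.parent.name, len(cognates))
```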
PyEDICTOR: A Small Python Package that Integrates LingPy, EDICTOR, and CLDF
With the introduction of CLDF as a major format for data exchange, there is an increased need for handy solutions to convert CLDF to the formats required by computer-assisted tools like LingPy and EDICTOR, which make it possible to preprocess data automatically or to curate data by adding detailed annotations. With the publication of PyEDICTOR, there is now a very lightweight software package that is intended to provide first solutions for a successful integration of CLDF with these existing tools for computer-assisted language comparison.
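To illustrate the kind of conversion the package automates, the following sketch writes a CLDF wordlist to the header-plus-rows TSV format that LingPy and EDICTOR expect; this is a simplification for illustration, not PyEDICTOR's actual code.

```python
from pycldf import Dataset

ds = Dataset.from_metadata("cldf/cldf-metadata.json")

with open("wordlist.tsv", "w", encoding="utf8") as f:
    # LingPy/EDICTOR expect a header row plus numeric row IDs.
    f.write("ID\tDOCULECT\tCONCEPT\tFORM\n")
    rows = ds.iter_rows(
        "FormTable", "languageReference", "parameterReference", "form")
    for number, row in enumerate(rows, 1):
        f.write("\t".join([
            str(number),
            row["languageReference"],
            row["parameterReference"],
            row["form"],
        ]) + "\n")
```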
How to Visualize Colexification Networks with JavaScript and D3 (How to do X in Linguistics 12)
Having seen how colexifications can be inferred and how colexification networks can be computed in previous posts, this post concludes our mini-series by showing how computed colexification networks can be visualized interactively, using a JavaScript application based on the popular visualization library D3.
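While the visualization itself is written in JavaScript, the network data can be prepared in Python and exported to the node-link JSON format that D3's force layout reads; a toy example:

```python
import json

import networkx as nx
from networkx.readwrite import json_graph

# A toy colexification network with edge weights.
graph = nx.Graph()
graph.add_edge("ARM", "HAND", weight=12)
graph.add_edge("FOOT", "LEG", weight=9)

# D3's force layout consumes {"nodes": [...], "links": [...]} objects.
with open("network.json", "w", encoding="utf8") as f:
    json.dump(json_graph.node_link_data(graph), f, indent=2)
```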
How to Compute Colexification Networks with CL Toolkit (How to do X in Linguistics 11)
A colexification network is a network consisting of concepts as nodes with weighted edges drawn between the nodes, indicating how often the concepts colexify across the data in a given sample of languages. Having seen how individual colexifications can be computed with the help of the CL Toolkit package in an earlier blog post, we will now see how this code needs to be extended in order to compute colexification networks.
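The extension boils down to counting, for each pair of concepts, the number of languages in which the pair is colexified. A minimal sketch, assuming the per-language colexifications have already been computed (the data structure below is made up for illustration):

```python
from collections import defaultdict

import networkx as nx

# Hypothetical output of the colexification step: language IDs mapped
# to the sets of concept pairs colexified in that language.
colexifications = {
    "language1": {("ARM", "HAND"), ("FOOT", "LEG")},
    "language2": {("ARM", "HAND")},
}

# Count in how many languages each concept pair is colexified.
weights = defaultdict(int)
for pairs in colexifications.values():
    for pair in pairs:
        weights[pair] += 1

# Build the weighted network.
graph = nx.Graph()
for (concept_a, concept_b), weight in weights.items():
    graph.add_edge(concept_a, concept_b, weight=weight)
```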
How to Compute Colexifications with CL Toolkit (How to do X in Linguistics 10)
Colleagues often ask us how they could obtain more detailed information on specific languages and colexifications in the CLICS database. With the publication of the CL Toolkit package, which makes it possible to merge several CLDF datasets on the fly, carrying out analyses on certain parts of the data underlying the CLICS database is now much easier than before. In order to illustrate this, this tutorial shows how colexifications for a selected number of languages can be computed from two distinct datasets that are included in CLICS (Version 3).
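A rough sketch of the basic procedure: load a CLDF dataset with CL Toolkit and group each language's forms by their phonetic shape. The attribute names follow the CL Toolkit documentation at the time of writing and should be checked against the current API; the path is hypothetical.

```python
from collections import defaultdict

from cltoolkit import Wordlist
from pycldf import Dataset

# Load one CLDF dataset into a CL Toolkit wordlist.
wordlist = Wordlist([Dataset.from_metadata("ids/cldf/cldf-metadata.json")])

for language in wordlist.languages:
    # Group the concepts of each language by identical form.
    concepts_by_form = defaultdict(set)
    for form in language.forms:
        if form.concept:  # skip forms without a Concepticon mapping
            concepts_by_form[form.form].add(form.concept.id)
    # A form expressing more than one concept is a colexification.
    for form, concepts in concepts_by_form.items():
        if len(concepts) > 1:
            print(language.name, form, sorted(concepts))
```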
An Animated Demo of the Wagner-Fischer Algorithm for Sequence Alignment
A long time ago I prepared an animated demo of the Wagner-Fischer algorithm for pairwise sequence alignment. Having used the demo to teach phonetic alignment in class, I thought it might be useful to share it officially, as it may also be interesting for colleagues who teach phonetic alignment or rudimentary JavaScript programming.
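For readers who prefer code over animation, here is a compact Python implementation of the algorithm's core, computing the edit distance between two segmented sequences:

```python
def wagner_fischer(a, b):
    """Compute the edit distance between sequences a and b.

    D[i][j] holds the distance between a[:i] and b[:j]."""
    D = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        D[i][0] = i
    for j in range(1, len(b) + 1):
        D[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            D[i][j] = min(
                D[i - 1][j] + 1,  # deletion
                D[i][j - 1] + 1,  # insertion
                D[i - 1][j - 1] + int(a[i - 1] != b[j - 1]),  # (mis)match
            )
    return D[len(a)][len(b)]

print(wagner_fischer(list("wolke"), list("volk")))  # 2
```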