Category Archives: Code

Preparing Acoustic Pitch Data for Computational Analysis and Presentation

Abstract

Pitch plays an important role in many linguistic systems: it forms the primary set of features determining vowel quality distinctions and provides the basis for intonation and contrastive tone systems. Unfortunately, much of the literature has relied on approaches to presenting and analysing pitch data that can result in a lack of data transparency, reproducibility, and analytical robustness. These issues are easily solved by selecting a more appropriate scale for pitch values. This study presents the problems with using raw pitch data as Hertz values, reviews some historical efforts to resolve these issues, and proposes two solutions that are more appropriate than the widely used systems, together with a short Python script for calculating the alternative scales.
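Since the post's full script sits behind the link, here is a minimal sketch of the kind of conversion at stake, assuming the two alternatives are the semitone and ERB scales (the 100 Hz reference frequency is an arbitrary choice for illustration):

```python
import math

def hz_to_semitones(f, reference=100.0):
    """Convert a frequency in Hz to semitones relative to a reference frequency."""
    return 12 * math.log2(f / reference)

def hz_to_erb(f):
    """Convert a frequency in Hz to the ERB-rate scale (Glasberg & Moore 1990)."""
    return 21.4 * math.log10(0.00437 * f + 1)

for hz in (100, 150, 200, 400):
    print(f"{hz} Hz -> {hz_to_semitones(hz):6.2f} st, {hz_to_erb(hz):5.2f} ERB")
```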

Generating Phonological Feature Vectors with SoundVectors and CLTS

The recently published Python library soundvectors offers a simple and robust method to derive phonological feature vectors for any valid IPA sound via its canonical description. It is designed to interact neatly with the Cross-Linguistic Transcription Systems reference catalog (CLTS), which dynamically parses valid strings in phonetic transcription to describe speech sounds. This study illustrates how both systems can be used together to generate phonological feature vectors for all kinds of sounds without relying on a previously defined lookup table. Additionally, it compares the generated feature vectors with those obtained from two other prominent databases, PanPhon and PHOIBLE, showing how those systems can be accessed from the CLTS data via its Python API pyclts.
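A minimal sketch of how the two libraries interlock; the data path is hypothetical, and the constructor keyword ts and the get_vec method are quoted from the soundvectors documentation as I recall it, so treat the exact call names as assumptions:

```python
from pyclts import CLTS
from soundvectors import SoundVectors

# assumes a local clone of the CLTS data repository (the path is hypothetical)
bipa = CLTS("./clts-data").bipa

# couple the vectorizer to the CLTS transcription system
c2v = SoundVectors(ts=bipa)

# derive a feature vector for any sound that CLTS can parse
print(c2v.get_vec("kʰ"))
```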


Implementing Fuzzy Spelling Search in Dictionaries of Under-Described Languages Lacking Standard Orthographies

Non-standard orthographies are common in the world of under-described language documentation. Whether they are semi-conventionalised community spellings, orthographies partially adopted from missionary works, or hastily transcribed texts representing as-yet uncertain phonologies, there is a need to be able to work through lexical data in a way that can accommodate and respond to such non-standard transcriptions. Here, a few options are considered, and a solution for fuzzy string matching based on attested variations is presented.
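As a sketch of what a variation-based approach can look like (the grapheme equivalences below are hypothetical; a real list would be compiled from spelling variants attested in the dictionary itself):

```python
import re

# Hypothetical grapheme equivalences; a real list would be compiled from
# spelling variants attested in the dictionary data itself.
EQUIVALENT = {
    "c": ["c", "k"],
    "k": ["c", "k"],
    "u": ["u", "oo", "w"],
    "oo": ["u", "oo", "w"],
    "ng": ["ng", "n"],
}

def segment(word, multigraphs=("ng", "oo")):
    """Greedily split a word, matching multi-character graphemes first."""
    out, i = [], 0
    while i < len(word):
        for g in multigraphs:
            if word.startswith(g, i):
                out.append(g)
                i += len(g)
                break
        else:
            out.append(word[i])
            i += 1
    return out

def fuzzy_regex(query):
    """Compile a pattern matching all attested variant spellings of a query."""
    parts = []
    for g in segment(query):
        alternatives = EQUIVALENT.get(g, [g])
        parts.append("(?:%s)" % "|".join(
            re.escape(a) for a in sorted(alternatives, key=len, reverse=True)))
    return re.compile("".join(parts))

print(bool(fuzzy_regex("kucing").fullmatch("coocing")))  # True
```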


A New Python Library for the Manipulation and Annotation of Linguistic Sequences

The Python package linse (https://pypi.org/project/linse) offers various methods for the manipulation and annotation of sequences. In this short overview, we summarize its major functionalities and provide some information on its background and on how we intend to develop it further.
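To give a flavour of the API, here is a small example using the package's annotation module; the soundclass function and its model keyword are cited from the linse documentation as I remember it, so double-check the exact names there:

```python
from linse.annotate import soundclass

# annotate a segmented IPA sequence with SCA sound classes
segments = ["tʰ", "oː", "x", "t", "ə", "r"]
print(soundclass(segments, model="sca"))
```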


Parsing IPA Transcriptions with CLTS

The Cross-Linguistic Transcription Systems (CLTS, https://clts.clld.org) project serves as a reference catalogue for speech sounds. At the core of the project is a generative method that parses existing IPA transcriptions (or transcriptions in other supported transcription systems) and checks whether they conform to the principles and components laid out in the reference catalogue. As a result, Cross-Linguistic Transcription Systems is much more than a simple list of possible speech sounds transcribed in the International Phonetic Alphabet: it is a system that makes it possible to generate possible speech sounds and to check whether sounds provided in various transcription systems contain problems. This study gives a short overview of the basic ideas that led to the creation of the database and the parsing method, and provides some examples showing how it can be employed in practice.
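In practice, the parsing method is exposed through the pyclts package; a minimal sketch, assuming a local clone of the CLTS data (the path is hypothetical):

```python
from pyclts import CLTS

# load the Broad IPA transcription system from a local copy of the data
bipa = CLTS("./clts-data").bipa

sound = bipa["kʰ"]
print(sound.type)  # "consonant"
print(sound.name)  # the generative description CLTS derives for the sound
```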


Sequence Manipulation with Orthography Profiles in JavaScript

Orthography profiles allow for the explicit simultaneous segmentation and conversion of sequences from one orthography to another. They play a crucial role in the standardization workflows developed as part of the Cross-Linguistic Data Formats initiative, where they are used to convert original orthographies used for language documentation to a strict version of the International Phonetic Alphabet. Given that the basic algorithm by which orthography profiles can be used to segment and convert sequences across orthographies is very straightforward, it can be easily implemented in JavaScript.
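The post gives the JavaScript version; to keep the code examples in this overview in one language, the same greedy longest-match idea can be sketched in Python with a toy profile (a real profile would be read from a TSV file with Grapheme and IPA columns):

```python
# Toy orthography profile mapping graphemes to IPA.
PROFILE = {"sch": "ʃ", "ch": "x", "a": "a", "n": "n", "s": "s"}

def segment_and_convert(word, profile):
    """Greedy longest-match segmentation with simultaneous conversion."""
    graphemes = sorted(profile, key=len, reverse=True)
    result, i = [], 0
    while i < len(word):
        for g in graphemes:
            if word.startswith(g, i):
                result.append(profile[g])
                i += len(g)
                break
        else:
            result.append("�")  # marker for characters missing from the profile
            i += 1
    return result

print(segment_and_convert("schach", PROFILE))  # ['ʃ', 'a', 'x']
```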

Leveraging JavaScript, jQuery, and ChatGPT for Data Extraction from Web Tables

This case study explores the combined use of JavaScript, jQuery, and the AI-driven tool ChatGPT to efficiently extract data from HTML tables, specifically focusing on a website containing Middle Chinese and Old Chinese readings of characters. The study provides a step-by-step guide for accessing the website, utilizing browser developer tools, implementing JavaScript and jQuery code, and leveraging ChatGPT to refine the extraction process. By employing this methodology, the extraction of Chinese characters and their corresponding readings from an HTML table was automated, saving time and effort. The resulting data was then imported into a Google Sheets document for further analysis. This case study highlights the potential of AI-driven tools to enhance web development tasks and streamline data extraction processes, demonstrating their value for both technical and non-technical users.


Creating Custom Commands in CLDF: From Lexibank to Nexus Files

Thanks to the cldfbench Python package, CLDF datasets compiled with cldfbench have a rich command-line utility that can easily be extended with custom commands (Forkel et al. 2018, Forkel and List 2020). Taking as an example the creation of a Nexus file for phylogenetic analysis from an existing Lexibank wordlist (List et al. 2022), this tutorial will guide you through the steps needed to write a script that can be used as a cldfbench command. You will then be able to apply the same workflow to other scripts you may want to use for a certain repository.
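The overall shape of such a command, as a hedged sketch: dataset access via cldfbench's cli_util helpers follows the standard pattern, but the column names assume a typical Lexibank wordlist with a CognateTable, and the output file name is hypothetical:

```python
"""Write a binary presence/absence Nexus file from a Lexibank wordlist."""
from collections import defaultdict

from cldfbench.cli_util import add_dataset_spec, get_dataset


def register(parser):
    # let the command accept a dataset specification on the command line
    add_dataset_spec(parser)


def run(args):
    cldf = get_dataset(args).cldf_reader()
    forms = {f["ID"]: f for f in cldf["FormTable"]}
    languages = sorted(l["ID"] for l in cldf["LanguageTable"])

    # record which languages attest which cognate sets
    cogsets = defaultdict(set)
    for cognate in cldf["CognateTable"]:
        cogsets[cognate["Cognateset_ID"]].add(forms[cognate["Form_ID"]]["Language_ID"])

    with open("wordlist.nex", "w", encoding="utf8") as nexus:
        nexus.write("#NEXUS\n\nBEGIN DATA;\n")
        nexus.write("DIMENSIONS ntax=%d nchar=%d;\n" % (len(languages), len(cogsets)))
        nexus.write('FORMAT DATATYPE=STANDARD MISSING=? GAP=- SYMBOLS="01";\nMATRIX\n')
        for language in languages:
            row = "".join("1" if language in cogsets[cs] else "0" for cs in sorted(cogsets))
            nexus.write("%s  %s\n" % (language, row))
        nexus.write(";\nEND;\n")
```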


PyEDICTOR: A Small Python Package that Integrates LingPy, EDICTOR, and CLDF

With the introduction of CLDF as a major format for data exchange, there is an increased need for handy solutions to convert CLDF to the formats required by computer-assisted tools like LingPy and EDICTOR, which make it possible to preprocess data automatically or to curate data by adding detailed annotations. With the publication of PyEDICTOR, there is now a very lightweight software package intended to provide first solutions for a successful integration of CLDF with these existing tools for computer-assisted language comparison.
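To illustrate the direction of conversion involved, LingPy itself can already read CLDF data directly (shown here instead of PyEDICTOR's own helpers, whose exact call names I will not guess; the metadata path is hypothetical):

```python
from lingpy import Wordlist

# load a CLDF wordlist straight into a LingPy Wordlist object
wl = Wordlist.from_cldf("cldf/cldf-metadata.json")
print(wl.width, wl.height)  # number of languages and number of concepts
```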


How to Visualize Colexification Networks with JavaScript and D3 (How to do X in Linguistics 12)

Having seen in previous posts how colexifications can be inferred and how colexification networks can be computed, this post concludes our mini-series by showing how computed colexification networks can be visualized interactively, using a JavaScript application based on the popular visualization library D3.


How to Map Concepts with the PySem Library

Mapping concepts to common concept identifiers across resources has become an important task for the aggregation of lexical data from different sources. With the Concepticon, this task has been facilitated by a specific mapping algorithm with which a concept list can be automatically mapped to the concept sets in the Concepticon reference catalogue and later refined manually. PySem offers an additional possibility to map concepts to the Concepticon, but in contrast to the algorithm used in the Concepticon workflow, the PySem approach can be accessed from within Python applications.
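A hedged sketch of what this looks like in code; to_concepticon is the entry point described in the post, but its exact signature (a list of dicts with a gloss key, plus a language keyword) is an assumption here:

```python
from pysem import to_concepticon

# map two English glosses to Concepticon concept sets
mappings = to_concepticon([{"gloss": "hand"}, {"gloss": "arm"}], language="en")
print(mappings)
```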


Mapping Multi-SimLex to Concepticon

Multi-SimLex (https://multisimlex.com) is a multilingual resource which provides user ratings for word pairs translated into different languages. The data is important for the evaluation of methods that derive word embeddings from large corpora. While it is desirable to link such a large dataset to Concepticon, it is difficult to do so in concrete terms, given that the dataset represents word similarity ratings without any clear reference to concepts. In this post, I will show how the data can nevertheless be linked to Concepticon, and how the original Multi-SimLex data can be represented in the form of a Concepticon concept list without losing any information.


How to work with WALS data in CLDF (How to do X in linguistics 5)

With an increasing amount of data becoming available in Cross-Linguistic Data Formats, it is becoming more and more important to know the basics underlying the Python packages designed by the CLDF initiative, in order to allow interested users quick access to the data. This very short tutorial illustrates how the CLDF data underlying the World Atlas of Language Structures can be accessed and written to a table in which the individual values for all WALS parameters are given in one row per language variety.
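A minimal sketch of the pivot with pycldf (the metadata path is hypothetical and would point at a local copy of the WALS CLDF release):

```python
from collections import defaultdict
from pycldf import Dataset

wals = Dataset.from_metadata("wals/cldf/StructureDataset-metadata.json")

# pivot the long ValueTable into one row per language variety
table = defaultdict(dict)
for value in wals["ValueTable"]:
    table[value["Language_ID"]][value["Parameter_ID"]] = value["Value"]

parameters = sorted({p for row in table.values() for p in row})
print("Language", *parameters, sep="\t")
for language, row in sorted(table.items()):
    print(language, *[row.get(p, "") for p in parameters], sep="\t")
```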


How to handle semantic data with tables (How to do X in linguistics 3)

Semantic data are notoriously difficult to handle. In contrast to the form part of the linguistic sign, meanings are not organized sequentially, but rather network-like (List 2014: 34f). As a result, we often encounter problems when trying to model complex relations between different meanings, specifically in those cases where we have only tables as our base material. This blog post summarizes how major types of semantic data are handled in the Concepticon project and how they can be accessed in code.
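Access in code goes through the pyconcepticon API; a minimal sketch, assuming a local clone of the concepticon-data repository (the path and the concept set ID are hypothetical):

```python
from pyconcepticon import Concepticon

api = Concepticon("./concepticon-data")

# look up a concept set by its Concepticon ID
conceptset = api.conceptsets["1277"]
print(conceptset.gloss, conceptset.definition)
```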


Computing colexification statistics for individual languages in CLICS

In the last two weeks we had a renewed interest in colexifications, especially in the third generation of the “Database of Cross-Linguistic Colexifications” (Rzymski, Tresoldi, et al., 2020). The attention was due to two different and independent requests within a few days. For those unfamiliar, the concept of “colexification” (François, 2008) refers to instances in which a language uses the same lexeme to express more than one comparable concept (e.g., Russian де́рево, which can mean both “tree” and “wood”). The CLICS project, first developed by List et al. (2014), is an offspring of the transparent approaches to standardization, aggregation, and curation of linguistic data that have been promoted within the CLDF framework (Forkel et al., 2018). It uses standardized lexical databases to identify “colexification networks”.
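The core operation is simple enough to sketch in a few lines of Python on a toy wordlist (real analyses would of course start from standardized CLDF data):

```python
from collections import defaultdict

# toy wordlist of (language, concept, form) triples
wordlist = [
    ("Russian", "TREE", "дерево"),
    ("Russian", "WOOD", "дерево"),
    ("Russian", "FOREST", "лес"),
    ("English", "TREE", "tree"),
    ("English", "WOOD", "wood"),
]

# a colexification holds whenever one language expresses
# two or more concepts with the same form
by_form = defaultdict(set)
for language, concept, form in wordlist:
    by_form[language, form].add(concept)

for (language, form), concepts in sorted(by_form.items()):
    if len(concepts) > 1:
        print(language, form, sorted(concepts))
# -> Russian дерево ['TREE', 'WOOD']
```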
