Converting an Artificial Proto-Language into Data for Testing Computational Approaches in Historical Linguistics

This small study shows how data for an artificially created language, designed to reflect features of “proto-languages” predating modern languages by several thousand years, can be used to test computational approaches in historical linguistics. To do so, a computational workflow is described that retrieves the data automatically, creates a comparative wordlist compatible in format with software tools for historical linguistics, and then uses a baseline method for automatic cognate detection to compare the artificial language against a sample of Indo-European languages. The results show that artificial languages might help to fill a gap in testing that has so far been ignored in the literature.

An Extended Concept List of Vietnamese

As part of our ongoing endeavour to expand the possible mappings in Concepticon, I introduce an extended concept list of Vietnamese based on the Intercontinental Dictionary Series (IDS). The list includes elicitation glosses for 1,310 concepts that can be used as a reference to add more data or for comparative analyses. Here, I present the creation of the list and its content.

Implementing Fuzzy Spelling Search in Dictionaries of Under-Described Languages Lacking Standard Orthographies

Non-standard orthographies are common in the world of under-described language documentation. Whether they are semi-conventionalised community spellings, orthographies partially adopted from missionary works, or hastily transcribed texts representing as-yet uncertain phonologies, there is a need to be able to work through lexical data in a way which can accommodate and respond to such non-standard transcriptions. Here, a few options are considered, and a solution for fuzzy string matching based on attested variations is presented.
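As a rough illustration of the general idea (not the exact solution from the post), the following Python sketch expands a query with a hypothetical set of attested spelling variations before matching it against dictionary headwords:

```python
# A rough illustration of the general idea (not the exact solution from the
# post): expand a query with attested spelling variations, then match it
# against dictionary headwords. The variation pairs here are invented.
VARIATIONS = [("c", "k"), ("oo", "u"), ("dj", "j")]

def spelling_variants(query):
    """Generate variants by applying attested substitutions in both directions."""
    variants = {query}
    for a, b in VARIATIONS:
        for v in list(variants):
            variants.add(v.replace(a, b))
            variants.add(v.replace(b, a))
    return variants

def fuzzy_lookup(query, headwords):
    """Return all headwords that match any spelling variant of the query."""
    candidates = spelling_variants(query)
    return sorted(w for w in headwords if w in candidates)

print(fuzzy_lookup("cooro", ["kuro", "coro", "kooro"]))  # ['kooro', 'kuro']
```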

Representing the Database of Semantic Shifts by Zalizniak et al. from 2024 in Cross-Linguistic Data Formats

In this brief study, we show how the Database of Semantic Shifts, a large resource on semantic change and semantic motivation, can be represented in Cross-Linguistic Data Formats. The representation allows for convenient quantitative analysis of the numerous annotations on semantic change and semantic motivation, and for integrating the database with additional resources on these phenomena that have been compiled independently in recent years.

A New Python Library for the Manipulation and Annotation of Linguistic Sequences

The Python package linse (https://pypi.org/project/linse) offers various methods for the manipulation and annotation of sequences. In this short overview, we summarize its major functionalities and provide some information on its background and on how we intend to develop it further.
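For readers new to the package, a minimal usage sketch is given below; it assumes the segmentation and annotation functions described in the package documentation, so exact names and signatures may differ across versions:

```python
# A minimal usage sketch for linse; it assumes the ipa segmenter and the
# soundclass annotator described in the package documentation, so exact
# names and signatures may differ across versions.
from linse.annotate import soundclass  # annotate tokens with sound classes
from linse.segment import ipa          # segment an IPA string into tokens

tokens = ipa("tʰɔxtər")                    # one token per speech sound
classes = soundclass(tokens, model="sca")  # one SCA class symbol per token
print(list(zip(tokens, classes)))
```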

How to Visualize Colexification Networks in Cytoscape (How to Do X in Linguistics 14)

The ability to visualize data in an intelligible way is an important skill for scientists. In linguistics, especially in lexical semantics, data are often visualized using graphs, i.e., networks. For example, in the web app for the Database of Cross-Linguistic Colexifications (CLICS), we use networks to illustrate that a lexical form refers to two different concepts by connecting the concepts (i.e., nodes) with a line (i.e., edge). As colexifications between concepts are identified across a large number of languages, the network grows, and a tool for visualizing the many data points becomes necessary. Here, I present a tutorial on the first steps for visualizing a colexification network with Cytoscape. The tutorial is intended for beginners who want to learn how the tool works and serves as a starting point for further skill development.
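To give a taste of such a workflow before opening Cytoscape, the following Python sketch builds a toy colexification network with networkx and exports it to GraphML, a format Cytoscape can import; the edge counts are invented for illustration:

```python
# A minimal sketch: build a toy colexification network with networkx and
# export it to GraphML, a format Cytoscape can import.
# The counts below are invented for illustration.
import networkx as nx

# Each edge links two concepts colexified by the same form; the weight counts
# how many languages show the colexification.
colexifications = [
    ("ARM", "HAND", 12),
    ("HAND", "FIVE", 3),
    ("FOOT", "LEG", 9),
]

G = nx.Graph()
for concept_a, concept_b, n_languages in colexifications:
    G.add_edge(concept_a, concept_b, weight=n_languages)

nx.write_graphml(G, "colexifications.graphml")
```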

Past and Future of Computer-Assisted Language Comparison in Practice

Our blog “Computer-Assisted Language Comparison in Practice” goes into its seventh year. We reflect on the role the blog has played in the past and present new goals and concrete ideas for the future. The most drastic innovation we have initiated is turning the blog into an open journal, which means that all future contributions, and successively also past ones, will be archived in PDF format with digital object identifiers.

Five Recommendations for Creating Spreadsheets

With the rapid increase in digital data, the use of tabular formats has also increased notably. However, the reusability of such data is still an issue due to the lack of transparency in how data are presented in spreadsheets. In our work with the Concepticon, we sometimes encounter spreadsheets provided by researchers that contain information not transparent to an external audience. I therefore offer guidelines on how to format tables of data and provide five concrete recommendations.

Parsing IPA Transcriptions with CLTS

The Cross-Linguistic Transcription Systems (CLTS, https://clts.clld.org) project serves as a reference catalogue for speech sounds. At the core of the project is a generative method that parses existing IPA transcriptions (or transcriptions in other supported transcription systems) and checks if they conform to the principles and components laid out in the reference catalogue. As a result, Cross-Linguistic Transcription Systems is much more than a simple list of possible speech sounds transcribed in the International Phonetic Alphabet: it is a system that makes it possible to generate possible speech sounds and to check whether sounds provided in various transcription systems are problematic. This study gives a short overview of the basic ideas that led to the creation of the database and the parsing method and provides some examples showing how it can be employed in practice.
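A minimal sketch of how the parser can be used from Python via the pyclts package is given below; it assumes a local copy of the CLTS reference data, and the attribute names follow the pyclts documentation:

```python
# A minimal sketch using the pyclts package (pip install pyclts); it requires
# a local copy of the CLTS reference data, so the path below is a placeholder,
# and attribute names follow the pyclts documentation.
from pyclts import CLTS

clts = CLTS("path/to/clts-data")  # clone of https://github.com/cldf-clts/clts
bipa = clts.bipa                  # the broad IPA transcription system

sound = bipa["tʰ"]
print(sound.name)  # a generated feature-based name for the sound

# The generative parser also flags input it cannot interpret:
print(bipa["qx!"].type)  # unparseable input comes back as an unknown sound
```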

Retrieving and Analyzing Taste Colexifications from Lexibank

Colexifications have enjoyed considerable popularity in recent years. However, there are still many semantic domains in which little research on colexification patterns has been carried out so far. Here we show how the recently published Lexibank repository can be queried to yield data on taste colexifications, which can in turn be easily plotted on geographic maps.
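The core of such a query can be sketched in a few lines of Python: group identical forms per language in a CLDF wordlist and keep those that express more than one taste concept. The file and column names below follow common CLDF conventions but may differ per dataset, and the concept set is an invented subset:

```python
# A minimal sketch (not the exact workflow from the post): find colexifications
# in a CLDF wordlist by grouping identical forms per language. File and column
# names follow common CLDF conventions but may differ per dataset.
import csv
from collections import defaultdict

TASTE_CONCEPTS = {"SWEET", "SOUR", "BITTER", "SALTY"}  # illustrative subset

concepts_by_form = defaultdict(set)
with open("forms.csv", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        concepts_by_form[row["Language_ID"], row["Form"]].add(row["Parameter_ID"])

for (language, form), concepts in concepts_by_form.items():
    taste = concepts & TASTE_CONCEPTS
    if len(taste) > 1:  # one form covering several taste concepts
        print(language, form, sorted(taste))
```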

Sequence Manipulation with Orthography Profiles in JavaScript

Orthography profiles allow for the explicit simultaneous segmentation and conversion of sequences from one orthography to another. They play a crucial role in the standardization workflows developed as part of the Cross-Linguistic Data Formats initiative, where they are used to convert original orthographies used for language documentation to a strict version of the International Phonetic Alphabet. Given that the basic algorithm by which orthography profiles can be used to segment and convert sequences across orthographies is very straightforward, it can be easily implemented in JavaScript.
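For illustration, the same greedy longest-match algorithm is sketched below in Python rather than JavaScript, with an invented toy profile:

```python
# The greedy longest-match algorithm behind orthography profiles, sketched
# here in Python rather than JavaScript; the toy profile is invented.
PROFILE = {"th": "θ", "sh": "ʃ", "a": "a", "t": "t", "i": "ɪ"}

def segment_and_convert(word, profile):
    """Segment a word greedily, longest graphemes first, and convert it."""
    max_len = max(len(g) for g in profile)
    result, i = [], 0
    while i < len(word):
        for size in range(max_len, 0, -1):  # try the longest grapheme first
            chunk = word[i:i + size]
            if chunk in profile:
                result.append(profile[chunk])
                i += size
                break
        else:  # no grapheme matched: keep the character, marked as unknown
            result.append(f"<{word[i]}>")
            i += 1
    return result

print(segment_and_convert("thatish", PROFILE))  # ['θ', 'a', 't', 'ɪ', 'ʃ']
```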

Creating a CLDF Wordlist from Heath et al.’s Dogon Comparative Wordlist

The Dogon and Bangime Linguistics project (https://dogonlanguages.org) offers a large comparative spreadsheet in which translational equivalents for a huge number of concepts are provided for various Dogon languages. Due to its enormous size, no attempts have been made so far to integrate the spreadsheet with the lexical resources that were compiled as part of the CLDF initiative in order to populate the Lexibank repository. Here, we report a first attempt to circumvent the problems resulting from the size of the spreadsheet by converting not the whole spreadsheet but parts of it to CLDF Wordlist standards, which allows us to integrate parts of the data with other resources in Lexibank.
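The basic column-selection idea can be sketched as follows; the file name, the gloss column, and the chosen varieties are invented placeholders:

```python
# A minimal sketch of the column-selection idea; the file name, the gloss
# column "English", and the chosen varieties are invented placeholders.
import csv

KEEP = ["Toro Tegu", "Ben Tey"]  # hypothetical subset of Dogon varieties

with open("dogon_comparative.csv", encoding="utf-8") as src, \
        open("wordlist.csv", "w", encoding="utf-8", newline="") as dst:
    writer = csv.writer(dst)
    writer.writerow(["Language", "Concept", "Form"])
    for row in csv.DictReader(src):
        for language in KEEP:
            if row.get(language):  # skip varieties without an entry
                writer.writerow([language, row["English"], row[language]])
```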

Creating a Standardized Comparative Wordlist of Newari Varieties

Newari is one of the few ancient Sino-Tibetan languages attested in written texts. Since previous studies on the phylogeny of Sino-Tibetan did not take Newari data into account, we felt it was important to close this gap by providing an up-to-date comparative wordlist of Newari varieties. This wordlist has now been finalized in a first version that has additionally been standardized following the recommendations of the Cross-Linguistic Data Formats initiative.

A New Dataset with Phonological Reconstructions in CLDF

Data in historical linguistics are typically presented in non-machine-readable formats, such as text-based supplementary material or even handwritten manuscripts. Many annotations and important facts are given in prose or remain within linguists’ heads. These problems make it difficult for non-experts in the specific field to understand the data and to reproduce and replicate the results, and they also limit the exposure that linguists receive for their hard work. Similar to previous blog posts on retro-standardizing data, we present the digitization of a dataset that includes phonological reconstructions. By representing this kind of data in CLDF, we can apply a variety of computer-assisted methods to assess the quality of the reconstructions.

Leveraging JavaScript, jQuery, and ChatGPT for Data Extraction from Web Tables

This case study explores the combined use of JavaScript, jQuery, and the AI-driven tool ChatGPT to efficiently extract data from HTML tables, specifically focusing on a website containing Middle Chinese and Old Chinese readings of characters. The study provides a step-by-step guide for accessing the website, utilizing browser developer tools, implementing JavaScript and jQuery code, and leveraging ChatGPT to refine the extraction process. By employing this methodology, the extraction of Chinese characters and their corresponding readings from an HTML table was automated, saving time and effort. The resulting data was then imported into a Google Sheets document for further analysis. This case study highlights the potential of AI-driven tools to enhance web development tasks and streamline data extraction processes, demonstrating their value for both technical and non-technical users.
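For readers who prefer to work outside the browser, a rough Python equivalent of the extraction step, using only the standard library, might look as follows (the post itself uses jQuery in the developer console):

```python
# A rough Python equivalent of the in-browser extraction (the post itself
# uses jQuery in the developer console): pull cell texts from an HTML table.
from html.parser import HTMLParser

class TableExtractor(HTMLParser):
    """Collect the text of every table cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False
    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True
    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
        elif tag in ("td", "th"):
            self._in_cell = False
    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

parser = TableExtractor()
parser.feed("<table><tr><th>Character</th><th>Reading</th></tr>"
            "<tr><td>東</td><td>tuwng</td></tr></table>")
print(parser.rows)  # [['Character', 'Reading'], ['東', 'tuwng']]
```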
