Author Archives: Tiago Tresoldi

About Tiago Tresoldi

Post-doc in the CALC Group at the Max Planck Institute for the Science of Human History (MPI-SHH).

Computing colexification statistics for individual languages in CLICS

Over the last two weeks we have seen renewed interest in colexifications, especially in the third generation of the “Database of Cross-Linguistic Colexifications” (Rzymski, Tresoldi, et al., 2020). The attention was due to two independent requests within a few days. For those unfamiliar, the concept of “colexification” (François, 2008) refers to instances in which a language uses the same lexeme to express more than one comparable concept (e.g., Russian де́рево, which can mean both “tree” and “wood”). The CLICS project, first developed by List et al. (2014), is an offspring of the transparent approaches to standardization, aggregation, and curation of linguistic data that have been promoted within the CLDF framework (Forkel et al., 2018). It uses standardized lexical databases to identify “colexification networks”.
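As a rough illustration of the underlying idea (a minimal sketch, not the actual CLICS pipeline; the mini-wordlist and the `colexifications` helper below are made up for the example), colexifications in a single language can be collected by grouping entries by form and noting which concepts share one:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical mini-wordlist for a single language: (concept, form) pairs.
WORDLIST = [
    ("TREE", "дерево"),
    ("WOOD", "дерево"),
    ("HAND", "рука"),
    ("ARM", "рука"),
    ("FIRE", "огонь"),
]

def colexifications(wordlist):
    """Return the set of concept pairs expressed by the same form."""
    concepts_by_form = defaultdict(set)
    for concept, form in wordlist:
        concepts_by_form[form].add(concept)
    pairs = set()
    for concepts in concepts_by_form.values():
        pairs.update(combinations(sorted(concepts), 2))
    return pairs

print(sorted(colexifications(WORDLIST)))
# [('ARM', 'HAND'), ('TREE', 'WOOD')]
```

Counting such pairs across the languages of a standardized lexical database is what yields the weighted edges of a colexification network.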

Continue reading

A model of distinctive features for computer-assisted language comparison

This post introduces a model of segmental/distinctive features for the symbolic representation of sounds, covering almost 600 segments from CLTS (List et al., 2019) mapped to unique sets of bivalent features. It is being designed as an alternative input to vectors of presence/absence built from BIPA descriptors, analogous to other feature matrices like the one by Phoible (Moran & McCloy, 2019). While still under development, it can already be used both for training machine learning and statistical models, notably decision trees, and for bootstrapping language- and process-specific models, aided by a “universal” and concise reference. The complete matrix is available on Zenodo. A supporting Python library, distfeat, is available on PyPI.
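As a minimal, illustrative sketch of how such a matrix can be consumed (the three segments and three features below are made up and far smaller than the actual model, and this does not use the distfeat API), each segment maps to a set of bivalent features that can be encoded as a 0/1 vector for machine learning:

```python
# Illustrative subset of a bivalent (+/-) distinctive feature matrix;
# the real model covers almost 600 CLTS segments and many more features.
FEATURES = {
    "p": {"consonantal": True,  "voice": False, "continuant": False},
    "b": {"consonantal": True,  "voice": True,  "continuant": False},
    "a": {"consonantal": False, "voice": True,  "continuant": True},
}

# Shared feature inventory, in a fixed order, for building vectors.
FEATURE_NAMES = sorted({name for feats in FEATURES.values() for name in feats})

def to_vector(segment):
    """Encode a segment as a 0/1 vector over the shared feature inventory."""
    feats = FEATURES[segment]
    return [int(feats[name]) for name in FEATURE_NAMES]

for seg in ("p", "b", "a"):
    print(seg, to_vector(seg))
# p [1, 0, 0]
# b [1, 0, 1]
# a [0, 1, 1]
```

Vectors of this kind are exactly what decision trees and similar models expect as input, which is what makes a concise universal reference convenient for bootstrapping language-specific models.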

Continue reading

Illustrating linguistic data reuse: a modest database for semantic distance

Besides new algorithms and tools that facilitate established workflows, one change prompted by computer-assisted approaches to language comparison is a distinct relationship between scientists and their data. A critical part of our work, and perhaps the one with the most lasting impact, is to promote an approach in which the data life-cycle is not confined to the planning and publication of a single study. Data are organized and planned for reuse in investigations perhaps not even considered during collection, with the output of one project becoming the input of another.

Continue reading

Using pyconcepticon to map concept lists (II)

Mapping a given concept list to Concepticon can be done in a straightforward way, even if automatic mappings need manual refinement. But what can we do when we have to deal with larger datasets, say, a dictionary, from which we want to extract specific concepts, such as the ones in the classic Swadesh list of 100 items (Swadesh 1955)?
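A minimal sketch of the extraction step with pyconcepticon follows; the repository path and the list identifier "Swadesh-1955-100" are assumptions for the example, so adjust them to your local setup:

```python
from pyconcepticon import Concepticon

# Assumes a local clone of the concepticon-data repository; both the
# path and the list identifier below are assumptions for this sketch.
api = Concepticon("path/to/concepticon-data")
swadesh = api.conceptlists["Swadesh-1955-100"]

# Collect the Concepticon glosses of the 100 items; entries extracted
# from the dictionary can then be filtered against this set.
glosses = {
    concept.concepticon_gloss
    for concept in swadesh.concepts.values()
    if concept.concepticon_gloss
}
print(len(glosses), sorted(glosses)[:5])
```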

Continue reading

Using pyconcepticon to map concept lists

A major problem for data reuse in computer-assisted historical linguistics, especially when employing data collected with no computational workflows in mind, is linking datasets in terms of the meanings of the words (or, technically, “forms”) they carry. Just as linking languages across different datasets is not as straightforward as one might naively assume, demanding a complex reference catalog such as Glottolog, linking the concepts used in a wordlist (a “concept list”) to our Concepticon project might well be the most intensive task in preparing a dataset for cross-linguistic studies.
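As a minimal sketch of what the automatic part of that mapping can look like in code (the exact signature and the shape of the values returned by lookup() are assumptions here, so treat this as an outline rather than a definitive recipe), pyconcepticon offers a fuzzy lookup of elicitation glosses against Concepticon's gloss index:

```python
from pyconcepticon import Concepticon

# Assumes a local clone of concepticon-data; the return shape of
# lookup() (tuples of input gloss, Concepticon ID, Concepticon gloss,
# and a similarity score) is an assumption for this sketch.
api = Concepticon("path/to/concepticon-data")
for matches in api.lookup(["hand", "fire", "tree"], language="en"):
    for gloss, concepticon_id, concepticon_gloss, similarity in matches:
        print(gloss, concepticon_id, concepticon_gloss, similarity)
```

Candidate mappings produced this way still need the manual refinement discussed in the post, but they remove most of the drudgery from linking a new concept list.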

Continue reading

Extracting translation data from the Wiktionary project

Wiktionary is a project for creating a multilingual, web-based, free dictionary of all words in all languages. Like its sister project Wikipedia, it has been subject to criticism since its inception, both in terms of its lexicographic approaches and in terms of reliability, content, procedures, and community operation (see Lepore 2006, Fuertes-Olivera 2009, Meyer 2012). Faults have also been pointed out regarding its structure, which is confusing for newcomers, with parallel and unaligned information shared among the various language dictionaries and differences in accuracy and depth across languages. Nonetheless, data from Wiktionary are routinely and successfully employed in natural language processing and, occasionally, in linguistic research (see Otte 2011, Schlippe 2012, Medero 2009, Li 2012), as the project constitutes by far the largest free multilingual lexical source.
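As a minimal sketch of what such extraction involves (the excerpt of wikitext is made up for the example, although {{t}} and {{t+}} are the translation templates actually used on the English Wiktionary), a regular expression can pull the language code and target form out of each translation template:

```python
import re

# Made-up excerpt of a Wiktionary translation section; {{t}}/{{t+}}
# are the templates marking translations on the English Wiktionary.
WIKITEXT = """
{{trans-top|large woody plant}}
* German: {{t+|de|Baum|m}}
* Portuguese: {{t+|pt|árvore|f}}
* Russian: {{t+|ru|де́рево|n}}
{{trans-bottom}}
"""

# Capture the language code and the target form from each template,
# ignoring trailing parameters such as gender markers.
TEMPLATE = re.compile(r"\{\{t\+?\|([a-z-]+)\|([^|}]+)")

for lang, form in TEMPLATE.findall(WIKITEXT):
    print(lang, form)
# de Baum
# pt árvore
# ru де́рево
```

A production pipeline would of course parse the dump properly rather than rely on a single regular expression, but the sketch shows why the template structure makes Wiktionary attractive as a machine-readable translation source.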

Continue reading