While the core of the Concepticon project (https://concepticon.clld.org, List et al. 2019) consists of the numerous concept lists that are constantly being added by a growing number of contributors, we have, from the very beginning of the project with its first version (List et al. 2016), also tried to collect various kinds of concept metadata for all our concept sets.
Concept metadata differ from traditional concept lists in that metadata collections are potentially unlimited in size. Typical collections of age-of-acquisition data, for example, may easily comprise more than 5,000 entries. Lists of this size are quite difficult to map to Concepticon, not only because it is tedious to manually correct and link thousands of entries, but also because concept metadata, as we find them in collections of norm data, are usually language-specific. This makes it even more difficult to find the best way to link a given word to a Concepticon concept set, since words can show a degree of ambiguity which we do not find in the concept sets listed in Concepticon so far.
Given these difficulties, we decided to allow for a less strict treatment of links to Concepticon when dealing with concept metadata. Assuming that most use cases involving metadata do not necessarily need the strict, hand-curated Concepticon mappings which we provide for typical wordlists, we can make use of automated approaches to identify the best matches for a given metadata set.
But how can we best identify these matches when dealing with a new dataset providing interesting concept metadata? While one could apply a simple brute-force procedure in which all concept sets in Concepticon are compared against a given concept metadata set, including fuzzy matching and the like, I recommend a different approach which is extremely fast and at the same time accurate enough to provide the most obvious matches of a given dataset against the data we have already linked to Concepticon.
This procedure makes direct use of the multilingual Concepticon mappings which we automatically produce upon each new release of Concepticon. These mappings contain all elicitation glosses, along with the concept sets to which they were linked, for all glossing languages we have encountered so far in Concepticon (29 by now). The advantage of comparing a given collection of metadata directly with these mappings is that we can make active use of the human judgments by which concepts were linked in the past.
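To give a concrete, if schematic, impression of these mapping files: judging from the way they are read in the code below, each row of a map-*.tsv file provides a concept set ID, a GLOSS field in which the elicitation gloss follows three slashes, and a PRIORITY value used to rank competing mappings. A row in the Spanish mapping file could thus look roughly like the following sketch (the columns are tab-separated, and the identifiers and glosses shown here are purely illustrative):

ID      GLOSS           PRIORITY
1277    HAND///mano     1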
In order to load these mappings, we can make use of the new cldfcatalog API, which gives us convenient access to the Concepticon data on our system. It is installed along with the cldfbench package. Before we can start writing our Python script, we therefore have to install cldfbench and configure our reference catalogs by typing the following two commands and following the prompts.
pip install cldfbench
cldfbench catconfig
Now we can start and load all mapping data from Concepticon in a fresh Python script. We first import the libraries we will need for this task:
from cldfcatalog import Config
from csvw.dsv import UnicodeDictReader
from collections import defaultdict
Now we can load the Concepticon repository, which gives us the absolute path to the Concepticon data on our system, and collect the paths to the individual mapping files:
repos = Config.from_file().get_clone('concepticon')
paths = {p.stem.split('-')[1]: p for p in repos.joinpath(
    'mappings').glob('map-*.tsv')}
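As an optional sanity check, we can print how many glossing languages were found; the exact set depends on the Concepticon release installed on your system:

# Optional check: list the glossing languages for which mapping files
# were found (depends on the installed Concepticon release).
print(len(paths), sorted(paths))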
We now create a mapping dictionary which will store all direct multilingual mappings that we can find in our Concepticon data. In the mapping files, the links are ranked by their priority, and we store this priority along with each concept set ID, as it will later help us to identify the best match in those cases where there is more than one candidate.
mappings = {}
for language, path in paths.items():
    mappings[language] = defaultdict(set)
    with UnicodeDictReader(path, delimiter='\t') as reader:
        for line in reader:
            gloss = line['GLOSS'].split('///')[1]
            mappings[language][gloss].add(
                (line['ID'], int(line['PRIORITY'])))
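To convince ourselves that the mappings were loaded as intended, we can look up an individual elicitation gloss. This is just an optional check, and the concept set IDs and priorities you will see (if the gloss is attested at all) depend on the Concepticon release on your system:

# Optional check: look up a single Spanish elicitation gloss. The result is
# a set of (concept set ID, priority) tuples, or None if the gloss is not
# attested in the Spanish mappings.
print(mappings['es'].get('mano'))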
For our mapping experiment, we use the data of Alonso et al. (2015) on age of acquisition in Spanish. The data are available for download from the publisher, but I had to modify them, since I encountered Unicode errors in the original version of the data. The resulting data file is called spanish-data.tsv and is distributed along with the code accompanying this small tutorial.
To map the data, we have nothing else to do but load them and check whether the Spanish word in the original data has a direct match in our list of mappings. The matches are stored in a separate dictionary (called esdata here).
esdata = defaultdict(list)
with UnicodeDictReader(
        'spanish-data.tsv', delimiter='\t') as reader:
    for i, line in enumerate(reader):
        if line['word'] in mappings['es']:
            best_match, priority = sorted(
                mappings['es'][line['word']],
                key=lambda x: x[1])[0]
            esdata[best_match] += [[
                str(i+1), line['word'], line['averageAoA'],
                best_match, priority]]
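It can also be instructive to get an impression of the coverage of this procedure. The following optional sketch, which assumes the same column names as above, simply re-reads the data file and counts the words without a direct hit in the Spanish mappings:

# Optional: collect the words that have no direct match in the Spanish
# mappings, to get an impression of the coverage of the procedure.
unmatched = []
with UnicodeDictReader(
        'spanish-data.tsv', delimiter='\t') as reader:
    for line in reader:
        if line['word'] not in mappings['es']:
            unmatched += [line['word']]
print('{0} words without a direct match.'.format(len(unmatched)))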
All we have to do now is write the data to file. We need to make sure that the links are unique, so we take only the best out of potentially multiple matches.
with open('spanish-data-mapped.tsv', 'w') as f:
    f.write('ID\tWORD\tAOA\tCONCEPTICON_ID\tMATCH\n')
    for key, lines in esdata.items():
        best_line = sorted(
            lines, key=lambda x: x[-1])[0]
        best_line[-1] = str(best_line[-1])
        f.write('\t'.join(best_line)+'\n')
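As a final optional check, we can read the resulting file back in and confirm that each Concepticon ID indeed occurs only once:

# Optional check: each Concepticon ID should occur exactly once in the output.
with UnicodeDictReader(
        'spanish-data-mapped.tsv', delimiter='\t') as reader:
    concept_ids = [line['CONCEPTICON_ID'] for line in reader]
assert len(concept_ids) == len(set(concept_ids))
print('{0} unique concept sets in the mapped data.'.format(len(concept_ids)))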
And for convenience, we can print out how many matches we could actually find:
print('Found {0} direct matches in data.'.format(len(esdata)))
The resulting mappings can be found in the file spanish-data-mapped.tsv. All data and code, along with the dependencies, are available from this public GitHub Gist.
References
Alonso, M. A. and Fernandez, A. and Diez, E. (2015): Subjective age-of-acquisition norms for 7,039 Spanish words. Behavior Research Methods 47.1. 268-274.
List, Johann-Mattis and Rzymski, Christoph and Greenhill, Simon and Schweikhard, Nathanael and Pianykh, Kristina and Forkel, Robert (2019): Concepticon. A resource for the linking of concept lists (Version 2.2.0). Jena: Max Planck Institute for the Science of Human History. https://concepticon.clld.org/.
List, Johann-Mattis and Cysouw, Michael and Forkel, Robert (2016): Concepticon. A resource for the linking of concept lists. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation. 2393-2400.