Imagine you have two datasets which both contain roughly the same concepts, but a slightly different number of columns and, more importantly, potentially identical identifiers in the first column. A bad idea for merging these datasets would be to paste them into Excel or some other spreadsheet software and then try to manually fix all the problems that occur along the way.
A better idea is to use LingPy and our CLDF curation framework (which was used, for example, when establishing the CLICS² database, see List et al. 2018). This is not much work, requiring only a few lines of code, and it also gives you the possibility to re-use these code pieces for similar tasks.
I want to illustrate how this can be done for two datasets (Chacon 2017 and Chacon et al. forthcoming), both curated within our CLDF framework, by showing how one can merge them with LingPy. Before we can start, we install these two datasets via pip.
$ pip install -e git+https://github.com/lexibank/chaconarawakan.git@v1.0.1#egg=lexibank_chaconarawakan
$ pip install -e git+https://github.com/lexibank/chaconbaniwa.git@v1.0.0#egg=lexibank_chaconbaniwa
In our Python script, we then load the two packages along with LingPy and the pyconcepticon package.
from lingpy import *
from lexibank_chaconarawakan import Dataset as ds1
from lexibank_chaconbaniwa import Dataset as ds2
from pyconcepticon.api import Concepticon
Along with these packages, LingPy itself will also have been installed, if it was not already present on your computer. Both datasets contain a raw folder in which we find the original data as it was curated within the EDICTOR/LingPy approach, that is: the data was originally analyzed with LingPy (List et al. 2018) and then manually corrected with EDICTOR (List 2017). So instead of loading the data with LingPy's CLDF reader, we load it in its raw form, for convenience.
wl1 = Wordlist(ds1().raw.joinpath("arawakan_swadesh_100_edictor.tsv").as_posix())
wl2 = Wordlist(ds2().raw.joinpath("Bruzzi_Granadillo.txt").as_posix())
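If you want to make sure that the files were loaded correctly, you can inspect the basic dimensions of the two wordlists. This is just a quick check I add here for convenience, using LingPy's height and width attributes, which count the concepts and the languages in a wordlist, respectively.

# quick check: number of concepts (height) and languages (width) per dataset
print(wl1.height, wl1.width)
print(wl2.height, wl2.width)
print(wl1.columns)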
The part of the data we want is the common Swadesh list of 100 concepts (Swadesh 1955). We load this list with the help of the pyconcepticon API, published as part of the Concepticon project (List et al. 2016).
swad = [c.concepticon_id for c in
        Concepticon().conceptlists['Swadesh-1955-100'].concepts.values()]
concepts = {wl2[idx, 'concept'] for idx in wl2
            if wl2[idx, 'concepticon_id'] in swad}
We now declare a Python dictionary to store the data that we will later write to a LingPy wordlist file (which can also be read by EDICTOR). We use the columns present in the first wordlist as our standard and add two more: one for the original index (called old_idx) and one for a combined cognate identifier (called cog).
D = {0: wl1.columns+['old_idx', 'cog']}
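For readers who have not worked with LingPy's wordlist format before: the dictionary uses the key 0 for the header and consecutive integers for the rows. A toy version of the structure we are building (with invented values, just for illustration) would thus look like this:

# toy illustration of the wordlist dictionary format (values are invented)
# key 0 holds the header, integer keys hold the individual word entries
D_toy = {
    0: ['doculect', 'concept', 'ipa', 'old_idx', 'cog'],
    1: ['LanguageA', 'hand', 'kapi', 'chaconarawakan-1', 'chaconarawakan-12'],
    2: ['LanguageB', 'hand', 'kapi', 'chaconbaniwa-1', 'chaconbaniwa-5'],
}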
We now iterate over both wordlists and add the relevant entries to the dictionary, making sure to keep the cognate identifiers of the two datasets apart by assigning them to different sets (with the help of the dataset names, which we add as part of the cognate-set identifiers).
nidx = 1
for idx in wl1:
    D[nidx] = [wl1[idx, h] for h in wl1.columns] + ['chaconarawakan-'+str(idx)]
    D[nidx] += ['chaconarawakan-'+str(wl1[idx, 'cogid'])]
    nidx += 1
for idx in wl2:
    D[nidx] = [wl2[idx, h] for h in wl1.columns] + ['chaconbaniwa-'+str(idx)]
    D[nidx] += ['chaconbaniwa-'+str(wl2[idx, 'cogid'])]
    nidx += 1
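Since the whole point of the old_idx column is to keep the potentially identical identifiers of the two sources apart, it does not hurt to verify that all rows were transferred and that the prefixed identifiers are indeed unique. This little check is not part of the workflow proper, but it is cheap:

# verify that all rows were transferred and that the prefixed
# identifiers (second-to-last column) are unique
assert len(D) - 1 == len(list(wl1)) + len(list(wl2))
old_ids = [D[idx][-2] for idx in D if idx > 0]
assert len(old_ids) == len(set(old_ids))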
All that is left to do now is load the data into a Wordlist object provided by LingPy and write it to file (after a few operations).
wl = Wordlist(D)
First, we renumber the cognates in the column cog in order to make sure they are numeric.
wl.renumber('cog', 'cogid', override=True)
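The renumber method replaces the string-valued cognate labels in cog by integer identifiers in the target column cogid (which already exists in the data, hence override=True). If you want to see what happened, you can print a few entries:

# inspect a few entries: string-valued cog labels and their new numeric cogids
for idx in list(wl)[:5]:
    print(wl[idx, 'cog'], '->', wl[idx, 'cogid'])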
Then, we segment those entries in the data that have not yet been segmented:

for idx, ipa, segments in wl.iter_rows('ipa', 'segments'):
    if not segments:
        wl[idx, 'segments'] = ipa2tokens(ipa)
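The ipa2tokens function takes an unsegmented IPA string and splits it into a list of sound segments, keeping diacritics such as aspiration together with their base sound. To get a feeling for its behavior, you can try it on any transcription (the string below is made up):

# illustration: ipa2tokens segments a plain IPA string into sound tokens
print(ipa2tokens('kʰarapi'))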
Now, we write all those entries to file whose concept is also present in Swadesh's list of 100 items.
wl.output(
    'tsv',
    filename='chacon-arawakan-baniwa',
    subset=True,
    cols=[c for c in wl.columns if c not in ['value_in_source']],
    rows=dict(concept='in '+str(concepts)))
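To make sure that the subsetting worked as intended, one can load the resulting file again and check its dimensions; note that LingPy adds the .tsv extension automatically when writing. This final check is again just for convenience:

# re-load the merged file and inspect its dimensions
wl_check = Wordlist('chacon-arawakan-baniwa.tsv')
print(wl_check.height, wl_check.width)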
This is essentially all. The data in the file chacon-arawakan-baniwa.tsv can now be analyzed with LingPy or manually annotated with EDICTOR.
References
Chacon, T. (2017): Arawakan and Tukanoan contacts in Northwest Amazonia prehistory. PAPIA 27.2. 237-265.
List, J.-M., M. Cysouw, and R. Forkel (2016): Concepticon. A resource for the linking of concept lists. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation. 2393-2400.
List, J.-M. (2017): A web-based interactive tool for creating, inspecting, editing, and publishing etymological datasets. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. System Demonstrations. 9-12.
List, J.-M., S. Greenhill, C. Anderson, T. Mayer, T. Tresoldi, and R. Forkel (eds.) (2018): CLICS: Database of Cross-Linguistic Colexifications. Max Planck Institute for the Science of Human History: Jena. http://clics.clld.org/.
List, J.-M., S. Greenhill, T. Tresoldi, and R. Forkel (2018): LingPy. A Python library for quantitative tasks in historical linguistics. Max Planck Institute for the Science of Human History: Jena. http://lingpy.org.
Swadesh, M. (1955): Towards greater accuracy in lexicostatistic dating. International Journal of American Linguistics 21.2. 121-137.
Do you have a single-file version of the script, for example in a Github Gist, so we can read all the commands in context?
Not yet, I was in too much of a rush when writing this, but I just uploaded it here: https://gist.github.com/LinguList/a217baf24023358a3e541040017bb503