In a previous post (Kaiping 2018), I described how to convert matrix-shaped word lists given in Excel into the long format that LingPy and other software can work with. My motivation for this was to provide my colleague Yunus Sulistyono with a good way to compare the lexicon of his Alorese [alor1247] dialects, and to understand the relationship between them. In this post, the data is automatically cognate coded and converted into CLDF.
If we have a file like the following, we can easily use LingPy’s functionality to get cognate classes.
| ID | CONCEPT        | DOCULECT | IPA | TOKENS |
|----|----------------|----------|-----|--------|
| 1  | 1sg            | dul      | gɔ  | g ɔ    |
| 2  | 1sg            | alk      | go  | g o    |
| 3  | 1sg            | alb      | go  | g o    |
| 4  | 2sg (informal) | dul      | mi  | m i    |
| 5  | 2sg (informal) | alk      | mo  | m o    |
| 6  | 2sg (informal) | alb      | mo  | m o    |
This whole file (wordlist.tsv) contains data on dialects of one language (at least in simplified terms of the self-identification of the speakers, who refer to all these varieties as “bahasa Alor” – how valid that categorization actually is might be one outcome of his project), all transcribed by the same researcher, so we should not expect many problems from inconsistent transcription or from cognates that cannot be recognized as such in the data.
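As a quick sanity check before any cognate coding, the file can be loaded with LingPy’s generic Wordlist class to see that the columns are picked up as intended. A small sketch of mine, not part of the original workflow:

import lingpy

# Load the TSV word list and report its dimensions: `cols` lists the
# doculects, `height` is the number of concepts.
wl = lingpy.Wordlist("wordlist.tsv")
print("Doculects:", wl.cols)
print("Concepts:", wl.height)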
Some word boundaries are marked, so we can use them as morpheme boundaries and apply partial cognate coding. For that, we use LingPy’s Partial class, with LexStat (List 2012) for computing similarities and Infomap (Rosvall & Bergstrom 2008) as the cluster method. Infomap, which seems to be a very good baseline cluster method for cognate coding purposes (List, Lopez & Bapteste 2016), is not listed in the current documentation of the partial_cluster method, but it is implemented. The fuzzy keyword to the Alignments constructor below tells the alignment algorithm to align the individual words (or, more generally, morphemes) of each form instead of trying to align the forms globally, which tends to improve the resulting alignments vastly.
import lingpy

# Partial cognate detection on the word list, using LexStat similarities
# and Infomap clustering.
lex = lingpy.compare.partial.Partial("wordlist.tsv")
lex.get_scorer(runs=10000)
lex.partial_cluster(
    method='lexstat',
    threshold=0.55,
    cluster_method="infomap",
    ref='partialids',
    verbose=True)

# Align the morphemes within each partial cognate class and write the
# result to aligned.tsv.
alm = lingpy.Alignments(lex, ref="partialids", fuzzy=True)
alm.align(method='progressive')
alm.output('tsv', filename='aligned', ignore='all', prettify=False)
If all goes well, this generates a file aligned.tsv in the current directory with the automatic partial cognate codes and alignments. (One way this can go wrong is if we have not filtered out empty forms in the generation of wordlist.tsv; a minimal filtering sketch follows below.)
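Such a filter belongs in the conversion script from the previous post. Here is a sketch of mine, using only the standard csv module and the column names from the table above (the file names are my choice):

import csv

# Drop rows with an empty IPA cell before handing the file to LingPy.
with open("wordlist.tsv", encoding="utf-8") as infile:
    reader = csv.DictReader(infile, delimiter="\t")
    fieldnames = reader.fieldnames
    rows = [row for row in reader if (row["IPA"] or "").strip()]

with open("wordlist-filtered.tsv", "w", encoding="utf-8", newline="") as outfile:
    writer = csv.DictWriter(outfile, fieldnames=fieldnames, delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)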
This is good for a one-shot run to get an overview of the data (e.g. with Edictor), but if we want to try out different thresholds and cluster algorithms, it would be wise to cache the LexStat scorers (in particular the bscorer, which is expensive to calculate) somewhere.
The easiest way to cache the scorer – although at the cost of a huge overhead, because it also saves the whole word list (and in this case also all its metadata) – is to output the word list with scorer to a TSV file. If we want the scorer cache in lexstats.tsv, we can replace the get_scorer line above with the following.
try:
    # Re-use the cached scorers if a previous run has written them.
    scorers_etc = lingpy.compare.lexstat.LexStat("lexstats.tsv")
    lex.scorer = scorers_etc.scorer
    lex.cscorer = scorers_etc.cscorer
    lex.bscorer = scorers_etc.bscorer
except OSError:
    # Otherwise, compute the scorers and cache them for the next run.
    lex.get_scorer(runs=10000)
    lex.output('tsv', filename='lexstats', ignore=[])
This reads the scorers from that file if it can, and computes them otherwise. It allows us to change the threshold and cluster method without having to re-calculate the scorers every time.
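With the scorers cached, trying several thresholds becomes cheap. As an illustration (a sketch of mine, not part of the original pipeline), each run below writes its partial cognate IDs to a separate column, so the clusterings can be compared side by side:

# Sweep over a few thresholds; each result goes into its own column
# (PARTIALIDS_45, PARTIALIDS_55, ...), re-using the cached scorers.
for threshold in (0.45, 0.55, 0.65):
    lex.partial_cluster(
        method='lexstat',
        threshold=threshold,
        cluster_method="infomap",
        ref="partialids_{0}".format(int(round(threshold * 100))))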
The file generated by this script, aligned.tsv, is again a TSV file (although it has minor issues in the presence of line breaks and quotation marks in cells, because the QLC interface used by LingPy handles these differently from the standard Python csv module – luckily we do not have a Comments column, in which people would be most likely to use such characters) and can be read in Edictor.
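If in doubt whether a given file is affected, a few lines of Python can flag suspicious cells. This is my own sketch; since QLC-style files do not quote cells, a plain line scan suffices:

# Flag quotation marks, which QLC-style TSV files do not escape and
# which may therefore confuse tools expecting csv-module conventions.
with open("aligned.tsv", encoding="utf-8") as tsvfile:
    for number, line in enumerate(tsvfile, start=1):
        if '"' in line:
            print("Quotation mark on line", number)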
While the file works nicely with LingPy and Edictor, tools that follow the CLDF standard (Forkel et al. 2018) – of which there are not many yet, but BEASTling (Maurits et al. 2017), which I want to show in the following post, is one of them – will not be able to work with this file immediately. However, the standard is flexible enough that we can transform this TSV file into valid CLDF very easily: we just need to provide a JSON metadata file that describes the columns in this data set. We know exactly which columns the TSV file contains and what they mean in CLDF terms, so we can specify the metadata as follows.
{ "@context": [ "http://www.w3.org/ns/csvw", { "@language": "en" } ], "dc:conformsTo": "http://cldf.clld.org/v1.0/terms.rdf#Wordlist", "dc:creator": [ ], "dc:identifier": "", "special:contact": "", "dialect": { "commentPrefix": null }, "tables": [ { "dialect": { "delimiter": "\t" }, "dc:conformsTo": "http://cldf.clld.org/v1.0/terms.rdf#FormTable", "tableSchema": { "columns": [ { "datatype": "integer", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#id", "required": true, "name": "ID" }, { "datatype": "string", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#parameterReference", "required": true, "name": "CONCEPT" }, { "datatype": "string", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#languageReference", "required": true, "name": "DOCULECT" }, { "datatype": "string", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#form", "required": false, "name": "IPA" }, { "datatype": "string", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#segments", "required": false, "separator": " ", "name": "TOKENS" }, { "datatype": "integer", "required": false, "separator": " ", "name": "SONARS" }, { "datatype": "string", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#prosodicStructure", "required": false, "name": "PROSTRINGS" }, { "datatype": "string", "required": false, "name": "CLASSES" }, { "datatype": "integer", "required": false, "name": "LANGID" }, { "datatype": "string", "required": false, "separator": " ", "name": "NUMBERS" }, { "datatype": "boolean", "required": false, "name": "DUPLICATES" }, { "datatype": "integer", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#cognatesetReference", "required": false, "separator": " ", "name": "PARTIAL_IDS" }, { "datatype": "string", "propertyUrl": "http://cldf.clld.org/v1.0/terms.rdf#alignment", "required": false, "separator": " ", "name": "ALIGNMENT" } ], "primaryKey": [ "ID" ] }, "url": "aligned.tsv" } ] }
The source code, as well as a file containing the generic metadata for a dataset like the one above and, for reference, the specific metadata of Yunus’ dataset, are available on GitHub.
References
Forkel, Robert & List, Johann-Mattis & Greenhill, Simon J. & Rzymski, Christoph & Bank, Sebastian & Cysouw, Michael & Hammarström, Harald & Haspelmath, Martin & Kaiping, Gereon A. & Gray, Russell D. 2018. Cross-Linguistic Data Formats, advancing data sharing and re-use in comparative linguistics. Scientific Data 5. 180205. doi:10.1038/sdata.2018.205.
Kaiping, Gereon Alexander. 2018. From Fieldwork to Trees 1: Data preparation. Blogpost. Computer-Assisted Language Comparison in Practice. https://calc.hypotheses.org/803 (15 November, 2018).
List, Johann-Mattis. 2012. LexStat: Automatic detection of cognates in multilingual wordlists. Proceedings of the EACL 2012 Joint Workshop of LINGVIS & UNCLH (EACL 2012), 117–125. Stroudsburg, PA, USA: Association for Computational Linguistics. http://dl.acm.org/citation.cfm?id=2388655.2388671 (12 August, 2015).
List, Johann-Mattis & Lopez, Philippe & Bapteste, Eric. 2016. Using sequence similarity networks to identify partial cognates in multilingual wordlists. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), vol. 2, 599–605.
Maurits, Luke & Forkel, Robert & Kaiping, Gereon A. & Atkinson, Quentin D. 2017. BEASTling: A software tool for linguistic phylogenetics using BEAST 2. PLOS ONE 12(8). e0180908. doi:10.1371/journal.pone.0180908.
Rosvall, Martin & Bergstrom, Carl T. 2008. Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences 105(4). 1118–1123. doi:10.1073/pnas.0706851105.