From Fieldwork to Trees 1: Data preparation

A colleague of mine has recently returned from his fieldwork, where he collected data on the dialectal variation of the Alorese language of Alor and Pantar in the East Nusa Tenggara province of Indonesia. He collected data on 13 Alorese varieties, including word list data. One obvious step for comparing the dialects is to mark which forms are cognate and then use a standard tree (or network) construction algorithm to display the shared signal in the data. With standard tools and a bit of Python glue, this is an easy task. A script covering all three steps can be found in my repository on GitHub. In this first part, I will describe how to get an Excel file into a format LingPy can deal with.

I was given an MS Excel file by my colleague. The file contains a single sheet with 13 Alorese word lists in matrix format; its top left corner looks like the following.

English          Indonesian   dul           alk           alb
1sg              saya         gɔ            go            go
2sg (informal)   kamu         mi            mo            mo
2sg (polite)     Anda
3sg              dia          no            no            no
1pl excl         kami         tite          kame          kame
1pl incl         kita         tite          kame          ite
2pl              kalian       punauŋ        mi            mi
3pl              mereka       feː           fe            fe
this             ini          h̃aʤa          ha, kia       hã
that             itu          teʤa          kalːi         kəte
here             di sini      haʤa          hã            ha ɔnɔŋ
there            di sana      fei kalei     felio         fali kali
who?             siapa        hafa          feiru         fiaru
what?            apa          paru          pai           pai, paru
where?           di mana      naŋ ga ɔrɔ    oro pai noŋ   naŋga, ɔrɔ naŋga, naŋga ʤafa
when?            kapan        ɛrɛ pira      erepira       ɛrə pira
how?             bagaimana    namo naŋga    namonaŋga     nəmga, nəmən;ga

This is a very common shape for comparative word lists, so I hope the procedure described in the following – and the script I provide – will help with other language or dialect comparison tasks.

A first thing to notice is that some cells contain a single form in what looks like reasonably decent IPA, while other cells contain synonyms (cf. ‘what?’ in alb, which quite clearly contains two forms, one shared with alk and one with dul), which appear to be separated by commas. In the past, I have seen people be less than consistent about which separators they use, so I ran a quick check by searching the Excel sheet: while the two gloss columns, English and Indonesian, contain several different separators (commas, semicolons, slashes, and also brackets), the transcribed data looks good in this respect. There is no / anywhere in the actual word list. The single ; in the matrix, which is the one you can see in alb ‘how?’ above, looks like it was used as an ad-hoc separator to stop <ng> from becoming /ŋ/ in a search-and-replace step that I expect happened at some point. There are only a handful of instances of brackets, and commas seem to consistently separate different forms.
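Such a check does not have to happen by eye in Excel. A minimal sketch, using the same xlrd access pattern as the conversion script below, might count candidate separator characters in the transcription columns like this:

import collections

import xlrd

book = xlrd.open_workbook("Wordlists.xlsx")
sheet = book.sheet_by_index(0)

# Count candidate separator characters in everything to the right of
# the two gloss columns, i.e. in the transcribed forms only.
counts = collections.Counter()
for row in range(1, sheet.nrows):
    for col in range(2, sheet.ncols):
        value = str(sheet.cell_value(row, col))
        for sep in ",;/()":
            counts[sep] += value.count(sep)
print(counts)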

In order to work with this data in LingPy (List et al. 2018) or in any program that supports the CLDF standard (Forkel et al. 2018), it needs to be converted into a long table format, where each row lists one form, indexed by language and concept. Unfortunately, the CLDF format has different default column headers from LingPy/EDICTOR. All three support custom column headers, but CLDF makes it very easy to use them (without any additional effort in all but the most restrictive use cases, even), while I tend to find it quite a hassle to convince EDICTOR of custom column names, so we will use EDICTOR's defaults. That means that we want the column headers ID for the row IDs (which may not start at 0), DOCULECT for the names of the varieties, CONCEPT for the gloss column, IPA for the column containing the forms, and TOKENS for the column of segmented forms. For metadata-free CLDF, the corresponding header names would be ID, Language_ID, Parameter_ID, Form, and Segments. We will create a file to bridge between these two name sets later.
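In essence, that bridge is just a five-entry correspondence. Spelled out as a plain Python mapping (for illustration only; this is not the actual bridge file):

# EDICTOR/LingPy default headers and their metadata-free CLDF counterparts.
EDICTOR_TO_CLDF = {
    "ID": "ID",
    "DOCULECT": "Language_ID",
    "CONCEPT": "Parameter_ID",
    "IPA": "Form",
    "TOKENS": "Segments",
}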

While reading all the forms, it makes sense to segment them right away, because we need segmented forms for automatic cognate coding anyway. For segmenting IPA-transcribed data, I will use Robert Forkel's segments Python package in its default mode; Pavel Sofroniev's ipatok might be an alternative. This is also the step where we could use pyclts to check the quality of the transcription, but that would go beyond the scope of this example.
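To give an impression of what the tokenizer does, here it is applied to two of the forms from the table above; the separator argument controls how the spaces inside a multi-word form are rendered:

from segments import Tokenizer

tokenizer = Tokenizer()
# In IPA mode, modifiers such as the length mark stay attached to their base:
print(tokenizer("feː", ipa=True))
# -> f eː
# Spaces within a form are replaced by the given word separator:
print(tokenizer("fei kalei", ipa=True, separator=" _ "))
# -> f e i _ k a l e i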

Python has an Excel reader package called xlrd. Using it, the core functionality of the conversion script looks like this.

import csv

import segments
import xlrd

book = xlrd.open_workbook("Wordlists.xlsx")
sheet = book.sheet_by_index(0)


def cell(row, col):
    return sheet.cell_value(row, col)


tokenizer = segments.Tokenizer()


def segment(word):
    # Tokenize an IPA string; spaces inside a form are rendered
    # as the word separator " _ " in the TOKENS column.
    return tokenizer(word, ipa=True, separator=" _ ")


column_names = [cell(0, col) for col in range(sheet.ncols)]

with open("wordlist.tsv", "w", newline="") as out:
    write = csv.writer(out, dialect="excel-tab").writerow
    write(["ID", "CONCEPT", "DOCULECT", "IPA", "TOKENS"])
    i = 1
    for row in range(1, sheet.nrows):
        concept = cell(row, 0)
        # Columns 0 and 1 hold the English and Indonesian glosses;
        # the doculect columns start at index 2.
        for col in range(2, sheet.ncols):
            lect = column_names[col]
            # Commas separate synonymous forms within a cell.
            for form in cell(row, col).split(","):
                form = form.strip()
                tokens = segment(form)
                write([i, concept, lect, form, tokens])
                i += 1

The output is the following TSV file, which can be used, e.g., in EDICTOR.

ID CONCEPT DOCULECT IPA TOKENS
1 1sg dul gɔ g ɔ
2 1sg alk go g o
3 1sg alb go g o
4 2sg (informal) dul mi m i
5 2sg (informal) alk mo m o
6 2sg (informal) alb mo m o
7 2sg (polite) dul    
8 2sg (polite) alk    
9 2sg (polite) alb    
10 3sg dul no n o
11 3sg alk no n o
12 3sg alb no n o
13 1pl excl dul tite t i t e
14 1pl excl alk kame k a m e
15 1pl excl alb kame k a m e
16 1pl incl dul tite t i t e
17 1pl incl alk kame k a m e
18 1pl incl alb ite i t e
19 2pl dul punauŋ p u n a u ŋ
20 2pl alk mi m i
21 2pl alb mi m i
22 3pl dul feː f eː
23 3pl alk fe f e
24 3pl alb fe f e
25 this dul h̃aʤa h̃ a ʤ a
26 this alk ha h a
27 this alk kia k i a
28 this alb hã h ã
29 that dul teʤa t e ʤ a
30 that alk kalːi k a lː i
31 that alb kəte k ə t e
32 here dul haʤa h a ʤ a
33 here alk hã h ã
34 here alb ha ɔnɔŋ h a _ ɔ n ɔ ŋ
35 there dul fei kalei f e i _ k a l e i
36 there alk felio f e l i o
37 there alb fali kali f a l i _ k a l i
38 who? dul hafa h a f a
39 who? alk feiru f e i r u
40 who? alb fiaru f i a r u
41 what? dul paru p a r u
42 what? alk pai p a i
43 what? alb pai p a i
44 what? alb paru p a r u
45 where? dul naŋ ga ɔrɔ n a ŋ _ g a _ ɔ r ɔ
46 where? alk oro pai noŋ o r o _ p a i _ n o ŋ
47 where? alb naŋga n a ŋ g a
48 where? alb ɔrɔ naŋga ɔ r ɔ _ n a ŋ g a
49 where? alb naŋga ʤafa n a ŋ g a _ ʤ a f a
50 when? dul ɛrɛ pira ɛ r ɛ _ p i r a
51 when? alk erepira e r e p i r a
52 when? alb ɛrə pira ɛ r ə _ p i r a
53 how? dul namo naŋga n a m o _ n a ŋ g a
54 how? alk namonaŋga n a m o n a ŋ g a
55 how? alb nəmga n ə m g a
56 how? alb nəmən;ga n ə m ə n ; g a


In order to make this script reusable, it is obviously useful to replace the hard-coded assumptions (file paths, separators, number of gloss languages) with command line arguments and the like. If we want to use LingPy for cognate coding, we also have to skip empty forms like 7–9 above with an if not form: continue (see the sketch below). But the core of the conversion is just the last ten or so lines of this script, and with that I have my colleague's data in a format ready for further use; how I use it will be the content of a later post.
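The guard for empty forms would slot into the innermost loop of the script like this:

            for form in cell(row, col).split(","):
                form = form.strip()
                if not form:
                    # Skip empty cells, such as the '2sg (polite)' rows 7-9 above.
                    continue
                tokens = segment(form)
                write([i, concept, lect, form, tokens])
                i += 1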

[Figure: Forms 4–6 from wordlist.tsv in EDICTOR]

References

List, Johann-Mattis & Greenhill, Simon J. & Forkel, Robert. 2018. LingPy. A Python Library for Quantitative Tasks in Historical Linguistics. Version 2.6.3. Zenodo. doi:10.5281/zenodo.1203193. https://zenodo.org/record/1203193 (14 November, 2018).

Forkel, Robert & List, Johann-Mattis & Greenhill, Simon J. & Rzymski, Christoph & Bank, Sebastian & Cysouw, Michael & Hammarström, Harald & Haspelmath, Martin & Kaiping, Gereon A. & Gray, Russell D. 2018. Cross-Linguistic Data Formats, advancing data sharing and re-use in comparative linguistics. Scientific Data 5. 180205. doi:10.1038/sdata.2018.205.



