A subjective frequency and iconicity database for 1000 ASL signs
Word frequency plays an important role in language processing (e.g., word recognition, word naming, lexical decision, phonological processing) and may also be an important determinant of language structure. Spoken language researchers have access to many large corpora and normative datasets that allow them to control features such as frequency, phonological properties, imageability, and part of speech in their experiments. However, few comparable large-scale datasets are available for sign languages.
To fill this gap, we have partnered with researchers at Tufts University, Naomi Caselli and Ariel Cohen-Goldberg, to collect subjective frequency and iconicity ratings from a group of deaf ASL signers (25-30 raters) for a set of 1,000 ASL signs, producing the largest sign language dataset of its kind to date. In addition, all signs have been coded for phonological features based on a modified version of the Prosodic Model (Brentari, 1998), from which neighborhood densities can be calculated. Our analyses of this database are nearly complete (e.g., examining correlations between frequency, iconicity, and phonological complexity), and the results will soon be submitted for publication in Behavior Research Methods.
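To illustrate the idea of neighborhood density, here is a minimal sketch in Python. It assumes a simplified representation in which each sign is a dictionary of phonological features and defines a sign's neighbors as those differing in exactly one feature; the feature names and values below are illustrative toy examples, not the actual coding scheme used in the database.

```python
# Hypothetical sketch: phonological neighborhood density, defined here as
# the number of signs in the lexicon that differ from a target sign in
# exactly one phonological feature. Feature names and values are toy
# examples, not the database's actual Prosodic Model coding.

def neighborhood_density(target, lexicon):
    """Count signs differing from `target` in exactly one feature."""
    def distance(a, b):
        # number of features on which the two signs disagree
        return sum(a[feat] != b[feat] for feat in a)
    return sum(1 for sign in lexicon
               if sign is not target and distance(target, sign) == 1)

# Toy lexicon: each sign coded for handshape, location, and movement.
lexicon = [
    {"handshape": "1", "location": "chin", "movement": "tap"},
    {"handshape": "1", "location": "chin", "movement": "circle"},
    {"handshape": "B", "location": "chin", "movement": "tap"},
    {"handshape": "5", "location": "torso", "movement": "circle"},
]

print(neighborhood_density(lexicon[0], lexicon))  # -> 2
```

The first sign has two one-feature neighbors (the second differs only in movement, the third only in handshape), so its density is 2. The actual database uses many more features, but the counting logic is the same.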
The database itself, including the videos, will be made publicly available. Publication of the full dataset online will provide a valuable tool for anyone conducting sign language research.