Hand gestures are fundamental to interpersonal communication. They fulfill many communicative functions: forming key connections between language and real-world objects and locations during development; facilitating, and compensating for limited verbal skill during, language acquisition; regulating and enhancing conversation; and serving as the principal articulators of sign languages. Research on co-speech hand gestures and sign languages serves to advance the diagnosis and treatment of various neurological disorders; to improve accessibility for people with sensory, motor, or cognitive impairments; to contribute to educational practices; and to enable a variety of HCI themes, e.g., spatial UIs, intelligent environments, machine translation, and conversational agents.
In gesture and sign language research, motion capture (along with other instrumented measures) enables precise, quantitative, and statistical inquiry. Issues that researchers encounter in this context include high barriers to entry in cost and expertise, trade-offs between convenience and accuracy, and the lack of convenient tools for exploring and analysing multimodal data. Due to these issues, quantitative inquiry in sign and gesture research is difficult, comparatively scarce, and often based on less data than would be ideal. The primary objective of this project is to design motion capture tools that support research on sign languages and co-speech hand gestures. We aim to address the issues above from an interaction design perspective, in close collaboration with sign and gesture researchers, and to produce artifacts that contribute to the field in a scalable manner.
A secondary objective of the project is to create knowledge that can be exploited to inform HCI artifacts. A report will be prepared at the end of the project, discussing how the knowledge gained can inform future HCI research. Possible application domains for this knowledge include supporting research in other fields, facilitating and enriching interpersonal communication and collaboration, supporting various tasks in home and work contexts, and general motion capture tools.
This effort is part of a project that has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 676063.
To introduce a wider audience of researchers to this field of inquiry, and to situate our work within existing research, we conducted a review of previous work that used motion capture to study sign and gesture production. We presented the preliminary results of our review, along with comments on technical and methodological issues, as a poster at the DComm conference Language as a Form of Action.