Our projects
Here are some of our current staff and student projects:
Typical and atypical language acquisition
About 10% of children have persistent difficulties with language acquisition and require clinical intervention by specialists such as speech-language pathologists. Some of these children have a known diagnosis in cognitive, sensorimotor or behavioural domains. The majority, however, have no clear aetiology to explain their persistent difficulties with language acquisition and use. Early and accurate identification remains a challenge.
This project focuses on improving our knowledge of typical and atypical language acquisition in bilingual children. With growing linguistic and cultural diversity, speech-language pathologists are encountering an increasing number of bilingual children with suspected language difficulties. For many speech-language pathologists, it is a significant challenge to determine whether bilingual children differ from monolingual children simply because they are learning two languages, or whether the differences are symptoms of a clinical condition requiring intervention.
At the Centre for Language Sciences, researchers conduct theoretical and experimental research on typical and atypical language acquisition to inform clinical speech-language pathology practice. The goal of this research is to improve assessment and intervention practices to benefit children who have difficulties with language acquisition and use. By working with local speech-language pathologists, the Centre for Language Sciences aims to be a centre of knowledge translation as well as knowledge generation.
Project leader: Jae-Hyun Kim
Grammar in bilingual preschoolers
Many bilingual Australian preschoolers come to school only beginning to learn English – and little is known about just how much English they know.
This is why postdoctoral research fellow Nan Xu Rattanasone is investigating bilingual preschoolers’ knowledge of English grammar. Her research focuses on whether these children know how to use both the singular (eg cat) and the plural (eg cats).
This type of grammatical knowledge is especially challenging for children whose first language is Mandarin, which does not mark plurals at the ends of words and has few words ending in consonants.
Preliminary findings suggest many children may benefit from English language programs during early education.
Project leader: Nan Xu Rattanasone
Learning how to speak from learning how to listen
If children have difficulties with language, this often becomes apparent when they speak differently from their peers. But does the source of children’s difficulties lie in the speaking process, or in impaired listening to and understanding of language? Answering this question can have important implications for the age at which difficulties are diagnosed and the best method for intervention.
This is why Titia Benders, lecturer in Macquarie University’s Department of Linguistics, investigates how toddlers and pre-schoolers listen to the types of ‘errors’ that they themselves produce.
After working on this topic with colleagues in the Netherlands, Benders is now starting studies to investigate whether the speaking challenges of children who learn English are mirrored in their listening. Current and prospective students at Macquarie and elsewhere who are intrigued by these questions are encouraged to contact Titia to discuss the possibility of working together.
Beyond Speech: Towards better communication for children with hearing loss
Researchers and clinicians from Macquarie University, Australian Hearing Services, Cochlear Ltd, Parents of Deaf Children Incorporated, The Shepherd Centre (TSC) and The Royal Institute for Deaf and Blind Children (RIDBC) are working on a project to better understand the locus of the listening and discourse challenges faced by children with hearing loss. Despite the benefits of early newborn hearing screening and early intervention programs for children with hearing loss, most still experience academic and social challenges at school.
The project will use online psycholinguistic methods to identify which levels of language (sounds/words, grammar/meaning, prosody/discourse) are most compromised in children with hearing loss during different types of listening/language processing activities.
The outcomes of the project will provide an evidence base for identifying the locus of communicative breakdowns for children with hearing loss, helping to inform more effective early interventions to enhance effective discourse interactions.
Project Participants: Distinguished Professor Katherine Demuth (Macquarie University), Associate Professor Mridula Sharma (Macquarie University), Dr Nan Xu Rattanasone (Macquarie University), Professor Gregory Leigh (RIDBC), Ms Inge Kaltenbrunn (RIDBC), Ms Aleisha Davis (TSC), Ms Alison King (Australian Hearing Services), Dr Mary-Beth Brinson (Cochlear Ltd), Dr Chi Lo (Parents of Deaf Children Incorporated).
Processing speech in a noisy world
Associate Professor Mridula Sharma is investigating auditory processing, that is, how sounds, tones and speech are mapped at the brain level. Her research centres on speech perception in noise, the role of auditory processing when listening in noise, and the use of cortical auditory evoked potentials to measure speech perception in noise.
She explores how children understand speech in noisy environments such as classrooms, and how this is linked to their language and reading development.
This research has application for children with hearing difficulties as well as those with diverse linguistic backgrounds.
Understanding the mechanisms of speech
Speaking requires fine control and rapid coordination of our tongue, lips, jaw, and other speech organs. Yet all normally developing adults do so effortlessly, without even thinking about it, often in more than one language.
Dr Michael Proctor is researching the mechanisms we use to produce the different sounds of Australian English and other languages. Of particular interest are the ‘l’- and ‘r’-like sounds in ‘lake’ and ‘rake’, which involve coordinated movements of different parts of the tongue at the same time, and which can be a challenge for children and second language learners. He uses state-of-the-art technology, including MRI, ultrasound and special equipment called electromagnetic articulography (EMA), to study speech movements while people are talking.
He is interested in how children acquire this behaviour, how speech is represented in the mind, what goes wrong in disordered speech, and how speakers control these movements to generate the infinitely expressive patterns of sounds used in different languages.
Learning catssss and dogzzz
Ben Davies, who recently completed his PhD in the Child Language Lab, has been studying how children learn to use the plural ‘s’ on the end of nouns. This is tricky in English, because the plural ‘s’ actually has three different pronunciations. On a word like cat it is pronounced as ‘s’, but when it is attached to a word like dog, it is pronounced as a ‘z’. However, in words such as peach-es or bus-es, the plural takes the form of ‘ez’.
The hardest one for children to learn is the ‘ez’ ending used in peaches and buses.
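As a rough sketch of the pattern children must learn, the ending is chosen by the noun’s final sound. The example below is a deliberate simplification, using a hypothetical mini-lexicon of final sounds rather than real phonological transcriptions:

```python
# Simplified sketch of English plural allomorphy: the ending depends on
# the noun's final sound. FINAL_SOUND is a hypothetical stand-in for a
# real phonological transcription.

SIBILANTS = {"s", "z", "sh", "zh", "ch", "j"}  # these trigger 'ez'
VOICELESS = {"p", "t", "k", "f", "th"}         # these trigger 's'

FINAL_SOUND = {"cat": "t", "dog": "g", "peach": "ch", "bus": "s"}

def plural_ending(noun: str) -> str:
    """Pick the plural allomorph from the noun's final sound."""
    final = FINAL_SOUND[noun]
    if final in SIBILANTS:
        return "ez"  # peach-es, bus-es
    if final in VOICELESS:
        return "s"   # cat-s
    return "z"       # dog-z (all other voiced finals, including vowels)

for noun in ["cat", "dog", "peach", "bus"]:
    print(noun, "+", plural_ending(noun))
```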
These sounds pose a challenge for children with hearing loss. This raises questions about when children with hearing aids and cochlear implants might learn these plural endings.
Davies is now exploring these issues. His initial study was conducted with an eye-tracker; he has since moved to a more portable and innovative iPad app.
Project leaders: Ben Davies and Professor Katherine Demuth
The bilingual mind/brain
Dr Xin Wang has been studying the cognitive and neural aspects of second language processing and representation. Her work focuses on Chinese-English bilinguals who use their two languages on a regular basis, with the aim of understanding the cognitive mechanisms and architecture of bilingualism and multilingualism. She uses a variety of experimental paradigms (e.g., eye-tracking, ERPs and reaction-time measures) to investigate these issues empirically.
For instance, her work has shown that Chinese-English bilinguals activate the lexical representation of ‘feather’ in English when they listen to the English word ‘rain’. This cross-language activation is driven by the Chinese translations of the two English words, which share the same phonological information but carry different meanings. Importantly, this work demonstrated the activation of lexical tones in the bilingual mind/brain even when this information is not needed in the target-language task. These findings are important for understanding the bilingual lexicon and provide insights into learning a second or foreign language.
She is interested in how language-specific properties drive language processing in the bilingual context, as well as how a second/foreign language is learned, acquired, and represented.
Emergence of logic
This project examines the meaning assigned to words for disjunction (‘or’) in human languages.
Our research on the acquisition of logic has taken a different course from traditional studies of the topic. Our experiments found that children learning English assign the same interpretation to ‘or’ as adults do, which invites the conclusion that children initially take disjunction to be inclusive-or. This conclusion has been reinforced by the findings of other studies conducted in our lab.
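For readers unfamiliar with the distinction, the standard truth tables below contrast inclusive-or, which is true whenever at least one disjunct is true (the classical-logic reading), with exclusive-or, which is false when both disjuncts are true. This is textbook material rather than a finding of the project:

```latex
\begin{array}{cc|cc}
A & B & A \lor B \;(\text{inclusive}) & A \oplus B \;(\text{exclusive}) \\
\hline
T & T & T & F \\
T & F & T & T \\
F & T & T & T \\
F & F & F & F
\end{array}
```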
We’ve been able to chart the course of the development of logic in young children learning typologically distinct languages, such as English, Japanese and Chinese. We discovered that the development of logic is remarkably similar across languages, with just a few notable exceptions.
Our research findings invite a reappraisal of previous conclusions, and support the idea that the logic of human languages, including child language, has a considerable overlap with classical logic.
Project leader: Professor Stephen Crain
Deep learning for natural language processing
Deep learning is a new technology in computational linguistics that enables us to build computer programs that are far better at natural language understanding than ever before. These deep neural networks are state-of-the-art for technologies such as automatic speech recognition and machine translation.
We are using deep learning in an ARC-funded project to improve automatic syntactic and semantic parsing (identifying the way words combine to form phrases and sentences) of natural language texts, in order to identify predicate-argument structure (“who did what to whom”) from texts for information extraction purposes.
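To give a flavour of what predicate-argument extraction produces, here is a minimal sketch using the off-the-shelf spaCy dependency parser. This is our illustrative assumption, not the project’s own neural models, and the en_core_web_sm model must be installed separately:

```python
# Illustrative sketch: extract simple "who did what to whom" triples
# from a dependency parse. Uses the off-the-shelf spaCy parser, not the
# project's own deep-learning models.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this English model is installed

def triples(text):
    """Yield (subject, verb lemma, object) triples from the parse."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c.text for c in token.children if c.dep_ == "nsubj"]
            objects = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            for subj in subjects:
                for obj in objects:
                    yield (subj, token.lemma_, obj)

print(list(triples("The committee approved the proposal.")))
# Expected output: [('committee', 'approve', 'proposal')]
```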
Project leader: Mark Johnson
Language and reading in children with hearing loss
This research is part of a larger project examining the outcomes achieved by a population-based sample of Australian children with hearing loss and the factors influencing those outcomes (see https://outcomes.nal.gov.au/). Our specific interest lies in documenting the development of early spoken language and reading skills.
The research found that better language outcomes are associated with earlier intervention, including earlier cochlear implantation and fitting of hearing aids. Consistent with the view that spoken language skill is central to the development of reading, we also found that children’s early reading ability is associated with their skill in identifying the sounds contained in spoken words.
Another key interest was the language outcomes achieved by children with disabilities in addition to their hearing loss. We found that children’s progress at 3 years of age is influenced by the nature of their additional disability. When the children were assessed again at 5 years of age, we found a direct link between the type of disability and nonverbal cognitive ability.
Our findings provide strong evidence for the benefits of early hearing aid fitting and early cochlear implantation. They further indicate that type of additional disability can be used to gauge expected language development before formal assessment of cognitive ability is feasible.
Project leaders: Professor Linda Cupples (CLaS) and Dr Teresa Ching (National Acoustic Laboratories, Australian Hearing)