Plenary speakers:


Morten Christiansen (Cornell University, Ithaca): Language Acquisition as Skill Learning    

Abstract: Language acquisition is often viewed as a problem of inference, in which the child—like a “mini-linguist”— tries to piece together the abstract grammar of her native language from incomplete and noisy input. This “language-as-knowledge” viewpoint contrasts with a more recent alternative, in which the challenge of language acquisition is practical, not theoretical: by practicing across myriads of social interactions, the child gradually learns to understand and produce language. In this talk, I explore some key implications of this “language-as-skill” framework, focusing on how constraints arising from the need to process language in the here-and-now shape acquisition. Because experience with language is fundamental to becoming a skilled language user, this perspective predicts substantial differences across individual language users as well as across languages. I discuss evidence from behavioral studies and computational modeling, highlighting experience-driven variation across individuals and languages. I conclude that language acquisition may be best construed as skill learning, on a par with learning other complex human skills such as riding a bicycle or playing a musical instrument. By reconnecting language to psychological mechanisms of learning and memory, this perspective moreover offers the possibility for a reintegration of the language sciences.





Judith Holler (Max Planck Institute, Nijmegen): Multimodal signalling for coordination in conversational interaction    

Abstract: The natural home of human language is face-to-face dialogue. In such an environment language is multimodal [1–3], meaning we use speech as well as a host of visual articulators conveying meaningful signals. Coordination is at the very heart of human communication, and in this talk, I will present a series of studies showing that visual bodily signals play an important role in coordination for conversation.

For conversational interaction to be successful, interlocutors must coordinate on the level of minds – that is, align in terms of the mental representations that can be derived from the messages the interlocutors encode. To demonstrate that bodily signals critically influence this process, it is crucial to show (i) that visual bodily signals do in fact carry meaningful information; (ii) that visual bodily signals are linked to the speaker’s communicative intent, thus creating the very possibility that they can form part of the message the speaker aims to get across, and that the signals are recognized as such; and (iii) that the information visual bodily signals convey does indeed facilitate mutual understanding, such as through the processes of repair (in the case of problems in understanding [4]) and grounding (signalling that what has been communicated has been understood [5]). I will demonstrate that these three aspects indeed apply to the visual signals we use to accompany our speech.

Furthermore, conversational interaction requires coordination in terms of taking turns at talk. Here, it is crucial that next speakers produce semantically and pragmatically appropriate next utterances, and that they do so on time, requiring a certain element of prediction due to the tight timing of conversation [6]. In the final part of my talk, I will present findings showing that visual bodily signals significantly influence coordination also when taking conversational turns.    

In sum, the findings add to our understanding of human language and communication by showing that being equipped with an intrinsic social orientation, combined with a body and cognitive abilities that maximize our communicative abilities, helps us to engage in the highly coordinated activity of face-to-face conversation and to achieve mutual understanding of the things we intend to communicate about. Trying to understand the role of both words and the body in dialogue may also allow us to go further in discovering why the human communication system has evolved as the multimodal system that it is [7].



Sean Roberts (University of Bristol, UK): Causal graphs will save us all from Big Data

Abstract: It's a very exciting time to be a linguist. There's more data than ever and more ways of analysing it. The next generation of Big Data methods even promise to create theories for us. However, with big data comes big trouble. First, there's the danger of drowning in data. With so many possible connections to test, how do we know which to investigate or control for? The second kind of trouble is specific to large-scale cross-cultural studies. Spurious correlations abound in this data due to the historical relations between languages (Roberts & Winters, 2013). This problem is under-appreciated in some fields and can lead to some questionable claims such as the language you speak affecting your bank balance, business decisions, or how likely you are to go to secondary school (Chen, 2013; Kim, 2017; Feldmann, 2019). How do we know which to believe?

I'll argue for three solutions: The first is interdisciplinary collaboration. Working together with experts allows access to the best data and methods (Roberts, Winters & Chen, 2015). Linguists have a responsibility to share their expertise with other researchers. The second solution is to take a robust approach to explanation by using multiple empirical methods (Roberts, 2018). The third solution is to use causal graphs (Pearl & Mackenzie, 2018). They can help us be more explicit about our theories and more rigorous in our quantitative analyses. I'll present some work on a tool for using causal graphs in research: the Causal Hypotheses in Evolutionary Linguistics Database (CHIELD). It will solve your problems and change your life. Maybe.



Sophie Scott (UCL, London): From sound to social meaning: the neural basis of speech and voice processing

Abstract: In human auditory perceptual systems, as in visual networks, there are clear differences between recognition and sensori-motor processes, in terms of anatomy and function. In this talk I will explore the possible computational differences that underlie these distinctions, and show how they make different contributions to aspects of speech perception and production. I will address how these interact with hemispheric asymmetries, and also explore the extent to which this approach might explain more domain general aspects of auditory processing. Finally, I show how social context can modulate the ways that these different neural systems are recruited in communicative interactions.



Alexander Huth (University of Texas, Austin): Mapping representations of language semantics in human cortex

Abstract: How does the human brain process and represent the meaning of language? We investigate this question by building computational models of language processing and then using those models to predict functional magnetic resonance imaging (fMRI) responses to natural language stimuli. The technique we use, voxel-wise encoding models, provides a sensitive method for probing richly detailed cortical representations. This method also allows us to take advantage of natural stimuli, which elicit stronger, more reliable, and more varied brain responses than tightly controlled experimental stimuli. In this talk I will discuss how we have used these methods to study how the human brain represents the meaning of language, and how those representations are linked to visual representations. The results suggest that the language and visual systems in the human brain might form a single, contiguous map over which meaning is represented.



Ewa Dąbrowska (University of Birmingham, UK): "Functional" and "decorative" grammar in adult L2 acquisition

Abstract: There is a large literature showing that adult L2 learners, in contrast to children, often fail to acquire native-like competence in the second language. Because of such age effects, adult L2 learning is often viewed as “fundamentally different” from child acquisition, and defective in some way. Adult learners’ failure to develop native-like grammatical competence is often attributed to maturational changes in the brain, such as lack of access to UG (e.g. Bley-Vroman 1989) or less effective procedural learning (Ullman 2015), although other researchers argue that the differences are better explained by appealing to first language interference, the quantity and/or quality of the input, and motivation (see Muñoz and Singleton 2011).

However, adult L2 learners do not always do worse than child learners. Studies comparing child and adult learners who have received similar amounts of input suggest that adults learn second languages faster, at least in the beginning (Huang 2015, Krashen, Long and Scarcella 1979, Snow and Hoefnagel-Höhle 1978). There is also evidence that many reach high levels of attainment in some aspects of language. For example, Dąbrowska (2018) found considerable overlap between L1 and L2 speakers in performance on a task tapping morphosyntactic knowledge, with 75% of the adult learners scoring within the native speaker range. Crucially, this study used a picture selection task which tapped mastery of “functional” grammar (i.e. grammatical contrasts which correspond to a clear difference in meaning, such as the assignment of agent and patient roles in sentences with noncanonical word order and quantifier scope). In contrast, most earlier ultimate attainment studies (e.g. Johnson and Newport 1989, DeKeyser 2000, DeKeyser et al. 2010, Flege et al. 1999) used a grammaticality judgment task in which participants had to assess sentences such as (1)–(3) below (all taken from Johnson and Newport 1989). This task tests aspects of grammar which are “decorative” (agreement, tense marking, determiners) in the sense that their contribution to the meaning conveyed by the sentence is largely redundant.

  (1) Last night the old lady die(d) in her sleep.
  (2) John’s dog always wait(s) for him at the corner.
  (3) Tom is reading (a) book in the bathtub.

In this talk, I report the results of a large-scale study which directly compared native speakers, adult immersion learners and classroom foreign language learners on tasks assessing both "decorative" and "functional" grammar. As predicted, there was much more overlap between groups on the functional grammar task than on the other two tasks. I conclude by discussing possible reasons for these differences.


Aarhus University, Faculty of Arts, School of Communication and Culture