Artificial Neural Networks (ANNs) equipped with general learning algorithms, but no linguistic knowledge, can learn to associate words with objects in naturalistic scenes when trained on head-mounted video recordings of a single child’s first-person experience. Similarly, ANNs can master syntax when trained on an amount of linguistic data comparable to what a child experiences in a few years. These findings have been taken to challenge the view that innate linguistic priors play a role in child language acquisition. Here I show that the training environments and learning resources of ANNs and humans are poorly matched, and accordingly, conclusions regarding human language priors are not warranted. I also review three sets of findings that strongly suggest ANNs lack human inductive biases: (1) children (but not ANNs) create new well-structured languages when exposed only to degraded ones; (2) ANNs (but not humans) learn impossible and possible human languages in similar ways and with similar facility; and (3) humans (but not ANNs) show a critical period for language learning. In this last case, adding an “innate” inductive prior to the ANN results in better ANN-human alignment. As with claims of ANN-human alignment in the domain of vision, conclusions regarding ANN-human alignment in the domain of language are characterized by a lack of severe testing of hypotheses.