On Alignment Between Human and Neural Network Visual Representations
Simon Kornblith (Senior Research Scientist, Google Brain, Toronto)
Online
Hosted by the Generalisation in Mind & Machine research group
Full details can be found on the Mind and Machine website: https://mindandmachine.blogs.bristol.ac.uk/seminars/
Abstract: Both brains and artificial neural networks learn layers of representations from massive amounts of data. To what extent are these similarities sufficient to lead them to converge on similar representations? Do neural networks that achieve higher accuracy on machine learning benchmarks also learn more human-like representation spaces? In this talk, I’ll first discuss the results of a large-scale investigation of how different factors affect alignment between representations from computer vision models and human semantic similarity judgments. This investigation reveals that model architecture and scale have essentially no effect on alignment with human behavioural responses, whereas the training dataset and objective function have a much larger impact. In the second part of the talk, I’ll speculate on why brains and artificial neural networks might learn similar semantic spaces despite relying on different image features.
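For readers unfamiliar with how alignment between model representations and human semantic similarity judgments is typically quantified, the sketch below illustrates one common approach in the spirit of representational similarity analysis: compute pairwise similarities between a model's image embeddings and correlate them with a matrix of human similarity ratings for the same images. This is an illustrative assumption for background only, not necessarily the measure used in the talk; the function name, array shapes, and data are placeholders.

    import numpy as np
    from scipy.stats import spearmanr

    def model_human_alignment(embeddings, human_similarities):
        # embeddings: (n_images, d) array of model representations
        # human_similarities: (n_images, n_images) matrix of averaged
        # human semantic similarity ratings for the same images

        # Cosine similarity between every pair of model embeddings
        normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        model_similarities = normed @ normed.T

        # Compare only the upper triangles (distinct pairs, no self-similarity)
        iu = np.triu_indices(len(embeddings), k=1)
        rho, _ = spearmanr(model_similarities[iu], human_similarities[iu])
        return rho

    # Toy usage with random placeholder data
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(50, 512))      # e.g. penultimate-layer features
    human = rng.uniform(size=(50, 50))
    human = (human + human.T) / 2         # symmetrise the rating matrix
    print(model_human_alignment(emb, human))

Under this kind of measure, comparing models that differ in architecture, scale, training dataset, or objective function makes it possible to ask which of those factors actually moves the alignment score, which is the kind of question the first part of the talk addresses.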
Biography: Simon Kornblith is a Senior Research Scientist at Google Brain in Toronto. His primary research focus is understanding and improving representation learning with neural networks. Before joining Google, he received his PhD in Brain and Cognitive Sciences at MIT, where he studied the neural basis of multiple-item working memory with Earl Miller. He was also one of the original developers of Zotero and a developer of the Julia programming language.
Join via Zoom: https://bristol-ac-uk.zoom.us/j/97898231529?pwd=cXQ0eWI5VW1pWGJzOW9HNldYZlNTQT09, Meeting ID: 978 9823 1529 | Passcode: 172816
Contact information
Contact Abla Hatherell with any enquiries.