
Why do some neurons respond so selectively to words, objects and faces?

26 February 2014

Jeff Bowers, Ivan Vankov, Markus Damian, and Colin Davis discuss why some neurons respond so selectively to words, objects and faces

In a new paper published in Psychological Review, Jeff Bowers, Ivan Vankov, Markus Damian, and Colin Davis discuss why some neurons respond so selectively to words, objects and faces. They trained an artificial neural network to remember lists of words in a short-term memory task. When the model succeeded at the task, it had learned localist representations, or "grandmother cells". The authors argue that highly selective neural representations are well suited to co-activating multiple items at the same time, and that this may help explain why selective neurons are observed in cortex.
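The difficulty with co-activating items, often called the superposition catastrophe, can be illustrated with a toy example. The sketch below (plain Python, using made-up four-unit "word" codes; it is not the authors' simulation) shows that when distributed patterns are co-activated by summing them, different sets of words can produce the same blend, whereas localist one-hot codes keep every combination distinct.

```python
# Toy illustration of the superposition catastrophe (hypothetical codes,
# not the model from the paper). With distributed codes, the sum of two
# patterns can be ambiguous about which items are active; with localist
# ("grandmother cell") codes, every combination stays unambiguous.

import itertools

# Hypothetical 4-unit distributed codes for four "words"
distributed = {
    "cat": (1, 1, 0, 0),
    "dog": (0, 0, 1, 1),
    "cap": (1, 0, 1, 0),
    "dot": (0, 1, 0, 1),
}

# Localist codes: one dedicated unit per word
localist = {
    "cat": (1, 0, 0, 0),
    "dog": (0, 1, 0, 0),
    "cap": (0, 0, 1, 0),
    "dot": (0, 0, 0, 1),
}

def superpose(codes, items):
    """Co-activate several items by summing their patterns unit by unit."""
    return tuple(sum(units) for units in zip(*(codes[i] for i in items)))

def ambiguous_pairs(codes):
    """Return pairs of word-pairs whose superpositions are identical."""
    pairs = list(itertools.combinations(codes, 2))
    blends = {p: superpose(codes, p) for p in pairs}
    return [(a, b) for a, b in itertools.combinations(pairs, 2)
            if blends[a] == blends[b]]

# Distributed codes: 'cat'+'dog' and 'cap'+'dot' both sum to (1, 1, 1, 1),
# so the blend cannot tell us which two words were active.
print(ambiguous_pairs(distributed))

# Localist codes: every pair of words produces a unique blend, so the
# output is an empty list.
print(ambiguous_pairs(localist))
```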

Bowers, J. S., Vankov, I. I., Damian, M. F., & Davis, C. J. (2014). Neural networks learn highly selective representations in order to overcome the superposition catastrophe. Psychological Review.
