I really dislike posting about a paper that I don’t fully understand, but I guess I’ll have to, as this one seems pretty important. The best I can do is summarize it briefly and give a link so that those of you above my pay grade can look at the messy details. The paper, with only two authors, Le Chang and Doris Tsao of Caltech, was published in Cell (reference and free link at bottom), and involved functional magnetic resonance imaging (fMRI) and electrodes that probed individual neurons in the brains of two rhesus macaques (Macaca mulatta). (It’s amazing how far technology has advanced!)
The monkeys were presented with images originally derived from 200 human faces, with data taken on a set of 50 landmark dimensions involving both shape and features like eye color and skin tone. Chang and Tsao then constructed composite faces based on various combinations of these dimensions. After identifying a small patch of the macaque brain involved in face recognition, they probed the firing of individual neurons in this region while the macaques viewed the faces. By statistically correlating each neuron’s firing with the measurements used to construct the facial images, they worked out an algorithm that best translated firing patterns into the multidimensionally constructed face.
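To make the idea concrete: the paper’s central finding is that a face cell’s firing rate behaves roughly like a linear projection of the 50-dimensional face-parameter vector onto that cell’s preferred “axis.” The sketch below is not the authors’ actual pipeline; it is a minimal illustration, with synthetic numbers, of how one would recover such an axis from firing rates by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

n_faces, n_dims = 200, 50  # 200 faces, each described by 50 shape/appearance parameters
faces = rng.standard_normal((n_faces, n_dims))  # synthetic face-parameter vectors

# Axis model (as in the paper's framing): a cell's firing rate is roughly a
# linear projection of the face vector onto the cell's preferred axis, plus noise.
preferred_axis = rng.standard_normal(n_dims)
rates = faces @ preferred_axis + 0.1 * rng.standard_normal(n_faces)

# Recover the cell's preferred axis by least-squares regression of its
# firing rate against the face parameters.
axis_hat, *_ = np.linalg.lstsq(faces, rates, rcond=None)

# The recovered axis should closely match the true one.
print(np.corrcoef(axis_hat, preferred_axis)[0, 1])
```

The correlation printed at the end should be very close to 1, since the only obstacle to recovery here is the small additive noise.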
Once they had done that, they could “reverse engineer” the firing patterns alone into facial images; that is, they could test their algorithm by using measurements of firing alone, together with their model, to predict the appearance of new faces the macaques were seeing. The remarkable thing is that they could do this with amazing accuracy while monitoring only 205 neurons!
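The reverse-engineering step can be sketched the same way: fit a linear decoder that maps the firing rates of a recorded population back to the 50 face parameters, then apply it to the responses evoked by a face the decoder has never seen. Again, everything below is a toy stand-in (synthetic cells, made-up noise level), not the study’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

n_dims, n_cells = 50, 205  # 50 face dimensions, 205 recorded cells (as in the study)

# Synthetic stand-in for the recorded population: each cell projects the
# face-parameter vector onto its own preferred axis, plus noise.
axes = rng.standard_normal((n_cells, n_dims))

def population_response(face):
    """Firing rates of all cells to one face (linear projection + noise)."""
    return axes @ face + 0.1 * rng.standard_normal(n_cells)

# Fit a linear decoder on "training" faces: firing rates -> face parameters.
train_faces = rng.standard_normal((500, n_dims))
train_rates = np.stack([population_response(f) for f in train_faces])
decoder, *_ = np.linalg.lstsq(train_rates, train_faces, rcond=None)

# Decode a *new* face from its evoked firing rates alone.
new_face = rng.standard_normal(n_dims)
decoded = population_response(new_face) @ decoder
print(np.corrcoef(decoded, new_face)[0, 1])
```

With 205 cells and only modest noise, the decoded parameter vector correlates almost perfectly with the true one, which is the toy analogue of the paper’s near-perfect face reconstructions.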
Below is a set of reverse-engineered faces (“predicted face”) derived from neuronal firing alone, compared to the actual image the macaques saw. You can see the remarkable fidelity. What this means is that, as far as we know, the investigators cracked the code of how a facial image is translated into patterns of neuronal firing in the brain.
What does this mean? Well, it means we’re a lot closer to understanding how the brain translates images into neuronal firing, though of course it tells us little about how that firing is reprocessed by the brain itself into an image, for that’s a matter of “qualia”, or consciousness. But it does unravel the complicated nexus by which a whole group of cells works together to create facial images, which are of course crucial for primates’ recognition of individuals.
One site reporting on this, ZME Science, notes two practical implications:
The researchers can translate from neuron activity in a brain to a human face; from brain activity to visual information. In addition to cracking the code of a brain in a living animal, this study also discovered how brains recognize faces. Before, it was believed that each face cell codes one specific face. However, now we know that each cell represents one piece of visual information that combines with all of the others to form a face. Perhaps human brains also have their own code that works in a similar way. In addition to crime applications, the research could help machine learning for recognizing faces, such as photo recognition on Facebook.
I’m not quite sure what the “crime applications” are unless they ask a victim to envision a perpetrator and use the neuronal firing (instead of a police artist) to get an image of that perp. But that seems impractical. The measurements of neuronal firing might be translatable into computer code, though this too is above my pay grade. Right now I’m just amazed that this was done, and how accurately the investigators could reverse engineer firing into images using so few neurons. I’ll let others speculate about the practical applications.
Chang, L., and D. Y. Tsao. 2017. The code for facial identity in the primate brain. Cell 169:1013-1028.e14.