The best paper I read this week was a question: Could a neuroscientist understand a microprocessor?
The quickest way to understand it is by asking another question: why would you expect one to?
There’s a big effort underway to map how all the neurons in an animal connect to each other, record the electrical impulses those neurons fire while the animal is doing things, and then Just Add Math to figure out how the whole nervous system works.
If you could Just Add Math to figure out what a nervous system is doing, you could use the same math to figure out what a microprocessor is doing. The math would have to work for both; a computer is a computer because of what it does, not how it’s built.1
So the authors, Eric Jonas and Konrad Paul Kording, decided to try exactly this. They found plans for a simple processor, the MOS 6502, detailed enough to simulate at the transistor level, built a fast simulator of it, and ran real 6502 programs on it. Since this was the processor used in Atari and Nintendo game systems, they ran classic video games.
Then they tried to Just Add Math from neuroscience, applying the field’s standard analyses to the simulated chip, to see if the results would let them understand what was happening. They weren’t very helpful; you wouldn’t think of yourself as understanding the video game or the processor after you’d read them.
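To get a feel for what that kind of analysis looks like, here’s a toy sketch (my own illustration, not the paper’s code) of a lesion study, one of the classic neuroscience methods: knock out each element of a circuit, one at a time, and record which observable behaviors break. The circuit here is a one-bit full adder built from named gates; the gate names and the whole setup are invented for the example.

```python
# Toy lesion study on a tiny circuit (illustrative; not the paper's code).
# We simulate a one-bit full adder built from named gates, "lesion" each
# gate by forcing its output to 0, and record which observable behaviors
# (the sum bit, the carry bit) break as a result.

from itertools import product

def full_adder(a, b, cin, lesioned=None):
    """Return (sum, carry). `lesioned` names one gate forced to output 0."""
    def gate(name, value):
        return 0 if name == lesioned else value
    x1 = gate("xor1", a ^ b)        # first XOR stage
    s  = gate("xor2", x1 ^ cin)     # sum bit
    a1 = gate("and1", a & b)        # carry from a,b
    a2 = gate("and2", x1 & cin)     # carry from partial sum and cin
    c  = gate("or1",  a1 | a2)      # carry bit
    return s, c

GATES = ["xor1", "xor2", "and1", "and2", "or1"]
inputs = list(product([0, 1], repeat=3))
healthy = {i: full_adder(*i) for i in inputs}

for g in GATES:
    broken = set()
    for i in inputs:
        s, c = full_adder(*i, lesioned=g)
        if s != healthy[i][0]: broken.add("sum")
        if c != healthy[i][1]: broken.add("carry")
    print(g, sorted(broken))
```

The punchline is in the output: three different gates all look like “carry gates” under lesioning, yet none of them alone computes the carry. Even on five gates the method names correlates, not mechanisms; at the scale of a real processor (or a brain), it tells you correspondingly less.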
And there doesn’t seem to be a simple way to adapt the math to reach this understanding, either. If you think about what it takes to understand what a computer or program is going to do, that makes sense:
A nervous system is both program and machine, smushed together, and it’s been aggressively optimized by Nature over a very long time. It has much more in common with a computer/program that will be hard to understand thoroughly than with one that will be easy.
Which is the insight that makes this paper very cool. I wouldn’t have thought to put “figuring out what a nervous system is doing” and “figuring out what a computer is computing” on the same scale, but you learn a lot when you do.