Recently, I read a paper by Larry Abbott about the rising role of theoretical neuroscience in understanding brain function. I could readily identify with many of the high-level observations that Larry made.
I couldn’t help smiling when I read the quip about new entrants to the field having high confidence-to-knowledge ratios. Having entered neuroscience from an electrical engineering background, I am sure that I fell on the high side of that ratio. This is the way it should be. Why would anyone enter a field if they already knew everything about it, or if they were not confident about making a difference in it? As Larry muses in the paper, the most important thing is that the ratio gets adjusted through an increase in knowledge and not a decrease in confidence.
The very next paragraph in the paper discusses the difference between “word-models” and mathematical models. This is a very instructive paragraph, so I will quote it below.
“What has a theoretical component brought to the field of neuroscience? Neuroscience has always had models (how would it be possible to contemplate experimental results in such complex systems without a model in one’s head?), but prior to the invasion of the theorists, these were often word models. There are several advantages of expressing a model in equations rather than words. Equations force a model to be precise, complete, and self-consistent, and they allow its full implications to be worked out. It is not difficult to find word models in the conclusions sections of older neuroscience papers that sound reasonable but, when expressed as mathematical models, turn out to be inconsistent and unworkable. Mathematical formulation of a model forces it to be self-consistent and, although self-consistency is not necessarily truth, self-inconsistency is certainly falsehood.”
I have direct experience with this. Many of my models started with a description of the kind: when neuron A fires followed by neuron B, the connection between them strengthens, and the winner uses soft inhibition to suppress the neurons around it. Although such descriptions seemed sensible to begin with, I was never able to get these models to work in practice. I know that there are many scientists who have a knack for making such models work, but I never had any luck with them. I was always able to get interesting behavior out of those networks, but seldom the desired behavior. That is when I started paying attention to the equivalence between many complicated network models and simpler mathematical formulations.
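To see how a word model of that kind turns into equations, here is a minimal sketch, written from scratch for illustration. The post names no specific model, so every choice below (a softmax for "soft inhibition", an outer-product Hebbian update, row normalization to keep weights bounded) is an assumption, not the author's actual network.

```python
import numpy as np

def step(W, x, lr=0.1, beta=5.0):
    """One update of the word model, expressed as equations:
    feedforward activation, soft winner-take-all inhibition,
    Hebbian strengthening, and weight normalization.
    All specifics here are illustrative assumptions."""
    a = W @ x                                     # each output neuron's drive
    e = np.exp(beta * a)
    y = e / e.sum()                               # "soft inhibition": softmax suppresses
                                                  #   the non-winning neurons
    W = W + lr * np.outer(y, x)                   # "A fires, then B fires -> strengthen"
    W = W / np.linalg.norm(W, axis=1, keepdims=True)  # keep each row bounded
    return W, y

# Hypothetical sizes: 8 inputs, 4 output neurons.
rng = np.random.default_rng(0)
W = rng.uniform(size=(4, 8))
for _ in range(100):
    x = rng.uniform(size=8)
    W, y = step(W, x)
```

Writing it this way makes the hidden commitments explicit: how sharp the inhibition is (`beta`), what keeps the weights from growing without bound, and what quantity the update actually moves toward. None of that is pinned down by the word model alone, which is precisely the point of the quoted paragraph.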
For example, I had once built a temporal learning network model with learning rules that I thought were very ingenious. Later, we understood that the model was equivalent to a special case of a Hidden Markov model. Although the special case was interesting in itself, our understanding improved greatly and the learning rules became much simpler once we realized the mapping between the network model and this mathematical model. This example taught me that it is not enough to just specify the learning rules in mathematical form; it is important to seek to understand, in mathematical form, the actual computation that is being done.
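The post does not describe the network or which HMM special case it mapped onto, so nothing below is the author's model. But the payoff of such a mapping is that the computation becomes a short, standard recursion. As an illustration, here is the forward algorithm, which computes the probability of an observation sequence under an HMM in one pass:

```python
import numpy as np

def forward(A, B, pi, obs):
    """Standard HMM forward recursion (illustrative, not the post's model).
    A[i, j]: transition probability from state i to state j.
    B[i, k]: probability of emitting symbol k from state i.
    pi[i]:   initial state distribution.
    Returns p(obs) summed over all hidden state paths."""
    alpha = pi * B[:, obs[0]]            # initialize with the first observation
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # propagate, then weight by emission
    return alpha.sum()

# A tiny two-state example with made-up numbers.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.6, 0.4])
p = forward(A, B, pi, [0, 1, 1, 0])
```

Once the network is recognized as an instance of this formalism, its learning rules inherit decades of established machinery instead of needing bespoke justification, which is presumably why the rules "became much simpler".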