|computational models look beautiful (source)|
1. All models are bullshit.
2. Models rely on MATH, so of course they are right.
3. Some models are good and some are bad.
Obviously the first two are extremes, usually posited by people who don't know much about computational neuroscience, and I am clearly advocating the third view. The only problem is that it is hard to tell whether a model is good or bad unless you know a lot about it.
So here are some general principles that can help you separate the good from the bad in computational neuroscience.
1. The authors use the correct level of detail.
|devil's in the details (source)|
2. The authors tune and validate their model using separate data.
When you make a model, you tune it to fit data. For example, in a computational model of a neuron you want to make sure your particular composition of channels produces the right spiking pattern. However, you also want to validate it against data. So how is tuning different from validating? Tuning is when you change the parameters of the model to make it match data. Validating is when you check whether the tuned model matches data it was not fit to. Good practice in computational neuroscience is to tune your model on one set of data, but to validate it against a different set of data.
For example, if a cell does X and Y, you can tune your model to produce X, but then check that the parameters that make it do X also make it do Y. Sometimes this is not possible; maybe there is not enough experimental data out there. But if it is not possible, you should at least test the robustness of your model (see point 3).
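The tune-on-one-set, validate-on-another workflow can be sketched in a few lines. Everything here is illustrative: the one-parameter rate model, the data values, and the names are assumptions I made up for the example, not real measurements or anyone's actual method.

```python
import numpy as np

# Hypothetical one-parameter rate model: rate = gain * max(I - threshold, 0).
# The model, parameter values, and "data" below are invented for illustration.
def firing_rate(current, gain, threshold=0.2):
    return gain * np.maximum(current - threshold, 0.0)

# Tuning set: f-I measurements used ONLY to fit the gain parameter.
tune_I = np.array([0.3, 0.5, 0.7, 0.9])
tune_rate = np.array([5.0, 15.0, 25.0, 35.0])

# Tune: least-squares fit of gain on the tuning set.
x = np.maximum(tune_I - 0.2, 0.0)
gain = np.dot(x, tune_rate) / np.dot(x, x)

# Validation set: held-out measurements never touched during tuning.
val_I = np.array([1.1, 1.3])
val_rate = np.array([45.0, 55.0])
val_error = np.max(np.abs(firing_rate(val_I, gain) - val_rate))

print(round(gain, 2), round(val_error, 2))
```

The key point is structural, not the particular fit: `val_I` and `val_rate` never enter the fitting step, so a small `val_error` is evidence the tuned model generalizes rather than merely memorizes.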
3. The authors test the robustness of their model.
|A robust computational model can be delicious (source)|
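One common way to test robustness is to perturb each tuned parameter by a few percent and check that the model's behavior does not change qualitatively. A minimal sketch, again with an invented toy model and arbitrary tolerances (the ±5% perturbation and 20% output tolerance are assumptions, not a standard):

```python
# Robustness check sketch: jitter each parameter, compare output to baseline.
# The model and all numbers here are illustrative assumptions.
def spike_count(gain, threshold, current=1.0, duration=10):
    # Toy model: spikes over `duration` at a rate proportional to
    # the suprathreshold drive.
    return int(duration * gain * max(current - threshold, 0.0))

baseline = spike_count(gain=50.0, threshold=0.2)

robust = True
for scale in (0.95, 1.05):  # perturb each parameter by +/-5%
    for g, th in ((50.0 * scale, 0.2), (50.0, 0.2 * scale)):
        perturbed = spike_count(g, th)
        # Require the perturbed output to stay within 20% of baseline.
        if abs(perturbed - baseline) > 0.2 * baseline:
            robust = False

print(baseline, robust)
```

A model whose behavior collapses under such small perturbations is probably fine-tuned to a knife's edge of parameter space, which is worth reporting even when the fit itself looks good.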
These are the main things I try to pay attention to, but I am sure there are other important things to keep in mind when making models and reading about them. What are your thoughts?