Gilles Laurent makes an excellent point in 23 Problems in Systems Neuroscience:
Integrative neuroscience is an odd biological science. Whereas most biologists would now agree that living organisms share a common evolutionary heritage and that, as a consequence, much can be learned about complex systems by studying simpler ones, systems neuroscientists seem generally quite resistant to this empirical approach when it is applied to brain function. Of course, no one now disputes the similarities between squid and macaque action potentials or between chemical synaptic mechanisms in flies and rats. In fact, much of what we know about the molecular biology of transmitter release comes from work carried out in yeast, which obviously possesses neither neurons nor brain. When it comes to computation, integrative principles, or "cognitive" issues such as perception, however, most neuroscientists act as if King Cortex appeared one bright morning out of nowhere, leaving in the mud a zoo of robotic critters, prisoners of their flawed designs and obviously incapable of perception, feeling, pain, sleep, or emotions, to name but a few of their deficiencies. How nineteenth century!
I do not dispute that large, complex systems such as mammalian cerebral cortices have their own idiosyncrasies, both worthy of intensive study and critical for mental life. [...] Yet considering our obsession with things cortical, can we say that we have, in over forty years, figured out how the visual cortex shapes the responses of simple and complex cells? Do we really understand the cerebellum? Do we even know what a memory is? Do we understand the simplest forms and mechanisms of pattern recognition, classification, or generalization? I believe that our hurried drive to tackle these immensely complicated problems using the most complex neuronal systems that evolution produced--have you ever looked down a microscope at a small section of Golgi-stained cerebral cortex?--makes little sense.
Take that, you vertebrocentrists!
In defense of people who want to study the more complicated systems, Ed Callaway once said to me that there are two "model" approaches you can take. One is to take a relatively simple system (e.g., the leech) and study the hell out of it. The other is to take a relatively simple part of a complex system (e.g., the retina, or a single cortical column) and study that. Both approaches have their place.
8 comments:
The neuroscience community has been given the task, "Prove that the mind and the brain are the same thing."
I think this puts them in a rush, presumes many answers we don't actually have, and may hinder the most basic research you refer to.
I'm thinking that it would be better all around if we looked to see what the brain is and what it does and leave the larger philosophical questions until more facts are known.
anon said:
The neuroscience community has been given the task, "Prove that the mind and the brain are the same thing."
I wouldn't put it that way. That tends to be how people outside of neuroscience look at it, but primarily we just want to understand how brains work to control behavior. Of course we also want to draw correlations between mental states and neuronal states (e.g., you have a blind spot in your visual field because of a spot in the retina without any photoreceptors), but we tend to be fairly cautious in making any claims like 'the mind and the brain are the same.' Simple correlations are enough for now, and ultimately the philosophical questions should become more clear.
I don't see this as a zero-sum scientific game. Why shouldn't the cognitive neuroscientist investigate and theorize about the human brain/cortex in relation to cognition and phenomenal experience, while the study of simpler organisms like Aplysia, the leech, and the fruit fly satisfies the curiosity of the neuroscientist so inclined?
I think the notion that theoretical models of large, complex systems must necessarily be less constrained than models of simpler systems has not been properly thought through. In my opinion, the level of constraint of *any* model must be determined (post hoc) by the predictive power of the model with respect to the phenomena predicted. For example, the retinoid model, though highly complex, has good predictive power when it comes to the phenomenal experiences of the moon illusion, the seeing-more-than-is-there (SMTT) phenomenon, and the induction of motion aftereffects by orthogonally moving stimuli, as well as many other observations.
Arnold: I was more focused on models of the brain, which at present are quite underconstrained (e.g., we don't even have a complete serial-EM reconstruction of a single cortical column yet). However, models that make interesting and unexpected psychological predictions beyond the phenomena they were originally developed to capture (i.e., predictions they weren't built specifically to make) should clearly be taken seriously. There aren't many such models in psychology. There aren't many in systems neuroscience either. There are lots for lower-level things, such as the Hodgkin-Huxley model: generally things for which we have useful model systems.
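To make "well constrained" concrete, here is a minimal sketch of the kind of low-level model I mean: the classic Hodgkin-Huxley squid-axon equations with the standard textbook parameters, integrated by forward Euler. This is only an illustration, not anyone's research code, and the spike counter is a crude zero-crossing check.

```python
# Minimal Hodgkin-Huxley sketch: standard squid-axon parameters,
# forward-Euler integration. Illustrative only.
import numpy as np

C_m = 1.0                              # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # peak conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials (mV)

# Standard gating-rate functions (1/ms); the removable singularities
# (e.g., at V = -40 mV exactly) are ignored in this sketch.
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0                     # time step and duration (ms)
V, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting-state initial conditions
I_ext = 10.0                           # injected current (uA/cm^2)

spikes = 0
for _ in range(int(T / dt)):
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    V_new = V + dt * (I_ext - I_Na - I_K - I_L) / C_m
    if V < 0.0 <= V_new:               # crude spike detector: upward zero crossing
        spikes += 1
    V = V_new

print(f"{spikes} spikes in {T:.0f} ms at {I_ext} uA/cm^2")
```

The point is that every number above is pinned down by experiments in an accessible preparation; that is exactly what most systems-level models lack.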
I think your retinoid model is interesting, and I should probably revisit it as it's been a few years.
Eric,
If you are interested, preprints of two of my recent publications on the retinoid system as the neuronal substrate of consciousness and the self can be seen in pdf form here:
http://eprints.assc.caltech.edu/355/
http://eprints.assc.caltech.edu/468/
I also think it is important to consider what types of questions one is interested in answering. If it's the most basic principles of molecular neuroscience, then yes, very reductionist preparations are in order. However, as a systems neuroscientist, I am interested in emergent properties of the brain, that is, in characterizing phenomena at a specific level of complexity or abstraction. I believe that this is not only valuable but essential in the big picture of trying to 'solve the brain': with all of the low-level mechanisms worked out, will that actually give us true insight into how high-level cognition arises? Probably not, unless we have some way to integrate low-level mechanisms into the context of larger-scale dynamics, or emergent properties. Thus, it's important not only to get at fundamental reductionist mechanisms, but also critical to tackle high-level mechanisms that are directly involved in the most interesting aspects of brain function, such as memory or cognition.
For example, the circuit diagram of the C. elegans nervous system has been worked out in excruciating detail. However, we still cannot model the dynamics of this system (e.g., replicate simple behaviors using our extensive knowledge of the connectivity).
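A toy sketch of this point (emphatically not a model of C. elegans; the network size and weight scales are made up for illustration): the same binary wiring diagram is compatible with wildly different dynamics, because a connectome does not specify synaptic weights or signs. A simple rate network makes this concrete:

```python
# Toy illustration: one wiring diagram, two dynamical regimes.
# Not a C. elegans model; all numbers here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                # hypothetical network size
adjacency = rng.random((n, n)) < 0.1  # fixed "connectome": who contacts whom

def simulate(weights, steps=200):
    """Iterate a simple rate model: r_{t+1} = tanh(W @ r_t)."""
    r = 0.1 * rng.standard_normal(n)  # small random initial activity
    for _ in range(steps):
        r = np.tanh(weights @ r)
    return r

# Two weight assignments consistent with the SAME wiring diagram.
w_weak = adjacency * rng.normal(0.0, 0.1, (n, n))    # activity typically dies out
w_strong = adjacency * rng.normal(0.0, 3.0, (n, n))  # typically sustained, irregular

print("mean |activity|, weak weights:  ", np.abs(simulate(w_weak)).mean())
print("mean |activity|, strong weights:", np.abs(simulate(w_strong)).mean())
```

Knowing the adjacency matrix narrows the space of models but comes nowhere near pinning down the dynamics, which is one plausible reading of why the worm's wiring diagram has not yet yielded its behavior.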
Even if 'emergence' doesn't really exist, given how little we understand at this point, it might as well. That is, the distinction between rigorous descriptions of rudimentary components and high-level descriptions of how they interact is still very useful at this point, because the latter may yield insights into how the former actually interact, insights that would not be found by looking exclusively at reductionist preparations.
Azriel: I think what you say is either consistent with his quotes or orthogonal to them.
Everyone would agree that there are network-level phenomena that arise via the interaction of simpler mechanisms (e.g., multineuronal oscillations). Laurent is saying that it is better to study such higher-level phenomena in the simplest system possible. That is different from saying that we should just study low-level mechanisms.
Rather, even in the honeybee, leech, or C elegans we can study the higher-level properties you talk about. And we should. Where are we likely to have success first, in the leech or in the rat?
What he rails against is the assumption that the higher-level features discovered in model organisms will be different from those in animals with a cortex.
Ultimately of course it is an empirical question how well the emergent computational features discovered in model organisms will extend gracefully to other systems.