Wednesday, December 17, 2008
Consciousness (3): Mr B's first look at consciousness
Recall that Mr B takes a naturalistic, empirical approach to things. His first order of business is to determine what variables are correlated with conscious states (just as he did with action potential generation). We'll focus mostly on conscious perception of external events (e.g., seeing a sunset), so as to avoid the complexity of things like consciousness of one's thoughts (e.g., the experience of thinking about a chess move).
Mr B does not focus narrowly on experiments that tell us about the neural basis of consciousness; he also attends to experiments that reveal important details of the structure of consciousness itself and its relationship to external stimuli (i.e., psychophysics). The more empirical constraints, the better. It is possible to learn a great deal about respiration without knowing anything about the respiratory system: you can learn how the inputs (composition of air breathed) and outputs (exhaled air) relate to one another, and to other variables such as the blood pressure and breathing rate of the organism. Hence, learning about a biological mechanism doesn't mean focusing on that mechanism exclusively: much can be learned by studying its products, how it is perturbed by inputs, and so on.
The brain is necessary for conscious experience
At the grossest and most obvious level, Mr B notes that the only organ necessary for consciousness is the brain. Contrary to the Greeks' heart-based theory of mind, he knows people have literally lived without hearts, perfectly conscious, for months (article here). You can lose kidneys, arms, your stomach, etc, and while you may not be healthy or happy, you will still be conscious. Conversely, if you inactivate a brain with an anesthetic, the loss of consciousness will be quite dramatic.
The brain is sufficient for conscious experience
Take a powerful hallucinogen and entirely new experiences are evoked endogenously. Something similar seems to happen while dreaming: a world is experienced that is largely independent of present sensory inputs. Amputees often feel that the removed limb is still present, moving around, making gestures, in the well-known 'phantom limb' phenomenon (this has been shown not to be due to irritation of the nerves at the stump of the severed limb).
In all such cases, we experience a world that is not actually there. So the brain in effect constructs the experience. Some might like to say that the brain builds a 'representation' or 'simulation' or 'virtual reality model' of the world, and this is what we experience. Mr B may slip into such (often metaphorical) language, but for now he just means that experience is a neural construction, which is a more neutral way to put things (though note by saying it is a 'construction' he doesn't mean to imply it is a "mere construct" with no validity).
Note this hypothesis already generalizes beyond the data: Mr B is assuming that perceptual experience during normal waking periods is generated by similar mechanisms to those used during sleep, hallucinations, and phantom limbs. Mr B realizes this could be a mistake, but as a provisional hypothesis, it seems reasonable, especially given the existence of illusions generated even in healthy brains (we will have more to say about illusions later).
Implicit in the hypothesis that experience is a neural construct is the claim that neural processes of a certain sort (to be determined) are not just necessary, but sufficient for experience. Given his general biological approach, it seems a conclusion almost forced upon Mr B.
In the next post, we'll continue to follow Mr B in his quest to understand consciousness. He'll see just how complicated a problem he has taken on.
Tuesday, December 09, 2008
Consciousness (2): Introducing the garden-variety biologist
Let's assume Mr B doesn't understand how neurons fire action potentials. In the rest of this post we'll examine his general approach to the problem. In a future post we'll consider how he approaches the problem of consciousness.
He believes that neuronal excitability is likely complex, but that it will ultimately be explained in terms of individually innocuous mechanisms, a complicated orchestra of proteins, lipids, carbohydrates, and other ingredients standardly found in cells. The mechanisms should all conform to physical principles, even if many of them cannot strictly be derived from the laws of physics. For instance, if there are untethered chemicals in a neuron, he expects their diffusion to follow the rules laid out in physical chemistry.
Mr B takes an empirical approach to his subject matter. He is likely to sit down at the lab bench with an example of what he is studying (a model system), and poke and prod at it to see how it behaves.
For instance, to get a bead on how neurons are activated, he may prepare a single neuron in a dish and treat it with various chemicals (e.g., sodium, potassium, neurotoxins), expose it to different temperatures, different light and oxygen levels, etc., while measuring the voltage across its membrane. Such experiments will reveal how the behavior of the neuron depends on different variables in the preparation.
The experiments, guided by his best guess at how neurons work, will help him form new ideas or refine his old ideas. For instance, when he removes sodium from a neuron's bath, he finds that the neuron stops firing action potentials. This suggests to him that the action potential is caused by an influx of sodium into the neuron.
Mr B will usually write out equations to summarize what he has observed. However, he doesn't just want to describe his observations. He will attempt to come up with new experiments to test his ideas about the action potential (e.g., if his sodium-based theory is true, then increasing the concentration of sodium in the neuron's bath should result in a larger action potential). The desire to turn his ideas into predictions often involves translating his words into mathematics, which lets the concepts be expressed more clearly, makes his assumptions explicit, and provides a basis for precise predictions.
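As a toy example of what such a translation might look like (my own sketch, not anything Mr B "actually" wrote): if the spike peak approaches the sodium equilibrium potential, then the Nernst equation predicts that the peak should rise roughly logarithmically as bath sodium is increased. The concentration values below are assumed placeholders.

% Toy prediction from the Nernst equation (assumed, generic concentrations)
R = 8.314; T = 310; F = 96485;              % gas constant, temperature (K), Faraday constant
Na_in = 15;                                 % assumed intracellular [Na+] (mM)
Na_out = [50 100 145 200 300];              % bath concentrations to test (mM)
E_Na = 1000*(R*T/F)*log(Na_out./Na_in);     % sodium equilibrium potential (mV)
plot(Na_out, E_Na, 'o-')
xlabel('bath [Na^+] (mM)'); ylabel('predicted upper bound on spike peak (mV)')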
So far, Mr B is not much different than a physicist or chemist. All take an empirical approach to their subject matter, prefer mathematical to word-based models, and value empirical tests of their theories.
I've been painting Mr B as a bit myopic, focusing exclusively on how this little mechanism works. This leaves out his broader uniquely biological perspective. By focusing in on mechanisms, Mr B might be able to explain how a sperm locomotes, but that will tell him nothing about its function, about its role in the biological system in which it is embedded. This biofunctional orientation is what tends to distinguish Mr B from his colleagues in physics and chemistry.
Focusing in on our example, Mr B wants to know why neurons fire action potentials. What do action potentials contribute to the nervous system's higher-order goal of controlling behavior? Are action potentials involved in signalling from neuron to neuron? Could he be studying an epiphenomenon? It could be that the mechanisms he found in the dish are not even used in vivo. For instance, is there enough extracellular sodium in the nervous system for his sodium-based theory to work? Such questions will haunt Mr B and suggest new experiments.
Some might be tempted to insist that another facet of Mr B's approach is that he takes an evolutionary perspective on what he is studying. This is certainly possible, but not essential. Mr B realizes that brains are organs that evolved to help organisms navigate the world. But this doesn't necessarily help him understand how individual neurons work, or even their functional role in an intact animal. Evolution will indirectly color his perspective on the system he is studying, and certainly he has no patience with creationists who would say that the mechanism of action potential generation could not have evolved without divine intervention. Mr B realizes he doesn't even understand the mechanisms involved yet, and that is an important prerequisite to constructing a phylogenetic history.
Before taking leave of Mr B, we should note that he believes his ignorance of neural excitability is a relatively boring psychological fact about himself, not a deep fact with profound metaphysical implications (this is a point Patricia Churchland likes to make about consciousness, but right now we're leaving aside consciousness). He knows, as he approaches the problem of neuronal excitability, that he might be like the biologists in 1900 trying to understand the mechanisms of inheritance, and that it might be a long time before he succeeds. A novel conceptual and empirical infrastructure might be required before the problem can be solved, or even posed in a way that yields results. His ignorance spurs his curiosity and creativity; it doesn't make him think there is something fundamentally wrong with biology. He stubbornly resists creativity sinks such as claims that neuronal excitability is forever beyond our understanding, or that supernatural beings are required to explain the strange animal electricity observed in nervous systems.
In the next post Mr B will examine the neural basis of consciousness at an abstract level, considering what types of processes in the brain are most likely to be conscious. He will also see how daunting his task is.
Wednesday, December 03, 2008
Creationists take aim at neuroscience (1): defining their target
Since I've thought about this topic way too much, I thought I'd throw my crap into the ring too. I'll discuss the arguments of the neodualists indirectly at first, dividing my discussion of consciousness into multiple posts. Because 'consciousness' is a dirty word in some neuroscience quarters, in this post I'll clear the air by clarifying what I mean by the term.
What is consciousness?
What are you experiencing right now? For instance, are you aware of hunger pangs in your gut, words on a screen, the deep red hues of a freshly picked rose? 'Consciousness' is just another word for this ability to perceive or be aware of the world. Indeed, for those who want to avoid the C-word, 'awareness' is a perfectly good synonym.
The canonical instances of conscious awareness are moments when we are awake, alert, and attending to something interesting such as a sunset. However, even while dreaming we are conscious of something, perhaps a sort of neuronal simulation of the world.
Should scientists bother with consciousness?
Over beers many neuroscientists are dismissive when consciousness comes up. They treat it as a "philosophical" problem, a waste of time for real scientists. I find this attitude strange. New data fuel conceptual progress in science, so it seems an empirical approach is the best way to make headway on something that is clearly a real and important phenomenon. Avoiding the topic leaves it in the hands of the philosophers, a fate just a little better than death.
I suppose one could argue that there is no way to study consciousness experimentally because it is inherently subjective or something. This argument doesn't work, though, as there already exist fairly straightforward experimental probes of consciousness. Binocular rivalry is a good example: if you show a different image to each eye (a sketch of a rivalrous stimulus is below), you don't see a fusion of the two images. Rather, you perceive the images one at a time (a dog then a cat, not a dog-cat). Neuroscientists can compare the bits of the brain that track the eye-locked stimuli (which stay the same) with those that fluctuate with the visual percept. This has provided a useful roadmap of which parts of the brain are locked to the stimulus, and which shift with the object of conscious awareness.
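Here is a toy sketch of a rivalrous stimulus pair (my own, not taken from any of the studies mentioned): two orthogonal gratings, one intended for each eye. Viewed dichoptically (e.g., through a mirror stereoscope), the percept tends to alternate between the two gratings rather than fusing into a plaid.

% Toy rivalrous stimulus: orthogonal gratings for the left and right eye
[x, y] = meshgrid(linspace(-1, 1, 256));       % a 256x256 patch
cyc = 6;                                       % grating spatial frequency (arbitrary)
leftEye  = 0.5 + 0.5*sin(2*pi*cyc*x);          % vertical grating for the left eye
rightEye = 0.5 + 0.5*sin(2*pi*cyc*y);          % horizontal grating for the right eye
subplot(1,2,1); imagesc(leftEye);  colormap gray; axis image off; title('left eye')
subplot(1,2,2); imagesc(rightEye); colormap gray; axis image off; title('right eye')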
The dismissive types are typically either unfamiliar with such experimental paradigms, or they tend to be skeptical of all research with a psychological component. For the former, Koch's book The Quest for Consciousness gives a nice summary of many experiments. For those skeptical of all cognitive neuroscience, there isn't much to be done (frankly, I am sympathetic to general skepticism toward cognitive neuroscience, which is a very speculative discipline right now). Hence, my take-home argument is that consciousness is just as legitimate (or illegitimate) a research topic as more mainstream psychological phenomena like attention and memory.
I should add one caveat. I have been writing as if all uses of the term 'consciousness' refer to the same thing. This may be false. Perhaps there are separate mechanisms for different sensory modalities. Or even within a modality: for instance, there could be different mechanisms for awareness of things in the center versus the periphery of our visual field. Maybe the mechanisms that underlie dreaming have little overlap with waking awareness. It could be that 'consciousness' is a mongrel term like 'memory,' and it will splinter as the science progresses.
My next post will begin describing what a biological approach to consciousness would look like.
Tuesday, October 28, 2008
Model systems in systems neuroscience?
Gilles Laurent makes an excellent point in 23 Problems in Systems Neuroscience:
Integrative neuroscience is an odd biological science. Whereas most biologists would now agree that living organisms share a common evolutionary heritage and that, as a consequence, much can be learned about complex systems by studying simpler ones, systems neuroscientists seem generally quite resistant to this empirical approach when it is applied to brain function. Of course, no one now disputes the similarities between squid and macaque action potentials or between chemical synaptic mechanisms in flies and rats. In fact, much of what we know about the molecular biology of transmitter release comes from work carried out in yeast, which obviously possesses neither neurons nor brain. When it comes to computation, integrative principles, or "cognitive" issues such as perception, however, most neuroscientists act as if King Cortex appeared one bright morning out of nowhere, leaving in the mud a zoo of robotic critters, prisoners of their flawed designs and obviously incapable of perception, feeling, pain, sleep, or emotions, to name but a few of their deficiencies. How nineteenth century!

Take that, you vertebrocentrists! Laurent continues:
I do not dispute that large, complex systems such as mammalian cerebral cortices have their own idiosyncrasies, both worthy of intensive study and critical for mental life. [...] Yet considering our obsession with things cortical, can we say that we have, in over forty years, figured out how the visual cortex shapes the responses of simple and complex cells? Do we really understand the cerebellum? Do we even know what a memory is? Do we understand the simplest forms and mechanisms of pattern recognition, classification, or generalization? I believe that our hurried drive to tackle these immensely complicated problems using the most complex neuronal systems that evolution produced--have you ever looked down a microscope at a small section of Golgi-stained cerebral cortex?--makes little sense.
In defense of people who want to study the more complicated systems, Ed Callaway once said to me that there are two "model" approaches you can take. One is to take a relatively simple system (e.g., the leech) and study the hell out of it. The other is to take a relatively simple part of a complex system (e.g., the retina, a single cortical column) and study that. Both approaches have their place.
Tuesday, August 12, 2008
Scientology on psychiatry
Scientology is unalterably opposed, as a matter of religious belief, to the practice of psychiatry, and espouses as a religious belief that the study of the mind and the healing of mentally caused ills should not be alienated from religion or condoned in nonreligious fields. I am in full agreement with this religious belief. [italics added]

Here they stake an interesting claim. Basically they are like Christian Scientists who restrict their refusal of medical help to one specialty. I can't say I'm completely unsympathetic: because of its subject matter, psychiatry is not nearly as well-developed or objective as other branches of medicine. That doesn't mean psychiatry doesn't help a lot of people (it does) or that it is pseudoscience (it isn't always).
They go much further than this, though. They add that any study of the mind, with "mentally caused" ills as a special case, should only be performed within a religious context. Any purely secular study of the mind is permanently opposed by Scientology.
Even the most kooky right-wing Creationist hacks wouldn't go this far. It would imply that psychology (other than behaviorism) is a taboo science. Psychophysics, cognitive tests on Alzheimer's patients, and the study of the visual capacities of people with different types of brain damage all seem fairly vanilla secular studies of the mind. So, the Scientologists really should clarify the subset of questions that can appropriately be targeted in a secular context.
This connects with another interesting issue. While they will not defer to psychiatrists in the case of "mentally caused" ills, what about physically caused mental ills? E.g., if I lose a chunk of my cortex in an accident, that will certainly cause some mental ills. I'll need to see a neurologist, if not a psychiatrist. Also, what is not mentally caused in their framework? Is sleepiness mentally caused? What about cancer? Given the emphasis on personal power and responsibility in their philosophy, we need to ask.
I do not believe in or subscribe to psychiatric labels for individuals. It is my strongly held religious belief that all mental problems are spiritual in nature and that there is no such thing as a mentally incompetent person--only those suffering from spiritual upset of one kind or another dramatized by an individual. I reject all psychiatric labels and intend for this Contract to clearly memorialize my desire to be helped exclusively through religious, spiritual means and not through any form of psychiatric treatment, specifically including involuntary commitment based on so-called lack of competence.

The above reads almost word-for-word like the beliefs of Christian Scientists, who believe that all "illness" is really an invasion by evil spirits, not something that can be cured by medicine.
Why this animosity toward psychiatry? It could be an attempt (not necessarily conscious) to kill off a direct competitor to their own method: traditional secular talk therapy. This may sound crazy, but consider that L Ron Hubbard's first Scientology text, Dianetics, is subtitled 'The Modern Science of Mental Health.' It was published in 1950, when psychiatry was a fairly young science. The theory of the mind (and mental problems) advanced by Dianetics reads like warmed-over psychoanalytic theory. The subject is directed to bring to consciousness certain buried memories ('engrams') that feed a 'reactive mind' which, unbeknownst to the subject, controls their behavior and emotions in the present. This is done via a process of talking to an 'auditor'. If interested, see the page How the mind works at the Scientology web site. This sounds a lot like early clinical psychiatry.
Since Dianetics is basically a reheated model of the mind, with its own set of exotic categories (you are 'Clear' once your 'engrams' are properly 'audited'), this psychiatry hate is sort of amusing. The Scientologists clearly consider Dianetics a science, yet science is self-conscious about its fallibility, its open-ended nature, and its openness to new information and data. Scientology, or at least this document, aspires to none of these scientific ideals.
There are, of course, interesting issues in the underbrush when they say they reject psychiatric categories. If you read the source of the categories, the Diagnostic and Statistical Manual of Mental Disorders (DSM), it isn't a perfect document. Some of the categories seem ill-defined or subjective, and there is a good deal of overlap among symptoms that makes it hard to come up with definitive labels. Historically it has reflected some silly cultural prejudices, listing homosexuality as a disorder. It still includes 'female hypoactive sexual desire disorder', which is controversial (see link above for other controversies).
All told, I rather like the DSM because it is self-consciously fallible, open-ended, and evolving. It reflects the tentative classifications formed by those working with patients, doctors who have noticed that many different suites of symptoms tend to cluster together. If nothing else, it provides a consistent language for practitioners to use when talking with each other. While you could describe a long list of symptoms to a colleague ("Subject experiences auditory hallucinations, nervous and paranoid, prone to violent outbursts"), it is easier to say "paranoid schizophrenic".
Under no circumstances, at any time, do I wish to be denied my right to care from members of my religion to the exclusion of psychiatric care or psychiatric directed care, regardless of what any psychiatrist, medical person, designated member of the state or family member may assert supposedly on my behalf. If circumstances should ever arise in which government, medical, or psychiatric officials or personnel or family members or friends attempt to compel or coerce or commit me for psychiatric evaluation, treatment or hospitalization, I fully desire and fully expect that the Church of Scientology will intercede on my behalf to oppose such efforts and/or extricate me from that predicament so my spiritual needs may be addressed in accordance with the tenets of the Scientology religion free from psychiatric intervention.

This is also an interesting paragraph. I sympathize with its likely motivation: they don't want Scientologists to be forced to receive psychiatric treatment against their will. I'm sure this is a serious concern, partly because of their oddball beliefs, such as their putative beliefs about Xenu the alien dictator.
On the other hand, signing a contract in which you agree never to see a psychiatrist, regardless of the situation, seems reckless and dangerous. Instances of postpartum depression, which have clear physiological causes and treatments, don't respond particularly well to psychotherapy but do respond well to antidepressant drugs. It is silly to apply a screwdriver to a nail, and the Scientologists are saying there is only one tool you need to cure all ills with any mental dimension.
The clause stating that you want Scientology to intercede and oppose any attempt to provide you with psychiatric care is particularly worrisome. Does this mean Scientologists should actively try to prevent someone from seeking such care, even if they want it? Probably not. However, one practical consequence is that the social pressure to avoid psychiatry is very strong within their subculture. This will lead to many people not receiving adequate care for easily treatable conditions.
These are not idle worries. We have the tragic case of Lisa McPherson, who died after her torturous 'Introspection Rundown' with the Scientologists. She was likely psychotic, but they refused to send her to a hospital, even after a doctor who was a Scientologist told them they should, for fear that she would be seen by psychiatrists.
Wednesday, July 23, 2008
What good is large-scale oscillatory activity?
What is the function, if any, of such large-scale "oscillatory" activity? Recently I read an argument that such activity is just epiphenomenal, that it has no biologically useful causal role. I am something of an atheist turned agnostic about the importance of such signals, so in what follows I try my best to step into the role of advocate for the claim that such oscillations have interesting functional consequences.
Let's consider a few experimental facts about such large-scale synchronous activity (not an exhaustive list!):
1. Neuronal activity can be directly influenced by weak external electric fields. These are known as "ephaptic interactions" in the literature. The function, extent, and importance of ephaptic interactions (relative to synaptic interactions) are open questions that have received little attention. Google "ephaptic interactions" if you want to see some of the research that has been done. For a neuron near threshold, an ephaptic signal may nudge it up into a spiking regime (or push it down away from threshold, making it less likely to spike); a toy sketch of this appears after the list.
2. The efficacy with which plasticity is induced in rat barrel cortex is synchrony-dependent. In the study from Diamond's group, more synchrony implied higher probability of inducing cortical plasticity with their protocol. Note in this paper they measured synchrony among spike trains using extracellular recordings, not LFP or EEG.
[Link to paper]
3. Gilles Laurent's group pharmacologically disrupted "synchronous oscillations" in the honeybee CNS and showed this disruption impaired behavioral discrimination of similar odorants (but not discrimination of clearly distinct odorants). Similar pharmacological manipulations have been done in vertebrate systems (see Buzsaki paper below).
Note these pharmacological manipulations are not clean, which makes it hard to interpret the results.
[Laurent paper] [Buzsaki paper]

The second and third results are likely indicative of some very cool and interesting synaptic processing, which puts the first line of evidence in a unique class. Ephaptic interactions are the key for those who think oscillations actually do something. Unfortunately, there isn't a lot of good work on the role of ephaptic interactions in nervous systems.
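To make point 1 concrete, here is a toy sketch (mine, not from any of the papers above) of the near-threshold idea: a leaky integrate-and-fire neuron parked just below threshold fires only when a weak, slowly oscillating 'field' term is added to its input. The parameter values are arbitrary placeholders.

% Toy leaky integrate-and-fire neuron nudged over threshold by a weak 'field' term
dt = 0.1; t = 0:dt:500;                          % time axis (ms)
tau = 20; Vrest = -70; Vth = -50; Vreset = -70;  % membrane time constant (ms) and potentials (mV)
Idrive = 19.5;                                   % steady drive parks V just below threshold (-50.5 mV)
Ifield = 1.0*sin(2*pi*0.01*t);                   % weak 10 Hz 'field' term (arbitrary units)
V = Vrest*ones(size(t));
for k = 2:numel(t)
    V(k) = V(k-1) + dt*(-(V(k-1) - Vrest) + Idrive + Ifield(k))/tau;
    if V(k) >= Vth                               % threshold crossing: spike and reset
        V(k) = Vreset;
    end
end
% With Ifield set to zero, V settles at -50.5 mV and never spikes.
plot(t, V); xlabel('time (ms)'); ylabel('membrane potential (mV)')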
My guess is that it will turn out that ephaptic interactions are important for the functioning of some parts of some systems, but which parts of which systems is not a question that can be settled a priori. Unfortunately, right now we are stuck mostly with correlational studies, which keeps the ratio of inference to data unacceptably high.
Stepping back from specific data suggesting that oscillations are important, Buzsaki's book Rhythms of the Brain is likely a good place to start for the theist's side of things with respect to the importance of oscillations. I should admit I haven't read it, but have only seen him speak and read some of his papers, so caveat emptor. I can say his experimental work is wonderful, much of it driven by questions about the function of oscillations in the CNS, so his book seems like a natural place to start for someone interested in this question.
References:
Erchova IA, Diamond ME (2004). Rapid fluctuations in rat barrel cortex plasticity. J Neurosci 24(26):5931-5941.
Stopfer M, Bhagavan S, Smith BH, Laurent G (1997). Impaired odour discrimination on desynchronization of odour-encoding neural assemblies. Nature 390(6655):70-74.
Robbe D, Montgomery SM, Thome A, Rueda-Orozco PE, McNaughton BL, Buzsaki G (2006). Cannabinoids reveal importance of spike timing coordination in hippocampal function. Nat Neurosci 9:1526-1533.
Wednesday, July 02, 2008
Anscombe's Quartet
The x- and y-values are included at the end of this post in a Matlab-friendly format. The Quartet provides a stark lesson in how useful it can be to simply look at one's data before diving in with all sorts of statistical ninja moves. For instance, Set 2 can be modelled with linear regression to yield the same mean-squared error between the regression line and the data as the other sets, but it ain't a linear relationship.
This is old hat for most of us, but I like the Quartet for its simplicity and visual impact.
These data and graphs were first presented by F.J. Anscombe in 1973 in his paper Graphs in Statistical Analysis. It is quite fun reading over the paper, which ends:
Unfortunately, most persons who have recourse to a computer for statistical analysis of data are not much interested either in computer programming or in statistical method, being primarily concerned with their own proper business. Hence the common use of library programs and various statistical packages. Most of these originated in the pre-visual era. The user is not showered with graphical displays. He can get them only with trouble, cunning, and a fighting spirit. It's time that was changed.

Thank goodness for Matlab.
The data (coded for Matlab)
x1=[10 8 13 9 11 14 6 4 12 7 5];
x2=x1;
x3=x1;
x4=[8 8 8 8 8 8 8 19 8 8 8];
y1=[8.04 6.95 7.58 8.81 8.33 9.96 7.24 4.26 10.84 4.82 5.68];
y2=[9.14 8.14 8.74 8.77 9.26 8.1 6.13 3.1 9.13 7.26 4.74];
y3=[7.46 6.77 12.74 7.11 7.81 8.84 6.08 5.39 8.15 6.42 5.73];
y4=[6.58 5.76 7.71 8.84 8.47 7.04 5.25 12.5 5.56 7.91 6.89];
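To see the point numerically, here is a quick sketch (my own few lines, not from Anscombe's paper) that fits a least-squares line to each set and prints the nearly identical summary statistics alongside the four scatter plots:

% Fit and plot each of the four sets (run after defining the vectors above)
X = {x1 x2 x3 x4}; Y = {y1 y2 y3 y4};
for i = 1:4
    p = polyfit(X{i}, Y{i}, 1);                   % least-squares line y = p(1)*x + p(2)
    r = corrcoef(X{i}, Y{i});                     % correlation coefficient
    mse = mean((Y{i} - polyval(p, X{i})).^2);     % mean squared residual
    fprintf('Set %d: slope %.2f, intercept %.2f, r %.2f, MSE %.2f\n', ...
        i, p(1), p(2), r(1,2), mse);
    subplot(2,2,i); plot(X{i}, Y{i}, 'o'); hold on
    plot([2 20], polyval(p, [2 20]), '-'); title(sprintf('Set %d', i))
end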
Sunday, April 06, 2008
Dude, where's my 'dudes?
Note added 4/14/08: it seems to be back up, with a sleek new look.
Monday, March 31, 2008
Getting intimate with rat whiskers (2): Ritt et al.
Ex vivo whiskers
As I discussed in the previous post, the resonance hypothesis is the claim that the resonance frequencies of whiskers provide an ethologically important channel of information about what is going on in the world of the rat. Subsequent papers, however, have argued against the resonance hypothesis, as they were unable to reproduce whisker resonance in a more ethologically realistic preparation in which a whisker is fixed at its base and moved across a fixed surface (this is a little odd, since Hartmann had already shown that resonance occurs in the whiskers of awake behaving rats).
The first set of experiments in the paper addressed these issues, monitoring whisker movement as a whisker was brushed against a fixed surface at different speeds. They did observe resonance, as expected, but found an interesting twist--resonance wasn't prominent at every speed, but only within a relatively narrow range of speeds, a 'hot spot' (see Figure 1B of the paper, which shows whisker angular velocity traces as the whisker is moved across the same surface at different speeds). This hot spot is thought to lie within the range of speeds at which whiskers move in ethological contexts, so it could be that rats exploit resonance during certain tasks by moving their whiskers at the appropriate speed.
In situ whiskers
The majority of the paper examined the micromotions of whiskers while rats performed a texture discrimination task. The task was relatively simple: the rat runs into a chamber that contains two different textures along the wall--a rough texture and a smooth texture (think of it as rough and smooth sandpaper). The rats are trained to get a reward by running to a door that corresponds to a particular texture (e.g., when the smooth texture is on the right, the rat is rewarded for moving to the right).
Their high-resolution monitoring of the whisker motions revealed a very interesting pattern to the movements. A whisker will slip along the surface and then stick on a particularly abrasive feature, staying in place as the rat continues to move its head. Then, after the tension grows enough, the whisker whips forward (slips) before sticking at another spot further along (for example, see Figure 3 of the paper: the top panel shows the trace of a whisker during the trial, and the bottom panel shows the angle (red) and position (blue) of the whisker). This is often accompanied by a kind of 'ringing' of the whisker as it sticks in place. Perhaps not surprisingly, this slip-n-stick pattern occurs more often when the whisker sweeps along the rough surface.
In the frequency domain, the mean frequency of movement of each whisker was inversely proportional to whisker length. This suggests that the elasticity that confers different resonance frequencies upon whiskers of different lengths also influences the frequency of whisker vibrations during the present discrimination task (the figure below, clipped from the paper, shows an example of the likelihood of different frequency components in two whiskers of different length).
While the frequencies of oscillation were consistent with resonance frequencies observed ex vivo, the rats didn't seem to sample the textures with all of their whiskers, which they might do if their goal was to maximize the information about frequency available from whiskers of different lengths (e.g., the "whiskers as cochlea" picture may need to be tweaked). Indeed, the largest two arcs of whiskers rarely touched the textured surface.
Among other things, their analysis also revealed a unique signature of rough-surface contact: a set of high velocity, large amplitude whisker deflections that were not seen in previous studies. Figure 8 of the paper provides a scatter plot that includes the rise time (roughly, time from trough to peak of a whisker movement) and velocity of whisker movements, and those events that were both high velocity and longer in duration (signifying higher amplitude) were much more likely to occur during contact with the rough surface.
Where are we?
This paper, a heroic effort that has established a great method and some intriguing results, is only the beginning. The whisker coding literature is finally catching up to vision when it comes to understanding the inputs to this system. Many more experiments are needed, not just to examine the role of resonance in transmitting information about the world, but to determine the role of different types of whisking patterns the rat employs as it scurries about.
Apropos of the resonance hypothesis, it would be interesting to see how rats fare when their whiskers are cut so that they are all the same length. This should decrease the differences in resonance frequencies among whiskers. Hence, to the extent that resonance is important for a task, we would expect to see performance compromised (note, though, that things are a little more complicated: resonance frequency depends on factors other than length). It will also be crucial to measure which micro-features of whisker movement are tracked neurally in awake behaving rodents.
This paper has opened up possibilities beyond just the resonance hypothesis. It reveals the slippery, sticky, messiness that is present when rats use their wonderful keratinous appendages to explore their world.
Acknowledgments: Thanks to Jason Ritt for clarifying an important point about the velocity 'hot spot' mentioned in the ex vivo experiments.
Friday, March 07, 2008
Getting intimate with rat whiskers (1): Ritt et al.
Where's the neuroscience?
Before describing the main results, I'd like to discuss why, as neuroscientists, we should care about what whiskers are doing. "Where is the neuroscience?" someone may ask of a paper that just examines whisker movements. To that I would respond, "What is the stimulus?" The first rule of psychophysics and sensory coding is "Know thy stimulus" (it was EJ Chichilnisky who hammered this into my mind). Students of psychophysics and sensory coding try to quantify the relationship between the stimulus and some other variable (behavior in psychophysics, neuronal activity in sensory coding), so we need to quantify the stimulus precisely on each trial to be sure we have captured all the relevant information it contains.
Because of such concerns, the question we should be asking is why researchers studying coding in the whisker system know so little about what the whiskers are doing, given the importance of understanding the stimulus in sensory coding and psychophysics. In the present paper, Ritt et al. provide a much-needed corrective to this deficit.
The resonance hypothesis
For a few years now, Christopher Moore (and Mitra Hartmann at Northwestern) have been pointing out that whiskers are not simple passive detectors of what goes on in the world. Rather, whiskers have interesting intrinsic properties that shape their responses as the rat scurries about palpating the world.
For example, each whisker has a resonance frequency. If you attach a stimulator to the end of a whisker and drive it at different frequencies, there is a 'hot spot' at which the whisker vibrates with a significantly higher amplitude. The following figure (Figure 2 from Neimark et al) shows the amplitude of a whisker's movement as a function of stimulation frequency. The breakaway images show data from individual frequencies (gray line: stimulus, black line: whisker movement downstream from the stimulus):
The whisker in the figure has a resonance frequency just under 400 Hz. The resonance frequency of a whisker depends on its length (longer whiskers have lower resonance frequencies). Interestingly, whisker length increases quite significantly as you move from the front to the back of a rat's face.
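As a back-of-the-envelope illustration of why longer whiskers resonate at lower frequencies (my own sketch, not a calculation from any of these papers), here is the first bending mode of an ideal uniform cylindrical cantilever. Real whiskers taper toward the tip, so treat the material constants and the absolute numbers as rough placeholders and look only at the 1/L^2 trend:

% First-mode resonance of an ideal clamped-free cylindrical beam (assumed parameters)
E = 3e9;              % assumed Young's modulus for keratin (Pa)
rho = 1.3e3;          % assumed density (kg/m^3)
d = 100e-6;           % assumed base diameter (m)
L = (10:5:50)*1e-3;   % whisker lengths, 10 to 50 mm
beta1 = 1.875;        % first-mode constant for a clamped-free beam
f1 = (beta1^2./(2*pi*L.^2)).*(d/4)*sqrt(E/rho);   % first resonance frequency (Hz)
plot(L*1000, f1, 'o-'); xlabel('whisker length (mm)'); ylabel('f_1 (Hz)')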
As an interesting historical aside, the possibility that whiskers have resonance frequencies was initially noticed by Neimark while she was still an undergraduate working with John Hopfield. The idea was then fleshed out experimentally by Neimark, Moore, and Mark Andermann (leading to this paper) and by Mitra Hartmann and others at Caltech (leading to this paper). The present paper is the latest in a series of papers from Christopher Moore's group exploring the consequences of whisker resonance.
Getting back to the science, what is the significance of all this resonance business? Is it possible that whiskers with different resonance frequencies are induced to resonate by different types of stimuli? This is part of what has become known as the resonance hypothesis (review here). As described in the original paper by Neimark and others, "vibrissa resonance is optimally positioned to increase the range of detection and the specificity of discrimination, and may provide the ability to represent complex stimuli through a compact, somatotopically distributed code" (p 6500).
Many of us think of the resonance hypothesis as the 'whiskers as cochlea hairs' hypothesis. It has been a quite productive idea for experimentalists, and it is interesting to note that it wasn't even on our radar until Neimark asked a simple question: what are the properties of the sensor used by the rat?
Resonant oscillations have been observed in detached whiskers (image above) and in the whiskers of anesthetized rats. It has also been shown that cortical neurons respond differently when a whisker is stimulated at its resonance frequency (see the above review of the resonance hypothesis). But is whisker resonance ethologically important?
Does resonance really matter? Why it is hard to answer this question
A key question is whether resonance is important in the awake, freely moving rat performing a discrimination task. One step toward answering this question is to determine whether resonance is observed in the whiskers during a particular task. If not, then resonance cannot be used, and we'd have evidence that it is irrelevant for that task.
So, what's the problem? Measure the whiskers in awake freely moving rats and get on with it. Unfortunately, it turns out to be extremely difficult to track tiny whisker movements in a rat that is running around moving its body, head, and whiskers any old place it wants. You need to get the optics just right to see many of these wispy whiskers in focus, use a high-speed camera (the resonance frequencies can be higher than 500 Hz), and then set up a system to automate the tracking of the whiskers (no way you are going to code all that data by hand with the required large fields of view and fast frame rates). It would be a herculean effort.
Mitra Hartmann, presently at Northwestern, chipped away at this problem a bit. She analyzed some interesting video data from awake, freely moving rats in this paper, in which whisker movements were sampled at between 250 and 1000 Hz. She observed that whiskers often exhibit resonant oscillations after contacting an object, but not when the rat is simply whisking in open air.
But what about when a rat is performing a stimulus discrimination task? Could resonant oscillations actually be used to discriminate the features of an object such as its texture? To get at such questions, Ritt et al have pushed things much further. They have developed algorithms to automatically track the entire length of individual whiskers during a texture discrimination task in freely moving rats. They capture images at the impressive frame rate of 3000 Hz. Here's one example from Figure 3 of their paper:
Here they show the results of tracking a single whisker (the whisker's shape over multiple frames into the past is traced in red) as the rat moves its head across a surface that changes from rough to smooth texture (you can think of it as moving from rough to smooth sandpaper). To someone interested in sensory coding and psychophysics in the whisker system, this image is simply a symphony.
So what did they observe? To give away the punchline, it seems resonance is present during the task. But there is enough cool stuff in the paper that I'll put off the discussion of the results and interesting open questions to Part 2.
Acknowledgements: Many thanks to Mitra Hartmann and Christopher Moore for explaining some of the history of the resonance hypothesis and for clarification of some key points.
Monday, February 25, 2008
Large-scale thalamocortical model
Some of the details of their model (roughly from larger to smaller scales) include:
1. The cortical sheet's geometry was constructed from human MRI data.
2. Projections among cortical regions were modeled using data from diffusion tensor MRI of the human brain (above image is Figure 1 of the paper showing a subset of such connections).
3. Synaptic connectivity patterns among neurons within and between cortical layers are based on detailed studies of cat visual cortex (and iterated to all of cortex).
4. Individual neurons are not modelled with the relatively computationally intensive Hodgkin-Huxley formalism, but with a species of integrate-and-fire neuron that includes a variable threshold, short-term synaptic plasticity, and long-term spike-timing dependent plasticity (a toy sketch of this flavor of reduced neuron model appears after this list).
5. The only subcortical structure included in the model is the thalamus, but the model does include simple simulated neuromodulatory influences (dopamine, acetylcholine).
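For concreteness, here is a toy single-neuron sketch in the spirit of the reduced spiking units used in such models. This is the bare two-variable Izhikevich 'simple model' with generic regular-spiking parameters; the published network adds a dynamic threshold, short-term plasticity, STDP, and much else, so treat it only as an illustration of the general class.

% Toy Izhikevich 'simple model' neuron driven by a current step (generic parameters)
a = 0.02; b = 0.2; c = -65; d = 8;     % generic regular-spiking parameters
v = -65; u = b*v;                      % membrane potential (mV) and recovery variable
dt = 0.5; t = 0:dt:1000;               % time step and time axis (ms)
I = 10*(t > 200);                      % step of injected current after 200 ms
vtrace = zeros(size(t));
for k = 1:numel(t)
    v = v + dt*(0.04*v^2 + 5*v + 140 - u + I(k));
    u = u + dt*a*(b*v - u);
    if v >= 30                         % spike: mark it, then reset
        vtrace(k) = 30; v = c; u = u + d;
    else
        vtrace(k) = v;
    end
end
plot(t, vtrace); xlabel('time (ms)'); ylabel('v (mV)')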
Their model exhibited some very interesting behavior. First, the larger-scale oscillatory activity that we see in real brains (e.g., as observed via EEG) emerged in the model. Also like real brains, the model exhibited ongoing spontaneous activity in the absence of inputs (note this only occurred after an initial 'setup' period in which they simulated random synaptic release events: the learning rule seemed to take care of the rest and push the network into a regime in which it would exhibit spontaneous activity). Quite surprisingly, they also found that when a single spike was removed from a single neuron, the state of the entire simulated brain would diverge from the run in which that spike was kept. There is a lot more, so if this sounds interesting check out the paper. They also mention that they are currently examining how things change when they add sensory inputs to the model.
Of course, a great deal of work is yet to be done, and a great deal of thinking through the implications (and biological relevance) of some of the model's behavior (especially its global sensitivity to single spikes, which to me sounds biologically dubious). However, I find it quite amazing that by simply stamping the basic cortical template onto a model of the entire cortical sheet, and adding the rough inter-area connections, they observed many of the qualitative features of actual cortical activity. We tend to focus so much on local synaptic connections in our models of cortex that it is easy to miss the fact that the long-range projections could have similarly drastic influences on the global behavior of the system.
This paper is just fun. First, it is a great example of how to write a modeling paper for nonmathematicians. It has enough detail to give the modeler a sense of what they did, but not so much detail that your average systems neuroscientist would instinctively throw it in the trash (as is the case with too many modelling papers). Second, it provides a beautiful example of how people interested in systems-level phenomena can build biology into their model without making it so computationally expensive that it would take fifty years to simulate ten milliseconds of cortical activity. It will be very interesting in the future as the hyper-realist Blue Brain style models make contact with these middle-level theories. I don't see conflict, but a future of productive theory co-evolution.
Monday, February 18, 2008
Visualizing the SVD
Warning: this post isn't directly about neuroscience, but about a mathematical tool that is used quite a bit by researchers.
One of the most important operations in linear algebra is the singular value decomposition (SVD) of a matrix. Gilbert Strang calls the SVD the climax of his linear algebra course, while Andy Long says, "If you understand SVD, you understand linear algebra!" Indeed, it ties about a dozen central concepts from linear algebra into one elegant theorem.
The SVD has many applications, but the point of this post is to examine the SVD itself, to massage intuitions about what is going on mathematically. To help me build intuitions, I wrote a Matlab function to visualize what is happening in each step of the decomposition (svd_visualize.m, which you can click to download). I have found it quite helpful to play around with the function. It takes two arguments: a 3x3 matrix (A) and a 3xN 'data' matrix in which each of the N columns is a 'data' point in 3-D space. The function returns the three matrices in the SVD of A, but more importantly it generates four plots to visualize what each factor in the SVD is doing.
To refresh your memory, the SVD of an mxn matrix A is a factorization of A into three matrices, U, S, and V' such that:
A=USV'
where V' means the transpose of V. Generally, A is an mxn matrix, U is mxm, S is mxn, and V' is nxn.
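A minimal check of the factorization in Matlab (a non-square example of my own choosing, just to show the dimensions at work):

% Minimal SVD check for a 2x3 matrix (m=2, n=3)
A = [3 1 0; 1 2 4];
[U, S, V] = svd(A);       % U is 2x2, S is 2x3, V is 3x3
norm(A - U*S*V')          % ~1e-15: the three factors reproduce A
diag(S)'                  % the singular values, sorted in descending order
U'*U, V'*V                % both identity matrices: the columns are orthonormal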
One cool thing about the SVD is that it breaks up the multiplication of a matrix A and a vector x into three simpler matrix transformations which can be easily visualized. To help with this visualization, the function svd_visualize generates four graphs: the first figure plots the original data and the next three plots show how those data are transformed via sequential multiplication by each of the matrices in the SVD.
In what follows, I explore the four plots using a simple example. The matrix A is a 3x3 rank 2 matrix, and the data is a 'cylinder' of points (a small stack of unit circles each in the X-Y plane at different heights). The first plot of svd_visualize simply shows this data in three-space:
In the above figure, the black lines are the standard basis vectors in which the cylinder is initially represented. The green and red lines are the columns of V, which form an orthogonal basis for the same space (more about this anon).
When the first matrix in the SVD (V') is applied to the data, this serves to rotate the data in three-space so that the data is represented relative to the V-basis. Spelling this out a bit, the columns of V form an orthogonal basis for three-space. Multiplying a vector by V' changes the coordinate system in which that vector is represented: the original data is represented in the standard basis, and multiplication by V' produces that same vector represented in the V-basis. For example, if we multiply V1 (the first column of V) by V', then V1 is represented as the point [1 0 0]' relative to the V-basis. Application of this rotation matrix V' to the above data cylinder yields the following:
As promised, the data is exactly the same as in Figure 1, but our cylinder has been rotated in three-space so that the V-basis vectors lie along the main axes of the plot. The two green vectors are the first two columns of V, which now lie along the two horizontal axes in the figure (for aficionados, they span the row space of A, the set of all linear combinations of the rows of A). The red vertical line is the third column of V (its span is the null space of A, the set of all vectors x such that Ax=0). So we see that the V' matrix rotates the data into a coordinate system in which the null space and row space of A can be more readily visualized.
The second step in the SVD is to multiply our rotated data by the 'singular matrix' S, which is mxn (in this case 3x3). S is a "diagonal" matrix that contains the nonnegative 'singular values' of A sorted in descending order (technically, the singular values are the square roots of the eigenvalues of A'*A, whose eigenvectors are the columns of V). In this case, the singular values are 3 and 1, while the third diagonal element of S is zero.
What does this mean? Generally, multiplying a vector x = (x1, ..., xn)' by a diagonal matrix with r nonzero diagonal elements s1, ..., sr yields b = (s1*x1, s2*x2, ..., sr*xr, 0, ..., 0)'. That is, it stretches or contracts the components of x by the magnitudes of the singular values and zeroes out those elements of x that correspond to the zeros on the diagonal. Note that applying S to V1 (the first column of V, expressed in the V-basis) yields (s1, 0, 0)', a vector whose first entry is s1 and whose remaining entries are zero. Recall this is because S acts on vectors represented in the V-basis, and in the V-basis, V1 is simply (1, 0, 0)'.
Application of our singular matrix to the above data yields the following:
This 3-D space represents the output space (range) of the A transformation. In this case, the range happens to be three-space, but if A had been Tx3, each input point in three-space would be sent to a point in a T-dimensional space. The figure shows that the columns of the matrix U (in green and red) are aligned with the main axes: so the transform S returns values that are in the range of A, but represented in the orthogonal basis given by the columns of U. The green basis vectors are the first two columns of U (and they span the column space of A), while the red vector is the third column of U (which spans the null space of A').
Since the column space of A (for this example) is two dimensional, any point in the 3-D input space (the original data) is constrained to land on a plane in the output space.
Notice that the individual circles that made up the cylinder have all turned into ellipses in the column space of A. This is due to the disproportionate stretching action of the singular values: the stretching is greatest for vectors in the direction of V1. Also note that in the U-basis, the image of V1 lies along the same axis as U1 (U1, in the U-basis, is of course (1, 0, 0)'), but s1 units along that axis, for the reasons discussed in the text after Figure 2.
One way to look at S is that it implements the same linear transformation as the matrix A, but with the inputs and outputs represented in different basis sets. The inputs to S are the data represented in the V-basis, while the outputs from S are the data represented in the U-basis. That makes it clear why we first multiply the data by V': this changes the basis of the input space to that which is appropriate for S. As you might guess, the final matrix, U, simply transforms the output of the S transform from a representation in the U-basis back into the standard basis.
Hence, we shouldn't be surprised that the final step in the SVD is to apply the mxm (in this case, 3x3) matrix U to the transformed data represented in the U-basis. Just like V', U is a rotation matrix: it transforms the data from the U-basis (above picture) back to the standard basis. The standard basis vectors are in black in the above picture, and we can see that the U transformation brings them back into alignment with the main axes of the plot:
Pretty cool. The SVD lets you see, fairly transparently, the underlying transformations implicitly lurking in any matrix. Unlike many other decompositions (such as a diagonalization), it doesn't require A to have any special properties (e.g., A doesn't have to be square, symmetric, have linearly independent columns, etc). Any matrix can be decomposed into a change of basis (rotation by V'), a simple scaling (and "flattening") operation (by the singular matrix S), and a final change of basis (rotation by U).
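Putting the whole pipeline together numerically (a sketch in the spirit of svd_visualize, though not the function itself; the matrix and cylinder below are my own stand-ins, not the exact ones used for the figures):

% Rotate-stretch-rotate: apply the three SVD factors to a 'cylinder' of points
theta = linspace(0, 2*pi, 40); z = -1:0.5:1;
[T, Z] = meshgrid(theta, z);
data = [cos(T(:))'; sin(T(:))'; Z(:)'];   % 3xN 'cylinder': stacked unit circles
A = [2 1 0; 1 2 0; 1 1 0];                % a 3x3 rank-2 matrix
[U, S, V] = svd(A);
step1 = V'*data;                          % rotate the data into the V-basis
step2 = S*step1;                          % stretch by the singular values, flatten the null direction
step3 = U*step2;                          % rotate back into the standard basis
norm(step3 - A*data)                      % ~1e-15: the three steps reproduce A*data
plot3(step3(1,:), step3(2,:), step3(3,:), '.'); axis equal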
Postscript: I realize this post uses a bit of technical jargon, so I will post a linear algebra primer someday that explains the terminology. For the aficionados, I have left out some details that would have complicated things and made this post too long. In particular, I focused on how the SVD factors act on "data" vectors, but said little about the properties of the SVD itself (e.g., how to compute U, S, and V; comparison to orthogonal diagonalization; and tons of other things).
If you have suggestions for improving svd_visualize, please let me know in the comments or email me (thomson ~at~ neuro [dot] duke -dot- edu).
Edit in 2022: it has been almost 15 years since I made this post. It has held up fairly well as one Gilbert-Strang-like take on the SVD, but I wrote it as if V and U are always rotation matrices when you perform the SVD. This is not right: they can be either rotation or reflection matrices (the latter have a determinant of -1, while the former have a determinant of 1). For the example used in the code here, the determinants are 1, so we have rotation matrices, but this will not always be the case. Note a reflection is when certain axes flip (e.g., going from a right-handed to a left-handed coordinate system), so it is similar to a rotation but mathematically not the same.
Tuesday, January 29, 2008
Sensory processing in mouse motor cortex
Here is the conclusion paragraph of my summary:
Ferezou et al. showed that subthreshold responses to whisker stimulation can be quite broadly distributed, often extending into M1. This suggests that M1 does not have a purely motor function, but serves also to process sensory information. While M1 projects directly to the brain stem and spinal cord to coordinate motor activity, its tight link with S1 opens up interesting questions about its role in sensory processing and sensorimotor transformations. Also, the sensory response in S1 and M1 depends on the behavioral state of the animal, suggesting that sensory processing isn’t a stationary process, but is sensitive to the context in which a stimulus is delivered. So, when someone asks how a mouse’s cortex would respond to a given stimulus, you probably have to ask, “What is the critter doing?”
Wednesday, January 09, 2008
Frontiers in Neuroscience
They are taking a big risk with this journal, as it flouts the traditional business model of big journals like Nature (expensive, for-profit journals accessible only to paid subscribers). Just like its successful cousin, PLOS, Frontiers is open access: authors retain copyright and can distribute their articles as they see fit. With these guys, you won't be directed to any annoying web pages asking you to pay $50.00 for an article you need.
What makes the Frontiers journals even more interesting is their novel policy for article reviewers. Reviewers are not anonymous, but rather "the Referee remains anonymous only during the review period. After the review, the screen is lifted and the Referees are disclosed and acknowledged on the published paper." No more annoying reviews by lazy referees who obviously haven't read the paper closely.
Also, if you review a paper that is ultimately accepted for publication you will have the option of writing up "a one-page summary of the paper co-authored by all participating Referees. These commentaries are referenced and citable and are major incentive for Referees because the current trend is for readers to read more meta-papers before going to the deeper original studies."
This is a very interesting experiment in publication practices. On one hand, writers are almost guaranteed to get constructive and helpful criticisms rather than half-thought-out potshots. On the other hand, if you are a small fish reviewing the paper of a "big name" lab, you might be tempted to pull your punches so as not to incur the wrath of someone with a lot of power in your subfield. Also, referees might be tempted to accept papers just so they can get their summaries published, thereby padding their CVs.
Time will tell whether this radical experiment in open access journals has legs. The first issue has some very interesting articles. The first four are:
1. Shaul Druckmann, Yoav Banitt, Albert A. Gidon, Felix Schürmann, Henry Markram and Idan Segev. A Novel Multiple Objective Optimization Framework for Constraining Conductance-Based Neuron Models by Experimental Data.
2. Alex Thomson and Christophe M. Lamy. Functional maps of neocortical local circuitry.
3. Sidarta Ribeiro, Xinwu Shi, Matthew Engelhard, Yi Zhou, Hao Zhang, Damien Gervasoni, Shih-Chieh Lin, Kazuhiro Wada, Nelson A. Lemo and Miguel A. Nicolelis. Novel experience induces persistent sleep-dependent plasticity in the cortex but not in the hippocampus.
4. Nestor Parga and Larry Abbott. Network model of spontaneous activity exhibiting synchronous transitions between up and down states.