Monday, June 05, 2017

Systems Neuroscience Highlights: May 2017

It was a great month for systems neuroscience, and the following articles stood out as pushing things forward in unexpected (to me) and interesting ways.

Sensory Coding
Tien et al -- Homeostatic Plasticity Shapes Cell-Type-Specific Wiring in the Retina -- Neuron [Pubmed] This is an amazing paper.
    They generated a line of mice lacking a certain type of retinal bipolar cell (the B6 cell), which is typically the main input to the ONα retinal ganglion cell. Instead of being completely wrecked in this line of mice, the ONα RGCs actually maintained the same response profiles seen in wild-type animals, because other types of bipolar cells compensated for the loss of the B6 cell in the circuit. Hence, it seems that compensatory plasticity mechanisms in the retina rewired the inputs to this class of RGC so as to maintain the same output to the brain.
     I always thought homeostatic plasticity research was very cool, but mostly as a story about single neurons maintaining their firing rates by changing the concentrations and distributions of ion channels and other relatively vanilla single-unit properties. But if there are homeostatic mechanisms at play at the level of whole circuits, sculpting which cell types wire to which, that takes things in an entirely new direction.
 
Motor Control
Makino et al -- Transformation of Cortex-wide Emergent Properties during Motor Learning -- Neuron [Pubmed]
     The authors looked at calcium dynamics in neurons across supragranular layers of cortex as mice learned a simple lever-press task. With learning, the sequence of activation among different motor areas became more compressed in time, and response variability decreased. Interestingly, area M2, an infrequently studied motor region in rodents, became a key hub in the motor control network once animals learned the task: the movement-predicting signal in M2 started earlier with learning and better predicted the activity of other motor areas, and inactivating M2 significantly impaired task performance.
     The reason I like this paper is that it isn't just another "Look at all the calcium imaging we did!" paper. It has substantive new results that seem to push our picture of motor control in cool new directions. It is also an interesting complement to the recent result from Kawai et al (from Ölveczky's lab) showing that performing a simple overlearned motor sequence does not require M1/M2 (Motor cortex is required for learning but not for executing a motor skill). While Makino et al do not discuss the Kawai paper, it would be interesting to hear their thoughts on it.

Update added 6/7/17: I got a helpful comment from an author of the Makino et al. paper, who pointed out that in Kawai et al they didn't just remove M1, but M1 and M2. I missed this in my first reading of Kawai et al and have updated my post accordingly. Further, he suggested that the task in the current paper requires finer-grained control of the digits, while Kawai's task used more coarse-grained forelimb movements that are likely controlled subcortically. It is fairly well known that dexterous digit control in rodents requires the cortex, as Kawai et al acknowledge. Finally, these are issues we will be hearing more about from Komiyama's group, so stay tuned!

Friday, May 05, 2017

Systems Neuroscience Highlights: April 2017

Lots of great systems neuroscience this month. It was hard to narrow it down, but three papers really stood out.

Cognitive Neuroscience


Eichenbaum -- The role of the hippocampus in navigation is memory. J. Neurophys. [Pubmed] Most of us have wondered about the relationship between the two main views of the hippocampus: on one hand, the hippocampus is key for long-term memory formation; on the other, we have the view from the place field, in which the hippocampus contains a map used for navigation. In this wide-ranging review article, Eichenbaum forcefully argues that the hippocampus is not specialized for spatial navigation per se, but for constructing memories of complex information organized in space and time (i.e., episodic memories). Context-dependent spatial features, he argues, are just one of many relational features to which the hippocampus is sensitive as it serves its role in memory.
    This review article is notable partly because it is a rich source of references that outsiders probably don't know about. For instance, if you are really familiar with an environment, you can still navigate it even with hippocampal lesions (https://www.ncbi.nlm.nih.gov/pubmed/15723062). Also, an imaging study in humans suggests there may be a grid-like parcellation of abstract conceptual spaces, not just geometric space (https://www.ncbi.nlm.nih.gov/pubmed/27313047). Note that I can't endorse all of these studies, as I have yet to read and evaluate them, but it is useful to have all this intriguing material in one place as food for thought.

Motor Control

Giovannucci et al -- Cerebellar granule cells acquire a widespread predictive feedback signal during motor learning. Nat. Neurosci. [Pubmed] Using calcium imaging, they recorded from populations of granule cells, the input cells of the cerebellum, during eyeblink conditioning (recall that in eyeblink conditioning you pair a cue, such as a light, with an air puff to the eye, and eventually the cue alone evokes a blink). As animals acquired the behavioral response to the cue, signals emerged in the granule cell population that predicted oncoming eyeblinks.
    What is really amazing in this study is that they recorded from populations of some of the smallest cells in the brain for multiple days in a row, in awake animals. I'm not surprised that the cerebellum acquired eyeblink-related control signals during training; what impresses me most is the raw experimental expertise involved, and the potential this model system has for helping us dissect forward-model theories of motor control.

Shadmehr -- Distinct neural circuits for control of movement vs. holding still. J. Neurophys. [Pubmed] A fun review article by Shadmehr that focuses on the eye movement system. Different mechanisms are at play for movement versus holding still, even though, from the perspective of the muscles in your eyes, holding still is "just as much an active process as movement" (Shadmehr, quoting Robinson, 1970). Could this be a general principle? After reviewing the evidence from the eye movement system in some detail, Shadmehr discusses whether the same principles might hold for neuronal control of head movement, arm movement, and navigation.
    This intriguing possibility could help shed light on apparent discrepancies between the pre-movement preparatory activity observed in M1 (while animals hold still) and the activity observed during movement. This topic has received a lot of attention lately from Mark Churchland's lab (see [1], [2]).

Tuesday, April 04, 2017

Systems Neuroscience Highlights: March 2017

First post of monthly highlights from the systems neuroscience literature. My goal is to point out cool stuff that people might not ordinarily see, so I will try not to just include Nature and Science papers. I will typically highlight three to five papers a month, but this first post includes some February spillover, so it is a little longer. I will post by the fifth of each month.

Sensory Coding

Shi et al -- Retinal origin of direction selectivity in the superior colliculus. Nature Neuroscience [Pubmed] The authors used optogenetic stimulation to show that the motion-selectivity of superficial superior colliculus neurons is inherited entirely from the direction selectivity of retinal ganglion cells that project there.

Cognitive Neuroscience

Yackle et al -- Breathing Control Center Neurons That Promote Arousal in Mice. Science. [Pubmed] The central pattern generator (CPG) that controls breathing contains a small subpopulation of neurons that projects to the locus coeruleus, which releases noradrenaline (i.e., sympathetic activation for fight or flight). Removing this subset of neurons apparently did not affect the ability of mice to breathe, but did make them especially chill. Take-home lesson: if you want to calm down, stop breathing.

Motor Control

Shadmehr -- Learning to Predict and Control the Physics of Our Movements. J Neurosci. [Pubmed] Interestingly, this month there were quite a few papers related to the forward-model framework in motor control (for a review, see Shadmehr and Krakauer's Error correction, sensory prediction, and adaptation in motor control (2010)). This paper from Shadmehr is an excellent summary of his many seminal contributions to this framework over the years. It focuses on his research on our ability to learn to manipulate objects with our hands, which involves quickly learning their unique dynamical signatures.


Maeda et al -- Foot placement relies on state estimation during visually guided walking. J. Neurophys. [Pubmed] The second notable paper from the forward-model framework. How do we walk when we wear prismatic lenses that render visual feedback unreliable? This paper suggests that subjects learn to weight internally generated predictions more heavily than the resulting noisy and unreliable visual feedback. Similar results have been seen before in reaching tasks (e.g., Körding and Wolpert, 2004). However, this is a cool use of distorting lenses to demonstrate such effects during walking, which is typically thought to rely on mindless CPGs.
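To make the weighting idea concrete, here is a toy sketch of reliability-weighted cue combination in the spirit of Körding and Wolpert (2004). This is my own illustration, not the model from Maeda et al: each Gaussian cue is weighted by its inverse variance, so as visual feedback becomes less reliable, the internal prediction dominates the estimate.

def combine(pred, pred_var, vis, vis_var):
    """Optimally combine two Gaussian estimates by inverse-variance weighting."""
    w = (1.0 / pred_var) / (1.0 / pred_var + 1.0 / vis_var)
    estimate = w * pred + (1.0 - w) * vis
    variance = 1.0 / (1.0 / pred_var + 1.0 / vis_var)
    return estimate, variance

# Reliable vision: the estimate is pulled toward the visual feedback.
print(combine(pred=0.0, pred_var=1.0, vis=2.0, vis_var=0.1))   # ~(1.82, 0.09)
# Prism-distorted, noisy vision: the internal prediction dominates.
print(combine(pred=0.0, pred_var=1.0, vis=2.0, vis_var=10.0))  # ~(0.18, 0.91)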


Confais et al -- Nerve-Specific Input Modulation to Spinal Neurons during a Motor Task in the Monkey. J. Neurosci. [Pubmed] When we move, we activate our own sensory transducers. What keeps our sensory systems from getting overwhelmed by such self-generated signals? Following up on Seki et al (2004), this paper shows that there are sensory-nerve-specific patterns of modulation (both excitation and inhibition) of somatosensory responses in the spinal cord during voluntary wrist movements. The sign of the modulation sometimes depended on the direction of wrist movement. This is a beautiful model system for studying the effects of corollary discharge.

Chaisanguanthum et al -- Neural Representation and Causal Models in Motor Cortex. J. Neurosci. [Pubmed] An excellent paper straddling the classical motor control theories of Georgopoulos and friends and some modern ideas from the horde that has been attacking those theories recently. They construct a simple mathematical model of the sensorimotor transformation required to perform a center-out reaching task, and show that movement variability is minimized when the output neurons that directly drive behavior are tuned to velocity. Indeed, they find just such a population in their data (using a somewhat rough-hewn spike-width criterion to individuate subclasses of cortical neurons). While the model in this paper is simple, it is a welcome counterweight to the recent overreactions against Georgopoulos. Hopefully it is the first of many studies that will absorb previous work in a principled way.
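As an aside, for readers who want the classical picture in concrete form, here is a toy population-vector decoder in the Georgopoulos style (my own sketch, not the model from this paper): each neuron's firing rate is a cosine function of movement direction around its preferred direction, and summing the preferred-direction vectors weighted by the baseline-subtracted rates recovers the movement direction.

import math

# Toy population-vector decoding: cosine-tuned neurons with evenly spaced
# preferred directions (PDs). This illustrates the classical idea only.
N = 100
BASELINE, GAIN = 10.0, 8.0
pds = [2.0 * math.pi * i / N for i in range(N)]

def rate(pd, theta):
    """Cosine tuning: the rate peaks when movement direction matches the PD."""
    return BASELINE + GAIN * math.cos(theta - pd)

theta = math.radians(135)  # true movement direction
x = sum((rate(pd, theta) - BASELINE) * math.cos(pd) for pd in pds)
y = sum((rate(pd, theta) - BASELINE) * math.sin(pd) for pd in pds)
print(math.degrees(math.atan2(y, x)))  # ~135: the decoded population vector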

Why am I being so pro-Georgopoulos? I'm not: I'm just surprised at how dismissive people have recently been of Georgopoulos, to the point where it seems they are attacking a straw man. Students of motor control were never so locked into the velocity-tuning framework that they thought it would apply to all neurons (for an excellent review, see Kalaska, 2009). Further, is anyone that surprised by nonstationarities in the system? That is, was anyone really surprised that neurons don't show the same tuning properties seconds before an animal starts moving as they do during movement, when recording in brain regions whose primary function is to directly control movement? The sensory systems literature is absorbing nonstationarities and dynamics without all this fanfare. What's up, motor control?

Wednesday, November 18, 2015

Matlab notes 2015

Notes to myself on little tricks and tips I find useful in Matlab, 2015 version. The last time I did this was in 2013.

Exporting surf plots for Illustrator in Matlab
Exporting surf plots is a pain, one of those perennial problems in Matlab that never seems to get fixed. There are a couple of quick fixes. First, this thread is helpful: use the painters renderer, which forces the plot to export as vector graphics. So something like:
 print -depsc2 -painters test.eps

Or, if you want a nice self-contained solution, try the export_fig package, and then you can just do something like:
print2eps('FullWidthSurfTest2')
I prefer the export_fig package, because it preserves the tick marks and other formatting that I spent so much time setting up.

Tuesday, February 10, 2015

PySide Tree Model V: Building trees with QTreeWidget and QStandardItemModel

Last in a series on treebuilding in PySide: see Table of Contents.

As mentioned in post IIC, if our ultimate goal were to display a tree as simple as the one in the simpletreemodel, we would probably just use QTreeWidget or QStandardItemModel. In both cases, it is almost embarrassing how much easier it is to create the tree, because we don't need to roll our own model or data item classes.

In what follows, we will see how to use QTreeWidget and QStandardItemModel to create and view a read-only tree with multiple columns of data in each row. To keep things simple, we won't load data from a file, and the code creates only a small tree. It would be a useful exercise to expand these examples to exactly mimic the GUI created in simpletreemodel.

QTreeWidget
While it is often pooh-poohed as slow and inflexible, QTreeWidget is extremely convenient and easy to use for simple projects. Simply instantiate a QTreeWidget, populate the tree with QTreeWidgetItem instances, and then call show() on the widget:

from PySide import QtGui
import sys

app = QtGui.QApplication(sys.argv)

treeWidget = QtGui.QTreeWidget()
treeWidget.setColumnCount(2)
treeWidget.setHeaderLabels(['Title', 'Summary'])

#First top level item and its kids
item0 = QtGui.QTreeWidgetItem(treeWidget, ['Title 0', 'Summary 0'])
item00 = QtGui.QTreeWidgetItem(item0, ['Title 00', 'Summary 00'] )
item01 = QtGui.QTreeWidgetItem(item0, ['Title 01', 'Summary 01'])

#Second top level item and its kids
item1 = QtGui.QTreeWidgetItem(treeWidget, ['Title 1', 'Summary 1'])
item10 = QtGui.QTreeWidgetItem(item1, ['Title 10', 'Summary 10'])
item11 = QtGui.QTreeWidgetItem(item1, ['Title 11', 'Summary 11'])
item12 = QtGui.QTreeWidgetItem(item1, ['Title 12', 'Summary 12'])

#Children of item11
item110 = QtGui.QTreeWidgetItem(item11, ['Title 110', 'Summary 110'])
item111 = QtGui.QTreeWidgetItem(item11, ['Title 111', 'Summary 111'])

treeWidget.show() 
sys.exit(app.exec_())

QStandardItemModel
This is only slightly more complicated than QTreeWidget. We populate the tree with lists of QStandardItems, one list per row. To add a child to a row, we call appendRow() on the first element (i.e., the first column) of the parent row:

from PySide import QtGui
import sys

app = QtGui.QApplication(sys.argv)
model = QtGui.QStandardItemModel()
model.setHorizontalHeaderLabels(['Title', 'Summary'])
rootItem = model.invisibleRootItem()

#First top-level row and children 
item0 = [QtGui.QStandardItem('Title0'), QtGui.QStandardItem('Summary0')]
item00 = [QtGui.QStandardItem('Title00'), QtGui.QStandardItem('Summary00')]
item01 = [QtGui.QStandardItem('Title01'), QtGui.QStandardItem('Summary01')]
rootItem.appendRow(item0)
item0[0].appendRow(item00)
item0[0].appendRow(item01)

#Second top-level item and its children
item1 = [QtGui.QStandardItem('Title1'), QtGui.QStandardItem('Summary1')]
item10 = [QtGui.QStandardItem('Title10'), QtGui.QStandardItem('Summary10')]
item11 = [QtGui.QStandardItem('Title11'), QtGui.QStandardItem('Summary11')]
item12 = [QtGui.QStandardItem('Title12'), QtGui.QStandardItem('Summary12')]
rootItem.appendRow(item1)
item1[0].appendRow(item10)
item1[0].appendRow(item11)
item1[0].appendRow(item12)

#Children of item11 (third level items)
item110 = [QtGui.QStandardItem('Title110'), QtGui.QStandardItem('Summary110')]
item111 = [QtGui.QStandardItem('Title111'), QtGui.QStandardItem('Summary111')]
item11[0].appendRow(item110)
item11[0].appendRow(item111)

treeView = QtGui.QTreeView()
treeView.setModel(model)
treeView.show()
sys.exit(app.exec_())

While a tad more complicated than using QTreeWidget, this is still drastically simpler than subclassing QAbstractItemModel.

Conclusion
As is usually the case, there are many ways to get to the same destination. The route you take will depend on your goals, the complexity of your data, how much time you have to write your code, and how fast you want the program to be. As mentioned before, it would be overkill to subclass QAbstractItemModel for a data store as simple as the one in simpletreemodel. This post shows just how easy it would be to create the exact same tree with an order of magnitude less code.

To those who have read any of these posts: thanks for reading! I'll be putting together a PDF of all the posts so you don't have to fight through a maze of posts for the information.

Monday, February 09, 2015

PySide Tree Tutorial IV: What next?

Part of a series on treebuilding in PySide: see Table of Contents.

We have finished going over simpletreemodel. This and the final post are effectively appendices to our discussion of that example.

You have probably noticed that model/view programming is a complex subject, one that probably deserves book-length treatment. Tree views are the most complex of the built-in views, and hopefully we have made some headway on how to build them.

We have left out how we would handle an editable tree model (this is covered in the editabletreemodel example that comes with PySide). Nor have we addressed how to exert more precise control over how items are displayed, such as how to show HTML-formatted strings: this is the purview of custom delegates, a topic covered in the spinboxdelegate and stardelegate examples (for a taste, see the sketch below). We have also left open what to do if we want graphical rather than textual rendering of our data: this would involve constructing a custom view (one example is to be found in chart).
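To give a taste of the delegate approach, here is a minimal sketch of a delegate that renders each item's text as rich text. This is my own illustration rather than code from the Qt examples, and a production version would also need to handle selection highlighting and other state:

from PySide import QtCore, QtGui

class HtmlDelegate(QtGui.QStyledItemDelegate):
    """Sketch: render each item's text as HTML instead of plain text."""
    def paint(self, painter, option, index):
        doc = QtGui.QTextDocument()
        doc.setHtml(index.data())  # e.g., '<b>Title 0</b>'
        painter.save()
        painter.translate(option.rect.topLeft())  # draw within the item's cell
        doc.drawContents(painter, QtCore.QRectF(0, 0, option.rect.width(),
                                                option.rect.height()))
        painter.restore()

    def sizeHint(self, option, index):
        doc = QtGui.QTextDocument()
        doc.setHtml(index.data())
        return doc.size().toSize()

To use it, attach an instance to a view, e.g. treeView.setItemDelegate(HtmlDelegate(treeView)).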

For those who want a more principled overview of model/view programming in Python, Summerfield (2008) has three chapters on the topic. The adventurous can also try Summerfield (2010) for an extremely thorough treatment, including an entire chapter on trees. While the latter is not written for Python, it has tons of useful information about model/view programming if you can brave the translation from C++.

Summerfield, M (2010) Advanced Qt Programming. Prentice Hall.
Summerfield, M (2008) Rapid Gui Programming with Python and Qt. Prentice Hall.

Friday, February 06, 2015

PySide Tree Tutorial IIID: Creating the tree with setupModelData()

Part of a series on treebuilding in PySide: see Table of Contents
 
Recall that TreeModel uses setupModelData() to set up the initial tree structure. We provide a very brief description of its behavior here and refer the reader to the code itself for the details (the code is in post IIIA). We begin with a text file (default.txt) that contains all the data for our tree:
Getting Started            How to familiarize yourself with Qt Designer
Launching Designer         Running the Qt Designer application
The User Interface         How to interact with Qt Designer
                             .
                             .
                             .
Connection Editing Mode    Connecting widgets together
Connecting Objects         Making connections in Qt Designer
Editing Connections        Changing existing connections
The entire text file is read in main and sent to setupModelData() within TreeModel. Two tab-delimited strings (the title and summary) are extracted from each line and form the basis for a new TreeItem. The location of each node in the hierarchy is determined by the pattern of indentation in the file. We construct the tree exactly as discussed in Part II, using the following rules:
  • For each line, create a TreeItem in which the two tab-delimited strings on that line are assigned to TreeItem.itemData (Figure 4, post IIB).
  • If line N+1 is indented relative to line N, then make the (N+1)th item a child of item N.
  • If line N+1 is unindented relative to line N, then make the (N+1)th item a sibling of item N's parent.
The implementation details in setupModelData() look a bit complicated, but most of the code is there for recordkeeping (e.g., keeping track of the current level of indentation). I found it helpful to work through how it handles the very first line of the input file, and then to keep iterating through the code by hand until everything is clear. A stripped-down sketch of the core logic follows.
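To make the three rules concrete, here is a standalone sketch of the parsing logic. This is my own simplification, not the code from simpletreemodel: TreeItem is reduced to a bare Node class, and I assume the title and summary on each line are separated by a tab while nesting depth is given by leading spaces.

class Node(object):
    """Bare stand-in for the TreeItem class from post IIIA."""
    def __init__(self, data, parent=None):
        self.itemData = data
        self.childItems = []
        if parent is not None:
            parent.childItems.append(self)

def setup_model_data(text, root):
    """Build a tree from tab-delimited, space-indented lines."""
    parents = [root]  # stack of potential parents
    indents = [0]     # indentation level of each parent's children
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip(' '))
        title, summary = [s.strip() for s in line.lstrip().split('\t', 1)]
        if indent > indents[-1]:
            # Rule 2: deeper indent, so the last item created becomes the parent
            parents.append(parents[-1].childItems[-1])
            indents.append(indent)
        else:
            # Rule 3: unindented, so pop back up to the matching level
            while indent < indents[-1] and len(parents) > 1:
                parents.pop()
                indents.pop()
        Node([title, summary], parents[-1])  # Rule 1: one item per line
    return root

# Example: two top-level items, the first with two children.
sample = ("Title 0\tSummary 0\n"
          "    Title 00\tSummary 00\n"
          "    Title 01\tSummary 01\n"
          "Title 1\tSummary 1")
tree = setup_model_data(sample, Node(['Title', 'Summary']))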