Sunday, February 17, 2019

Installing the NEST simulator for use with Anaconda

Installing the NEST simulator for use with Anaconda was pretty painless. However, it took me two failed tries to realize just how painless, so I thought I'd post how I got it working.

Note: I am on Ubuntu 16, Python 3.7, Conda 4.5.11. If you are on Windows, don't even bother. Just install Linux.

Edit:
Note: since I posted this, someone has created an installer that should make installation really easy. I have not tried it, so I cannot vouch for it, but before you follow my recipe, I recommend you see the first comment below and give it a shot.

1. Create/activate your nest environment
conda create --name nest
conda activate nest
2. Install system packages you will need
Note this is for the 'standard' configuration as described at http://www.nest-simulator.org/installation/.
  sudo apt-get install -y build-essential cmake \
  libltdl7-dev libreadline6-dev libncurses5-dev \
  libgsl0-dev openmpi-bin libopenmpi-dev

3. Install python packages
conda install numpy scipy matplotlib ipython nose cython scikit-learn

4. Install NEST proper (see http://www.nest-simulator.org/installation/)
Obviously you can use whatever directory structure you want, but I put my
build in /opt/nest, so change your values below accordingly if you want something else.

a. Create folder /opt/nest and give yourself ownership if needed (I had to use sudo chown)

b. Download NEST (I put the tarball in /opt/nest)

c. Unpack the tarball (in /opt/nest):
    tar -xzvf nest-simulator-2.16.0.tar.gz

d. Create a build directory (again, within /opt/nest/):
    mkdir nest-simulator-2.16.0-build

e. cd to the build directory:
    cd nest-simulator-2.16.0-build

f. Run cmake to build makefiles
Note the -Dwith-python=3 option, which forces it to use Python 3.
    cmake -DCMAKE_INSTALL_PREFIX:PATH=/opt/nest/ /opt/nest/nest-simulator-2.16.0 -Dwith-python=3
I got some warnings ('Cannot generate a safe linker search path for target sli_readline'), but things seemed to work out OK.

g. Set environment variables
Add the following line to .bashrc:
source /opt/nest/bin/nest_vars.sh

5. Run the makefiles
make  #this will take a few minutes (you may get some warnings)
make install  #this goes quickly
make installcheck  #takes a few minutes, gives a summary at the end


6. Test something out
From your favorite IDE or command line, run a simple script (e.g., one_neuron.py in pynest/examples).
Things should just work. I like to use Spyder with conda, so I added the following:
    conda install spyder
    spyder  #so spyder runs from the nest environment

The one_neuron.py example worked just fine. Note, though, that if you are using Spyder, you will not want to run your code using F5 unless you are a fan of restarting your Python kernel constantly. To avoid problems, I recommend entering run filename.py in Spyder's (IPython) command line when ready to run a script.
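If you want an even more minimal sanity check, a script in the spirit of one_neuron.py looks roughly like this. This is just a sketch against the NEST 2.16 PyNEST API, and must be run inside the activated nest environment (with nest_vars.sh sourced); the injected current is simply a value large enough to make the neuron spike:

```python
import nest
import nest.voltage_trace
import matplotlib.pyplot as plt

nest.ResetKernel()

# One leaky integrate-and-fire neuron driven by a constant input current
neuron = nest.Create("iaf_psc_alpha")
nest.SetStatus(neuron, {"I_e": 376.0})  # pA; suprathreshold drive

# Record the membrane potential with a voltmeter
voltmeter = nest.Create("voltmeter")
nest.Connect(voltmeter, neuron)

nest.Simulate(1000.0)  # ms

# Plot the membrane-potential trace
nest.voltage_trace.from_device(voltmeter)
plt.show()
```

If a spiking membrane-potential trace pops up, your installation is good to go.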

7. Have fun!
Enjoy NEST, it is a really amazing neural simulation framework. I recommend starting here to learn how to program in pynest, the Python interface for the NEST simulator: http://www.nest-simulator.org/introduction-to-pynest/


Sunday, August 06, 2017

Systems Neuroscience Highlights: June and July 2017

There were lots of great articles the last couple of months, in particular a series of articles on the fly's representation of its position in space that seem to be coming together nicely.

Sensory Coding

Singla et al. A cerebellum-like circuit in the auditory system cancels responses to self-generated sounds -- Nature Neuroscience [Pubmed] It is well known that the brain is able to factor out which sensory inputs are generated by an animal's own behavior and which are generated by events in the world. How brains do this is still an extremely active research area. In this paper, the authors report a class of cells in the dorsal cochlear nucleus (DCN) of the brain stem that respond robustly to externally generated auditory cues, but not to self-generated auditory cues in mice (in particular, the sounds generated when they lick). Amazingly, when the researchers artificially pipe in sounds in response to licking behavior, these DCN neurons eventually suppress responses to such sounds, as if the brain were starting to treat them as being generated by the animal.
      The authors seem to have found a beautiful model for sensory cancellation effects, one that is very much like that seen in the mormyrid electric fish (as discussed by Abbott's group recently here). It will be interesting to see how similar the principles are in these different systems as folks dig under the hood.

Green et al. A neural circuit architecture for angular integration in Drosophila -- Nature [Pubmed].

This is actually one of three papers that came out recently, building on a landmark paper from 2015, showing that the fruit fly contains a circular structure (the ellipsoid body, or EB) that acts as an internal compass, containing a set of neurons that light up at different positions on the ellipse depending on the angular position of the fly in space. In this paper, they test a potential circuit mechanism for how such an internal compass might be implemented. Namely, there is a second structure, the protocerebral bridge (PB), that is reciprocally connected to the compass, but slightly shifted around the circular axis, and which is preferentially activated by turning behavior. So, for instance, when the fly turns right, the PB neurons (labeled P-EN in the video) will project to the compass neurons (labeled E-PG in the video), but to a region just a little bit ahead of the presently active spot on the EB. Excitatory interactions will drag the spot around to the appropriate location. This paper is nice because they have tests of both sufficiency (activating PB glomeruli causes a shift in the compass location) and necessity (inactivating PB disrupts the compass).
    There is a lot more to the paper: the anatomy actually gets fairly complicated, and I have purposely suppressed tons of details and nomenclature (e.g., if you start to get lost in the anatomy, see this paper, or this one).
     So far, many of the fly-compass researchers seem to be doing relatively coarse-grained calcium imaging (e.g., this glomerulus lights up, and that one doesn't). The work is excellent, as they are pulling information from particular cell types to extract specific hypotheses about circuit mechanisms. Ultimately, though, you still end up with black-boxology (though with fine-grained boxes). The real power will come when they start triangulating their ridiculously powerful genetic toolkit with finer-grained electrophysiology and anatomy to really crack the circuit mechanisms at single-cell resolution. My guess is this is their aim.
    One question I have after reading this and other papers is: what is the point of this compass? Perturbing it using optogenetics seems not to affect the animal very much (see the Turner-Evans paper below). In this paper they talk about "occasional" changes in behavior when they disrupt things by stimulating PB, but don't explore or quantify this effect. I am not sure what happens to spatial navigation if you ablate the EB. Is it like the hippocampus, in that it is more involved in memory than online spatial navigation, even though there is a beautiful spatial representation contained there?

    Note there were two other papers on the fly's internal compass in the past cycle, that I'll just mention briefly:
  • Turner-Evans et al. Angular velocity integration in a fly heading circuit -- Elife [Pubmed]. Tests the same phase-dragging model as Green et al., with similar results and some nice patch-clamp data from the PB. The video is from this paper.
  • Kim et al. Ring attractor dynamics in the Drosophila central brain -- Science [Pubmed]. Looks at the compass in animals in flight, instead of just on the floating ball.


Motor Control
Park et al. Moving slowly is hard for humans: limitations of dynamic primitives -- J. Neurophysiology [Pubmed]. You will often hear of the speed-accuracy tradeoff (that is, the faster you try to do something, the more likely you are to make a mistake), but does this mean that when you move really slowly you get really accurate? People don't typically study the lower extremes of the speed-accuracy tradeoff; in this study they did just that. They had human subjects move their hands back and forth at different speeds, sometimes so extremely slowly that the subjects could no longer maintain a smooth oscillatory behavior, but started to halt and then stop and start again, as if they were shifting from a continuous to a discrete behavioral strategy.
     While this paper doesn't have any neuronal data, it is significant and fun because of its attempt to infer underlying mechanisms of motor control strategies from a clever and creative extension of simple behavioral techniques. A colleague of mine pointed out that it would be interesting to see how much improvement we would see with training on this task. My reaction is that, even if subjects could ultimately move smoothly with 500 hours of training (dear God please don't do that to your poor undergrads), it would still be significant if without such training, we naturally switched from a continuous to an intermittent control strategy in low-velocity regimes.

Thursday, July 27, 2017

Update on systems neuro lit summary for June

I had a grant and paper submission in the last month, so my summary had to take a back seat: I will be combining the June and July literature summaries. Lots of good stuff!

Monday, June 05, 2017

Systems Neuroscience Highlights: May 2017

It was a great month for systems neuroscience, and the following articles stood out as pushing things forward in unexpected (to me) and interesting ways.

Sensory Coding
Tien et al -- Homeostatic Plasticity Shapes Cell-Type-Specific Wiring in the Retina --
Neuron  [Pubmed] This is an amazing paper. 
    They generated a line of mice that was lacking a certain type of retinal bipolar cell (the B6 cell). The B6 cell is typically the main input to the ONα retinal ganglion cell. Instead of being completely wrecked in this line of mice, the ONα RGCs actually maintained the same response profiles seen in wild-type animals. This was because other types of bipolar cells compensated for the loss of the B6 cell in the circuit. Hence, it seems that compensatory plasticity mechanisms at play in the retina served to rewire the inputs to this class of RGC to maintain the same type of output to the brain.
     I always thought homeostatic plasticity research was very cool, but it seemed to be about neurons maintaining firing rates by changing concentrations/distributions of ion channels and other relatively vanilla properties confined to single units. But if there are homeostatic mechanisms at play at the systems level, with homeostatic sculpting at the circuit level, that takes things in an entirely new direction.
 
Motor Control
Makino et al -- Transformation of Cortex-wide Emergent Properties during Motor Learning -- Neuron [Pubmed]
     The authors looked at calcium dynamics in neurons across supragranular layers of cortex as mice learned a motor task (a simple lever-pressing task). The sequence of activation among different motor areas became more compressed in time as they learned the task, and response variability decreased as well. Interestingly, area M2, an infrequently studied motor region in rodents, became a key hub in the motor control network once animals learned the task: the movement-predicting signal in M2 started earlier as they learned, better predicted the activity of other motor areas, and inactivating M2 significantly impaired performance in the task.
     The reason I like this paper is that it isn't just another "Look at all the calcium imaging we did!" paper. It has substantive new results that seem to push our picture of motor control in cool new directions. Also, it is an interesting complement to the recent result from Kawai et al (from Ölveczky's lab) showing that performing a simple overlearned motor sequence does not require M1/M2 (Motor cortex is required for learning but not for executing a motor skill). While Makino et al do not discuss the Kawai paper, it would be interesting to hear their thoughts on it.

Update added 6/7/17: I got a helpful comment from an author of the Makino et al. paper, who pointed out that in Kawai et al., they didn't just remove M1, but M1 and M2. I missed this in my first reading of Kawai et al., and updated the present post accordingly. Further, he suggested that the task in the current paper requires finer-grained control of the fingers, while Kawai's task used more coarse-grained forelimb movements that are likely controlled subcortically. It is fairly well known that dexterous digit control in rodents requires the cortex, as acknowledged by Kawai et al. Finally, these are issues we will be hearing more about from Komiyama's group, so stay tuned!

Friday, May 05, 2017

Systems Neuroscience Highlights: April 2017

Lots of great systems neuroscience this month. It was hard to narrow it down, but three papers really stood out.

Cognitive Neuroscience


Eichenbaum -- The role of the hippocampus in navigation is memory. J. Neurophys. [Pubmed] Most of us have wondered about the relationship between the two main views of the hippocampus: on one hand, the hippocampus is key for long-term memory formation; on the other hand, we have the view from the place-field literature, where the hippocampus contains a map that is used for navigation. In this wide-ranging review article, Eichenbaum forcefully argues that the hippocampus is not specialized for spatial navigation per se, but for the construction of memories of highly organized, complex information in space and time (i.e., episodic memories). He argues that context-dependent spatial features are just one of many complex relational features to which the hippocampus is sensitive as it serves its role in memory function.
    This review article is notable partly because it is a rich source of references that outsiders probably don't know about. For instance, if you are really familiar with an environment, you can still navigate it even with hippocampal lesions (https://www.ncbi.nlm.nih.gov/pubmed/15723062). Also, an imaging study in humans suggests there may be a grid-like parcellation of abstract conceptual spaces, not just geometric space (https://www.ncbi.nlm.nih.gov/pubmed/27313047). Note I can't endorse all these studies, as I have yet to read or evaluate them; but it is useful to have all this intriguing stuff in one place as food for thought.

Motor Control

Giovannucci et al -- Cerebellar granule cells acquire a widespread predictive feedback signal during motor learning. Nat. Neurosci. [Pubmed] Using calcium imaging, they recorded from populations of granule cells, the input cells of the cerebellum, during eyeblink conditioning (recall that in eyeblink conditioning you associate a cue, such as a light, with an air puff to the eye, and eventually that cue will evoke a blink). As animals acquired the behavioral response to the new cue, this was reflected in the emergence of signals within the granule cells that predicted oncoming eyeblinks.
    What is really amazing in this study is that they recorded from populations of some of the smallest cells in the brain for multiple days in a row, in awake animals. I'm not surprised that the cerebellum acquired eyeblink-related control signals during training; what is most impressive to me is the raw experimental expertise involved here, and the potential this model system has for helping us dissect forward model theories of motor control.

Shadmehr -- Distinct neural circuits for control of movement vs. holding still. J. Neurophys. [Pubmed] A fun review article by Shadmehr that focuses on the eye-movement system. There are different mechanisms at play for movement versus holding still, even though, from the perspective of the muscles in your eyes, holding still is "just as much an active process as movement" (Shadmehr, quoting Robinson, 1970). Could this be a general principle? After reviewing the evidence from the eye-movement system in some detail, Shadmehr discusses whether the same principles might hold for neuronal control of head movement, arm movement, and navigation.
    This intriguing possibility could help shed light on apparent discrepancies between pre-movement preparatory activity observed in M1 (when animals are still) and activity observed during movement. This topic has received a lot of attention lately from Mark Churchland's lab (see [1], [2]).

Tuesday, April 04, 2017

Systems Neuroscience Highlights: March 2017

First post of monthly highlights from the systems neuroscience literature. My goal is to point out cool stuff that people might not ordinarily see, so I will try not to just include Nature and Science papers. I will typically highlight three to five papers a month, but this one includes some February spillover, so it is a little longer. I will post by the fifth of each month.

Sensory Coding

Shi et al -- Retinal origin of direction selectivity in the superior colliculus. Nature Neuroscience [Pubmed] The authors used optogenetic stimulation to show that the motion-selectivity of superficial superior colliculus neurons is inherited entirely from the direction selectivity of retinal ganglion cells that project there.

Cognitive Neuroscience

Yackle et al -- Breathing Control Center Neurons That Promote Arousal in Mice. Science. [Pubmed] The CPG that controls breathing contains a small subpopulation of neurons that projects to the locus coeruleus, which releases noradrenaline (i.e., sympathetic activation for fight/flight). Removing this subset of neurons apparently did not influence the ability of mice to breathe, but did make them especially chill. Take-home lesson: if you want to calm down, stop breathing.

Motor Control

Shadmehr -- Learning to Predict and Control the Physics of Our Movements. J Neurosci. [Pubmed] Interestingly, this month there were quite a few papers related to the forward model framework in motor control (for a review, see Shadmehr and Krakauer's Error correction, sensory prediction, and adaptation in motor control (2010)). This paper from Shadmehr is an excellent summary of his many seminal contributions to this framework over the years. It focuses on his research on our ability to learn to manipulate objects with our hands, which involves quickly learning their unique dynamical signatures.


Maeda et al -- Foot placement relies on state estimation during visually guided walking. J. Neurophys. [Pubmed] The second notable paper from the forward-model theoretic framework. How do we walk when we wear prismatic lenses that render visual feedback unreliable? This paper suggests that subjects learn to weight internally generated predictions more than the resulting noisy and unreliable visual feedback. Similar results have been seen before in reaching tasks (e.g., Körding and Wolpert, 2004). However, this is a cool use of distorting lenses to demonstrate such effects during walking, which is typically thought to rely on mindless CPGs.


Confais et al -- Nerve-Specific Input Modulation to Spinal Neurons during a Motor Task in the Monkey. J. Neurosci. [Pubmed]  When we move, we activate our own sensory transducers. What keeps our sensory systems from getting overwhelmed by such self-generated sensory signals?  Following up on Seki et al (2004), this paper shows that there are sensory-nerve specific patterns of modulation (both excitation and inhibition) of somatosensory responses in the spinal cord during voluntary wrist movements. The sign of modulation sometimes depended on the particular direction of movement of the wrist. This is a beautiful model system for the study of the effects of corollary discharge.

Chaisanguanthum et al -- Neural Representation and Causal Models in Motor Cortex. J. Neurosci. [Pubmed] An excellent paper straddling classical motor control theories of Georgopoulos and friends, and some modern ideas from a horde that has been attacking such ideas recently. They construct a simple mathematical model of the sensorimotor transformation required to perform a center-out reaching task, and show that movement variability will be minimized when the output neurons that directly drive behavior are tuned to velocity. Indeed, they discover just such a population in their data (using a somewhat rough-hewn spike-width criterion to individuate subclasses of cortical neurons). While the model in this paper is simple, it is a welcome counterweight to the recent overreactions against Georgopoulos. Hopefully it is the first of many studies that will ultimately absorb previous work in a principled way.

Why am I being so pro-Georgopoulos? I'm not: I'm just surprised that people have recently been so dismissive of Georgopoulos, to the point where it seems they are just attacking a straw man. Students of motor control were never so locked into the velocity-tuning framework that they thought it would apply to all neurons (for an excellent review, see Kalaska, 2009). Further, is anyone that surprised at nonstationarities in the system? That is, was anyone really surprised that neurons don't show the same tuning properties seconds before an animal starts moving, when recording in brain regions whose primary function is to directly control movement? The sensory systems literature is absorbing nonstationarities and dynamics without all this fanfare. What's up, motor control?

Wednesday, November 18, 2015

Matlab notes 2015

Notes to myself on little tricks and tips I find useful in Matlab. 2015 version. Last time I did this it was 2013.

Exporting surf plots for Illustrator in Matlab
Exporting surf plots is a pain, one of those things that is perennially a problem in Matlab that they never seem to get around to fixing. There are a couple of quick fixes. First, this thread is helpful: use the painters renderer, which forces the plot to export vectorized. So something like:
 print -depsc2 -painters test.eps

Or, if you want a nice self-contained program, try the export_fig package, and then you can just do something like:
print2eps('FullWidthSurfTest2')
I prefer the export_fig package, because it preserves the tickmarks and such that I spent so much time making. 

Tuesday, February 10, 2015

PySide Tree Model V: Building trees with QTreeWidget and QStandardItemModel

Last in a series on treebuilding in PySide: see Table of Contents.

As mentioned in post IIC, if our ultimate goal were to display a tree as simple as the one in the simpletreemodel, we would probably just use QTreeWidget or QStandardItemModel. In both cases, it is almost embarrassing how much easier it is to create the tree. This is because we don't need to roll our own model or data item classes.

In what follows, we will see how to use QTreeWidget and QStandardItemModel to create and view a read-only tree with multiple columns of data in each row. To keep it simple, we won't load data from a file, and the code only creates a very simple little tree. It would be a useful exercise to expand these examples to exactly mimic the GUI created in simpletreemodel.

QTreeWidget
While it is often pooh-poohed as slow and inflexible, for simple projects QTreeWidget is extremely convenient and easy to use. Simply instantiate a QTreeWidget, populate the tree with QTreeWidgetItem instances, and then call show() on the widget:

from PySide import QtGui
import sys

app = QtGui.QApplication(sys.argv)

treeWidget = QtGui.QTreeWidget()
treeWidget.setColumnCount(2)
treeWidget.setHeaderLabels(['Title', 'Summary'])

#First top level item and its kids
item0 = QtGui.QTreeWidgetItem(treeWidget, ['Title 0', 'Summary 0'])
item00 = QtGui.QTreeWidgetItem(item0, ['Title 00', 'Summary 00'] )
item01 = QtGui.QTreeWidgetItem(item0, ['Title 01', 'Summary 01'])

#Second top level item and its kids
item1 = QtGui.QTreeWidgetItem(treeWidget, ['Title 1', 'Summary 1'])
item10 = QtGui.QTreeWidgetItem(item1, ['Title 10', 'Summary 10'])
item11 = QtGui.QTreeWidgetItem(item1, ['Title 11', 'Summary 11'])
item12 = QtGui.QTreeWidgetItem(item1, ['Title 12', 'Summary 12'])

#Children of item11
item110 = QtGui.QTreeWidgetItem(item11, ['Title 110', 'Summary 110'])
item111 = QtGui.QTreeWidgetItem(item11, ['Title 111', 'Summary 111'])

treeWidget.show() 
sys.exit(app.exec_())

QStandardItemModel
This is only slightly more complicated than QTreeWidget. We populate the tree with lists of QStandardItems. To add a child to a row, we apply appendRow() to the first element (i.e., the first column) of the parent row:

from PySide import QtGui
import sys

app = QtGui.QApplication(sys.argv)
model = QtGui.QStandardItemModel()
model.setHorizontalHeaderLabels(['Title', 'Summary'])
rootItem = model.invisibleRootItem()

#First top-level row and children 
item0 = [QtGui.QStandardItem('Title0'), QtGui.QStandardItem('Summary0')]
item00 = [QtGui.QStandardItem('Title00'), QtGui.QStandardItem('Summary00')]
item01 = [QtGui.QStandardItem('Title01'), QtGui.QStandardItem('Summary01')]
rootItem.appendRow(item0)
item0[0].appendRow(item00)
item0[0].appendRow(item01)

#Second top-level item and its children
item1 = [QtGui.QStandardItem('Title1'), QtGui.QStandardItem('Summary1')]
item10 = [QtGui.QStandardItem('Title10'), QtGui.QStandardItem('Summary10')]
item11 = [QtGui.QStandardItem('Title11'), QtGui.QStandardItem('Summary11')]
item12 = [QtGui.QStandardItem('Title12'), QtGui.QStandardItem('Summary12')]
rootItem.appendRow(item1)
item1[0].appendRow(item10)
item1[0].appendRow(item11)
item1[0].appendRow(item12)

#Children of item11 (third level items)
item110 = [QtGui.QStandardItem('Title110'), QtGui.QStandardItem('Summary110')]
item111 = [QtGui.QStandardItem('Title111'), QtGui.QStandardItem('Summary111')]
item11[0].appendRow(item110)
item11[0].appendRow(item111)

treeView = QtGui.QTreeView()
treeView.setModel(model)
treeView.show()
sys.exit(app.exec_())

While a tad more complicated than using QTreeWidget, this is still drastically simpler than subclassing QAbstractItemModel.

Conclusion
As is usually the case, there are many ways to get to the same destination. The route you take will depend on your goals, the complexity of your data, how much time you have to write your code, and how fast you want the program to be.  As mentioned before, it would be overkill to subclass QAbstractItemModel for a data store as simple as the one in simpletreemodel. This post shows just how easy it would be to create the exact same tree with an order of magnitude less code.

To those who have read any of these posts: thanks for reading! I'll be putting a PDF of all the posts together so you don't have to fight through a maze of posts for all the information.

Monday, February 09, 2015

PySide Tree Tutorial IV: What next?

Part of a series on treebuilding in PySide: see Table of Contents.

We have finished going over simpletreemodel. This and the final post are effectively appendices to our discussion of that example.

You have probably noticed that model/view programming is a complex subject, probably deserving book-length treatment. Tree views are the most complex built-in views there are, and hopefully we have made some headway on how to build them.

We have left out how we would handle an editable tree model (this is covered in the editabletreemodel example that comes with PySide). Nor have we addressed how to exert more precise control over how items are displayed, such as how to show HTML-formatted strings: this is the purview of custom delegates (a topic covered in the spinboxdelegate and stardelegate examples). We have also left open what to do if we want graphical rather than textual rendering of our data: this would involve the construction of a custom view (one example is to be found in chart).

For those who want a more principled overview of model/view programming in Python, Summerfield (2008) has three chapters on the topic. The brave can also try Summerfield (2010) for an extremely thorough treatment, including an entire chapter on trees. While the latter is not written for Python, it has tons of useful information about model/view programming if you can brave the translation from C++.

Summerfield, M (2010) Advanced Qt Programming. Prentice Hall.
Summerfield, M (2008) Rapid GUI Programming with Python and Qt. Prentice Hall.

Friday, February 06, 2015

PySide Tree Tutorial IIID: Creating the tree with setupModelData()

Part of a series on treebuilding in PySide: see Table of Contents
 
Recall that TreeModel uses setupModelData() to set up the initial tree structure. We provide a very brief description of its behavior here, and refer the reader to the code itself for more details (the code is in post IIIA). We begin with a text file (default.txt) that contains all the data for our tree:
Getting Started            How to familiarize yourself with Qt Designer
Launching Designer         Running the Qt Designer application
The User Interface         How to interact with Qt Designer
                             .
                             .
                             .
Connection Editing Mode    Connecting widgets together
Connecting Objects         Making connections in Qt Designer
Editing Connections        Changing existing connections
The entire text file is extracted in main, and sent to setupModelData() within TreeModel. Two tab-delimited strings are extracted from each line (the title and summary), and form the basis for a new TreeItem. The location of each node in the hierarchy is determined by the pattern of indentation in the file. We construct the tree exactly as discussed in Part II, using the following rules:
  • For each line, create a TreeItem in which the two tab-delimited strings on that line are assigned to TreeItem.itemData (Figure 4, post IIB).
  • If line N+1 is indented relative to line N, then make the (N+1)th item a child of item N.
  • If line N+1 is unindented relative to line N, then make the (N+1)th item a sibling of item N's parent.
The implementation details in setupModelData() look a bit complicated, but most of the code is there for recordkeeping (e.g., keeping track of the current level of indentation). I found it helpful to work through how it handles the very first line of the input file, and then keep iterating through the code by hand until everything is clear.
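The rules above can be sketched in plain Python, independent of Qt. Here TreeItem is a bare-bones stand-in for the tutorial's class, and the input mimics default.txt with tab-delimited columns and a four-space indent (both assumptions for illustration):

```python
class TreeItem:
    """Bare-bones stand-in for the tutorial's TreeItem."""
    def __init__(self, data, parent=None):
        self.itemData = data
        self.parentItem = parent
        self.childItems = []
        if parent is not None:
            parent.childItems.append(self)

def setup_model_data(text, root):
    parents, indents = [root], [0]  # stacks: current parent and its indent
    for line in text.splitlines():
        if not line.strip():
            continue
        indent = len(line) - len(line.lstrip())
        columns = [c.strip() for c in line.strip().split('\t') if c.strip()]
        if indent > indents[-1]:
            # Indented relative to the previous line: the previous
            # item becomes the new parent
            parents.append(parents[-1].childItems[-1])
            indents.append(indent)
        else:
            # Unindented: pop back to the matching level
            while indent < indents[-1] and len(parents) > 1:
                parents.pop()
                indents.pop()
        TreeItem(columns, parents[-1])  # new item is a child of the top parent
    return root

data = (
    "Getting Started\tHow to familiarize yourself with Qt Designer\n"
    "    Launching Designer\tRunning the Qt Designer application\n"
    "    The User Interface\tHow to interact with Qt Designer\n"
    "Connection Editing Mode\tConnecting widgets together\n"
    "    Connecting Objects\tMaking connections in Qt Designer\n"
)
root = setup_model_data(data, TreeItem(['Title', 'Summary']))
```

After running this, root has two top-level children ('Getting Started' and 'Connection Editing Mode'), each with its indented lines as children, mirroring the tree the real setupModelData() builds.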

Wednesday, February 04, 2015

PySide Tree Tutorial IIIC: Index and parent

Part of a series on treebuilding in PySide: see Table of Contents

Now we get to the guts of the API, and what really separates our model from a table model. If we were just building a table model, subclassing QAbstractTableModel, our model would be done. Because we are subclassing QAbstractItemModel, we must provide two additional methods: index() and parent(). The view needs these methods to navigate among items in the tree.

We can view index() and parent() as inverse methods; parent() takes in a child index and returns the index of its parent, while index() takes in a parent index and returns the index of one of its children (Figure 7A). We implement both methods as outlined in Figure 7B, which you might want to study before looking at the details of the code: it is sometimes easy to lose the forest for the trees with these functions.


Figure 7: Implementing parent() and index()  
A. The basic logic of parent() and index(). TreeModel.parent() takes
in the index of a child and returns the index of its parent, while TreeModel.index()
takes in a parent index and returns the index of one of its children. B. Details about
how parent() and index() are implemented. The flow of index() is
counterclockwise (red arrows), and parent() is clockwise (green arrows). In each
case, the given index's associated TreeItem is retrieved using internalPointer().
Then, its parent (child) item is accessed using the child() (parent()) method
that was built into TreeItem. Finally, that item's index is created using
createIndex(), which takes this item as one of its parameters.

Figure 7B adapted from: 
http://qt-project.org/doc/qt-4.8/itemviews-editabletreemodel.html

We'll start by looking at the implementation of index().

index(row, column, parent)

This method takes in the index of a parent item, and returns the index of one of its children. We implement it with:

def index(self, row, column, parent):
    if not self.hasIndex(row, column, parent):
        return QtCore.QModelIndex() 
    if not parent.isValid():
        parentItem = self.rootItem
    else:
        parentItem = parent.internalPointer() #returns item, given index
    childItem = parentItem.child(row) #the actual item we care about
    if childItem:
        return self.createIndex(row, column, childItem)
    else:
        return QtCore.QModelIndex() 
 
While the basic strategy here is outlined in Figure 7B, it also has to handle some special cases. First, we check the validity of the input coordinates with hasIndex(): it determines if row and column are nonnegative and fit within the range of values allowed by parent. For instance, if the parent item has three children, hasIndex() will return False if the row argument exceeds two.

If hasIndex() returns False, then the coordinates submitted by the view are not valid, and we return the invalid index. Otherwise, we retrieve the parentItem that corresponds to parent, and then extract that item's desired child using TreeItem.child(row).

Once we have extracted the appropriate child item from its parent, we wrap it up into an index using createIndex(row, column, childItem). This is the built-in method that all models use to create new indexes. It requires that we specify the item's row number and column number, as well as the TreeItem to which the resulting index will refer--this is the item that its internalPointer() will return.

Recall that each element in TreeItem.itemData corresponds to a different column in a row of our model (Figure 4, post IIB). Hence, in general there is a many-to-one relationship (in this case a 2:1 relationship) between model indexes and TreeItems. Given N rows in our tree, 2N calls to index() would be required to specify all the indexes.
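This many-to-one relationship is easy to demonstrate without Qt. In the sketch below, FakeIndex is a hypothetical stand-in for QModelIndex (not PySide's actual class), with an internalPointer() method returning the wrapped item:

```python
# Illustrating the many-to-one relationship between indexes and items:
# one index per (row, column) cell, but both columns of a row share the
# same underlying item. FakeIndex is a hypothetical QModelIndex stand-in.
class FakeIndex:
    def __init__(self, row, column, item):
        self._row, self._column, self._item = row, column, item
    def internalPointer(self):
        return self._item

items = [["alpha", "first row"], ["beta", "second row"]]  # 2 rows, 2 columns
indexes = [FakeIndex(r, c, items[r]) for r in range(2) for c in range(2)]

print(len(indexes))                                       # 4 indexes...
print(len({id(ix.internalPointer()) for ix in indexes}))  # ...but only 2 items
```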

parent(index) 
As discussed above, this method takes in an item's index, and returns the index of its parent:

def parent(self, index):
    if not index.isValid():
        return QtCore.QModelIndex()
    childItem = index.internalPointer()
    parentItem = childItem.parent()
    if parentItem == self.rootItem:
        return QtCore.QModelIndex()
    return self.createIndex(parentItem.row(), 0, parentItem)
 
The basic strategy is illustrated in Figure 7B, but there are a few wrinkles we should consider. First, if the index is invalid (i.e., it is the root index), then it has no parent and we return an invalid index. Also, as discussed in part IIIA, we return the invalid index as the parent of any top-level items in the model, i.e., the items whose parents are rootItem.

For all lower-level items, we create the parent index with createIndex(). This is a little more subtle than in the index() method: we must specify the row and column numbers of the parentItem relative to its own parent (i.e., the grandparent of childItem). To find the row that parentItem occupies among its siblings, we use TreeItem.row(). For the column value, we follow the convention that only items in the first column of our model have children (that is, we set the column parameter for createIndex() to zero).

Conclusion
We are pretty much done going over the code. We will briefly discuss setupModelData() in the next post, but we have now covered the entire API the model provides for the view. Once the tree structure and model are built, the model is ready to be connected to a view. This is easily done with QTreeView.setModel(), passing in an instance of our TreeModel. When we call show() on the view, the GUI should appear on our screen as in Figure 3 (post IIA).

Monday, February 02, 2015

PySide Tree Tutorial IIIB: QAbstractItemModel's API

Part of a series on treebuilding in PySide: see Table of Contents.

In the next two posts, we will go through the methods implemented in our model, starting with rowCount(), columnCount(), data(), and headerData(). In the following post we will round things out with a discussion of index() and parent(), which are especially important in hierarchical models.

Let's start with rowCount().

rowCount(parent) 
The rowCount() method takes a parent index and returns the number of children the corresponding parent item has. Views call rowCount() to determine how many rows need to be displayed underneath a given parent item.

Recall that in simple, single-level data structures like tables, each item has the same (invalid) parent, so we can get away with returning a single number in response to rowCount() (Figure 2 in post IB). This strategy won't work with tree models, in which different parents typically have different numbers of children.
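The contrast between the two strategies can be sketched without Qt. The TreeItem below is a stripped-down, hypothetical stand-in for the tutorial's class, reduced to just a child list and childCount():

```python
# For a flat table, one number answers every rowCount() query; for a
# tree, the answer depends on which parent item the view asks about.
class TreeItem:
    def __init__(self, parent=None):
        self.childItems = []
        if parent is not None:
            parent.childItems.append(self)
    def childCount(self):
        return len(self.childItems)

def table_row_count(table_rows):
    return len(table_rows)           # same answer for every query

def tree_row_count(parent_item):
    return parent_item.childCount()  # varies from parent to parent

root = TreeItem()
a, b = TreeItem(root), TreeItem(root)
TreeItem(a); TreeItem(a); TreeItem(a)  # a has 3 children, b has none

print(tree_row_count(root))  # 2
print(tree_row_count(a))     # 3
print(tree_row_count(b))     # 0
```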

In our example, rowCount() is implemented as follows:

def rowCount(self, parent):
    if parent.column() > 0:
        return 0
    if not parent.isValid():
        parentItem = self.rootItem
    else:
        parentItem = parent.internalPointer()
    return parentItem.childCount()

The basic strategy in rowCount() is to extract the parent index's corresponding parentItem and then return the number of children this item has using the built-in item method TreeItem.childCount(). The parentItem is extracted from its index using internalPointer(). While this might seem a strange name (Python has no pointers), you can think of internalPointer() as a getItemFromIndex() method that refers to the TreeItem corresponding to an index (Figure 6).

Figure 6: internalPointer() pulls an item from an index.
Each index includes an internalPointer() method
that returns the TreeItem associated with that index.

While the core calculation is relatively simple, there are a couple of wrinkles. First, our convention is that only the first column in a row has children, so if parent.column() is greater than 0, then rowCount() returns 0. Second, as discussed above, if the parent index is the invalid QModelIndex(), then the parent item is the root item. Finally, if the parent is not the root, then we follow the algorithm described in the previous paragraph.

columnCount(parent)
The columnCount() method takes in a parent index and returns the number of columns the corresponding parent item has. The view calls this method to ask the model how many columns to display under a parent item:

def columnCount(self, parent):
    if not parent.isValid():
        return self.rootItem.columnCount()
    else:
        return parent.internalPointer().columnCount()

The basic algorithm is simple: extract the parent item corresponding to the given parent index, and then call TreeItem's built-in columnCount() on this parent item. As before, if the parent index is invalid, we assume the index corresponds to the root item.

data(index, role)
Given an item's index and a desired role, data() tells the view what to display at that index's location:

def data(self, index, role):
    if not index.isValid():
        return None
    if role != QtCore.Qt.DisplayRole:
        return None
    item = index.internalPointer()
    return item.data(index.column())

The model does not know when it will be used, or which data the view will ask for. It lies in wait, providing data each time the view requests it, using the universal interface.

When the role is set as DisplayRole, the view is asking what text to display at the location specified by the given index. Recall that each TreeItem contains all the data for an entire row of the tree (Figure 4, post IIB), but the view needs to know what text to display in just one column. Luckily, each index already has a built-in column() method, and each TreeItem has a built-in data() method that returns the data from column j of that item. In TreeModel.data(), we compose these two functions to pull the appropriate column of data from the item corresponding to the given index.
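This composition is simple enough to sketch without Qt. RowItem and FakeIndex below are hypothetical stand-ins for TreeItem and QModelIndex, keeping only the methods that data() actually chains together:

```python
# The index knows its column, the item holds the whole row, and the
# model's data() composes the two to pull out one cell's worth of text.
class RowItem:
    def __init__(self, item_data):
        self.itemData = item_data
    def data(self, column):
        return self.itemData[column]

class FakeIndex:  # hypothetical stand-in for QModelIndex
    def __init__(self, column, item):
        self._column, self._item = column, item
    def column(self):
        return self._column
    def internalPointer(self):
        return self._item

def model_data(index):
    # TreeModel.data() for the DisplayRole, minus the validity/role checks
    return index.internalPointer().data(index.column())

row = RowItem(["Getting Started", "How to read this tutorial"])
print(model_data(FakeIndex(0, row)))  # Getting Started
print(model_data(FakeIndex(1, row)))  # How to read this tutorial
```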

When querying the model with data(), the view sends the index of the item to be displayed, as well as a single role parameter (of type QtCore.Qt.ItemDataRole). What exactly is a role? Roles are sent by the view to indicate the type of data it is looking for, such as display text, font styles, background color, and other information.

Each role is sent as a separate call to TreeModel.data() by the view. The model should always return values of the appropriate type for a given role. A partial list of the different roles and their expected return type is shown in Table 2. You can find an exhaustive enumeration in the PySide documentation.

Role               Description                                          Expected return type
DisplayRole        Data to be displayed as text.                        Python str
ToolTipRole        Text to display when the mouse hovers over an item.  Python str
FontRole           Font with which to render items.                     QtGui.QFont
TextAlignmentRole  Text alignment for the item.                         QtCore.Qt.AlignmentFlag
BackgroundRole     Background color for the item.                       QtGui.QBrush

Table 2: Some ItemDataRoles, their descriptions, and expected return types.

Our simpletreemodel example only supports the DisplayRole. However, it is instructive to play around with other roles. For instance you could try adding:

if role == QtCore.Qt.ToolTipRole:
    return "Scalawag!"

This will display the helpful tooltip "Scalawag!" over each item in the view when you hover over it with your mouse. To change the background color in the first column of the tree, try this:

if role == QtCore.Qt.BackgroundRole:
    if index.column() == 0:
        return QtGui.QBrush(QtGui.QColor(QtCore.Qt.yellow))

The result is ugly, to be sure, but it's the principle that matters.

Some developers argue that this functionality, whereby the model controls how items appear, violates the desired division of labor between views and models. This is a valid concern, and some programmers leave all such appearance customization to delegates. But since we are ignoring delegates for now, it is useful to know how to sneak formatting in via the model.

headerData(section, orientation, role)
The headerData() function extracts the header data from the root item, and paints it in the column headers:

def headerData(self, section, orientation, role):
    if orientation == QtCore.Qt.Horizontal and role == QtCore.Qt.DisplayRole:
        return self.rootItem.data(section)
    return None

headerData() works similarly to TreeModel.data(). Note also that section is a generic term for the row or column number, depending on whether the orientation is vertical or horizontal, respectively.
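The section logic can be sketched without Qt. The header strings and the plain-string stand-ins for QtCore.Qt.Horizontal and DisplayRole below are hypothetical, purely for illustration:

```python
# Assuming the tutorial's convention that the root item's data holds the
# column headers, section simply selects one header from that list.
root_data = ["Title", "Summary"]  # hypothetical header row

def header_data(section, orientation, role):
    # "horizontal" / "display" stand in for Qt.Horizontal / Qt.DisplayRole.
    if orientation == "horizontal" and role == "display":
        return root_data[section]
    return None

print(header_data(0, "horizontal", "display"))  # Title
print(header_data(1, "horizontal", "display"))  # Summary
print(header_data(0, "vertical", "display"))    # None
```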