Neurochannels
Neuroscience, coding, and neural coding

Wednesday, March 18, 2020
It's been a while since I did any GUI coding in Python, so I thought I'd clear out the cobwebs with Mastering GUI Programming with Python by Alan Moore. I was very pleasantly surprised. It covers PyQt5 from the ground up.
This book is fantastic, and the first proper successor to the previous go-to book on learning this stuff: Summerfield's 2008 book on PyQt 4, which I read cover to cover about 6 years ago. While I thought I would just be refreshing my memory, it is teaching me a lot of new things about PyQt (e.g., the stuff on using QProxyStyle to override the default style is amazing -- Summerfield didn't cover it, and I had never even heard of it before).
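For a taste of the QProxyStyle trick, here is a minimal sketch of my own (not an example from the book): subclass QProxyStyle, override a single hook, and let everything else fall through to the base style. This one swaps the password mask character used by QLineEdit:

import sys
from PyQt5.QtWidgets import QApplication, QLineEdit, QProxyStyle, QStyle

class StarPasswordStyle(QProxyStyle):
    # Override one style hint; all other queries fall through to the base style.
    def styleHint(self, hint, option=None, widget=None, returnData=None):
        if hint == QStyle.SH_LineEdit_PasswordCharacter:
            return ord('*')  # mask passwords with '*' instead of the platform default
        return super().styleHint(hint, option, widget, returnData)

app = QApplication(sys.argv)
app.setStyle(StarPasswordStyle())
edit = QLineEdit()
edit.setEchoMode(QLineEdit.Password)
edit.show()
sys.exit(app.exec_())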
It is quite comprehensive in scope (though it stops short of a deep dive into tree views), and very clearly written. I know a lot of Packt Publishing books are pretty awful, but this one bucks that trend.
If you have never used PyQt, or need a refresher course, this book is where I would start for a comprehensive, detailed overview. Qt is much too expansive, complex, and idiosyncratic a framework to leave to little web tutorials. A large-scale opus is what's needed, and this fits the bill perfectly.
Note I know it sounds like I was paid to write this: I wasn't. I bought the book with my own money and am writing this of my own volition during my week off. :)
Friday, September 20, 2019
Creating and consuming tensorflow record files
TFRecord files can be confusing. They are the preferred data containers for training tensorflow models when using the object detection api (github).
It took me a while to converge on code I like for generating TFRecord files (including hard negative examples, with no bounding boxes), and for consuming TFRecord files to display their contents. The latter is especially important: when you roll your own augmentation pipeline, it is really helpful to look at the data that you are using to train your network, just to be sure everything looks reasonable.
I've encapsulated my experience with this in a github repo, tfrecord-view, which has a script for encoding data into a TFRecord file (given images and Pascal VOC-encoded XML annotation files), and a script for consuming a TFRecord file:
https://github.com/EricThomson/tfrecord-view
It includes data and annotation files so you can test it out easily.
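To give a flavor of the consuming side, here is a minimal sketch (not the repo's code, and assuming TensorFlow 2.x eager mode; the feature keys follow the object detection api convention and 'train.record' is a placeholder filename, so adjust both to match your encoder):

import tensorflow as tf

# Bounding boxes are variable-length: zero boxes for hard negative examples.
feature_spec = {
    'image/encoded': tf.io.FixedLenFeature([], tf.string),
    'image/object/bbox/ymin': tf.io.VarLenFeature(tf.float32),
    'image/object/bbox/xmin': tf.io.VarLenFeature(tf.float32),
    'image/object/bbox/ymax': tf.io.VarLenFeature(tf.float32),
    'image/object/bbox/xmax': tf.io.VarLenFeature(tf.float32),
    'image/object/class/label': tf.io.VarLenFeature(tf.int64),
}

def parse_record(serialized):
    example = tf.io.parse_single_example(serialized, feature_spec)
    image = tf.io.decode_jpeg(example['image/encoded'], channels=3)
    # Coordinates are stored normalized to [0, 1]; stack into an (N, 4) array.
    boxes = tf.stack([tf.sparse.to_dense(example['image/object/bbox/' + key])
                      for key in ('ymin', 'xmin', 'ymax', 'xmax')], axis=1)
    labels = tf.sparse.to_dense(example['image/object/class/label'])
    return image, boxes, labels

dataset = tf.data.TFRecordDataset('train.record').map(parse_record)
for image, boxes, labels in dataset.take(3):
    print(image.shape, boxes.numpy(), labels.numpy())  # eyeball the annotations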
Sunday, February 17, 2019
Installing the NEST simulator for use with Anaconda
Installing the NEST simulator for use with Anaconda was pretty painless. However, it took me two failed tries to realize just how painless, so I thought I'd post how I got it working.
Note I am in Ubuntu 16/Python 3.7/Conda 4.5.11. If you are on Windows, don't even bother. Just install Linux.
Edit:
Note since I posted this, someone has created an installer that should make installation really easy. I have not tried it, so cannot vouch for it, but I recommend that before you follow my recipe, see the first comment below and give it a shot.
1. Create/activate your nest environment
conda create --name nest
conda activate nest
2. Install system packages you will need
Note this is for the 'standard' configuration as described at http://www.nest-simulator.org/installation/.
sudo apt-get install -y build-essential cmake \
libltdl7-dev libreadline6-dev libncurses5-dev \
libgsl0-dev openmpi-bin libopenmpi-dev
3. Install python packages
conda install numpy scipy matplotlib ipython nose cython scikit-learn
4. Install NEST proper (see http://www.nest-simulator.org/installation/)
Obviously you can use whatever directory structure you want, but I put my build in /opt/nest, so change your values below accordingly if you want something else.
a. Create folder /opt/nest and give yourself ownership if needed (I had to use sudo chown)
b. Download NEST (I put the tarball in /opt/nest)
c. Unpack the tarball (in /opt/nest):
tar -xzvf nest-simulator-2.16.0.tar.gz
d. Create a build directory (again, within /opt/nest/):
mkdir nest-simulator-2.16.0-build
e. cd to the build directory:
cd nest-simulator-2.16.0-build
f. Run cmake to build makefiles
Note the -Dwith-python=3 option, which forces it to use Python 3.
cmake -DCMAKE_INSTALL_PREFIX:PATH=/opt/nest/ /opt/nest/nest-simulator-2.16.0 -Dwith-python=3
I got some warnings ('Cannot generate a safe linker search path for target sli_readline') but things seemed to work out ok.
g. Set environment variables
Add the following line to .bashrc:
source /opt/nest/bin/nest_vars.sh
5. Run the makefiles
make  # this will take a few minutes (you may get some warnings)
make install  # this goes quickly
make installcheck  # takes a few minutes, gives summary at end
6. Test something out
From your favorite IDE or command line, run a simple script (e.g., one_neuron.py in pynest/examples).
Things should just work. I like to use Spyder with conda, so I added the following:
conda install spyder
spyder  # so spyder runs from the nest environment
The one_neuron.py example worked just fine. Note, though, that if you are using Spyder, you will not want to run your code using F5 unless you are a fan of restarting your Python kernel constantly. To avoid problems, I recommend entering run filename.py in Spyder's (IPython) command line when you are ready to run a script.
7. Have fun!
Enjoy NEST, it is a really amazing neural simulation framework. I recommend starting here to learn how to program in pynest, the Python interface for the NEST simulator: http://www.nest-simulator.org/introduction-to-pynest/
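To give a flavor of what pynest code looks like, here is a minimal single-neuron sketch in the spirit of one_neuron.py (treat the numbers as illustrative):

import nest
import nest.voltage_trace
import matplotlib.pyplot as plt

nest.ResetKernel()
neuron = nest.Create('iaf_psc_alpha')      # leaky integrate-and-fire neuron
nest.SetStatus(neuron, {'I_e': 376.0})     # constant input current, in pA
voltmeter = nest.Create('voltmeter')
nest.Connect(voltmeter, neuron)
nest.Simulate(1000.0)                      # simulate 1000 ms
nest.voltage_trace.from_device(voltmeter)  # plot the membrane potential
plt.show()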
Friday, April 20, 2018
Systems Neuroscience Highlights: March 2018
I'll be doing monthly summaries again, but this time over at philosophy of brains. March 2018 is up:
http://philosophyofbrains.com/2018/04/19/systems-neuroscience-highlights-march-2018.aspx
Sunday, August 06, 2017
Systems Neuroscience Highlights: June and July 2017
There were lots of great articles the last couple of months, in particular a series of articles on the fly's representation of its position in space that seems to be coming together nicely.
Sensory Coding
Singla et al. A cerebellum-like circuit in the auditory system cancels responses to self-generated sounds -- Nature Neuroscience [Pubmed]. It is well-known that the brain is able to factor out which sensory inputs are generated by an animal's own behavior, and which are generated by events in the world. How brains do this is still an extremely active research area. In this paper, the authors report a class of cells in the dorsal cochlear nucleus (DCN) of the brain stem that respond robustly to externally generated auditory cues, but not to self-generated auditory cues in mice (in particular, the sounds generated when they lick). Amazingly, when researchers artificially pipe in sounds in response to licking behavior, these DCN neurons eventually suppress responses to such sounds, as if the brain were starting to treat them as being generated by the animal.
The authors seem to have found a beautiful model for sensory cancellation effects, one that is very much like that seen in the mormyrid electric fish (as discussed by Abbott's group recently here). It will be interesting to see how similar the principles are in these different systems as folks dig under the hood.
Green et al. A neural circuit architecture for angular integration in Drosophila -- Nature [Pubmed].
This is actually one of three papers that came out recently, building on a landmark paper from 2015, showing that the fruit fly contains a circular structure (the ellipsoid body, or EB) that acts as an internal compass, and contains a set of neurons that light up at a different position on the ellipse depending on the angular position of the fly in space. In this paper, they test a potential circuit mechanism for how such an internal compass might be implemented. Namely, there is a second structure, the protocerebral bridge (PB), that is reciprocally connected to the compass, but slightly shifted around the circular axis, and which is preferentially activated by turning behavior. So, for instance, when the fly turns right, the PB neurons (labeled P-EN in the video) will project to the compass neurons (labeled E-PG in the video), but to a region just a little bit ahead of the presently active spot on the EB. Excitatory interactions will drag the spot around to the appropriate location. This paper is nice because they have tests of sufficiency (activating PB glomeruli causes a shift in the compass location) and necessity (inactivating PB disrupts the compass).
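To make the phase-dragging intuition concrete, here is a toy numpy sketch (my own cartoon, not the paper's model; every parameter is made up): a bump of activity on a ring stays put on its own, but gets dragged around when a copy of its own activity, shifted one unit in the turn direction, is fed back in:

import numpy as np

n = 60                                     # 'compass' units around the ring
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
K = np.exp(2.0 * (np.cos(theta[:, None] - theta[None, :]) - 1.0))  # local excitation

def softmax(x, beta=8.0):
    # Winner-take-most normalization standing in for recurrent inhibition.
    e = np.exp(beta * (x - x.max()))
    return e / e.sum()

def step(r, turn=0):
    drive = K @ r
    if turn:
        # 'P-EN-like' input: the bump's own activity, shifted one unit
        # in the turn direction, pulls the bump around the ring.
        drive = drive + np.roll(r, turn)
    return softmax(drive)

r = softmax(np.cos(theta - np.pi))         # initialize a bump at pi
for _ in range(50):
    r = step(r)                            # no turning: the bump holds still
start = r.argmax()
for _ in range(50):
    r = step(r, turn=1)                    # simulated right turn
print('bump moved from unit', start, 'to unit', r.argmax())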
There is a lot more to the paper: the anatomy actually gets fairly complicated, and I have purposely suppressed tons of details and nomenclature (e.g., if you start to get lost in the anatomy, see this paper, or this one).
So far, many of the fly-compass researchers seem to be doing relatively coarse-grained calcium imaging (e.g., this glomerulus lights up, and that one doesn't). The work is excellent, as they are pulling information from particular cell types to extract specific hypotheses about circuit mechanisms. Ultimately, though, you still end up with black-boxology (though with fine-grained boxes). The real power will come when they start triangulating their ridiculously powerful genetic toolkit with finer-grained electrophysiology and anatomy to really crack the circuit mechanisms at single-cell resolution. My guess is this is their aim.
One question I have after reading this and other papers is: what is the point of this compass? It seems not to affect the animal very much if you perturb it using optogenetics (see the Turner-Evans paper below). In this paper they talk about "occasional" changes in behavior when they disrupt things by stimulating PB, but don't explore or quantify this effect. I am not sure what happens to spatial navigation if you ablate the EB. Is it like the hippocampus, in that it is more involved in memory than online spatial navigation, even though there is a beautiful spatial representation contained there?
Note there were two other papers on the fly's internal compass in the past cycle, which I'll just mention briefly:
- Turner-Evans et al. Angular velocity integration in a fly heading circuit -- Elife [Pubmed]. Testing the same phase-dragging model as Green et al., with similar results and some nice patch-clamp data from PB. The video is from this paper.
- Kim et al. Ring attractor dynamics in the Drosophila central brain -- Science [Pubmed]. Looking at the compass in animals in flight, instead of just on the floating ball.
Motor Control
Park et al. Moving slowly is hard for humans: limitations of dynamic primitives -- J. Neurophysiology [Pubmed]. While you will often hear of the speed-accuracy tradeoff (that is, the faster you try to do something, the more likely you are to make a mistake), does this mean that when you move really slowly you get really accurate? People don't typically study the lower extremes of the speed-accuracy tradeoff. In this study they did just that. They had human subjects move their hands back and forth at different speeds, sometimes extremely slowly: so slowly that they could no longer maintain a smooth oscillatory behavior, but started to halt, stop, and start again, as if they were shifting from a continuous to a discrete behavioral strategy.
While this paper doesn't have any neuronal data, it is significant and fun because of its attempt to infer underlying mechanisms of motor control strategies from a clever and creative extension of simple behavioral techniques. A colleague of mine pointed out that it would be interesting to see how much improvement we would see with training on this task. My reaction is that, even if subjects could ultimately move smoothly with 500 hours of training (dear God please don't do that to your poor undergrads), it would still be significant if, without such training, we naturally switched from a continuous to an intermittent control strategy in low-velocity regimes.
Thursday, July 27, 2017
Update on systems neuro lit summary for June
I had a grant and paper submission in the last month, so my summary had to take a back seat: I will be combining the June and July literature summaries. Lots of good stuff!
Monday, June 05, 2017
Systems Neuroscience Highlights: May 2017
It was a great month for systems neuroscience, and the following articles stood out as pushing things forward in unexpected (to me) and interesting ways.
Sensory Coding
Tien et al. -- Homeostatic Plasticity Shapes Cell-Type-Specific Wiring in the Retina -- Neuron [Pubmed]. This is an amazing paper.
They generated a line of mice lacking a certain type of retinal bipolar cell (the B6 cell). The B6 cell is typically the main input to the ONα retinal ganglion cell. Instead of being completely wrecked in these mice, the ONα RGCs actually maintained the same response profiles seen in wild type animals. This was because other types of bipolar cells compensated for the loss of the B6 cell in the circuit. Hence, it seems that compensatory plasticity mechanisms at play in the retina served to rewire the inputs to this class of RGC to maintain the same type of output to the brain.
I always thought homeostatic plasticity research was very cool, but more about neurons maintaining firing rates by changing concentrations/distributions of ion channels and other relatively vanilla properties confined to single units. But homeostatic mechanisms at play across whole systems, with homeostatic sculpting at the circuit level? This seems to be taking things in an entirely new direction.
Motor Control
Makino et al -- Transformation of Cortex-wide Emergent Properties during Motor Learning -- Neuron [Pubmed]
The authors looked at calcium dynamics in neurons across supragranular layers of cortex as mice learned a motor task (a simple lever-pressing task). The sequence of activation among different motor areas became more compressed in time as they learned the task, and response variability decreased as well. Interestingly, area M2, an infrequently studied motor region in rodents, became a key hub in the motor control network once animals learned the task: the movement-predicting signal in M2 started earlier as they learned, better predicted the activity of other motor areas, and inactivating M2 significantly impaired performance in the task.
The reason I like this paper is that it isn't just another "Look at all the calcium imaging we did!" paper. It has substantive new results that seem to push our picture of motor control in cool new directions. Also, it is an interesting complement to the recent result from Kawai et al. (from Ölveczky's lab) showing that performing a simple overlearned motor sequence does not require M1/M2 (Motor cortex is required for learning but not for executing a motor skill). While Makino et al. do not discuss the Kawai paper, it would be interesting to hear their thoughts on it.
Update added 6/7/17: I got a helpful comment from an author of the Makino et al. paper, who pointed out that in Kawai et al. they didn't just remove M1, but M1 and M2. I missed this in my first reading of Kawai et al., and updated the present post accordingly. Further, he suggested that the task in the current paper requires finer-grained control of the fingers, while Kawai's task used more coarse-grained forelimb movements that are likely controlled subcortically. It is fairly well-known that dexterous digit control in rodents requires the cortex, as acknowledged by Kawai et al. Finally, these are issues we will be hearing more about from Komiyama's group, so stay tuned!