Zero degrees of separation

How connectomics is revealing the intricacies of neural networks, an interview with Josh Morgan

 

Figure: 3D EM. Complete scan of a volume of mouse cortex, and complete reconstruction of a few neurons within it (Berning et al., 2015).

On October 1st, 2015, the Human Genome Project (HGP) celebrated its 25th birthday. Six long years of planning and debate preceded its launch in 1990, and at the young age of 10 the HGP fulfilled its potential by providing us with a ‘rough draft’ of the genome. In 2012, 692 collaborators published in Nature the sequences of 1,092 human genomes1. All of this happened barely 60 years after Watson and Crick first described the double helix of DNA. In retrospect, genomics moved surprisingly fast, but by the numbers it was an effort of epic proportions, and a hotly debated one. The promise of a complete sequence of the human genome was thrilling, but many were concerned. Some argued that the methods were unreliable or even unfeasible, others worried that a single genome could not possibly represent the spectrum of human diversity, and still others thought the task was overly ambitious and too costly in time and money.

Nevertheless, in the early 2000s genomics was taking over the scientific world, and in its wake support was growing for the other -omics: proteomics, metabolomics, and, last but not least, connectomics. The connectome is a precise, high-definition map of the brain, its cells, and the connections between them. While human connectomics relies on fMRI (functional magnetic resonance imaging) and EEG (electroencephalography) to define neural connections, electron microscopy (EM) is leading the way in generating detailed 3D images of the brains of model organisms (C. elegans, Drosophila, and mouse) at nanometer resolution. Connectomics divided the scientific community into supporters and skeptics, and many of the arguments echoed those of the debate over the HGP in the late 1980s.

In 2013, Josh Morgan and Jeff Lichtman addressed head-on the main criticisms against connectomics in the mouse2, arguing that a complete map of the brain would provide information about the structure of circuits that would otherwise be unattainable. Several labs have since embarked on an odyssey to fulfill the potential of 3D EM in the mouse brain, and the last few years have seen a rapid succession of improvements to this complex, multistep method. Put simply, the procedure consists of fixing a piece of tissue, slicing it into sections (roughly 29 nm thick), imaging the sequence of sections, and combining all the high-resolution images into tiled, aligned stacks. The process has been sped up to take approximately 100 days for 2,000 slices of a square millimeter. At that point the digital representation of the cube of tissue still needs to be segmented and the cells within it traced before any analysis can be done.
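To make the last step concrete, here is a minimal sketch, in Python, of stacking a set of already aligned section images into a single 3D volume. It is not the published pipeline: the file layout, the 4 nm pixel size, and the use of Pillow/NumPy are illustrative assumptions, with only the ~29 nm section thickness taken from the text above.

```python
# A minimal sketch (not the Lichtman lab's actual pipeline) of the final step
# described above: stacking aligned section images into one 3D volume.
# File layout and pixel size are illustrative assumptions.
import glob

import numpy as np
from PIL import Image

SECTION_GLOB = "aligned_sections/section_*.png"  # hypothetical, pre-aligned 2D images
PIXEL_NM = 4       # assumed lateral resolution (nm per pixel)
SECTION_NM = 29    # section thickness quoted in the text (nm)

def load_volume(pattern: str) -> np.ndarray:
    """Read every section image and stack them along a new z-axis."""
    paths = sorted(glob.glob(pattern))
    sections = [np.asarray(Image.open(p).convert("L")) for p in paths]
    return np.stack(sections, axis=0)  # shape: (z, y, x), dtype uint8

if __name__ == "__main__":
    volume = load_volume(SECTION_GLOB)
    z, y, x = volume.shape
    print(f"volume: {z} sections of {y} x {x} px "
          f"({z * SECTION_NM / 1000:.1f} um deep at {PIXEL_NM} nm/px)")
    print(f"raw size: {volume.nbytes / 1e9:.1f} GB at 8 bits per voxel")
```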

This is a monumental amount of work, yet a flurry of studies presenting reconstructed cubes of mouse brain tissue has already been published. Most notable this year are the work on retinal tissue from the Max Planck Institute of Neurobiology4 and that from the lab of Jeff Lichtman at Harvard University, which published the complete reconstruction of a small (1,500 μm³) volume of neocortex3. Once obtaining such a large, high-resolution dataset is no longer a limiting factor, what can it tell us about brain connectivity?

I had the pleasure of seeing the potential of 3D EM during a talk that Josh Morgan (a postdoctoral fellow in the Lichtman lab) gave at the Center for Brain Science at Harvard University. He presented his work on scanning, segmenting, and analyzing a piece of tissue from the mouse dLGN (dorsal lateral geniculate nucleus). Afterwards, he answered some questions about his work in the growing field of connectomics.

Can you briefly list your findings in the dLGN that you think are most representative of the kind of unique discoveries 3D EM allows us to make?

The big advantage of large-scale EM is that you can look at many interconnected neurons in the same piece of tissue. At the local level, we could use that ability to find out which properties of retinal ganglion cell synapses were determined by the presynaptic neuron and which were determined by the postsynaptic cell. At the network level, we found that the dLGN was not a simple relay of parallel channels of visual information. Rather, channels could mix together and split apart. My favorite result from the LGN project so far was seeing a cohort of axons innervate the dendrite of one thalamocortical cell and then jump together onto a dendrite of a second thalamocortical cell to form synapses. It is that sort of coordination between neurons that I think is critical to understanding the nervous system, and that is extremely difficult to discover without imaging many cells in the same tissue.

You showed some really nice analyses that are starting to chip away at the vast dataset you created. Is it becoming more challenging to identify overarching patterns, or to synthesize findings? When datasets expand to include tissue from multiple animals, will it become more challenging to do statistical analyses on them?

There was a critical point in my analysis, after I had traced a network of hundreds of neurons, where it was no longer possible for me to clearly see the organization of the network I had mapped. In that case, it was using a spring force model to organize all the cells into a 2D space that made the network interpretable again. I think visualization tools that make complex data transparent to the biologist studying them are essential to the process. As cellular network data becomes more common, I hope that some [visualization tools] for standard quantitative measures of synaptic network organization will emerge. For now, I think the main goal is giving neuroscientists as complete a view as possible of the circuits that they are studying.
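As a toy illustration of the spring force idea Morgan describes (not his code or data), a force-directed layout can flatten a connectivity graph into 2D using NetworkX; the graph and its weights below are entirely synthetic.

```python
# A toy illustration of using a spring (force-directed) layout to flatten a
# connectivity graph into 2D. The graph below is synthetic; it is a stand-in
# for the idea, not the dLGN dataset.
import networkx as nx
import matplotlib.pyplot as plt

# Build a small random "connectivity" graph: nodes are cells, weighted edges
# are synaptic connections (weights invented for the example).
g = nx.gnp_random_graph(60, 0.08, seed=1)
for u, v in g.edges:
    g[u][v]["weight"] = 1.0 + 4.0 * ((u * v) % 3)  # arbitrary synthetic weights

# Spring layout: strongly connected cells get pulled close together in 2D.
pos = nx.spring_layout(g, weight="weight", seed=1)

nx.draw(g, pos, node_size=40, width=0.5)
plt.title("Toy spring-force layout of a synthetic cell network")
plt.show()
```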

For comparison between individuals, it would be convenient if each neural circuit could be divided into finer and finer gradations of cell types until each grouping was completely homogeneous. In that case, comparing individuals just means identifying the same homogeneous groups of neurons in each animal. However, the dLGN data suggest that the brain has not been that considerate and instead can mix and match connectivity and cellular properties in complicated ways. To some extent, it might be possible to replace the list of stereotyped cell subtypes with a list of behaviors that cells of broad classes can perform under various conditions. However, I don’t think you can get around the fact that studying a less predictable system is going to be more difficult and doesn’t lend itself to statistical shortcuts. In particular, if everything is connected to everything, at least by weak connections, then relying on p-values will tend to generate lots of false positives. That is, if your test is sensitive enough and you check enough times, wiggling any part of the network will give you a statistically significant effect in any other part of the network.
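A quick simulation makes that last point concrete (an illustration, not an analysis of the dLGN data): with a sensitive enough test, even a tiny effect present in every comparison comes out statistically significant almost every time.

```python
# Illustration of "sensitive test + many checks => significance everywhere".
# The effect size, sample size, and number of comparisons are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs = 200          # number of network "connections" we test
tiny_effect = 0.05     # true effect, in units of standard deviations
n_samples = 20_000     # a very sensitive experiment

significant = 0
for _ in range(n_pairs):
    baseline = rng.normal(0.0, 1.0, n_samples)
    perturbed = rng.normal(tiny_effect, 1.0, n_samples)  # the "wiggled" condition
    _, p = stats.ttest_ind(baseline, perturbed)
    significant += p < 0.05

print(f"{significant}/{n_pairs} comparisons significant at p < 0.05, "
      f"all driven by a {tiny_effect} SD effect")
```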

Technically speaking, you and your colleagues have been working for years on optimizing the method published this summer in Cell3. Can you foresee ways to improve it or speed it up? What remain the main challenges and drawbacks?

Basically, we would like to acquire larger volumes (intact circuits) and trace more of the volumes that we acquire (more cells). My current dataset was acquired about an order of magnitude faster than the previous dataset that was published in Cell, and we now have a microscope that can acquire images more than an order of magnitude faster than the scope I used. That leaves us with the growing problem of large-scale data management and segmentation. It isn’t necessary to analyze every voxel of a dataset in order to get interesting biology (I have only used 1% of my voxels for my first LGN project). However, we all have the goal of eventually automatically segmenting every cell, synapse, and bit of ultrastructure in our 3D volumes. The people in Hanspeter Pfister’s lab have made significant progress in improving automated segmentation algorithms, but generating fast, error-free automatic segmentations is going to be a long-term computer vision challenge.
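To put the data-management problem in rough perspective, here is a back-of-envelope calculation; only the square-millimeter area, ~2,000 sections, and ~29 nm thickness come from the article, while the 4 nm lateral pixel size is an assumption for illustration.

```python
# Back-of-envelope arithmetic for the data-management problem mentioned above.
# The lateral pixel size is an assumption; the area and section count come
# from the figures quoted earlier in the article.
AREA_MM = 1.0          # imaged area per section (mm^2), from the text
N_SECTIONS = 2_000     # number of sections, from the text
PIXEL_NM = 4           # assumed lateral resolution (nm per pixel)
BYTES_PER_VOXEL = 1    # 8-bit grayscale

pixels_per_side = AREA_MM * 1e6 / PIXEL_NM          # 1 mm = 1e6 nm
voxels = pixels_per_side ** 2 * N_SECTIONS
print(f"{voxels:.2e} voxels, roughly {voxels * BYTES_PER_VOXEL / 1e12:.0f} TB uncompressed")
```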

A paper came out this summer5 discussing the artifacts seen with EM in chemically fixed tissue, as opposed to cryo fixed tissue. Will these findings impact the method of the Lichtman lab, and do you think they are relevant to the value of your dataset?

The critical piece of data for us is connectivity, so to the extent that cryo fixation makes connectivity easier to trace, it is a better technique. However, it is difficult to perform cryo fixation on large tissue volumes (>200 μm). The alternative is to use cryo fixation as a guide to improving our chemical fixation techniques. For instance, preservation of extracellular space, one of the major benefits of cryo fixation, can also be achieved in some chemical fixation protocols.

Bibliography

  1. The 1000 Genomes Project Consortium (2012) An integrated map of genetic variation from 1,092 human genomes. Nature, 491, p 56-65.
  2. Morgan JL & Lichtman JW (2013) Why not connectomics? Nature Methods, 10(6), p 494-500.
  3. Kasthuri N et al. (2015) Saturated reconstruction of a volume of neocortex. Cell, 162(3), p 648-661.
  4. Berning M et al. (2015) SegEM: Efficient image analysis for high-resolution connectomics. Neuron, 87(6), p 1193-1206.
  5. Korogod N et al. (2015) Ultrastructural analysis of adult mouse neocortex comparing aldehyde perfusion with cryo fixation. eLife, 4, e05793.