BRAIN Initiative Interim Report: A Reader's Guide
As a neuroscientist who is … alive, I've been waiting, with trepidatious optimism, for any official news from the BRAIN Initiative Working Group as they identify "high priority research areas" earmarked for the initial round of BRAINI funding (FY2014). Weighing in at 58 pages, the Interim Report of the BRAIN Working Group (online version, here) is a detailed document that identifies and discusses eight research areas determined by the working group (with help from expert consultants, aka additional neuroscientists) to be high priority areas for the 2014 fiscal year.
So what are these high priority research areas? How closely do they hew to ongoing research areas long acknowledged as important by the neuroscience community? How much do they rely on recruiting non-neuroscientists to research teams? How clearly do these areas address the Presidential mandate of the BRAIN Initiative? Will these goals help us to elucidate the importance of the Initiative, both in our minds and in the minds of the general public?
What follows are my impressions of the critical points contained within each of the eight sections that make up the body of the Interim Report. I have included some commentary, but in the interests of length, have severely restricted that commentary to brief asides. The Interim Report begins with a discussion of the core principles that should guide the BRAIN Initiative. Encompassing pages 10 through 16, these core principles reflect fairly general themes that seem the clear product of systems neuroscientists writing down why the brain is important and how an ideal research study should be designed and conducted. Although that section is well written and interesting, I will be bypassing it in the interests of heading straight into the meat of the report.
As stated above, the Interim Report lays out eight "high priority" research areas. Explanations of these research areas, and specific examples of questions that fall within their purview, are found on pages 17 through 53. Below, I will walk through each priority area.
As this post is quite long, here are internal links which you can use to jump to a specific section:
- Mapping the Structure and Components of Circuits
- Neuronal Dynamics: Recording Neuronal Activity Across Time and Space
- Manipulating Circuit Activity
- The Importance of Behavior
- Theory, Modeling and Statistics Will Be Essential to Understanding the Brain
- Human Neuroscience and Neurotechnology
- Education
- Maximizing the Value of the BRAIN Initiative: Core Principles
1. Mapping the Structure and Components of Circuits.
This very lengthy section contains a couple of concrete goals, including a cell-type census, and the production of a structural map of the brain.
A. The Cell Type Census.
The goal: To generate a census of neurons, glia, vascular cells and immune cells in the brain, and to fully characterize their neurotransmitters, electrophysiological properties, morphology, connectivity, patterns of gene expression and other functional properties. The census is to begin with more complete descriptions of well-known neuron types (e.g. cortical pyramidal cells) and continue through less well known cell types. Note: this census is what qualifies in the eyes of the Working Group as a short term goal.
Initial thoughts: The cell type census is one of a couple of clear goals with pretty well-defined end points. My biggest question is how the work will practically be achieved. Will we stick to the distributed, loosely linked mode that has been the model for the lifetime of neuroscience? Or would it be more efficient to create centers that hew more closely to the Allen Institute model: many researchers, gathered together, using brute force and uniform techniques to generate a data set? Or will the powers-that-be convince institutions such as the Allen to dedicate themselves to this high priority question? A clear advantage of the non-distributed solution is the ability to ensure uniform data collection techniques. Contrast that with a distributed model involving many labs, each with their own preferred (and importantly, pre-owned) equipment, solutions, animal colonies (the list goes on): so many uncontrolled variables that comparing and combining data sets from different labs becomes potentially problematic. Yes, increased variability could likely be averaged out with a large enough data set and enough labs providing independent replications. But given that this goal is slated for fiscal year 2014, surely efficiency is a critical consideration.
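To make that "averaging out" intuition a bit more concrete, here is a toy simulation of my own (not from the report; every number is made up) of how systematic between-lab offsets inflate the spread of a pooled estimate of some cell-type property when only a handful of labs contribute, and how that spread shrinks as more labs provide independent replications:

```python
# Toy simulation (not from the report): how between-lab variability affects
# a pooled estimate of some cell-type property (e.g., mean firing rate).
import numpy as np

rng = np.random.default_rng(0)

true_value = 10.0        # hypothetical "ground truth" for the property
within_lab_sd = 2.0      # cell-to-cell variability within a lab
between_lab_sd = 1.5     # systematic offsets between labs (equipment, solutions, colonies)
cells_per_lab = 50

def pooled_estimate(n_labs):
    """Mean across labs, each lab carrying its own systematic offset."""
    lab_offsets = rng.normal(0.0, between_lab_sd, size=n_labs)
    lab_means = [
        rng.normal(true_value + offset, within_lab_sd, size=cells_per_lab).mean()
        for offset in lab_offsets
    ]
    return np.mean(lab_means)

for n_labs in (1, 5, 25, 100):
    estimates = [pooled_estimate(n_labs) for _ in range(1000)]
    print(f"{n_labs:4d} labs: spread of pooled estimate (SD) = {np.std(estimates):.2f}")
```

The centralized, Allen-style model is effectively the `between_lab_sd = 0` case, which is exactly the uniformity advantage noted above; the distributed model has to buy the same precision with more labs and more time.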
Some specific questions: First off, the question arises as to which animals are best utilized for this survey. The answer to the question, 'Are neuron subtypes different between animal species?' is almost certainly, 'Yes. At times profoundly so'. Nevertheless, the answer to the question, 'Would knowing the full contingent of cellular types in one or two or more species be useful, even if one of those species is not human?' is an emphatic 'Yes'. The profound differences that distinguish species are eclipsed only by the commonalities that have persisted throughout evolutionary time.
Another question related to the practicalities of the census is whether different developmental time points will be considered. Or rather, given any reasonable time frame, is it even possible to consider different developmental time points? Lastly, a common concern, especially from molecular and developmental biologists, is that neuronal identity is most likely a continuum. The authors have clearly considered this perspective, as they write:
"In some cases, there may not even be sharp boundaries separating subtypes from each other. Nonetheless, there is general agreement that types can be defined provisionally by invariant and generally intrinsic properties, and that this classification can provide a good starting point for a census." (pg. 17)
In conclusion, the report calls for a "neuron ontology" bioinformatics framework, including cross-references to homologous cell populations in different model organisms. To quote,
"The ultimate census of cell types would include all of the neurons and glia with molecular annotation at sub cellular resolution: not just mRNA expression but ion channels, synaptic proteins, intracellular signaling pathways, and so on. This is beyond the reach of current technology, but stating the goal will provide impetus to technological development." (pg. 19)
B. Leveraging the Census for Technique Development.
As a follow-up to the cell type census, the report proposes using the genetic information gathered to create a comprehensive toolkit for genetically targeting specific neuronal populations. Special mention is made of the desire for methods that generalize easily across multiple species. Also included are shout-outs to new ideas for cell type-specific delivery of transgenes (e.g. CRISPRs, TALENs, antibody-targeted liposomes).
To conclude, the grand vision of this aim is,
"development of comprehensive, general suites of tools that target expression to a brain area of interest, disseminated for broad, effective use in neuroscience labs around the world." (pg. 19)
C. The Structural Map.
This goal is divided into three anatomical scales:
- Long-range: use serial sectioning, modern tract tracing techniques, and methods like CLARITY, Scale, and SeeDB (which, incidentally, involves 100% weight/volume fructose). Again, "inclusion of other species for comparative purposes is highly desired."
- Intermediate range: determine the long and short range connections making up functional circuits. This section includes a call for improvements in transsynaptic labeling techniques (e.g. rabies virus), and calls for possible use of array tomography, GRASP and ID-Prime (see pg. 21).
- Detailed connectivity (aka short range): this section is entitled "Toward a full connectome", and comments that the bottleneck for this process is the analysis of electron microscopy data, especially given recent enhancements in data collection such as serial block-face scanning electron microscopy.
This section makes the strong statement that the area where progress is most needed is in the development of software for segmenting and assembling EM data, as well as improved methods for synapse identification. The report suggests the use of crowd sourcing for data analysis, with "individual users [spending] their own time reconstructing areas of the brain of relevance to them, using software tools made available by the experts" (pg. 22). As the quality control issues associated with this strategy are obvious, I won't belabor them.
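(Since the report leans on crowd sourcing, it is worth at least naming the safeguard such projects typically rely on: redundancy plus consensus, i.e. have several volunteers trace the same small volume and only trust the voxels they agree on. A purely illustrative sketch of that idea, mine, with entirely fake data:)

```python
# Purely illustrative: consensus over redundant crowd-sourced segmentations.
# Each "annotation" is a binary mask (does this voxel belong to the neuron?)
# produced by a different volunteer for the same small EM volume.
import numpy as np

rng = np.random.default_rng(1)

truth = rng.random((32, 32, 32)) > 0.7           # fake ground-truth mask

def noisy_annotation(error_rate=0.05):
    """A volunteer's tracing: the truth with a few voxels flipped by mistake."""
    flips = rng.random(truth.shape) < error_rate
    return np.logical_xor(truth, flips)

annotations = np.stack([noisy_annotation() for _ in range(5)])

votes = annotations.mean(axis=0)                 # fraction of volunteers marking each voxel
consensus = votes > 0.5                          # simple majority vote
uncertain = (votes > 0.2) & (votes < 0.8)        # low-agreement voxels: flag for expert review

print("single-volunteer error rate:", np.mean(annotations[0] != truth))
print("consensus error rate:       ", np.mean(consensus != truth))
print("voxels flagged for review:  ", int(uncertain.sum()))
```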
One concern of mine, prior to reading the whole report, was an over-emphasis on technique development at the expense of hypothesis-based research. It seems the authors shared my concern, as evidenced by this statement:
"Development of these new technologies should proceed hand-in-hand with application to important problems in neuroscience. ... The important point is that broad support for large-scale, dense connectomics will only appear when it begins to yield answers to specific scientific questions that could not have emerged by other means." (pg. 22-3)
How true. (Back to top)
2. Neuronal Dynamics: Recording Neuronal Activity Across Time and Space
The grand vision:
"To systematically study brain mechanisms underlying a particular behavior or cognitive process, it is important to sample neuronal activity broadly across brain structures and record from many identified cell types. It is also critical to measure and analyze neuronal activity at multiple time scales that are relevant to behavior and cognition: fast (e.g. spikes), intermediate (e.g. short-term plasticity, recurrent excitation) and slow (e.g. global attentional and arousal states; neuromodulation)." (pg. 23)
Before we get distracted by the ideal of recording every single neuron, the question must be posed: how will we know when we have recorded from enough neurons to understand a cognitive process or mental state? The report, although it poses this question, does not provide an answer, nor does it provide a conceptual framework or plan for determining one. Instead, the assumption is made that the answer is, at the very least, 'more neurons than we can currently record from'. The closest thing to a definitive statement on the matter is the comment that:
"Small systems provide a test bed for asking how much "emergent information" arises from recording an entire brain or brain structure, and provide initial clues to the density of recordings needed to characterize functional circuits... To assess the value added by monitoring all neurons in a circuit, complete or near-complete recordings should be gathered from a few model systems under a variety or conditions." (pg. 25)
I am not sure that is the only way - the monkey folks may be able to provide a computational perspective (Eric Trautmann, are you reading this?). Not to mention the obvious point that the amount of emergence could vary from organism to organism, with more complex systems exhibiting greater emergence.
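For what it is worth, one standard computational answer to the 'how many neurons is enough?' question is the neuron-dropping (or neuron-adding) curve: decode a behavioral or stimulus variable from progressively larger subsets of the recorded population and see where performance saturates. A toy sketch with entirely simulated data (none of the numbers mean anything biologically):

```python
# Toy "neuron-dropping curve" (simulated data, not a claim about any real circuit):
# decode a binary condition from progressively larger subsets of neurons and see
# where accuracy saturates -- one pragmatic answer to "how many neurons is enough?".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

n_trials, n_neurons = 400, 200
condition = rng.integers(0, 2, size=n_trials)                # e.g., stimulus A vs. B
tuning = rng.normal(0, 0.3, size=n_neurons)                  # weak, heterogeneous tuning
rates = rng.normal(0, 1, size=(n_trials, n_neurons)) + np.outer(condition, tuning)

for n in (5, 20, 50, 100, 200):
    subset = rng.choice(n_neurons, size=n, replace=False)
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          rates[:, subset], condition, cv=5).mean()
    print(f"{n:4d} neurons: decoding accuracy = {acc:.2f}")
```

Where the curve flattens is, at least for that task and that decoder, a pragmatic definition of 'enough'.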
And with that commentary, let's turn from the grand theme to specific aims.
Specific Aims
This section is divided up into several subsections that are very much business-as-usual for circuit/systems neuroscientists:
- Mapping dispersed circuits responsible for distinct behaviors.
- Using optogenetics for ... things.
- Circuit mapping using immediate early genes, à la recent studies in which immediate early genes expressed during behavior are used to drive optogenetic constructs or other reporters.
Some space is spent exploring the various recording options that are presently available; the basic discussion boils down to electrical versus optical approaches. Technologies targeted for improvement or continued use include: penetrating electrode arrays, surface electrodes, and optical sensors (calcium imaging, voltage indicators, synaptic transmission imaging (e.g. glutamate sensors), biochemical readouts of neuromodulatory states, and glial activity/metabolic coupling). And for the optical fans out there:
"Optical methods capture the central vision of the BRAIN Initiative, that of integrating many approaches into a single experiment. Optical methods can be multiplexed to combine activity monitoring, manipulation, circuit reconstruction, and characterization of a single cell's morphology and molecular constituents (or at least a subset of the above) simultaneously." (pg. 28)
Special consideration is given to the potential for adapting novel optical hardware technologies for use in neuroscience, e.g. using red/near-infrared optical indicators and non-linear optical excitation strategies to address the problem of deep tissue penetration. Also mentioned are miniaturized optics (e.g. the Schnitzer lab's tiny microscopes) and CMOS image sensor chips.
Lastly, some time is spent on the subject of nanotechnology and unanticipated innovations, particularly microwires, nanoposts and nanodiamonds:
"These and other technologies should be encouraged and given "room to breathe" but held to a standard of progress: Emphasis should be placed on supporting methods and research teams that provide a logical, clear experimental pathway from an in vitro demonstration to a set of in vivo applications in increasingly complex neuronal systems." (pg. 30)
How, exactly, is this not 'business-as-usual'? Surely this standard of progress is the baseline for normal NIH funding? Perhaps not. But maybe it should be. (Back to top)
3. Manipulating Circuit Activity
As someone working in the lab next to mine once said, we live in the age of causal neuroscience. Optogenetics has taken the whole of neuroscience quite by storm, and the current emphasis on its use is reflected in the content of this goal, which is split between:
- calling for continued incremental advances in optogenetic tools, and
- a very general laundry list of other possibilities for in vivo neuronal manipulation, including pharmacogenetics (RASSLs, DREADDs, etc.) and "new tools based on magnetic stimulation, gases, infrared excitation, ultrasound, or organic or physical chemistry." (pg. 31)
Given the enthusiasm with which the Deisseroth lab alone has a) continued to provide novel optogenetic tools, and b) gathered a sizable funding pool, we can be confident that the FY2014 goal of "a new generation of optogenetics" will be forthcoming, without any additional BRAIN Initiative resources or prompting. (Back to top)
4. The Importance of Behavior
This relatively brief section of the report points out, as the section title indicates, that a full understanding of brain processes can only be achieved in the context of behavior. Given the importance of behavior, better tools for measuring and quantifying behavior are called for, as is the standardized use of those tools across labs and studies. The report particularly encourages scientists to use both formal psychophysics and freely moving animals engaged in naturalistic tasks. Animal-on-a-ball experiments with virtual reality setups are highlighted as particularly amenable to optical/electrophysiological recording and stimulation experiments. Lastly, the report describes a need for automated tracking and quantification of behavior; it suggests leveraging advances in the fields of machine vision and machine learning for improved quantification of high-dimensional behavioral data (a minimal tracking sketch follows the quote below). The short term goal is to promote the development of
"technologies to quantify and interpret animal behavior, at high temporal and spatial resolution, reliably, objectively, over long period of time, under a broad set of conditions, and in combination with concurrent measurement and manipulation of neuronal activity. (pg. 33)"
5. Theory, Modeling and Statistics are Important
Subtitle: Send in the Mathematicians! Physicists and computer scientists too, please.
The amount of data any given neuroscience experiment extracts from the brain is large, and as we achieve the goal of recording from larger numbers of neurons for longer periods of time, the resulting data sets will grow larger and more complex. This increase raises the question: what are the best tools for quantifying highly complex, high-dimensional data? The relatively simple statistics that fill so many neuroscience articles (e.g. t-tests) are not designed for quantifying richly sampled complex systems; new tools for statistical quantification will be required. Furthermore, computational modeling will likely, inevitably, be needed to tease apart the neural dynamics occurring across multiple cell populations and time scales. So the authors make a detailed statement regarding the importance of incorporating computational models and appropriate statistical approaches in neuroscience experiments. In particular, they encourage partnerships between experimentalists and theorists, along the lines of Hodgkin and Huxley (action potential generation), and Schultz, Sutton and Dayan (the role of the dopaminergic system in the computation of prediction errors in reinforcement learning).
Suggested focus areas for such partnerships are:
- developing new statistical and quantitative approaches for the increasingly complex neural data sets,
- developing more sophisticated dimensionality reduction techniques (a toy sketch follows this list),
- determining how to model the interactions of brain processes that occur at multiple temporal scales (e.g. firing patterns, working memory, behavior),
- considering interactions between brain regions.
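As a cartoonish illustration of the dimensionality reduction bullet, here is a toy sketch of my own (entirely simulated data, no claim about real recordings): many neurons driven by a few shared latent signals, with plain PCA recovering that low-dimensional structure from the high-dimensional population recording.

```python
# Toy illustration of dimensionality reduction on population activity (simulated):
# many neurons share a few latent signals, and PCA recovers that low-dimensional
# structure from the high-dimensional recording.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

n_timepoints, n_neurons, n_latents = 1000, 120, 3
t = np.linspace(0, 10, n_timepoints)
latents = np.column_stack([np.sin(t), np.cos(2 * t), np.sin(0.5 * t)])   # shared dynamics
loadings = rng.normal(size=(n_latents, n_neurons))                       # how each neuron mixes them
rates = latents @ loadings + rng.normal(scale=0.5, size=(n_timepoints, n_neurons))

pca = PCA(n_components=10).fit(rates)
print(np.round(pca.explained_variance_ratio_, 3))   # variance concentrates in ~3 components
```

The report clearly has fancier, purpose-built methods in mind, but the logic is the same: the interesting dynamics often live in far fewer dimensions than the number of neurons recorded.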
The grand vision: We should strive for a
"systematic theory of how information is encoded in the chemical and electrical activity of neurons, how it is fused to determine behavior on short times scales, and how it is used to adapt, refine and learn behaviors on longer time scales. Finally, humans and perhaps some animals have the capacity for symbolic computation using language and in other domains as well; a brain based theory of these higher functions is notably lacking." (pg. 37)
A tall order for an initiative with a set time frame. In the short term, the report calls for a focus on fostering collaborations between neuroscientist experimentalists and "scientists from statistics, mathematics, engineering and computer science."
In conclusion: 'Hello, hard-science graduate students, postdocs and PIs. Come do neuroscience. We have cookies. And a limited funding source.' (Back to top)
6. Human Neuroscience and Neurotechnology
One of the aims most accessible to the general public, this section calls for 1) figuring out, once and for all, the precise neural meaning of the various brain imaging techniques applied in clinical settings (e.g. MRI, MEG, EEG, PET), and 2) collaborations between clinicians and neuroscientists.
A. Human Brain Imaging
The disconnect between human brain imaging techniques and the recording techniques commonly used by animal model-based neuroscientists makes it difficult to translate knowledge between clinicians and experimentalists. As the report states,
"Thus is is critically important to develop and experimentally validate theories of how MR signals are based in the underlying cellular-level activity [of the brain]..." (pg. 39)
As is apparent from the above quotation, particular emphasis is placed on understanding the cellular basis of MRI signals: both 1) how the hemodynamic response arises from activity in different cellular populations (including excitatory/inhibitory neurons, glial cells, axons, presynaptic terminals), and 2) how the local activity of those populations is spatially integrated into the voxels captured in the hemodynamic response. Suggested experiments include using optogenetics to elicit a known quantity of neuronal activity and observing those known signals at the level of fMRI.
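The logic of that suggested experiment is easy to sketch (my illustration, not the report's): convolve a known, optogenetically imposed pattern of neural activity with a canonical hemodynamic response function to get the BOLD signal you would expect to measure, then compare that prediction against the fMRI data actually collected. The HRF parameters below are common defaults, not values taken from the report.

```python
# Toy sketch: predict the BOLD signal expected from a known (optogenetically
# imposed) pattern of neural activity by convolving it with a canonical
# double-gamma hemodynamic response function (HRF).
import numpy as np
from scipy.stats import gamma

dt = 0.1                                         # seconds per sample
t = np.arange(0, 30, dt)

# Canonical double-gamma HRF (peak ~5-6 s, undershoot later); these are the
# commonly used default parameters, not values from the report.
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.max()

# "Known quantity" of neural activity: 2 s stimulation blocks every 20 s.
time = np.arange(0, 200, dt)
neural = ((time % 20) < 2).astype(float)

bold_prediction = np.convolve(neural, hrf)[: len(time)] * dt
print(bold_prediction.max())                     # predicted peak response (arbitrary units)
```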
Additional areas of interest include:
- studying resting state networks,
- developing advanced source localization tools for improving the spatial resolution of EEG/MEG data (perhaps by combining the rapid temporal dynamics of EEG recordings with the high spatial resolution of fMRI),
- improving devices for monitoring/stimulating human brains (e.g. DBS electrodes, closed-loop systems, brain-computer interfaces, chronic multi-electrode arrays)
For this last area of interest, further emphasis is placed on improving the reliability and durability of implanted devices, as well as incorporating advances made in devices designed for animal models into devices for human patients. Here we even find mention of the possibility of human optogenetics: "the use of optogenetics tools in humans is conceivable in the mid- to long-term." (pg. 45)
And as with the previous section dealing with computational modeling and statistics, great emphasis is placed on the importance of strong collaborations between clinicians and experimentalists. (Back to top)
7. Education
The point of this section is well summarized by the opening sentence:
"New tools, whether they come in the form of equipment, molecular clones, or data analysis algorithms, should be disseminated to a wide scientific user base, along with the knowledge required to wield them." (pg. 45)
Examples of the sort of training centers envisioned by the authors include the Cold Spring Harbor and Woods Hole courses, as well as the Optogenetics Innovation Laboratory (OIL) training courses run out of Stanford's BioX facility. Also getting a shout-out is a need to provide training in quantitative neuroscience, whether that be via teaching theory/statistics to biologists, or via teaching neuroscience to physicists, engineers and statisticians. This last point is not necessarily accomplished solely by increasing the numbers of math majors accepted into neuroscience PhD programs, or the number of mathematics courses required by PhD curriculums. Also encouraged is the recruitment of faculty members from "quantitative disciplines" into neuroscience. (Back to top)
8. Maximizing the Value of the Brain Initiative: Core Principles
This final section is a grab bag of "principles and approaches [that] can maximize the intellectual value and long-term impact of all aspects of the BRAIN Initiative." (pg. 47) As this post is far too long, I'll merely list these principles/approaches below; those readers interested in the specifics can find the section beginning on page 47.
The Core Principles
- Use appropriate experimental systems and models. (A side note for the folks who review my grants: "Important insights into the brain have come from studies of many other creatures (barn owls, electric fish, chickens, bats, and more). The most significant technologies developed by the BRAIN Initiative should facilitate experiments in these and other specialized animals, broadening the reach and scope of questions that can be asked about the brain." (pg. 48))
- Cross boundaries in interdisciplinary collaborations. Encouraged pairings include: physicists/engineers with biologists/chemists, tool builders with neuroscientists, theorists with experimentalists, clinicians with animal model-based scientists.
- Integrate spatial and temporal scales, and accelerate all of neuroscience.
- Establish platforms for sharing data. Specifically, learn from existing public datasets such as: the Allen Brain Atlas, the Mouse Connectome Project, the Open Connectome Project, the CRCNS data sharing project, ModelDB and the Human Connectome Project. For a first attempt/framework, see: the Neuroscience Information Framework.
- Validate and disseminate new technology. "If the BRAIN Initiative develops next-generation electrodes, nanotechnologies, or chemical probes, it should ensure that they can be synthesized, fabricated, or readily purchased by researchers, as appropriate. If the BRAIN Initiative funds development of a next-generation microscope, it should ensure that private or publicly supported mechanisms make it available to a variety of users, not just the inventors. A pathway to dissemination should be expected for BRAIN-derived tools, whether that is commercialization, core facilities, or something else." (pg. 52). Absent from this discussion is how to disseminate knowledge learned during studies funded by the BRAIN Initiative. Would scientists be required to publish in open access journals to ensure the broadest dissemination of their findings?
- Consider ethical implications of the BRAIN Initiative.
Concluding Remarks
The above post, though much longer than I anticipated, does not cover the full extent of the Interim Report. I highly encourage readers to consult the source material for more details on the proposals that most closely address their personal interests.
And finally, as the authors themselves note, the Interim Report functions to pose questions, providing few answers. We will have to wait until June 2014 for the final report, which will presumably contain a more concrete vision and set of deliverable goals.