Ask a Neuroscientist: Can dopamine release become addicting?

In this issue of Ask a Neuroscientist, Dr. Talia Lerner fields a question about the exact relationship between dopamine and addiction. Writing in response to a question from Peter Senavallis, Talia says: "the dopamine hypothesis of drug addiction ... has been a driving force in addiction research ever since people noticed that addictive drugs all seem to act in one way or another on dopamine regulation." However, she notes (and describes) recent research that "calls into question the idea that dopamine neuron stimulation would be sufficient to induce and sustain all the classic hallmarks of addiction, both behavioral and molecular."


Ask a Neuroscientist: Motor Skills and Handedness

Eric (age 18) asks: How different are the fine motor skills in your dominant hand compared to your non-dominant hand? Say, if I have used a computer mouse for my entire life with my right hand, but am left-handed, would my computer mouse accuracy improve if I now switched to using the mouse with my left hand? How long would it take to catch up to my right-handed computer mouse skills?


Are you there, God? It’s me, dopamine neuron

Dopamine neurons are some of the most studied, most sensationalized neurons out there. Lately, though, they’ve been going through a bit of an identity crisis. What is a dopamine neuron? Some interesting recent twists in dopamine research have definitively debunked the myth that dopamine neurons are all of a kind – and you should question any study that treats them as such.


Thinking about Thinking

Like most neuroscientists, I’ve often thought about consciousness. I’ve worried about free will. And then I’ve gotten goosebumps and given up when I realized that I was consciously, willfully thinking about how consciousness and free will are illusions. Michael Graziano of Princeton University, however, has doubled down and tried to formulate a coherent theory of consciousness. He calls it “Attention Schema Theory.” While it’s far from the only theory of consciousness out there, it’s intriguing enough to me to be worth further consideration here.

Before I describe Attention Schema Theory, let’s do a little preliminary thinking about thinking. Very little actual data exists that tells us much about the nature of consciousness – it is a hard problem (or, as philosopher David Chalmers put it, the hard problem) – but we do have a few things to work with.

Everyone feels that he or she is conscious.

When I write about consciousness, every single reader knows intuitively what I mean. To be reading and understanding this blog post, you’ve got to be conscious. Many philosophers, though, argue that we can only know for sure about our own consciousness. You could all be “zombies,” automatons programmed cleverly to reply to my statements in an apparently conscious way.

We are predisposed to assume that others are conscious.

Despite the existence of the zombie theory, in everyday life most people assume other people are conscious. In fact, we go much further. We also often ascribe consciousness to animals (plausible in the case of animals with a reasonably complicated nervous system) as well as teddy bears, cartoon characters, and things with googly eyes stuck to them (even though we know, intellectually, that’s implausible). We even sometimes ascribe consciousness to computers – yelling at them when they break, pleading with them to do what we want, and being tricked into thinking they are human in (admittedly constrained) Turing tests. What is it about our own consciousness that so inclines us to presume consciousness in others? Why do we tend to equate eyes and facial expressions with real emotions?

"Heavy on the Nose" via eyebombing.com

Consciousness is inaccurate.

Many discussions of consciousness focus on its definition as a state of awareness. Awareness, though, can be tricky. If I see a cat and think “a cat!” then I’m having a conscious experience of a cat, for sure. But if I see a crumpled rag and think, just for a moment, “a cat!” then did I just have a conscious experience of a cat? Basically, yes. Our consciousness is easily duped by illusions, which reveal that consciousness involves assumptions made by our brain that can be independent of sensory experience. The delusions suffered by some psychiatric patients offer a stark example. In the article that inspired this post, Michael Graziano describes a patient who knew he had a squirrel in his head, despite the fact he was aware it was an illogical belief and claimed no sensory experience of the squirrel. Another example of the inaccuracy of consciousness is one we can all experience. It’s called the “phi phenomenon.” Two dots flash on a screen sequentially. If the right timing is used, it appears that there’s only one dot, which moves from one place to the other. In other words, although the dot did not, in reality, slide smoothly across the screen from one place to the other, our consciousness inaccurately perceives motion. Daniel Dennett uses the phi example in his exposition of his “Multiple Drafts” model of consciousness.
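
The phi demo is easy to try for yourself. Below is a minimal sketch in Python using matplotlib's animation module; the roughly 100 millisecond interval is just an illustrative starting point, and you may need to adjust the timing before the two alternating dots fuse into a single dot in apparent motion.

```python
# Minimal sketch of a phi phenomenon demo: two dots flash alternately at
# fixed positions. With the right timing, they are perceived as one dot
# moving back and forth. The ~100 ms interval is an illustrative guess.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots(figsize=(6, 2))
ax.set_xlim(0, 10)
ax.set_ylim(0, 2)
ax.axis("off")

# Two dots; only one is visible on any given frame.
left_dot, = ax.plot(3, 1, "ko", markersize=20)
right_dot, = ax.plot(7, 1, "ko", markersize=20)

def update(frame):
    show_left = frame % 2 == 0  # alternate which dot is drawn
    left_dot.set_visible(show_left)
    right_dot.set_visible(not show_left)
    return left_dot, right_dot

# interval=100 shows each dot for about 100 ms before switching.
anim = FuncAnimation(fig, update, frames=200, interval=100)
plt.show()
```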

Consciousness can be manipulated.

Self-awareness is not always what it seems. Humans are programmed to search for patterns and meaning and we are naturally inclined to attribute causation to correlated events even when no such relationship exists. We are suggestible. We can become even more suggestible and less autonomous when hypnotized. In numerous psychology studies, researchers have described various ways of reliably manipulating participants’ choices (for example using subtle peer pressure). Most of the time, the participants are not even aware of the manipulation and insist they are acting of their own free will. In addition to being a state of awareness, consciousness is conceived of as a feeling of selfhood, a sense of individuality that separates you from the rest of the world and allows you to find meaning in the words “me” and “you.” However, this feeling of selfhood can also be manipulated. Expert meditators such as Buddhist monks have trained themselves to erase this feeling of selfhood in order to experience a feeling of “oneness” while meditating. Brain scans of the meditating monks don’t provide a lot of details on the mechanisms underlying “oneness” but do suggest that the monks have learned to significantly alter their brain activity while meditating. A feeling of oneness can also be thrust upon you: Jill Bolte Taylor, the neuroscientist author of “My Stroke of Insight”, describes a feeling of oneness and loss of physical boundaries as her massive stroke progressed. Hallucinogenic drugs such as LSD can also provoke feelings of oneness. Out-of-body experiences fall into the same category: they can often be induced by meditation, drugs, near-death experiences, or direct brain stimulation of the temporoparietal junction. Damage to the temporoparietal junction on one side of the brain results in “hemispatial neglect,” in which a person essentially ignores the opposite side of their body and may even deny that this side of the body is a part of their self.

-----------------------

Now, let’s get back to Attention Schema Theory. What is this theory and how does it help fit some of our observations about consciousness together? Is it a testable theory? Can it help drive consciousness research forward? At the heart of Attention Schema Theory is an evolutionary hypothesis. It assumes that consciousness is a real thing, physically represented in the brain, which evolved according to selective pressure. The first nervous systems were probably extremely simple, something like the jellyfish “nerve net” today. They were made to transduce an external stimulus into a signal within the organism that could be used to effect an adaptive action. The more information an organism could extract from its environment, the greater an advantage it had in surviving and reproducing in that environment, so many sophisticated sensory modalities developed. But there’s lots of information in the world – lights, sounds, smells, etc. coming at you from every angle all the time. It can be overwhelming and distracting. How do you know which bits of information are actually important to your survival and which can be ignored? The theory is that some kind of top-down control network formed that enhanced the most salient signals (think: a sudden crashing sound, the smell of food, anything that you previously learned means a predator is nearby). From this control network came attention. Attention allows you to focus on what’s important, but how do you know what’s important? Slowly, attention increased in sophistication. It went from, for example, always assuming the smell of food is attention-worthy to being able to decide whether it’s attention-worthy by modeling your own internal state of hunger. If you’re not hungry, it’s not worth paying attention to food cues. Finding shelter or a mate might be more important. According to Graziano, this internal model of attention is what constitutes self-awareness. Consciousness evolved so that you can relate information about yourself to the world around you in order to make intelligent decisions. But since consciousness is just a shorthand summary of an extremely complex array of signals, a little pocket reference version of the self, it involves simplifications and assumptions that make it slightly inaccurate.

Attention Schema Theory at a glance: selective signal enhancement to consciousness. via Graziano Lab Website
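
To make the logic of this evolutionary story concrete, here is a toy sketch in Python. It is emphatically not Graziano's model; the signals, weights, and "hunger" variable are invented for illustration. It only shows the step the theory emphasizes: a purely bottom-up system always attends to the strongest signal, while a system with a simple model of its own internal state can decide differently.

```python
# Toy illustration (not Graziano's model): bottom-up salience alone versus
# salience re-weighted by a simple internal-state model. All numbers and
# the weighting rule are arbitrary assumptions made for the example.

def most_salient(signals):
    """Attend to whichever signal has the largest raw (bottom-up) salience."""
    return max(signals, key=lambda s: s["salience"])

def most_relevant(signals, internal_state):
    """Re-weight salience by how relevant each signal is to current needs."""
    def score(s):
        relevance = internal_state.get(s["need"], 0.0)  # e.g. current hunger
        return s["salience"] * (0.5 + relevance)        # arbitrary weighting
    return max(signals, key=score)

signals = [
    {"name": "smell of food",  "salience": 0.6, "need": "hunger"},
    {"name": "crashing sound", "salience": 0.8, "need": "threat"},
    {"name": "potential mate", "salience": 0.5, "need": "mating"},
]

# A purely bottom-up system always picks the loudest signal.
print(most_salient(signals)["name"])                    # crashing sound

# With an internal model, what wins attention depends on the organism's state.
print(most_relevant(signals, {"hunger": 2.0})["name"])  # smell of food
print(most_relevant(signals, {"hunger": 0.0})["name"])  # crashing sound
```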

Consciousness isn’t quantal. Basic self-awareness is only the beginning. What about being able to visualize alternative realities? What about logical reasoning abilities? What about self-reflection and self-doubt? Graziano does not address all of the aspects of consciousness that exist or how they might have evolved, but he does go on to talk a bit about how consciousness informs complex social behaviors. If you’re living in a society, it helps to be able to model what other people are thinking and feeling in order to interact with them productively. To do this, you have to understand consciousness in an abstract way. You have to understand that your consciousness is only your perspective, not an objective account of reality, and that adds an additional level of insight and self-reflection into the equation.  It’s worth noting that there is a specific disorder in which this aspect of consciousness is impaired: autism.

-----------------------

Most of the appeal of Attention Schema Theory, to me, lies in its placement of consciousness as a fully integrated function of the brain. It doesn’t suppose any epiphenomenal aura that happens to be layered on top of normal brain function but that serves no real purpose. Instead, it says that consciousness is used in decision-making. It presents an evolutionary schema of why we might be conscious and also why we tend to attribute consciousness (especially emotions) to others. It explains, somewhat, why consciousness is inaccurate and malleable: it’s not built to represent everything about the real world faithfully, it’s just meant to be a handy reference schematic.

Attention Schema Theory isn’t entirely satisfying, though. It’s the outline of an interesting line of reasoning but not a complete thought. No actual brain mechanisms or areas are identified or even hypothesized. How is consciousness computed in the brain? I agree with Daniel Dennett that there’s no “Cartesian theater,” but there must be some identifiable principle of human brain circuit organization that allows consciousness. To move any theory of consciousness forward scientifically, we need a concrete hypothesis. But we don’t just need a hypothesis: we need a testable hypothesis. Without a way of experimentally measuring consciousness, the scientific method cannot be applied. Currently, our concept of consciousness stems only from our own self-reporting and, as mentioned above, the only consciousness you can really truly be sure of is your own.

Given the suppositions of Attention Schema Theory, though, there may be some proxies of consciousness we can study that would help us flesh out our understanding and piece together reasonable hypotheses. First, attention. Attention is by no means consciousness (I can tune a radio to a certain frequency but that doesn’t mean it’s conscious), but if consciousness evolved from attention then they should share some common mechanisms. Many neuroscientists already study attention, but they may not have considered their research findings in light of Attention Schema Theory. Perhaps there are already some principles of how brain circuits support selective attention that could be adapted and incorporated into Graziano’s schema. If consciousness really evolved from attention, then there should exist some “missing links,” organisms that display (or displayed) transitional states of consciousness somewhere between rudimentary top-down mechanisms for directing attention and the capacity for existential crises. Can we describe these links?

Second, theory of mind. Theory of mind is our ability to understand that other minds exist that may have different perspectives than our own. Having theory of mind should require a sophisticated version of consciousness, but the absence of theory of mind does not imply a lack of consciousness. You don’t need to be aware of others’ minds to be aware of your own. Most children with autism fail tests of theory of mind, but are still clearly conscious beings. Still, theory of mind and consciousness should be related if Graziano is right, and we know a few things about theory of mind. Functional imaging studies point towards the importance of the anterior paracingulate cortex as well as a few other brain areas in understanding the mental states of others. “Mirror neurons,” neurons that respond both when you perform an action and when you watch someone else perform that same action, have been discovered in the premotor cortex of monkeys, and some have argued that monkeys and chimpanzees have theory of mind. If they do, then we’d at least have a potential animal model in which to pursue further neurophysiological research (though the ethics of such research could be thorny). There is very little evidence to support theory of mind in lower mammals such as rats. Even in that case, though, comparative anatomy studies of theory of mind-related brain areas identified by functional imaging could be informative. We already know of one interesting, mostly hominid-specific class of neurons that exists in suggestive cortical areas (such as anterior cingulate cortex, dorsolateral prefrontal cortex, and frontoinsular cortex): spindle neurons, also known as von Economo neurons (actually, these neurons can also be found in whales and elephants!). Unfortunately, we have no idea what these neurons do, yet. Further studies of von Economo neurons could tell us about the mechanisms underlying theory of mind and, by extension, consciousness. Maybe.

Location of von Economo neurons. via Neuron Bank

I’ll be curious to see where Graziano goes with his Attention Schema Theory. It is, at the very least, a bold attempt at answering a question that has vexed humanity through the ages. I wonder, though, whether the question can ever be answered. Perhaps you are now inspired to go out and do some awesome research. I, for one, am getting goosebumps again, so I think I’ll take a break.


A Myriad of Problems

Human genes can’t be patented. So said the Supreme Court in their June 13 decision in Association for Molecular Pathology v. Myriad Genetics, Inc.  I heard the news that morning on NPR and cheered aloud, even though I was alone. Then I paused. Myriad Genetics had patents on the genes BRCA1 and BRCA2, which are associated with breast and ovarian cancer. The value of these genes, for now at least, rests mostly in the information they contain (that is, they are not drug targets). People with a strong family history of breast or ovarian cancer can have samples of their own DNA tested by Myriad to determine if they have inherited particular mutations in these genes that put them at risk for developing cancer themselves. The stakes of this testing are high: many people who test positive for cancer-causing BRCA mutations opt for radical surgical procedures like double mastectomies and ovariectomies. Yet Myriad’s tests are quite expensive and, until the Supreme Court decision, only Myriad could legally run them – no shopping around, no second opinions.

Myriad Genetics (along with various partners, including, notably, publicly-funded scientists at the University of Utah, the University of Pennsylvania and the National Institute of Environmental Health Sciences) cloned, sequenced, and identified key mutations in BRCA1 and BRCA2 in the 1990s. The research was costly (though again, it is important to note that some of that cost was borne by the American taxpayer), so Myriad wanted to patent the BRCA genes in order to profit from their discoveries. Without patent protection, anyone could have come along and provided low-cost BRCA testing. That’s because the techniques involved in BRCA testing are not actually that complicated. The valuable discoveries that Myriad and its collaborators made didn’t have to do with developing the testing process (i.e. gene sequencing), but with telling testers where and what to test for. Once those things were known, anyone could run a test, or rather, anyone could have run a test had Myriad not been granted gene patents.
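
To see why, here is a toy sketch of what such a test amounts to once the key sites are known: a lookup of the patient's sequence at a handful of positions. The gene coordinates and variants below are entirely hypothetical placeholders, not real BRCA mutations, and a real clinical assay also has to handle quality control, large deletions and rearrangements, and variants of unknown significance.

```python
# Toy sketch: once you know where to look and what to look for, screening a
# sample is essentially a lookup. The positions and bases below are
# hypothetical placeholders, not real BRCA variants.

KNOWN_RISK_VARIANTS = [
    # (gene, position, reference base, risk base)
    ("BRCA1", 101, "A", "G"),
    ("BRCA1", 202, "C", "T"),
    ("BRCA2", 303, "T", "G"),
]

def screen_patient(patient_bases):
    """patient_bases maps (gene, position) to the base observed in the patient."""
    hits = []
    for gene, pos, ref, risk in KNOWN_RISK_VARIANTS:
        observed = patient_bases.get((gene, pos), ref)  # assume reference if untested
        if observed == risk:
            hits.append((gene, pos, ref, risk))
    return hits

# A hypothetical patient carrying one of the listed risk variants.
patient = {("BRCA1", 101): "A", ("BRCA1", 202): "T", ("BRCA2", 303): "T"}
for gene, pos, ref, risk in screen_patient(patient):
    print(f"{gene} position {pos}: risk variant {ref}->{risk} detected")
```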

Depending on your perspective, competition in the BRCA testing market could have been good or bad. Low-cost testing would have been better for patients and insurance companies. But if competition had been allowed, would Myriad have even bothered with the investments necessary to develop the test in the first place? Probably not. Why put in all that money and effort and risk of failure when you won’t be able to reap the profits? Why not wait around and see if some other sucker will try it? In the absence of patent guarantees, the private sector will not make risky investments in nascent biotechnologies. That’s not to say no test would ever have come along. There were probably some selfless science heroes out there who would have done it without a profit motive (Jonas Salk, inventor of the polio vaccine, famously refused to patent his invention, lest that limit its availability), but it would have taken longer. It also would likely have required government funding of the Salk-esque scientists who would be willing to do it.

For a stark illustration of public vs. private approaches to science, look no further than the Human Genome Project. The public Human Genome Project, which officially began in 1990 and was led primarily by Francis Collins, now the director of the NIH, was plugging along slowly but surely on sequencing a human genome, dumping its new sequences into a giant open-access database called GenBank every 24 hours. Then, in 1998, along came Celera, Craig Venter’s company, which was using “shotgun sequencing” techniques to piece together genetic information much more rapidly and cheaply than the government project had been doing. Celera’s work on the Human Genome Project accelerated its progress substantially. In the end, the Human Genome Project came in several years ahead of schedule and under budget. But Celera also changed the nature of the project by delaying the release of its data (it agreed to release data yearly instead of daily), refusing to release data into the government’s open-access database, and seeking to patent the genes it had sequenced first, sometimes without knowing anything about what they did (in total, it filed 6,500 preliminary “placeholder” gene patents).
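
For readers unfamiliar with the term, "shotgun sequencing" means reading the genome as an enormous number of short, overlapping fragments and letting software stitch them back together by finding the overlaps. The sketch below is only a toy version of that idea, with invented fragments and a greedy merge; real assemblers must also cope with sequencing errors, repetitive DNA, and billions of reads.

```python
# Toy illustration of the idea behind shotgun sequencing: short overlapping
# fragments are merged back together greedily by their overlaps. The
# fragments are invented; real assemblers are vastly more sophisticated.

def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        # Find the pair of fragments with the largest overlap and merge them.
        n, a, b = max(
            ((overlap(x, y), x, y) for x in reads for y in reads if x != y),
            key=lambda t: t[0],
        )
        if n == 0:
            break  # nothing overlaps anymore; stop merging
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])
    return reads

fragments = ["GATTACAGG", "ACAGGTTCA", "GTTCACCGA"]
print(greedy_assemble(fragments))  # ['GATTACAGGTTCACCGA']
```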

In summary, Celera was good for genomics in that it accelerated progress toward a specific goal, but bad for science in that it impeded the efforts of others to make use of and build on Celera’s achievements. So, the question for us, as citizens, taxpayers, consumers and patients, is this: do you want specific projects done fast and cheap? Or do you want to pay, with your tax dollars, for open academic exchanges of information that will drive further innovation? Do you want the private sector to pay for research to develop new genetic tests and pharmaceuticals, in exchange for which we will grant them temporary monopolies? Or do you want to pay, with your tax dollars, for the research, which will permit the immediate availability of generics from a variety of competing companies?

Perhaps there is a middle ground between these two extremes. For one thing, it is unrealistic to think that all science could be government funded. It’s too expensive and a complete lack of competition would lead to profound inefficiencies. On the other hand, gene patents are too broad and cause too much restriction (thus my happiness and relief on hearing they are gone). A gene is not so much a material thing as it is information, and granting one entity the exclusive legal right to make use of information restricts intellectual freedom and scientific progress. What might make sense is to issue patents only for specific applications. We do currently allow patents on ideas in the form of “inhibit protein X to treat disease Y.” We can do the same for genes, allowing companies to pursue gene therapies under patent protection without allowing them to own the actual genes they’re seeking to manipulate.

Another idea percolating in the background is to offer monetary rewards for specific discoveries. If the rewards offered are sufficient, such a system could provide incentives for individuals or companies to pursue worthy goals independently of offering them patent protection. The XPrize Foundation has incentivized some amazing inventions, including the design of commercially viable passenger spacecraft. A new prize, the Archon Genomics XPrize, is offering $10 million for the complete, accurate whole-genome sequencing of 100 centenarians. The XPrize model is interesting, but there are still questions simmering over who will retain intellectual property rights on the inventions made by XPrize competitors and how that will affect participation in the competitions. Another issue is the prize money. $10 million is a lot, but probably not enough to cover the costs of the eventual winner of the prize, so the XPrize as it is can’t replace patent rights as the sole incentive to achieve this feat.

As we consider pragmatically what can be done to reform biological patent law, it is impossible to ignore the fact that we are talking about biology, the study of life. The question of whether human genes, or living organisms, or any parts of living organisms are patentable is not just a pragmatic question, but also an ethical one. The Myriad case notwithstanding, we are trending more and more towards granting patents on life.  There are patents on genetically engineered viruses, bacteria, plants, animals, and even human stem cells. It sounds very creepy, but I still can’t help questioning whether some of those patents really are justified in the name of promoting innovation. As scientists develop more and more ways to use biomolecules as machines (e.g. to make computers), we are blurring the lines between invention and discovery, between innovation and information, between synthetic and natural. As a result, we are going to need to think long and hard about how to handle biological patents in the future. Preferably, we would do this as a society, through open debate and clear legislation, rather than waiting for the courts to do it for us.

Neuroscience Goes Big

Much has been made recently of President Obama’s announcement of the 100 million dollar BRAIN initiative to…well, to do what exactly? Some scientists exude optimism about the project, perhaps because they’re simply heartened to hear there’s money on the table for research. Other scientists are highly critical, citing the initiative’s lack of focus. Will the BRAIN initiative mean big-government intervention in the process of science? Will it scavenge resources from other important scientific initiatives? Will it produce vast mounds of data that we do not yet have a coherent way of processing and analyzing? Maybe. It all depends on what the BRAIN initiative really is.
