
Ben Goldacre, doctor and author of Bad Science, explains what the placebo effect is and describes its role in medical research and in the pharmaceutical industry.



Science is the only news. When you scan through a newspaper or magazine, all the human interest stuff is the same old he-said-she-said, the politics and economics the same sorry cyclic dramas, the fashions a pathetic illusion of newness, and even the technology is predictable if you know the science. Human nature doesn’t change much; science does, and the change accrues, altering the world irreversibly.

— Stewart Brand, Whole Earth Discipline (2009), p. 216



Here is an article, hilarious in retrospect, from the Newsweek issue dated Feb 27, 1995.

After two decades online, I’m perplexed. It’s not that I haven’t had a gas of a good time on the Internet. I’ve met great people and even caught a hacker or two. But today, I’m uneasy about this most trendy and oversold community. Visionaries see a future of telecommuting workers, interactive libraries and multimedia classrooms. They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems. And the freedom of digital networks will make government more democratic.

Baloney. Do our computer pundits lack all common sense? The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works.

How about electronic publishing? Try reading a book on disc. At best, it’s an unpleasant chore: the myopic glow of a clunky computer replaces the friendly pages of a book. And you can’t tote that laptop to the beach. Yet Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure.

Logged onto the World Wide Web, I hunt for the date of the Battle of Trafalgar. Hundreds of files show up, and it takes 15 minutes to unravel them–one’s a biography written by an eighth grader, the second is a computer game that doesn’t work and the third is an image of a London monument. None answers my question, and my search is periodically interrupted by messages like, “Too many connections, try again later.”

Then there are those pushing computers into schools(…)–but think of your own experience: can you recall even one educational filmstrip of decades past?

Then there’s cyberbusiness. We’re promised instant catalog shopping–just point and click for great deals. We’ll order airline tickets over the network, make restaurant reservations and negotiate sales contracts. Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month?

And who’d prefer cybersex to the real thing?

Read the whole piece; many more precious insights can be found in it:

via @KnightMare


The Singularity: An Appraisal from Michael Johnson on Vimeo.

This panel was held at Boskone 47 in Boston, MA on February 12th, 2010. Moderating was the Guest of Honor, Alastair Reynolds. Other panel participants included several-time Hugo Award winner Vernor Vinge, Locus Award winner Charles Stross, and Karl Schroeder.


Lots of links to scaling visualisations and size comparisons from galaxy clusters to the carbon atom.

  • Amazing: Scale of the Universe

…check out this incredible interactive Flash animation from NewGrounds that provides a scale of the Universe, from the very small (0.0000000001 yoctometers) to as large as we know, the estimated size of the Universe. Click here to access, and after it loads, use the slider at the bottom to zoom in and out. Gives you a new appreciation for all that’s out there, big and small!

  • Cell Size and Scale

  • Powers of 10 New!

  • The Stars and the Grand Universe

  • Holy Cow We’re Small! Biggest Stars to Biggest Galaxies

  • The Known Universe by AMNH

  • My Location in the Universe

  • Did you know?

If New York to Chicago = from Earth to Alpha Centauri, then the Earth–Moon distance shrinks to about a centimeter.

  • Solar System Scale Model

This page shows a scale model of the solar system, shrunken down to the point where the Sun, normally more than eight hundred thousand miles across, is the size you see it here.

  • A simulated voyage through the solar system

At the speed of today’s fastest spacecraft (~20 km/second), it would take almost ten years to travel this distance. Even at the speed of light, the trip would last 5 1/2 hours. In this animation, the apparent speed of the viewer is over 300 times the speed of light.
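Those two figures are easy to sanity-check. A minimal back-of-the-envelope sketch in Python, assuming the voyage spans roughly the Sun-to-Pluto distance of about 5.9 billion kilometers (my assumption; the animation does not state the distance explicitly):

```python
# Rough check of the travel times quoted above.
# Assumption: the simulated voyage covers ~5.9 billion km (about Sun to Pluto).
distance_km = 5.9e9          # assumed span of the voyage, in kilometers
craft_speed_km_s = 20        # today's fastest spacecraft, km/s
light_speed_km_s = 299_792   # speed of light, km/s

SECONDS_PER_YEAR = 365.25 * 24 * 3600
craft_years = distance_km / craft_speed_km_s / SECONDS_PER_YEAR
light_hours = distance_km / light_speed_km_s / 3600

print(f"Spacecraft: {craft_years:.1f} years")   # close to ten years
print(f"Light:      {light_hours:.1f} hours")   # about five and a half hours
```

With those assumed inputs the arithmetic lands close to the quoted numbers: a bit over nine years for the spacecraft and roughly five and a half hours at light speed.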

  • Infographic: Tallest Mountain to Deepest Ocean Trench New!

  • Height

  • Gravity Wells

  • The MegaPenny Project

Visualizing huge numbers can be very difficult. People regularly talk about millions of miles, billions of bytes, or trillions of dollars, yet it’s still hard to grasp just how much a “billion” really is. The MegaPenny Project aims to help by taking one small everyday item, the U.S. penny, and building on that to answer the question: “What would a billion (or a trillion) pennies look like?”
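The same stacking trick is easy to reproduce yourself. A minimal sketch, assuming a standard U.S. penny thickness of about 1.52 mm (the exact figure the MegaPenny Project uses may differ):

```python
# How tall would a single stack of pennies be?
PENNY_THICKNESS_M = 0.00152  # ~1.52 mm per penny (assumed)

def stack_height_km(pennies: int) -> float:
    """Height of one stack of the given number of pennies, in kilometers."""
    return pennies * PENNY_THICKNESS_M / 1000

print(f"1 billion pennies:  {stack_height_km(10**9):,.0f} km")
print(f"1 trillion pennies: {stack_height_km(10**12):,.0f} km")
```

Under that assumption, a billion pennies stack about 1,500 km high, and a trillion reach roughly 1.5 million km, around four times the Earth–Moon distance.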

  • What does one TRILLION dollars look like?

  • 1 pixel = 1 million dollars

  • One Trillion Dollars Visualized

  • How bad hyperinflation can get

The cumulative devaluation of the Zimbabwe dollar was such that a stack of 100,000,000,000,000,000,000,000,000 (26 zeros) two dollar bills (if they were printed) in the peak hyperinflation would have been needed to equal in value what a single original Zimbabwe two-dollar bill of 1978 had been worth. Such a pile of bills literally would be light years high, stretching from the Earth to the Andromeda Galaxy.
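The light-year claim checks out to within an order of magnitude. A rough calculation, assuming a banknote thickness of about 0.11 mm (typical for paper currency; an assumption on my part):

```python
# Height of a stack of 1e26 banknotes, expressed in light years.
BILLS = 10**26               # the 26-zero figure quoted above
BILL_THICKNESS_M = 0.00011   # ~0.11 mm per note (assumed)
METERS_PER_LIGHT_YEAR = 9.461e15

stack_m = BILLS * BILL_THICKNESS_M
stack_ly = stack_m / METERS_PER_LIGHT_YEAR
print(f"{stack_ly:,.0f} light years")  # on the order of a million light years
```

That comes out to roughly a million light years, the same order of magnitude as the 2.5-million-light-year distance to the Andromeda Galaxy.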

  • Trillions

Trillions from MAYAnMAYA on Vimeo.

This is a short film (a fast paced preview of a larger effort) by MAYA Design created to put some perspective on the invisible but fast approaching challenges and opportunities in the pervasive computing age.

  • Infographic: The Mariana Trench To Scale

It’s the deepest part of the world’s ocean and the lowest elevation of the surface of the Earth. Yeah, it’s that deep.

  • A Visual Comparison of Various Distances

  • Timeline: Future of an expanding universe  — 10^6 years to 10^100 years and beyond.

  • Graphical timeline from Big Bang to Heat Death

  • Tanker Size Comparison

  • Infographic: the complete and astounding history of storage

  • Infographic: A Day in the Internet

Some of us never realize how huge the Internet really is.

  • It’s a small world…

The Photonics Research Group of Ghent University-IMEC has fabricated a world map on a scale of 1 trillionth.

  • A sense of proportion

When thinking about existential risks it is important to have a sense of what the stakes are, and not just think “that is bad” – some things can be many orders of magnitude worse than others. At the same time, as Nick Bostrom pointed out, we have rather minimal research on how to prevent human extinction, a literature about the same size as that on dung beetle reproduction. Toby Ord has pointed out that some charities can be up to 10,000 times more efficient in providing health than others (in terms of years of life per dollar donated), just because they focus on particular very effective means. Aubrey de Grey showed how a pretty minor advance in biogerontology was hailed in the media as “the secret of ageing”, while rattling off a series of papers with far more profound implications that nobody outside the field has heard of. A graph of cost and size of carbon abatement methods clearly shows that some fix a vastly bigger chunk than others.


Here are two modern hard SF novels you should read, both famous and highly recommended works from the past few years. You can download them in various formats for free:



The book is a collection of nine short stories telling the tale of three generations of a highly dysfunctional family before, during, and after a technological singularity.

The first three stories follow the character of “venture altruist” Manfred Macx starting in the early 21st Century, the second three stories follow his daughter Amber, and the final three focus largely on her son Sirhan in the completely transformed world at the end of the century.

In Accelerando, the planets of the solar system are dismantled to form a Matrioshka brain, a vast computational device inhabited by minds inconceivably more complex than naturally evolved intelligences such as human beings. This proves to be a normal stage in the life cycle of an inhabited solar system; the galaxies are filled with Matrioshka brains, communicating via wormhole networks. Lesser intelligences may live unmolested around brown dwarf stars.




Canadian author Watts (Starfish) explores the nature of consciousness in this stimulating hard SF novel, which combines riveting action with a fascinating alien environment. In the late 21st century, when something alien is discovered beyond the edge of the solar system, the spaceship Theseus sets out to make contact. Led by an enigmatic AI and a genetically engineered vampire, the crew includes a biologist who’s more machine than human, a linguist with surgically induced multiple personality disorder, a professional soldier who’s a pacifist, and Siri Keeton, a man with only half a brain. Keeton is virtually incapable of empathy, but he has a savant’s ability to model and predict the actions of others without understanding them. Once the Theseus arrives at the gigantic and hideously dangerous alien artifact (which has tellingly named itself Rorschach), the crew must deal with beings who speak English fluently but who may, paradoxically, not even be sentient, at least as we understand the term.

Blindsight focuses very heavily on the concepts of identity, cognition, and the problems of intelligence. The Chinese Room scenario features prominently in the book.

Also recommended by a neuroscientist, Peter Stimson (originally from Duke), who thinks that Blindsight’s portrayal of various agnosias and pointy-haired homunculi serves as an apt introduction to the conundrum of self-awareness for his students:



Assigning sex was hardly as easy as sizing someone up visually…. “For 99 percent of the population it’s easy to determine…. But one percent of the population have conditions that make it not so straightforward” [The New York Times]. In the 1960s, athletic federations began testing athletes by scraping cells from their mouths and testing them for a pair of X chromosomes, which typically establishes a person’s sex as female (as opposed to the XY chromosomes typically carried by males). But the tests were halted in the 1990s as critics pointed out that there are medical conditions that lead individuals with two X chromosomes to develop masculine characteristics, and others that mean individuals with one X and one Y chromosome never develop masculine characteristics. Some other individuals also exist outside the usual sexes of XX females and XY males; these may include males who are XXY, further confusing the tests [Nature News].

Whether an athlete has an unfair advantage isn’t necessarily determined by their sex chromosomes. Genes are only a blueprint, and sometimes nature doesn’t follow the blueprint precisely. Take the examples of XY athletes who appear to be women. At least five enzymes are required to synthesize testosterone, the hormone that produces most male characteristics, and occasionally one of those enzymes is defective.



There are two big goals: mind uploading (i.e. creating a backup) and creating human-level (and human-speed) artificial intelligence. The only way to do so may be by reverse engineering the human brain; at the very least, we will have to develop sufficient “hardware”, that is, information-processing capability adequate for a human-equivalent computational substrate. The big questions here are about the nature of information processing and the neuronal information capacity of an average human brain.

Consequently, many subquestions come up: for example, how important are astrocytes and microtubules, and are they involved in information processing, among other things?

Stuart Hameroff

The operations of microtubules are remarkably complex and their role pervasive in cellular operations; these facts led to the speculation that computation sufficient for consciousness might somehow be occurring there. These ideas are discussed in Hameroff’s first book Ultimate Computing (1987). The main substance of this book dealt with the scope for information processing in biological tissue and especially in microtubules and other parts of the cytoskeleton. Hameroff argued that the cytoskeleton components could be the basic units of processing rather than the neurons. The book was primarily concerned with information processing, with consciousness secondary at this stage.


Roger Penrose

Penrose presents the argument that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine-type of digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function.

On the basis of Gödel’s incompleteness theorems, he argued that the brain could perform functions that no computer or system of algorithms could. From this it could follow that consciousness itself might be fundamentally non-algorithmic, and incapable of being modelled as a classical Turing machine type of computer.

Penrose made Gödel’s theorem the basis of what quickly became an intensely controversial claim. He argued that the theorem showed that the brain had the ability to go beyond what could be achieved by axioms or formal systems. This would mean that the mind had some additional function that was not based on algorithms (systems or rules of calculation). A computer is driven solely by algorithms. Penrose asserted that the brain could perform functions that no computer could perform. He called this type of functioning non-computable.



Through the 1980s, colleagues and I developed models of microtubule information processing in which states of tubulin subunits were bits interacting with lattice neighbor tubulins. With about 10^7 (10 to the seventh) tubulins per neuron switching at 10^-9 seconds, we calculated a potential for 10^16 operations per second in each neuron. This was, and remains, unpopular in AI/Singularity circles because it potentially pushes the goalpost for brain capacity significantly. Recent evidence has shown collective microtubule excitations at 10^-7 seconds (rather than the 10^-9 seconds we assumed), indicating a neuronal information capacity of ‘only’ 10^14 operations per second.
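The arithmetic behind both capacity estimates is straightforward: tubulins per neuron divided by the switching interval. A minimal sketch reproducing the two figures:

```python
# Per-neuron information capacity implied by tubulin switching rates.
TUBULINS_PER_NEURON = 10**7  # Hameroff's estimate, as quoted above

def ops_per_second(switch_time_s: float) -> float:
    """Operations/second if every tubulin flips once per switching interval."""
    return TUBULINS_PER_NEURON / switch_time_s

print(f"{ops_per_second(1e-9):.0e} ops/s per neuron")  # original estimate
print(f"{ops_per_second(1e-7):.0e} ops/s per neuron")  # revised estimate
```

Slowing the assumed switching time by a factor of 100, from 10^-9 s to 10^-7 s, reduces the estimate by the same factor, from 10^16 to 10^14 operations per second.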


Quantum entanglement found in a real biological system

The future of clean green solar power may well hinge on scientists being able to unravel the mysteries of photosynthesis, the process by which green plants convert sunlight into electrochemical energy. To this end, researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California (UC), Berkeley have recorded the first observation and characterization of a critical physical phenomenon behind photosynthesis known as quantum entanglement.

The research team was surprised to see that significant entanglement persisted between molecules in the light harvesting complex that were not strongly coupled (connected) through their electronic and vibrational states. They were also surprised to see how little impact temperature had on the degree of entanglement.



Meet the forgotten 90 percent of your brain: glial cells, which outnumber your neurons ten to one. And no one really knows what they do.

If the glial cells called astrocytes really do process information, that would be a major addition to the brain’s computing power.

For some brain scientists, these discoveries are puzzle pieces that are slowly fitting together into an exciting new picture of the brain. Piece one: Astrocytes can sense incoming signals. Piece two: They can respond with calcium waves. Piece three: They can produce outputs—neurotransmitters and perhaps even calcium waves that spread to other astrocytes. In other words, they have at least some of the requirements for processing information the way neurons do. Alfonso Araque, a neuroscientist at the Cajal Institute in Spain, and his colleagues make a case for a fourth piece. They find that two different stimulus signals can produce two different patterns of calcium waves (that is, two different responses) in an astrocyte. When they gave astrocytes both signals at once, the waves they produced in the cells were not just the sum of the two patterns. Instead, the astrocytes produced an entirely new pattern in response. That’s what neurons—and computers, for that matter—do.

If astrocytes really do process information, that would be a major addition to the brain’s computing power. After all, there are many more astrocytes in the brain than there are neurons. Perhaps, some scientists have speculated, astrocytes carry out their own computing. Instead of the digital code of voltage spikes that neurons use, astrocytes may act more like an analog network, encoding information in slowly rising and falling waves of calcium. In his new book, The Root of Thought, neuroscientist Andrew Koob suggests that conversations among astrocytes may be responsible for “our creative and imaginative existence as human beings.”


Astrocytes affect brain’s information signaling

Astrocytes are the most common type of cell in the brain and play an important role in the function of neurons – nerve cells. New research from the University of Gothenburg, Sweden, shows that they are also directly involved in the regulation of signalling between neurons.

“Our results contribute to the insight that astrocytes can affect how the brain processes and stores information,” says My Andersson, a researcher from the Department of Physiology at the Institute of Neuroscience and Physiology. “This means that astrocytes should be given more attention in future when looking for causes of diseases that affect signalling between neurons, such as epilepsy.”


Astrocytes, playing a big role in the formation of memories.

Neurons need non-electrical brain cells known as astrocytes to establish synaptic memory, according to a study published this week in Nature. The findings challenge the long-standing belief that this process involves only the activity of the neurons themselves, and bring glial cells onto the center stage in the study of brain activity.

This study shows that while neurotransmitter release and voltage changes at the synapse are important for synaptic memory formation, “you need the burst from the astrocyte to complete the process,” said physiologist Andrea Volterra of the University of Lausanne, who did not participate in the research. “It’s very surprising for many people.”

Astrocytes comprise some 90% of all human brain cells, but because they lack the electrical activity of neurons, they were never really considered to participate in the process of long-term potentiation — changes in synaptic strength thought to underlie learning and memory. Accumulating evidence suggests they play a bigger role in neuronal activity than previously believed.

But with astrocyte territories containing many thousands of synapses (about 140,000 in the hippocampus, for example), even effects limited to those within a territory can be enormous. Thus, the role that astrocytes play in synaptic function “cannot be overlooked,” Volterra noted in his review.


Underappreciated Star-Shaped Brain Cells May Help Us Breathe

Astrocytes, it was long believed, were little more than the scaffolding of the brain—they provided a support structure for the stars of the show, the neurons. But a study out in this week’s Science is the latest to suggest that this is far from the whole story. The study says that astrocytes (whose “astro” name comes from their star shape) may in fact play a critical role in the process of breathing.

Gourine’s team peeked into the brains of rats to figure out the connection between astrocytes and breathing. In humans and in rodents, the level of carbon dioxide in the blood rises after physical activity. The brain has to adjust to this, setting the lungs breathing harder to expel that CO2.

Astrocytes, the scientists found, are key players in this process. When the cells sensed a decrease in blood pH (because the carbon dioxide made it more acidic), they immediately released calcium ions, which the researchers could detect because they’d given the rats a gene encoding a protein that shone fluorescent in the presence of calcium. The astrocytes also released the chemical messenger ATP. That ATP appeared to trigger the nearby neurons responsible for respiration, kicking them into gear.

The astrocytes are no one-trick ponies, though. They could be important not only for breathing, but also for brain circulation, memory formation, and other activities.


Glial cells involved in processing

Koob’s evidence is indirect but suggestive. He points out that more intelligent animals have a higher astrocyte-to-neuron ratio than less intelligent animals, all the way from worms with one astrocyte per thirty neurons, to humans with an astrocyte-to-neuron ratio well above one. Within the human brain, the areas involved in higher thought, like the cortex, are the ones with the highest astrocyte-to-neuron ratio, and the most down-to-earth, like the cerebellum, have barely any astrocytes at all. Especially intelligent humans may have higher ratios still: one of the discoveries made from analyzing Einstein’s brain was that he had an unusually large number of astrocytes in the part of his brain responsible for mathematical processing. And learning is a stimulus for astrocyte development. When canaries learn new songs, new astrocytes grow in the areas responsible for singing.


In the late 19th century, microscopy advanced enough to look closely at the cellular structure of the brain. The pioneers of neurology decided that neurons were interesting and glia were the things you had to look past to get to the neurons. This assumption should have raised a big red flag: Why would the brain be filled with mostly useless cells? But for about seventy-five years, from the late 19th century to the mid to late 20th, no one seriously challenged the assumption that glia played a minor role in the brain.



The Blue Brain Project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations:

Allen Human Brain Atlas:

The point is that nobody knows how long it will take, since we don’t even know what we don’t know:

Orch OR (Orchestrated Objective Reduction) is a theory of consciousness, which is the joint work of theoretical physicist Sir Roger Penrose and anesthesiologist Stuart Hameroff. Mainstream theories assume that consciousness emerges from the brain, and focus particularly on complex computation at connections known as synapses that allow communication between brain cells (neurons). Orch OR combines approaches to the problem of consciousness from the radically different angles of mathematics, physics and anesthesia:

Minds, Machines, and Mathematics

Do Brains Make Minds? John Searle and David Chalmers get in on it:

In computer science and quantum physics, the Church–Turing–Deutsch principle (CTD principle) is a stronger, physical form of the Church–Turing thesis formulated by David Deutsch in 1985. The principle states that a universal computing device can simulate every physical process. The principle was originally stated by Deutsch with respect to finitary machines and processes. He immediately observed that classical physics, which makes use of the concept of real numbers, cannot be simulated by a Turing machine, which can only represent computable reals. Deutsch proposed that quantum computers may actually obey CTD, assuming that the laws of quantum physics can completely describe every physical process:

Even More

10 Important Differences Between Brains and Computers

Although the brain-computer metaphor has served cognitive psychology well, research in cognitive neuroscience has revealed many important differences between brains and computers. Appreciating these differences may be crucial to understanding the mechanisms of neural information processing, and ultimately for the creation of artificial intelligence. Below, I review the most important of these differences (and the consequences to cognitive psychology of failing to recognize them): similar ground is covered in this excellent (though lengthy) lecture.

Difference # 10: Brains have bodies

This is not as trivial as it might seem: it turns out that the brain takes surprising advantage of the fact that it has a body at its disposal. For example, despite your intuitive feeling that you could close your eyes and know the locations of objects around you, a series of experiments in the field of change blindness has shown that our visual memories are actually quite sparse. In this case, the brain is “offloading” its memory requirements to the environment in which it exists: why bother remembering the location of objects when a quick glance will suffice? A surprising set of experiments by Jeremy Wolfe has shown that even after being asked hundreds of times which simple geometrical shapes are displayed on a computer screen, human subjects continue to answer those questions by gaze rather than rote memory. A wide variety of evidence from other domains suggests that we are only beginning to understand the importance of embodiment in information processing.


Making brains: Reverse engineering the human brain to achieve AI

The ongoing debate between PZ Myers and Ray Kurzweil about reverse engineering the human brain is fairly representative of the same debate that’s been going in futurist circles for quite some time now. And as the Myers/Kurzweil conversation attests, there is little consensus on the best way for us to achieve human-equivalent AI.

That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil’s upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

But we have to start somewhere, and we have to start with a plan.


David Chalmers: Consciousness is not substrate dependent

It is widely accepted that conscious experience has a physical basis. That is, the properties of experience (phenomenal properties, or qualia) systematically depend on physical properties according to some lawful relation. There are two key questions about this relation. The first concerns the strength of the laws: are they logically or metaphysically necessary, so that consciousness is nothing “over and above” the underlying physical process, or are they merely contingent laws like the law of gravity? This question about the strength of the psychophysical link is the basis for debates over physicalism and property dualism. The second question concerns the shape of the laws: precisely how do phenomenal properties depend on physical properties? What sort of physical properties enter into the laws’ antecedents, for instance; consequently, what sort of physical systems can give rise to conscious experience? It is this second question that I address in this paper.


Status of Reverse Engineering the Brain

Computer simulations of the brain already allow experiments impossible to carry out with animals. “As good as modern neuroscience is—and it has been brilliant over the last two decades—we can’t really sample every neuron and every synapse as they are performing a behavior,” notes consciousness researcher Gerald Edelman, MD, PhD, director of the Neurosciences Institute and chair of neurobiology at the Scripps Research Institute in San Diego, California.



Artificial Flight and Other Myths

a reasoned examination of A.F. by top birds

Over the past sixty years, our most impressive developments have undoubtedly been within the industry of automation, and many of our fellow birds believe the next inevitable step will involve significant advancements in the field of Artificial Flight.  While residing currently in the realm of science fiction, true powered, artificial flying mechanisms may be a reality within fifty years.  Or so the futurists would have us believe.  Despite the current media buzz surrounding the prospect of A.F., a critical examination of even the most basic facts can dismiss the notion of true artificial flight as not much more than fantasy.



Whole Brain Emulation: The Logical Endpoint of Neuroinformatics?

The idea of creating a faithful, one-to-one computer copy of a human brain has been a popular philosophical thought experiment and science fiction plot for decades. While computational neuroscience and systems biology are currently very far away from this goal, the trends towards large-scale simulation, industrialized neuroinformatics, new forms of microscopy and powerful computing clusters point in this direction and are enabling new forms of simulations of unprecedented scope. In this talk I will discuss current estimates of how close we are to achieving emulated brains, technological requirements, research challenges and some of the possible consequences.

The Great Singularity Debate

The Singularity and the outer limits of physical possibility (08:38)
Do human brains run software? (09:58)
Consciousness, intelligence, and computation (03:14)
What could minds be made of? (13:08)
Is mind-uploading a dualist dream? (19:18)
Would the Singularity be a Vonnegut-style catastrophe? (10:56)


Getting a fundamentally new form of government working, one in a different league from democracy, would drastically improve our lives if successful. For transhumanists, getting futarchy or something like it working ought to be a very high priority, because better decision-making will naturally favor transhumanist technologies like life extension, mind uploading and safe superintelligence. This is an area where I think a small group can have a massive impact if, and only if, they play their cards right.


Futarchy: Vote Values, But Bet Beliefs

This short “manifesto” describes a new form of government. In “futarchy,” we would vote on values, but bet on beliefs. Elected representatives would formally define and manage an after-the-fact measurement of national welfare, while market speculators would say which policies they expect to raise national welfare.

Democracy seems better than autocracy (i.e., kings and dictators), but it still has problems. There are today vast differences in wealth among nations, and we cannot attribute most of these differences to either natural resources or human abilities. Instead, much of the difference seems to be that the poor nations (many of which are democracies) are those that more often adopted dumb policies, policies which hurt most everyone in the nation. And even rich nations frequently adopt such policies.

These policies are not just dumb in retrospect; typically there were people who understood a lot about such policies and who had good reasons to disapprove of them beforehand. It seems hard to imagine such policies being adopted nearly as often if everyone knew what such “experts” knew about their consequences. Thus familiar forms of government seem to frequently fail by ignoring the advice of relevant experts (i.e., people who know relevant things).



As researchers discover more agents that alter mental states, the Chemical Weapons Convention needs modification to help ensure that the life sciences are not used for hostile purposes.

Recent scientific and technological advances could transform the biochemical-threat landscape. Indeed, in 2003, military analysts from the Counterproliferation and Technology Office of the Defense Intelligence Agency in Washington DC predicted that emerging biotechnologies were likely to lead to a “paradigm shift” in the development of biological warfare agents. They warned that it would soon become possible to engineer agents to target specific human biological systems at the molecular level.

This idea of identifying crucial biochemical pathways, and then designing compounds to disrupt them is a leap from the traditional model of biological-agent development. It expands the options: there are likely to be thousands of potential molecular targets and numerous ways of disrupting each one.

In cases in which ‘agonists’ of a particular system have been found to enhance some cognitive trait, an ‘antagonist’ might be developed that could reduce it, and vice versa. If dopamine agonists enhance attention, say, then dopamine antagonists might disrupt it. They also warned, among other things, that nanotechnologies could overcome the blood–brain barrier and “exploit existing transport mechanisms to transmit substances into the brain in analogy with the Trojan horse”.

We will be “knowingly moving towards the top of a ‘slippery slope’ at the bottom of which is the spectre of ‘militarization’ of biology” including “intentional manipulation of peoples’ emotions, memories, immune responses or even fertility”.


