"All the World's a Stage We Pass Through" R. Ayana

Monday 31 May 2010

Brzezinski Fears 'Global Awakening' and Seeks 'Concentrated, Universal Power'



"No Regrets" Zbignew Brzezinski building up the Mujahadeen (later called the Taliban) in Afghanistan in early '80's
Zbigniew Brzezinski, CFR trustee, Trilateral Commission co-founder and Obama advisor, recently gave a presentation to the Montreal branch of the CFR (Council on Foreign Relations) discussing world government and his fears of the mass global awakening that has taken place.
Pushing his globalist agenda, he goes on to tout a "moral imperative… for a concentrated source of power that has universal reach". As the talk unfolds you'll notice he gives a clear outline of ways these prominent members can steer their particular areas of influence towards this plan. In attendance at these meetings are members of government, business leaders and media magnates.

More on Zbig

Brzezinski has gone on record with his global imperialist agenda many times. Not only does he attend and address secret CFR and Bilderberg Group meetings; in his seminal book "The Grand Chessboard" he also revealingly writes:
"To put it in a terminology that harkens back to the more brutal age of ancient empires, the three grand imperatives of imperial geostrategy are to prevent collusion and maintain security dependence among the vassals, to keep tributaries pliant and protected, and to keep the barbarians from coming together." (p.40)
He also includes these eerie and manipulative "observations":
"The attitude of the American public toward the external projection of American power has been much more ambivalent. The public supported America's engagement in World War II largely because of the shock effect of the Japanese attack on Pearl Harbor.” (pp 24-5)
"Moreover, as America becomes an increasingly multi-cultural society, it may find it more difficult to fashion a consensus on foreign policy issues, except in the circumstance of a truly massive and widely perceived direct external threat." (p. 211)

What is PNAC?

Two years after the publication of "The Grand Chessboard", another self-appointed think tank, The Project for the New American Century (PNAC), masquerading as an "educational organization", issued a now-infamous report entitled "Rebuilding America's Defenses: Strategy, Forces and Resources for a New Century" in September 2000.
In the report, this hegemonist neo-conservative group clearly aligns with Mr Brzezinski's aspirations and methods and seems to lay down a road map to Iraq and beyond.
"Further, the process of transformation, even if it brings revolutionary change, is likely to be a long one, absent some catastrophic and catalyzing event, like a new Pearl Harbor..."


While it may not name the tragic events of 9/11, at the least it's a clear prediction of the sort of event that was required to speed up the "process of transformation" they were hoping for.


Machiavelli Would Be Proud

But that's not all. Even scarier and more Machiavellian ideas are put forward in that report, many of which have since come to pass.
"Preserving the desirable strategic situation in which the United States now finds itself requires a globally preeminent military capability both today and in the future."
"Although it may take several decades for the process of transformation to unfold, in time, the art of warfare on air, land, and sea will be vastly different than it is today, and combat likely will take place in new dimensions: in space, cyber-space, and perhaps the world of microbes.
"Air warfare may no longer be fought by pilots manning tactical fighter aircraft sweeping the skies of opposing fighters, but a regime dominated by long-range, stealthy unmanned craft. On land, the clash of massive, combined-arms armored forces may be replaced by the dashes of much lighter, stealthier and information-intensive forces, augmented by fleets of robots, some small enough to fit in soldiers' pockets.


"Control of the sea could be largely determined not by fleets of surface combatants and aircraft carriers, but from land- and space-based systems, forcing navies to maneuver and fight underwater. Space itself will become a theater of war, as nations gain access to space capabilities and come to rely on them; further, the distinction between military and commercial space systems  combatants and noncombatants will become blurred. Information systems will become an important focus of attack, particularly for U.S. enemies seeking to short-circuit sophisticated American forces. And advanced forms of biological warfare that can target specific genotypes may transform biological warfare from the realm of terror to a politically useful tool."


"Men are so simple and so much inclined to obey immediate needs that a deceiver will never lack victims for his deceptions."

For further enlightenment enter a word or phrase into the search box @ New Illuminati:

And see

The Her(m)etic Hermit - http://hermetic.blog.com

This material is published under Creative Commons Copyright (unless an individual item is declared otherwise by copyright holder) – reproduction for non-profit use is permitted & encouraged, if you give attribution to the work & author - and please include a (preferably active) link to the original along with this notice. Feel free to make non-commercial hard (printed) or software copies or mirror sites - you never know how long something will stay glued to the web – but remember attribution! If you like what you see, please send a tiny donation or leave a comment – and thanks for reading this far…

From the New Illuminati – http://nexusilluminati.blogspot.com

Sunday 30 May 2010

Smallpox finding prompts HIV 'whodunnit'



People keep blaming the emergence of HIV on science, or at least medicine. For the longest time this came in the form of the claim that it was all due to contaminated polio vaccine. That turned out to be factually groundless [not at all – see below; New Illuminati Ed].

 Now a group of scientists in the US thinks it may all be down to the greatest medical intervention of all: the eradication of smallpox. It's nice timing: that eradication is officially 30 years old this week (to commemorate the event the World Health Organization unveiled this nice little monument yesterday in Geneva, Switzerland). But how could HIV be due to a dearth of smallpox?

Let's start with a fun fact about HIV: to infect white blood cells, most strains need to be able to latch onto a protein on the cells' surface called CCR5. Many people of European descent have a mutated version of CCR5, and resist HIV as a result. This means that some other viruses that also use CCR5 to get a foothold in immune cells, including dengue, herpes and measles, can slow down HIV infection, perhaps because they compete for the protein.

As smallpox, and vaccinia, the live virus used as a vaccine against smallpox, also use CCR5, Raymond Weinstein at George Mason University in Manassas, Virginia, and colleagues decided to find out if these pathogens could slow HIV infection rates too (see PDF). They took lymphocytes from 10 people who had been vaccinated against smallpox up to six months previously, and tried to infect those cells, as well as cells from people who had never been vaccinated against smallpox, with HIV.

Fascinatingly, they found that lymphocytes from people vaccinated up to six months earlier - or in preliminary results from a much larger study, 14 months - were up to 10 times less likely to be infected by HIV strains that need to use CCR5. Viruses like measles only interfere with HIV as long as they are there causing their particular disease, but the effect of the vaccinia virus seemed to last months. The researchers conclude that vaccinia prevents HIV - and that once smallpox was eradicated, and smallpox vaccination wound down, HIV surged as a result. What's more, the timing of events supports this argument, claim the researchers.

HIV started taking off in the 1950s and 1960s just as smallpox vaccination was winding down [unproved assertion – N.I. Ed]. So far, so good... except that this doesn't take geography into account. Sure, smallpox vaccination was winding down in Europe and North America during this period, but not central Africa, which was where HIV was starting to spread. According to the definitive history of smallpox eradication, written by D. A. Henderson, who masterminded the effort, there wasn't much smallpox vaccination at all in the Congo in the 1950s and early 1960s: only in 1969 and 1970 did vaccination surge, winding down some years later.

An explanation that would fit these dates slightly better might be that it was smallpox itself - not the vaccine - that was keeping HIV at bay. The research team does note in their paper that smallpox virus should have the same effect as vaccinia [bullshit – see 'Lies, Damn Lies and Statistics' – New Illuminati Ed]. I am also not sure that the researchers' suggestion to use vaccinia virus to fend off HIV is a great idea: vaccinia can have deadly side effects.

A more potentially useful observation about HIV and viruses comes from Jennifer Smith of the University of North Carolina in Chapel Hill and colleagues in the 1 June issue of the Journal of Infectious Diseases, in which they report that men with HPV infection on their penis are nearly twice as likely to catch HIV than men without. They suspect the virus - which causes cervical cancer in women, and genital warts in men and women - attracts lymphocytes to the skin of the penis for HIV to infect, or creates micro-lesions where it can enter...
originally from http://www.newscientist.com/blogs/shortsharpscience/2010/05/hiv-whodunnit-continues.htm – now erased [Despite the claim that this is news, I received a hell of a lot of trouble for publishing info on this topic more than 20 years ago – it's reprinted below – New Illuminati Ed]

The Virus Engineers
AIDS – The Real Story – Section 2
Vaccination Programs and AIDS 
Part 1 – Smallpox Vaccines 

The World Health Organisation (W.H.O.) conducted a thirteen year campaign to eradicate smallpox in the ‘third world’ from 1967 to 1980. They used vaccinia (live smallpox virus serum), injected – sometimes intravenously – into hundreds of millions of people. What’s less widely known is that many of the vaccinia batches were contaminated with animal viruses, including retroviruses – organisms very closely related to Human Immuno-deficiency Virus (H.I.V.), the retrovirus and its many mutational forms believed to cause Acquired Immune Deficiency Syndrome (A.I.D.S.). 
Most people consider it’s been proven that HIV emerged from apes in Africa. Yet many apes in laboratories had been injected with blood and ‘unknown cytopathic agents’ for many years – the same accident-prone labs that produced vaccines, in many cases.
In the recent WHO smallpox campaign, needles were re-used forty to sixty times. The main method of 'sterilisation' was waving the needle across a flame…

"WHO information indicates that the AIDS table of central Africa matches the concentration of smallpox vaccinations, i.e., the greatest spread of HIV infection coincides with the most intense immunization programs. Thus Zaire, at the top of the AIDS list, had 36,000,000 people immunized with the smallpox vaccine. Next Zambia, with fifteen million, Uganda with eleven million, Malawi with eight million, Ruanda with 3.3 million and Burundi with 3.2 million. Brazil, the only South American country covered by the smallpox eradication campaign, has the highest incidence of AIDS in that part of the world.

"The theory – that the AIDS epidemic in Africa may have been triggered by the smallpox eradication program – has sparked intense debate among scientists. You may not have heard about this debate, but an urgent call for evidence to support the idea has been demanded by the World Health Organisation. The theory was discussed by WHO officials last Autumn (1987). No follow-up data are available from the smallpox eradication campaign because no systematic studies of the complications produced by the mass immunisation have ever been done."

These statements were made by prodigious author Dr Robert S. Mendelsohn (who wrote many articles and books, including Male Practice and Confessions of a Medical Heretic) in 1987 and best sum up the issue – the vaccination programs used contaminated vaccines.[i]

"I thought it was just a coincidence until we studied the latest findings about the reactions which can be caused by vaccinia. Now I believe the smallpox vaccine theory is the explanation of the explosion of AIDS," a WHO advisor said in 1987.[ii] The advisor suggested that the smallpox virus weakens the immune system, causing AIDS viruses to lose their dormancy.

This (anonymous) outside consultant was hired by the WHO to see if smallpox vaccine was linked to AIDS. When he determined that it was, the WHO buried his report and he went to the London Times. According to Pearce Wright, Times science editor and author of the article, the statistical report compared numbers of vaccinations in Central Africa (and other locations) with the number of reported AIDS cases. The countries with the largest number of vaccinations also had the most AIDS cases.

Vaccine Clues

At the Walter Reed Army Medical Centre in Washington, D.C., a routine 1987 smallpox vaccination apparently triggered a dormant HIV infection that developed into full-blown AIDS in a 19-year-old army recruit. According to Dr Robert Redfield, leader of a Walter Reed research team, the recruit developed full-blown AIDS two and a half weeks after the vaccination and died soon after. "Our case raises provocative questions concerning the ultimate safety of such vaccines," said Dr Redfield.[iii]
Dr Robert Gallo (hailed as the co-discoverer of HIV) subsequently wrote: "The link between the WHO [smallpox eradication] programme and the [AIDS] epidemic in Africa is an interesting and important hypothesis. I cannot say that it actually happened, but I have been saying for some years that the use of live vaccine such as that used for smallpox can activate a dormant infection such as HIV."[iv]

Others, including clinical AIDS researcher Dr Laurence Gerlis, concurred: "Previous circumstantial evidence looks more persuasive alongside the latest research that shows AIDS can be stimulated by smallpox vaccination."

But others, who have researched AIDS in particular and contaminated vaccines in general, go one step further. "The point has nothing to do with triggering dormant HIV infections. Those vaccines were contaminated with cattle viruses which directly contribute to AIDS. They are AIDS – or damn close to it," said world-renowned virologist Dr Robert Strecker.[v]

The National Institutes of Health (NIH) in Bethesda, Maryland in the US is (and was) arguably the world's major biological research establishment. On July 7th 1987, Jeremy Rifkin (president of the Foundation on Economic Trends and well-known Washington medical activist) delivered a petition to the NIH insisting – under threat of lawsuit – that they examine world-wide stocks of human vaccines to see if they were contaminated with cattle viruses that may be responsible for AIDS.
The petition stated, in part: "It has been reported that bovine viruses are 'a fairly common contaminant of foetal calf serum'. Foetal bovine serum is almost universally used in the creation of cell tissue culture for subsequent use in the production of vaccines for human use."
In an expanded form of his petition sent to the FDA in August 1987, Rifkin calls for a thorough examination of all animal viruses in the same class as Bovine Leukaemia Virus (BLV), Bovine Immuno-deficiency Virus (BIV) and Visna viruses (all retroviruses like HIV) – to test for contamination of medical products and to see "whether, over the last thirty years, these highly adaptable microbes have combined in humans with human genetic material to form new virulent viruses."[vi]

"I'm not asserting I know for certain that the cattle viruses are the true AIDS viruses," said Rifkin. "But I am saying that they have to be researched. It's a scandal that they've been ignored by the medical research establishment. They could be AIDS… there's literature to suggest the possibility. And we know cattle viruses can find their way into smallpox vaccines. The problem isn't going to go away by wishing it would."[vii]

Species No Barrier  

Of course, animal viruses regularly cross the 'species barrier' into humans.

Dr Luc Montagnier of the Pasteur Institute ('co'-discoverer of HIV) told the 4th International Conference on AIDS in Stockholm (1988) that experiments have shown a human AIDS virus can produce AIDS in animals.[viii] Chimpanzees had hitherto been shown to be the only animals 'successfully' infected with HIV, but although they developed related antibodies and swollen lymph nodes, full-blown AIDS could not be induced in them.[ix]
The Central African Journal of Medicine for April 1974 reports a series of studies on arboviruses: "For many years the term arbovirus has been applied to viruses which are transmitted from vertebrate to vertebrate by an arthropod vector. It is characteristic of these viruses that they multiply in the vector but cause no apparent harm…" – these viruses are transmitted via insects or similar creatures to 'higher order' animals, but the creature transmitting the virus isn't injured by it.

The article describes the infection of large herds of sheep and cattle with arboviruses across southern Africa from 1956 to 1973. The authors also describe how some of these viruses caused "widespread infection among farm workers" and report test results of human blood collected across then Rhodesia (later Zimbabwe) from 1969-73.[x] The studies showed human infection with these viruses in sites across Rhodesia, even though "arbovirus infections are unlikely to be differentiated from malaria and influenza at rural clinics…"

"In the present survey [Chikungunya] virus was found to be widespread in Rhodesia…" in human blood. "Wesselsbron virus was… found to be widely prevalent in Rhodesia in the present survey. Veterinarians, stockmen and farm labourers can acquire infection from the handling and consumption of infected carcasses during epidemics in sheep…"

Rift Valley fever infects "large numbers of humans" during livestock epidemics. "It is estimated that 20,000 people became infected during the 1951 epidemic in South Africa and that 100,000 sheep and cattle died (Weiss, 1957)."[xi] It must be emphasized that compared to HIV/AIDS these viruses are all relatively easy to catch. But these studies demonstrated that viruses can spread from sheep and cattle to human beings. Animal viruses have long been known to cross the species 'barrier' into humans and there are many more recent and widespread examples of diseases that have done so – Avian Influenza and 'Q Fever' (cattle fever), for example. Keeping and handling animals and killing them for meat often leads to debilitating (and sometimes lethal) infections in humans. That diseases can cross species 'barriers' and be deliberately or accidentally recombined into novel forms that end up contaminating humans isn't news to aware humans in the 21st Century.
The question is, did HIV (and any of the other fairly recently discovered human retroviruses) come from another animal – and if so, how and when?

In the 1984 edition of Acta Haematologica (Karger, Basel), researchers state that "We are proposing the hypothesis, now new in the international literature, that bovine leukaemia virus [BLV] might be the cause or a contributing factor of human… leukaemias." They cite a "significantly higher incidence" of human lymphatic leukaemias among farmers in Midwest U.S. farming areas where there is also a high incidence of BLV.

Dr Robert Strecker, the previously mentioned medical researcher and physician from Eagle Rock in the US, was widely quoted – mainly in the European, not the American, press – regarding his theory that AIDS is caused by a combination of sheep and cow (bovine) viruses.

"The next question is, can bovine leukaemia virus get into vaccines, vaccines which are plugged directly into people's bloodstreams?" Dr Strecker asked. "We have the strong possibility that the virus can be passed to humans. So let's look at the vaccines."[xii]

"The first thing that ought to be done," responded Dr Alex Thiermann (head of animal research at the US Department of Agriculture), "in the case of bovine immuno-deficiency virus which Mr Rifkin cites… is a test of human blood to see if signs appear that BIV is present. There is no current evidence that BIV does infect humans. Certainly, Mr Rifkin's general question, which has to do with animal viruses causing AIDS, is a legitimate one. As far as BLV, bovine leukaemia, is concerned, it might be worthwhile to examine vaccines, as he suggests, for signs of contamination… there is always a risk of contamination."

Ordinary cattle serum, a rich nutrient medium used for growing cell cultures and blood products (including vaccines), "can of course be contaminated with bovine viruses. It's very hard to maintain purity of that serum," according to Dr Thiermann. "That is why we have gone to using foetal bovine serum, a much purer method. It is extracted by tapping and bleeding unborn cow foetuses at slaughterhouses. On rare occasions, even this serum has shown signs of contamination."[xiii]

Contaminated Vaccines

Perhaps Dr Thiermann was unaware that foetal bovine serum had been shown to be regularly contaminated with bovine viruses. This was demonstrated in a 1972 study, 'Isolation and Characterization of Viruses from Fetal (sic) Calf Serum', published in the journal In Vitro: "Information on the possible presence of virus contaminants in bovine serums has obvious importance to all investigators, and in particular to vaccine manufacturers. Sixteen lots of commercial fetal calf serum were tested for bovine contamination. One isolated [finding] was unequivocally identified as bovine diarrhea virus."
“Hundreds and hundreds of millions of people inoculated – potentially getting bovine viruses in their bloodstreams.”  
A subsequent issue of In Vitro, from 1975, presents another relevant paper: "Fifty-one lots of fetal bovine serum from fourteen suppliers were examined. Over 30% of the lots tested were found to contain bovine viruses; they included bovine diarrhea virus, parainfluenza type 3-like virus, bovine herpes virus 1, and an unidentified cytopathic agent [a microbe or toxin causing harm to cells – our emphasis]."

"Contamination of foetal bovine serum isn't rare – it's usual," said Dr Strecker, who unearthed these studies. "If you were giving out grant money, wouldn't you fund studies checking for bovine leukaemia in the serum?

"Foetal bovine serum can be shot through with bovine viruses. These viruses get into all kinds of medical products, including vaccines. No one has bothered to find out how harmful this is to humans," he said.

"In the case of smallpox vaccines and other vaccines, the problem is basically the same. Hundreds and hundreds of millions of people inoculated – potentially getting bovine viruses in their bloodstreams. The medical literature suggests that AIDS-like symptoms (the chimp experiment) or leukaemias can result from a few of these bovine viruses. But where is the concerned human rush to check all this out, to take on the task of seeing whether we're being infected with very harmful agents? Nowhere?" he asks.[xiv]
To those who see this report as old news, it must be pointed out that these questions are still strangely unanswered. The issue has disappeared from the media. Most people consider it's been proven that HIV emerged from apes in Africa. Yet many apes in laboratories had been injected with blood and 'unknown cytopathic agents' for many years – the same accident-prone labs that produced vaccines, in many cases.

BIV is the best candidate for being the precursor of the AIDS virus, according to Dr Strecker. He said HIV may be BIV adapted to humans.

Matthew Gonda, a contract worker for the US National Cancer Institute (NCI), released a paper at the June 3rd, 1987 International Conference on AIDS in Washington that indicated striking similarities between HIV and bovine immuno-deficiency virus.
  “BIV is a different name for what has been called bovine Visna virus,” Rifkin said (Bovine Visna is quite possibly a cross between bovine leukaemia and a sheep Visna virus, which causes brain rot – a close relative or precursor to ‘mad cow disease’, which has a latency period of up to decades before symptoms show up in infected humans).             
   Dr Strecker unearthed another report from 1981 that stated “…a retrovirus assumed to be bovine Visna virus is a fairly common contaminant of foetal calf serum.”[xv] Also in 1981, Cedric Mims published an article in which he said there was a bovine virus contaminating culture media at the World Health Organisation.             
   According to Paul Meyer, a pathologist at USC, if you combined the effects bovine leukaemia has on cattle with the effects Visna has on sheep, “you would have a combined pathogenic effect like AIDS, assuming such a microbe existed and it could take up residence in humans.”[xvi]     
           An extensive 1987 survey of top US and Canadian scientists showed that a large proportion acknowledged that animal retrovirus contamination of medical products has been a serious problem. The problem hasn’t gone away and the questions haven’t been answered.       
         “It’s about time,” an anonymous Canadian virologist said in the 1987 report. “This is a scandal of major proportions. It’s been swept under the carpet for at least twenty years – this contamination business – and it’s turned into a potential nightmare as far as human health is concerned.”[xvii]     
       Another twenty years has passed.  
              In 1974 the National Academy of Sciences (NAS) recommended that “Scientists throughout the world join the members of this committee in voluntarily deferring experiments (linking) animal viruses.”                As we have all seen since, this was recommended for good reasons – and widely ignored.  -         

by R. Ayana

images - http://www.vaclib.org/vaxworld/vax7.gif

[i]  From articles presented in The People’s Doctor and Australasian Health & Healing, Vol 7, No 2, December 1987
[ii]  Quoted from the front page of the London Times, 11-5-87
[iii]  New England Journal of Medicine, March 1987, & Reuter
[iv]  Dr Robert Gallo, co-discoverer of the first HIV strain, London Times 11-5-87
[v]  Dr Robert Strecker quoted in Reader, 7-8-1987 (L.A., USA) reprinted in Australasian Health & Healing, Vol 7, No 2, December 1987.
[vi]  Ibid, p.29
[vii]  Ibid, p. 25.
[viii]  Associated Press & The Times, via The Australian 14-6-88
[ix]  New York Times, Sydney Morning Herald 19-3-87
[x]  The Central African Journal of Medicine, April 1974, Vol 20 No 4 p 71
[xi]  Ibid, pp 75-78
[xii]  Reader, 7-8-87 (L.A., USA) reprinted in Australasian Health & Healing, Vol 7, No 2, December 1987.
[xiii]  Ibid, p 27
[xiv]  Ibid, p 28
[xv]  Microbiological Review, June 1981
[xvi]  Reader, 7-8-87 (L.A., USA) reprinted in Australasian Health & Healing, Vol 7, No 2, December 1987, p 28
[xvii]  Montreal Gazette, 31-7-87


Saturday 29 May 2010

The Moral Life of Babies


image by Nicholas Nixon for The New York Times

Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.
This incident occurred in one of several psychology studies that I have been involved with at the Infant Cognition Center at Yale University in collaboration with my colleague (and wife), Karen Wynn, who runs the lab, and a graduate student, Kiley Hamlin, who is the lead author of the studies. We are one of a handful of research teams around the world exploring the moral life of babies.
Like many scientists and humanists, I have long been fascinated by the capacities and inclinations of babies and children. The mental life of young humans not only is an interesting topic in its own right; it also raises — and can help answer — fundamental questions of philosophy and psychology, including how biological evolution and cultural experience conspire to shape human nature. In graduate school, I studied early language development and later moved on to fairly traditional topics in cognitive development, like how we come to understand the minds of other people — what they know, want and experience.
But the current work I’m involved in, on baby morality, might seem like a perverse and misguided next step. Why would anyone even entertain the thought of babies as moral beings? From Sigmund Freud to Jean Piaget to Lawrence Kohlberg, psychologists have long argued that we begin life as amoral animals. One important task of society, particularly of parents, is to turn babies into civilized beings — social creatures who can experience empathy, guilt and shame; who can override selfish impulses in the name of higher principles; and who will respond with outrage to unfairness and injustice. Many parents and educators would endorse a view of infants and toddlers close to that of a recent Onion headline: “New Study Reveals Most Children Unrepentant Sociopaths.” If children enter the world already equipped with moral notions, why is it that we have to work so hard to humanize them?
A growing body of evidence, though, suggests that humans do have a rudimentary moral sense from the very start of life. With the help of well-designed experiments, you can see glimmers of moral thought, moral judgment and moral feeling even in the first year of life. Some sense of good and evil seems to be bred in the bone. Which is not to say that parents are wrong to concern themselves with moral development or that their interactions with their children are a waste of time. Socialization is critically important. But this is not because babies and young children lack a sense of right and wrong; it’s because the sense of right and wrong that they naturally possess diverges in important ways from what we adults would want it to be.

Smart Babies

Babies seem spastic in their actions, undisciplined in their attention. In 1762, Jean-Jacques Rousseau called the baby “a perfect idiot,” and in 1890 William James famously described a baby’s mental life as “one great blooming, buzzing confusion.” A sympathetic parent might see the spark of consciousness in a baby’s large eyes and eagerly accept the popular claim that babies are wonderful learners, but it is hard to avoid the impression that they begin as ignorant as bread loaves. Many developmental psychologists will tell you that the ignorance of human babies extends well into childhood. For many years the conventional view was that young humans take a surprisingly long time to learn basic facts about the physical world (like that objects continue to exist once they are out of sight) and basic facts about people (like that they have beliefs and desires and goals) — let alone how long it takes them to learn about morality.
I am admittedly biased, but I think one of the great discoveries in modern psychology is that this view of babies is mistaken.
A reason this view has persisted is that, for many years, scientists weren’t sure how to go about studying the mental life of babies. It’s a challenge to study the cognitive abilities of any creature that lacks language, but human babies present an additional difficulty, because, even compared to rats or birds, they are behaviorally limited: they can’t run mazes or peck at levers. In the 1980s, however, psychologists interested in exploring how much babies know began making use of one of the few behaviors that young babies can control: the movement of their eyes. The eyes are a window to the baby’s soul. As adults do, babies tend to look longer at something they find interesting or surprising than at something they find uninteresting or expected. And when given a choice between two things to look at, babies usually opt to look at the more pleasing thing. You can use “looking time,” then, as a rough but reliable proxy for what captures babies’ attention: what babies are surprised by or what babies like.
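To make the proxy concrete, the looking-time logic can be sketched as a few lines of Python. The numbers and variable names here are invented purely for illustration; they are not data from any actual study:

```python
from statistics import mean

# Hypothetical looking times (in seconds) for one group of infants,
# measured once for an "expected" event and once for a "surprising" one.
# These values are made up for illustration only.
expected_looks = [6.1, 5.4, 7.0, 5.9, 6.3]
surprising_looks = [9.8, 8.7, 10.2, 9.1, 9.5]

# Each infant's difference score: how much longer they looked at the
# surprising event than at the expected one.
diffs = [s - e for s, e in zip(surprising_looks, expected_looks)]

# The proxy logic: a consistently positive difference suggests the
# infants registered (were surprised by) the event.
print(round(mean(diffs), 2))  # prints 3.32
```

A real analysis would of course involve many more infants and a proper statistical test, but the core inference is just this comparison of looking durations across conditions.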
The studies in the 1980s that made use of this methodology were able to discover surprising things about what babies know about the nature and workings of physical objects — a baby’s “naïve physics.” Psychologists — most notably Elizabeth Spelke and Renée Baillargeon — conducted studies that essentially involved showing babies magic tricks, events that seemed to violate some law of the universe: you remove the supports from beneath a block and it floats in midair, unsupported; an object disappears and then reappears in another location; a box is placed behind a screen, the screen falls backward into empty space. Like adults, babies tend to linger on such scenes — they look longer at them than at scenes that are identical in all regards except that they don’t violate physical laws. This suggests that babies have expectations about how objects should behave. A vast body of research now suggests that — contrary to what was taught for decades to legions of psychology undergraduates — babies think of objects largely as adults do, as connected masses that move as units, that are solid and subject to gravity and that move in continuous paths through space and time.
Other studies, starting with a 1992 paper by my wife, Karen, have found that babies can do rudimentary math with objects. The demonstration is simple. Show a baby an empty stage. Raise a screen to obscure part of the stage. In view of the baby, put a Mickey Mouse doll behind the screen. Then put another Mickey Mouse doll behind the screen. Now drop the screen. Adults expect two dolls — and so do 5-month-olds: if the screen drops to reveal one or three dolls, the babies look longer, in surprise, than they do if the screen drops to reveal two.
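The logic of the doll study can be written as a toy model — hypothetical code for illustration, not the experiment’s actual analysis — in which a “surprising” reveal is simply one that differs from the running tally of objects placed behind the screen:

```python
# Toy model of the looking-time logic in the doll study: the baby
# tracks how many dolls went behind the screen and is "surprised"
# (looks longer) when the revealed number differs from that tally.
# Event names and counts here are illustrative only.

def expected_count(events):
    """Tally the objects added behind the screen."""
    return sum(1 for e in events if e == "add_doll")

def is_surprising(events, revealed):
    """A reveal violates expectations when it differs from the tally."""
    return revealed != expected_count(events)

trial = ["add_doll", "add_doll"]   # two dolls placed behind the screen
assert not is_surprising(trial, 2) # two dolls revealed: as expected
assert is_surprising(trial, 1)     # one doll: a violation, longer look
assert is_surprising(trial, 3)     # three dolls: also a violation
```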
A second wave of studies used looking-time methods to explore what babies know about the minds of others — a baby’s “naïve psychology.” Psychologists had known for a while that even the youngest of babies treat people differently from inanimate objects. Babies like to look at faces; they mimic them, they smile at them. They expect engagement: if a moving object becomes still, they merely lose interest; if a person’s face becomes still, however, they become distressed.
But the new studies found that babies have an actual understanding of mental life: they have some grasp of how people think and why they act as they do. The studies showed that, though babies expect inanimate objects to move as the result of push-pull interactions, they expect people to move rationally in accordance with their beliefs and desires: babies show surprise when someone takes a roundabout path to something he wants. They expect someone who reaches for an object to reach for the same object later, even if its location has changed. And well before their 2nd birthdays, babies are sharp enough to know that other people can have false beliefs. The psychologists Kristine Onishi and Renée Baillargeon have found that 15-month-olds expect that if a person sees an object in one box, and then the object is moved to another box when the person isn’t looking, the person will later reach into the box where he first saw the object, not the box where it actually is. That is, toddlers have a mental model not merely of the world but of the world as understood by someone else.
These discoveries inevitably raise a question: If babies have such a rich understanding of objects and people so early in life, why do they seem so ignorant and helpless? Why don’t they put their knowledge to more active use? One possible answer is that these capacities are the psychological equivalent of physical traits like testicles or ovaries, which are formed in infancy and then sit around, useless, for years and years. Another possibility is that babies do, in fact, use their knowledge from Day 1, not for action but for learning. One lesson from the study of artificial intelligence (and from cognitive science more generally) is that an empty head learns nothing: a system that is capable of rapidly absorbing information needs to have some prewired understanding of what to pay attention to and what generalizations to make. Babies might start off smart, then, because it enables them to get smarter.

Nice Babies

Psychologists like myself who are interested in the cognitive capacities of babies and toddlers are now turning our attention to whether babies have a “naïve morality.” But there is reason to proceed with caution. Morality, after all, is a different sort of affair than physics or psychology. The truths of physics and psychology are universal: objects obey the same physical laws everywhere; and people everywhere have minds, goals, desires and beliefs. But the existence of a universal moral code is a highly controversial claim; there is considerable evidence for wide variation from society to society.
In the journal Science a couple of months ago, the psychologist Joseph Henrich and several of his colleagues reported a cross-cultural study of 15 diverse populations and found that people’s propensities to behave kindly to strangers and to punish unfairness are strongest in large-scale communities with market economies, where such norms are essential to the smooth functioning of trade. Henrich and his colleagues concluded that much of the morality that humans possess is a consequence of the culture in which they are raised, not their innate capacities.
At the same time, though, people everywhere have some sense of right and wrong. You won’t find a society where people don’t have some notion of fairness, don’t put some value on loyalty and kindness, don’t distinguish between acts of cruelty and innocent mistakes, don’t categorize people as nasty or nice. These universals make evolutionary sense. Since natural selection works, at least in part, at a genetic level, there is a logic to being instinctively kind to our kin, whose survival and well-being promote the spread of our genes. More than that, it is often beneficial for humans to work together with other humans, which means that it would have been adaptive to evaluate the niceness and nastiness of other individuals. All this is reason to consider the innateness of at least basic moral concepts.
In addition, scientists know that certain compassionate feelings and impulses emerge early and apparently universally in human development. These are not moral concepts, exactly, but they seem closely related. One example is feeling pain at the pain of others. In his book “The Expression of the Emotions in Man and Animals,” Charles Darwin, a keen observer of human nature, tells the story of how his first son, William, was fooled by his nurse into expressing sympathy at a very young age: “When a few days over 6 months old, his nurse pretended to cry, and I saw that his face instantly assumed a melancholy expression, with the corners of his mouth strongly depressed.”
There seems to be something evolutionarily ancient to this empathetic response. If you want to cause a rat distress, you can expose it to the screams of other rats. Human babies, notably, cry more to the cries of other babies than to tape recordings of their own crying, suggesting that they are responding to their awareness of someone else’s pain, not merely to a certain pitch of sound. Babies also seem to want to assuage the pain of others: once they have enough physical competence (starting at about 1 year old), they soothe others in distress by stroking and touching or by handing over a bottle or toy. There are individual differences, to be sure, in the intensity of response: some babies are great soothers; others don’t care as much. But the basic impulse seems common to all. (Some other primates behave similarly: the primatologist Frans de Waal reports that chimpanzees “will approach a victim of attack, put an arm around her and gently pat her back or groom her.” Monkeys, on the other hand, tend to shun victims of aggression.)
Some recent studies have explored the existence of behavior in toddlers that is “altruistic” in an even stronger sense — like when they give up their time and energy to help a stranger accomplish a difficult task. The psychologists Felix Warneken and Michael Tomasello have put toddlers in situations in which an adult is struggling to get something done, like opening a cabinet door with his hands full or trying to get to an object out of reach. The toddlers tend to spontaneously help, even without any prompting, encouragement or reward.
Is any of the above behavior recognizable as moral conduct? Not obviously so. Moral ideas seem to involve much more than mere compassion. Morality, for instance, is closely related to notions of praise and blame: we want to reward what we see as good and punish what we see as bad. Morality is also closely connected to the ideal of impartiality — if it’s immoral for you to do something to me, then, all else being equal, it is immoral for me to do the same thing to you. In addition, moral principles are different from other types of rules or laws: they cannot, for instance, be overruled solely by virtue of authority. (Even a 4-year-old knows not only that unprovoked hitting is wrong but also that it would continue to be wrong even if a teacher said that it was O.K.) And we tend to associate morality with the possibility of free and rational choice; people choose to do good or evil. To hold someone responsible for an act means that we believe that he could have chosen to act otherwise.
Babies and toddlers might not know or exhibit any of these moral subtleties. Their sympathetic reactions and motivations — including their desire to alleviate the pain of others — may not be much different in kind from purely nonmoral reactions and motivations like growing hungry or wanting to void a full bladder. Even if that is true, though, it is hard to conceive of a moral system that didn’t have, as a starting point, these empathetic capacities. As David Hume argued, mere rationality can’t be the foundation of morality, since our most basic desires are neither rational nor irrational. “ ’Tis not contrary to reason,” he wrote, “to prefer the destruction of the whole world to the scratching of my finger.” To have a genuinely moral system, in other words, some things first have to matter, and what we see in babies is the development of mattering.

Moral-Baby Experiments

So what do babies really understand about morality? Our first experiments exploring this question were done in collaboration with a postdoctoral researcher named Valerie Kuhlmeier (who is now an associate professor of psychology at Queen’s University in Ontario). Building on previous work by the psychologists David and Ann Premack, we began by investigating what babies think about two particular kinds of action: helping and hindering.
Our experiments involved having children watch animated movies of geometrical characters with faces. In one, a red ball would try to go up a hill. On some attempts, a yellow square got behind the ball and gently nudged it upward; in others, a green triangle got in front of it and pushed it down. We were interested in babies’ expectations about the ball’s attitudes — what would the baby expect the ball to make of the character who helped it and the one who hindered it? To find out, we then showed the babies additional movies in which the ball either approached the square or the triangle. When the ball approached the triangle (the hinderer), both 9- and 12-month-olds looked longer than they did when the ball approached the square (the helper). This was consistent with the interpretation that the former action surprised them; they expected the ball to approach the helper. A later study, using somewhat different stimuli, replicated the finding with 10-month-olds, but found that 6-month-olds seem to have no expectations at all. (This effect is robust only when the animated characters have faces; when they are simple faceless figures, it is apparently harder for babies to interpret what they are seeing as a social interaction.)
This experiment was designed to explore babies’ expectations about social interactions, not their moral capacities per se. But if you look at the movies, it’s clear that, at least to adult eyes, there is some latent moral content to the situation: the triangle is kind of a jerk; the square is a sweetheart. So we set out to investigate whether babies make the same judgments about the characters that adults do. Forget about how babies expect the ball to act toward the other characters; what do babies themselves think about the square and the triangle? Do they prefer the good guy and dislike the bad guy?
Here we began our more focused investigations into baby morality. For these studies, parents took their babies to the Infant Cognition Center, which is within one of the Yale psychology buildings. (The center is just a couple of blocks away from where Stanley Milgram did his famous experiments on obedience in the early 1960s, tricking New Haven residents into believing that they had severely harmed or even killed strangers with electrical shocks.) The parents were told about what was going to happen and filled out consent forms, which described the study, the risks to the baby (minimal) and the benefits to the baby (minimal, though it is a nice-enough experience). Parents often asked, reasonably enough, if they would learn how their baby does, and the answer was no. This sort of study provides no clinical or educational feedback about individual babies; the findings make sense only when computed as a group.
For the experiment proper, a parent will carry his or her baby into a small testing room. A typical experiment takes about 15 minutes. Usually, the parent sits on a chair, with the baby on his or her lap, though for some studies, the baby is strapped into a high chair with the parent standing behind. At this point, some of the babies are either sleeping or too fussy to continue; there will then be a short break for the baby to wake up or calm down, but on average this kind of study ends up losing about a quarter of the subjects. Just as critics describe much of experimental psychology as the study of the American college undergraduate who wants to make some extra money or needs to fulfill an Intro Psych requirement, there’s some truth to the claim that this developmental work is a science of the interested and alert baby.
In one of our first studies of moral evaluation, we decided not to use two-dimensional animated movies but rather a three-dimensional display in which real geometrical objects, manipulated like puppets, acted out the helping/hindering situations: a yellow square would help the circle up the hill; a red triangle would push it down. After showing the babies the scene, the experimenter placed the helper and the hinderer on a tray and brought them to the child. In this instance, we opted to record not the babies’ looking time but rather which character they reached for, on the theory that what a baby reaches for is a reliable indicator of what a baby wants. In the end, we found that 6- and 10-month-old infants overwhelmingly preferred the helpful individual to the hindering individual. This wasn’t a subtle statistical trend; just about all the babies reached for the good guy.
(Experimental minutiae: What if babies simply like the color red or prefer squares or something like that? To control for this, half the babies got the yellow square as the helper; half got it as the hinderer. What about problems of unconscious cueing and unconscious bias? To avoid this, at the moment when the two characters were offered on the tray, the parent had his or her eyes closed, and the experimenter holding out the characters and recording the responses hadn’t seen the puppet show, so he or she didn’t know who was the good guy and who the bad guy.)
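The counterbalancing step described above can be sketched in a few lines. This is an illustrative assignment scheme with made-up condition names, not the lab’s actual software:

```python
import random

# Sketch of the counterbalancing described above: half the babies see
# the yellow square as the helper, half see it as the hinderer, with
# the assignment shuffled so that order effects don't pile up in one
# condition. (Hypothetical code for illustration only.)

def counterbalance(n_babies, seed=42):
    roles = (["square_is_helper"] * (n_babies // 2) +
             ["square_is_hinderer"] * (n_babies - n_babies // 2))
    random.Random(seed).shuffle(roles)
    return roles

assignments = counterbalance(16)
assert assignments.count("square_is_helper") == 8
assert assignments.count("square_is_hinderer") == 8
```

With the roles balanced this way, a preference that tracked color or shape rather than helping would wash out across the sample.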
One question that arose with these experiments was how to understand the babies’ preference: did they act as they did because they were attracted to the helpful individual or because they were repelled by the hinderer or was it both? We explored this question in a further series of studies that introduced a neutral character, one that neither helps nor hinders. We found that, given a choice, infants prefer a helpful character to a neutral one; and prefer a neutral character to one who hinders. This finding indicates that both inclinations are at work — babies are drawn to the nice guy and repelled by the mean guy. Again, these results were not subtle; babies almost always showed this pattern of response.
Does our research show that babies believe that the helpful character is good and the hindering character is bad? Not necessarily. All that we can safely infer from what the babies reached for is that babies prefer the good guy and show an aversion to the bad guy. But what’s exciting here is that these preferences are based on how one individual treated another, on whether one individual was helping another individual achieve its goals or hindering it. This is preference of a very special sort; babies were responding to behaviors that adults would describe as nice or mean. When we showed these scenes to much older kids — 18-month-olds — and asked them, “Who was nice? Who was good?” and “Who was mean? Who was bad?” they responded as adults would, identifying the helper as nice and the hinderer as mean.
To increase our confidence that the babies we studied were really responding to niceness and naughtiness, Karen Wynn and Kiley Hamlin, in a separate series of studies, created different sets of one-act morality plays to show the babies. In one, an individual struggled to open a box; the lid would be partly opened but then fall back down. Then, on alternating trials, one puppet would grab the lid and open it all the way, and another puppet would jump on the box and slam it shut. In another study (the one I mentioned at the beginning of this article), a puppet would play with a ball. The puppet would roll the ball to another puppet, who would roll it back; then the first puppet would roll the ball to a different puppet, who would run away with it. In both studies, 5-month-olds preferred the good guy — the one who helped to open the box; the one who rolled the ball back — to the bad guy. This all suggests that the babies we studied have a general appreciation of good and bad behavior, one that spans a range of actions.
A further question that arises is whether babies possess more subtle moral capacities than preferring good and avoiding bad. Part and parcel of adult morality, for instance, is the idea that good acts should meet with a positive response and bad acts with a negative response — justice demands the good be rewarded and the bad punished. For our next studies, we turned our attention back to the older babies and toddlers and tried to explore whether the preferences that we were finding had anything to do with moral judgment in this mature sense. In collaboration with Neha Mahajan, a psychology graduate student at Yale, Hamlin, Wynn and I exposed 21-month-olds to the good guy/bad guy situations described above, and we gave them the opportunity to reward or punish either by giving a treat to, or taking a treat from, one of the characters. We found that when asked to give, they tended to choose the positive character; when asked to take, they tended to choose the negative one.
Dispensing justice like this is a more elaborate conceptual operation than merely preferring good to bad, but there are still-more-elaborate moral calculations that adults, at least, can easily make. For example: Which individual would you prefer — someone who rewarded good guys and punished bad guys or someone who punished good guys and rewarded bad guys? The same amount of rewarding and punishing is going on in both cases, but by adult lights, one individual is acting justly and the other isn’t. Can babies see this, too?
To find out, we tested 8-month-olds by first showing them a character who acted as a helper (for instance, helping a puppet trying to open a box) and then presenting a scene in which this helper was the target of a good action by one puppet and a bad action by another puppet. Then we got the babies to choose between these two puppets. That is, they had to choose between a puppet who rewarded a good guy versus a puppet who punished a good guy. Likewise, we showed them a character who acted as a hinderer (for example, keeping a puppet from opening a box) and then had them choose between a puppet who rewarded the bad guy versus one who punished the bad guy.
The results were striking. When the target of the action was itself a good guy, babies preferred the puppet who was nice to it. This alone wasn’t very surprising, given that the other studies found an overall preference among babies for those who act nicely. What was more interesting was what happened when they watched the bad guy being rewarded or punished. Here they chose the punisher. Despite their overall preference for good actors over bad, then, babies are drawn to bad actors when those actors are punishing bad behavior.
All of this research, taken together, supports a general picture of baby morality. It’s even possible, as a thought experiment, to ask what it would be like to see the world in the moral terms that a baby does. Babies probably have no conscious access to moral notions, no idea why certain acts are good or bad. They respond on a gut level. Indeed, if you watch the older babies during the experiments, they don’t act like impassive judges — they tend to smile and clap during good events and frown, shake their heads and look sad during the naughty events (remember the toddler who smacked the bad puppet). The babies’ experiences might be cognitively empty but emotionally intense, replete with strong feelings and strong desires. But this shouldn’t strike you as an altogether alien experience: while we adults possess the additional critical capacity of being able to consciously reason about morality, we’re not otherwise that different from babies — our moral feelings are often instinctive. In fact, one discovery of contemporary research in social psychology and social neuroscience is the powerful emotional underpinning of what we once thought of as cool, untroubled, mature moral deliberation.
Is This the Morality We’re Looking For?

What do these findings about babies’ moral notions tell us about adult morality? Some scholars think that the very existence of an innate moral sense has profound implications. In 1869, Alfred Russel Wallace, who along with Darwin discovered natural selection, wrote that certain human capacities — including “the higher moral faculties” — are richer than what you could expect from a product of biological evolution. He concluded that some sort of godly force must intervene to create these capacities. (Darwin was horrified at this suggestion, writing to Wallace, “I hope you have not murdered too completely your own and my child.”)
A few years ago, in his book “What’s So Great About Christianity,” the social and cultural critic Dinesh D’Souza revived this argument. He conceded that evolution can explain our niceness in instances like kindness to kin, where the niceness has a clear genetic payoff, but he drew the line at “high altruism,” acts of entirely disinterested kindness. For D’Souza, “there is no Darwinian rationale” for why you would give up your seat for an old lady on a bus, an act of nice-guyness that does nothing for your genes. And what about those who donate blood to strangers or sacrifice their lives for a worthy cause? D’Souza reasoned that these stirrings of conscience are best explained not by evolution or psychology but by “the voice of God within our souls.”
The evolutionary psychologist has a quick response to this: To say that a biological trait evolves for a purpose doesn’t mean that it always functions, in the here and now, for that purpose. Sexual arousal, for instance, presumably evolved because of its connection to making babies; but of course we can get aroused in all sorts of situations in which baby-making just isn’t an option — for instance, while looking at pornography. Similarly, our impulse to help others has likely evolved because of the reproductive benefit that it gives us in certain contexts — and it’s not a problem for this argument that some acts of niceness that people perform don’t provide this sort of benefit. (And for what it’s worth, giving up a bus seat for an old lady, although the motives might be psychologically pure, turns out to be a coldbloodedly smart move from a Darwinian standpoint, an easy way to show off yourself as an attractively good person.)
The general argument that critics like Wallace and D’Souza put forward, however, still needs to be taken seriously. The morality of contemporary humans really does outstrip what evolution could possibly have endowed us with; moral actions are often of a sort that have no plausible relation to our reproductive success and don’t appear to be accidental byproducts of evolved adaptations. Many of us care about strangers in faraway lands, sometimes to the extent that we give up resources that could be used for our friends and family; many of us care about the fates of nonhuman animals, so much so that we deprive ourselves of pleasures like rib-eye steak and veal scaloppine. We possess abstract moral notions of equality and freedom for all; we see racism and sexism as evil; we reject slavery and genocide; we try to love our enemies. Of course, our actions typically fall short, often far short, of our moral principles, but these principles do shape, in a substantial way, the world that we live in. It makes sense then to marvel at the extent of our moral insight and to reject the notion that it can be explained in the language of natural selection. If this higher morality or higher altruism were found in babies, the case for divine creation would get just a bit stronger.
But it is not present in babies. In fact, our initial moral sense appears to be biased toward our own kind. There’s plenty of research showing that babies have within-group preferences: 3-month-olds prefer the faces of the race that is most familiar to them to those of other races; 11-month-olds prefer individuals who share their own taste in food and expect these individuals to be nicer than those with different tastes; 12-month-olds prefer to learn from someone who speaks their own language over someone who speaks a foreign language. And studies with young children have found that once they are segregated into different groups — even under the most arbitrary of schemes, like wearing different colored T-shirts — they eagerly favor their own groups in their attitudes and their actions.
The notion at the core of any mature morality is that of impartiality. If you are asked to justify your actions, and you say, “Because I wanted to,” this is just an expression of selfish desire. But explanations like “It was my turn” or “It’s my fair share” are potentially moral, because they imply that anyone else in the same situation could have done the same. This is the sort of argument that could be convincing to a neutral observer and is at the foundation of standards of justice and law. The philosopher Peter Singer has pointed out that this notion of impartiality can be found in religious and philosophical systems of morality, from the golden rule in Christianity to the teachings of Confucius to the political philosopher John Rawls’s landmark theory of justice. This is an insight that emerges within communities of intelligent, deliberating and negotiating beings, and it can override our parochial impulses.
The aspect of morality that we truly marvel at — its generality and universality — is the product of culture, not of biology. There is no need to posit divine intervention. A fully developed morality is the product of cultural development, of the accumulation of rational insight and hard-earned innovations. The morality we start off with is primitive, not merely in the obvious sense that it’s incomplete, but in the deeper sense that when individuals and societies aspire toward an enlightened morality — one in which all beings capable of reason and suffering are on an equal footing, where all people are equal — they are fighting against what children have from the get-go. The biologist Richard Dawkins was right, then, when he said at the start of his book “The Selfish Gene,” “Be warned that if you wish, as I do, to build a society in which individuals cooperate generously and unselfishly toward a common good, you can expect little help from biological nature.” Or as a character in the Kingsley Amis novel “One Fat Englishman” puts it, “It was no wonder that people were so horrible when they started life as children.”
Morality, then, is a synthesis of the biological and the cultural, of the unlearned, the discovered and the invented. Babies possess certain moral foundations — the capacity and willingness to judge the actions of others, some sense of justice, gut responses to altruism and nastiness. Regardless of how smart we are, if we didn’t start with this basic apparatus, we would be nothing more than amoral agents, ruthlessly driven to pursue our self-interest. But our capacities as babies are sharply limited. It is the insights of rational individuals that make a truly universal and unselfish morality something that our species can aspire to.
Paul Bloom is a professor of psychology at Yale. His new book, “How Pleasure Works,” will be published next month.

All black & white photographs taken at the Infant Cognition Center at Yale University.



From the New Illuminati – http://nexusilluminati.blogspot.com