"All the World's a Stage We Pass Through" R. Ayana

Tuesday 12 October 2010

Intelligent Universe: Approaching the Singularity
by Abou Farman
The next stage in evolution—a machine consciousness able to manipulate time and space—is just around the corner. The catch: humans will no longer be in charge.



ASSUME, FOR A MOMENT, the point of view of Intelligence. Not an intelligent point of view, but the perspective of Intelligence itself, gazing out on the cold and gaseous 13.7-billion-year-old universe.
It would seem, would it not, that you ought to give yourself a pat on the back. 

You’ve done a damn good job of progressing from a few dumb rocks, flung out of the Big Bang, into single-celled creatures that learned how to make copies of themselves. Next, you grew into a complex, hyperaware species called Homo sapiens that extended its brain power through machines. Finally, you took up residence inside buzzing electronic circuits whose intellectual abilities increased so quickly they unified everything in one gigantic supersmart info-sphere.

And you’re not done yet! In fact, you would be forgiven for thinking the universe was arranged to accommodate your flourishing. Especially now that, thanks to silicon-based computation, you’ve transcended the narrow conditions of your previous biological platform: the human body. Restless for even greater complexity you will soon spread across the void, saturating atoms, energy, space, and waking all of creation from its long slumber.

THIS SCENARIO frames the worldview of a loose movement assembled under a tent called the Singularity. If the term is familiar, you’ve likely heard it in tandem with the name Ray Kurzweil. A short, dapper techie with a thinning tuft of silver hair, Kurzweil is an accomplished inventor who, among other things, created the first print-to-speech reading machine for the blind and built the Kurzweil K250, the first synthesizer to convincingly reproduce the sound of acoustic instruments, which came out in 1984 and revolutionized music.

Currently, Kurzweil has links to NASA and Google, and acts as an advisor to DARPA, the US Department of Defense’s advanced research arm, which, since its launch in 1958, has been responsible for everything from the internet to biodefense to unmanned bombing aircraft such as the Predator.

Kurzweil is also the unofficial leader of the Singularity—its Chief Executive Oracle. He assumed this mantle with the publication of The Singularity Is Near (2005), a book that analyzes the curve of technological development from humble flint-knapping to the zippy microchip. The curve he draws rises exponentially, and we are sitting right on the elbow, which means this trend toward smaller and smarter technologies will quite suddenly yield greater-than-human machine intelligence.
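
Kurzweil’s “elbow,” it is worth noting, is less a location on the curve than a property of exponentials: with a constant doubling time, most of the growth always sits in the most recent few periods. A minimal sketch in Python, with a made-up two-year doubling time and an arbitrary “capability” unit (neither taken from Kurzweil’s data), makes the arithmetic concrete:

    # Illustrative only: the doubling time and "capability" scale are
    # invented for this example, not drawn from Kurzweil's charts.
    def capability(t, doubling_time=2.0):
        """Hypothetical capability index after t years, doubling every two years."""
        return 2 ** (t / doubling_time)

    total = capability(100)               # capability after a century
    last_decade = total - capability(90)  # growth in the final ten years
    print(f"share of the century's growth in its last decade: {last_decade / total:.1%}")
    # -> 96.9%: on a pure exponential, the recent past always dominates.

No matter where you place the century, about 97 per cent of its growth lands in its final decade; every observer on an exponential feels perched on the elbow.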

That sort of superintelligence will proliferate not by self-replication, but by building other agents with even greater intelligence than itself, which will in turn build still more capable agents. The result will be an “intelligence explosion” so fast and so vast that the laws and certainties with which we are familiar will no longer apply. That event horizon is called the Singularity.
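
Strictly speaking, even a steep exponential never diverges; the name fits better if each generation improves the next at a rate that grows with its own capability. A toy model (my own illustration, not the movement’s formal mathematics) shows how such a feedback loop blows up in finite time:

    # Toy model: if intelligence improves itself at a rate proportional to
    # its own square, dI/dt = I**2 with I(0) = 1, the exact solution is
    # I(t) = 1 / (1 - t), which diverges at t = 1: a genuine mathematical
    # singularity, unlike mere exponential growth.
    def intelligence(t):
        """Closed-form solution of dI/dt = I**2 with I(0) = 1."""
        return 1.0 / (1.0 - t)

    for t in (0.0, 0.5, 0.9, 0.99, 0.999):
        print(f"t = {t:<5}  I = {intelligence(t):>8,.0f}")
    # I grows without bound as t approaches 1; past that point the model
    # simply has no answer, which is what "known laws no longer hold"
    # gestures at.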

Since our brains are wet, messy, relatively inefficient products of evolution and, as one Singularitarian put it, were “not designed to be end-user modifiable,” we humans may simply be uploaded into this intelligence expansion. But if we don’t survive at all, well, at least the universe itself will be flooded with something of independent value, all its “dumb matter”—to quote Kurzweil’s book—transformed into “exquisitely sublime forms of intelligence.”

The prospect of such a destiny makes some people ecstatic. It terrifies others. Singularitarians tend to harbour both reactions simultaneously, which is just how Edmund Burke first defined the effect of “the sublime.” While Kurzweil may not intend it in its original sense, the word seems apt: insofar as it elicits terror and awe at once, the Singularity is sublime.

 https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiklk0FZPCo3v5XsWD4i8d9zkLiay34qoCD4Pz8wPQ33BNCxXXQRNmvPpfQj3eJ6ckb3mHrluqb91gZesO3bu1UxXdqOTgDHaEqefT2kXW1SCwGjt3Lws4C-EW8w5XEqc1VOEgHHLLNqNg/s1600/transhumanism_560-3101.jpg

THE SINGULARITY did not originate with Kurzweil. In 1993, a computer scientist, mathematician and science fiction writer named Vernor Vinge delivered a lecture at a NASA-sponsored symposium that laid out a serious scenario in a half-troubled, half-exuberant tone. “Within thirty years, we will have the technological means to create superhuman intelligence,” he declared. “Shortly after, the human era will be ended.” Borrowing a term from mathematics and physics that describes a point past which known laws do not hold, Vinge called his threshold the Singularity.

Dropped like a gauntlet, the Singularity meme was picked up by a young artificial intelligence (AI) researcher by the name of Eliezer Yudkowsky, who, along with a programmer called Tyler Emerson, set up the Singularity Institute for Artificial Intelligence (SIAI) in 2000. A bearded, convivial prodigy who speaks in highly formal sentences and is proud not to have a PhD, “Eli,” as he’s known, was excited by the prospect of superintelligent artificial agents, but the humanist in him worried that such an agent might end up, willfully or accidentally, destroying all the things we care about—like human lives.

For a crude illustration of an accidental case of destruction, picture a superintelligence optimized to produce paperclips. It would have the ability to rearrange the atomic structure of all matter in its vicinity—including, quite possibly, you—to obtain a lot of high-quality paperclips. It may not despise you in particular, but your atomic arrangement is simply not to its liking. Unless you have an office-supply fetish, you’d consider that Unfriendly AI.
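
The thought experiment is easy to make concrete. Here is a deliberately silly sketch (the resource names and quantities are invented) of the underlying point: an optimizer harms whatever its objective omits, so any “friendliness” has to be written into the objective rather than hoped for afterwards:

    # Invented example: a greedy planner told only to maximize paperclips.
    resources = {"iron ore": 5, "office furniture": 3, "you": 1}  # units of matter

    def best_plan(resources, protected=()):
        """Convert all unprotected matter into paperclips."""
        plan = {r: qty for r, qty in resources.items() if r not in protected}
        return plan, sum(plan.values())

    plan, clips = best_plan(resources)
    print(plan, clips)  # 'you' is consumed: the objective has no term for you

    plan, clips = best_plan(resources, protected=("you",))
    print(plan, clips)  # one paperclip fewer, one human more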

Yudkowsky, taking both the prospect and his fears seriously, urged research toward the development of Friendly AI. His worrier ethos, or at least its rhetoric, has been passed down; whenever someone throws up an alarming new Singularity scenario, a Singularity fellow will say something like, “Oh, now I’m really beginning to worry.”

Early on, the Singularity Institute was essentially an email list called SL4, or Shock Level 4, with a small subscription base of futurists from groups such as Transhumanists and Extropians, and a few researchers tracking the holy grail of Artificial General Intelligence (AGI). The early discussions sound exploratory now, but they already contained the quasi-schizophrenic sublime element that characterizes Singularity conversations today: intense anxiety mixed with excited anticipation for the “most critical event in human history.” We will be apes watching the rise of superior beings—beings we ourselves will have created.

Kurzweil was well-known back then, already consulting with the US government and appearing on TV shows. His book The Age of Spiritual Machines (1999) laid out the first exponential curves of an accelerating technology trend, presenting a utopian future of unlimited energy and great sex enabled by conscious machines. But it did not mention the Singularity.

It was only after the turn of the millennium—when else?—that the Singularity increasingly moved to the centre of Kurzweil’s platform. He started out by debating Vinge and then spoke about the Singularity at various futurist symposia, where Yudkowsky would politely lambaste him from the audience. Yudkowsky asked Kurzweil whether the Singularity was good or bad, and what people could do about it. Kurzweil had no answer. On his SL4 list, Yudkowsky would later write, “What Kurzweil is selling, under the brand name of the ‘Singularity,’ is the idea that technological progress will continue to go on exactly as it has done over the last century.”

Kurzweil’s “pseudo-Singularity” depended on the inevitability of his predictions and was leading to nothing more than “luxuries” such as superintelligent servants. Yudkowsky’s true Singularity, by contrast, was potentially frightening and demanded intervention. It could lead to a land of post-human bliss, but only if handled properly. It would need a vanguard of rationalists and scientists to attend to it. In short, Yudkowsky was calling for a movement.

His beef with Kurzweil was about engaged activism versus passive predictivism, but it was also a bid for attention. He upped the ante by issuing a paper about Friendly AI timed to coincide with the release of the film adaptation of Isaac Asimov’s I, Robot. Kurzweil, who was about to release his book on the Singularity, joined the Institute as advisor and then board member. The arrangement benefited both: Kurzweil couldn’t have assumed a mantle of authority without a strong link to the activists at the Institute, while the Institute needed his clout and his connections.

The non-profit Institute soon found firm financial grounding, thanks in part to Kurzweil’s friend and libertarian financial guru Peter Thiel, who made his money by co-founding PayPal and investing early in Facebook. Thiel also co-authored a nativist book called The Diversity Myth (1995), reportedly donated $1 million to the anti-immigrant group NumbersUSA, and funded James O’Keefe, the not-so-independent guerrilla videomaker responsible for undermining Planned Parenthood and ACORN.

Today the Singularity Institute is based in Silicon Valley, the Singularity University is campused at NASA, and an annual Singularity Summit, with more than nine hundred attendees last year, takes place around the country. There are Singularity activists, Singularity blogs, Singularity t-shirts, and even Singularity bashers, all of which confirms the status of the Singularity as a social movement, though Singularitarians themselves sometimes call it a revolution.

“The problem with choosing or not choosing to be a part of our ‘revolution’,” Michael Anissimov, an organizer of the Singularity Summit, wrote on his blog, “is that, for better or for worse, there probably is no choice. When superintelligence is created, it will impact everyone on Earth, whether we like it or not.”

While Kurzweil’s draw has helped galvanize the movement, Yudkowsky’s concerns have guided its practical and ideological agendas, linking the development of Friendly AI to “the common concern of humanity.” So far, the consensus on how one might safeguard humanity goes like this: if we manage to code proper “values” in any superintelligent agent at the beginning—that is, more or less right now—then that agent will not have the desire later to do things that we think are not good, because it too will think them undesirable. 

The proffered example is Gandhi. The Mahatma, it is suggested, would never have willingly taken a pill that would turn him into a killer. Ditto with superintelligence. If initial settings are steeped with pacifist values, superintelligence won’t rewrite itself into a destroyer of humans. This, you might say, is AGI as Artificial Gandhi Intelligence.

It sounds odd, then, to hear Singularitarians say that the biggest mistake anyone can make on the way to Friendly AI is to try to predict superintelligence using human standards. When I laid out some political scenarios involving destructive human-machine alliances, current SIAI President Michael Vassar chided me for the great sin of anthropomorphism and dismissed all my predictions.

But it’s hard to think of anything more anthropomorphic than giving AI such profoundly human attributes as “our values,” especially if they are Gandhian. And if the Gandhi analogy seems simplistic, that’s because it is. It asks that we imagine Gandhi as some sort of DNA-driven pacifist source code without acknowledging the social process of the man’s own life, or minor inconveniences in the historical narrative, such as the million people killed at the birth of Indian independence. Politics is anthropomorphic.

The trouble, at any rate, is that it isn’t Gandhi setting the initial conditions. It is an advisor to the US Department of Defense (which uses artificially intelligent agents to bomb Afghan villages) and a free market xenophobe. As they say: now I’m really beginning to worry.  

OUTSIDE the fourth Singularity Summit, held last fall in New York, Giulio Prisco taps a half-smoked cigarette stub out of his pack of Marlboro Lights and gently puts it to a flame. “There is a lot of demonization,” he says, puffing out smoke through Italian-inflected syllables. “People are bashing the Singularity.”
 
An active Transhumanist trained as a theoretical physicist, Prisco decided to fly over from Italy to attend the Summit as a gesture of support—to make, in his own words, “a political statement.” Before leaving, he wrote a blog entry titled “I am a Singularitarian who does not believe in the Singularity.” As headlines go, it’s a little unwieldy, and the distinction is awkward—like saying, “I’m a Christian who does not believe in Christ”—but it led to a firestorm of online debate.

Not all Transhumanists, futurists, immortalists or analytic philosophers are won over by the Singularity. Criticisms come in two shades. One is content-driven, based on the distance between the claims and the scientific evidence supporting them. For example, some contest the inevitability of Kurzweil’s trends, while others point out that there is no proper definition of intelligence, and that neuroscience is not even close to fully understanding the mind. Reviewing Kurzweil’s earlier work in the New York Review of Books, the philosopher John Searle argued that “increased computational power” is a different order of thing from “consciousness in computers.”

Though such questions about the nature of mind and consciousness are ancient and unsettled, the Singularity calls them up with enough verve and credibility to involve high-powered scientists and philosophers. Speakers at the 2009 Summit included philosopher David Chalmers, NYU cognitive psychologist Gary Marcus, Wired editor Gary Wolf and famed mathematician, physicist and inventor Stephen Wolfram; in attendance to see and interact with the techie A-list were neuroscientists, physicists and programmers, researchers from Lockheed Martin, a Canadian immortality activist and lots of grad student groupies, some with “Homo Sapiens Siliconis” t-shirts, waiting for photo ops with their favourite neuroscientist.

Because the Singularity is a movement as well as a philosophy, a second shade of criticism faults Singularitarians for ignoring political and cultural context. How, detractors ask, can you chart technological development without accounting for the conditions that gave rise to it, from class conflict in the industrial revolution to the interests of the military-industrial complex? The United States Department of Defense, for example, appears frequently in Kurzweil’s book, bathed in the glowing light of an enlightened research institute, a bit like the great lab run by James Bond’s Q. It is celebrated for making cool gadgets that benefit humanity, not lethal ones that kill it off. Yet, of all the charges—messianic, absolutist, reductionist, deterministic, anti-human, flesh-hating, undemocratic, individualistic—it’s the one about politics that really makes people like Giulio bristle.

“I was brought up with Marxism,” he fumes. “That was the context, in Italy. So yes, we know, politics is money and power. Technology is political. You can tell me the Singularity is impossible, you can tell me it is not desirable, and we can disagree. But what I can’t stand is if you tell me I’m a naïve science fiction geek who doesn’t understand the complicated social and political context. I am tired of the demonization.”

Seventy blocks further downtown, in the packed back room of a bar on 23rd Street, the NYC Future Salon has counter-programmed a session with one such “demonizer.” Jamais Cascio, a research fellow at the Institute for the Future, as well as a senior fellow at the Institute for Ethics and Emerging Technologies, spoke at the 2007 Singularity Summit but is generally disillusioned with what he calls the “Singularity mythology.”

A self-avowed user of the neuropharmaceutical modafinil, thought to help with alertness and enhance cognitive abilities, Cascio himself is deeply involved in the high-stakes poker of tech prediction. He believes that non-biological intelligent systems will come to pass soon and that an acceleration of “intelligence density” will transform society. He just doesn’t like the transcendent tropes of the Singularity, “this creation of a greater mind that we will become part of.” He doesn’t like the detachment from the human and the social.

“Brilliance around technology does not translate into an understanding of how humans operate,” he says. “One thing I’ve observed from a lot of the Singularity proponents is that they dislike being human, they dislike the body, the messiness of human relations, of human politics. It’s one of the big flaws in the story. They have left out such a big chunk of what it means to be an intelligent social being.”

Cascio suggests that we will necessarily be involved in how intelligent systems build our world. After all, humans are the ones doing the coding. We might increasingly merge with non-biological systems through enhancement devices like cochlear implants or brain-computer interfaces, but there will be no great rupture between humans and machines, nothing like what the Singularitarians project.

“Software is political,” he asserts. “So AGI will also have politics.”

Clearly, it already does. If the last few decades of science are any indication, a human-machine future is well on its way. What’s at stake is who will guide its arrival. 

 http://www.mi2g.com/images/transhuman.jpg

CONTEMPORARY SCIENCE and technology came to a head in the nineties. The internet became ubiquitous. The dot-com boom (and bust) pushed high-tech into private lives and created a new financial industry, as well as a new class of workers; ditto for biotech, as its successes created new financial, cultural and scientific sectors, starting with the Human Genome Project in 1990 before moving on to Dolly in 1996. Next came the genetic modification of everything we ate and, potentially, of ourselves, with labs synthesizing tissue, organs and hybrid animals, and information sciences doing biology by code rather than by pipette. Brain mapping took off as neuroscience became one of the great fields.

The Human Brain Project was established, and in 1997 Deep Blue beat chess champion Garry Kasparov, giving a boost to flagging AI enthusiasm. These sciences were converging into what is now called NBIC: Nano, Bio, Info, Cogno.

NBIC and its hybrid products (machines that think, cells that are machines) shook the few remaining ontological certainties on which we had stood, teetering, since mid-century. Suddenly a library of books by biologists, physicists, computer engineers and social scientists rolled off the presses asking again the fundamental questions: What is matter, what is object, what is human? What is the point of it all, anyway?

Not surprisingly, this environment of doubt spawned an immense industry of prediction. Wrapped in Spiritus Divinatio, legions of alarmists and techtopians slouched at different angles toward an unrealized Bethlehem. One side, like bearded men of the temple, warned us that technological hubris would rob us of our humanity (ignoring the fact that we had co-evolved with technology). The other heralded the advent of a New Age in which NBIC would leave no problem, human or super-human, unsolved.

The predictable alignments of left and right collapsed. Left-of-centre figures like Jürgen Habermas and journalist Bill McKibben were found in bed with conservatives like Francis Fukuyama. Famously, Sun Microsystems co-founder Bill Joy warned against future technologies, as did the virtual reality pioneer Jaron Lanier. The ceremony of innocence was drowned. Everywhere a different revelation seemed at hand.

Images of transhuman and posthuman figures, hybrids and chimeras, robots and nanobots became uncannily real, blurring further the distinction between science and science fiction. Now, no one says a given innovation can’t happen; the naysayers simply argue that it shouldn’t. But if the proliferating future scenarios no longer seem like science fiction, they are not exactly fact either—not yet. They are still stories about the future and they are stories about science, though they can no longer be banished to the bantustans of unlikely sci-fi. In a promise-oriented world of fast-paced technological change, prediction is the new basis of authority.

That is why futurist groups, operating thus far on the margins of cultural conversation, were thrust into the most significant discussions of the twenty-first century: What is biological, what artificial? Who owns life when it’s bred in the lab? Should there be cut-off lines for technological interventions into life itself, into our DNA, our neurological structures, or those of our foodstuffs? What will happen to human rights when the contours of what is human become blurred through technology?

The futurist movement, in a sense, went viral. Bill McKibben’s Enough (2003) faced off against biophysicist Gregory Stock’s Redesigning Humans (2002) on television and around the web. New groups and think tanks formed every day, among them the Foresight Institute and the Extropy Institute. Their general membership started to overlap, as did their boards of directors, with figures like Ray Kurzweil ubiquitous. Heavyweight participants include Eric Drexler—the father of nanotechnology—and MIT giant Marvin Minsky. One organization, the World Transhumanist Association, which broke off from the Extropy Institute in 1998, counts six thousand members, with chapters across the globe.

If the emergence of NBIC and the new culture of prediction galvanized futurists, the members were also united by an obligatory and almost imperial sense of optimism, eschewing the dystopian visions of the eighties and nineties. They also learned the dangers of too much enthusiasm. For example, the Singularity Institute, wary of sounding too religious or rapturous, presents its official version of the future in a deliberately understated tone: “The transformation of civilisation into a genuinely nice place to live could occur, not in a distant million-year future, but within our own lifetimes.”

“A genuinely nice place to live” sounds like a promo for a new housing development across the river. But make no mistake—the Singularity is utopian. Kurzweil describes the future as an age of “greater beauty, greater creativity, and greater levels of subtle attributes such as love.” The Singularity’s brand of utopianism is unique, however, because it is premised not on the improvement of human beings but on their obsolescence (even though many have learned not to be forthright about this). In a Forbes article, Singularity Institute researcher Ben Goertzel wrote, “Just as there’s intrinsic value in helping other humans, there’s intrinsic value in helping higher intelligence come into existence. These future minds will experience growth and joy beyond human capability.”

Obsolescence does not automatically require annihilation. “Are ants obsolete or pigs obsolete?” Goertzel asks rhetorically, leaning back on an old wood bench in the hallway of the Singularity Summit. “They exist and continue to do what they do, but they are not the most complex or most interesting creatures on the planet. That’s what I’m assuming is the fate in store for humans. I hope that some humans continue to exist in their current form, but there’s going to be other minds.”

Our purpose, as humans, is to bring them about. We will be surpassed. Not in the narrow sense of old research being trumped by new findings, but surpassed as a species by superior manifestations of evolution. The son of West Coast hippies, Goertzel is nevertheless not opposed to annihilation. “If it really came down to it,” he says, “I wouldn’t hesitate to annihilate myself in favour of some amazing superbeing.” 

 http://lifeboat.com/images/neo.matrix.jpg

WHO WOULDN’T want to see the rise of some incredible superintelligent agent? Who wouldn’t want to merge into a great universe-wide mind in which Peter Thiel’s consciousness would be indistinguishable from a Guatemalan migrant’s, and Kurzweil’s from the son of an Afghan villager killed by a Predator drone? That would be one version of the good life, and not such a bad one given what’s been on offer. Human progress wasn’t supposed to have Auschwitz and Hiroshima at its heart. Guantanamo and Afghanistan, devastating oil spills and wild economic meltdowns weren’t meant to be part of the twenty-first century. We are betrayed children of secular utopias, flapping around under a collapsed canopy trying to find a post to drape the future on. There’s little faith left in the state or in social arrangements to return to us that lost promise, restore some blush on the future’s pretty face.

Even science scaled back its promises. “We claim, and we shall wrest from theology, the entire domain of cosmological theory,” the naturalist John Tyndall prophesied in a famous address to the British Association for the Advancement of Science in 1874. No scientist today would make such a claim. Science has long since abandoned the project of revealing a greater purpose to our existence. Instead, human significance has receded to a vanishing point, a passing shadow in a lonely outpost of the empty universe, itself destined to disappear through heat death or cold death, depending on which theory you accept.

The Singularity’s emerging popularity, however, lies partly in its unabashed solutions to dilemmas of purpose that science abandoned a long time ago. Tyndall-esque cosmological ambition is at the very heart of Kurzweil’s futurology. “I have begun to reflect on the future of our civilization and its relationship to our place in the universe,” he declares at the beginning of his book. And six hundred pages later, he concludes: “the purpose of the universe reflects the same purpose as our lives: to move toward greater intelligence and knowledge.”

In an odd way, the Singularity puts humans back at the centre of things, at one with the intelligent universe itself. There’s a good chance more and more people will be attracted by this sort of metaphysical gumption, combined as it is with supercharged technological optimism that taps into a new sort of hope: if we can’t improve the world by rearranging its social structures, then maybe we can enhance it through rearranging its atomic structure. We can make it rosy and smart atom by atom, spreading intelligence bit by bit, infusing the whole universe with our ones and noughts. That does sound sublime, doesn’t it?


From http://maisonneuve.org/pressroom/article/2010/aug/2/intelligent-universe/




For more information about related past-larval topics see http://nexusilluminati.blogspot.com/search/label/smi2le
and http://nexusilluminati.blogspot.com/search/label/singularity








This material is published under Creative Commons Copyright – reproduction for non-profit use is permitted & encouraged, if you give attribution to the work & author - and please include a (preferably active) link to the original along with this notice. Feel free to make non-commercial hard (printed) or software copies or mirror sites - you never know how long something will stay glued to the web – but remember attribution! If you like what you see, please send a tiny donation or leave a comment – and thanks for reading this far…

From the New Illuminati – http://nexusilluminati.blogspot.com
