The Singularity Is Near


The Singularity Is Near

Postby nomo » Tue Oct 18, 2005 5:35 pm

Here It Comes
Technology's progress will soon accelerate -- exponentially. You have no idea how much. Ray Kurzweil does.

By GLENN HARLAN REYNOLDS
Special to THE WALL STREET JOURNAL
October 1, 2005; Page P8

http://online.wsj.com/public/article/SB112811088248757062.html?mod=todays_free_feature

The Singularity Is Near
By Ray Kurzweil
Viking, 652 pages, $29.95

The bearded fellow with a sign reading "The End Is Nigh" is a staple of editorial cartoons. The title-phrase of "The Singularity Is Near" is obviously meant to evoke that image, and for any reader who is slow to catch on Ray Kurzweil makes the allusion clear in Chapter 7 ("Ich bin ein Singularitarian") with an amusing photo of himself holding up a sign announcing the imminence of the Singularity. But Mr. Kurzweil's book is about beginnings, not endings.

The Singularity is a term coined by futurists to describe that point in time when technological progress has so transformed society that predictions made in the present day, already a hit-and-miss affair, are likely to be very, very wide of the mark. Much of Mr. Kurzweil's book consists of a closely argued analysis suggesting that the Singularity is, well, near: poised to appear in a mere three or four decades.

People's thoughts of the future tend to follow a linear extrapolation -- steadily more of the same, only better -- while most technological progress is exponential, happening by giant leaps and thus moving farther and faster than the mind can easily grasp. Mr. Kurzweil himself, thinking exponentially, imagines a plausible future, not so far away, with extended life-spans (living to 300 will not be unusual), vastly more powerful computers (imagine more computing power in a head-sized device than exists in all the human brains alive today), other miraculous machines (nanotechnology assemblers that can make most anything out of sunlight and dirt) and, thanks to these technologies, enormous increases in wealth (the average person will be capable of feats, like traveling in space, only available to nation-states today).

Naturally, Mr. Kurzweil has little time for techno-skeptics like the Nobel Prize-winning chemist Richard Smalley, who in September 2001 published a notorious piece in Scientific American debunking the claims of nanotechnologists, in particular the possibility of nano-robots (nanobots) capable of assembling molecules and substances to order. Mr. Kurzweil's arguments countering Dr. Smalley and his allies are a pleasure to read -- Mr. Kurzweil clearly thinks that nanobots are possible -- but in truth he is fighting a battle that is already won. These days skeptics worry that advanced technologies, far from failing to deliver on their promises, will deliver on them only too well -- ushering in a dystopia of, say, destructive self-replication in which the world is covered by nanobots that convert everything into copies of themselves (known in the trade as the "gray goo" problem). Mr. Kurzweil's sense of things isn't nearly so bleak as that -- he is an optimist, after all, an enthusiast for the techno-future -- but he does sound a surprisingly somber note.

Indeed, "The Singularity Is Near" is partly a cautionary tale.
Having established that we're going to face a very different world in the second half of the 21st century -- and face it healthier, wealthier and more artificially intelligent if not precisely wiser -- Mr. Kurzweil concedes that so-called GNR technologies (genetics, nanotech and robotics) may present problems. We may find ourselves battling genetically enhanced super pathogens, deadly military nanobots and powerful "unfriendly" artificial intelligences scheming against those of us with mere natural intelligence. Though Mr. Kurzweil regards these threats as manageable, he does not minimize them and offers chilling scenarios of what could go wrong. These scenarios are all the more credible because they come from Mr. Kurzweil and not from one of the usual gang of scaremongering Luddites.

Unlike the Luddites, Mr. Kurzweil argues that the best way of curbing technology's potential harm is ... more technology. He notes that to pull back on forward-looking research, or to abandon various machine-marvels, will only make things worse by driving research underground and into irresponsible hands. Instead we should start thinking now about how to safeguard society from technology gone wrong.

Mr. Kurzweil advocates prophylactic measures like the Asilomar guidelines for recombinant DNA research, which require special precautions for dangerous pathogens and restrict the most serious meddling. He also calls for much more research into antiviral drugs, rapid vaccines, defensive nanotech and artificial intelligence designed to remain friendly.

It is a persuasive plea, but will anyone listen in time? The political system tends to lag behind technological change, which is often a good thing. I remember attending a House subcommittee hearing in the 1980s on whether the U.S. should create a phone-computer system modeled on the state-funded French Minitel, a text-only network being promoted as the wave of the future. Fortunately, the Internet exploded -- making Minitel obsolete -- before Congress could fund such a project.

But when it comes to the dangers that Mr. Kurzweil worries about, the slow approach is a problem. The government is so often behind the curve -- think of how sluggishly it adapts to changes in employment patterns, shifts in international trade or attacks of new diseases like avian flu. What happens when the curve is an exponential one?

Perhaps it won't matter. As Mr. Kurzweil notes, private entrepreneurs seem to have pushed back the threat of computer viruses, for instance, moving more rapidly and more effectively than any government agency ever could. And certainly free-marketeers reading Mr. Kurzweil's book will see opportunities for profit: Imagine the billions to be made from rapid vaccine-production technologies in a world where genetic engineering is common. But I would feel more comfortable if more people started following Mr. Kurzweil's advice in the near future, before the Singularity gets here.

--
When all else fails... panic.
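
The linear-versus-exponential contrast at the heart of the review is easy to make concrete with a few lines of arithmetic. A minimal Python sketch (the starting value, the one-unit-per-year linear rate, and the two-year doubling time are illustrative assumptions, not figures from Kurzweil's book):

    # Compare linear and exponential extrapolations of "capability" over
    # four decades. All rates here are illustrative guesses.
    start = 1.0          # capability today, in arbitrary units
    linear_step = 1.0    # linear forecast: add one unit per year
    doubling_years = 2   # exponential forecast: capability doubles every 2 years

    for year in (10, 20, 30, 40):
        linear = start + linear_step * year
        exponential = start * 2 ** (year / doubling_years)
        print(f"year {year}: linear {linear:.0f}x, exponential {exponential:,.0f}x")

    # year 10: linear 11x, exponential 32x
    # year 20: linear 21x, exponential 1,024x
    # year 30: linear 31x, exponential 32,768x
    # year 40: linear 41x, exponential 1,048,576x

After 40 years the linear guess is off by more than four orders of magnitude, which is the kind of gap the review says makes present-day predictions "very, very wide of the mark."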
nomo
 
Posts: 3388
Joined: Tue Jul 26, 2005 1:48 pm
Location: New York City

I've met Kurzweil a couple times

Postby glubglubglub » Tue Oct 18, 2005 6:18 pm

He's a bright man, but he's a bit of a crank. As a matter of personal opinion I don't think it's even going to happen, period, but as a practical matter the general consensus of people working in perhaps the most critical field -- artificial intelligence -- is that it's, as always, at least 30 years away (like fusion, it's been 30 years away since the '60s). Since this is basically what I do for a living I consider myself informed enough to offer a reasonable opinion on the matter: solid, general-purpose AI on the level needed for the singularity is still decades away at the earliest.

Besides, Glenn Reynolds is in my opinion a general nutter (he's Instapundit), and outside of my opinion the fact is that he is by training a lawyer, with no supplementary knowledge or experience to make his opinion on the matter worth a damn.
glubglubglub
 
Posts: 328
Joined: Fri Apr 22, 2005 5:14 pm

yeah, ggg but:

Postby Homeless Halo » Tue Oct 18, 2005 9:22 pm

But AI itself isn't inherently attached to the idea of singularity studies, something the government takes seriously enough to have a think tank working on it round the clock nowadays. The idea is also treated seriously by many respectable scientists (Michio Kaku, for example, in "VISIONS"), and is seen as a question not of "if" but of "when".

I'd point out that something far more potentially dangerous, and much closer, is already seeing the light of day in its primitive forms. This is AIA, or Artificial Intelligence Amplification. The basic dynamics of human/computer fusion have already been achieved by US Naval researchers, declassified not so long ago. It was on the MSM news. Hardwired computer manipulation, onboard control. Fascinating stuff, really.

Humans wired into machines are, IMO, far scarier than smart machines, and will probably impact us far more than we can imagine (literally) at present. Although it's still in its infancy, you should expect that with dozens of researchers now scuttling about all over this stuff, we'll start seeing applications in less than half the time estimated for AI (which I think they're being optimistic about at 30 years, although working AIA could realign our numbers dramatically).

The scarier implication is that since this is military research, they are probably 20-35 years beyond it, given their normal trends. Which means it might already exist in functional forms in a BOp setting. The mainstream American research labs DO NOT represent the actual frontiers of science. What if the USMIC achieves an accidental singularity next weekend?

This is something we need to be taking seriously. A lot of the wonkier ideas are probably a bit further down the line. Nanotechnology, which would be exceedingly expensive to create, costing about as much money as disappears from the American budget each year, would be nearly free to reproduce en masse, and leads to a number of military applications, many of which could destroy our solar system in days (an optimistic appraisal).

Not only should we make rules now for what we'll do when these things become plausible (they're already possible), but we should recognize the tendency of off-the-record research to be far enough ahead that they might already be there (electrode implants for behavior modification in the 1890s, etc.) before we regard it as something to take seriously.

Just a thought.

-SCHR
Homeless Halo
 
Posts: 564
Joined: Wed Sep 28, 2005 1:51 am

from what I've seen and heard

Postby glubglubglub » Tue Oct 18, 2005 9:51 pm

in computer-related areas the military hardware isn't 20-30 years ahead; the software maybe, the hardware no (though I've heard through the grapevine that mil. hologram tech -- including holographic data storage -- has seen progress). It used to be that they'd just run top-line computer equipment in cryogenic fluids, sort of an extreme version of today's overclocking. I mean, anything's possible -- esp. if you take the prospects of a roswellesque event seriously -- but on the computational front it's damn tightly sealed if true. I'd guess that in terms of physical technology -- propulsion and other aerospace in particular -- the military's 50+ years ahead now, as there've been few real civilian-side improvements since the '70s.

I did completely ignore the cyborg possibility; no idea how feasible that actually is, but it seems more reasonable at the moment than smart machines. I still have my doubts that AIA in the near term is going to lead to a Vinge-style singularity -- entities smart enough to improve their own design, with the resultant design-improve-design process leading to exponential progress -- primarily because, again, the software's not smart enough and at the moment the man-machine integration is still crude. Giving the blind pixellated sight is nice until you look up how the visual cortex actually arranges images -- do that and you'll see just how far we still are from directly injecting 'vision' into people -- and the remaining stuff is mostly far cruder (not least b/c the understanding of how, say, semantic info is actually stored is not even in its infancy yet), although auditory stuff is another matter altogether.

We can speculate all day, but I'm still bloody skeptical re: the singularity. Also, the physics of Drexler-style nanoassemblers basically doesn't work -- i.e., if we're building things atom by atom it'll be by inkjet, not by an atomic-scale crane arm -- and at scales much larger than that, energy considerations mean the grey goo scenario's pretty far-fetched (unless zero point or some such is tappable by those devices, but if so all bets are off).

We can agree to disagree but I'm still in the skeptic's camp, and I'll eat my hat if we see a recognizable singularity in our lifetime.
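
The design-improve-design loop described above is easy to put into toy-model form, and doing so shows why the disagreement is hard to settle: everything hinges on an assumed, currently unmeasurable "returns to capability" exponent. A minimal Python sketch (the exponent k, the gain, and the 30-generation horizon are made-up illustrative parameters, not anything from the thread or the literature):

    # Toy model of a Vinge-style design-improve-design loop. Each generation
    # improves its own design, and the size of the improvement scales with
    # current capability raised to an assumed exponent k. Both k and gain
    # are free parameters chosen purely for illustration.

    def self_improvement(k, gain=0.1, c=1.0, generations=30):
        for _ in range(generations):
            c += gain * c ** k  # redesign step: more capable systems improve faster
        return c

    for k in (0.5, 1.0, 1.5):
        print(f"k={k}: capability after 30 generations = {self_improvement(k):,.1f}")

    # k=0.5: diminishing returns, growth fizzles (ends near 6x)
    # k=1.0: proportional returns, plain exponential growth (about 17x)
    # k=1.5: accelerating returns, runaway singularity-style growth

Whether any real version of the loop would sit above or below k=1 is exactly the kind of question that, per the post above, neither today's software nor today's neuroscience can answer.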
glubglubglub
 
Posts: 328
Joined: Fri Apr 22, 2005 5:14 pm

