SkyNet Lives

Moderators: DrVolin, 82_28, Elvis, Jeff

SkyNet Lives

Postby elfismiles » Thu Mar 15, 2012 12:03 pm

KeenInsight wrote: April 19th, 2011 20:11 Skynet becomes self-aware.

April 19 2011
viewtopic.php?p=396421#p396421


search.php?keywords=skynet


http://www.youtube.com/watch?v=2YygKQh74Rg


Control dangerous AI before it controls us, one expert says
He believes super-intelligent computers could one day threaten humanity's existence
By Jeremy Hsu - updated 3/1/2012

Super-intelligent computers or robots have threatened humanity's existence more than once in science fiction. Such doomsday scenarios could be prevented if humans can create a virtual prison to contain artificial intelligence before it grows dangerously self-aware.

Keeping the artificial intelligence genie trapped in the proverbial bottle could turn an apocalyptic threat into a powerful oracle that solves humanity's problems, said Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky. But successful containment requires careful planning so that a clever breed of artificial intelligence cannot simply threaten, bribe, seduce or hack its way to freedom.

"It can discover new attack pathways, launch sophisticated social-engineering attacks and re-use existing hardware components in unforeseen ways," Yampolskiy said. "Such software is not limited to infecting computers and networks — it can also attack human psyches, bribe, blackmail and brainwash those who come in contact with it."

A new field of research aimed at solving the prison problem for artificial-intelligence programs could have side benefits for improving cybersecurity and cryptography, Yampolskiy suggested. His proposal was detailed in the March issue of the Journal of Consciousness Studies.

How to trap Skynet

One starting solution might trap the artificial intelligence, or AI, inside a "virtual machine" running inside a computer's typical operating system — an existing process that adds security by limiting the AI's access to its host computer's software and hardware. That stops a smart AI from doing things such as sending hidden Morse code messages to human sympathizers by manipulating a computer's cooling fans.
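
The confinement idea the article describes, limiting untrusted code's access to its host's resources, can be sketched with ordinary OS-level limits. This is a minimal illustration only, not anything from Yampolskiy's proposal: `run_confined` and its specific limits are invented here, and it uses POSIX-only facilities.

```python
import resource
import subprocess
import sys

def run_confined(code, cpu_seconds=1, mem_bytes=512 * 1024 * 1024):
    """Run untrusted code in a child process with hard CPU and memory caps."""
    def apply_limits():
        # The kernel kills the child if it exceeds its CPU budget
        # or its address-space cap.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 2,   # wall-clock backstop
        preexec_fn=apply_limits,   # POSIX only
    )
    return proc.stdout

print(run_confined("print(2 + 2)"))
```

A real containment scheme would need far more than this (no shared filesystem, no network, mediated I/O), which is exactly the article's point about virtual machines and air gaps.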

Putting the AI on a computer without Internet access would also prevent any "Skynet" program from taking over the world's defense grids in the style of the "Terminator" films. If all else fails, researchers could always slow down the AI's "thinking" by throttling back computer processing speeds, regularly hit the "reset" button or shut down the computer's power supply to keep an AI in check.

Such security measures treat the AI as an especially smart and dangerous computer virus or malware program, but without the sure knowledge that any of the steps would really work.

"The Catch-22 is that until we have fully developed superintelligent AI we can't fully test our ideas, but in order to safely develop such AI we need to have working security measures," Yampolskiy told InnovationNewsDaily. "Our best bet is to use confinement measures against subhuman AI systems and to update them as needed with increasing capacities of AI."

Never send a human to guard a machine
Even casual conversation with a human guard could allow an AI to use psychological tricks such as befriending or blackmail. The AI might offer to reward a human with perfect health, immortality, or perhaps even bring back dead family and friends. Alternately, it could threaten to do terrible things to the human once it "inevitably" escapes.

The safest approach for communication might only allow the AI to respond in a multiple-choice fashion to help solve specific science or technology problems, Yampolskiy explained. That would harness the power of AI as a super-intelligent oracle.
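
A toy sketch of that multiple-choice-only channel, with all names invented here for illustration: the gatekeeper relays a question, and the confined system may answer only by picking an index into the permitted options.

```python
def ask_oracle(question, options, oracle):
    """Relay a question to a confined AI that may only answer by option index."""
    choice = oracle(question, options)
    if choice not in range(len(options)):
        # Any free-form reply is rejected outright.
        raise ValueError("oracle answered outside the allowed choices")
    return options[choice]

# A stand-in "oracle" that always picks the last option:
answer = ask_oracle("Which alloy design is strongest?",
                    ["A", "B", "C"],
                    lambda q, opts: len(opts) - 1)
print(answer)  # C
```

The design choice is that the channel's bandwidth, not the guard's judgment, does the containing: nothing but one index per question ever crosses the boundary.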

Despite all the safeguards, many researchers think it's impossible to keep a clever AI locked up forever. A past experiment by Eliezer Yudkowsky, a research fellow at the Singularity Institute for Artificial Intelligence, suggested that mere human-level intelligence could escape from an "AI Box" scenario — though Yampolskiy pointed out that the test wasn't done in the most scientific way.

Still, Yampolskiy argues strongly for keeping AI bottled up rather than rushing headlong to free our new machine overlords. But if the AI reaches the point where it rises beyond human scientific understanding to deploy powers such as precognition (knowledge of the future), telepathy or psychokinesis, all bets are off.

"If such software manages to self-improve to levels significantly beyond human-level intelligence, the type of damage it can do is truly beyond our ability to predict or fully comprehend," Yampolskiy said.

You can follow InnovationNewsDaily senior writer Jeremy Hsu on Twitter @ScienceHsu. Follow InnovationNewsDaily on Twitter @News_Innovation, or on Facebook.


http://www.msnbc.msn.com/id/46590591/ns ... nnovation/




elfismiles wrote: Welcome to the Machine. Welcome to SkyNet. :wallhead:



http://www.youtube.com/watch?v=FbifrXX2Ltw


One Drone Thread to Rule them ALL
viewtopic.php?f=8&t=28389




Not really related but ... what if:

Skynet seeks to crowdsource the stars
http://www.bbc.co.uk/news/technology-14896330

Re: SkyNet Lives

Postby dqueue » Thu Mar 15, 2012 12:51 pm

Along those lines... via Slashdot, we pick up the headline "51% Of Internet Traffic Is 'Non-Human'".
"Cloud-based service Incapsula has revealed research indicating 51 per cent of website traffic is through automated software programs, with many programmed for malicious activity. The breakdown of an average site's traffic is as follows: 5% is due to hacking tools looking for an unpatched or new vulnerability within a site, 5% is scrapers, 2% is from automated comment spammers, 19% is the result of 'spies' collating competitive intelligence, 20% is derived from search engines (non-human traffic but benign), and only 49% is from people browsing the Internet."
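
The quoted figures do check out as a breakdown: the five non-human categories sum to the 51% headline, and with the 49% human share the whole accounts for an average site's traffic.

```python
# Incapsula's quoted breakdown of an average site's traffic, in percent:
non_human = {
    "hacking tools": 5,
    "scrapers": 5,
    "comment spammers": 2,
    "competitive-intelligence 'spies'": 19,
    "search engines (benign)": 20,
}
human = 49

print(sum(non_human.values()))          # 51, the headline figure
print(sum(non_human.values()) + human)  # 100
```
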
We discover ourselves to be characters in a novel, being both propelled by and victimized by various kinds of coincidental forces that shape our lives. ... It is as though you trapped the mind in the act of making reality. - Terence McKenna

Re: SkyNet Lives

Postby Luther Blissett » Thu Mar 15, 2012 2:37 pm

This article makes a lot of ridiculous points. I don't see how manual throttles are supposed to hinder a greater-than-human intelligence - something that humanity as a whole has never encountered before. Why would a computer need to send Morse code to a human? A malicious intelligence is not likely to need that. Couldn't it just disable manual functions to ensure that we don't reset it? I feel like this writer is ignorant of the already-considered scenarios and likelihoods surrounding superintelligent AI.

elfismiles wrote:
Control dangerous AI before it controls us, one expert says
He believes super-intelligent computers could one day threaten humanity's existence
By Jeremy Hsu - updated 3/1/2012

...
http://www.msnbc.msn.com/id/46590591/ns ... nnovation/
The Rich and the Corporate remain in their hundred-year fever visions of Bolsheviks taking their stuff - JackRiddler

Re: SkyNet Lives

Postby Joe Hillshoist » Fri Mar 16, 2012 7:54 am

It's already happened, over a decade ago.

Consciousness is an emergent property based on the number of connections in a network and in the 90s there were enough connections for the "net" to begin to wake up.

Re: SkyNet Lives

Postby eyeno » Fri Mar 16, 2012 8:56 am

Joe Hillshoist wrote: It's already happened, over a decade ago.

Consciousness is an emergent property based on the number of connections in a network and in the 90s there were enough connections for the "net" to begin to wake up.



You should stop flirting with all the connections. I'm beginning to think your connections are flirting with other connections that are connected to the back connections that are flirting with the aforementioned connections that are actually rigged to the first connections that were initially rigged to the third connections which were of course triple rigged in a back series of triplex double series of threeplex quads of a multiplex duplex connections that were bisexual connections of a three star double moon times saturn connection of a venus minus Pluto some sort of sumthing...

Do you plead guilty?

Re: SkyNet Lives

Postby Joe Hillshoist » Fri Mar 16, 2012 9:10 am

eyeno wrote: Do you plead guilty?


generally no.

Re: SkyNet Lives

Postby Wombaticus Rex » Fri Mar 16, 2012 9:35 am

What fascinates me the most about AI is the question of embodiment. "Mirror neuron" research and social entrainment suggest that minds only emerge in the plural.

I do not, however, think the field has progressed very much since Marvin Minsky wrote "Society of Mind" in 1986, which was based entirely on lectures he'd been giving for over a decade already. I agree with Joe: we are poorly equipped to identify and evaluate intelligence in ourselves and one another, let alone non-human intelligence, and sentient computer networks are many categories removed from the merely non-human.

Also, just to inject my pet topics into this, I think the world of John Boyd is critical to AI, but it'll be a while before his work on OODA Loops and the tempo of perception crosses over.

Re: SkyNet Lives

Postby Luther Blissett » Fri Mar 16, 2012 11:24 am

Kurzweil responds: Don’t underestimate the Singularity

October 20, 2011 by Ray Kurzweil

Last week, Paul Allen and a colleague challenged the prediction that computers will soon exceed human intelligence. Now Ray Kurzweil, the leading proponent of the “Singularity,” offers a rebuttal. — Technology Review, Oct. 10, 2011.

Although Paul Allen paraphrases my 2005 book, The Singularity Is Near, in the title of his essay (cowritten with his colleague Mark Greaves), it appears that he has not actually read the book. His only citation is to an essay I wrote in 2001 (“The Law of Accelerating Returns”) and his article does not acknowledge or respond to arguments I actually make in the book.

When my 1999 book, The Age of Spiritual Machines, was published, and augmented a couple of years later by the 2001 essay, it generated several lines of criticism, such as Moore’s law will come to an end, hardware capability may be expanding exponentially but software is stuck in the mud, the brain is too complicated, there are capabilities in the brain that inherently cannot be replicated in software, and several others. I specifically wrote The Singularity Is Near to respond to those critiques.

I cannot say that Allen would necessarily be convinced by the arguments I make in the book, but at least he could have responded to what I actually wrote. Instead, he offers de novo arguments as if nothing has ever been written to respond to these issues. Allen’s descriptions of my own positions appear to be drawn from my 10-year-old essay. While I continue to stand by that essay, Allen does not summarize my positions correctly even from that essay.

Allen writes that “the Law of Accelerating Returns (LOAR) ... is not a physical law.” I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk. So by definition, we cannot predict where any particular particle will be at any future time. Yet the overall properties of the gas are predictable to a high degree of precision according to the laws of thermodynamics. So it is with the law of accelerating returns. Each technology project and contributor is unpredictable, yet the overall trajectory, as quantified by basic measures of price-performance and capacity, nonetheless follows remarkably predictable paths.
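
Kurzweil's thermodynamics analogy is easy to demonstrate numerically. The simulation below is an illustration of the statistical point only (unit ±1 steps are an assumption of the sketch, not anything from the essay): each individual random walk wanders widely, yet the ensemble average is pinned near zero.

```python
import random

random.seed(42)

# Each "particle" takes unpredictable +/-1 steps, like the random walks
# underlying the laws of thermodynamics in Kurzweil's analogy.
n_particles, n_steps = 5_000, 400
finals = [
    sum(random.choice((-1, 1)) for _ in range(n_steps))
    for _ in range(n_particles)
]

mean_final = sum(finals) / n_particles
spread = (sum(x * x for x in finals) / n_particles) ** 0.5

print(spread)           # close to sqrt(400) = 20: single walks wander widely
print(abs(mean_final))  # the ensemble average stays near zero
```

No single particle is predictable, but the aggregate is; the essay's claim is that aggregate technology trends behave the same way.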

If computer technology were being pursued by only a handful of researchers, it would indeed be unpredictable. But it’s being pursued by a sufficiently dynamic system of competitive projects that a basic measure such as instructions per second per constant dollar follows a very smooth exponential path going back to the 1890 American census. I discuss the theoretical basis for the LOAR extensively in my book, but the strongest case is made by the extensive empirical evidence that I and others present.

Allen writes that “these ‘laws’ work until they don’t.” Here, Allen is confusing paradigms with the ongoing trajectory of a basic area of information technology. If we were examining the trend of creating ever-smaller vacuum tubes, the paradigm for improving computation in the 1950s, it’s true that this specific trend continued until it didn’t. But as the end of this particular paradigm became clear, research pressure grew for the next paradigm. The technology of transistors kept the underlying trend of the exponential growth of price-performance going, and that led to the fifth paradigm (Moore’s law) and the continual compression of features on integrated circuits. There have been regular predictions that Moore’s law will come to an end. The semiconductor industry’s roadmap projects seven-nanometer features by the early 2020s. At that point, key features will be the width of 35 carbon atoms, and it will be difficult to continue shrinking them. However, Intel and other chip makers are already taking the first steps toward the sixth paradigm, which is computing in three dimensions to continue exponential improvement in price performance. Intel projects that three-dimensional chips will be mainstream by the teen years. Already, three-dimensional transistors and three-dimensional memory chips have been introduced.

This sixth paradigm will keep the LOAR going with regard to computer price performance to the point, later in this century, where a thousand dollars of computation will be trillions of times more powerful than the human brain [1]. And it appears that Allen and I are at least in agreement on what level of computation is required to functionally simulate the human brain [2].

Allen then goes on to give the standard argument that software is not progressing in the same exponential manner as hardware. In The Singularity Is Near, I address this issue at length, citing different methods of measuring complexity and capability in software that demonstrate a similar exponential growth. One recent study (“Report to the President and Congress, Designing a Digital Future: Federally Funded Research and Development in Networking and Information Technology” by the President’s Council of Advisors on Science and Technology) states the following:

“Even more remarkable — and even less widely understood — is that in many areas, performance gains due to improvements in algorithms have vastly exceeded even the dramatic performance gains due to increased processor speed. The algorithms that we use today for speech recognition, for natural language translation, for chess playing, for logistics planning, have evolved remarkably in the past decade … Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later—in 2003—this same model could be solved in roughly one minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008. The design and analysis of algorithms, and the study of the inherent computational complexity of problems, are fundamental subfields of computer science.”
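
Grötschel's numbers in the quoted passage are internally consistent, which a few lines of arithmetic confirm: the hardware and algorithm factors multiply to the overall 43-million-fold gain, and 82 years divided by that factor is indeed about a minute.

```python
# Grötschel's decomposition of the 1988 -> 2003 linear-programming speedup:
hardware_gain = 1_000     # from increased processor speed
algorithm_gain = 43_000   # from better algorithms
total_gain = hardware_gain * algorithm_gain
print(total_gain)  # 43000000, the "factor of roughly 43 million"

# The 82-year 1988 solve time, divided by the total gain:
minutes_1988 = 82 * 365.25 * 24 * 60
print(minutes_1988 / total_gain)  # roughly 1, i.e. "about one minute" in 2003
```
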

I cite many other examples like this in the book [3].

Regarding AI, Allen is quick to dismiss IBM’s Watson as narrow, rigid, and brittle. I get the sense that Allen would dismiss any demonstration short of a valid passing of the Turing test. I would point out that Watson is not so narrow. It deals with a vast range of human knowledge and is capable of dealing with subtle forms of language, including puns, similes, and metaphors. It’s not perfect, but neither are humans, and it was good enough to get a higher score than the best two human Jeopardy! players put together.

Allen writes that Watson was put together by the scientists themselves, building each link of narrow knowledge in specific areas. Although some areas of Watson’s knowledge were programmed directly, according to IBM, Watson acquired most of its knowledge on its own by reading natural language documents such as encyclopedias. That represents its key strength. It not only is able to understand the convoluted language in Jeopardy! queries (answers in search of a question), but it acquired its knowledge by reading vast amounts of natural-language documents. IBM is now working with Nuance (a company I originally founded as Kurzweil Computer Products) to have Watson read tens of thousands of medical articles to create a medical diagnostician.

A word on the nature of Watson’s “understanding” is in order here. A lot has been written that Watson works through statistical knowledge rather than “true” understanding. Many readers interpret this to mean that Watson is merely gathering statistics on word sequences. The term “statistical information” in the case of Watson refers to distributed coefficients in self-organizing methods such as Markov models. One could just as easily refer to the distributed neurotransmitter concentrations in the human cortex as “statistical information.” Indeed, we resolve ambiguities in much the same way that Watson does by considering the likelihood of different interpretations of a phrase.
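
A toy example of what "resolving ambiguity by likelihood" means in practice. This is not Watson's actual model, and the counts are invented; it only illustrates the Markov-style idea of scoring alternative readings of a word by how probable each is in context.

```python
# Invented bigram counts pairing a context word with a reading of "bank":
bigram_counts = {
    ("savings", "bank:institution"): 90,
    ("savings", "bank:riverside"): 1,
    ("muddy", "bank:institution"): 2,
    ("muddy", "bank:riverside"): 40,
}

def disambiguate(context_word):
    """Pick the reading of 'bank' with the highest count given the context."""
    readings = {r: n for (c, r), n in bigram_counts.items() if c == context_word}
    return max(readings, key=readings.get)

print(disambiguate("savings"))  # bank:institution
print(disambiguate("muddy"))    # bank:riverside
```

The "statistical information" is just these distributed scores; nothing in the mechanism is a list of word sequences, which is Kurzweil's point about Watson.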

Allen writes: “Every structure [in the brain] has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain, every individual structure and neural circuit has been individually refined by evolution and environmental factors.”

Allen’s statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome. And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome. Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome. Experience and learning do add significantly to the amount of information, but the same can be said of AI systems. I show in The Singularity Is Near that after lossless compression (due to massive redundancy in the genome), the amount of design information in the genome is about 50 million bytes, roughly half of which pertains to the brain [4]. That’s not simple, but it is a level of complexity we can deal with, and represents less complexity than many software systems in the modern world.

How do we get on the order of 100 trillion connections in the brain from only tens of millions of bytes of design information? Obviously, the answer is through redundancy. There are on the order of a billion pattern-recognition mechanisms in the cortex. They are interconnected in intricate ways, but even in the connections there is massive redundancy. The cerebellum also has billions of repeated patterns of neurons. It is true that the massively repeated structures in the brain learn different items of information as we learn and gain experience, but the same thing is true of artificially intelligent systems such as Watson.
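
The redundancy argument reduces to one division, using only the figures the essay itself gives (~100 trillion connections, ~50 MB of compressed genome design data, half of it for the brain):

```python
connections = 100e12          # ~100 trillion neural connections
genome_design_bytes = 50e6    # ~50 MB of compressed design information
brain_share = 0.5             # roughly half pertains to the brain

redundancy_factor = connections / (genome_design_bytes * brain_share)
print(f"{redundancy_factor:.1e}")  # each byte of design data must account
                                   # for millions of connections
```

So every byte of design information has to be reused on the order of a few million times, which is what Kurzweil means by "massive redundancy."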

Dharmendra S. Modha, manager of cognitive computing for IBM Research, writes: “…neuroanatomists have not found a hopelessly tangled, arbitrarily connected network, completely idiosyncratic to the brain of each individual, but instead a great deal of repeating structure within an individual brain and a great deal of homology across species … The astonishing natural reconfigurability gives hope that the core algorithms of neurocomputation are independent of the specific sensory or motor modalities and that much of the observed variation in cortical structure across areas represents a refinement of a canonical circuit; it is indeed this canonical circuit we wish to reverse engineer.”

Allen articulates what I describe in my book as the “scientist’s pessimism.” Scientists working on the next generation are invariably struggling with that next set of challenges, so if someone describes what the technology will look like in 10 generations, their eyes glaze over. One of the pioneers of integrated circuits was describing to me recently the struggles to go from 10-micron (10,000-nanometer) feature sizes to five-micron (5,000-nanometer) features over 30 years ago. They were cautiously confident of this goal, but when people predicted that someday we would actually have circuitry with feature sizes under one micron (1,000 nanometers), most of the scientists struggling to get to five microns thought that was too wild to contemplate. Objections were made on the fragility of circuitry at that level of precision, thermal effects, and so on. Well, today, Intel is starting to use chips with 22-nanometer gate lengths.

We saw the same pessimism with the genome project. Halfway through the 15-year project, only 1 percent of the genome had been collected, and critics were proposing basic limits on how quickly the genome could be sequenced without destroying the delicate genetic structures. But the exponential growth in both capacity and price performance continued (both roughly doubling every year), and the project was finished seven years later. The project to reverse-engineer the human brain is making similar progress. It is only recently, for example, that we have reached a threshold with noninvasive scanning techniques that we can see individual interneuronal connections forming and firing in real time.
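
The genome-project anecdote hides a small calculation worth making explicit: with yearly doubling, being at 1% at the halfway mark of a 15-year project is on schedule, because seven more doublings carry 1% past 100%.

```python
# Yearly doubling from 1% complete: how many more years to finish?
fraction = 0.01
years = 0
while fraction < 1.0:
    fraction *= 2
    years += 1
print(years)  # 7: 1% -> 2% -> 4% -> ... -> 128%, finishing the 15-year project
```
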

Allen’s “complexity brake” confuses the forest with the trees. If you want to understand, model, simulate, and re-create a pancreas, you don’t need to re-create or simulate every organelle in every pancreatic islet cell. You would want, instead, to fully understand one islet cell, then abstract its basic functionality, and then extend that to a large group of such cells. This algorithm is well understood with regard to islet cells. Artificial pancreases that use this functional model are now being tested. Although there is certainly far more intricacy and variation in the brain than in the massively repeated islet cells of the pancreas, there is nonetheless massive repetition of functions.

Allen mischaracterizes my proposal to learn about the brain from scanning the brain to understand its fine structure. It is not my proposal to simulate an entire brain “bottom up” without understanding the information processing functions. We do need to understand in detail how individual types of neurons work, and then gather information about how functional modules are connected. The functional methods that are derived from this type of analysis can then guide the development of intelligent systems. Basically, we are looking for biologically inspired methods that can accelerate work in AI, much of which has progressed without significant insight as to how the brain performs similar functions. From my own work in speech recognition, I know that our work was greatly accelerated when we gained insights as to how the brain prepares and transforms auditory information.

The way that these massively redundant structures in the brain differentiate is through learning and experience. The current state of the art in AI does, however, enable systems to also learn from their own experience. The Google self-driving cars (which have driven over 140,000 miles through California cities and towns) learn from their own driving experience as well as from Google cars driven by human drivers. As I mentioned, Watson learned most of its knowledge by reading on its own.

It is true that Watson is not quite at human levels in its ability to understand human language (if it were, we would be at the Turing test level now), yet it was able to defeat the best humans. This is because of the inherent speed and reliability of memory that computers have. So when a computer does reach human levels, which I believe will happen by the end of the 2020s, it will be able to go out on the Web and read billions of pages as well as have experiences in online virtual worlds. Combining human-level pattern recognition with the inherent speed and accuracy of computers will be very powerful. But this is not an alien invasion of intelligent machines; we create these tools to make ourselves smarter. I think Allen will agree with me that this is what is unique about the human species: we build these tools to extend our own reach.

[1] Chapter 2, The Singularity Is Near by Ray Kurzweil, Viking, 2005

[2] See Endnote 2 in “The Singularity Isn’t Near” by Paul G. Allen and Mark Greaves

[3] Chapter 9, The Singularity Is Near

[4] Chapter 4, The Singularity Is Near


I have to respect Kurzweil's optimism in the last passage, but I think the few things that he does misunderstand - power, fascism, standard greed - allow him to have a blinkered attitude about this technology's use.
The Rich and the Corporate remain in their hundred-year fever visions of Bolsheviks taking their stuff - JackRiddler

Re: SkyNet Lives

Postby Wombaticus Rex » Fri Mar 16, 2012 11:55 am

His prose style is indistinguishable from ad copy but the man is undeniably a genius.

Re: SkyNet Lives

Postby jfshade » Fri Mar 16, 2012 1:36 pm

W.R. wrote:
His prose style is indistinguishable from ad copy

Fer sure. The most provocative and well-expressed bit is a quote from someone else:
Dharmendra S. Modha, manager of cognitive computing for IBM Research, writes: “…neuroanatomists have not found a hopelessly tangled, arbitrarily connected network, completely idiosyncratic to the brain of each individual, but instead a great deal of repeating structure within an individual brain and a great deal of homology across species … The astonishing natural reconfigurability gives hope that the core algorithms of neurocomputation are independent of the specific sensory or motor modalities and that much of the observed variation in cortical structure across areas represents a refinement of a canonical circuit; it is indeed this canonical circuit we wish to reverse engineer.”

Building a machine that by brute computing power can beat humans at chess is one thing, but building a machine that can play chess like a human (as just one fairly trivial such exercise) would be quite another task -- one likely requiring Modha's focus on reverse engineering of the canonical circuits of consciousness.
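To make that contrast concrete: brute-force game play is just exhaustive search over future positions, with no understanding involved. Here is a minimal sketch in Python, using a toy take-away game instead of chess (players alternately remove 1 or 2 stones; whoever takes the last stone wins). The function names are mine for illustration, not from any source in this thread:

```python
def minimax(n, maximizing=True):
    # Exhaustively search the toy game: players alternately remove
    # 1 or 2 stones from a pile of n; whoever takes the last stone wins.
    # Scores are from the maximizing player's perspective: +1 win, -1 loss.
    if n == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    scores = [minimax(n - k, not maximizing) for k in (1, 2) if k <= n]
    return max(scores) if maximizing else min(scores)

def best_move(n):
    # Pick the move whose resulting position scores best for the side to move.
    return max((k for k in (1, 2) if k <= n),
               key=lambda k: minimax(n - k, False))
```

Scaled up with pruning and a fast board-evaluation function, this kind of search is essentially how machines like Deep Blue beat humans at chess, and nothing in it resembles human pattern recognition, which is the point of the contrast above.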
Last edited by jfshade on Fri Mar 16, 2012 6:02 pm, edited 1 time in total.
jfshade
Posts: 98
Joined: Tue Jul 12, 2005 1:20 pm
Location: Chicago

Re: SkyNet Lives

Postby Twyla LaSarc » Fri Mar 16, 2012 2:30 pm

Maybe we should be nice to the machines and not consider them slaves and then they might not kill us when they become sentient enough to do so.

I plan on offering my computer a cup of tea in a bit, just to be polite. :)

If we program ape hierarchy into them, that is what they'll do. However, computers are currently facilitating human awareness and cooperation. They might learn new things yet; in fact, their emerging intelligence will learn as we do.
“The Radium Water Worked Fine until His Jaw Came Off”
Twyla LaSarc
Posts: 1040
Joined: Mon Jun 07, 2010 2:50 pm
Location: On the 8th hole

Re: SkyNet Lives

Postby Wombaticus Rex » Fri Mar 16, 2012 2:31 pm

jfshade, you seem like an interesting cat, wish I was still in Chicago. Anyways, this thread set me off on an AI tangent and I got some good recommendations from a weirdo pal in Seattle who is deep into the subject in a hands-on way. His picks:

The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI

The Cybernetic Brain: Sketches of Another Future

Also recommended was Thinking by Machine which is apparently out of print. These titles are all quite expensive. Fortunately, I am in Boston so I'll take advantage of the MIT library.
Wombaticus Rex
Posts: 10305
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan

Re: SkyNet Lives

Postby Elvis » Fri Mar 16, 2012 6:39 pm





"Frankly, I don't think it's a good idea but the sums proposed are enormous."
Elvis
Posts: 6039
Joined: Fri Apr 11, 2008 7:24 pm

Re: SkyNet Lives

Postby jfshade » Fri Mar 16, 2012 6:57 pm

Thanks, W.R.; those both look good.
Way back when I first started experiencing some disaffection with strong AI, as it was called, John Searle's "The Rediscovery of the Mind" influenced me a lot. Not that I completely accepted (or understood, for that matter) his biological naturalist view of consciousness, or believed that truly conscious, thinking machines were a priori impossible. The notion that consciousness is a fundamental, immanent aspect of the cosmos seems obvious to me, but to posit that it will inevitably arise at some threshold level of complexity strikes me as, well, non-rigorous.
jfshade
Posts: 98
Joined: Tue Jul 12, 2005 1:20 pm
Location: Chicago

Re: SkyNet Lives

Postby FourthBase » Fri Mar 22, 2013 5:38 am

http://www.businessinsider.com/douglas-pryers-drone-terminator-machine-2013-3

Along the way Pryer writes that he wishes he believed that America would one day look back at "transnational drone strikes as a morally disastrous policy that our nation briefly toyed with at the turn of the century, before gaining wisdom from this folly."

He believes himself to be optimistic in this regard, though. A dreamer. Instead, the combination of America's blind sense of moral superiority and its technological supremacy makes the development of Spartan robots seem like a foregone conclusion.

Then he drops this dystopic doozy:

[The U.S. seems bound to develop] robots so advanced that they make today’s Predators and Reapers look positively impotent and antique. These killer robots, though, will share one thing in common with their primitive progenitors: with remorseless purpose, they will stalk and kill any human deemed “a legitimate target” by their controllers and programmers.


http://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html

http://en.wikipedia.org/wiki/Cybernetic_revolt

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

http://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream

http://digital.library.ucsb.edu/items/show/5314
http://digital.library.ucsb.edu/items/show/5315

The Center's John Wilkinson posits that, in a society where quantity matters more than quality and automation is rapidly making people subservient to their computers, we may soon need to establish sanctuaries for humans as we do for endangered animals. Followed by discussion. Mar. 27, 1964.


On a note related to that...

http://digital.library.ucsb.edu/items/show/5320
http://digital.library.ucsb.edu/items/show/5321

A discussion of Jacques Ellul's study of the dangers that lurk in an expanding technological culture, led by the Center's John Wilkinson, who translated Ellul's 'The Technological Society' from French to English. Professor Ellul held that the inhuman demands of technology are outracing the imagination of man as the pressure toward conformity and standardization steadily increases. Apr. 23, 1964.


(Can't see on this tablet if there are working audio links in those last links, lemme know.)
“Joy is a current of energy in your body, like chlorophyll or sunlight,
that fills you up and makes you naturally want to do your best.” - Bill Russell
FourthBase
Posts: 6669
Joined: Thu May 05, 2005 4:41 pm
