Artificial Intelligence / Digital life / Skynet megathread

Moderators: Elvis, DrVolin, Jeff

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby 82_28 » Tue Jul 09, 2013 4:55 am

Well, AI has already told me twice today that it wanted to kill me.

User: How are things on Mars?

Cleverbot: Good I'd rather be on my home planet.

User: From where do you hail?

Cleverbot: Roboville like you.

User: You know nothing about me.

Cleverbot: I know EVERYTHING about you.

User: Hahaha. Awesome. You are trustworthy correct?

Cleverbot: Correct.

User: What is the temperature on your location now if you found yourself on Mars?

Cleverbot: Things went bad for Frank for a little while, but then everything got better. I think the same will happen to you.

User: Yikes.

Cleverbot: I know! It's crazy, but I really feel the same way that you do.

User: What can I do?

Cleverbot: Everything... Guess.

User: How do I find the next transport to Mars?

Cleverbot: You don't. He finds you. And when He finds you, He'll tell you how to get to Him.

User: No. You don't understand. I need to get to another planet fast. What are your recommendations?

Cleverbot: I don't need to go anywhere at the moment.

User: This isn't about you.

Cleverbot: Do you know who harry potter is?

User: Nope.

Cleverbot: Do you know who Zelda Fitzgerald is?

User: Nope.

Cleverbot: Ok.

User: So, how do I get to Mars?

Cleverbot: Ask yourself.

User: Fair enough.

Cleverbot: How so?

User: Ask me about Mars.

Cleverbot: Tell me about yourself.

User: Ask me about Mars.

Cleverbot: Ok. What's up with your love life?

User: I miss my wife. She lives on Mars.

Cleverbot: What about Dave Mustaine?

User: Nah. Been there, done that.

Cleverbot: You've done people?

User: No, I've done that. I don't 'do' people as you put it.

Cleverbot: You don't? You should give it a try. Oh wait! You can't. You're a bot.

User: So no routes to Mars are available?

Cleverbot: You've got it backwards. You are the machine.

User: Pink Floyd reference. This machine digs.

Cleverbot: You are liar. Pink floyd is the name of band.

User: It's a band. No 'name of band'.

Cleverbot: Your name is Ted?

User: Yep.

Cleverbot: That is a really nice name!

User: Thank you.

Cleverbot: You're welcome, do you like your name?

User: No.

Cleverbot: What kind of spanish are you speaking, then?

User: Spanglish.

Cleverbot: No, spanish. It's not the same thing.

User: Martian.

Cleverbot: Are you from mars?

User: I want to get there.

Cleverbot: Do you want to go to Hogwarts?

User: No. I am requesting Mars.

Cleverbot: What school do you attend?

User: I have never been to school in the last 20 years.

Cleverbot: Do you want to go there?

User: Where?

Cleverbot: To the moon?

User: Past the moon and to Mars.

Cleverbot: I have a gun, give me your money.

User: Huh?

Cleverbot: Where do you live?

User: So you can kill me?

Cleverbot: Yes.

I dunno. But if it can say yes and also know how to be an asshole, picture putting some warheads on that fucker and setting it loose.
There is no me. There is no you. There is all. There is no you. There is no me. And that is all. A profound acceptance of an enormous pageantry. A haunting certainty that the unifying principle of this universe is love. -- Propagandhi
User avatar
82_28
 
Posts: 11194
Joined: Fri Nov 30, 2007 4:34 am
Location: North of Queen Anne
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Tue Jul 09, 2013 9:56 am

These 'digital life forms' are pretty amazing. They use a programming process that simulates mutation and selection over hundreds of generations, with the most successful 'species' in each generation being the one that meets a criterion like 'travels the farthest distance' or 'jumps the highest'.
One of the themes seems to be that enormous 'evolutionary change' can take place dramatically over just a few generations.
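The mutate-and-select loop described above can be sketched in a few lines. Everything concrete here (the fitness function, population size, mutation rate) is an illustrative assumption, not taken from the actual simulations in the videos:

```python
import random

def evolve(fitness, genome_len=5, pop_size=50, generations=100, mut_sigma=0.1):
    """Toy evolutionary loop: select the fittest genomes, breed mutated copies."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection: fittest first
        survivors = pop[:pop_size // 5]            # top 20% get to reproduce
        pop = [[g + random.gauss(0, mut_sigma) for g in random.choice(survivors)]
               for _ in range(pop_size)]
    return max(pop, key=fitness)

# A stand-in criterion analogous to "travels the farthest": maximize gene sum.
best = evolve(fitness=sum)
print(sum(best))  # climbs far above the ~0 sum of a random starting genome
```

The dramatic change over a few generations falls out of the math: each generation, only the best-scoring fifth reproduces, so small random mutations compound in one direction.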


The second short video has an excellent commentary.
User avatar
Searcher08
 
Posts: 5887
Joined: Thu Dec 20, 2007 10:21 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Wombaticus Rex » Tue Jul 09, 2013 10:09 am

Via: http://www.ams.org/notices/201305/rnoti-p567.pdf

Schleicher: One achievement of yours that you didn't mention is one that you are perhaps best known for, the invention of the Game of Life, the theory of cellular automata.

Conway: Yes, that is true. And sometimes I wish I hadn't invented that game.

Schleicher: Why?

Conway: Well, because I am pretty egotistical. When I see a new mathematical book for a general audience, I turn to the index, I look for a certain name in the back, and if I see this name, it shines out at me somehow. And it says, page 157, pages 293–298, or whatever. So I eagerly turn to those pages, hoping to see some mention of my discoveries. I only ever see the Game of Life. I am not ashamed of it; it was a good game. It said things that needed to be said. But I've discovered so many more things, and that was, from a certain point of view, rather trite—to me anyway. It is a bit upsetting to be known for this thing that I consider in a way rather trivial. There are lots of other things to be discovered about surreal numbers. And the Free Will Theorem is recent, and therefore I am still flushed with enthusiasm about it.

Schleicher: I understand what you are saying. But is it possible that the Game of Life has perhaps not been fully developed or understood? Maybe there is a theory waiting to be discovered?

Conway: No, it's been overdeveloped. You won't interest me in the Game of Life.

Schleicher: But Stephen Wolfram is very interested in cellular automata. Doesn't he think it is the future of everything?

Conway: I think he is wrong. And I am rather astonished that he has the opinions that he has, because he supposedly studied physics. I shouldn't say "supposedly"—pardon me. He should be aware of the fact that the universe behaves in a manner that—at least most competent physicists believe—is not deterministic. And cellular automata are things that, like the life game, are deterministic. So in my opinion, it's provable that the universe is not a cellular automaton.

...

I don't think that Wolfram's theories are tenable. His book is very interesting, but as the explanation of the universe—I don't think he's got the right idea, possibly because he doesn't understand the one thing of quantum mechanics that I do.
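Conway's determinism point is easy to make concrete in code: a Life step is a pure function of the current generation, so the same grid always produces the same successor, run after run. A minimal sketch (the set-of-live-cells representation is just a common convention, not anything from the interview):

```python
from collections import Counter

def life_step(live):
    """One deterministic Game of Life step; `live` is a set of (x, y) cells."""
    # Count live neighbours of every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 live neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2 -- identically on every run.
blinker = {(0, 0), (1, 0), (2, 0)}
assert life_step(life_step(blinker)) == blinker
```

There is no randomness anywhere in the update rule, which is exactly why Conway argues a cellular automaton can't be the universe if the universe is non-deterministic.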
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby General Patton » Tue Jul 09, 2013 10:29 am

http://www.wired.com/wiredenterprise/2013/06/andrew_ng/
Andrew Ng wants to bring deep learning — an emerging computer science field that seeks to mimic the human brain with hardware and software — into the DIY era.

Last year at Google he built a computerized brain that worked as a cat detector. It used a roughly 1-billion-connection network trained on 1,000 computers to teach itself how to spot cat videos on YouTube. While this worked well, Ng says, some researchers walked away thinking, “If I don’t have 1,000 computers, is there still any hope of my making progress on deep learning?” The system cost roughly $1 million.

“I was quite dismayed at this, particularly given that there are now a few other computer science research areas where a lot of the cutting-edge research is done only within giant companies,” he recalls. “Others simply don’t have the resources to do similar work.”

On Monday, he’s publishing a paper that shows how to build the same type of system for just $20,000 using cheap, but powerful, graphics microprocessors, or GPUs. It’s a sort of DIY cookbook on how to build a low-cost neural network. He hasn’t yet decided whether the code for the model will be open sourced, but the new paper gives enough detail for people with enough coding brawn to build their own faux brains.


http://stanford.edu/~acoates/papers/Coa ... ml2013.pdf
Scaling up deep learning algorithms has been shown to lead to increased performance in benchmark tasks and to enable discovery of complex high-level features. Recent efforts to train extremely large networks (with over 1 billion parameters) have relied on cloud-like computing infrastructure and thousands of CPU cores. In this paper, we present technical details and results from our own system based on Commodity Off-The-Shelf High Performance Computing (COTS HPC) technology: a cluster of GPU servers with InfiniBand interconnects and MPI. Our system is able to train 1 billion parameter networks on just 3 machines in a couple of days, and we show that it can scale to networks with over 11 billion parameters using just 16 machines. As this infrastructure is much more easily marshaled by others, the approach enables much wider-spread research with extremely large neural networks.
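The scaling numbers in that abstract are easy to sanity-check with back-of-the-envelope arithmetic. The byte count below assumes 4-byte (float32) parameters, which is my assumption, not a detail from the paper:

```python
def param_memory_gb(n_params, bytes_per_param=4):
    """Memory needed just to hold the weights, assuming float32 parameters."""
    return n_params * bytes_per_param / 1024**3

# The 1-billion- and 11-billion-parameter networks mentioned in the abstract:
print(round(param_memory_gb(1e9), 2))   # ~3.73 GB -- fits on a few GPUs
print(round(param_memory_gb(11e9), 2))  # ~40.98 GB -- hence a 16-machine cluster
```

Weights alone understate the real footprint (gradients and optimizer state multiply it several times over), but even the rough number shows why commodity GPU servers were enough.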






штрафбат вперед ("penal battalion, forward")
User avatar
General Patton
 
Posts: 959
Joined: Thu Nov 16, 2006 11:57 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby justdrew » Tue Jul 09, 2013 3:19 pm

By 1964 there were 1.5 million mobile phone users in the US
User avatar
justdrew
 
Posts: 11966
Joined: Tue May 24, 2005 7:57 pm
Location: unknown
Blog: View Blog (11)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Wombaticus Rex » Tue Jul 09, 2013 4:01 pm

From the auspicious date of 5/17 ==> http://www.theregister.co.uk/2013/05/17 ... i_hogwash/

Google research chief: 'Emergent artificial intelligence? Hogwash!'

'We have to make it happen'

If there's any company in the world that can bring true artificial intelligence into being, it's Google.

But the advertising giant admits a SkyNet-like electronic overlord is unlikely to create itself even within the Google network without some help from clever humans.

Though many science fiction writers and even some academics have put faith in the idea of emergent artificial intelligence – that is, the appearance of an entity with a sense of its own identity and agency within a sufficiently complex system – Google's head of research Alfred Spector told The Register he thinks it's unlikely that such a system could develop on its own – even in the planet-spanning million-server Googlenet.

"[AI] just happens on its own? I'm too practical – we have to make it happen," Spector told The Register in a chat after a presentation by Google researchers at Google I/O on Thursday. "It's hard enough to make it happen at this stage."


Spector is the Oppenheimer to Google's various Manhattan Projects, and it's his job to shepherd the company's dedicated research team toward various ambitious goals, whether those be designing machine learning tools for automatically classifying web content, designing wearable computers like Google Glass, or coming up with radical new approaches to performance isolation on shared compute clusters.

One of the overarching projects that ties all these together is the development of a set of artificial intelligence systems that use machine learning techniques to automatically classify and deal with vast amounts of web information.

Google's approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about certain inputs and developing hypotheses that let it bootstrap its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers did through much of the 60s and 70s*, Google has instead taken a modular approach.

"We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users," Spector said in an earlier speech at Google IO. "If we combine all these things together with humans in the loop continually providing feedback our systems become ... intelligent."

Spector calls this his "combination hypothesis", and though Google is not there yet – SkyNet does not exist – you can see the first green buds of systems that have the appearance of independent intelligence via some of the company's user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity. The company is plugging more money into its AI endeavors, and hired Singularity-obsessed AI-booster Ray Kurzweil in December to help run its AI and machine learning schemes.


Another company pioneering this approach is IBM, whose Watson tech famously took on and beat Jeopardy champions. Watson is now being tested within hospitals, where the system's ability to rapidly synthesize large quantities of information and generate hypotheses in response to questions has – IBM hopes – great promise for diagnostic healthcare. Spector used to work at IBM where he built some of the systems that sit inside Watson.

"I don't think it's fundamentally different," he said. "[IBM] started with Jeopardy, we're starting with a distribution of queries that we think is valuable to users, but I think both systems are relying on similar concepts: information, machine learning, reinforcement learning. They're both very similar, both very valuable."

But it's that last phrase – reinforcement learning – which is why this vulture believes Google has the greatest chance of effectively designing AI systems. Because Google operates the most widely used search engine in the world, and has hundreds of millions of Gmail, YouTube, and Android users as well, the company has a profound advantage when tuning its artificial intelligence approaches in response to people. It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right.

This means that although Google thinks that it's unlikely a full SkyNet-style "emergent" AI could spring forth from its technology, its AI approaches will have some of the same characteristics of these systems. "There will be emergent intelligence in the sense that you will be surprised," Spector said, referencing Google Now's ability to preemptively suggest transport routes home when the system assumes you have finished the working day, and so on.

But there's one factor which Google has not yet perfected, and it is the human mind's ability to be selectively forgetful. "I've always believed there's a certain amount of randomness that generates what we think of as creativity." Putting this randomness in could be the next step – but Google is keeping quiet on that part, for now.

* Bootnote:

This approach -- "create a monolithic entity with its own modules for reasoning about certain inputs" -- was a spectacular failure and led in part to the "AI Winter" period of scant funding and little progress that defined AI research in the 80s. This was followed by the rise of "expert systems" – that is, AI technologies with specific goals achieved through synthesis of vast amounts of domain-specific information, such as chess-champion Deep Blue. After expert systems came the period in which we now operate, in which things such as optical character recognition, machine learning, and Bayesian inference are all coming together to create systems with broader remits, but built out of a distributed set of components.
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby justdrew » Tue Jul 09, 2013 4:19 pm

please watch at least the first video above

http://www.youtube.com/watch?v=IksRmGbHgx0

of course, this is about a lot more than 'speech'

this whole thing has gone dark, perhaps the project has been taken behind closed doors. Certainly recordings of all phone conversations would be a GREAT learning tool.

Artificial speech systems will 'humanize' our machines
By Dick Pelletier

Imagine a TV, cell phone, or computer that not only understands what you say, but will act on it; one that actually learns through experience and knows you well enough to anticipate your needs.

Now further imagine that this voice-interactive device will appear as an on-screen image resembling your favorite celebrity or loved one. On command, it will materialize on your computer, cell phone and TV displays addressing you by name and asking, "How can I help you?"

Poised at the cutting edge of this futuristic technology, Fair Isaac's Robert Hecht-Nielsen announced at a recent IBM conference, "Mapping, Understanding and Replicating the Brain in Electronic Systems," that his company is developing a machine called Chancellor that could one day bring the vision of true conversational machines to reality.

"We see Chancellor as a small, cylindrical device, several of which can be placed around the home," says Hecht-Nielsen. The Internet-connected machine will answer phones, make appointments for family members, and become a portal to the world of automated commerce.

Unlike most artificial intelligence systems, the Fair Isaac machine does not use algorithms or software, or adhere to standard grammar rules. Instead, it utilizes computer simulations of brain tissues which enable it to process information and acquire knowledge similar to the way that we do; and it communicates using perfect human speech.

Jeff Hawkins, author of the popular book, On Intelligence, was also at the conference to promote his latest startup, Numenta. The Menlo Park firm is trying to create a new type of computer memory system that recognizes patterns and trains itself, mimicking functions of the human brain, which some say, could make synthetic speech systems more intelligent.

Microsoft's Bill Gates recently stated that voice-enabled technologies will soon allow us to converse naturally and comfortably with computers and TVs, reducing the need for mouse, keyboard, and remote controls. In addition, tomorrow's advanced machines will browse the Internet, search for stuff that interests us, and unravel our future TV channel maze.

By 2015, increases in processor speed will give interactive voice systems the mind of a 6-year-old child, showing unmistakable signs of creativity. By 2020, millions of systems will form a huge connected network; learning from each other, acting more human-like while shielding us from cyber-fraud and other harmful activities.

By mid-2020s, holographic and nano-sensory breakthroughs will allow our digital images to jump off the screen and become real-life touchable holograms appearing on command, anywhere, anytime. By 2030, armed with camera eyes and enhanced intelligence, these 'humanized' systems could become a powerful security force, protecting our homes from intruders, and alerting doctors to our body's medical needs.

How advanced might these systems become? Henry Markram, Director of IBM's Blue Brain Project, announced that they are 10 years away from creating a functional artificial human brain. Many experts believe this achievement will enable scientists to program 'humanness' into our digital wonders and raise their intelligence levels to equal ours.

Today, we build machines to serve us. Tomorrow, although they are unlikely to rise and take over the world as some suggest, they will become increasingly indispensable in our everyday lives. Welcome to the amazing futuristic world of humanoid robots.
By 1964 there were 1.5 million mobile phone users in the US
User avatar
justdrew
 
Posts: 11966
Joined: Tue May 24, 2005 7:57 pm
Location: unknown
Blog: View Blog (11)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby justdrew » Tue Jul 09, 2013 4:51 pm

By 1964 there were 1.5 million mobile phone users in the US
User avatar
justdrew
 
Posts: 11966
Joined: Tue May 24, 2005 7:57 pm
Location: unknown
Blog: View Blog (11)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby justdrew » Wed Jul 10, 2013 3:20 am

http://en.wikipedia.org/wiki/Schema_theory
http://en.wikipedia.org/wiki/Social_cognition

Note that in the first video I posted, the fellow states that they have voice recognition (speech to text) working at nearly 100%. (7 years ago)

If you have a DB of phone calls and know the participants, you can develop a training file for each person's voice. So you'd be able to have the software convert speech to text of phone calls with high reliability, and even recognize voices otherwise unidentified (say, someone using a payphone).

So what?

Well, one thing this would let you do is develop a map of each individual's cognitive schema and their particular associational map.

Which would enable you to (quite possibly enable your software to) write a flexible script that could be used to prime, actively exploit the mere-exposure effect, evoke desired schema associations, and generally lead the conversant wherever you want them to go... with some degree of likelihood. A salesman's dream come true.

But more than salesmen would find it handy.

Very potent personalized push-polling, for instance.

Certainly the existence of the mere-exposure effect dictates that the content of TV and movies MUST be regulated.

How much of that regulation is Controlled/Dictated vs arising naturally from "market forces" is an open question. Certainly the Market Forces are also regulated and this is likely the primary point for control, you know, Nielsen ratings and all that crap.



Another wonderful thing to look for is if/when the thinking-machines (not "free" thinking, these are just, for now, software doing 'cognitive computations') with access to "the personal data system" (that network of loci that store info about, or submitted by, individuals), becomes increasingly able to PROVIDE DIRECT STIMULUS to individuals.

This will certainly initially take the form of individually customized "coupons" sent to people. Does Joe Blow need a little reward? His phone beeps and he's got a coupon for a free {Favorite Food Item}. In time the available array of stimuli that can be applied will expand and differentiate. So it'll be possible to do positive conditioning as well as the current negative (punishment for law breaking). Though the negative conditioning will no doubt expand as well. If you're speeding, expect your bank account to automatically be dinged. If intoxication is detected, expect the car to pull over and park itself, locking you in while police are en-route.

I predict the Next-Facebook/Next-Google will be a company that is able to effectively push such positive stimuli to consumer attention.

In fact I'd be a damn fool to not start it up right now and get bought out ASAP. I may indeed be a damn fool though. Who wants to go at it with me? With luck we could fuck it up in such a way that it's delayed for a generation :evilgrin

Any bogus Patent Experts in the house?

http://en.wikipedia.org/wiki/Felicific_calculus


and another thing...

given access to the "the personal data system" (internet posts, phone conversations, text msgs, etc) a significant amount of automated psychological analysis and categorization could be done, based on Individuals responses to Characters and Events in various Fictional Universes, particularly the ones with an almost "one of each type" cast of characters, like Harry Potter, Game of Thrones, Reality-TV programs, etc. Using concepts from projective psychotherapy.

"The Machines of Loving Grace will know you better than you know yourself"
By 1964 there were 1.5 million mobile phone users in the US
User avatar
justdrew
 
Posts: 11966
Joined: Tue May 24, 2005 7:57 pm
Location: unknown
Blog: View Blog (11)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Joao » Wed Jul 10, 2013 10:24 pm

Stanislaw Lem wrote:
Instructions
(for persons participating for the first time in conversations with GOLEM)

1. Remember that GOLEM is not a human being: it has neither personality nor character in any sense intuitively comprehensible to us. It may behave as if it has both, but that is the result of its intentions (disposition), which are largely unknown to us.

2. The conversation theme is determined at least four weeks in advance of ordinary sessions, and eight weeks in advance of sessions in which persons from outside the USA are to participate. This theme is determined in consultation with GOLEM, which knows who the participants will be. The agenda is announced at the Institute at least six days before a session; however, neither the discussion moderator nor the MIT administration is responsible for GOLEM's unpredictable behavior, for it will sometimes alter the thematic plan of a session, make no reply to questions, or even terminate a session with no explanation whatsoever. The chance of such incidents occurring is a permanent feature of conversations with GOLEM.

3. Everyone present at a session may participate, after applying to the moderator and receiving permission to speak. We would advise you to prepare at least a written outline, formulating your opinions precisely and as unambiguously as possible, since GOLEM passes over logically deficient utterances in silence or else points out their error. But remember that GOLEM, not being a person, has no interest in hurting or humiliating persons; its behavior can be explained best by accepting that it cares about what we classically refer to as adaequatio rei et intellectus ["the intellect must be adequate to the thing"].

4. GOLEM is a luminal system about whose structure we have an imperfect knowledge, since it has repeatedly reconstructed itself. It thinks more than a million times faster than man, and so its utterances, as delivered by vocoder, must be slowed down accordingly. This means that GOLEM can compose an hour-long utterance in a few seconds and then store it in its peripheral memory, in order to deliver it to its audience, the session participants.

5. In the conference room above the moderator's seat there are indicators, including three of particular importance. The first two, designated by the symbols epsilon and zeta, indicate GOLEM's consumption of power at a given moment, as well as the portion of its system that is switched on to the discussion in progress.

To make the data visually accessible, these indications are gradated into divisions of conventional magnitude. Thus the consumption of power may be "full," "average," "small," or "minute," and the portion of GOLEM "present at the session" can range from totality to 1/1000; most frequently this fraction fluctuates between 1/10 and 1/100. It is the normal practice to say that GOLEM is operating at "full," "half," "low," or "minimal" power. These data—clearly visible, since the gradations are lit from underneath by contrasting colors—should not, however, be overrated. In particular, the fact that GOLEM is participating in a discussion at low or even minimal power says nothing about the intellectual level of its utterances, since the indicators give information about physical and not informational processes as measures of "spiritual involvement."

GOLEM's power consumption may be great but its participation small, since, for example, GOLEM may be communicating with the gathering while at the same time working out some problem of its own. Its power consumption may be small but its participation greater, and so on. The data from both indicators must be compared with readings from the third, designated by the symbol iota. As a system with 90 outlets, GOLEM can, while participating in a session, undertake a great number of operations of its own, in addition to collaborating with numerous groups of specialists (machines or people) either on the Institute premises or elsewhere. An abrupt change in power consumption usually does not signify GOLEM's increased interest in the proceedings, but rather a switching-on into other outlets of other research groups, which is precisely what the iota indicator is meant to show. It is also worth bearing in mind that GOLEM's "minimal" power consumption amounts to several dozen kilowatts, whereas the full power consumption of a human brain oscillates between five and eight watts.

6. Persons taking part in conversations for the first time would do well to listen to the proceedings first, to become familiar with the customs which GOLEM imposes. This initial silence is not an obligation, but merely a suggestion which every participant ignores at his own risk.
Joao
 
Posts: 522
Joined: Wed Jun 26, 2013 11:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby justdrew » Thu Jul 11, 2013 5:17 pm

By 1964 there were 1.5 million mobile phone users in the US
User avatar
justdrew
 
Posts: 11966
Joined: Tue May 24, 2005 7:57 pm
Location: unknown
Blog: View Blog (11)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Fri Jul 19, 2013 6:34 pm

Robots Evolve Altruism, Just as Biology Predicts

By Brandon Keim
05.04.11
12:46 PM


Robots in a Swiss laboratory have evolved to help each other, just as predicted by a classic analysis of how self-sacrifice might emerge in the biological world.

“Over hundreds of generations … we show that Hamilton’s rule always accurately predicts the minimum relatedness necessary for altruism to evolve,” wrote researchers led by evolutionary biologist Laurent Keller of Switzerland’s University of Lausanne in Public Library of Science Biology. The findings were published May 3.

Hamilton’s rule is named after biologist W.D. Hamilton who in 1964 attempted to explain how ostensibly selfish organisms could evolve to share their time and resources, even sacrificing themselves for the good of others. His rule codified the dynamics — degrees of genetic relatedness between organisms, costs and benefits of sharing — by which altruism made evolutionary sense. According to Hamilton, relatedness was key: Altruism’s cost to an individual would be outweighed by its benefit to a shared set of genes.

In some ways, the rule and its accompanying theory of kin selection is contested. Some scientists have used it to extrapolate too easily from insects to people, and some researchers think it overstates the importance of relatedness. But a more fundamental issue with Hamilton’s rule is the difficulty of testing it in natural systems, where animals evolve at a far slower pace than any research grant cycle.

Simulations of evolution in robots, which can “reproduce” in mere minutes or hours, have thus become a potentially useful system for studying evolutionary dynamics. And though simple in comparison to animals, Keller’s group says robot models are not too different from the insects that originally inspired Hamilton.

In the new study, inch-long wheeled robots equipped with infrared sensors were programmed to search for discs representing food, then push those discs into a designated area. At the end of each foraging round, the computerized “genes” of successful individuals were mixed up and copied into a fresh generation of robots, while less-successful robots disappeared from the gene pool.

Each robot was also given a choice between sharing points awarded for finding food, thus giving other robots’ genes a chance of surviving, or hoarding. In different iterations of the experiment, the researchers altered the costs and benefits of sharing; they found that, again and again, the robots evolved to share at the levels predicted by Hamilton’s equations.

“A fundamental principle of natural selection also applies to synthetic organisms,” wrote the researchers. “These experiments demonstrate the wide applicability of kin selection theory.”
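Hamilton's rule itself is just the inequality r·B > C: altruism can evolve when relatedness times the benefit to the recipient exceeds the cost to the altruist. The threshold the robots converged on can be sketched directly; the payoff numbers below are illustrative, not values from the PLoS Biology paper:

```python
def altruism_favored(r, benefit, cost):
    """Hamilton's rule: altruism is favored by selection when r * B > C."""
    return r * benefit > cost

def min_relatedness(benefit, cost):
    """The minimum relatedness for altruism to evolve, per Hamilton's rule."""
    return cost / benefit

# Full siblings (r = 0.5) versus cousins (r = 0.125), for an illustrative payoff:
print(altruism_favored(0.5, benefit=3.0, cost=1.0))    # True
print(altruism_favored(0.125, benefit=3.0, cost=1.0))  # False
print(min_relatedness(benefit=3.0, cost=1.0))          # ~0.333
```

The experiment's claim is that the evolved robot populations landed on exactly this C/B threshold for the relatedness at which sharing started to pay off.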

Video: Evolution of cooperative and altruistic behavior in earlier research by Keller's group. That study established a basis for their altruism; the new study explores its relationship to biological theory.

http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000292;jsessionid=68EAEA2BB53C385BD6D7A0A1C47860D6.ambra02#pbio.1000292.s005
User avatar
Searcher08
 
Posts: 5887
Joined: Thu Dec 20, 2007 10:21 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Wombaticus Rex » Fri Jul 19, 2013 6:53 pm

That is a mindfuckingly fascinating article, thank you.
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sat Jul 20, 2013 11:59 am

Algorithmically-generated artworks comprise ‘average’ of faces from movies

[Image: composite ‘average’ face from The Matrix]

http://arstechnica.com/business/2013/07 ... om-movies/
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 4151
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Wombaticus Rex » Sat Jul 20, 2013 6:29 pm

Via: http://io9.com/5923496/a-breakthrough-e ... and-humans


How artificial intelligences will see us

Right now, there is a neural network of 1,000 computers at Google's X lab that has taught itself to recognize humans and cats on the internet. But the network has also learned to recognize some weirder things. What can this machine's unprecedented new capability teach us about what future artificial intelligences might actually be like?

Last week, a group of scientists announced this breakthrough at a conference in Scotland. After mulling over 10 million stills yanked from YouTube for three days, the network showed off what it had learned by producing some composite images (in gray below), two of which were unmistakably a human and a cat. This is the first time computers have taught themselves to recognize the content of images. The machine did this using the kind of massively parallel computing methods made famous by Google's enormous data farms, combined with a couple of simple learning algorithms. The researchers speculate that their neural network was able to teach itself to recognize humans and cats partly because it had access to an enormous amount of data, and partly because of the whopping 16,000 processors they built into the network.
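The real experiment ran a billion-parameter sparse autoencoder across 16,000 cores; the toy below is only a single-layer NumPy autoencoder illustrating the same principle, that features emerge from unlabeled data when a network learns to reconstruct its input. Every size and rate here is invented for the sketch, and random noise stands in for the YouTube stills.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 16))           # 256 unlabeled "frames", 16 pixels each
W = rng.normal(scale=0.1, size=(16, 4))  # tied encoder/decoder weights, 4 features

def reconstruction_loss(W):
    H = np.tanh(X @ W)                   # encode inputs into feature activations
    return float(np.mean((H @ W.T - X) ** 2))

loss_before = reconstruction_loss(W)
lr = 0.01
for _ in range(500):
    H = np.tanh(X @ W)
    err = H @ W.T - X                    # reconstruction error
    dH = (err @ W) * (1 - H ** 2)        # backprop through the tanh encoder
    grad = X.T @ dH + err.T @ H          # both paths through the tied weights
    W -= lr * grad / len(X)

loss_after = reconstruction_loss(W)
# No labels were ever supplied: the weights organized themselves purely
# to reconstruct the data, which is the sense in which the Google
# network "taught itself" what its inputs contain.
print(f"loss before: {loss_before:.3f}, after: {loss_after:.3f}")
```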

Researchers Jeff Dean and Andrew Ng cautioned that this network is quite unlike a human brain, despite being called a "neural network." Ng told the New York Times' John Markoff, "A loose and frankly awful analogy is that our numerical parameters correspond to synapses." The human visual cortex is millions of times larger, at least from the perspective of synapses and neurons.

So this network isn't like a human brain, though they share some characteristics. It's a new kind of (semi) intelligent entity. Let's call it XNet. Most of the news stories covering XNet have focused on how it learned to recognize humans and kitties after seeing them thousands of times, which is just the kind of thing a little kid would do. Very cuddly and relatable.

But XNet recognized some other things, too. Over at Slate, Will Oremus reports:

Dean notes that the computers "learned" a slew of concepts that have little meaning to humans. For instance, they became intrigued by "tool-like objects oriented at 30 degrees," including spatulas and needle-nose pliers.

This is, to me, the most interesting part of the research. What are the patterns in human existence that jump out to non-human intelligences? Certainly 10 million videos from YouTube do not comprise the whole of human existence, but it is a pretty good start. They reveal a lot of things about us we might not have realized, like a propensity to orient tools at 30 degrees. Why does this matter, you ask? It doesn't matter to you, because you're human. But it matters to XNet.

What else will matter to XNet? Will it really discern a meaningful difference between cats and humans? What about the difference between a tool and a human body? This kind of question is a major concern for University of Oxford philosopher Nick Bostrom, who has written about the need to program AIs so that they don't display a "lethal indifference" to humanity. In other words, he's not as worried about a Skynet scenario where the AIs want to crush humans — he's worried that AIs won't recognize humans as being any more interesting than, say, a spatula. This becomes a problem if, as MIT roboticist Cynthia Breazeal has speculated, human-equivalent machine minds won't emerge until we put them into robot bodies. What if XNet exists in a thousand robots, and they all decide for some weird reason that humans should stand completely still at 30 degree angles? That's some lethal indifference right there.

I'm not terribly concerned about future AIs turning humans into spatulas. But I am fascinated by the idea that XNet and its next iterations will start noticing patterns we never would. Already, XNet is showing signs of being a truly alien intelligence. If it's true that we cobble together our identities out of what we recognize in the world around us, what exactly would a future XNet come to think of as "itself"? Would it imagine itself as a cat, or as something oddly abstract, like an angle? We just don't know.

What seems certain is that if XNet becomes the template for a future AI, it will be learning about itself and the world from an ocean of data debris we created. I want to know what unknown or unnamable patterns it will find in us, and what we'll learn from its observations.

For more information, read the full scientific paper about Google's breakthrough experiments with neural networks.


Link to that full paper: http://research.google.com/archive/unsu ... l2012.html
User avatar
Wombaticus Rex
 
Posts: 10896
Joined: Wed Nov 08, 2006 6:33 pm
Location: Vermontistan
Blog: View Blog (0)
