Schleicher: One achievement of yours that you
didn’t mention is one that you are perhaps best
known for, the invention of the Game of Life, the
theory of cellular automata.
Conway: Yes, that is true. And sometimes I wish I
hadn’t invented that game.
Schleicher: Why?
Conway: Well, because I am pretty egotistical.
When I see a new mathematical book for a general
audience, I turn to the index, I look for a certain
name in the back, and if I see this name, it
shines out at me somehow. And it says, page 157,
pages 293–298, or whatever. So I eagerly turn to
those pages, hoping to see some mention of my
discoveries. I only ever see the Game of Life. I am
not ashamed of it; it was a good game. It said
things that needed to be said. But I’ve discovered
so many more things, and that was, from a certain
point of view, rather trite—to me anyway. It is a bit
upsetting to be known for this thing that I consider
in a way rather trivial. There are lots of other things
to be discovered about surreal numbers. And the
Free Will Theorem is recent, and therefore I am
still flushed with enthusiasm about it.
Schleicher: I understand what you are saying. But
is it possible that the Game of Life has perhaps not
been fully developed or understood? Maybe there is
a theory waiting to be discovered?
Conway: No, it’s been overdeveloped. You won’t
interest me in the Game of Life.
Schleicher: But Stephen Wolfram is very interested in
cellular automata. Doesn’t he think it is the future
of everything?
Conway: I think he is wrong. And I am rather
astonished that he has the opinions that he has,
because he supposedly studied physics. I shouldn’t
say “supposedly”—pardon me. He should be aware
of the fact that the universe behaves in a manner
that—at least most competent physicists believe—
is not deterministic. And cellular automata are
things that, like the life game, are deterministic.
So in my opinion, it’s provable that the universe is
not a cellular automaton.
...
I don’t think that Wolfram’s theories are tenable.
His book is very interesting, but as the explanation
of the universe—I don’t think he’s got the right
idea, possibly because he doesn’t understand the
one thing of quantum mechanics that I do.
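(As an aside, Conway's point about determinism is easy to see in code: a Life generation is a pure function of the previous grid, so the same starting pattern always produces the same future. The small Python sketch below is purely illustrative; the glider and the tiny wrapped board are arbitrary choices, not anything from the interview.)

# Minimal Game of Life step: deterministic -- the next grid is a pure
# function of the current grid, which is Conway's point above.
def life_step(grid):
    """Return the next generation of a 2D list of 0/1 cells (toroidal edges)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbours, wrapping around the edges.
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Birth on exactly 3 neighbours; survival on 2 or 3.
            nxt[r][c] = 1 if neighbours == 3 or (grid[r][c] and neighbours == 2) else 0
    return nxt

# A glider on a small board: stepping it twice from the same start always
# gives the same result -- there is no randomness anywhere in the rule.
glider = [[0, 1, 0, 0, 0],
          [0, 0, 1, 0, 0],
          [1, 1, 1, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0]]
assert life_step(glider) == life_step(glider)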
http://www.wired.com/wiredenterprise/2013/06/andrew_ng/
Andrew Ng wants to bring deep learning — an emerging computer science field that seeks to mimic the human brain with hardware and software — into the DIY era.
Last year at Google he built a computerized brain that worked as a cat detector. It used a roughly 1-billion-connection network trained on 1,000 computers to teach itself how to spot cat videos on YouTube. While this worked well, Ng says, some researchers walked away thinking, “If I don’t have 1,000 computers, is there still any hope of my making progress on deep learning?” The system cost roughly $1 million.
“I was quite dismayed at this, particularly given that there are now a few other computer science research areas where a lot of the cutting-edge research is done only within giant companies,” he recalls. “Others simply don’t have the resources to do similar work.”
On Monday, he’s publishing a paper that shows how to build the same type of system for just $20,000 using cheap, but powerful, graphics microprocessors, or GPUs. It’s a sort of DIY cookbook on how to build a low-cost neural network. He hasn’t yet decided whether the code for the model will be open sourced, but the new paper gives enough detail for people with enough coding brawn to build their own faux brains.
http://stanford.edu/~acoates/papers/Coa ... ml2013.pdf
Scaling up deep learning algorithms has been shown to lead to increased performance in benchmark tasks and to enable discovery of complex high-level features. Recent efforts to train extremely large networks (with over 1 billion parameters) have relied on cloud-like computing infrastructure and thousands of CPU cores. In this paper, we present technical details and results from our own system based on Commodity Off-The-Shelf High Performance Computing (COTS HPC) technology: a cluster of GPU servers with InfiniBand interconnects and MPI. Our system is able to train 1 billion parameter networks on just 3 machines in a couple of days, and we show that it can scale to networks with over 11 billion parameters using just 16 machines. As this infrastructure is much more easily marshaled by others, the approach enables much wider-spread research with extremely large neural networks.
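(A rough back-of-envelope calculation, using my own assumed figures rather than numbers from the paper, shows why a small GPU cluster is enough to hold a network of this size. The memory itself is modest; the InfiniBand and MPI plumbing mentioned in the abstract is there to keep the machines synchronized.)

# Back-of-envelope sizing for a multi-billion-parameter network.
# The GPUs-per-server figure below is an illustrative assumption,
# not a number taken from the Coates et al. paper.
params = 11_000_000_000          # 11 billion parameters
bytes_per_param = 4              # 32-bit floats
total_gb = params * bytes_per_param / 1e9
print(f"Parameters alone: ~{total_gb:.0f} GB")        # ~44 GB

servers = 16
gpus_per_server = 4              # assumed GPUs per server
per_gpu_gb = total_gb / (servers * gpus_per_server)
print(f"Spread over {servers * gpus_per_server} GPUs: ~{per_gpu_gb:.1f} GB each")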
Google research chief: 'Emergent artificial intelligence? Hogwash!'
'We have to make it happen'
If there's any company in the world that can bring true artificial intelligence into being, it's Google.
But the advertising giant admits a SkyNet-like electronic overlord is unlikely to create itself even within the Google network without some help from clever humans.
Though many science fiction writers and even some academics have put faith in the idea of emergent artificial intelligence – that is, the appearance of an entity with a sense of its own identity and agency within a sufficiently complex system – Google's head of research Alfred Spector told The Register he thinks it's unlikely that such a system could develop on its own – even in the planet-spanning million-server Googlenet.
"[AI] just happens on its own? I'm too practical – we have to make it happen," Spector told The Register in a chat after a presentation by Google researchers at Google I/O on Thursday. "It's hard enough to make it happen at this stage."
Spector is the Oppenheimer to Google's various Manhattan Projects, and it's his job to shepherd the company's dedicated research team toward various ambitious goals, whether those be designing machine learning tools for automatically classifying web content, designing wearable computers like Google Glass, or coming up with radical new approaches to performance isolation on shared compute clusters.
One of the overarching projects that ties all these together is the development of a set of artificial intelligence systems that use machine learning techniques to automatically classify and deal with vast amounts of web information.
Google's approach toward artificial intelligence embodies a new way of designing and running complex systems. Rather than create a monolithic entity with its own modules for reasoning about certain inputs and developing hypotheses that let it bootstrap its own intelligence into higher and higher abstractions away from base inputs, as other AI researchers did through much of the 60s and 70s*, Google has instead taken a modular approach.
"We have the knowledge graph, [the] ability to parse natural language, neural network tech [and] enormous opportunities to gain feedback from users," Spector said in an earlier speech at Google IO. "If we combine all these things together with humans in the loop continually providing feedback our systems become ... intelligent."
Spector calls this his "combination hypothesis", and though Google is not there yet – SkyNet does not exist – you can see the first green buds of systems that have the appearance of independent intelligence via some of the company's user-predictive technologies such as Google Now, the new Maps and, of course, the way it filters search results according to individual identity. The company is plugging more money into its AI endeavors, and hired Singularity-obsessed AI-booster Ray Kurzweil in December to help run its AI and machine learning schemes.
Another company pioneering this approach is IBM, whose Watson tech famously took on and beat Jeopardy champions. Watson is now being tested within hospitals, where the system's ability to rapidly synthesize large quantities of information and generate hypotheses in response to questions has – IBM hopes – great promise for diagnostic healthcare. Spector used to work at IBM where he built some of the systems that sit inside Watson.
"I don't think it's fundamentally different," he said. "[IBM] started with Jeopardy, we're starting with a distribution of queries that we think is valuable to users, but I think both systems are relying on similar concepts: information, machine learning, reinforcement learning. They're both very similar, both very valuable."
But it's that last phrase – reinforcement learning – which is why this vulture believes Google has the greatest chance of effectively designing AI systems. Because Google operates the most widely used search engine in the world, and has hundreds of millions of Gmail, YouTube, and Android users as well, the company has a profound advantage when tuning its artificial intelligence approaches in response to people. It's as though every user of Google services is checking and rechecking Google's AI techniques, correcting the search company when it gets something wrong, and performing an action when it gets it right.
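(To make that feedback loop concrete, here is a toy sketch, my own illustration rather than anything Google has published: an epsilon-greedy bandit that gradually favours whichever result simulated users click. The result names and preference probabilities are invented for the example.)

import random

# Toy illustration of learning from user clicks (not Google's method):
# an epsilon-greedy bandit that slowly favours results users reward.
class ClickBandit:
    def __init__(self, results, epsilon=0.1):
        self.results = list(results)
        self.epsilon = epsilon
        self.clicks = {r: 0 for r in self.results}
        self.shows = {r: 0 for r in self.results}

    def pick(self):
        # Mostly exploit the best-performing result, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(self.results)
        return max(self.results,
                   key=lambda r: self.clicks[r] / self.shows[r] if self.shows[r] else 0.0)

    def feedback(self, result, clicked):
        self.shows[result] += 1
        if clicked:
            self.clicks[result] += 1

bandit = ClickBandit(["result_a", "result_b", "result_c"])
for _ in range(1000):
    shown = bandit.pick()
    # Simulated users strongly prefer result_b.
    bandit.feedback(shown, clicked=(shown == "result_b" and random.random() < 0.6))
print(bandit.clicks)   # result_b accumulates the most clicks over time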
This means that although Google thinks that it's unlikely a full SkyNet-style "emergent" AI could spring forth from its technology, its AI approaches will have some of the same characteristics as those systems. "There will be emergent intelligence in the sense that you will be surprised," Spector said, referencing Google Now's ability to pre-emptively suggest transport routes home when the system assumes you have finished the working day, and so on.
But there's one factor which Google has not yet perfected, and it is the human mind's ability to be selectively forgetful. "I've always believed there's a certain amount of randomness that generates what we think of as creativity." Putting this randomness in could be the next step – but Google is keeping quiet on that part, for now.
* Bootnote:
This approach -- "create a monolithic entity with its own modules for reasoning about certain inputs" -- was a spectacular failure and led in part to the "AI Winter" period of scant funding and little progress that defined AI research in the 80s. This was followed by the rise of "expert systems" – that is, AI technologies with specific goals achieved through synthesis of vast amounts of domain-specific information, such as chess-champion Deep Blue. After expert systems came the period in which we now operate, in which things such as optical character recognition, machine learning, and Bayesian inference are all coming together to create systems with broader remits, but built out of a distributed set of components.
Artificial speech systems will 'humanize' our machines
By Dick Pelletier
Imagine a TV, cell phone, or computer that not only understands what you say, but will act on it; one that actually learns through experience and knows you well enough to anticipate your needs.
Now further imagine that this voice-interactive device will appear as an on-screen image resembling your favorite celebrity or loved one. On command, it will materialize on your computer, cell phone and TV displays addressing you by name and asking, "How can I help you?"
Poised at the cutting edge of this futuristic technology, Fair Isaac's Robert Hecht-Nielsen announced at a recent IBM conference, "Mapping, Understanding and Replicating the Brain in Electronic Systems," that his company is developing a machine called Chancellor that could one day bring the vision of true conversational machines to reality.
"We see Chancellor as a small, cylindrical device, several of which can be placed around the home," says Hecht-Nielsen. The Internet-connected machine will answer phones, make appointments for family members, and become a portal to the world of automated commerce.
Unlike most artificial intelligence systems, the Fair Isaac machine does not use algorithms or software, or adhere to standard grammar rules. Instead, it utilizes computer simulations of brain tissues which enable it to process information and acquire knowledge similar to the way that we do; and it communicates using perfect human speech.
Jeff Hawkins, author of the popular book, On Intelligence, was also at the conference to promote his latest startup, Numenta. The Menlo Park firm is trying to create a new type of computer memory system that recognizes patterns and trains itself, mimicking functions of the human brain, which some say, could make synthetic speech systems more intelligent.
Microsoft's Bill Gates recently stated that voice-enabled technologies will soon allow us to converse naturally and comfortably with computers and TVs, reducing the need for mouse, keyboard, and remote controls. In addition, tomorrow's advanced machines will browse the Internet, search for stuff that interests us, and unravel our future TV channel maze.
By 2015, increases in processor speed will give interactive voice systems the mind of a 6-year-old child, showing unmistakable signs of creativity. By 2020, millions of systems will form a huge connected network, learning from each other and acting more human-like while shielding us from cyber-fraud and other harmful activities.
By the mid-2020s, holographic and nano-sensory breakthroughs will allow our digital images to jump off the screen and become real-life touchable holograms appearing on command, anywhere, anytime. By 2030, armed with camera eyes and enhanced intelligence, these 'humanized' systems could become a powerful security force, protecting our homes from intruders and alerting doctors to our body's medical needs.
How advanced might these systems become? Henry Markram, director of the Blue Brain Project (run at EPFL on IBM supercomputers), announced that they are 10 years away from creating a functional artificial human brain. Many experts believe this achievement will enable scientists to program 'humanness' into our digital wonders and raise their intelligence levels to equal ours.
Today, we build machines to serve us. Tomorrow, although they are unlikely to rise and take over the world as some suggest, they will become increasingly indispensable in our everyday lives. Welcome to the amazing futuristic world of humanoid robots.
Stanislaw Lem wrote:
Instructions
(for persons participating for the first time in conversations with GOLEM)
1. Remember that GOLEM is not a human being: it has neither personality nor character in any sense intuitively comprehensible to us. It may behave as if it has both, but that is the result of its intentions (disposition), which are largely unknown to us.
2. The conversation theme is determined at least four weeks in advance of ordinary sessions, and eight weeks in advance of sessions in which persons from outside the USA are to participate. This theme is determined in consultation with GOLEM, which knows who the participants will be. The agenda is announced at the Institute at least six days before a session; however, neither the discussion moderator nor the MIT administration is responsible for GOLEM's unpredictable behavior, for it will sometimes alter the thematic plan of a session, make no reply to questions, or even terminate a session with no explanation whatsoever. The chance of such incidents occurring is a permanent feature of conversations with GOLEM.
3. Everyone present at a session may participate, after applying to the moderator and receiving permission to speak. We would advise you to prepare at least a written outline, formulating your opinions precisely and as unambiguously as possible, since GOLEM passes over logically deficient utterances in silence or else points out their error. But remember that GOLEM, not being a person, has no interest in hurting or humiliating persons; its behavior can be explained best by accepting that it cares about what we classically refer to as adaequatio rei et intellectus ["the intellect must be adequate to the thing"].
4. GOLEM is a luminal system about whose structure we have an imperfect knowledge, since it has repeatedly reconstructed itself. It thinks more than a million times faster than man, and so its utterances, as delivered by vocoder, must be slowed down accordingly. This means that GOLEM can compose an hour-long utterance in a few seconds and then store it in its peripheral memory, in order to deliver it to its audience, the session participants.
5. In the conference room above the moderator's seat there are indicators, including three of particular importance. The first two, designated by the symbols epsilon and zeta, indicate GOLEM's consumption of power at a given moment, as well as the portion of its system that is switched on to the discussion in progress.
To make the data visually accessible, these indications are gradated into divisions of conventional magnitude. Thus the consumption of power may be "full," "average," "small," or "minute," and the portion of GOLEM "present at the session" can range from totality to 1/1000; most frequently this fraction fluctuates between 1/10 and 1/100. It is the normal practice to say that GOLEM is operating at "full," "half," "low," or "minimal" power. These data—clearly visible, since the gradations are lit from underneath by contrasting colors—should not, however, be overrated. In particular, the fact that GOLEM is participating in a discussion at low or even minimal power says nothing about the intellectual level of its utterances, since the indicators give information about physical and not informational processes as measures of "spiritual involvement."
GOLEM's power consumption may be great but its participation small, since, for example, GOLEM may be communicating with the gathering while at the same time working out some problem of its own. Its power consumption may be small but its participation greater, and so on. The data from both indicators must be compared with readings from the third, designated by the symbol iota. As a system with 90 outlets, GOLEM can, while participating in a session, undertake a great number of operations of its own, in addition to collaborating with numerous groups of specialists (machines or people) either on the Institute premises or elsewhere. An abrupt change in power consumption usually does not signify GOLEM's increased interest in the proceedings, but rather a switching-on into other outlets of other research groups, which is precisely what the iota indicator is meant to show. It is also worth bearing in mind that GOLEM's "minimal" power consumption amounts to several dozen kilowatts, whereas the full power consumption of a human brain oscillates between five and eight watts.
6. Persons taking part in conversations for the first time would do well to listen to the proceedings first, to become familiar with the customs which GOLEM imposes. This initial silence is not an obligation, but merely a suggestion which every participant ignores at his own risk.
How artificial intelligences will see us
Right now, there is a neural network of 1,000 computers at Google's X lab that has taught itself to recognize humans and cats on the internet. But the network has also learned to recognize some weirder things, too. What can this machine's unprecedented new capability teach us about what future artificial intelligences might actually be like?
Last week, a group of scientists announced this breakthrough at a conference in Scotland. After mulling over 10 million stills yanked from YouTube for three days, the network showed off what it had learned by producing some composite images, two of which were unmistakably a human and a cat. This is the first time computers have taught themselves to recognize the content of images. The machine did this using the kind of massively parallel computing methods made famous by Google's enormous data farms, combined with a couple of simple learning algorithms. The researchers speculate that their neural network was able to teach itself to recognize humans and cats partly because it had access to an enormous amount of data, and partly because of the whopping 16,000 processors they built into the network.
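(The learning machinery at the heart of such a system is surprisingly compact. Below is a minimal, purely illustrative single-layer autoencoder in NumPy trained on synthetic "patches"; the sizes, learning rate, and data are made up, and the real Google system added sparsity, pooling, many layers, far more data, and those 16,000 cores. It only shows the core idea of learning features by reconstructing the input.)

import numpy as np

# Minimal single-layer autoencoder on synthetic "image patches".
# Everything here is illustrative; it is not the Google architecture.
rng = np.random.default_rng(0)
patches = rng.random((1000, 64))           # 1,000 fake 8x8 patches in [0, 1)

n_hidden = 16
W1 = rng.normal(0, 0.1, (64, n_hidden))    # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, 64))    # decoder weights
lr = 0.01

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(50):
    h = sigmoid(patches @ W1)              # encode
    recon = h @ W2                         # decode (linear output)
    err = recon - patches                  # reconstruction error
    # Backpropagate the squared-error loss through the two layers.
    grad_W2 = h.T @ err / len(patches)
    grad_h = err @ W2.T * h * (1 - h)
    grad_W1 = patches.T @ grad_h / len(patches)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final reconstruction error:", float(np.mean(err ** 2)))
# Columns of W1 are the learned "features"; with real images and a sparsity
# penalty, such features come to resemble edges, faces -- or cats.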
Researchers Jeff Dean and Andrew Ng cautioned that this network is quite unlike a human brain, despite being called a "neural network." Ng told the New York Times' John Markoff, "A loose and frankly awful analogy is that our numerical parameters correspond to synapses." The human visual cortex is millions of times larger, at least from the perspective of synapses and neurons.
So this network isn't like a human brain, though they share some characteristics. It's a new kind of (semi) intelligent entity. Let's call it XNet. Most of the news stories covering XNet have focused on how it learned to recognize humans and kitties after seeing them thousands of times, which is just the kind of thing a little kid would do. Very cuddly and relatable.
But XNet also recognized some other things, too. Over at Slate, Will Oremus reports:
Dean notes that the computers "learned" a slew of concepts that have little meaning to humans. For instance, they became intrigued by "tool-like objects oriented at 30 degrees," including spatulas and needle-nose pliers.
This is, to me, the most interesting part of the research. What are the patterns in human existence that jump out to non-human intelligences? Certainly 10 million videos from YouTube do not comprise the whole of human existence, but it is a pretty good start. They reveal a lot of things about us we might not have realized, like a propensity to orient tools at 30 degrees. Why does this matter, you ask? It doesn't matter to you, because you're human. But it matters to XNet.
What else will matter to XNet? Will it really discern a meaningful difference between cats and humans? What about the difference between a tool and a human body? This kind of question is a major concern for University of Oxford philosopher Nick Bostrom, who has written about the need to program AIs so that they don't display a "lethal indifference" to humanity. In other words, he's not as worried about a Skynet scenario where the AIs want to crush humans — he's worried that AIs won't recognize humans as being any more interesting than, say, a spatula. This becomes a problem if, as MIT roboticist Cynthia Breazeal has speculated, human-equivalent machine minds won't emerge until we put them into robot bodies. What if XNet exists in a thousand robots, and they all decide for some weird reason that humans should stand completely still at 30 degree angles? That's some lethal indifference right there.
I'm not terribly concerned about future AIs turning humans into spatulas. But I am fascinated by the idea that XNet and its next iterations will start noticing patterns we never would. Already, XNet is showing signs of being a truly alien intelligence. If it's true that we cobble together our identities out of what we recognize in the world around us, what exactly would a future XNet come to think of as "itself"? Would it imagine itself as a cat, or as something oddly abstract, like an angle? We just don't know.
What seems certain is that if XNet becomes the template for a future AI, it will be learning about itself and the world from an ocean of data debris we created. I want to know what unknown or unnamable patterns it will find in us, and what we'll learn from its observations.
For more information, read the full scientific paper about Google's breakthrough experiments with neural networks.