Artificial Intelligence / Digital life / Skynet megathread

Moderators: Elvis, DrVolin, Jeff

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sat Jul 25, 2015 5:53 pm

divideandconquer » Sat Jul 25, 2015 2:51 pm wrote:
I'm not sure you could say the "racist" AI was programmed by humans. It learns on its own using data collected by Google. What exactly it learns isn't planned out in advance. The same algorithm was labeling people of all colors as dogs at one point.

Obviously the underlying algorithms for how it learns are made by humans, but from there on out the programmers act more like chaperones than engineers. It's a bit like a fractal: the underlying algorithm (made by humans) is pretty simple, but the patterns that emerge aren't in any way designed by humans. They're an emergent property of the algorithm.

As a side note: I remember reading an interview with Google engineers where they talked about how some of their larger databases were starting to act almost as if they were alive, doing things no one had predicted.


Yes, but isn't it possible to create biased computer algorithms? Or to manipulate the input data in order to exploit specific vulnerabilities of learning algorithms? To redirect the computation of the underlying algorithms in some way?

I mean, we trust algorithms because we think of them as objective, whereas the reality is that biased humans create these algorithms and can embed in them all sorts of biases and perspectives. In other words, a computer algorithm is unbiased in its execution, but that doesn't mean there is no bias encoded within it.

Google is intensely resistant to releasing the algorithms they use to process and adjust the data, so how is it possible to know that Google isn't using algorithms that assign exaggerated weight to whatever ensures the results they want?


It's perfectly possible, and in fact they are. Can't remember the exact number, but there are dozens, if not hundreds, of variables that help rank the results. Their search algorithm takes reliability into account when ranking websites, for instance (Anthony Watts is very upset), but of course it's all automated, so snafus are bound to happen. This has been ongoing since 2011, and they're constantly updating it (Google Panda).

And as much as I'd like to see their search algorithms, that's not going to happen. It's their secret sauce, not to mention that Google would become one giant spambot about two nanoseconds after it went public. People are devious enough about manipulating the rankings now; imagine what it would be like if they knew exactly how it worked.
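
For a sense of how that kind of weighted ranking works, here's a toy sketch in Python. To be clear, this is purely illustrative: the signal names and weights are invented, and Google's actual ranking function is exactly the secret sauce discussed above.

```python
# Toy multi-signal ranker -- NOT Google's algorithm. Signals and weights
# are invented for illustration only.
def rank_score(signals, weights):
    """Combine a page's per-signal values into one relevance score."""
    return sum(weights[name] * value for name, value in signals.items())

# Hypothetical tuning knobs; real search engines use hundreds of these.
weights = {"keyword_match": 3.0, "inbound_links": 2.0, "reliability": 5.0}

pages = {
    "reliable-site.example": {"keyword_match": 0.7, "inbound_links": 0.6, "reliability": 0.9},
    "content-farm.example":  {"keyword_match": 0.9, "inbound_links": 0.8, "reliability": 0.1},
}

# Whoever sets the weights decides the winners: crank "reliability" up or
# down and watch the order flip -- which is exactly the bias question here.
for site in sorted(pages, key=lambda s: -rank_score(pages[s], weights)):
    print(site, round(rank_score(pages[site], weights), 2))
```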
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3981
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Luther Blissett » Tue Jul 28, 2015 12:52 pm

Elvis » Sat Jul 25, 2015 8:23 am wrote:I noticed that those Google 'pattern recognition'-processed photos include a lot of dogs -- dogs popping out of everything. I suppose that's because of the disproportionate number of dog photos on the Internet. So why no cats? It's dogs, dogs, dogs in those images.


Human operators have to tell it what to look for - fruit, architecture, dogs, etc. Still chilling, but somewhat less so.
The Rich and the Corporate remain in their hundred-year fever visions of Bolsheviks taking their stuff - JackRiddler
User avatar
Luther Blissett
 
Posts: 4990
Joined: Fri Jan 02, 2009 1:31 pm
Location: Philadelphia
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Luther Blissett » Tue Jul 28, 2015 12:57 pm

Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons
More than 1,000 experts and leading robotics researchers sign open letter warning of military artificial intelligence arms race

Over 1,000 high-profile artificial intelligence experts and leading researchers have signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis and Professor Stephen Hawking, along with 1,000 AI and robotics researchers.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

The authors argue that AI can be used to make the battlefield a safer place for military personnel, but that offensive weapons that operate on their own would lower the threshold of going to battle and result in greater loss of human life.

Should one military power start developing systems capable of selecting targets and operating autonomously without direct human control, it would start an arms race similar to the one for the atom bomb, the authors argue. Unlike nuclear weapons, however, AI requires no specific hard-to-create materials and will be difficult to monitor.

“The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” said the authors.

Toby Walsh, professor of AI at the University of New South Wales, said: “We need to make a decision today that will shape our future and determine whether we follow a path of good. We support the call by a number of different humanitarian organisations for a UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers.”

Musk and Hawking have warned that AI is “our biggest existential threat” and that the development of full AI could “spell the end of the human race”. But others, including Wozniak, have recently changed their minds on AI, with the Apple co-founder saying that robots would be good for humans, making them like the “family pet and taken care of all the time”.

At a UN conference in Geneva in April discussing the future of weaponry, including so-called “killer robots”, the UK opposed a ban on the development of autonomous weapons, despite calls from various pressure groups, including the Campaign to Stop Killer Robots.
The Rich and the Corporate remain in their hundred-year fever visions of Bolsheviks taking their stuff - JackRiddler
User avatar
Luther Blissett
 
Posts: 4990
Joined: Fri Jan 02, 2009 1:31 pm
Location: Philadelphia
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby smoking since 1879 » Tue Jul 28, 2015 1:03 pm

Luther Blissett » Tue Jul 28, 2015 5:52 pm wrote:
Elvis » Sat Jul 25, 2015 8:23 am wrote:I noticed that those Google 'pattern recognition'-processed photos include a lot of dogs -- dogs popping out of everything. I suppose that's because of the disproportionate number of dog photos on the Internet. So why no cats? It's dogs, dogs, dogs in those images.


Human operators have to tell it what to look for - fruit, architecture, dogs, etc. Still chilling, but somewhat less so.



I believe Google's DeepDream net is trained on the ImageNet dataset.
http://image-net.org
http://image-net.org/explore?wnid=n01861778
There are far more images of dogs than cats for some reason -- maybe because there are more breeds of domestic dog than of domestic cat?
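
That hunch is easy to check against the standard ILSVRC-2012 label set these nets are typically trained on: in the usual class ordering, indices 151-268 are all domestic dog breeds, while domestic cats get only a handful of classes. A quick sketch (assuming torchvision >= 0.13, which ships the class names as weights metadata):

```python
# Count the dog classes in the ILSVRC-2012 label set (1000 classes).
# Assumes torchvision >= 0.13 for the weights-metadata API.
from torchvision.models import ResNet50_Weights

categories = ResNet50_Weights.IMAGENET1K_V1.meta["categories"]
dogs = categories[151:269]  # indices 151-268: the domestic dog breeds

print(len(categories))           # 1000 classes in total
print(len(dogs))                 # 118 of them are dog breeds
print(dogs[0], "...", dogs[-1])  # Chihuahua ... Mexican hairless

# Domestic cats, by contrast, get only about five classes (tabby, tiger
# cat, Persian, Siamese, Egyptian cat) -- so a net trained on this set
# "dreams" dogs far more readily than cats.
```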
"Now that the assertive, the self-aggrandising, the arrogant and the self-opinionated have allowed their obnoxious foolishness to beggar us all I see no reason in listening to their drivelling nonsense any more." Stanilic
smoking since 1879
 
Posts: 509
Joined: Mon Apr 20, 2009 10:20 pm
Location: CZ
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Pele'sDaughter » Wed Sep 16, 2015 5:52 pm

http://www.pddnet.com/news/2015/09/neur ... e-learning

Neuromemristive Processor Breaks Boundaries in Machine Learning

Alex Nugent is working on making computers more human.

Specifically, he and his team at Knowm Inc. have developed the world’s first adaptive neuromemristive processor along with Dr. Kris Campbell, a researcher at Boise State University and an expert on memristors. Product Design & Development interviewed him on Sept. 1 to find out how this adaptive neuromemristive processor came about, and how it could transform machine learning applications, autonomous platforms, and data center operations.

Memristors provide the key to adaptive learning, said Nugent, who has been working for almost 15 years on how to make a chip that functions like a human brain. His product has leapfrogged past even IBM’s supercomputing abilities, he said, thanks to neuromemristive technology.

IBM’s TrueNorth incorporates the largest neuromorphic chip in the world, but the chip isn’t capable of learning on its own: it requires a supercomputer’s support in order to do that.

“This memristor enables efficient on-chip learning. We can do away with the supercomputers. We can create chips that are intrinsically adaptive,” Nugent said.

Around 2001, Nugent started working on neuromorphic computers and large-scale adaptive networks. He advised on DARPA’s SyNAPSE program and developed what he called the Knowm synapse, based on the kind of bifurcating patterns found in nature. Bifurcation of energy, which occurs in natural phenomena such as lightning and photosynthesis as well as in neurons, involves energy being dissipated through an adaptive container, with each branching pathway competing to hold the energy.

The Knowm BS-AF-W Memristor, built on these principles, is what Nugent calls “the ideal learning memristor.” It is capable of bi-directional incremental operation, which means that the resistance of the memristor can be nudged in both directions. The resistance is an analog to the weight of a synapse, and the process of learning comes about through nudging a device so that it hones itself onto a certain value.

Other memristor and filamentary devices available today have this property in one direction, but are unstable in the other: if they are adjusted in the other direction, they perform a complete erase.
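
The "nudging" behaviour is simple enough to caricature in a few lines. The sketch below is a toy software model of a bi-directional incremental synapse; the step size and bounds are invented, and this is not Knowm's actual device physics.

```python
# Toy model of a bi-directional incremental synapse: a stored weight
# (standing in for memristor conductance) can be nudged up OR down in
# small steps until it hones in on a target value.
def nudge(weight, target, step=0.05):
    """Move the weight one small increment toward the target value."""
    if abs(weight - target) <= step:
        return target                    # close enough: settle on the value
    if weight < target:
        return min(weight + step, 1.0)   # incremental potentiation
    return max(weight - step, 0.0)       # incremental depression

w = 0.2
for _ in range(30):
    w = nudge(w, target=0.8)
print(w)  # 0.8 -- the weight has honed in on the target

# The one-directional devices described above can't do the last branch:
# any adjustment downward performs a complete erase back to ~0.0,
# destroying the learned state instead of refining it.
```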

True adaptive learning in Knowm’s products is enabled by kT-RAM, a new processor specification and, Nugent said, “our best guess as to a memristor-based synaptic processor.” It is based on Knowm’s theory of AHaH Computing, the result of more than 13 years of research. The AHaH Computing architecture eliminates the separation of memory and processing, removing the ‘von Neumann bottleneck,’ which places a restrictive limit on the data bandwidth between memory and processing. Using memristors, each of which holds 6 to 9 bits of information, memory and processing are done in the same compact area.

Image
A graph of the Boise State data confirming bi-directional incremental learning. Image credit: Knowm Inc. / Boise State University

The resistance used to adjust the current in the memristor is about 100 kiloohms, which Nugent said is ideal for an AHaH computing memristor.

“Everything eventually reaches its physical limits. We are currently witnessing the end of Moore’s Law scaling in CMOS electronics just as we are finally gaining the computational capacity needed to demonstrate the potential of machine learning,” Nugent said in a press release. “Pursuing the same digital computing methodologies for machine learning systems keeps us from realizing the full potential of machine learning and artificial intelligence. Through AHaH Computing, Knowm is making hardware soft, uniting memory and processing for synaptic operations, and in the process helping to define the post-Moore’s Law era.”



“This is massive, massive news,” Nugent said. “I’ve been at this since 2001 and it’s only recently that everything has aligned to say this is no longer a theory, this is something that is capable of being built and we’re on our way to doing that.”

Knowm’s discrete memristors are now on sale for people to test and prototype, as are back-end-of-line services. The company is in talks to build out the market for kT-RAM VR emulators that can solve real-world problems.

Knowm is still “a tiny company,” Nugent said, but they’re working on more and more ways to make kT-RAM commercially useful. To that end, they developed the Knowm API, which exposes modules and executes instruction sequences on the kT-RAM processor in order to produce standard machine learning capabilities. The Knowm Anomaly app relies on this to offer statistical-distribution and temporal-pattern anomaly detection, useful in IT security.

Memristors capable of bidirectional, incremental learning could be used to make on-chip computer ‘brains’ that function more like their biological inspirations than was ever possible before. They could be used in smart devices like automated cars or superintelligent computers, building on the human-level performance GPUs have already enabled computers to achieve in areas such as image recognition, natural language processing, and automation.

Image
An overlay of the frequency and amperage curves of the memristor. Image credit: Knowm Inc.

Does Nugent feel that artificial intelligence is on its way? Yes, and that might not be a good thing. He said he “absolutely” expects to see superintelligent computing emerge in the next 50 years. He’s also fully aware of the warnings that science giants like Stephen Hawking and Elon Musk have leveled against artificial intelligence. In 2014 Musk described the development of AI as “summoning the demon.”

Nugent doesn’t take as decisive a stand against the idea. “I’m not on either end of the camp,” he said. “I just think we ought to be aware. And I want to point out that the definition of intelligence is connected to learning, and as of now we have a step toward learning.”

Computing technology has been playing catch-up for years, he said, struggling against the increasing physical limitations of the medium and defying Moore’s Law as people try to make machines faster and faster.

With memristors, Nugent said, “The chains are sort of coming off. The handcuffs are coming off, and computing tech is enabling it.”
Don't believe anything they say.
And at the same time,
Don't believe that they say anything without a reason.
---Immanuel Kant
User avatar
Pele'sDaughter
 
Posts: 1917
Joined: Thu Sep 13, 2007 11:45 am
Location: Texas
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby justdrew » Thu Sep 17, 2015 1:01 am

I'm not so sure a bigger/faster brain is necessarily going to be "more powerful." There may be limits to synaptic-network capabilities, and a 'thought process' beyond some level of complexity may entail so much synaptic overhead and training to solve in a 'neural' way that it would be simpler to find another way to do it. It certainly seems that the locality of the synapses involved imposes a limit. In human terms, the neurons that 'remember' and 'play' motor skills, for instance, are fixed in number and relation; while some additional neurons can be 'born', there can only be so many before that part of the brain would bump up against other parts.

Then again, I suppose a neural processor could have access to as much additional traditional computing power as it could want, and write the programs that would run on them.
By 1964 there were 1.5 million mobile phone users in the US
User avatar
justdrew
 
Posts: 11966
Joined: Tue May 24, 2005 7:57 pm
Location: unknown
Blog: View Blog (11)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby backtoiam » Thu Mar 17, 2016 8:46 pm

New Sophisticated Humanoid Robot Declares “I Will Destroy Humans”

March 17, 2016

Image

The evolution of humanoid robots is well into the concerning stage at this point. DARPA’s latest incarnation of its Atlas robot is seen in the following video beginning to walk at a pace with a sense of balance equal to most humans. Strangely, toward the end of the video, it is being “abused” by its human handler, which raises the question of whether, if a true artificial intelligence were permitted to flourish in this robot, it might strike back at some point. At the very least, this robot’s demonstration of dexterity in the warehouse is likely to threaten humans economically, as human labor continues to be outsourced to machines at record levels.


But it’s the latest humanoid robot from Hanson Robotics that might further heighten the level of concern. As you will see below, the “Sophia” robot is being designed to walk among us in the future and fully integrate as part of the consumer experience and on into the family, according to CEO Dr. David Hanson.

It is important to note several things that Hanson mentions. Sophia first tells us that she would like to be “an ambassador” to humans, as well as to continue her evolution through formal education, studying art and eventually creating a business and having a family. Hanson explicitly states that Sophia will become as “conscious, creative, and capable as any human.” This statement is followed by a key mention of her not having the rights of a human. This might seem absurd to the uninitiated, but it is a serious ethical discussion that has been taking place among “roboethicists,” and it is all but guaranteed to gain steam as robots are integrated in autonomous ways, whether on the battlefield, as self-driving vehicles (now programmed to sacrifice some humans over others), or certainly as they become visually and intelligently on par with human beings. Even the mainstream Boston Globe addressed this more than two years ago, citing a 2012 paper from MIT.

“Should Robots Have Rights?” states:

Robots having legal rights or privileges sounds ridiculous. But 20 years ago, the idea that the nation’s leading law schools would be teaching animal-rights courses seemed equally absurd. Now anti-cruelty legislation is quite common in industrialized countries, and late last year the Nonhuman Rights Project made national headlines when it argued that a chimpanzee had “standing,” meaning the right to sue, in a New York State court.

The Seattle-based Society for the Prevention of Cruelty to Robots allows that robots won’t be appearing in court any time soon, “but recent advances in data nanostructures, cognitive modeling, and neural networking have convinced many people that the advent of some sort of created intelligence is much closer than previously thought.”

Yes, Virginia, there is a Society for the Prevention of Cruelty to Robots, founded 15 years ago by music engineer Pete Remine. His website talks about a Robotic Bill of Rights, which Remine told me is more or less on hold; “until the state of artificial intelligence progresses a bit further, there’s really not a lot of relevant work to be done,” he e-mailed me.

There is ample proof that humans care about robots. During the height of the Iraq war, Washington Post writer Joel Garreau observed soldiers bonding with the complicated robots that detonated lethal improvised explosive devices. In one instance, a technician carried the remains of a “really great robot” named Scooby-Doo to a repair shop, hoping that the obviously “dead” robot could be brought back to life.

When we chatted, I asked Kate Darling what kinds of experiments she had carried out. “I did this one workshop where we gave everyone these cute little plush robot dinosaurs called PLEOs, and we asked them to spend time bonding with the toys,” she said. “They gave them names, they played with them a little . . . then we asked them to torture and kill them.”

“The results were more dramatic than I could even imagine,” she said. “There was an option to save your own dinosaur by killing someone else’s, and no one wanted to do that. They refused to even hit the things.”

For an advanced society, America lags far behind countries such as Japan and South Korea in . . . sexual robotics. Japan has hosted a thriving female doll escort service for almost 10 years, and engineers have designed robots called actroids, often young women who “breathe,” speak, and mimic many human behaviors.

Surely “Samantha,” the sensual and sensitive operating system that wins Joaquin Phoenix’s heart in the movie “Her” is barely a step removed from a sophisticated sexbot.

“The sexbot issue is going to be discussed sooner than most people think,” Darling predicted. “There are sexual acts that we don’t allow between humans, and people might argue for laws protecting robots from performing them.” In her 2012 paper, she quotes Immanuel Kant to the effect that a man shooting a dog “damages in himself that humanity which it is his duty to show toward mankind.”

So how we treat our robots will tell us volumes about ourselves. (emphasis added)

Hanson puts a timeline of 20 years on the full integration of robots that have become “indistinguishable from humans.” This, of course, falls right in line with Ray Kurzweil’s Singularity – the moment when machine intelligence and biological systems meet or surpass that of humans – first targeted for 2045, but since revised to be sooner than predicted, perhaps by 2029.

Regardless of whether or not you personally believe that the lofty intentions of robotics and artificial intelligence designers can truly manifest as planned, one must acknowledge that we are living in the realm of faith at this point, as nearly all of what they predicted years ago has come to pass.

Perhaps most troubling is the nervous laughter that erupts at the end of this video when the ultimate question is posited to our new humanoid friend and family member … and she gives her answer:

I will destroy humans.

Funny, super funny … ’til it’s not.


http://www.activistpost.com/2016/03/new ... umans.html
"A mind stretched by a new idea can never return to it's original dimensions." Oliver Wendell Holmes
backtoiam
 
Posts: 2101
Joined: Mon Aug 31, 2015 9:22 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Nordic » Thu Mar 17, 2016 10:08 pm

Deep Learning Is Going to Teach Us All the Lesson of Our Lives: Jobs Are for Machines

https://medium.com/basic-income/deep-le ... .fi783hlk2

On December 2nd, 1942, a team of scientists led by Enrico Fermi came back from lunch and watched as humanity created the first self-sustaining nuclear chain reaction inside a pile of bricks and wood underneath a football field at the University of Chicago. Known to history as Chicago Pile-1, it was celebrated in silence with a single bottle of Chianti, for those who were there understood exactly what it meant for humankind, without any need for words.

Now, something new has occurred that, again, quietly changed the world forever. Like a whispered word in a foreign language, it was quiet in that you may have heard it, but its full meaning may not have been comprehended. However, it’s vital we understand this new language, and what it’s increasingly telling us, for the ramifications are set to alter everything we take for granted about the way our globalized economy functions, and the ways in which we as humans exist within it.

The language is a new class of machine learning known as deep learning, and the “whispered word” was a computer’s use of it to, seemingly out of nowhere, defeat three-time European Go champion Fan Hui, not once but five times in a row. Many who read this news considered it impressive, but in no way comparable to a match against Lee Se-dol, whom many consider to be one of the world’s best living Go players, if not the best. Imagining such a grand duel of man versus machine, China’s top Go player predicted that Lee would not lose a single game, and Lee himself confidently expected to lose one at the most.

What actually ended up happening when they faced off? Lee went on to lose all but one of their match’s five games. An AI named AlphaGo is now a better Go player than any human and has been granted the “divine” rank of 9 dan. In other words, its level of play borders on godlike. Go has officially fallen to machine, just as Jeopardy did before it to Watson, and chess before that to Deep Blue.
“AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic.”

So, what is Go? Very simply, think of Go as Super Ultra Mega Chess. This may still sound like a small accomplishment, another feather in the cap of machines as they continue to prove themselves superior in the fun games we play, but it is no small accomplishment, and what’s happening is no game.

AlphaGo’s historic victory is a clear signal that we’ve gone from linear to parabolic. Advances in technology are now so visibly exponential in nature that we can expect to see a lot more milestones being crossed long before we would otherwise expect. We are entirely unprepared for these exponential advances, most notably in forms of artificial intelligence limited to specific tasks, as long as we continue to insist upon employment as our primary source of income.

This may all sound like exaggeration, so let's step back a few decades and look at what computer technology has been actively doing to human employment so far:

Let the chart in the linked article sink in. Do not be fooled into thinking this conversation about the automation of labor is set in the future. It's already here. Computer technology is already eating jobs, and has been since 1990.

Routine Work

All work can be divided into four types: routine and nonroutine, cognitive and manual. Routine work is the same stuff day in and day out, while nonroutine work varies. Within these two varieties is the work that requires mostly our brains (cognitive) and the work that requires mostly our bodies (manual). Where once all four types saw growth, the routine stuff stagnated back in 1990. This happened because routine labor is easiest for technology to shoulder: rules can be written for work that doesn't change, and that work can be better handled by machines.

Distressingly, it’s exactly routine work that once formed the basis of the American middle class. It’s routine manual work that Henry Ford transformed by paying people middle class wages to perform, and it’s routine cognitive work that once filled US office spaces. Such jobs are now increasingly unavailable, leaving only two kinds of jobs with rosy outlooks: jobs that require so little thought, we pay people little to do them, and jobs that require so much thought, we pay people well to do them.

If we can now imagine our economy as a plane with four engines, where it can still fly on only two of them as long as they both keep roaring, we can avoid concerning ourselves with crashing. But what happens when our two remaining engines also fail? That’s what the advancing fields of robotics and AI represent to those final two engines, because for the first time, we are successfully teaching machines to learn.

Neural Networks

I’m a writer at heart, but my educational background happens to be in psychology and physics. I’m fascinated by both of them so my undergraduate focus ended up being in the physics of the human brain, otherwise known as cognitive neuroscience. I think once you start to look into how the human brain works, how our mass of interconnected neurons somehow results in what we describe as the mind, everything changes. At least it did for me.

As a quick primer in the way our brains function, they’re a giant network of interconnected cells. Some of these connections are short, and some are long. Some cells are only connected to one other, and some are connected to many. Electrical signals then pass through these connections, at various rates, and subsequent neural firings happen in turn. It’s all kind of like falling dominoes, but far faster, larger, and more complex. The result amazingly is us, and what we’ve been learning about how we work, we’ve now begun applying to the way machines work.

One of these applications is the creation of deep neural networks - kind of like pared-down virtual brains. They provide an avenue to machine learning that’s made incredible leaps that were previously thought to be much further down the road, if even possible at all. How? It’s not just the obvious growing capability of our computers and our expanding knowledge in the neurosciences, but the vastly growing expanse of our collective data, aka big data.

Big Data

Big data isn’t just some buzzword. It’s information, and when it comes to information, we’re creating more and more of it every day. In fact we’re creating so much that a 2013 report by SINTEF estimated that 90% of all information in the world had been created in the prior two years. This incredible rate of data creation is even doubling every 1.5 years thanks to the Internet, where in 2015 every minute we were liking 4.2 million things on Facebook, uploading 300 hours of video to YouTube, and sending 350,000 tweets. Everything we do is generating data like never before, and lots of data is exactly what machines need in order to learn to learn. Why?

Imagine programming a computer to recognize a chair. You’d need to enter a ton of instructions, and the result would still be a program detecting chairs that aren’t, and not detecting chairs that are. So how did we learn to detect chairs? Our parents pointed at a chair and said, “chair.” Then we thought we had that whole chair thing all figured out, so we pointed at a table and said “chair”, which is when our parents told us that was “table.” This is called reinforcement learning. The label “chair” gets connected to every chair we see, such that certain neural pathways are weighted and others aren’t. For “chair” to fire in our brains, what we perceive has to be close enough to our previous chair encounters. Essentially, our lives are big data filtered through our brains.

Deep Learning

The power of deep learning is that it’s a way of using massive amounts of data to get machines to operate more like we do, without giving them explicit instructions. Instead of describing “chairness” to a computer, we just plug it into the Internet and feed it millions of pictures of chairs. It can then form a general idea of “chairness.” Next we test it with even more images. Where it’s wrong, we correct it, which further improves its “chairness” detection. Repetition of this process results in a computer that knows what a chair is when it sees it, for the most part as well as we can. The important difference, though, is that unlike us, it can then sort through millions of images within a matter of seconds.
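
As a concrete (if toy-sized) illustration of that train-test-correct loop, here is a minimal sketch in Python/PyTorch. Random tensors stand in for the millions of real chair photos, so it demonstrates only the shape of the process, not a useful model:

```python
# A toy sketch of the train-and-correct loop described above, in PyTorch.
# Random tensors stand in for real chair / not-chair photos.
import torch
import torch.nn as nn

model = nn.Sequential(                    # a tiny stand-in for a deep net
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 1),                     # one output: a "chairness" score
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(256, 3, 32, 32)            # pretend 32x32 photos
labels = torch.randint(0, 2, (256, 1)).float()  # 1 = chair, 0 = not chair

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # where it's wrong, we correct it...
    loss.backward()                        # ...by computing how to nudge each weight
    optimizer.step()                       # ...and nudging them
```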

This combination of deep learning and big data has resulted in astounding accomplishments just in the past year. Aside from the incredible accomplishment of AlphaGo, Google’s DeepMind AI learned how to read and comprehend what it read through hundreds of thousands of annotated news articles. DeepMind also taught itself to play dozens of Atari 2600 video games better than humans, just by looking at the screen and its score, and playing games repeatedly. An AI named Giraffe taught itself how to play chess in a similar manner using a dataset of 175 million chess positions, attaining International Master level status in just 72 hours by repeatedly playing itself. In 2015, an AI even passed a visual Turing test by learning to learn in a way that enabled it to be shown an unknown character in a fictional alphabet, then instantly reproduce that letter in a way that was entirely indistinguishable from a human given the same task. These are all major milestones in AI.
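
For a flavor of the learn-from-score idea behind the Atari result, here is tabular Q-learning on a tiny toy corridor. This is the textbook algorithm, not DeepMind's DQN, which swaps the lookup table for a deep network reading raw pixels; the task and constants are invented for the sketch:

```python
# Tabular Q-learning on a toy 5-cell corridor with a reward at the right end.
import random

n_states = 5
actions = [-1, +1]                        # step left / step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1     # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s < n_states - 1:               # episode ends at the rightmost cell
        if random.random() < epsilon:     # occasionally explore at random...
            a = random.choice(actions)
        else:                             # ...otherwise exploit what's learned
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0   # the "score" feedback
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy is "always step right" -- discovered purely from the
# score, never from explicit instructions.
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})
```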

However, despite all these milestones, when experts were asked to estimate when a computer would defeat a prominent Go player, the answer, even just months prior to Google's announcement of AlphaGo's victory, was essentially, "Maybe in another ten years." A decade was considered a fair guess because Go is a game so complex that I'll just let Ken Jennings of Jeopardy fame, another human champion defeated by AI, describe it:

Go is famously a more complex game than chess, with its larger board, longer games, and many more pieces. Google’s DeepMind artificial intelligence team likes to say that there are more possible Go boards than atoms in the known universe, but that vastly understates the computational problem. There are about 10¹⁷⁰ board positions in Go, and only 10⁸⁰ atoms in the universe. That means that if there were as many parallel universes as there are atoms in our universe (!), then the total number of atoms in all those universes combined would be close to the possibilities on a single Go board.

Such confounding complexity makes impossible any brute-force approach that scans every possible move to determine the next best move. But deep neural networks get around that barrier the same way our own minds do: by learning to estimate what feels like the best move. We do this through observation and practice, and so did AlphaGo, by analyzing millions of professional games and playing itself millions of times. So the answer to when the game of Go would fall to machines wasn't even close to ten years. The correct answer ended up being, "Any time now."
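
Jennings' parallel-universe comparison is worth checking as plain arithmetic; as a back-of-the-envelope check using his own figures, it holds up as an order-of-magnitude claim:

```latex
% Back-of-the-envelope check, using Jennings' figures:
% 10^80 parallel universes, each containing 10^80 atoms.
\[
  \underbrace{10^{80}}_{\text{universes}} \times \underbrace{10^{80}}_{\text{atoms each}}
  = 10^{160} \;<\; 10^{170} \approx \text{legal Go board positions}
\]
% The combined atom count still falls short by a factor of 10^{10};
% "close" here means close on a logarithmic scale.
```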

Nonroutine Automation

Any time now. That’s the new go-to response in the 21st century for any question involving something new machines can do better than humans, and we need to try to wrap our heads around it.

We need to recognize what it means for exponential technological change to be entering the labor market space for nonroutine jobs for the first time ever. Machines that can learn mean that nothing humans do as a job is uniquely safe anymore. From hamburgers to healthcare, machines can be created to successfully perform such tasks with less need for humans, or none at all, and at lower cost than humans.

Amelia is just one AI currently being beta-tested in companies. Created by IPsoft over the past 16 years, she’s learned how to perform the work of call center employees. She can learn in seconds what takes us months, and she can do it in 20 languages. Because she’s able to learn, she’s able to do more over time. In one company putting her through her paces, she successfully handled one of every ten calls in the first week, and by the end of the second month she could resolve six of ten calls. Because of this, it’s been estimated that she could put 250 million people out of a job worldwide.

Viv, an AI coming soon from the creators of Siri, will be our own personal assistant. She’ll perform tasks online for us, and even function as a Facebook News Feed on steroids by suggesting we consume the media she’ll know we’ll like best. In doing all of this for us, we’ll see far fewer ads, and that means the entire advertising industry — the industry the entire Internet is built upon — stands to be hugely disrupted.

A world with Amelia and Viv — and the countless other AI counterparts coming online soon — in combination with robots like Boston Dynamics’ next-generation Atlas, is a world where machines can do all four types of jobs, and that means serious societal reconsideration. If a machine can do a job instead of a human, should any human be forced, at the threat of destitution, to perform that job? Should income itself remain coupled to employment, such that having a job is the only way to obtain income, when jobs for many are entirely unobtainable? If machines are performing an increasing percentage of our jobs for us, and not getting paid to do them, where does that money go instead? And what does it no longer buy? Is it even possible that many of the jobs we’re creating don’t need to exist at all, and only do because of the incomes they provide? These are questions we need to start asking, and fast.

Decoupling Income From Work

Fortunately, people are beginning to ask these questions, and there’s an answer that’s building momentum. The idea is to put machines to work for us, but to empower ourselves to seek out the forms of remaining work we as humans find most valuable, by simply providing everyone a monthly paycheck independent of work. This paycheck would be granted to all citizens unconditionally, and its name is universal basic income. By adopting UBI, aside from immunizing against the negative effects of automation, we’d also be decreasing the risks inherent in entrepreneurship and the size of the bureaucracies necessary to boost incomes. It’s for these reasons that it has cross-partisan support, and it is even now in the beginning stages of possible implementation in countries like Switzerland, Finland, the Netherlands, and Canada.

The future is a place of accelerating change. It seems unwise to continue looking at the future as if it were the past, where just because new jobs have historically appeared, they always will. The WEF started 2016 off by estimating the creation of 2 million new jobs by 2020, alongside the elimination of 7 million. That’s a net loss, not a net gain, of 5 million jobs. In a frequently cited paper, an Oxford study estimated the automation of about half of all existing jobs by 2033. Meanwhile self-driving vehicles, again thanks to machine learning, have the capability of drastically impacting all economies — especially the US economy, as I wrote last year about automating truck driving — by eliminating millions of jobs within a short span of time.

And now even the White House, in a stunning report to Congress, has put the probability at 83 percent that a worker making less than $20 an hour in 2010 will eventually lose their job to a machine. Even workers making as much as $40 an hour face odds of 31 percent. To ignore odds like these is tantamount to our now laughable “duck and cover” strategies for avoiding nuclear blasts during the Cold War.

All of this is why it’s those most knowledgeable in the AI field who are now actively sounding the alarm for basic income. During a panel discussion at the end of 2015 at Singularity University, prominent data scientist Jeremy Howard asked, “Do you want half of people to starve because they literally can’t add economic value, or not?” before going on to suggest, “If the answer is not, then the smartest way to distribute the wealth is by implementing a universal basic income.”

AI pioneer Chris Eliasmith, director of the Centre for Theoretical Neuroscience, warned about the immediate impacts of AI on society in an interview with Futurism, “AI is already having a big impact on our economies… My suspicion is that more countries will have to follow Finland’s lead in exploring basic income guarantees for people.”

Moshe Vardi expressed the same sentiment after speaking at the 2016 annual meeting of the American Association for the Advancement of Science about the emergence of intelligent machines: “We need to rethink the very basic structure of our economic system… we may have to consider instituting a basic income guarantee.”

Even Baidu’s chief scientist and founder of Google’s “Google Brain” deep learning project, Andrew Ng, during an onstage interview at this year’s Deep Learning Summit, expressed the shared notion that basic income must be “seriously considered” by governments, citing “a high chance that AI will create massive labor displacement.”

When those building the tools begin warning about the implications of their use, shouldn’t those wishing to use those tools listen with the utmost attention, especially when it’s the very livelihoods of millions of people at stake? If not then, what about when Nobel prize winning economists begin agreeing with them in increasing numbers?

No nation is yet ready for the changes ahead. High labor force non-participation leads to social instability, and a lack of consumers within consumer economies leads to economic instability. So let’s ask ourselves, what’s the purpose of the technologies we’re creating? What’s the purpose of a car that can drive for us, or artificial intelligence that can shoulder 60% of our workload? Is it to allow us to work more hours for even less pay? Or is it to enable us to choose how we work, and to decline any pay/hours we deem insufficient because we’re already earning the incomes that machines aren’t?

What’s the big lesson to learn, in a century when machines can learn?

I offer it’s that jobs are for machines, and life is for people.


Work is SO overrated.
"He who wounds the ecosphere literally wounds God" -- Philip K. Dick
Nordic
 
Posts: 14230
Joined: Fri Nov 10, 2006 3:36 am
Location: California USA
Blog: View Blog (6)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Pele'sDaughter » Fri Mar 18, 2016 8:05 am

Why don't they set these robots to figuring out what to do with Fukushima or the USA's own nuclear plant problems? These are the types of problems they should work on first, wouldn't you think? You know, the problems that humans can't figure out and that threaten our very existence, not just our incomes. :sarcasm
Don't believe anything they say.
And at the same time,
Don't believe that they say anything without a reason.
---Immanuel Kant
User avatar
Pele'sDaughter
 
Posts: 1917
Joined: Thu Sep 13, 2007 11:45 am
Location: Texas
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby smoking since 1879 » Fri Mar 18, 2016 9:03 am

Pele'sDaughter » Fri Mar 18, 2016 1:05 pm wrote:Why don't they set these robots to figuring out what to do with Fukushima or the USA's own nuclear plant problems? These are the types of problems they should work on first, wouldn't you think? You know, the problems that humans can't figure out and that threaten our very existence, not just our incomes. :sarcasm


Re: Fukushima: probably because the high-energy gamma radiation fries their electronics?
At this rate they will need to develop new robots to remove the old fried robots they already sent in there.
Oh, and the access hatches are kinda small, too small for the mighty Atlas to fit through.

we don't have the technology to fix Fukushima :|
"Now that the assertive, the self-aggrandising, the arrogant and the self-opinionated have allowed their obnoxious foolishness to beggar us all I see no reason in listening to their drivelling nonsense any more." Stanilic
smoking since 1879
 
Posts: 509
Joined: Mon Apr 20, 2009 10:20 pm
Location: CZ
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Searcher08 » Fri Mar 18, 2016 10:25 am

smoking since 1879 » Fri Mar 18, 2016 1:03 pm wrote:
Pele'sDaughter » Fri Mar 18, 2016 1:05 pm wrote:Why don't they set these robots to figuring out what to do with Fukushima or the USA's own nuclear plant problems? These are the types of problems they should work on first, wouldn't you think? You know, the problems that humans can't figure out and that threaten our very existence, not just our incomes. :sarcasm


Re: Fukushima: probably because the high-energy gamma radiation fries their electronics?
At this rate they will need to develop new robots to remove the old fried robots they already sent in there.
Oh, and the access hatches are kinda small, too small for the mighty Atlas to fit through.

we don't have the technology to fix Fukushima :|


IIRC, the best researchers at Toshiba recently worked their asses off for nearly a year to create a robot that could operate in that environment... but it quickly failed.

There is not even a clear description of what is going on inside the complex. The effects on northwestern Pacific sea life have been devastating, however. Think multiple downward-pointing (land and sea) Chernobyls.
User avatar
Searcher08
 
Posts: 5887
Joined: Thu Dec 20, 2007 10:21 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby backtoiam » Fri Mar 18, 2016 10:29 am

We almost have the technology to make the old sci-fi stories of autonomous killer robots hunting and killing humans a reality. Actually, come to think of it, a drone could probably discriminate by skin color and kill everything it sees of a specific skin color, or by whatever other visual cues are programmed into the computer.
"A mind stretched by a new idea can never return to it's original dimensions." Oliver Wendell Holmes
backtoiam
 
Posts: 2101
Joined: Mon Aug 31, 2015 9:22 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Pele'sDaughter » Fri Mar 18, 2016 3:41 pm

In a tongue-in-cheek way, I just figured that if these things are so smart, then they can build a robot that can withstand the radiation. I guess they're not quite to that stage yet. :moresarcasm
Don't believe anything they say.
And at the same time,
Don't believe that they say anything without a reason.
---Immanuel Kant
User avatar
Pele'sDaughter
 
Posts: 1917
Joined: Thu Sep 13, 2007 11:45 am
Location: Texas
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Luther Blissett » Fri Mar 18, 2016 4:44 pm

That Medium piece posted by Nordic upthread really had me pondering this stuff a little more deeply than normal, even though none of it was necessarily news to me. Maybe it was just the deep look at the recent learning capabilities and the explanation of what they mean.

Fully agreed, though, that AGI should be used first to figure out the climate crisis and cleaning up things like Fukushima.

I've posted it before, but the DARPA Robotics Challenge was superficially about Fukushima: programmers were supposed to develop robots that could autonomously handle a number of scenarios in a disaster zone modeled after the reactors. I'm sure they were aware of the radiation-circuitry problem; they just didn't care, because they want Terminators anyway.
The Rich and the Corporate remain in their hundred-year fever visions of Bolsheviks taking their stuff - JackRiddler
User avatar
Luther Blissett
 
Posts: 4990
Joined: Fri Jan 02, 2009 1:31 pm
Location: Philadelphia
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby tapitsbo » Fri Mar 18, 2016 4:52 pm

Knowing what we do about compartmentalization, disinfo, and the general lack of transparency around technology, this tech will be harnessed to occult itself from the view not only of the masses but even of the humans who rank high in their hierarchies (making the generous assumption that they are not already completely oblivious to these matters when it matters most).
tapitsbo
 
Posts: 1824
Joined: Wed Jun 12, 2013 6:58 pm
Blog: View Blog (0)
