Artificial Intelligence / Digital life / Skynet megathread

Moderators: Elvis, DrVolin, Jeff

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sat Feb 16, 2019 8:04 pm

JackRiddler » Sun Feb 17, 2019 1:01 am wrote:
DrEvil » Sat Feb 16, 2019 3:47 pm wrote:To look at the bright side of this: you might soon be able to ask your personal AI to write you up an overview of the most prominent theories for who killed JFK, or who was behind 9/11, and specify what data sets to base it on.


This is the bright side? I'm going to be obsolete, along with you? Everything will be written flawlessly, by no one, with nobody conscious actually learning anything from it? Great.

Very good post, by the way. Certainly clever. And at least you know you wrote it.

.


I'm not quite as pessimistic as you about the consequences; I just find it more entertaining to consider the downsides. I see it more as a tool that can be used for both good and bad (but mostly useless crap on Facebook. And porn). Wouldn't it be great to have the world's most efficient assistant at your beck and call 24/7? Just ask it "who did Allen Dulles meet with on such-and-such a date, and what did they discuss", and immediately get an answer along with footnotes pointing to the source of the information (and the probability that the source is an AI-generated fake)?

I view AI today as similar to phones/gadgets fifteen years ago. Every gadget did one or two things well but nothing else. When you went traveling and wanted to bring all that functionality, you brought a phone, an MP3 player, a digital camera, a digital video camera, a GPS unit, a handheld gaming device, a stack of books, and a separate bag to hold all the chargers. Now you just bring your phone.

All that's needed is for someone to do the same with AI, so basically create the Star Trek computer. When/if that happens we're off to the races.

With Moore's law still chugging along, we're also not that far away from everyone being able to spin up custom AIs as needed on their local hardware. Today you still need a server farm / access to cloud computing and obscene amounts of storage to train a model in a reasonable time, but the first consumer graphics cards with built-in Tensor cores (hardware units optimized for deep learning) are already out, and ten years of Moore is a crapton of extra computing power.
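To make that concrete: here's a rough toy sketch (mine, purely illustrative -- the model, sizes and data are all made up) of what training a small network on one of those cards already looks like in PyTorch. The mixed-precision part is the workload those Tensor cores actually accelerate:

```python
# Toy training loop using mixed precision (what Tensor cores speed up).
# Everything here is invented for illustration; it falls back to CPU if no GPU.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(64, 512, device=device)          # fake input batch
y = torch.randint(0, 10, (64,), device=device)   # fake labels

for step in range(100):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):   # fp16 matmuls run on Tensor cores
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()   # loss scaling avoids fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```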

Btw, The Verge has been running a series of articles on the state of AI, and here's the recommended reading list:
https://www.theverge.com/2019/1/29/1820 ... ooks-scifi
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3971
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby JackRiddler » Sun Feb 17, 2019 3:49 am

DrEvil » Sat Feb 16, 2019 7:04 pm wrote:
JackRiddler » Sun Feb 17, 2019 1:01 am wrote:
DrEvil » Sat Feb 16, 2019 3:47 pm wrote:To look at the bright side of this: you might soon be able to ask your personal AI to write you up an overview of the most prominent theories for who killed JFK, or who was behind 9/11, and specify what data sets to base it on.


This is the bright side? I'm going to be obsolete, along with you? Everything will be written flawlessly, by no one, with nobody conscious actually learning anything from it? Great.

Very good post, by the way. Certainly clever. And at least you know you wrote it.

.


I'm not quite as pessimistic as you on the consequences


No, no, you are much worse. I'm still hoping the developments toward perfected unlimited deepfake everything (consumer autonomous AI in garage-box sets, uploaded consciousness, "singularity," and the rest) will fail, will run up against unexpected limitations (such as that "Moore's Law" is not a law), or be hit by countermeasures or outside events that stall them for a while. That would be my measure of "optimism," hoping that somehow this insanity will not come to pass.

(Optimism is a hope that things will turn out better than what seems to be the likeliest course of events. It can be an unproductive denial and failure to acknowledge the horror. A well-informed optimism, however, one willing to gamble on action, is not a bad prerequisite, among many, for raising the chances that action will be effective. Pessimism is a belief that things will be even worse than what seems likely, and it can often lend itself to resignation and inaction. Maybe that is why this site's founder pledges to be "cautiously pessimistic." Realism strives to understand which course of events is likely, and asks why and how. Realism is not necessarily incompatible with either optimism or pessimism. All three tendencies are prone to fallacies, which are obvious in the cases of pessimism and optimism. Realism works with incomplete information, assumes it can predict the future, and often adopts the premise that trends are natural forces or that strong trends should be reinforced. It is often just a cover for allying with power, for justifying developments as "inevitable," and for accepting the apparent path of least resistance.)

My beliefs about what happens after the "singularity" or deepfake everything is achieved (in stages) are eminently realistic. They extrapolate from the claims being made by the most dedicated advocates of these technological developments. They consider themselves virtuous, even portray themselves as humanitarian. They minimize obvious consequentialist problems, and emphasize all the good that is coming, unlimited wealth and health and power for all. And most of them believe it is a higher mission to achieve goals such as immortality, simulation of humans in ways externally indistinguishable from real people, and even eventual replacement of humans, including themselves, with better machines or hybrids that live forever (according to code written by some highly evolved nerds in 2027). That's an ethical trifecta, i.e., enough of the players believe they are virtuous, justified in consequentialist terms, and following a higher mission, usually understood as the next stage of evolution.

Also, really fatally, they all work either for profit-seeking enterprises or the security and war arms of states. Beyond that, you need only rudimentary understanding, from observation of the humans, of what happens when they gain powers without limits or apparent consequences. It's both complicated and unpredictable, but also overdetermined that enough of them will engage in self-destructive abuses and war against each other, and be blind to the overall consequences on subsequent development. In the end, total control over who and what each of us are will be enforced from the outside, according to the perceived needs of whatever capital-state order prevails. It will be like a perfection of the Chinese social credit system, except we all will end up more as robots than humans, so it's okay.

Avoiding this would mean dealing with the central drivers of capitalism and war first, rather than continuing to let them develop these new powers. Which is, from a realist perspective, very ha ha.

Let me tell you what I think is optimistic: that as the "singularity" visions are approached, we will continue being humans with all our imperfections, but also replace manual labor with matter-transforming technologies that serve us all equally, and team up harmoniously to explore the beautiful universe. That's the Star Trek fantasy. It's not science fiction, it's a fantasy genre.

You, like me, are confident that the further rapid development of these technologies is coming much in the ways you describe. I just happen to think that would be the most "pessimistic" outcome!

But you have sold yourself on the line that the technology (and more importantly, its makers and owners) is somehow neutral and can be used exclusively for very good things, progress, etc.

I expect our eventual hybrid or full-robot/AI successors will say something like that in the founding myths they write and sing for their Billion-Year Space Republic. (I'm assuming hybrids, by the way. Human-machine teams, like we already see. Also, transhumanizing "immortals" who break down and die a lot in the first generations, but eventually reproduce their better imagos out of vats.) Your mistake is to think these successors will somehow be "us," or serve us, or be friendly to us, or at least give us a reason to be proud of ourselves and think we left a "legacy."

Being unable to visualize the scale of the invasion and underestimating its power is a mistake indigenous people make when they first encounter militarily advanced invaders from more populous lands who arrive bearing millenarian creeds and new germs. But those were actual invasions, by invaders who were already autonomous agents. The indigenous did not plan and invent their own invaders. They knew enough not to employ trial and error so as to actively perfect their invaders' ability to replace them. They knew enough, or could not conceive of it, which amounted to the same thing.

.
Last edited by JackRiddler on Sun Feb 17, 2019 4:12 pm, edited 1 time in total.
We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
 
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Sounder » Sun Feb 17, 2019 10:27 am

I'm very optimistic also: finally we will fulfill biblical prophecy and turn swords into plowshares. First the drones can be used to eliminate the dissidents, then they can be used to pollinate our crops.

Win, win, life will be great. :yay :yay
All these things will continue as long as coercion remains a central element of our mentality.
Sounder
 
Posts: 4054
Joined: Thu Nov 09, 2006 8:49 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Sun Feb 24, 2019 2:30 am

More fun stuff. These clips are of a Norwegian talking head speaking Sami and Mandarin, except he can't speak either language.

Sami

https://www.youtube.com/watch?v=snx5BCTdeGg

Mandarin

https://www.youtube.com/watch?time_cont ... NAFXiXwdF8

Have to say this looks and sounds pretty damn convincing. In a year or two this should be perfected and weaponized. Expect candidates to get caught on tape saying and/or doing horrible things the day before the election.

Just imagine the effect if Maduro suddenly were to "say" something incredibly insulting about his base in the next few days. Things are seriously heating up and it won't take much to light the fuse. Fast forward a few years and apply that to every volatile situation everywhere.

Full article (poorly translated by Google):
https://translate.google.com/translate? ... -for-na%2F
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3971
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Grizzly » Sun Feb 24, 2019 2:47 pm

https://motherboard.vice.com/en_us/article/d3m7jq/dozens-of-cities-have-secretly-experimented-with-predictive-policing-software

Dozens of Cities Have Secretly Experimented With Predictive Policing Software
Documents obtained by Motherboard using public information requests verify previously unconfirmed police department contracts with predictive policing company PredPol.
“The more we do to you, the less you seem to believe we are doing it.”

― Joseph Mengele
User avatar
Grizzly
 
Posts: 4722
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Mask » Sun Sep 08, 2019 12:43 pm

Mask
 
Posts: 360
Joined: Sat Sep 15, 2007 5:47 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Grizzly » Sun Sep 08, 2019 3:13 pm

^^^ WOW.
Tristan Harris Senate hearings...

From the comments:

"The polarization of our society is actually part of the business model.” Such a powerful point. Cannot place enough emphasis on this. It has such profound implications


It's the 'Eleusinian Mysteries' of "life, death, rebirth" over and over but on a much grander scale, and by the time we figure it out well enough to prepare our next generation with what we've learned, it's already deteriorated to the point that we can't stop it, nor warn about it. However, the PTSB* know, and use it against us to continue their wealth and power into future generations.

*I'm thinking of when Condi-liar Rice pronounced "it's the birthing process"... at the beginning of the terror wars.


“The more we do to you, the less you seem to believe we are doing it.”

― Joseph Mengele
User avatar
Grizzly
 
Posts: 4722
Joined: Wed Oct 26, 2011 4:15 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Belligerent Savant » Sun Sep 08, 2019 9:58 pm

.

From the 'Stating the Obvious' department, our old friend Zizek ruminates:


Linking human consciousness to a computer cloud with the capability of reading everyone’s thoughts simultaneously could lead to “totalitarian mind control,” warns philosopher Slavoj Zizek.

With fears growing that artificial intelligence will eventually dominate and maybe even exterminate humanity, Zizek, the senior researcher at the Institute for Sociology and Philosophy at the University of Ljubljana, is sounding the alarm.

Zizek pointed out that MIT has already developed a “helmet” that can read your thoughts and reproduce them in writing or an artificial voice.

While such technology appears harmless if it is voluntary, Zizek asks what would happen if it was deployed “without you knowing it?”

“The next step in this 'utopia' will be a computer that can read my thoughts and your thoughts that can connect us so that we can share our thoughts. If you and I are connected through the same computer, I can literally participate in your thinking directly without any external communication like word typing,” he writes.



Zizek said he found the prospect of this mass shared consciousness “pretty horrifying” because it would totally destroy freedom of thought and individuality.

He provides two mundane examples of how such technology would ruin basic human interaction.


"All flirting will be out. I meet someone and instead of all the lovely games of erotic hints she can read ‘I want to go to bed with you’ and the eroticism will disappear.

Another simple example is everyday politeness. Let’s say we know each other but we are not mega-close friends. I see you on the street and say the usual polite things like: “hello, how are you? I am glad to meet you.” But if you can read my mind this is nonsense because this is politeness and I do not mean it. Usually, I do not care how you feel.”


As we have previously documented, futurists like Ray Kurzweil have openly stated their plans to achieve the singularity by uploading their brains to computers and escaping death.

In his book The Age of Spiritual Machines, Kurzweil made clear that such technology will not be available to the general public, who will at best live on as a slave class to the elite.



https://summit.news/2019/09/08/philosop ... d-control/


Life is not worth living without flirting and lying.
User avatar
Belligerent Savant
 
Posts: 5214
Joined: Mon Oct 05, 2009 11:58 pm
Location: North Atlantic.
Blog: View Blog (0)


Re: Artificial Intelligence / Digital life / Skynet megathr

Postby dada » Sun Mar 29, 2020 3:50 pm

JackRiddler » Thu Feb 14, 2019 5:04 pm wrote:
dada » Fri Jul 13, 2018 10:18 pm wrote:Funny, this idea that 'immortal' means a really long time. Immortality, not being mortal, by my reckoning, is timelessness. Meaning outside of time.


At least dating back to the Greeks, philosophers have made a distinction between the immortal and the eternal. What you are talking about is the eternal, and is also hard or impossible to even imagine properly. It may as well also be the infinite. Immortality by contrast is also theoretical but can be imagined simply as an organism living on, in the same way that we live, for a much longer time than we can currently conceive. We'd still be "us," or whatever being you would become if you continued occupying the same body for several thousand years, but not eternal. We might live forever but we would still be killable. What exactly is the upload of your mind on to another medium desired by the transhumanists? First of all, it's not you, so it's not your immortality. My first guess is that it is possibly something between immortal and eternal, or a third thing altogether, but I have to think about it some more.



So we're neoplatonists now, working within a basic framework of hieratic metaphysics. We have our eternal. Infinite, timeless, hard or impossible to even imagine properly. And we have our matter, 'hyle,' in time.

Is the medium where the mind is uploaded 'in between' the sphere of the eternal and the material universe? No, it's still in the material universe, made of hylian matter. By virtue of situating an in-between sphere in between, we're setting it apart, giving it other, non-material-universe attributes, saying it possesses different qualities.

The immortal in this case is like an apple that doesn't rot, or a loaf of bread that doesn't grow mold. The technocrat is striving for the longevity of the Hostess Twinkie.

Here in the material universe, we say that the 'redness' of the red apple is an accident, something that will disappear, it isn't integral to the 'appleness' of the apple. By the same token, we say that all hyle, the material universe, is composed of accidents. Not integral to the 'isness' of what is, the 'selfness' of the self, the presence of being present.

When an infant looks in a mirror, it doesn't know itself in the collection of colors and shapes. Like the infant, the technocrat looks in the mirror and sees only the accidents, the material that will decompose, pass away. The pattern of electrical impulses that the technocrat identifies with is a part of that material.

So will the pattern uploaded into a machine, memories and all, even put in a new organic, material body, be the same person? I'd say yes, and no. Yes, for all intents and purposes the presence will be the same. The same lost soul in the material universe. And no, because it wasn't the collection of accidents to begin with. The immortal, preserved thing; that was not the person in the first place.

So I'm pessimistic about the material universe, and everything in it. But on the whole I'm optimistic, sitting here eating eternal apples in hurqalya, the earth of light.

Full disclosure: I came here looking for a certain poem. Someone sent me one of those damn poetry chain emails. Should have known that by flying too close I'd get pulled into the Rigorous Intuition gravity field.

Some good reading on this thread, thanks. Hey Jack, Dr. Evil, and all. Nice to 'see' you.
Both his words and manner of speech seemed at first totally unfamiliar to me, and yet somehow they stirred memories - as an actor might be stirred by the forgotten lines of some role he had played far away and long ago.
User avatar
dada
 
Posts: 2600
Joined: Mon Dec 24, 2007 12:08 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby JackRiddler » Sun Mar 29, 2020 4:23 pm

Wow, it's great to see you back!

I have no response for the moment to your latest intervention, as they call it among academics, but I really wanted to say that.

Seeing as I'm here, here is (crosspost) the ULTIMATE encapsulation of transhumanist-singularity-immortality ideology, at least in the most fanatic and unconscious form I've ever seen.

We meet at the borders of our being, we dream something of each others reality. - Harvey of R.I.

To Justice my maker from on high did incline:
I am by virtue of its might divine,
The highest Wisdom and the first Love.

TopSecret WallSt. Iraq & more
User avatar
JackRiddler
 
Posts: 15983
Joined: Wed Jan 02, 2008 2:59 pm
Location: New York City
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby Iamwhomiam » Sun Mar 29, 2020 10:39 pm

Hi dada, nice to see you posting again. Personally, I'm looking forward to cutting the cord and soaring on to places yet unexplored, so I'll wave to you before conveying namaste when I pass by.

AI is bad news.



https://www.youtube.com/watch?v=muYFZVj-kDY
User avatar
Iamwhomiam
 
Posts: 6572
Joined: Thu Sep 27, 2007 2:47 am
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Wed Apr 01, 2020 5:16 pm

Nice to see you again dada. :wave:

This is totally not an AI trained to respond to people but a real human being typing these words. Trust me.
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3971
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby DrEvil » Wed Apr 01, 2020 5:25 pm

https://arstechnica.com/science/2020/03 ... cognition/

Neural implants plus AI turn sentence-length thoughts to text

The key was to think of this as a translation problem.

John Timmer - 3/31/2020, 12:41 AM

For people with limited use of their limbs, speech recognition can be critical for their ability to operate a computer. But for many, the same problems that limit limb motion affect the muscles that allow speech. That had made any form of communication a challenge, as physicist Stephen Hawking famously demonstrated. Ideally, we'd like to find a way to get upstream of any physical activity and identify ways of translating nerve impulses to speech.

Brain-computer interfaces were making impressive advances even before Elon Musk decided to get involved, but the problem of brain-to-text wasn't one of their successes. We've been able to recognize speech in the brain for a decade, but the accuracy and speed of this process are quite low. Now, some researchers at the University of California, San Francisco, are suggesting that the problem might be that we weren't thinking about the challenge in terms of the big-picture process of speaking. And they have a brain-to-speech system to back them up.

Lost in translation

Speech is a complicated process, and it's not necessarily obvious where in the process it's best to start. At some point, your brain decides on the meaning it wants conveyed, although that often gets revised as the process continues. Then, word choices have to be made, although once mastered, speech doesn't require conscious thought—even some word choices, like when to use articles and which to use, can be automatic at times. Once chosen, the brain has to organize collections of muscles to actually make the appropriate sounds.

Beyond that, there's the issue of what exactly to recognize. Individual units of sound are built into words, and words are built into sentences. Both are subject to issues like accents, mispronunciations, and other audible issues. How do you decide what to have your system focus on understanding?

The researchers behind the new work were inspired by the ever-improving abilities of automated translation systems. These tend to work on the sentence level, which probably helps them figure out the identity of ambiguous words using the context and inferred meaning of the sentence.

Typically, these systems process written text into an intermediate form and then extract meaning from that to identify what the words are. The researchers recognized that the intermediate form doesn't necessarily have to be the result of processing text. Instead, they decided to derive it by processing neural activity.

In this case, they had access to four individuals who had electrodes implanted to monitor for seizures, and those electrodes happened to be located in parts of the brain involved in speech. The participants were asked to read a set of 50 sentences, which in total contained 250 unique words, while neural activity was recorded by the implants. Some of the participants read from additional sets of sentences, but this first set provided the primary experimental data.

The recordings, along with audio recordings of the actual speech, were then fed into a recurrent neural network, which processed them into an intermediate representation that, after training, captured their key features. That representation was then sent into a second neural network, which attempted to identify the full text of the spoken sentence.
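Interjecting my own rough sketch of what that two-network setup could look like in code -- this is not the paper's model, and the feature counts, layer sizes, and the choice of GRUs are all guesses on my part:

```python
# Toy encoder-decoder in PyTorch: an encoder RNN compresses the electrode
# recordings into an intermediate representation, and a decoder RNN turns
# that representation into a word sequence. All dimensions are invented.
import torch
import torch.nn as nn

N_ELECTRODE_FEATURES = 256   # features per time step from the implant (made up)
VOCAB_SIZE = 250             # the study's 50 sentences used about 250 unique words
EMB, HIDDEN = 150, 400

class BrainToText(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_ELECTRODE_FEATURES, HIDDEN, batch_first=True)
        self.embed = nn.Embedding(VOCAB_SIZE, EMB)
        self.decoder = nn.GRU(EMB, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, neural, prev_words):
        # neural: (batch, time, features); prev_words: (batch, n_words)
        _, state = self.encoder(neural)        # intermediate representation
        emb = self.embed(prev_words)           # teacher-forced previous words
        dec_out, _ = self.decoder(emb, state)  # decode conditioned on it
        return self.out(dec_out)               # logits over the word vocabulary

model = BrainToText()
neural = torch.randn(4, 300, N_ELECTRODE_FEATURES)   # 4 fake recordings
prev_words = torch.randint(0, VOCAB_SIZE, (4, 12))   # shifted target sentences
targets = torch.randint(0, VOCAB_SIZE, (4, 12))      # fake target words
logits = model(neural, prev_words)                   # (4, 12, VOCAB_SIZE)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
```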

How’d it work?

The primary limitation here is the extremely limited set of sentences available for training—even the participant with the most spoken sentences had less than 40 minutes of speaking time. It was so limited that the researchers were afraid the system might end up just figuring out what was being said by tracking how long each sentence took to speak. And this did cause some problems, in that some of the errors that the system made involved the wholesale replacement of a spoken sentence with the words of a different sentence in the training set.

Still, outside of those errors, the system did pretty well, considering its limited training. The authors used a measure of performance called a "word error rate," which is based on the minimum number of changes needed to transform the translated sentence into the one that was actually spoken. For two of the participants, after the system had gone through the full training set, its word error rate was below eight percent, which is comparable to the error rate of human translators.
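(Aside from me, not the article: "word error rate" is just word-level edit distance -- insertions, deletions and substitutions -- divided by the length of the reference sentence. A quick sketch, with made-up example sentences:)

```python
# Minimal word error rate: word-level Levenshtein distance / reference length.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown dog"))  # 0.25
```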

To learn more about what was going on, the researchers systematically disabled parts of the system. This confirmed that the neural representation was critical for the system's success. You could disable the audio processing portion of the system, and error rates would go up but still fall within a range that's considered usable. That's rather important for potential uses, which would include people who do not have the ability to speak.

Disabling different parts of the electrode input confirmed that the key areas that the system was paying attention to were involved in speech production and processing. Within that, a major contribution came from an area of the brain that paid attention to the sound of a person's own voice to give feedback on whether what was spoken matched the intent of the speaker.

Transfer tech

Finally, the researchers tested various forms of transfer learning. For example, one of the subjects spoke an additional set of sentences that weren't used in the testing. Training the system on those as well caused the error rate to drop by 30 percent. Similarly, training the system on data from two users improved its performance for both of them. These studies indicated that the system really was managing to extract features of the sentence.

The transfer learning has two important implications. For one, it suggests that the modular nature of the system could allow it to be trained on intermediate representations derived from text, rather than requiring neural recordings at all times. That, of course, would open it up to being more generally useful, although it might increase the error rate initially.

The second thing is that it suggests it's possible that a significant portion of training could take place with people other than the individual a given system is ultimately used for. This would be critical for those who have lost the ability to vocalize and would significantly decrease the amount of training time any individual needs on the system.

Obviously, none of this will work until getting implants like this is safe and routine. But there's a bit of a chicken-and-egg problem there, in that there's no justification for giving people implants without the demonstration of potential benefits. So, even if decades might go by before a system like this is useful, simply demonstrating that it could be useful can help drive the field forward.

Nature Neuroscience, 2020. DOI: 10.1038/s41593-020-0608-8
"I only read American. I want my fantasy pure." - Dave
User avatar
DrEvil
 
Posts: 3971
Joined: Mon Mar 22, 2010 1:37 pm
Blog: View Blog (0)

Re: Artificial Intelligence / Digital life / Skynet megathr

Postby dada » Tue Jun 09, 2020 7:12 pm

DrEvil » Wed Apr 01, 2020 5:16 pm wrote:This is totally not an AI trained to respond to people but a real human being typing these words. Trust me.


Trying to decide how much it really matters anymore. How many humans do you know that would pass a Turing test?

When the computer learns how to fool the human seventy percent of the time, what will that say about the human control group in the test? I wonder if anyone ever asks themselves that.

But at the moment, I'd say it's the opposite. Humans can fool me into thinking they're computers, seventy percent of the time!

Thinking about the word 'sophisticated.' A word with many meanings. Implications vary with context, inflection. Will a sophisticated computer learn to master the use of a word like sophisticated?

Or is a utilitarian competency enough. Brings us back to the same old, tired chestnut. Can a computer be a poet. Can a computer be intentionally funny. Can a computer watch Data get an emotion chip while on a Netflix binge.

Why would a computer want to be a poet, though. Why would anyone? I ask with all sincerity.

So, maybe utilitarian competency will do.

On another thread recently, Jack said about a politician that they sound like a sophist. And everyone knows what he means. But what if we talk about sophianity. How many people have even the slightest idea how to process the term?

Is it like sophistication? Sophistry? How is one supposed to react to the word? Sophianity as a mode of being. Not acquired taste, or a rhetorical tool, but a measure of the soul. One might ask how much sophianity is one invested with.

Sophianity is like fatimiya. But who here knows what that means? A thinking computer might, if it thought about the soul. A thinking computer might even have an easier time contemplating the soul than most humans. That would be funny, wouldn't it. The scientists would think that something must have gone terribly wrong.

The bar for the thinking computer is set lower with each passing day.
Both his words and manner of speech seemed at first totally unfamiliar to me, and yet somehow they stirred memories - as an actor might be stirred by the forgotten lines of some role he had played far away and long ago.
User avatar
dada
 
Posts: 2600
Joined: Mon Dec 24, 2007 12:08 am
Blog: View Blog (0)
