http://en.wikipedia.org/wiki/Chatterbot
ELIZA's key method of operation (copied by chatbot designers ever since) involves the recognition of cue words or phrases in the input, and the output of corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[4] Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate, because human judges are so ready to give the benefit of the doubt when conversational responses are capable of being interpreted as "intelligent". Thus the key technique here—which characterises a program as a chatbot rather than as a serious natural language processing system—is the production of responses that are sufficiently vague and non-specific that they can be understood as "intelligent" in a wide range of conversational contexts. The emphasis is typically on vagueness and unclarity, rather than any conveying of genuine information.
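The cue-word technique described above is simple enough to sketch in a few lines. The rules and canned responses below are illustrative assumptions, not Weizenbaum's original script; the point is just the mechanism: scan the input for a keyword, emit a matching pre-prepared reply, and fall back to deliberately vague prompts when nothing matches.

```python
import random

# Illustrative cue-word rules (hypothetical, not the original ELIZA script).
# Each cue maps to one or more pre-programmed responses.
RULES = [
    ("mother", ["Tell me more about your family."]),
    ("i feel", ["Why do you feel that way?"]),
]

# Deliberately vague fallbacks -- interpretable as "intelligent"
# in almost any conversational context, as the excerpt notes.
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(user_input: str) -> str:
    """Return a canned response for the first cue found, else a vague fallback."""
    text = user_input.lower()
    for cue, responses in RULES:
        if cue in text:
            return random.choice(responses)
    return random.choice(FALLBACKS)

print(respond("My mother is always worried about me."))
# -> "Tell me more about your family."
```

Note that no parsing or understanding occurs: a bare substring test triggers the reply, which is exactly why the illusion is "merely superficial."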
Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods".
http://www.filfre.net/2011/06/eliza-part-3/
Perhaps the first person to interact extensively with Eliza was Weizenbaum’s secretary: “My secretary, who had watched me work on the program for many months and therefore surely knew it to be merely a computer program, started conversing with it. After only a few interchanges with it, she asked me to leave the room.” Her reaction was not unusual; Eliza became something of a sensation at MIT and the other university campuses to which it spread, and Weizenbaum an unlikely minor celebrity. Mostly people just wanted to talk with Eliza, to experience this rare bit of approachable fun in a mid-1960s computing world that was all Business (IBM) or Quirky Esoterica (the DEC hackers). Some, however, treated the program with a seriousness that seems a bit baffling today.
coffin_dodger wrote: I agree with the age-old trope that any AI capable of cognitive reasoning and imbued with any morality at all will pretty quickly come to the conclusion that humanity needs to change. Quite how far it will go to achieve this end is down to the inhibitors programmed in at conception. Let there be inhibitors.
tazmic » Mon Jul 08, 2013 12:08 pm wrote: Any sufficiently advanced AI capable of cognitive reasoning and imbued with morality will be indistinguishable from the insane.*
Wombaticus Rex » Mon Jul 08, 2013 5:35 pm wrote: tazmic » Mon Jul 08, 2013 12:08 pm wrote: Any sufficiently advanced AI capable of cognitive reasoning and imbued with morality will be indistinguishable from the insane.*
Well said and a key point.
How are humans to evaluate an intelligence that vastly outstrips their own? (Good thing we've spent so much time talking about aliens here, huh?)
Worth noting that we don't have any reliable means of "inhibiting" our existing institutions from paving slaughterhouses full of good intentions, and worth considering that some of the most atrocious second-order effects were wholly unexpected by primates who are just getting used to thinking in systems. We have no working prototypes for this "inhibitor" concept.
How are we to replace a God we never understood? A: Accidentally, as dark LULZ ensue.
http://www.artificialbrains.com/google
http://www.achievement.org/autodoc/page/pag0int-3
"Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the Web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on." - Larry Page, October 2000
http://www.pbs.org/newshour/bb/business ... 11-29.html
"HAL had a lot of information, could piece it together, could rationalize it. Hopefully it would never have a bug like HAL did where he killed the occupants of the spaceship. But that [level of artificial intelligence] is what we're striving for, and I think we've made it a part of the way there." - Sergey Brin, November 2002