In this activity you will take a look at the Eliza project and make some determinations about what made the program so attractive, keeping its context in mind. When was it first released, and what was the state of technology at the time? Then flash forward to the present and try to determine what effect the media ecosystem we currently live in could have on its use. (Understanding, of course, that we have not yet gotten into this in detail, but you surely have some notions about the current status of media in our lives… plus you may wish to come back to this activity later in the term to update your thinking.)
Some hints are provided along the way (think MIT… think Alice… think Tynker, etc., from your previous courses).
Think what it would be like if you were able to write a program for Voki that would integrate the scripting of Eliza with Voki’s visual interface. Again it is understood that we may be getting ahead of ourselves with the visual interface (which is covered in more detail in the next cycle). But because we are continually using iterative thinking in this course, you can go back at the end of the next cycle and update your ideas about all of this after you learn more about the visual ecosystem.
NOTE: we are NOT asking you to evaluate/assess Eliza in terms of artificial intelligence… that is the subject of another course (EME 6645, to be exact). What we ARE asking is that you evaluate/assess her in terms of her original text-based interface and what can be said about the text ecosystem… what is the value-added effect of this kind of interaction, and is a graphical interface always required to make something believable?
Rogerian Logic/Argument
First here is a short video introduction:
But what we are discussing here is really a derivative of the Rogerian Argument and how it filtered its way into the therapy field… this derivative became known as Rogerian Rhetoric (perfect, seeing as we are discussing text rhetoric in this cycle!)
Another derivation of this concept (also developed by Rogers) was referred to as Person-centered therapy (PCT)
(also known as person-centered psychotherapy, person-centered counseling, client-centered therapy, and Rogerian psychotherapy). PCT is a form of talk psychotherapy developed by psychologist Carl Rogers in the 1940s and 1950s. This type of therapy diverged from the traditional model of the therapist as expert and moved instead toward a nondirective, empathic approach that empowers and motivates the client in the therapeutic process. The therapy is based on Rogers’s belief that every human being strives for, and has the capacity to fulfill, his or her own potential. Person-centered therapy, also known as Rogerian therapy, has had a tremendous impact on the field of psychotherapy and many other disciplines. Rather than viewing people as inherently flawed, with problematic behaviors and thoughts that require treatment, person-centered therapy holds that each person has the capacity and desire for personal growth and change. Rogers termed this natural human inclination the “actualizing tendency,” or self-actualization.
Eliza Background
ELIZA is a computer program and an early example of primitive natural language processing. ELIZA operated by processing users’ responses to scripts, the most famous of which was DOCTOR, a simulation of a Rogerian psychotherapist. Using almost no information about human thought or emotion, DOCTOR sometimes provided a startlingly human-like interaction.
Eliza was the creation of Joseph Weizenbaum, an early pioneer in computer science and one of the few to join the original MIT Artificial Intelligence Lab in the early 1960s. ELIZA is built on very simple pattern recognition, scripted according to a stimulus-response model.
When the “patient” exceeded the very small knowledge base, DOCTOR might provide a generic response, for example, responding to “My head hurts” with “Why do you say your head hurts?” A possible response to “My mother hates me” would be “Who else in your family hates you?” ELIZA was implemented using simple pattern matching techniques, but was taken seriously by several of its users, even after Weizenbaum explained to them how it worked.
Apparently, Weizenbaum was shocked by the experience of releasing ELIZA (also known as “Doctor”) to the nontechnical staff at the MIT AI Lab. Secretaries and nontechnical administrative staff thought the machine was a “real” therapist, and spent hours revealing their personal problems to the program. When Weizenbaum informed his secretary that he, of course, had access to the logs of all the conversations, she reacted with outrage at this invasion of her privacy. This and similar incidents drove home to Weizenbaum how easily such a simple program could deceive a naïve user into revealing personal information.
Weizenbaum perceived his program as a threat. This is a rare experience in the history of computer science. Now it is hard to imagine anyone coming up with an original idea for a software program and saying, “no, this program is a dangerous genie and needs to be put back into the bottle.” His first reaction was to shut down the early ELIZA program. His second reaction was to write a book about the whole experience, eventually published in 1976 as Computer Power and Human Reason.
Weizenbaum perceived his mission as partly to educate an uninformed public about computers. Presumably the uneducated public confused science fiction with reality. Thus most of Computer Power is devoted to explaining how a computer works: this is a disk drive, this is memory, this is a logic gate, and so on. In 1976 such a primer may have been necessary for the public, but today it might seem like content for Computers for Dummies.
Most contemporary researchers did not need much convincing that ELIZA was at best a gimmick, at worst a hoax, and in any case not a “serious” artificial intelligence project. The irony of Joseph Weizenbaum and Computer Power and Human Reason is that, by failing to promote his own technology, indeed by encouraging his own critics, he successfully blocked further investigation into what would prove to be one of the most promising and persistently interesting demonstrations to emerge from the early AI Lab.
So Let’s meet Eliza:
Now, try this version and see if you notice any differences:
The point is that Eliza is the product of direct scripting: the feedback given in response to input was predetermined and pre-set, just like the dialogue coded into early video games.
Eliza Meets Alice
Chatbots are the current extensions of Alice. A chatbot is a computer program designed to simulate conversation with human users, especially over the Internet. They are built using natural language processing, and the technology is getting so good that commercial ventures and customer service organizations are using chatbots as a first line of support. Again, it is our intent to have you evaluate the interactivity only in the context of its text-based interactions (TUI versus GUI?)
Scripting Behind Eliza
The link in this section title is a look at the script behind one version of Eliza. Notice how the script parses input for possible sentences and questions: it is full of if statements, with a default retort that bounces the question back at the user whenever an unexpected statement or question is made. This is the direct correlate of the Rogerian argument, and it is what makes the interactions possible and seem real.
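To make that mechanism concrete, here is a minimal, hypothetical sketch (not the actual ELIZA script) of this kind of stimulus-response scripting in Python. It is an illustration under simplified assumptions: each rule pairs a pattern with response templates that reuse the captured words, a small reflection table turns “me” into “you,” and a default Rogerian retort catches anything unexpected.

```python
import random
import re

# Pronoun reflection: the captured phrase is echoed back from the
# "therapist's" point of view ("me" becomes "you", and so on).
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

def reflect(phrase: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word)
                    for word in phrase.split())

# Each rule: a pattern to match against the input, plus one or more
# response templates that reuse the captured (and reflected) groups.
RULES = [
    (re.compile(r"my (.+) hurts", re.IGNORECASE),
     ["Why do you say your {0} hurts?"]),
    (re.compile(r"my (mother|father|sister|brother) (.+)", re.IGNORECASE),
     ["Who else in your family {1}?", "Tell me more about your {0}."]),
    (re.compile(r"i feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?"]),
]

# Default retorts bounce the conversation back when nothing matches.
DEFAULTS = ["Please go on.", "Why do you say that?",
            "How does that make you feel?"]

def respond(statement: str) -> str:
    """Return a scripted reply: first matching rule wins, else a default."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return random.choice(DEFAULTS)
```

With these rules, “My head hurts” yields “Why do you say your head hurts?”, and “My mother hates me” yields either “Who else in your family hates you?” or “Tell me more about your mother.” — the same stimulus-response behavior described above, with no understanding of meaning at all.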
In summary, here is what you are being asked to do:
- Read the information about Eliza and the theory of Rogerian logic and actually try the different versions of the product.
- Note your impressions about the program and how believable it was/is (in the context of the psychological aspects of its theoretical underpinnings), and try to place yourself in the time period in which it was introduced. How is it that some folks thought she was real and could not tell the difference between this scripted program and a real psychological analyst? Remember, all of this was relatively successful without graphical support… simply text responses… what functionality did this program add beyond reading static text? Several blind studies were conducted at the time in which participants were broken into two groups, half interfacing with Eliza and half interacting with a real person behind a curtain… and a significant number of people could not tell which was which. In today’s environment this may be less likely to happen but, again, please place yourself in the environment/times when computers were very young and not at all prevalent… no PCs, no Macs, no mobile devices… only mainframe computers…
- Now fast forward to the present… can Eliza still be as effective as she was considered to be back then? Why or why not? Does text still carry any weight as an effective communication medium? What is it about text that provides its power? Its weakness?
Post your responses in the drop box in Canvas.
In a later cycle we will revisit this… While we do not have the ability (or the API) to have you actually do this, imagine for a moment that you are able to combine Voki’s graphic and voice interface with the scripting of Eliza. What would that add? Would it make her a more or less powerful agent, and more believable? Why or why not? Are there any other programs out there that might work better? Can you locate any projects that actually tried this? If so, what were the results? What is your thinking about this, and what power is added with a visual interface? Again, in each subsequent cycle we can revisit our answers and place them into the appropriate chapter in your notebook as we go along. At the end of the semester you will have a complete notebook that demonstrates your iterative thinking on these subjects and your evolving opinions about each ecosystem… that is the power of the final notebook.