The Suchman article is phenomenally interesting in terms of the thought processes of humans and machines. The dialectic between goal-oriented and process-oriented methods of achieving an end result certainly teases out problems with artificial intelligence, and even the question of why we need it at all. The process-oriented, logical model that machines follow to accomplish a goal clearly lends itself to the logical process model of computer code, and it certainly makes it difficult for humans and computers to interact. We "think" in different ways.

The ELIZA experiment is an interesting example of this. Though it passes the Turing test in some ways (if all goes well answering her questions), it is really nothing more than a linguistic game. As Suchman says, the game merely poses somewhat rhetorical questions, giving the illusion of intelligence; if you think about the answers you're receiving, though, ELIZA never really says anything intelligent. For instance, I told ELIZA I was tired, and after a bit of "conversation" she eventually asked why I thought that was. I replied that it was because I wasn't sleeping, and she answered, "Hmm, that's interesting. What are your thoughts on that?" I said that maybe I should sleep more, to which ELIZA replied, "You aren't sure?" She thus makes you question yourself and come up with your own answer, regardless of the input. Certainly interesting, but more of a linguistic game than intelligence.
Mark Shepard defined sentience as hearing and feeling something, but not necessarily knowing anything about that thing. ELIZA, to me, is a case in point. The program has some generic responses to certain inputs, but it has no idea that its responses are influencing your input, which is exactly how the programmer designed the program to work. ELIZA herself doesn't understand any of that; she simply has responses prescribed for certain types of input.
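To see just how shallow the trick is, here is a minimal sketch in Python of the kind of keyword-and-template matching that drives an ELIZA-style exchange like the one above. The patterns and replies are invented for illustration; this is not Weizenbaum's actual script, just the general shape of the idea.

```python
import random
import re

# Hypothetical, simplified rules in the spirit of ELIZA's script.
# Each pattern maps to canned replies; "{0}" echoes back a fragment
# of the user's own words, which is where the "illusion" comes from.
RULES = [
    (r"i am (.*)", ["Why do you say you are {0}?",
                    "How long have you been {0}?"]),
    (r"because (.*)", ["Hmm, that's interesting. What are your thoughts on that?"]),
    (r"maybe (.*)", ["You aren't sure?"]),
    (r"i (.*)", ["Why do you think that is?"]),
]

# Content-free fallbacks for when no keyword matches.
DEFAULTS = ["Please go on.", "Tell me more."]

def respond(user_input: str) -> str:
    """Return a canned reply by matching the first rule that fits."""
    text = user_input.lower().strip(".!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I am tired"))                 # e.g. "Why do you say you are tired?"
print(respond("Because I wasn't sleeping"))  # "Hmm, that's interesting. ..."
print(respond("Maybe I should sleep more"))  # "You aren't sure?"
```

Swapping in different templates changes the program's "personality" without adding an ounce of understanding; the rules never know what "tired" or "sleeping" mean.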
The idea of situated action or response is more along the lines of how humans think, which is intelligence, not sentience. However, we don't even truly understand how the human mind works, so it's clear that, for now, our primitive, logic-based programming languages couldn't simulate a human thought…yet. But I question why machines need artificial intelligence. Surely sentience is enough to serve and entertain us? Beyond experimentation, why are we trying to recreate ourselves (other than to better understand ourselves)? Certainly there is a slew of concerns involving intelligent machines and our relationship to them (and their relationship to us).