ARC 597 | On Speed Situated Technologies Intellectual Domain Seminar, Fall 2014

The contrast between the European navigator and the Trukese navigator is an interesting example and metaphor. The primary difference is that the European constructs a full plan of action to reach his destination: he notes the speeds at which he must travel, where to turn, and when. The Trukese only knows his destination and makes all other decisions in situ, the reasoning being that you can never plan perfectly, there are always changes, and it is more important to focus on reacting to those changes. This is where the European approach is flawed: even with a solid plan, the plan must change as unpredicted occurrences arise during travel. This is relevant, and important, to the conversation about artificial intelligence because it is easy to build a robot and program it to walk to a destination, but it will not be able to reach that destination if something gets in its way. An artificially intelligent bot, however, would be able to react to the spontaneous obstruction.

In the Suchman reading, she describes two different methods of thinking when it comes to artificial intelligence. She relates them to the different methods of navigation practiced by European and Trukese navigators. The Europeans created a detailed plan and sailed according to it; every move was meant to keep them on course. The Trukese navigators sailed with a destination and no specific plan, simply responding to all of the factors that crossed their paths. These two separate ways of thinking describe the ways that programmers have been creating artificial intelligence. Suchman argues that A.I. should be based on responsive behavior and have the ability to respond to any situation rather than follow a plan. She refers to this as “situated actions”: having a response that is particular to a specific situation, not an automatic pre-planned one. The idea is that whatever “plan” you program cannot account for everything, so the system must instead be able to react. She goes on to say that in order for A.I. to be successful, it should be able to respond and be clearly understood by humans. It should also be reactive in the way humans are reactive in conversation or interaction. In a sense, it should have an intention and be able to communicate that intention.
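To make the contrast concrete, here is a minimal sketch (my own illustration, not anything from Suchman’s text) of a “planned” agent versus a “situated” agent trying to reach a goal on a small grid. The grid, obstacle, and agent names are all invented for the example; the point is only that the first agent executes a precomputed route and fails when the world changes, while the second re-decides every step from what it currently encounters.

```python
# A hypothetical sketch contrasting Suchman's two navigators (my own
# illustration, not code from the reading). The grid, obstacle, and agent
# names are invented for the example.

def planned_agent(start, goal, obstacles):
    """'European' style: walk a route computed in advance and never reconsider it."""
    x, y = start
    plan = [(1, 0)] * (goal[0] - start[0])   # the whole route, fixed up front
    for dx, dy in plan:
        nxt = (x + dx, y + dy)
        if nxt in obstacles:
            return f"stuck at {(x, y)}"      # the plan has no answer for this
        x, y = nxt
    return f"arrived at {(x, y)}"

def situated_agent(start, goal, obstacles):
    """'Trukese' style: know only the destination, choose each move in situ."""
    x, y = start
    while (x, y) != goal:
        # prefer whichever step closes the gap, otherwise sidestep what's in the way
        dx = (goal[0] > x) - (goal[0] < x)
        dy = (goal[1] > y) - (goal[1] < y)
        for nxt in [(x + dx, y), (x, y + dy), (x, y + 1), (x, y - 1)]:
            if nxt != (x, y) and nxt not in obstacles:
                x, y = nxt
                break
    return f"arrived at {(x, y)}"

if __name__ == "__main__":
    blocked = {(3, 0)}
    print(planned_agent((0, 0), (5, 0), blocked))   # stuck at (2, 0)
    print(situated_agent((0, 0), (5, 0), blocked))  # arrived at (5, 0)
```

Run as written, the planned agent reports being stuck at the obstacle while the situated agent arrives, which is essentially the robot example described above.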


In his article, Debord talks about the idea of a “constructed situation” and explains what is necessary to purposefully orchestrate interactions between people, places, and events. Those who intentionally express experimental awareness are called presituationists, and they are the opposite of the passive spectators who are “forced into action” simply by unintentionally reacting to the environment that the directors have created for them. In these instances, the presituationists act like the butlers/textual computers that McCullough describes, which “anticipate some of our needs before we do, and must carry out some of our business without our needing to know about it…” Debord, however, wants us to be aware of what is necessary for technological advancement and what merely caters to playful tendencies. I found it funny that both Debord and McCullough mention refrigerators in their texts. Debord writes that “We obviously have no intent in encouraging the continuous artistic renovation of refrigerator designs,” but that is exactly what we are seeing today. Not only are the layouts of fridges changing to become more stylish (and arguably more functional), but they are now fitted with ice and water dispensers, digital temperature and humidity gauges, and interactive touch screens that let us watch a cooking show or read a recipe right from the refrigerator screen. We no longer see fridges as the “cold, dark places to store your food” that McCullough describes. Now even the fridge can bring about some ridiculous delight. In the “Neo-Jetson,” “try it and see” age of rapidly advancing technology, we are now seeing just how far we can push technology into as many things as possible, whether or not it should be there.

In her preface, Lucy Suchman discusses the differences between European and Trukese ways of navigation. The European makes a plan of action and charts it out. The Trukese navigator uses an objective-based approach and responds to the conditions that arise. These ways of thinking are relevant to the way that AI designers were, and are, trying to create AI.

“Children have a tendency, for example, to attribute life to physical objects on the basis of behavior such as autonomous motion…” which describes how easily observers come to read autonomous movement as a sign of life, much as they did with the early automata discussed below.

It is noted on page 602 that human-like automata have been constructed since Hellenic times, roughly the late fourth century BC through the first century BC. These statues were said to “move, gesture, spoke, and generally were imbued by observers” with human qualities. They may have been simple and mechanical rather than electrical, but they are the basics of automata. Suchman argues that to achieve such automata there must be a disconnect between reasoning and intelligence. She finishes the section with the sentence “state-of-the-art in intelligent machines has yet to attain the basic cognitive abilities of the normal five-year-old child.” The preface shows that the article was written over ten years ago, and I feel there may have been a bit of improvement since then with the increases in computing power.

The idea of human-computer interaction, at its most basic, can be described as: what you put into it is what you get out. I am talking about the most basic interaction with a computer, meaning programming, not an OS setup, because adding an OS adds layers of language to the computer. This layering leads to what Dennett describes: “it is part of our inability to see inside each other’s head, or our mutual opacity.” It is also the inability to see what the OS is doing, because the end user did not program it (in most cases).

As technology becomes more and more embedded in the social and cultural fabric, it’s hard not to immediately think about what amounts of data (personal and otherwise) become available and who is benefiting from them. McCullough invokes the Orwellian Big Brother around the idea of surveillance and information collection by, say, a political party or a governing regime. In situated technologies, complete networks of data-collecting devices are commonplace nowadays; from GPS trackers to surveillance cameras, we are being recorded (knowingly and at the same time unknowingly) at all times.

In the section on navigation, Suchman argues that objective- versus plan-oriented thinking influences the way people act, using the example of Trukese and European navigation. It is the most fascinating part to me. There are actually thousands of methodologies (or logics) among people; from this point of view, it can be said that it is impossible to make a computer or artificial intelligence fully imitate a real person.

Furthermore, the “alive versus not alive” and “machine versus person” distinctions become unnecessary. From my perspective, the relation between “machine” and “person,” or “alive” and “not alive,” is not one of opposites; they are more like poles of a spectrum. We can always say that the Siri on my phone is more like a real person than the voice on a National Grid service call, because at least Siri can search Google for me when I say something she doesn’t understand.

So how can we measure the “ability to be like a real human”? I think Malcolm McCullough’s article gives us some clues: categorizing the situation and the corresponding action. However, this can be a paradox: when we try to define what a real person is, we are ignoring the diversity of people. Is being like a European or a Trukese more human?

The other point is: why do we need a computer to think and act like a real person? Technology has already proven that it knows more than we really do; I believe we should look for what is hidden in the great amount of information technology provides, to learn more about the real world that we have already rebuilt through technology. For instance, the Trash Track project in Toward the Sentient City reveals the paths of people’s daily trash. People create those commodities and never know where they go after being discarded. Hence, in my opinion, we have already created enough things (including another version of ourselves in the phone) over the past decades; now it is time to sense, like a real person, the “situation” we have created so far.

Interaction to interact

In 1958, Debord writes that coordination is a must for creating a situation/performance/intervention. Its different parts may be directed by different authorities, but they must be coherent, with or without concern for the other authorities working on the same piece, in order to make one statement.

Now it is 1987. Suchman writes about interaction: action and response, and how people act in different “situations.” The action might be purposeful or intuitive depending on the situation. She then draws attention to the interaction between man and machine. She asks for a common language to form a dialogue, and she assumes artificial intelligence has the ability to give feedback in response. The product of artificial intelligence, the artifact, should not only be intelligible but also intelligent and self-explanatory, able to communicate with its “partner.”

Then it is 2004. We are in the digital ground with PC, IP, USB, RFID, UPC, EPC, GPS, augmented reality, smart devices, and so on. Artificial intelligence has become a natural extension of the work, a natural extension of the person. We are where Suchman wanted us to be. In this ground, McCullough states that we have established the communication/language, but now it is time to regulate that communication. Louis I. Kahn emphasized the distinction between spaces that serve and spaces that are served; according to McCullough, the communication medium is now to be served. To regulate, or fine-tune, he proposes the possibility of an interaction/communication method among individuals, extending it from the individual to the social scale. He assumes this sublime communication will dissolve the barriers of interaction between man and artificial intelligence.

The Suchman article is phenomenally interesting in terms of the thought processes of humans and machines. The dialectic between goal-oriented and process-oriented methods of achieving an end result certainly teases out problems with artificial intelligence, and even the question of why we need it at all. The process-oriented, logical model that machines take to accomplish a goal clearly lends itself to the logical process model of computer code, and it certainly makes it difficult for humans and computers to interact. We “think” in different ways. The ELIZA experiment is an interesting example of this. Though it passes the Turing test in some ways (and if all goes well answering her questions), it is really nothing more than a linguistic game (check it out online). As Suchman says, the program mostly just asks a somewhat rhetorical question, giving the illusion of intelligence; however, if you think about the answers you’re receiving, ELIZA never really says anything intelligent. For instance, I told ELIZA I was tired, and after a bit of “conversation” she eventually asked why I thought that was. I replied that it was because I wasn’t sleeping, and she answered, “Hmm, that’s interesting, what are your thoughts on that?” I replied that maybe I should sleep more, to which ELIZA responded, “You aren’t sure?” Thus she makes you question yourself and come up with your own answer, regardless of the input. Certainly interesting, but more of a linguistic game than intelligence.

Mark Shepard defined sentience as hearing and feeling something, but not necessarily knowing anything about that thing. ELIZA, to me, is a case in point. The program has some generic responses to certain inputs, but it has no idea that its responses are influencing your input, which is how the programmer designed the program to work. ELIZA herself doesn’t understand anything about that; she simply has responses prescribed for certain types of inputs.
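To see how little machinery is behind that illusion, here is a toy sketch of ELIZA-style keyword matching (my own much-simplified illustration, not Weizenbaum’s actual script; the patterns and canned lines are invented for the example). Each rule pairs a keyword pattern with a stock reflection of the user’s own words, and anything that matches no rule falls back to a generic prompt.

```python
import re
import random

# Toy, much-simplified sketch of ELIZA-style pattern matching (hypothetical
# rules, not the real ELIZA script). Each rule maps a keyword pattern to
# canned responses that reflect the user's own words back at them.
RULES = [
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you think you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bbecause (.*)", re.I),
     ["Is that the real reason?", "That's interesting. What are your thoughts on that?"]),
    (re.compile(r"\bmaybe (.*)", re.I),
     ["You aren't sure?"]),
]

DEFAULTS = ["Please tell me more.", "How does that make you feel?"]

def respond(user_input):
    """Return a canned response keyed to the first matching pattern."""
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULTS)   # no keyword hit: fall back to a stock prompt

if __name__ == "__main__":
    for line in ["I am tired", "because I wasn't sleeping", "maybe I should sleep more"]:
        print(">", line)
        print(respond(line))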

The idea of situated action or response is more along the lines of how humans think, which is intelligence, not sentience. However, we don’t even truly understand how the human mind works, so it’s clear that, for now, our primitive, logic-based code languages couldn’t simulate human thought…yet. But I question why machines need artificial intelligence. Surely sentience is enough to serve and entertain us? Other than as an experiment, why are we trying to recreate ourselves (other than to better understand ourselves)? Certainly there are a slew of concerns involving intelligent machines and our relationship to them (and their relationship to us).