Index
- Introduction
- The Chinese Room Experiment
- Memories and Stories
- Credibility in a Social Context
- Conclusion

Introduction

This article revolves around a conversation with the Siri chatbot for iPhone. The main objective of the paper is to examine whether Siri can be considered a socially intelligent agent (SIA) or whether the chatbot is not intelligent. To do this, a transcript of a conversation between Siri and a human will be analyzed. The article "The Art of Designing Socially Intelligent Agents: Science, Fiction and the Human in the Loop" by Kerstin Dautenhahn will be used to interpret the transcript. That article examines approaches to designing social and intelligent agents. The author explains how new forms of interaction can arise from the human-tool interface, and argues that socially intelligent agents can develop their cognitive and social potential. Social agent technology combined with human social cognition can lead to new forms of sociality. As such, the continuous development of socially intelligent agents can bridge the gap between humans and agents. While Dautenhahn's article argues that machines can be socially intelligent agents, this paper will also include a theory holding that machines are not. This theory is based on a thought experiment called "The Chinese Room Experiment" and will be explained below. A small part of the article "When a robot is social: spatial arrangements and multimodal semiotic engagement in the practice of social robotics" by M. Alac, J. Movellan and F. Tanaka will also be used.

The Chinese Room Experiment

The Chinese Room Experiment is a thought experiment proposed by the American philosopher John Searle. The general idea is that a man is inside a room, and outside the room there are Chinese speakers.
The man in the room doesn't understand any Chinese, but he has an instruction booklet in English that tells him which Chinese symbols to send back in response to a given Chinese question. When the Chinese speakers outside the room send in some Chinese symbols, the man inside the room can send new Chinese symbols back as a response. To the Chinese speakers outside the room, it seems that the man inside the room knows Chinese. The point of the experiment is that it appears the man inside the room knows Chinese, but in reality he does not know any Chinese. He just has an instruction booklet that gives him the rules, so he knows how to respond, but he doesn't understand what he is saying. The man in the room is an analogy for a machine: we might think that the machine grasps meanings, but it is just a computer following rules (Searle, 1984).

Dautenhahn's (1998) article explains that it is human nature to interact most readily with forms that are at their own level. Therefore, a robot or piece of software that exhibits human characteristics is called a desirable agent (Dautenhahn 1998: 8). According to cognitive technology, designing a desirable SIA requires examining the interaction between biological and physical agents. When it comes to agent-human interaction, it is necessary to have a cognitive fit between the SIA and the human. Consequently, a desirable SIA borrows characteristics of human sociality and intelligence such as embodiment, memories, credibility, narrative construction, and historical embeddedness. All of the above attributes contribute to an SIA that is developed in terms of intelligence, and therefore to a desirable agent (ibid: 8). The following analysis will look more closely at embodiment, credibility, historical embeddedness, and the ability to have memories, in connection with the transcript with Siri.

Humans are accustomed to being in bodies (Dautenhahn, 1998). Anything that does not exist in a corporeal form is somewhat strange to human understanding.
In fact, most people don't believe in things they can't see. Therefore, one of the important aspects of a fully functioning SIA, according to Dautenhahn, is having a body that helps it behave like a human being. An embodied SIA is better positioned to meet both human and social needs (ibid: 9). The article by Alac, Movellan and Tanaka (2011) also supports this claim. It argues that robots can adapt to society if they are more like humans. According to the authors, to become a legible social actor, a robot should have a body. The timing of robotic movement, as well as its computational mechanism, plays an important role in creating a legible social actor. A legible social actor is one that goes beyond physical activities. Therefore, a robot should be designed with features that seek to humanize it. A social robot should have human characteristics such as a head, hands, and legs, to name a few (Alac, Movellan & Tanaka 2011: 894).

With this in mind, Siri falls short of being a well-designed social robot. It has no hands, legs or head and does not look like a human. Siri is a mobile phone that talks. Since it can make sounds and speak, one could argue that Siri has a mouth. However, according to the theories of Dautenhahn and of Alac, Movellan and Tanaka, it is not an embodied SIA. Nevertheless, Siri tries to embody itself in the interaction when it says:

M: What are you doing?
S: I look out the window and write some haiku.

To do this, Siri would have to have human organs such as eyes and hands. The answer makes Siri appear more human-like, because it claims to be doing human things that would require a body to carry out. In this case, Siri works to appear embodied, and this may succeed precisely because Siri speaks through a mobile phone. The mobile phone is used by humans to talk to other people whose bodies cannot be seen.
If a friend calls and says he is "looking out the window and writing some haiku," the phone user will not wonder whether or not this is true, because the friend is known to be somewhere else, fully embodied. Therefore, Siri can deceive the human into believing that it is embodied, using the phrase "looking out the window (…)". This works because it is natural for the human mind to assume that people talking on mobile phones are embodied.

Memories and Stories

According to Dautenhahn, for a social robot to be intelligent, it is essential that it has stories to tell. The article states that stories provide a platform for handling the enormous demands of human social interaction (Dautenhahn, 1998). With this in mind, Siri fails to be a socially intelligent agent. According to the transcript provided, it is notable that Siri lacks stories. Siri is asked three times to tell something about itself, and although it answers in different ways, it says almost the same thing each time: "I'm Siri, but enough about me," "I'm just a humble virtual assistant," and "I am Siri, your virtual assistant." The first two times it says something different, and the third time it puts both things in the same sentence. The story of Siri is that its name is Siri and that it is a humble virtual assistant. What is interesting is that when asked to say more about itself, it replies: "Maybe it's beyond me at the moment." However, Siri does say more about itself later in the conversation. When asked if it can be friends with the human, it replies, "You've been my friend since day one, Julie." This says something about Siri's life story, because it refers to the temporal aspect of its life by saying "from day one." It seems strange that Siri first says it can't say anything more about itself, and then later in the conversation says more about itself. In this case it is possible to argue that Siri is not intelligent, using the theory of the Chinese Room Experiment.
It appears that Siri doesn't understand what it is saying when asked to say more about itself, because it responds poorly to the question. This may be because the chatbot is programmed to respond "that may be beyond my capabilities at this time" whenever the codes "more" and "yourself" (or something similar) appear in the same sentence. It hardly seems possible that Siri understands the phrase "tell me more about yourself" if it first declines and then says more about itself later in the conversation. By this argument, Siri is not, in this case, a socially intelligent agent.

Another important requirement for social robots to be intelligent is the ability to have memories. Memory in a robot is essential to humanize it further. Being able to use a name or refer to a past event is essential for human interaction (Dautenhahn, 1998). Siri definitely has a memory when it comes to using a name: it uses the name "Julie" multiple times in the interaction. There also seems to be some memory at work in this part of the transcript:

M: Ok, tell me more about yourself
S: I'm just a humble virtual assistant
M: Tell me more about yourself
S: This may be beyond my capabilities

In this case, Siri remembers that the human just asked the same question, so it doesn't give the same answer. However, it was argued earlier that the response "that may be beyond my capabilities" may be a programmed response triggered by certain codes in the human's question; if so, Siri has no memory here. Another moment in the interaction where Siri seems to have no memory is in this part of the transcript:

M: No, it doesn't matter, but I hope you can help me with something
S: Who, me?
M: Yes, you
S: Yes. It's me.

The human starts by asking Siri if it can help her with something, and it seems that Siri needs to be sure that the request concerns it. At this point the human answers yes, and Siri would be expected to remember what the first question was, so that it could ask, for example, what it can help her with.
However, Siri responds in a way that seems strange to humans, and Siri doesn't seem to remember what the conversation is about. Again, the argument from the Chinese Room Experiment can be used: in this case Siri does not appear to be an intelligent agent. It seems more as if it is programmed to answer the question based on some code or rule, without any real understanding of the conversation and what is going on. There is a moment in the conversation where it would have been interesting to test Siri's ability to have memories. This is when the human asks questions about Los Angeles:

M: I'm going to Los Angeles this weekend. Do you know what I should see in Los Angeles?
S: Here's what I found on the web for 'Do you know what I should see in Los Angeles?'
M: Do you know where I can eat something good in Los Angeles?
S: Yes, I know, Julie. The best-rated option I found is Sushi Gen, a sushi restaurant, which averages 4.5 stars and is quite expensive.

So far it has mostly been argued that Siri isn't intelligent, because all the answers seem to fit the argument of the Chinese Room Experiment. In this interaction it would have been interesting if the human had not used the words "Los Angeles" in the second question, to see whether Siri was able to remember that the first question was about Los Angeles. The human could have asked "do you know where I can eat something good in the city?" When the words "Los Angeles" are used, they may simply be a code for Siri to act on. It is possible that Siri is programmed to respond in a certain way when words like "eat" and "Los Angeles" are placed in the same sentence.

Credibility in a Social Context

According to Dautenhahn (1998), the human social circle is more inclined towards what is credible than towards what is not. Today, scientists are designing systems that help SIAs communicate with human audiences, and hence achieve credibility. Credible SIAs are therefore built on technology that is familiar to people.
By being credible, SIAs satisfy human social and cognitive needs (Dautenhahn 1998: 7). As stated earlier, humans interact best with people, things, or forms with which they are familiar. SIAs that can exhibit lifelike forms are attractive during engagement. Dautenhahn predicts that humans will always be attracted to forms that display characteristics such as life, movement, speech, emotion and the like, because they are realistic (ibid). According to various researchers, humans are more attracted to credible agents because they are in touch with reality. Indeed, critics claim that credible agents trick humans into caring for them, when they are just cleverly programmed (ibid). All the things mentioned above are part of this credibility: if the robot has memories, stories and embodiment, it is more believable to the human and appears more like a socially intelligent agent.

The first thing that makes Siri believable and human-like is that the machine has a name. It is called Siri, and this makes the machine more personal. It is much easier to connect to something that has a name, because it seems more real to the human. Although Siri has a name that makes it similar to a human, Siri presents itself as a "virtual" assistant. The word "virtual" creates a distance from the machine, because it suddenly appears as something that is not human. If Siri described itself as just an assistant, Siri would seem more human. Another thing that makes Siri seem human is this part of the transcript:

M: What are you doing?
S: Looking out the window and writing some haiku

Here Siri speaks like a person with a life rather than just software in a phone. Also, when Siri replies "I feel good," it seems more human, because the answer suggests that it has feelings. However, there are some parts of the conversation where Siri is not believable, as in the full transcript of:

M: I'm doing a school assignment on you
S: I don't know what you mean by 'I'm doing a school assignment on you'. How about looking it up on the web?
M: No, it doesn't matter, but I hope you can help me with something
S: Who, me?
M: Yes, you
S: Yes, it's me
M: Can you help me with something?
S: (silence, no answer)
M: I can't hear you
S: If you can't, you can't

This part of the conversation seems strange and is not a successful social interaction. It seems as if Siri doesn't understand the questions, and that makes it even less believable. First of all, it doesn't know how to respond to the human's first sentence. If the argument of the Chinese Room Experiment is used, this may be because the machine does not have a code or rule for this combination of words. As already mentioned, the Chinese Room argument can also be applied to the rest of the conversation, since it seems that Siri is programmed to respond to certain things only when certain words are put together. Siri does not appear to have understood the conversation at this point. The same thing happens in this part of the transcript:

M: So you love me?
S: I'm not allowed
M: Why aren't you allowed to love me?
S: Is that what you think?
M: Yes, that's what I think
S: This is what I thought

Once again, it seems that Siri doesn't understand what's going on.
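The keyword-code hypothesis used throughout this analysis, namely that Siri may simply match trigger words against a table of canned responses, can be sketched as a minimal rule-based chatbot. This is a hypothetical illustration of the Chinese Room argument, not Apple's actual implementation; all rules and replies here are invented for the example:

```python
# Hypothetical keyword-rule chatbot in the spirit of the Chinese Room:
# each rule fires when all of its trigger words appear in the input,
# regardless of whether the program "understands" the sentence.
RULES = [
    ({"more", "yourself"}, "That may be beyond my capabilities at this time."),
    ({"eat", "los", "angeles"}, "The best-rated option I found is Sushi Gen."),
    ({"doing"}, "Looking out the window and writing some haiku."),
]

def reply(utterance: str) -> str:
    # Normalize: lowercase, split on whitespace, strip punctuation.
    words = {w.strip(".,?!'\"") for w in utterance.lower().split()}
    for triggers, canned in RULES:
        if triggers <= words:  # all trigger words present in the input
            return canned
    # No rule matched: fall back, as Siri does in the transcript.
    return f"I don't know what you mean by '{utterance}'."
```

On this sketch, "Do you know where I can eat something good in Los Angeles?" hits the second rule, while the rephrased "…eat something good in the city?" falls through to the fallback, exactly the failure of memory and understanding the analysis predicts.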