We aimed to create a research design that makes it possible to examine a situation in the (perhaps not so distant) future when we will have no certain and unquestionable way of distinguishing humans from machines, and will therefore base our decisions on other cues. One such possible cue is opinion: if the other entity expresses a view different from ours, we might conclude that they belong to another group (in this case, the group of machines). According to the Computers Are Social Actors paradigm (CASA; Nass & Moon, 2000), humans interact with computers as if they were social agents (we attribute intentions, prior knowledge, and social competences to them), and numerous mechanisms of social relations can be observed in these interactions (e.g. gender- and race-based stereotyping, reciprocity, politeness). In line with this, our research examines the effect of attitude difference on participants' perception of computer programs and on their categorization of their partner as “human” or “machine”. In our study, we created a situation similar to a Turing test (viva voce): a written, online conversation is organized between two participants in which they have to discuss a controversial topic, and both participants are informed beforehand that their conversational partner might be a human being or a computer program. We examine whether the attitude differences that surface during the conversation influence the subsequent Turing-type decision (whether their partner was a human or a machine).