Identifier
Event
Language
Presentation type
Topic it belongs to
Subtopic it belongs to
Title of the presentation
Presentation abstract
Artificial intelligence systems rely on institutional trust as their use in sociotechnical systems grows. Designing honest mechanisms for assuring user trust will help drive social acceptance of an AI system, yet it is often easier to point to economic benefits and the absence of “catastrophic” operational failures as warrants for trust in deployed AI systems. There is a substantial body of research on interpersonal and interorganizational trust development (see Lewicki et al., 2006; Mayer et al., 1995), but as advanced systems become more embedded in our institutions, we need a more nuanced concept of justifiable, interagent trust in the use of AI within a given social system. One way of exploring trust miscalibration is through the lens of responses to events of systemic failure or stress. We use two historical examples to explore the dynamics of institutional trust in sociotechnical systems. These examples examine what failure events (trust-signaling occurrences) say about a system’s trustworthiness, and provide concrete illustrations of how human agents re-evaluate their trust beliefs in a system. The discussion illustrates a framework of trust in which responses to extreme systemic events serve as honest signals for calibrating rational beliefs about the underlying trustworthiness of sociotechnical systems. We then identify design approaches that may improve trust signaling, and conceptualize models of trust in which agents (both human and AI) modify their behaviors based on trust-signaling events within the sociotechnical system. The goal is to articulate a clearer conception of trust dynamics in sociotechnical systems to guide the design of more trustworthy deployments of AI-equipped sociotechnical systems.
Long abstract of your presentation
Keywords
Main author information
Co-authors information
Status: Approved