Abstract
Intelligent systems are evolving rapidly and play a pivotal role in assisting individuals across diverse domains, from healthcare to transportation. Understanding the dynamics of human–artificial intelligence (AI) partnering, particularly how humans trust and collaborate with intelligent systems, is becoming increasingly critical for designing effective systems. This paper presents an experimental analysis of the impact of AI design attributes on users' trust, workload, and performance when solving classification problems with the support of an AI assistant. Specifically, we study the effect of transparency, fairness, and robustness in the design of an AI assistant and analyze the role of participants' gender and educational background in the outcomes. The experiment is conducted with 47 students in undergraduate, master's, and Ph.D. programs using a drawing game application in which users are asked to recognize incomplete sketches, revealed progressively, while receiving recommendations from multiple versions of an AI assistant. The results show that when collaborating with the AI, participants achieve higher performance than either they or the AI achieve alone. The results also show that gender has no impact on users' trust and performance when collaborating with different versions of the AI system, whereas education level has a significant impact on participants' performance but not on their trust. Finally, the impact of the design attributes on participants' trust and performance depends strongly on the accuracy of the AI's recommendations, and in some cases improvements in participants' performance and trust come at the expense of increased workload.