How trust networks shape students’ opinions about the proficiency of artificially intelligent assistants
Bu Y., Melatos A., Evans R.
The rising use of educational tools controlled by artificial intelligence (AI) has provoked a debate about their proficiency. While intrinsic proficiency, especially in tasks such as grading, has been measured and studied extensively, perceived proficiency remains underexplored. Here it is shown through Monte Carlo multi-agent simulations that trust networks among students influence their perceptions of the proficiency of an AI tool. A probabilistic opinion dynamics model is constructed, in which every student's perceptions are described by a probability density function (PDF), which is updated at every time step through independent, personal observations and peer pressure shaped by trust relationships. It is found that students correctly infer the AI tool's proficiency θAI in allies-only networks (i.e. high-trust networks). AI-avoiders reach asymptotic learning faster than AI-users, and the asymptotic learning time for AI-users decreases as their number increases. However, asymptotic learning is disrupted even by a single partisan, who stubbornly holds an incorrect belief θp≠θAI, making the other students' beliefs vacillate indefinitely between θp and θAI. In opponents-only (low-trust) networks, all students reach asymptotic learning, but only a minority infer θAI correctly. AI-users have a small advantage over AI-avoiders in reaching the right conclusion. The outcomes in allies-only and opponents-only networks depend weakly on the network size n. In mixed networks, students may exhibit turbulent nonconvergence and intermittency, or achieve asymptotic learning, depending on the relationships between partisans and AI-users. In smaller mixed networks with n≲10 students, the long-term outcome is affected by whether a partisan teacher is an AI-skeptic (θp<θAI) or an AI-promoter (θp≥θAI). In larger mixed networks with n≳10², students are more likely to infer θp instead of θAI. The educational implications of the results are discussed briefly in the context of designing robust usage policies for AI tools, with an emphasis on the unintended and inequitable consequences which sometimes arise from counterintuitive network effects.
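
The abstract does not spell out the update rule, so the following Python sketch is only a minimal toy illustration of the class of model described: each student carries a discretised belief PDF over θ, AI-users update it with Bayes' rule from noisy personal observations of θAI, peer pressure mixes log-PDFs with trust-weighted neighbours, and a partisan never updates. All numerical values, the Gaussian likelihood, the log-PDF mixing scheme, and the trust matrix are illustrative assumptions, not the authors' model.

```python
# Toy sketch of the kind of opinion-dynamics simulation summarised above.
# NOT the paper's exact update rule: likelihood, mixing scheme, and parameters
# below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

theta_grid = np.linspace(0.0, 1.0, 201)   # discretised support of each belief PDF
theta_AI = 0.7                            # true AI proficiency (assumed value)
obs_sigma = 0.1                           # noise on personal observations (assumed)
eps = 0.1                                 # peer-pressure strength (assumed)

n = 6
is_ai_user = np.array([True, True, True, False, False, False])   # who queries the AI tool
is_partisan = np.array([False] * n)                              # no stubborn agents in this run
trust = np.ones((n, n))                   # +1 entries = allies; set -1 for opponents
np.fill_diagonal(trust, 0.0)

# every student starts from a flat (uninformative) prior
beliefs = np.full((n, theta_grid.size), 1.0 / theta_grid.size)

def normalise(p):
    p = np.clip(p, 1e-300, None)
    return p / p.sum()

for step in range(200):
    # 1) observation step: AI-users receive an independent noisy measurement
    #    of theta_AI and update their PDF via Bayes' rule
    for i in range(n):
        if is_ai_user[i] and not is_partisan[i]:
            x = rng.normal(theta_AI, obs_sigma)
            likelihood = np.exp(-0.5 * ((x - theta_grid) / obs_sigma) ** 2)
            beliefs[i] = normalise(beliefs[i] * likelihood)

    # 2) peer-pressure step: blend each student's log-PDF with neighbours',
    #    weighted by trust (allies pull beliefs together, opponents push them apart)
    log_b = np.log(np.clip(beliefs, 1e-300, None))
    new_log = log_b.copy()
    for i in range(n):
        if is_partisan[i]:
            continue                      # partisans never revise their belief
        for j in range(n):
            if i != j:
                new_log[i] += eps * trust[i, j] * (log_b[j] - log_b[i])
    beliefs = np.array([normalise(np.exp(row - row.max())) for row in new_log])

# report each student's posterior-mean estimate of theta_AI
print(np.round(beliefs @ theta_grid, 3))
```

With an allies-only trust matrix as above, every posterior mean drifts toward θAI; flipping off-diagonal entries to -1, or pinning one agent's PDF as a partisan at θp≠θAI, gives a qualitative feel for the disrupted regimes the abstract reports.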