Xue Zhirong is a designer, engineer, and author of several books; founder of the Design Open Source Community and co-founder of MiX Copilot, committed to making the world a better place through design and technology. This knowledge base covers AI, HCI, and related content, including news, papers, presentations, and other shared material.
The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study
The results show that feedback improves users' trust in AI more significantly than explainability does, but this increased trust does not lead to a corresponding improvement in performance. Further exploration suggests that feedback can induce over-trust (accepting the AI's suggestions when they are wrong) or under-trust (ignoring the AI's suggestions when they are correct), which may cancel out the benefits of increased trust and produce a "trust-performance paradox". The researchers call for future work on how to design explanation strategies that foster appropriate trust, so as to improve the efficiency of human-AI collaboration.
Q1: How does feedback affect users' trust in AI?
Q2: Does explainability necessarily enhance users' trust in AI?
Q3: How do outcome feedback and model interpretability affect users' task performance?
A3: The study found that outcome feedback improves the accuracy of users' predictions (reducing absolute error), thereby improving their performance when working with the AI. Interpretability, however, affects task performance far less than it affects trust. This suggests that more attention should be paid to how feedback mechanisms can be used effectively to improve the usefulness and effectiveness of AI-assisted decision-making.
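The two quantities contrasted in this answer, behavioral trust and prediction accuracy, can be made concrete with a small sketch. The metric definitions below (agreement with the AI as a trust proxy, mean absolute error as the performance measure) are common choices in this kind of study, but the function names and the trial data are illustrative assumptions, not the paper's actual measures or results.

```python
# Illustrative sketch of two metrics discussed above; all data are made up.

def mean_absolute_error(predictions, outcomes):
    """Average absolute gap between a participant's predictions and the true outcomes."""
    return sum(abs(p - o) for p, o in zip(predictions, outcomes)) / len(predictions)

def agreement_rate(user_choices, ai_suggestions):
    """Fraction of trials where the user adopts the AI's suggestion --
    one common behavioral proxy for trust."""
    matches = sum(u == a for u, a in zip(user_choices, ai_suggestions))
    return matches / len(user_choices)

# Hypothetical binary trials ("will this speed date succeed?")
outcomes       = [1, 0, 1, 1, 0, 0, 1, 0]
ai_suggestions = [1, 0, 0, 1, 1, 0, 1, 0]
user_choices   = [1, 0, 0, 1, 1, 0, 1, 1]  # the user mostly follows the AI

trust = agreement_rate(user_choices, ai_suggestions)   # 7/8 = 0.875
error = mean_absolute_error(user_choices, outcomes)    # 3/8 = 0.375
print(f"behavioral trust (agreement): {trust:.2f}")
print(f"prediction error (MAE): {error:.2f}")
```

Note how the sketch can exhibit the paradox described above: agreement with the AI (trust) is high, yet error stays substantial because the user also follows the AI on trials where it is wrong.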

The researchers conducted two experiments ("Predict the speed-dating outcomes and get up to $6 (takes less than 20 min)" and a similar study on Prolific) in which participants worked with an AI system on a speed-dating outcome prediction task, to examine how model explainability and feedback affect users' trust in AI and their prediction accuracy. The results show that while explainability (e.g., global and local explanations) did not significantly improve trust, feedback improved behavioral trust most consistently and significantly. However, increased trust did not lead to performance gains of the same magnitude, i.e., there is a "trust-performance paradox". Exploratory analysis reveals the mechanisms behind this phenomenon.
