When AI Says "I'm Not Sure"

  • Subject: A Metastudy on Uncertainty Communication in Human-AI Collaboration
  • Type: Master's thesis
  • Start date: Immediately
  • Supervisor: Joshua Holstein

    Background

    Artificial intelligence systems are increasingly integrated into critical decision-making processes across healthcare, education, business, and research domains. However, AI systems inherently operate with varying degrees of uncertainty, and the methods by which this uncertainty is communicated to human users significantly influence collaborative outcomes.

    Consider the difference between an AI financial advisor stating "I am 73% confident this investment will outperform the market" versus "This investment shows strong potential, though market volatility introduces significant risk." Such variations in uncertainty communication can substantially impact investment decision-making processes. Similarly, when AI systems provide confidence scores versus natural language expressions of uncertainty, this affects user trust, reliance patterns, and overall collaboration effectiveness.
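    The two presentation formats contrasted above can be made concrete with a small sketch. This is purely illustrative and not part of the thesis: the function names and the thresholds mapping confidence scores to verbal phrases are invented examples of the kind of design choice the studies under review compare.

    ```python
    def numeric_format(confidence: float) -> str:
        """Surface the model's confidence as a numerical score."""
        return f"I am {confidence:.0%} confident in this recommendation."

    def verbal_format(confidence: float) -> str:
        """Surface the same confidence as a natural-language expression.

        The cut-off values below are arbitrary illustrations, not an
        established verbal-probability scale.
        """
        if confidence >= 0.9:
            phrase = "very likely"
        elif confidence >= 0.7:
            phrase = "likely"
        elif confidence >= 0.5:
            phrase = "somewhat likely"
        else:
            phrase = "uncertain"
        return f"This recommendation is {phrase} to be correct."

    # The same underlying score, communicated two ways:
    print(numeric_format(0.73))  # -> I am 73% confident in this recommendation.
    print(verbal_format(0.73))   # -> This recommendation is likely to be correct.
    ```

    How users calibrate their trust under each of these formats is exactly the kind of empirical question the reviewed studies address.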

    As AI integration deepens across critical domains, understanding how uncertainty communication shapes human-AI collaboration becomes essential. Suboptimal uncertainty communication can result in calibration failures, inappropriate reliance behaviors, and misaligned expectations regarding AI capabilities.

    Current research spans computer science, psychology, and human-computer interaction, yet remains fragmented across disciplines. Studies examine diverse approaches including confidence scores, verbal uncertainty expressions, and visual indicators, while targeting different user populations and task contexts. A systematic synthesis of this literature is needed to establish evidence-based principles for effective uncertainty communication in human-AI systems.


    Research Goal

    This thesis will conduct a comprehensive meta-analysis to examine the effects of uncertainty communication strategies on human-AI collaboration effectiveness.

    Research objectives:

    • Systematically review empirical studies examining AI uncertainty communication across multiple domains (healthcare, education, decision support systems, etc.)
    • Analyze the effectiveness of different uncertainty communication methods (numerical confidence scores, verbal expressions, visual indicators, explanatory approaches)
    • Examine how uncertainty communication influences key outcomes including trust calibration, reliance appropriateness, task performance, and learning outcomes
    • Identify moderating factors such as user expertise, task characteristics, and application domain
    • Develop evidence-based recommendations for designing uncertainty communication in AI systems
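    At the heart of the quantitative synthesis described above is the pooling of effect sizes across studies. As a minimal sketch of that computation, the following shows fixed-effect inverse-variance weighting; the effect sizes and variances are invented placeholder values, and a real analysis would likely use a random-effects model and an established package rather than this hand-rolled version.

    ```python
    def pooled_effect(effects: list[float], variances: list[float]) -> tuple[float, float]:
        """Inverse-variance (fixed-effect) pooling of per-study effect sizes.

        Each study is weighted by the reciprocal of its variance, so more
        precise studies contribute more to the pooled estimate.
        """
        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        pooled_variance = 1.0 / sum(weights)
        return pooled, pooled_variance

    # Hypothetical standardized mean differences from three studies
    effects = [0.30, 0.45, 0.20]
    variances = [0.02, 0.05, 0.04]

    estimate, variance = pooled_effect(effects, variances)
    print(f"Pooled effect: {estimate:.3f} (variance {variance:.4f})")
    ```

    Moderator analyses (user expertise, task type, domain) would then ask whether this pooled effect differs systematically across subgroups of studies.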


    Your Profile

    • Demonstrate a strong interest in human-computer interaction and AI system design
    • Seek to contribute to evidence-based approaches for improving human-AI collaboration
    • Have experience with or interest in systematic research methodologies and quantitative analysis