Research Output
The Opaque Nature of Intelligence and the Pursuit of Explainable AI
  When artificial intelligence is used to make decisions, people are more likely to accept those decisions if they can be made intelligible to the public. This insight has led to the emerging field of explainable artificial intelligence (XAI). We review how research on explainable artificial intelligence is being conducted and discuss the limitations of current approaches. Beyond these technical limitations lies a deeper problem: human decision-making is not entirely transparent either. We conclude with our position that the opacity of intelligent decision-making may be intrinsic, and with the larger question of whether we really need explanations in order to trust inherently complex and large intelligent systems, artificial or otherwise.

  • Date:

    30 November 2023

  • Publication Status:

    Published

  • Publisher:

    SCITEPRESS – Science and Technology Publications

  • DOI:

    10.5220/0012249500003595

  • Funders:

    Edinburgh Napier Funded

Citation

Thomson, S. L., van Stein, N., van den Berg, D., & van Leeuwen, C. (2023). The Opaque Nature of Intelligence and the Pursuit of Explainable AI. In Proceedings of the 15th International Joint Conference on Computational Intelligence (pp. 555-564). https://doi.org/10.5220/0012249500003595

Keywords

Explainable Artificial Intelligence, XAI, Machine Learning, Neural Networks, Cognitive Science
