Trustworthy AI Tools for Science: A Philosophical, Computational, and Experimental Perspective

As AI/ML pervades the scientific community, scientists and engineers need to trust that these models and methods are human-aligned while appropriately obeying fundamental laws of nature. In this talk, trustworthiness for scientific AI/ML is presented along three dimensions: philosophical, computational, and experimental. Science-oriented trustworthy AI frameworks provide a philosophical approach to human alignment in the laboratory; these frameworks evaluate AI/ML for properties such as interpretability and fairness in ways that are meaningful to the research community, and I discuss recent progress towards community-based trustworthiness standards. I then present a case study in which computational approaches leverage novel explainable uncertainty quantification (xUQ) methods to evaluate an AI/ML model that fails to generate trustworthy predictions. Finally, because physical models benefit from physical validation, I present progress towards implementing the proposed xUQ methods in an autonomous laboratory experiment for battery testing. These perspectives provide an outlook on how trustworthy AI/ML can support future research endeavors.
 

  • Iftach Amir
