Faculty of Actuarial Science and Insurance Seminar with Joshua Loftus (London School of Economics)
Bayes Business School, 106 Bunhill Row
Room 2005 - Bayes Business School
106 Bunhill Row, London EC1Y 8TZ, UK
Registration
Registration is now closed (this event already took place).
Details
Tools for interpretable machine learning or explainable artificial intelligence can be used to audit algorithms for fairness or other desired properties. In a "black-box" setting (one without access to the algorithm's internal structure), the auditor may be limited to model-agnostic methods. These methods work by varying inputs while observing differences in outputs, and include some of the most popular interpretability tools, such as Shapley values and Partial Dependence Plots. Such explanation methods have important limitations that can compromise audits, with consequences for outcomes such as fairness. It may be worth the effort to use tools that can incorporate background information and be tailored to each specific application. We highlight some promising ways to integrate background information by using causal modeling, with Causal Dependence Plots serving as an example.
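To illustrate the input-varying approach the abstract describes, here is a minimal sketch of how a Partial Dependence Plot is computed for a black-box model (the toy model and function names below are illustrative assumptions, not material from the talk):

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Model-agnostic partial dependence: for each grid value v,
    set column `feature` to v for every row of X and average the
    black-box predictions. Only input/output access is needed."""
    pd_values = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v      # intervene on one input feature
        pd_values.append(predict(Xv).mean())
    return np.array(pd_values)

# Toy "black box": f(x0, x1) = 2*x0 + x1**2 (the auditor never sees this)
predict = lambda X: 2 * X[:, 0] + X[:, 1] ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
grid = np.linspace(-2, 2, 5)
pdp = partial_dependence(predict, X, feature=0, grid=grid)
# For this model the PDP of x0 recovers the linear effect 2*x0
# up to a constant shift of mean(x1**2).
```

Note that the averaging step substitutes each grid value into every row regardless of the other features' values, which is one source of the limitations the talk discusses: when features are dependent, this creates unrealistic input combinations, a problem that causal modeling (as in Causal Dependence Plots) aims to address.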
Biography:
Joshua Loftus is currently an Assistant Professor of Statistics and Data Science at the London School of Economics, and was previously at New York University. He completed his PhD in Statistics at Stanford University. His research focuses on the use of causal models to improve practices in scientific reproducibility and responsible and interpretable machine learning.
Dress Casual (jeans ok)
Food Provided (Tea, Coffee and Biscuits)