Hi Vinitra
Thanks a lot for sharing your paper(s), and also thank you for your nice
words. I will take a look.
Best,
Christoph
On Wed., 21 Sept. 2022 at 13:20, Vinitra Swamy <
***@***.***> wrote:
Hi Christoph -- great work with the recent additions to the book; I really
liked the neural net chapters. As I wrote you a few years ago, I recommend
this book to any interested students working in trustworthy AI. :)
I might have missed it, but I think there's a great opportunity to discuss
recent studies on disagreement between black-box explainers at the end of
Chapter 9. I had a recent publication on this in the XAI for education
space
<https://educationaldatamining.org/edm2022/proceedings/2022.EDM-long-papers.9/2022.EDM-long-papers.9.pdf>.
Around the same time, there was a great arXiv preprint by a team from
Harvard / MIT / CMU and Drexel <https://arxiv.org/abs/2202.01602> that
discusses the broader implications across several popular datasets, along
with a user study of data scientists.
Of course, the 2019 Nature article by Cynthia Rudin
<https://www.nature.com/articles/s42256-019-0048-x>, asking us to use
interpretable models instead of explainability methods, is a predecessor
and a classic.
Thanks,
Vinitra