AI Seminar: Human-Centered AI: Safe, Interpretable, Trustworthy Analytics

Speaker: Duen Horng (Polo) Chau

Location: 60 Fifth Avenue, 7th Floor Open Space
Videoconference link: https://cimsnyu.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=1e70e1a2-961b-4cd7-9e9d-ae5a00ed7aba

Date: Wednesday, March 23, 2022

Tremendous growth in artificial intelligence (AI) research has revealed
that AI models are vulnerable to adversarial attacks and that their
predictions can be difficult to understand, evaluate, and ultimately act
upon.

Our Safe AI research thrust discovers real-world AI vulnerabilities and
develops countermeasures to fortify AI deployment in safety-critical
settings: ShapeShifter, the world's first targeted physical attack that
fools the Faster R-CNN object detector; the UnMask defense, which flags
semantic incoherence in predictions (part of DARPA GARD); SkeletonVis,
the first interactive tool that visualizes attacks on human action
recognition models; and MalNet, the largest public cybersecurity graph
database, with over 1.2M graphs (100X more graphs than prior graph
databases).
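
The talk surveys these systems rather than their internals. As a rough,
self-contained taste of the kind of vulnerability such work studies, the
sketch below crafts a targeted digital adversarial example with a single
fast-gradient-sign (FGSM) step in PyTorch. This is a generic sketch, not
ShapeShifter's method (which optimizes a physically printable perturbation
against Faster R-CNN); the model, epsilon, and target label are all
illustrative assumptions.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Any pretrained classifier works as a stand-in victim model.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def targeted_fgsm(image, target_class, epsilon=0.03):
        """One-step targeted attack: nudge pixels toward a chosen label."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), torch.tensor([target_class]))
        loss.backward()
        # Step *against* the gradient to pull the prediction toward the
        # attacker-chosen class, then keep pixels in a valid range.
        adv = image - epsilon * image.grad.sign()
        return adv.clamp(0, 1).detach()

    x = torch.rand(1, 3, 224, 224)          # stand-in for a real image
    x_adv = targeted_fgsm(x, target_class=7)  # hypothetical target label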

Our complementary Interpretable AI research thrust designs and develops
interactive visualizations that amplify people’s ability to understand
complex models and vulnerabilities, and that provide key leaps of insight:
Summit, NeuroCartography, and Bluff, systems that scalably summarize and
visualize what features a deep learning model has learned, how those
features interact to make predictions, and how attacks may exploit them;
and CNN Explainer and GAN Lab (built with Google Brain), accessible tools
that have gone viral, helping students and experts learn about AI models.
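
Summit and NeuroCartography build rich summaries such as neuron embeddings
and attribution graphs; as a much simpler, hedged illustration of "what has
a model learned," the sketch below uses plain activation maximization,
synthesizing an input that strongly excites one convolutional channel. The
model, layer, channel, and hyperparameters are assumptions for the example,
not these systems' actual pipelines.

    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def visualize_channel(layer, channel, steps=200, lr=0.1):
        """Gradient-ascend a random image to excite one conv channel."""
        img = torch.rand(1, 3, 224, 224, requires_grad=True)
        acts = {}
        # Capture the layer's output on every forward pass.
        hook = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
        opt = torch.optim.Adam([img], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            model(img)
            # Maximize the mean activation of the chosen channel.
            loss = -acts["out"][0, channel].mean()
            loss.backward()
            opt.step()
        hook.remove()
        return img.detach().clamp(0, 1)

    feature_img = visualize_channel(model.layer3, channel=42)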

We conclude by highlighting our latest Trustworthy AI work: GAM Changer
enables domain users to edit ML models to reflect human knowledge and
values; SliceView and FairVis provide novel ways to visually audit and
summarize model biases.
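
GAM Changer itself is an interactive tool for editing the shape functions
of generalized additive models (GAMs). The toy NumPy sketch below shows
the underlying idea only: a GAM's per-feature contribution is a lookup
curve that a domain expert can directly modify, here by forcing a risk
curve to be monotone. The feature name, bins, and scores are invented for
illustration.

    import numpy as np

    # Toy GAM: prediction = sum of per-feature shape functions (binned).
    bin_edges = {"age": np.array([0, 30, 50, 70, 100])}
    scores = {"age": np.array([-0.8, -0.2, 0.3, 0.1])}  # learned terms

    def gam_score(feature, value):
        """Look up a feature's additive contribution for a raw value."""
        i = np.clip(np.searchsorted(bin_edges[feature], value) - 1,
                    0, len(scores[feature]) - 1)
        return scores[feature][i]

    # Domain knowledge says risk should never decrease with age, but the
    # learned curve dips in the 70-100 bin. Edit the shape function to be
    # monotonically non-decreasing, as an interactive GAM edit would.
    scores["age"] = np.maximum.accumulate(scores["age"])

    print(gam_score("age", 85))  # now 0.3 instead of the learned 0.1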