News
CDS Celebrates Saadia Gabriel, Forbes 30 Under 30 Honoree
In a world increasingly shaped by artificial intelligence, it’s critical to ensure that these technologies are not just powerful, but also equitable and safe. Saadia Gabriel, a Faculty Fellow at CDS, is leading the charge in this crucial mission, and her work recently earned her a place on Forbes’ prestigious 30 Under 30 list.
Forbes cited the significant advances Gabriel has made in the fight against online toxicity. At the helm of the Misinformation, AI & Responsible Society (MARS) Lab, which she founded, Gabriel is dedicated to countering the spread of false and toxic language on the internet. Her goal is to empower everyday users, equipping them with tools to improve online safety, and this commitment to user empowerment is a cornerstone of her work. Her collaboration with Microsoft researchers also resulted in (De)Toxigen, a cutting-edge NLP model that significantly improves hate speech detection and offers robust support for content moderators. The model is trained on a large-scale public dataset of AI-generated benign and toxic language, which Gabriel and her collaborators curated to give detection models state-of-the-art ability to take the context and the speaker into account when making predictions.
The MARS Lab has its roots in Gabriel’s time as a PhD student at the University of Washington. “I started working on toxic language detection with other people in the lab. We wanted to develop a framework for understanding the toxicity of language that would take into account more than just what’s shown explicitly in the text,” Gabriel explains. This focus on the implicit meanings and the social context of language in AI models is crucial in an era where digital communication is omnipresent.
Gabriel’s work goes beyond the theoretical: (De)Toxigen is now used in production by both Meta and Microsoft. “We wanted to have something that could detect hate speech broadly across all these different populations and communities and not risk eliminating speech from certain people who are already victims of hate speech more often than other groups,” Gabriel details. This work exemplifies her focus on equitable and inclusive AI technologies.
Beyond hate speech detection, Gabriel has more recently become deeply involved in misinformation detection, targeting a pressing issue in the current digital ecosystem: the spread of false information online. Her approach involves developing personalized counter-narratives to misinformation, leveraging AI’s understanding of user beliefs. She describes this as generating “personalized counter-narratives to misinformation that are effective to people who have certain pre-existing beliefs.” It’s a forward-thinking approach to a problem with significant societal implications. The work was recently awarded a seed grant by MIT, and a paper is forthcoming.
Since joining CDS in September, Gabriel has engaged in collaborations that resonate with her research ethos. Her involvement with the Alignment Research Group and the Center for Responsible AI is a testament to her multidisciplinary approach and commitment to responsible AI development. These collaborations have allowed her to explore new dimensions in her work, from debate as a tool for AI oversight to mental health applications of AI.
Gabriel’s recognition by Forbes is not just a personal achievement but a beacon for the field of data science. Her work at CDS and collaborations with industry partners underscore the importance of responsible, equitable AI development. As our world becomes increasingly digitized, researchers like Saadia Gabriel are essential in ensuring that the AI shaping our future is as fair and inclusive as it is powerful.