Forging a Fairer Future: How AI Can Be Used to Promote Equity
Intelligent systems are now seamlessly integrated into our everyday lives. If you use a smartphone, you have interacted with artificial intelligence and machine learning models through features such as voice-activated assistants, predictive text, email filtering, and image recognition. But what happens when those tools are built on biased models? Nick Perello, a PhD student in the Manning College of Information & Computer Sciences, is working to tackle this challenge, aiming to make technology fairer and more equitable for all users.
Perello characterizes his choice to study computer science as spontaneous. His initial inclination was to pursue an engineering degree, but he found computers cool and was drawn to the allure of tinkering with technology. The quality of the UMass computer science program brought him here from Miami, where he was born and raised.
As Perello progressed through his undergraduate and master's studies at UMass, his awareness of the societal issues intertwined with technology grew. During his master's, he quickly realized he would need a PhD to broaden his impact in AI equity research.
“The route with a master’s wouldn't let me make the kinds of contributions I wanted to, which was focusing on equity in computing, namely that there are obvious cases where software and other types of programs act in harmful, discriminatory ways,” Perello says.
Perello’s research focuses on two interconnected fronts: fairness and explainability in machine learning. “Once I went to grad school, I was able to see beyond the social problems that I saw before, to now see the technical problems, and then that's how I got introduced to this field of research,” he says.
What exactly is "AI fairness," and why has it become such a crucial topic in technology today? Broadly speaking, AI fairness refers to developing artificial intelligence systems that make ethically sound decisions free from inappropriate biases towards users’ race, gender identity, age, ability, socioeconomic status, and more. Perello explores, amongst other topics, dimensions of algorithmic fairness and counterfactual fairness. This involves ensuring that AI systems do not discriminate or perpetuate existing biases and comparing an individual's outcome to what it would have been under a counterfactual situation.
Perello cites an article titled “Gender Shades” as one of the inspirations for the fairness dimension of his research. The article describes how one of its authors, Joy Buolamwini, a Black woman, tested commercial software that classifies faces as male or female and found that it failed to do so accurately for individuals with darker skin tones. Perello explains, “Essentially, the darker your skin was, the worse it was.”
Perello's research aims to rectify this kind of bias and discrimination by creating fair machine-learning methods. In his work on hiring, he shows how to build learning models that prevent disparate impact and the inducement of indirect discrimination. Indirect discrimination can result from a naïve correction to a model in which the sensitive feature is removed but the model comes to rely on proxies for it, features strongly correlated with the one eliminated.
These new methods strive to eliminate discriminatory outcomes by modifying models, algorithms, and training data to ensure equitable results for all.
The second core component of Perello’s research is explainability in AI models. “Explainability” refers to the ability to interpret a machine learning model and its outputs in terms humans can understand. He delves into the problem of "black box" algorithms, where the decisions made by AI systems are opaque and uninterpretable. His work makes these complex systems understandable using techniques like heatmaps and counterfactual explanations. This transparency fosters user trust and unveils the inner workings of AI decisions, helping to identify and rectify biases.
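To give a flavor of one such technique, here is a small, purely illustrative Python sketch of a counterfactual explanation (invented data and model, not Perello's tooling): given a toy loan classifier and a rejected applicant, it searches for the smallest change to a single feature that would flip the decision.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500

# Hypothetical standardized loan features.
income = rng.normal(size=n)
debt = rng.normal(size=n)
approved = (income - debt + rng.normal(scale=0.3, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([income, debt]), approved)

applicant = np.array([[-0.5, 0.4]])  # one hypothetical applicant
print("decision:", "approved" if model.predict(applicant)[0] else "rejected")

# Search for the smallest income increase that flips the decision.
for delta in np.arange(0.05, 3.0, 0.05):
    if model.predict(applicant + [[delta, 0.0]])[0]:
        print(f"counterfactual: raising income by {delta:.2f} units flips "
              "the decision to approved")
        break

Heatmaps play an analogous role for image models, highlighting which parts of an input most influenced a prediction.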
Perello emphasizes that explainability and fairness are intertwined: explainability tools make it possible to examine inequities between different groups' outcomes at a granular level. One example he describes involves building a machine learning model on data shaped by redlining, the historical practice of denying loans to residents of certain, often minority, neighborhoods, rather than relying on a commercial black-box model.
“We'll be able to see how a hypothetical model for loan applications that doesn't directly use race for its decision-making can still be discriminatory via redlining. So first, we can observe, via explainability tools, a model whose decisions are impacted by applicants' race and zip code. Obviously, this is bad, so we devise the naïve solution of dropping race and retraining the model. Using explainability tools again, we'll see that this ‘fixed’ model is not impacted by the applicants' race but is now impacted more by their zip code than before. Thus, we are redlining via zip code's relationship with race.”
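A minimal simulation of the scenario Perello describes (invented data and a simple model, not taken from his work) shows the proxy effect directly:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000

# Hypothetical population: race (0/1) strongly predicts zip-code group,
# mimicking residential segregation.
race = rng.integers(0, 2, size=n)
zip_group = np.where(rng.random(n) < 0.9, race, 1 - race)
income = rng.normal(size=n)

# Biased historical approvals that directly penalize one group.
approved = (income - 1.0 * race + rng.normal(scale=0.5, size=n)) > 0

X_full = np.column_stack([race, zip_group, income])
full = LogisticRegression().fit(X_full, approved)
print(f"with race:    coef(race)={full.coef_[0][0]:+.2f}, coef(zip)={full.coef_[0][1]:+.2f}")

# Naive 'fix': drop race and retrain. Zip code now acts as a proxy for race.
naive = LogisticRegression().fit(X_full[:, 1:], approved)
print(f"without race: coef(zip)={naive.coef_[0][0]:+.2f}")

After the naive fix, the zip-code coefficient grows to absorb the signal that race previously carried, which is exactly the failure mode Perello warns about.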
The solutions Perello develops throughout his doctoral work could have far-reaching implications for building more ethical and just AI systems. As artificial intelligence increasingly mediates critical decisions in areas like healthcare, finance, employment, and criminal justice, Perello’s research on mitigating algorithmic bias takes on profound importance.
One aspect of Perello's work that shines through is his dedication to teaching. He views explainability as a form of teaching, as it demystifies the decisions made by AI systems. This empowers individuals to understand why these systems arrive at certain conclusions, enabling users to make more informed judgments about the outcomes.
In addition to his educational pursuits and teaching interests, Perello has supported the Manning College of Information & Computer Sciences in implementing diversity and inclusion initiatives, work that earned him a Commitment to Diversity award in spring 2023.
Perello's aspirations continue to center around equity and diversity in AI systems. He envisions a future where AI tools are devoid of discriminatory outcomes, decisions are made transparently, and society benefits from AI's potential without reinforcing existing inequalities.
“For now, it’s about fixing these issues in areas with simpler data structures and machine learning models. For our work specifically, once we know how to fix the simpler areas, we could eventually fix the more complicated stuff more easily because adapting our solutions becomes more of an engineering problem rather than a research problem,” Perello says.
While the path forward may sometimes be unclear, Perello’s dedication to addressing these critical challenges ensures a brighter future for AI ethics and its applications. As he balances his research with teaching and outreach, his story inspires the hope that AI can be a force for positive change, transforming the landscape of technology and the world it shapes.
Written by Vivian Nwadiaru, PhD candidate in Mechanical & Industrial Engineering, as part of the Graduate School's Public Writing Fellows Program.