UNM’s computer science faculty part of team studying bias in housing application algorithms
October 24, 2019 - By Kim Delker
Scholars from The University of New Mexico are part of a statewide team examining fair housing laws, including the concept of “algorithmic justice,” or how some algorithms used in housing applications may be inherently biased against certain groups of people.
On Oct. 18, a group of 10 computer scientists, social scientists and legal scholars from the Santa Fe Institute (SFI) and UNM submitted a formal response to the U.S. Department of Housing and Urban Development’s (HUD) proposal to dramatically revise the Fair Housing Act.
Among those on the team are Melanie Moses, professor of computer science, and G. Matthew Fricke, a research assistant professor in the Department of Computer Science and UNM’s Center for Advanced Research Computing.
Key amendments in HUD’s proposed rule would absolve landlords and lenders of any legal responsibility for discrimination that results from a third-party computer algorithm. Such algorithms are already widespread in our society and are used to automate decisions about who gets a credit card, a lease, or a mortgage. As the proposal is written, landlords and lenders would be protected from charges of “disparate impact” (unintentional discrimination that nonetheless leads to wide disparities) so long as their algorithms don’t overtly factor in protected characteristics such as race, gender, religion, or disability status, or rely on proxy variables for those characteristics.
According to the experts, the HUD amendments related to algorithms are based on a fundamental “failure to recognize how modern algorithms can result in disparate impact … and how subtle the process for auditing algorithms for bias can be.”
Modern machine-learning algorithms are poorly understood and often draw highly complex correlations that even their designers may not be aware of. Any combination of factors, from location data to purchase history to musical preference, could be correlated as a proxy for race or another protected characteristic, with devastating consequences for protected groups.
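The proxy problem described above can be illustrated in a few lines of code. The sketch below is purely hypothetical and uses invented data and thresholds: a decision rule that never receives the protected attribute still approves the two groups at very different rates, because the one feature it does use is correlated with group membership.

```python
# Hypothetical illustration of a proxy variable: the decision rule never
# sees the protected attribute, yet outcomes differ by group because a
# ZIP-code-like score is correlated with group membership.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- never provided to the decision rule.
group = rng.integers(0, 2, size=n)

# Proxy feature: correlated with the protected attribute.
zip_score = rng.normal(loc=1.5 * group, scale=1.0)

# Decision rule that uses only the proxy feature.
approved = zip_score > 1.0

for g in (0, 1):
    print(f"group {g}: approval rate {approved[group == g].mean():.2f}")
```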
Despite the inherently opaque nature of these algorithms, there are ways to make them more transparent and fair, the group says. In their letter, the SFI and UNM experts call for transparency, recommending that designers of decision-making algorithms give independent auditors at least enough access to test the algorithms for bias by feeding them various inputs and observing how they respond. The authors also demand transparency for individual applicants, allowing them to view their own data and “contest, update, or refute that data if it is inaccurate.”
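The kind of audit the letter recommends treats the algorithm as a black box: the auditor only needs to submit inputs and observe decisions. The sketch below is a hypothetical illustration of that idea, not the group’s actual methodology; `opaque_decision` stands in for a proprietary third-party algorithm, and all names, data, and thresholds are invented. The audit compares approval rates across groups and reports their ratio, a common summary measure of disparate impact.

```python
# Hypothetical black-box audit: the auditor can only call the decision
# function and observe its outputs, yet can still measure disparate impact
# by comparing outcome rates across groups on probe inputs.
import numpy as np

def opaque_decision(features: np.ndarray) -> np.ndarray:
    """Stand-in for a proprietary third-party scoring algorithm."""
    # Hidden from the auditor: the rule leans heavily on a proxy feature.
    return 0.8 * features[:, 0] + 0.2 * features[:, 1] > 0.9

def audit(decision_fn, probes: np.ndarray, group: np.ndarray):
    """Measure outcome rates per group using only input/output access."""
    outcomes = decision_fn(probes)
    rates = {g: outcomes[group == g].mean() for g in np.unique(group)}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, size=n)
probes = np.column_stack([
    rng.normal(loc=group.astype(float), scale=0.5),  # proxy, correlated with group
    rng.normal(loc=1.0, scale=0.5, size=n),          # unrelated feature
])

rates, ratio = audit(opaque_decision, probes, group)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
```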
In their letter, the experts lay out four arguments against the proposed regulation:
- To ensure that an algorithm does not have disparate impact, it is not enough to show that individual input factors are not “substitutes or close proxies” for protected characteristics.
- It is impossible to audit an algorithm for bias without an adequate level of transparency or access to the algorithm.
- Allowing defendants to deflect responsibility to proprietary third-party algorithms effectively destroys disparate impact liability.
- The proposed regulation fails to account for the cumulative impact of multiple users of algorithms that, taken together, result in disparate impact on protected classes even though no individual user has liability under the proposed regulation.
Their full response is posted on the Federal Register, along with 2,456 other public comments as of Oct. 22, 2019.
The co-signatories are members of The Interdisciplinary Working Group for Algorithmic Justice and are available to provide their thoughts and expertise to policymakers on the use of algorithms in society. They are:
Melanie Moses, Professor, Department of Computer Science, The University of New Mexico, and Santa Fe Institute
G. Matthew Fricke, Research Assistant Professor, The University of New Mexico Department of Computer Science and Center for Advanced Research Computing
Alfred Mathewson, Professor Emeritus and former Dean, The University of New Mexico School of Law
Kathy Powers, Associate Professor, Department of Political Science, Senior Fellow, Center for Social Policy, The University of New Mexico
Sonia M. Gipson Rankin, Professor, The University of New Mexico School of Law
Gabriel R. Sanchez, Professor, Department of Political Science, Director, Center for Social Policy, The University of New Mexico
Cristopher Moore, Professor, Santa Fe Institute
Elizabeth Bradley, Professor, Computer Science Department, University of Colorado, Boulder, and the Santa Fe Institute
Mirta Galesic, Professor, Santa Fe Institute
Joshua Garland, Postdoctoral Fellow, Santa Fe Institute