
UNM joins Brown University in national institute focused on intuitive, trustworthy AI assistants

August 6, 2025

A $20 million grant from the National Science Foundation will help experts from institutions around the country establish an artificial intelligence research institute aimed at developing a new generation of AI assistants capable of trustworthy, sensitive and context-aware interactions with people. Companies have already deployed AI chatbots in mental and behavioral health settings, so ensuring that the next generation of AI assistants can respond in respectful, human-oriented ways is of the utmost importance.

[Photo: Melanie Moses]

The AI Research Institute on Interaction for AI Assistants (ARIA) team is led by Brown University and includes experts from leading research institutions nationwide, including The University of New Mexico; the Santa Fe Institute; Colby College; Dartmouth College; New York University; Carnegie Mellon University; the University of California, Berkeley; the University of California, San Diego; and Data and Society, a civil society organization in New York.

Melanie Moses, professor of computer science in the School of Engineering, and Sonia Gipson Rankin, professor in the School of Law, will lead UNM’s contribution to the project. The team will build and evaluate AI systems that understand human reasoning, respect community standards and adhere to principles of justice.

Computing’s influence on society has accelerated with new forms of AI, according to Moses.

“The law is how we address conflicts in our society, but it is difficult for the law to keep up with the rapid pace of change in computing and AI. In this project we have the opportunity to design trustworthy AI using computational methods, while considering the social and legal implications from the start,” Moses said.

Collaboration across fields of expertise during the design and evaluation phases of AI development can help create safeguards that protect users. UNM will contribute to the project its strengths in justice-centered innovation, human-centered values and established interdisciplinary research relationships.

“Integrating law, computer science, and a range of other disciplines is essential to developing AI systems that are not only innovative, but also trustworthy, rights-respecting, and aligned with the public interest,” Gipson Rankin said. “In fields like mental health, where trust, privacy, and ethical responsibility are paramount, we have a great opportunity to design AI that truly serves people. Moving forward requires creativity, conscience, and collaboration. This is exactly the kind of work that UNM does so very well.”

Earlier this year, UNM launched a new Level 1 Grand Challenge team focused on the development of provably trustworthy AI systems. Moses and Gipson Rankin lead that team alongside Stephanie Moore, associate professor in Organization, Information, and Learning Sciences. In collaboration with faculty from across UNM and UNM Health Sciences Center, the group seeks to bridge the gap between AI that is trustworthy in theoretical models and AI that is trustworthy when trained with noisy data and deployed in real-world scenarios. Participation in the NSF-funded ARIA project will help push UNM’s work on trustworthy AI forward.

[Photo: Sonia Gipson Rankin]

Creating AI systems that can operate safely in a sensitive area like mental health care will require capabilities that extend well beyond those of even today’s most advanced chatbots and language models, according to Ellie Pavlick, an associate professor of computer science at Brown University who will lead the ARIA collaboration.

“Any AI system that interacts with people, especially those who may be in states of distress or other vulnerable situations, needs a strong understanding of the human it’s interacting with, along with a deep causal understanding of the world and how the system’s own behavior affects that world,” Pavlick said. “At the same time, the system needs to be transparent about why it makes the recommendations that it does in order to build trust with the user. Mental health is a high-stakes setting that embodies all the hardest problems facing AI today. That’s why we’re excited to tackle this and figure out what it takes to get these things absolutely right.”

That work will require deep collaboration across institutions, expertise and academic disciplines, Pavlick said.

ARIA’s work will include a robust education and workforce development program spanning K-12 students through working professionals. The ARIA team will work with the Bootstrap program, a computer science curriculum developed at Brown, to support evidence-based practices for building new AI curricula and training for K-12 teachers. An initiative called the Building Bridges Summer Program will bring college and high school students from across the country to ARIA campuses to work on cutting-edge AI research. These educational opportunities and materials will also be deployed in New Mexico.

The need for this work is urgent, according to Pavlick. New startups and existing companies are already developing AI apps and chatbots for mental health support, and evidence suggests that people often turn to ChatGPT and other chatbots for relationship advice and other information tied to mental well-being.

“The work we’ll be doing on trust, safety and responsible AI will hopefully address immediate safety concerns with these systems — for example, developing safeguards against responses that reinforce delusions or unempathetic responses that could increase someone’s distress,” Pavlick said. “We need short-term solutions to avoid harms from systems already in wide use, paired with long-term research to fix these problems where they originate.”

New and smarter AI systems will be needed to deliver the kind of trustworthy, context-aware feedback required for safe and effective mental health interventions. Today’s large language models generate text through statistical inference, predicting which words to use next based on prior words or user inputs. Unlike humans, they don’t have a mental model of the world around them, they don’t understand chains of cause and effect, and they have little intuition about the internal states of the people with whom they interact.
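The statistical next-word prediction described above can be illustrated with a deliberately tiny sketch. This toy bigram model (a simplification for illustration only, not the architecture of any system named in this article) chooses each next word purely from counts of which word followed which in its training text, with no model of the world or of the user:

```python
from collections import Counter, defaultdict

# Toy training text for the illustration.
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often in this corpus
```

The model answers only "what word usually comes next," which is the gap the article points to: frequency statistics stand in for any understanding of cause, effect, or the person asking.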

At the same time, the team will engage legal scholars, philosophers, education experts and others to better understand how such systems would fit into existing social and cultural infrastructure.

“You don’t just want to take for granted that any system that you can build should exist, because not all of them will have a net benefit,” Pavlick said. “So we’ll be addressing questions about what systems should even be built and which should not.”

Ultimately, Pavlick says, developing smarter, more responsible AI will be a benefit not only in the mental health sphere, but in the course of AI development in general.

“We’re addressing this critical alignment question of how to build technology that is ultimately good for society,” she said. “These are extremely hard problems in AI in general that happen to have a particularly pointed use case in mental health. By working toward answers to these questions, we’ll work toward making AI that’s beneficial to all.”

ARIA is one of five national AI institutes that will receive a total of $100 million in funding, the National Science Foundation announced on Tuesday, July 29, in partnership with Capital One and Intel. The public-private investment aligns with the White House AI Action Plan, a national initiative to sustain and enhance America’s global AI leadership, the NSF noted.

“Artificial intelligence is key to strengthening our workforce and boosting U.S. competitiveness,” said Brian Stone, who is performing the duties of the NSF director. “Through the National AI Research Institutes, we are turning cutting-edge ideas and research into real-world solutions and preparing Americans to lead in the technologies and jobs of the future.”