Rethinking how we study artificial intelligence
An interdisciplinary initiative at UCLA sees new research possibilities at the intersection of the humanities, neuroscience and engineering
Amisha Gadani / composite by Trever Ducote/UCLA
The NNAI initiative’s animal illustrations come courtesy of Amisha Gadani, senior artist for the UCLA Institute for Society and Genetics, and represent diverse forms of “more-than-human” intelligence.
By Lucy Berbeo | October 14, 2025
Of the qualities unique to human beings, our capacity for language has long been recognized as a defining trait. But what happens to that idea when technologies like Siri and ChatGPT seem to process and generate language in ever more humanlike ways? If machines are capable of holding conversations with us, might they be considered human too?
Faced with questions like these, we typically rely on knowledge from the experts — engineers, computer scientists — behind the technologies. But scholars associated with the UCLA Livescu Initiative on Neuro, Narrative and Artificial Intelligence are taking a different tack: At one event held on campus, for instance, they turned to the wisdom of 18th-century philosopher Jean-Jacques Rousseau’s essay on the illusions of theater.
This interdisciplinary exploration represents just one facet of the work of NNAI, a multi-year seminar, symposium and research-generation initiative housed in the UCLA Division of Life Sciences’ Institute for Society and Genetics. Made possible by a major gift from the Livescu Foundation, NNAI unites scholars from fields as diverse as medicine and literature to promote a collaborative, humanities-forward approach to far-reaching questions and challenges related to human and “other-than-human” thinking.
“The people who are experts on AI in the sense of creating it are often not the only people who have knowledge about its impacts, or important things to say about how it’s changing the world,” said NNAI director Christopher Kelty, a UCLA professor in the Institute for Society and Genetics with appointments in anthropology and information studies. “By creating lateral connections across disciplines, schools and expertises, we’re working to ensure there are multiple voices in the room when we talk about these things.”
To date, NNAI has brought together dozens of scholars from across UCLA, the U.S. and the world to participate in faculty seminars, workshops and public lectures, and it has sought to engage a broad community of scholars, students, policymakers and industry professionals. Organized by academic coordinator Marcos Arranz, the initiative’s activities center on core areas of inquiry that span such topics as the history of neuroscience, the philosophical origins of intelligence and disembodied media.
The goal, Kelty said, is not to showcase expertise but to create a platform for advancing new and innovative contributions in research and education. With an advisory board comprising faculty from across UCLA, NNAI draws on the university’s depth of expertise in AI and neuroscience, and it partners with existing campus initiatives working in these areas, including UCLA DataX, the UCLA-CDU Dana Center and the UCLA Center for Critical Internet Inquiry (C2i2). But what sets NNAI apart, he said, is a driving focus on the synergies that emerge from the overlap between disparate disciplines.
“By getting engineers in the room with neuroscientists and humanities scholars in the room with doctors, we generate interdisciplinary dialogue and connection, but also opportunities for collaboration,” Kelty said. “Science doesn’t just happen because one person has a good idea and finds others to work with — it happens because of those random, collaborative sparks that lead people to propose ideas that they wouldn’t have otherwise.”
Training the next generation of researchers is also a key goal. This spring, NNAI partnered with Shazeda Ahmed, a chancellor’s postdoctoral fellow at UCLA’s Center on Race and Digital Justice, to host “Confronting the Crises of AI Through Research,” a seminar series open to UCLA graduate students in all disciplines. The series introduced participants to new methods for studying AI as non-experts, preparing them to explore its urgent questions from a range of scholarly perspectives.
The initiative has also supported the work of two Bruin undergraduate researchers. Rising senior Chaya Manjeshwar completed a project on ethical and clinical considerations of AI in health technology, while Isabella Yuan, who graduated in June with a pre-law bachelor of arts degree in human biology and society and a minor in philosophy, focused on AI, copyright law and personhood. Yuan said she hopes her research will help inform how legal systems might adapt to emerging technologies — and she credits the experience with broadening her perspective on her chosen field of study.
“On the one hand, it’s easy to be drawn in by new technologies without fully considering their broader social and ethical implications; on the other, it’s easy to become overly cautious or dismissive of innovation without understanding its potential,” said Yuan, who plans to explore related cases in law school. “NNAI’s approach is important because it emphasizes engaging critically and meaningfully with the implications of AI in a time of rapid technological advancement.”
NNAI hosts a lively slate of outreach and engagement activities, including blog posts and other efforts geared toward the general public, and is supported by the work of ISG’s senior artist, Amisha Gadani. In May, Arranz spearheaded a collaboration with Pint of Science, a nonprofit organization that brings scholars into conversation with the public at local bars and cafes. Featuring UCLA computer science research associate Wayne Wu, the event focused on developments and challenges presented by delivery robots, intelligent scooters and other AI technologies shaping urban mobility.
“We held it at a bar in Santa Monica, and anyone who came in was invited to participate,” Arranz said. “It was a small but engaged crowd — people were excited, asking questions and participating in trivia. It was a great way of communicating science to another audience.”
Looking toward the future, Kelty said, NNAI will continue to broaden its growing circle of collaborating scholars and pursue new avenues for deepening the broader societal conversation.
“The debates and public discussion around AI — but also around new discoveries in neuroscience, such as neural enhancement — tend to be simplistic, in part because there’s not a way to communicate alternatives to the standard stories we hear from big tech or in the news,” Kelty said. “We’re hoping to offer people different ways to think about some of these new technologies that are affecting everybody, everywhere.”
Changing the Conversation
Throughout the 2024–25 academic year, the UCLA Livescu Initiative on Neuro, Narrative and Artificial Intelligence presented a series of public lectures and faculty seminars to engage audiences from undergraduates to university researchers in discussions on critical topics shaping our world and future. Conversations including “Why Turing Was Wrong,” “Can Humans Think?” and “Ghostwriting: A Secret History of the No Bodies Who Write” covered hot-button issues, while a forum on “Confronting the Crises of AI Through Research” brought multidisciplinary scholars together for intellectual discussion and debate outside of normal venues.
“I hope that my findings help start a conversation on how we evaluate AI in healthcare, especially focusing on what changes these technologies can bring to all stakeholders: the patients, clinicians and our health systems. I would love to keep building on this by learning more from people already working in this field, and see how this framework can be used as a decision-making tool to guide AI adoption in clinical settings.”
— Chaya Manjeshwar, undergraduate student researcher