2025 Conference: Artificial Intelligence and the Brain
The Institute for Mind and Brain at the University of South Carolina will hold a one-day,
in-person conference on Artificial Intelligence and the Brain on Friday, March 28, 2025. It is the sixth in a series of regular forums highlighting current topics in
cognitive neuroscience. The conference will feature external speakers, as well as
invited contributions by local researchers and a poster session.
Date: Friday, March 28, 2025
Time: 8:30 AM to 5:30 PM
Held at the USC Conference Center at Capstone Hall
Address: Campus Room in the Capstone Hall Building, 902 Barnwell Street, Columbia, SC 29208
Free registration
This symposium is free to trainees, faculty, and staff. A boxed lunch will be provided
for the first 100 registrants. Please register for the conference before March 14:
RSVP Link
Posters
Showcase your work! Students and postdocs are welcome to present their work in person (no virtual option).
Posters related to the conference themes, as well as topics within cognitive science
and neuroscience, are acceptable.
Poster abstracts (< 250 words) should be submitted (link) by March 14, 2025, at 11:59 p.m. Eastern Time.
Posters must fit on a 4 ft × 4 ft board.
Tentative agenda for the conference
Start | Agenda | Title | Speaker
8:30 AM | Coffee & Registration | |
8:45 AM | Welcome | | Dr. Rutvik Desai (IMB Director); Dean Joel Samuels (Arts and Sciences); Dr. Christian O’Reilly (Conference Chair)
9:00 AM | Invited speaker | Learning representations of complex meaning in the brain | Dr. Leila Wehbe
10:00 AM | Invited speaker | Reverse engineering the emotional brain: What can artificial neural networks tell us about human emotion? | Dr. Philip Kragel
11:00 AM | Break | |
11:15 AM | Local talk | How can LLMs and Other AI Models help in Neuroimaging Analysis? | Dr. Amit Sheth
12:00 PM | Lunch break & Poster session | |
1:30 PM | Local talk | Using Modeling and Machine Learning to study the Brain | Dr. Christian O’Reilly
2:15 PM | Invited speaker | Bridging AI and Clinical Practice: Trustworthy AI for Stroke Risk and Management in a Privacy-Preserving World | Dr. Khalid M. Malik
3:15 PM | Break | |
3:30 PM | Invited speaker | Conceptual representations in the brain versus AI | Dr. Jack L. Gallant
4:30 PM | Panel discussion | | All speakers
5:30 PM | End of conference | |
Featured speakers
Dr. Philip Kragel
Emory University, Assistant Professor, Department of Psychology, Department of Psychiatry and Behavioral
Sciences
Title: Reverse engineering the emotional brain: What can artificial neural networks tell
us about human emotion?
Emotion is fundamental to human nature, having pervasive influences on learning, memory,
decision-making, social behavior, and subjective experience. Research in nonhuman
animals shows that emotional behaviors are mediated by distributed neural networks
spanning the frontal cortex, subcortex, and midbrain. The computations implemented
by these circuits enable animals to successfully navigate threats and opportunities
in the environment. This work has precisely resolved circuit-level function and offers
insight into the brain basis of behavior; however, the translation of findings to
humans is largely unknown because of differences in neuroanatomy across species and
our inability to measure the inner experience of nonhuman animals. In this talk, I
will present work from my lab that aims to bridge this gap using artificial neural
networks capable of explaining behavior and neural circuit function across species.
I will discuss how this approach can provide a more complete understanding of human
emotion by explicitly modeling how the brain transforms sensory inputs into low-dimensional
variables useful for adaptive behavior.
Dr. Kragel is an Assistant Professor in the Department of Psychology at Emory University.
He received a Bachelor of Science and Engineering (2006), a Master’s in Engineering
Management (2007), and a Ph.D. in Psychology and Neuroscience (2015) from Duke University.
Prior to joining the faculty at Emory in 2020, he was a postdoctoral associate at
the University of Colorado Boulder’s Institute of Cognitive Science. He currently
directs the Emotion, Cognition, and Computation laboratory, which is devoted to understanding
the neural underpinnings of human cognition and emotion. The lab works to advance
biologically grounded models of human behavior by integrating techniques including
fMRI, peripheral physiological recording, and computational modeling.
Dr. Leila Wehbe
Carnegie Mellon University, Associate Professor, Machine Learning Department & Neuroscience Institute
Title: Learning representations of complex meaning in the brain
It has become increasingly common to use representations extracted from modern AI
models for language and vision to study these same processes in the human brain. This
approach can achieve accurate prediction of brain activity, often accounting for
almost all the variance in the recordings that is not attributable to noise. However,
better prediction performance doesn't always lead to better scientific interpretability.
This talk presents some approaches for the difficult problem of making scientific
inferences about how the brain represents high-level meaning. We also discuss how
to go beyond aligning AI representations and brains. Instead, we directly learn the
representations used in a brain region from its activity recordings. Using modern
AI tools, data from naturalistic neuroimaging experiments, and other large-scale datasets,
we reconstruct the representations and preferences of individual voxels and suggest
new subdivisions that are more refined than existing regions of interest. This perspective
draws a close connection between brains and AI models, reveals new aspects of brain
function, and can serve as the basis for more powerful brain-computer interfaces.
Leila Wehbe is an associate professor in the Machine Learning Department and the Neuroscience
Institute at Carnegie Mellon University. Her work is at the interface of cognitive
neuroscience and computer science. It combines naturalistic functional imaging with
machine learning, both to improve our understanding of the brain and to gain insights
for building better artificial systems. She is the recipient of an NSF CAREER award, a
Google faculty research award and an NIH CRCNS R01. Previously, she was a postdoctoral
researcher at UC Berkeley and obtained her PhD from Carnegie Mellon University.
Dr. Khalid M. Malik
University of Michigan-Flint, Professor, Department of Computer Science
Title: Bridging AI and Clinical Practice: Trustworthy AI for Stroke Risk and Management in
a Privacy-Preserving World
Cerebrovascular diseases, including stroke and related conditions, are among the leading
causes of global morbidity and mortality. Effective clinical management of these complex
conditions requires accurate risk assessment, timely intervention, and individualized
treatment strategies. This talk presents NeuroAssist, a framework combining multimodal
neurosymbolic AI with federated learning to address the challenges of hemorrhagic and
ischemic stroke management. It will explain how to perform privacy-preserving machine
learning across institutions with human-understandable reasoning, enhancing trust and
usability in clinical settings. Using cerebral aneurysm risk prediction as a case study,
the talk will demonstrate how NeuroAssist empowers neurosurgeons with AI-driven tools
for subarachnoid hemorrhage prediction, stroke risk stratification, and personalized
treatment optimization.
Dr. Khalid Malik is a Professor of computer science and director of cybersecurity
at the College of Innovation and Technology, University of Michigan-Flint. His research
centers on designing secure, intelligent, and decentralized decision support systems
using multimodal, federated, trustworthy, and neuro-symbolic AI. In healthcare, he
specializes in predicting cerebrovascular and cardiovascular events through clinical
text and multiple medical imaging modalities (e.g., DSA, MRA). In cybersecurity, his
research is directed towards developing forensic examiners to ensure the authenticity,
integrity, and veracity of multimedia (audio, video, and images) and implementing web
filtering using multimodal and neuro-symbolic AI. Dr. Malik’s research is funded by
multiple National Science Foundation awards, the Brain Aneurysm Foundation, the Department
of Energy, the Michigan Translational Research and Commercialization (MTRAC) Innovation
Hub, MTRAC Life Sciences, and several national and international industry partners.
He is a recipient of numerous accolades, including Oakland’s Young Investigator
Research Award (2018), the SECS Outstanding Research Award (2019), and the Distinguished
Associate Professor Award (2021).
Dr. Jack L. Gallant
University of California, Berkeley, Professor, Department of Neuroscience
Title: Conceptual representations in the brain versus AI
Human behavior is based on a complex interaction between perception, stored knowledge,
and continuous evaluation of the world relative to plans and goals. Even simple tasks
involve processes whose underlying circuitry is broadly distributed across the brain.
A key component of this system is the Distributed Conceptual Network (DCN), which
integrates perceptual information with memory in the service of current plans and
goals, supporting attention, working memory, and conscious experience. In this talk,
I will contrast the architecture and function of the DCN with current AI systems such
as transformer-based LLMs and reinforcement learning agents.
Jack Gallant is co-Director of the Henry H. Wheeler Jr. Brain Imaging Center and the
Class of 1940 Chair at the University of California at Berkeley. He holds appointments
in the Departments of Neuroscience and Electrical Engineering and Computer Science,
and is a member of the programs in Bioengineering, Biophysics, and Vision Science.
He is a senior member of the IEEE, and served as the 2022 Chair of the IEEE Brain
Community. Professor Gallant's research focuses on high-resolution functional mapping
and quantitative computational modeling of human brain networks. His lab has created
the most detailed current functional maps of human brain networks mediating vision,
language comprehension and navigation, and they have used these maps to decode and
reconstruct perceptual experiences directly from brain activity. Further information
about ongoing work in the Gallant lab, links to talks and papers, and links to online
interactive brain viewers can be found at http://gallantlab.org.
Dr. Amit Sheth
University of South Carolina, Professor, Artificial Intelligence Institute
Title: How can LLMs and Other AI Models help in Neuroimaging Analysis?
TBD
Professor Sheth is an educator, researcher, and entrepreneur. He is the founding director
of the university-wide AI Institute at the University of South Carolina. He is a Fellow
of the IEEE, AAAI, AAAS, and ACM, elected for his pioneering and enduring contributions
to information integration, distributed workflow processes, semantics, and knowledge-enhanced
computing. He has (co-)founded four companies, three of them by licensing his
university research outcomes, including the first Semantic Search company in 1999,
which pioneered technology similar to what is found today in Google Semantic Search
and Knowledge Graph. He is particularly proud of the success of his 45 Ph.D. advisees
and postdocs in academia, industry research, and entrepreneurship. He received his B.E.
(Hons) from BITS-Pilani, India, and his MS and PhD from the Ohio State University, USA.
Dr. Christian O’Reilly
University of South Carolina, Assistant Professor, Department of Computer Science and Engineering, Artificial Intelligence Institute, Carolina Autism Neurodevelopment Research Center, Institute for Mind and Brain
Title: Using Modeling and Machine Learning to study the Brain
Dynamic causal modeling has been an influential approach for studying effective connectivity
in the brain, as supported by about 1,400 papers on PubMed mentioning this approach
(as of January 2025). However, we can conceptualize this approach within the more
comprehensive framework of model-driven analysis. In this framework, a biologically
relevant generative model is designed and fitted to experimental data to gain insight
into potentially latent variables and processes. In this talk, I will present this
general framework, connect it to existing siloed approaches, emphasize its crux (i.e.,
the inference of dynamical systems parameters), and discuss how AI and, more specifically,
deep learning can help address this thorny problem. In doing so, I will also discuss
how biological neural processes are modeled, one of the core elements of this approach.
Christian O’Reilly received his B.Ing. (electrical engineering; 2007), M.Sc.A. (biomedical
engineering; 2011), and Ph.D. (biomedical engineering; 2012) from the École Polytechnique de
Montréal where he applied pattern recognition and machine learning to predict brain
stroke risks. He was later a postdoctoral fellow at the Université de Montréal (2012-2014)
and then an NSERC postdoctoral fellow at McGill's Brain Imaging Center (2014-2015)
where he worked on characterizing EEG sleep transients, their sources, and their functional
connectivity. From 2015 to 2018, he led the large-scale biophysically-detailed modeling
of the thalamocortical loop at the Blue Brain project. He then worked as a Research
Associate at the Azrieli Centre for Autism Research (McGill) where he studied brain
connectivity in autism and related neurodevelopmental disorders. In 2021, Christian
joined the Department of Computer Science and Engineering, the Artificial Intelligence
Institute (AIISC), and the Carolina Autism and Neurodevelopmental (CAN) Research Center
at the University of South Carolina as an assistant professor in neuroscience and
artificial intelligence.