Machine Learning

Learn about the latest advances in machine learning that allow systems to learn and improve over time.

Stanford AI Scholars Find Support for Innovation in a Time of Uncertainty
Nikki Goth Itoi
Jul 01, 2025
News

Stanford HAI offers critical resources for faculty and students to continue groundbreaking research across the vast AI landscape.

Stories for the Future 2024
Isabelle Levent
Deep Dive
Mar 31, 2025
Research

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation, but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Whose Opinions Do Language Models Reflect?
Esin Durmus, Tatsunori Hashimoto, Faisal Ladhak, Cinoo Lee, Percy Liang, Shibani Santurkar
Quick Read
Sep 20, 2023
Policy Brief

Since the November 2022 debut of ChatGPT, language models have been all over the news. But as people use chatbots—to write stories and look up recipes, to make travel plans and even further their real estate business—journalists, policymakers, and members of the public are increasingly paying attention to the important question of whose opinions these language models reflect. In particular, one emerging concern is that AI-generated text may be able to influence our views, including political beliefs, without us realizing it. This brief introduces a quantitative framework that allows policymakers to evaluate the behavior of language models to assess what kinds of opinions they reflect.
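As a concrete illustration, the kind of comparison such a framework enables can be sketched as a distributional match between a model's answers and a human reference. The sketch below is an assumption about the general approach, not the brief's exact metric, and the function name and normalization are illustrative: it scores how closely a language model's probability distribution over a multiple-choice survey question's ordered options matches a human survey distribution, using a normalized 1-Wasserstein distance.

import numpy as np

def opinion_alignment(model_probs, human_probs):
    # Both inputs are probability vectors over the same K ordered answer options.
    # Returns a score in [0, 1]; 1 means the two distributions match exactly.
    model_probs = np.asarray(model_probs, dtype=float)
    human_probs = np.asarray(human_probs, dtype=float)
    # On an ordinal support with unit spacing, the 1-Wasserstein distance equals
    # the summed absolute difference of the cumulative distributions.
    emd = np.abs(np.cumsum(model_probs) - np.cumsum(human_probs)).sum()
    k = len(model_probs)
    return 1.0 - emd / (k - 1)  # normalize by the maximum possible distance

# Hypothetical example: the model leans toward option 0, respondents toward option 3.
print(opinion_alignment([0.7, 0.2, 0.1, 0.0], [0.1, 0.1, 0.2, 0.6]))  # ~0.37

Averaging such scores across many questions and demographic subgroups indicates which groups a model's expressed opinions most resemble.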

Teddy J. Akiki
Person

Digital Twins Offer Insights into Brains Struggling with Math — and Hope for Students
Andrew Myers
Jun 06, 2025
News

Researchers used artificial intelligence to analyze the brain scans of students solving math problems, offering the first-ever peek into the neuroscience of math disabilities.

The Promise and Perils of Artificial Intelligence in Advancing Participatory Science and Health Equity in Public Health
Abby C King, Zakaria N Doueiri, Ankita Kaulberg, Lisa Goldman Rosas
Feb 14, 2025
Research

Current societal trends reflect an increased mistrust in science and a lowered civic engagement that threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications to increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.

All Work Published on Machine Learning

Better Benchmarks for Safety-Critical AI Applications
Nikki Goth Itoi
May 27, 2025
News

Stanford researchers investigate why models often fail in edge-case scenarios.

Policy-Shaped Prediction: Avoiding Distractions in Model-Based Reinforcement Learning
Nicholas Haber, Miles Huston, Isaac Kauvar
Dec 13, 2024
Research

Model-based reinforcement learning (MBRL) is a promising route to sample-efficient policy optimization. However, a known vulnerability of reconstruction-based MBRL consists of scenarios in which detailed aspects of the world are highly predictable, but irrelevant to learning a good policy. Such scenarios can lead the model to exhaust its capacity on meaningless content, at the cost of neglecting important environment dynamics. While existing approaches attempt to solve this problem, we highlight its continuing impact on leading MBRL methods, including DreamerV3 and DreamerPro, with a novel environment where background distractions are intricate, predictable, and useless for planning future actions. To address this challenge, we develop a method for focusing the capacity of the world model through synergy of a pretrained segmentation model, a task-aware reconstruction loss, and adversarial learning. Our method outperforms a variety of other approaches designed to reduce the impact of distractors, and is an advance towards robust model-based reinforcement learning.
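To make the idea of a task-aware reconstruction loss concrete, here is a minimal sketch, assuming a pretrained segmentation model supplies a per-pixel relevance mask; the weighting scheme and names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def weighted_reconstruction_loss(pred_obs, true_obs, relevance_mask, bg_weight=0.1):
    # pred_obs, true_obs: (B, C, H, W) observations; relevance_mask: (B, 1, H, W) in [0, 1],
    # e.g. derived from a pretrained segmentation model marking task-relevant regions.
    per_pixel = F.mse_loss(pred_obs, true_obs, reduction="none")   # per-pixel squared error
    weights = bg_weight + (1.0 - bg_weight) * relevance_mask       # down-weight background pixels
    return (weights * per_pixel).mean()

Training a world model with a loss of this shape spends less capacity reconstructing predictable but irrelevant background, which is the failure mode the paper targets.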

Improving Transparency in AI Language Models: A Holistic Evaluation
Rishi Bommasani, Daniel Zhang, Tony Lee, Percy Liang
Quick Read
Feb 28, 2023
Issue Brief

In this brief, Stanford scholars introduce Holistic Evaluation of Language Models (HELM) as a framework to evaluate commercial applications of AI use cases.

Feng Vankee Lin
Associate Professor of Psychiatry and Behavioral Sciences (Public Mental Health & Population Sciences)
Person

The AI Race Has Gotten Crowded—and China Is Closing In on the US
Wired
Apr 07, 2025
Media Mention

Vanessa Parli, Stanford HAI Director of Research and AI Index Steering Committee member, notes that the 2025 AI Index reports flourishing and higher-quality academic research in AI.

Deciphering the Feature Representation of Deep Neural Networks for High-Performance AI
Md Tauhidul Islam, Lei Xing
Aug 01, 2024
Research

AI driven by deep learning is transforming many aspects of science and technology. The enormous success of deep learning stems from its unique capability of extracting essential features from Big Data for decision-making. However, the feature extraction and hidden representations in deep neural networks (DNNs) remain inexplicable, primarily because of a lack of technical tools to comprehend and interrogate the feature space data. The main hurdle here is that the feature data are often noisy in nature, complex in structure, and huge in size and dimensionality, making it intractable for existing techniques to analyze the data reliably. In this work, we develop a computational framework named contrastive feature analysis (CFA) to facilitate the exploration of the DNN feature space and improve the performance of AI. By utilizing the interaction relations among the features and incorporating a novel data-driven kernel formation strategy into the feature analysis pipeline, CFA mitigates the limitations of traditional approaches and provides an urgently needed solution for the analysis of feature space data. The technique allows feature data exploration in unsupervised, semi-supervised and supervised formats to address different needs of downstream applications. The potential of CFA and its applications for pruning of neural network architectures are demonstrated using several state-of-the-art networks and well-annotated datasets across different disciplines.
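As a rough illustration of feature-space exploration with a data-driven kernel, the sketch below is a generic stand-in rather than the paper's CFA implementation: it takes activations from a trained network, sets an RBF bandwidth from the median pairwise distance (one common data-driven choice), and projects the features with kernel PCA for inspection.

import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import KernelPCA

def kernel_feature_embedding(features, n_components=2):
    # features: (n_samples, n_features) array of activations from a trained DNN layer.
    median_dist = np.median(pdist(features))             # data-driven kernel bandwidth
    gamma = 1.0 / (2.0 * median_dist ** 2)                # RBF kernel parameter
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=gamma)
    return kpca.fit_transform(features)                   # low-dimensional embedding to inspect

The resulting embedding can be plotted or clustered to probe how the network organizes its hidden representation, which is the kind of feature-space analysis the paper automates and extends.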
