Generative AI

When AI Imagines a Tree: How Your Chatbot’s Worldview Shapes Your Thinking
Katie Gray Garrison
Jul 28, 2025
News | Ethics, Equity, Inclusion | Generative AI

A new study on generative AI argues that addressing biases requires a deeper exploration of ontological assumptions, challenging the way we define fundamental concepts like humanity and connection.

Stories for the Future 2024
Isabelle Levent
Mar 31, 2025
Research | Deep Dive | Machine Learning | Generative AI | Arts, Humanities | Communications, Media | Design, Human-Computer Interaction | Sciences (Social, Health, Biological, Physical)

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI and filmmakers reflected on the challenges of writing AI narratives. Together researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation, but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Labeling AI-Generated Content May Not Change Its Persuasiveness
Isabel Gallegos, Chen Shani, Weiyan Shi, Federico Bianchi, Izzy Benjamin Gainsburg, Dan Jurafsky, Robb Willer
Jul 30, 2025
Policy Brief | Quick Read | Generative AI | Regulation, Policy, Governance

This brief evaluates the impact of authorship labels on the persuasiveness of AI-written policy messages.

David Nguyen
Person | Economy, Markets | Workforce, Labor | Generative AI

Social Science Moves In Silico
Katharine Miller
Jul 25, 2025
News | Generative AI | Natural Language Processing | Sciences (Social, Health, Biological, Physical)

Despite limitations, advances in AI offer social science researchers the ability to simulate human subjects.

The Promise and Perils of Artificial Intelligence in Advancing Participatory Science and Health Equity in Public Health
Abby C. King, Zakaria N. Doueiri, Ankita Kaulberg, Lisa Goldman Rosas
Feb 14, 2025
Research | Foundation Models | Generative AI | Machine Learning | Natural Language Processing | Sciences (Social, Health, Biological, Physical) | Healthcare

Current societal trends reflect an increased mistrust in science and a lowered civic engagement that threaten to impair research that is foundational for ensuring public health and advancing health equity. One effective countermeasure to these trends lies in community-facing citizen science applications to increase public participation in scientific research, making this field an important target for artificial intelligence (AI) exploration. We highlight potentially promising citizen science AI applications that extend beyond individual use to the community level, including conversational large language models, text-to-image generative AI tools, descriptive analytics for analyzing integrated macro- and micro-level data, and predictive analytics. The novel adaptations of AI technologies for community-engaged participatory research also bring an array of potential risks. We highlight possible negative externalities and mitigations for some of the potential ethical and societal challenges in this field.

All Work Published on Generative AI

The Art of the Automated Negotiation
Matty Smith
Jun 18, 2025
News | Automation | Generative AI | Economy, Markets

Different AI agents have wildly different negotiation skills. If we outsource these tasks to agents, we may need to bring the "best" AI agent to the digital table.

pyvene: A Library for Understanding and Improving PyTorch Models via Interventions
Zhengxuan Wu, Atticus Geiger, Jing Huang, Noah Goodman, Christopher Potts, Aryaman Arora, Zheng Wang
Jun 01, 2024
Research | Natural Language Processing | Generative AI | Machine Learning | Foundation Models

Interventions on model-internal states are fundamental operations in many areas of AI, including model editing, steering, robustness, and interpretability. To facilitate such research, we introduce pyvene, an open-source Python library that supports customizable interventions on a range of different PyTorch modules. pyvene supports complex intervention schemes with an intuitive configuration format, and its interventions can be static or include trainable parameters. We show how pyvene provides a unified and extensible framework for performing interventions on neural models and sharing the intervened upon models with others. We illustrate the power of the library via interpretability analyses using causal abstraction and knowledge localization. We publish our library through Python Package Index (PyPI) and provide code, documentation, and tutorials at https://github.com/stanfordnlp/pyvene.
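
The library's own configuration objects and intervention classes are documented in the repository linked above; as a rough illustration of what an "intervention on a model-internal state" means in practice, the plain-PyTorch sketch below (not pyvene's API) patches a hidden activation with a forward hook. The tiny model and the choice of which unit to overwrite are hypothetical.

    # Illustrative sketch only: plain PyTorch forward hooks, not pyvene's API.
    # It shows the core idea of an intervention: overwrite part of a hidden
    # representation during the forward pass and observe the effect downstream.
    import torch
    import torch.nn as nn

    model = nn.Sequential(          # hypothetical stand-in for a real network
        nn.Linear(16, 32),
        nn.ReLU(),
        nn.Linear(32, 4),
    )

    def zero_first_unit(module, inputs, output):
        patched = output.clone()
        patched[:, 0] = 0.0         # a static intervention on one hidden unit
        return patched              # returning a tensor replaces the module's output

    handle = model[0].register_forward_hook(zero_first_unit)
    with torch.no_grad():
        intervened_logits = model(torch.randn(2, 16))
    handle.remove()                 # restores the unmodified model

Per the abstract, pyvene generalizes this pattern: interventions are declared in a configuration format rather than hand-registered as hooks, can be static or carry trainable parameters, and the intervened-upon model can be shared with others.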

Simulating Human Behavior with AI Agents
Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie J. Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, Michael S. Bernstein
May 20, 2025
Policy Brief | Quick Read | Generative AI

This brief introduces a generative AI agent architecture that can simulate the attitudes of more than 1,000 real people in response to major social science survey questions.
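
The brief describes the agent architecture only at this high level; as a hedged sketch of the general idea (not the authors' system), the snippet below conditions a language model on a person's interview material and asks it to answer a survey item as that person. The function name, prompt wording, and fallback rule are all hypothetical.

    # Hypothetical sketch, not the architecture from the brief: it illustrates
    # conditioning an LLM on interview material and eliciting a survey response.
    from typing import Callable

    def simulate_survey_answer(
        interview_transcript: str,            # text describing the real person
        survey_question: str,                 # e.g., a social science survey item
        answer_options: list[str],
        ask_llm: Callable[[str], str],        # any function that queries an LLM
    ) -> str:
        prompt = (
            "Answer the survey question as the person described in this interview.\n\n"
            f"Interview:\n{interview_transcript}\n\n"
            f"Question: {survey_question}\n"
            f"Options: {', '.join(answer_options)}\n"
            "Reply with exactly one of the options."
        )
        reply = ask_llm(prompt).strip()
        # Fall back to the first option if the reply is not a listed option.
        return reply if reply in answer_options else answer_options[0]

Aggregating simulated answers across many such personas would then allow comparison with the attitudes the real respondents reported.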

Percy Liang
Associate Professor of Computer Science, Stanford University | Director, Stanford Center for Research on Foundation Models | Senior Fellow, Stanford HAI
Person | Foundation Models | Generative AI | Machine Learning | Natural Language Processing

How Language Bias Persists in Scientific Publishing Despite AI Tools
Scott Hadly
Jun 16, 2025
News | Ethics, Equity, Inclusion | Generative AI

Stanford researchers highlight the ongoing challenges of language discrimination in academic publishing, revealing that AI tools may not be the solution for non-native speakers.

A Large Scale RCT on Effective Error Messages in CS1
Sierra Wang, John Mitchell, Christopher Piech
Mar 07, 2024
Research | Natural Language Processing | Foundation Models | Generative AI

In this paper, we evaluate the most effective error message types through a large-scale randomized controlled trial conducted in an open-access, online introductory computer science course with 8,762 students from 146 countries. We assess existing error message enhancement strategies, as well as two novel approaches of our own: (1) generating error messages using OpenAI's GPT in real time and (2) constructing error messages that incorporate the course discussion forum. By examining students' direct responses to error messages, and their behavior throughout the course, we quantitatively evaluate the immediate and longer term efficacy of different error message types. We find that students using GPT generated error messages repeat an error 23.1% less often in the subsequent attempt, and resolve an error in 34.8% fewer additional attempts, compared to students using standard error messages. We also perform an analysis across various demographics to understand any disparities in the impact of different error message types. Our results find no significant difference in the effectiveness of GPT generated error messages for students from varying socioeconomic and demographic backgrounds. Our findings underscore GPT generated error messages as the most helpful error message type, especially as a universally effective intervention across demographics.
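
The abstract specifies only that error messages were generated with OpenAI's GPT in real time; the sketch below is a hypothetical minimal version of that technique using the OpenAI Python client. The model name, prompt, and function name are assumptions, not details from the study.

    # Hypothetical sketch of the technique named in the abstract: rewrite a raw
    # Python error into a beginner-friendly explanation with an OpenAI chat model.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def friendly_error_message(student_code: str, raw_error: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; the study does not name one here
            messages=[
                {"role": "system",
                 "content": "You help beginner programmers understand Python errors. "
                            "Explain the error in one or two plain sentences and suggest a fix."},
                {"role": "user",
                 "content": f"Code:\n{student_code}\n\nError:\n{raw_error}"},
            ],
        )
        return response.choices[0].message.content

    # Example: a NameError caused by a typo in a student's submission.
    print(friendly_error_message("print(nmae)", "NameError: name 'nmae' is not defined"))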
