Michael S. Bernstein | Stanford HAI


Michael S. Bernstein

Associate Professor of Computer Science | Senior Fellow, HAI | STMicroelectronics Faculty Scholar, Stanford University


Michael Bernstein is an Associate Professor of Computer Science and STMicroelectronics Faculty Scholar at Stanford University, where he is a member of the Human-Computer Interaction Group. His research focuses on the design of social computing systems. This research has won best paper awards at top human-computer interaction conferences, including CHI, CSCW, ICWSM, and UIST, and has been covered in venues such as The New York Times, New Scientist, Wired, and The Guardian. Michael has been recognized with an Alfred P. Sloan Fellowship, a UIST Lasting Impact Award, and the Patrick J. McGovern Tech for Humanity Prize. He holds a bachelor's degree in Symbolic Systems from Stanford University and a master's degree and Ph.D. in Computer Science from MIT.


Latest Related to Michael S. Bernstein

policy brief

Simulating Human Behavior with AI Agents

Percy Liang, Robb Willer, Michael S. Bernstein, Aaron Shaw, Benjamin Mako Hill, Carolyn Q. Zou, Carrie J. Cai, Meredith Ringel Morris, Joon Sung Park
Generative AI | May 20

This brief introduces a generative AI agent architecture that can simulate the attitudes of more than 1,000 real people in response to major social science survey questions.

response to request

Response to NSF’s Request for Information on Research Ethics

Margaret Levi, Michael S. Bernstein, Quinn Waeiss, Raio Huang, Betsy Arlene Rajala, David Magnus, Debra Satz
Ethics, Equity, Inclusion | Nov 22

In this response to the National Science Foundation’s (NSF) request for information related to research ethics, a group of scholars affiliated with Stanford’s Ethics and Society Review (ESR) and Stanford HAI share lessons drawn from their five years of experience operating the ESR ethical reflection process as a requirement for HAI research grants. They make the case for promoting ethical and societal reflection within NSF’s grantmaking and highlight the common ethical issues that arise as part of AI research reviews.

Research

Internal Fractures: The Competing Logics of Social Media Platforms

Jeanne Tsai, Jeffrey Hancock, Angèle Christin, Michael S. Bernstein, Chenyan Jia, Chunchen Xu
Sciences (Social, Health, Biological, Physical) | Communications, Media | Aug 21

Social media platforms are too often understood as monoliths with clear priorities. Instead, we analyze them as complex organizations torn between starkly different justifications of their missions. Focusing on the case of Meta, we inductively analyze the company’s public materials and identify three evaluative logics that shape the platform’s decisions: an engagement logic, a public debate logic, and a wellbeing logic. There are clear trade-offs between these logics, which often result in internal conflicts between teams and departments in charge of these different priorities. We examine recent examples showing how Meta rotates between logics in its decision-making, though the goal of engagement dominates in internal negotiations. We outline how this framework can be applied to other social media platforms such as TikTok, Reddit, and X. We discuss the ramifications of our findings for the study of online harms, exclusion, and extraction.

All Related

Research

Embedding Democratic Values into Social Media AIs via Societal Objective Functions

Michael S. Bernstein, Minh Chau Mai, Michelle Lam, Chenyan Jia
Democracy | Apr 26, 2024

Mounting evidence indicates that the artificial intelligence (AI) systems that rank our social media feeds bear nontrivial responsibility for amplifying partisan animosity: negative thoughts, feelings, and behaviors toward political out-groups. Can we design these AIs to consider democratic values such as mitigating partisan animosity as part of their objective functions? We introduce a method for translating established, vetted social scientific constructs into AI objective functions, which we term societal objective functions, and demonstrate the method with application to the political science construct of anti-democratic attitudes. Traditionally, we have lacked observable outcomes to use to train such models; however, the social sciences have developed survey instruments and qualitative codebooks for these constructs, and their precision facilitates translation into detailed prompts for large language models. We apply this method to create a democratic attitude model that estimates the extent to which a social media post promotes anti-democratic attitudes, and test this democratic attitude model across three studies. In Study 1, we first test the attitudinal and behavioral effectiveness of the intervention among US partisans (N=1,380) by manually annotating (alpha=.895) social media posts with anti-democratic attitude scores and testing several feed ranking conditions based on these scores. Removal (d=.20) and downranking feeds (d=.25) reduced participants' partisan animosity without compromising their experience and engagement. In Study 2, we scale up the manual labels by creating the democratic attitude model, finding strong agreement with manual labels (rho=.75). Finally, in Study 3, we replicate Study 1 using the democratic attitude model instead of manual labels to test its attitudinal and behavioral impact (N=558), and again find that feed downranking using the societal objective function reduced partisan animosity (d=.25). This method presents a novel strategy to draw on social science theory and methods to mitigate societal harms in social media AIs.


policy brief

Algorithms and the Perceived Legitimacy of Content Moderation

Michael S. Bernstein, Sahil Yakhmi, Tara Iyer, Amy X. Zhang, Christina A. Pan, Evan Strasnick
Regulation, Policy, Governance | Dec 15, 2022

The perceived legitimacy of content moderation processes is an important question for policymakers, and it informs how policymakers themselves think about social media. If a platform's content moderation processes, from human review to algorithmic flagging, are not perceived as legitimate, users' views of and engagement with the platform suffer, as does their belief that they must follow platform rules. In this brief, scholars examine this problem by surveying people's views of Facebook's content moderation processes, providing a pathway to better online speech platforms and improved content moderation.

