Communications, Media | Stanford HAI


Communications, Media

Generative AI is reshaping communications and media, and challenging public trust.

Stanford HAI Welcomes Six Distinguished Scholars as Senior Fellows
Feb 03, 2025 | Announcement
Topics: Communications, Media; Machine Learning

From top left: Susan Athey, Michael Bernstein, Angèle Christin, Mykel Kochenderfer, Dorsa Sadigh, and Melissa Valentine.

Stories for the Future 2024
Isabelle Levent
Mar 31, 2025 | Deep Dive | Research
Topics: Machine Learning; Generative AI; Arts, Humanities; Communications, Media; Design, Human-Computer Interaction; Sciences (Social, Health, Biological, Physical)

We invited 11 sci-fi filmmakers and AI researchers to Stanford for Stories for the Future, a day-and-a-half experiment in fostering new narratives about AI. Researchers shared perspectives on AI, and filmmakers reflected on the challenges of writing AI narratives. Together, researcher-writer pairs transformed a research paper into a written scene. The challenge? Each scene had to include an AI manifestation, but could not be about the personhood of AI or AI as a threat. Read the results of this project.

Preparing for the Age of Deepfakes and Disinformation
Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot
Nov 01, 2020 | Policy Brief
Topics: Communications, Media

Popular culture has envisioned societies of intelligent machines for generations, with Alan Turing notably foreseeing the need for a test to distinguish machines from humans in 1950. Now, advances in artificial intelligence promise to make creating convincing fake multimedia content like video, images, or audio relatively easy for many. Unfortunately, this will include sophisticated bots with supercharged self-improvement abilities that are capable of generating more dynamic fakes than anything seen before.

Social Media Ads May Not Influence User Satisfaction as Much as You Think
Shana Lynch
Aug 23, 2024 | News
Topics: Economy, Markets; Communications, Media

A new study by researchers from Stanford, Carnegie Mellon, and Meta finds that the presence of ads on Facebook doesn’t significantly affect how users value the platform.

Measuring receptivity to misinformation at scale on a social media platform
Christopher K Tokita, Kevin Aslett, William P Godel, Zeve Sanderson, Joshua A Tucker, Jonathan Nagler, Nathaniel Persily, Richard Bonneau
Sep 10, 2024 | Research
Topics: Communications, Media; Sciences (Social, Health, Biological, Physical)

Measuring the impact of online misinformation is challenging. Traditional measures, such as user views or shares on social media, are incomplete because not everyone who is exposed to misinformation is equally likely to believe it. To address this issue, we developed a method that combines survey data with observational Twitter data to probabilistically estimate the number of users both exposed to and likely to believe a specific news story. As a proof of concept, we applied this method to 139 viral news articles and find that although false news reaches an audience with diverse political views, users who are both exposed and receptive to believing false news tend to have more extreme ideologies. These receptive users are also more likely to encounter misinformation earlier than those who are unlikely to believe it. This mismatch between overall user exposure and receptive user exposure underscores the limitation of relying solely on exposure or interaction data to measure the impact of misinformation, as well as the challenge of implementing effective interventions. To demonstrate how our approach can address this challenge, we then conducted data-driven simulations of common interventions used by social media platforms. We find that these interventions are only modestly effective at reducing exposure among users likely to believe misinformation, and their effectiveness quickly diminishes unless implemented soon after misinformation’s initial spread. Our paper provides a more precise estimate of misinformation’s impact by focusing on the exposure of users likely to believe it, offering insights for effective mitigation strategies on social media.
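
The estimation idea at the heart of this abstract can be sketched briefly: instead of counting raw exposures, weight each exposed user by a survey-calibrated probability that they would believe the story. The Python below is a minimal illustration under assumed inputs; the ExposedUser record, the toy p_believe model, and its coefficients are hypothetical stand-ins, not the authors' pipeline.

```python
# Minimal sketch: estimate "receptive exposure" to a news story by
# weighting each exposed user by their probability of believing it.
# All inputs are hypothetical stand-ins for the paper's survey + Twitter data.

from dataclasses import dataclass

@dataclass
class ExposedUser:
    user_id: str
    ideology: float    # e.g., -2 (left) to +2 (right), estimated from account data
    exposed_at: float  # hours since the story began spreading

def p_believe(ideology: float, story_slant: float) -> float:
    """Toy belief model: belief probability rises as a user's ideology
    aligns with the story's slant. A real model would be fit to survey data."""
    alignment = ideology * story_slant
    return min(max(0.1 + 0.2 * alignment, 0.0), 1.0)

def receptive_exposure(users: list[ExposedUser], story_slant: float) -> float:
    """Expected number of exposed users who would believe the story."""
    return sum(p_believe(u.ideology, story_slant) for u in users)

exposed = [
    ExposedUser("a", ideology=1.8, exposed_at=2.0),
    ExposedUser("b", ideology=-0.3, exposed_at=30.0),
    ExposedUser("c", ideology=1.2, exposed_at=5.0),
]
raw = len(exposed)  # the traditional exposure count
weighted = receptive_exposure(exposed, story_slant=1.0)
print(f"raw exposure: {raw}, receptive exposure: {weighted:.2f}")
```

Summing belief probabilities rather than heads yields the "receptive exposure" the abstract contrasts with overall exposure; the exposed_at field reflects the paper's finding that receptive users tend to encounter misinformation earlier, which is why the simulated interventions lose effectiveness when applied late.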

Modeling Effective Regulation of Facebook
Seth G. Benzell, Avinash Collis
Oct 01, 2020 | Policy Brief
Topics: Communications, Media

Social media platforms break traditional barriers of distance and time between people and present unique challenges in calculating the precise value of the transactions and interactions they enable.

All Work Published on Communications, Media

Building a Social Media Algorithm That Actually Promotes Societal Values
Katharine Miller
Apr 08, 2024 | News
Topics: Machine Learning; Communications, Media

A Stanford research team shows that building democratic values into a feed-ranking algorithm reduces partisan animosity.

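The one-line summary compresses an algorithmic idea worth unpacking: score each candidate post for the societal value at stake and blend that score into the feed's ranking objective. The sketch below illustrates only the general pattern; the animosity_score stand-in and the 0.5 weight are assumptions, not the Stanford team's model.

```python
# Generic sketch of a value-aware feed re-ranker: blend an engagement
# score with a penalty for predicted partisan animosity. The animosity
# scorer is a toy stand-in for a trained classifier.

def animosity_score(post_text: str) -> float:
    """Hypothetical classifier output in [0, 1]; higher means the post
    is more likely to stoke partisan animosity."""
    hostile_markers = ("traitor", "enemy", "destroy them")
    hits = sum(marker in post_text.lower() for marker in hostile_markers)
    return min(hits / len(hostile_markers), 1.0)

def rank_feed(posts: list[dict], animosity_weight: float = 0.5) -> list[dict]:
    """Order posts by engagement minus a weighted animosity penalty."""
    def value_aware_score(post: dict) -> float:
        return post["engagement"] - animosity_weight * animosity_score(post["text"])
    return sorted(posts, key=value_aware_score, reverse=True)

feed = [
    {"text": "Our opponents are the enemy, destroy them", "engagement": 0.9},
    {"text": "Local volunteers rebuilt the park this weekend", "engagement": 0.6},
]
for post in rank_feed(feed):
    print(post["text"])
```

With the penalty applied, the lower-engagement but non-hostile post outranks the hostile one, which is the kind of trade-off a values-based ranking objective makes explicit.
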
Internal Fractures: The Competing Logics of Social Media Platforms
Angèle Christin, Michael S. Bernstein, Jeffrey Hancock, Chenyan Jia, Jeanne Tsai, Chunchen Xu
Aug 21, 2024 | Research
Topics: Sciences (Social, Health, Biological, Physical); Communications, Media

Social media platforms are too often understood as monoliths with clear priorities. Instead, we analyze them as complex organizations torn between starkly different justifications of their missions. Focusing on the case of Meta, we inductively analyze the company’s public materials and identify three evaluative logics that shape the platform’s decisions: an engagement logic, a public debate logic, and a wellbeing logic. There are clear trade-offs between these logics, which often result in internal conflicts between teams and departments in charge of these different priorities. We examine recent examples showing how Meta rotates between logics in its decision-making, though the goal of engagement dominates in internal negotiations. We outline how this framework can be applied to other social media platforms such as TikTok, Reddit, and X. We discuss the ramifications of our findings for the study of online harms, exclusion, and extraction.

Tuning Our Algorithmic Amplifiers: Encoding Societal Values into Social Media AIs
Angèle Christin
Oct 20, 2023 | News
Topics: Design, Human-Computer Interaction; Machine Learning; Communications, Media

The values built into social media algorithms are highly individualized. Could we reshape our feeds to benefit society?

AI Researchers Tap into Medical Twitter To Create Powerful New Analysis Tool
Andrew Myers
Aug 28, 2023 | News
Topics: Healthcare; Communications, Media

Stanford researchers discover a rich new data source in the anonymized pathology images and online comments of thousands of pathologists.

Was this written by a human or AI?
Prabha Kannan
Mar 16, 2023 | News
Topics: Arts, Humanities; Communications, Media

New research shows we can only accurately identify AI writers about 50% of the time. Scholars explain why (and suggest solutions).

How Social Media Shapes Our Perceptions About Crime
Nikki Goth Itoi
Feb 27, 2023 | News
Topics: Communications, Media

An analysis of all Facebook posts from U.S. law enforcement agencies revealed widespread overreporting of Black suspects.