Rishi Bommasani | Stanford HAI

Rishi Bommasani

Society Lead, Stanford Center for Research on Foundation Models; Ph.D. Candidate in Computer Science, Stanford University

Latest Work
Response to OSTP’s Request for Information on the Development of an AI Action Plan
Rishi Bommasani, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Percy Liang, Jennifer King, Russell Wald
Mar 17
response to request

In this response to a request for information issued by the National Science Foundation’s Networking and Information Technology Research and Development National Coordination Office (on behalf of the Office of Science and Technology Policy), scholars from Stanford HAI, CRFM, and RegLab urge policymakers to prioritize four areas of policy action in their AI Action Plan: 1) Promote open innovation as a strategic advantage for U.S. competitiveness; 2) Maintain U.S. AI leadership by promoting scientific innovation; 3) Craft evidence-based AI policy that protects Americans without stifling innovation; 4) Empower government leaders with resources and technical expertise to ensure a “whole-of-government” approach to AI governance.

Safeguarding Third-Party AI Research
Rishi Bommasani, Percy Liang, Kevin Klyman, Shayne Longpre, Peter Henderson, Sayash Kapoor
Feb 13
policy brief

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.

Are Open-Source AI Models Worth The Risk?
Tech Brew
Oct 31
media mention

Rishi Bommasani, Society Lead at HAI's CRFM, discusses where AI is proving most dangerous, why openness is important, and how regulators are thinking about the open-closed divide.

All Related

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Daniel E. Ho, Percy Liang, Alexander Wan, Yifan Mai
Sep 09, 2024
response to request

In this response to the U.S. AI Safety Institute’s (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.

Regulation, Policy, Governance
Foundation Models
Privacy, Safety, Security
On the Societal Impact of Open Foundation Models
Rishi Bommasani, Daniel E. Ho, Percy Liang, Sayash Kapoor, and Arvind Narayanan
Feb 27, 2024
news

New research adds precision to the debate on openness in AI.

Considerations for Governing Open Foundation Models
Rishi Bommasani, Marietje Schaake, Daniel E. Ho, Daniel Zhang, Percy Liang, Ashwin Ramaswami, Arvind Narayanan, Kevin Klyman, Shayne Longpre, Sayash Kapoor
Dec 13, 2023
issue brief

This brief highlights the benefits of open foundation models and calls for greater focus on their marginal risks.

Foundation Models
By the Numbers: Tracking The AI Executive Order
Rishi Bommasani, Rohini Kosoglu, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Russell Wald, Peter Henderson, Lindsey A. Gailmard, Christie M. Lawrence
Nov 16, 2023
news

New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.

By the Numbers: Tracking The AI Executive Order
Rishi Bommasani, Rohini Kosoglu, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Russell Wald, Peter Henderson, Lindsey A. Gailmard, Christie M. Lawrence
Nov 16, 2023
explainer

New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.


Regulation, Policy, Governance
Government, Public Administration
The AI Regulatory Alignment Problem
Rishi Bommasani, Mariano-Florentino Cuéllar, Daniel E. Ho, Percy Liang, Neel Guha, Lindsey A. Gailmard, Kit T. Rodolfa, Faiz Surani, Inioluwa Deborah Raji, Christie M. Lawrence, Colleen Honigsberg
Nov 15, 2023
policy brief

This brief, produced in collaboration with Stanford RegLab, sheds light on the “regulatory misalignment” problem by considering the technical and institutional feasibility of four commonly proposed AI regulatory regimes.

Regulation, Policy, Governance
Decoding the White House AI Executive Order’s Achievements
Rishi Bommasani, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Russell Wald, Peter Henderson, Lindsey A. Gailmard, Christie M. Lawrence
Nov 02, 2023
news

America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.

Decoding the White House AI Executive Order’s Achievements
Rishi Bommasani, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Russell Wald, Peter Henderson, Lindsey A. Gailmard, Christie M. Lawrence
Nov 02, 2023
explainer

America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.


Government, Public Administration
Responses to NTIA's Request for Comment on AI Accountability Policy
Rishi Bommasani, Daniel Zhang, Percy Liang, Jennifer King, Arvind Narayanan, Sayash Kapoor
Jun 14, 2023
response to request
Ecosystem Graphs: The Social Footprint of Foundation Models
Rishi Bommasani
Mar 29, 2023
news

Researchers develop a framework to capture the vast downstream impact and complex upstream dependencies that define the foundation model ecosystem.

Machine Learning
AI Spring? Four Takeaways from Major Releases in Foundation Models
Rishi Bommasani
Mar 17, 2023
news

As companies release new, more capable models, questions around deployment and transparency arise.

Natural Language Processing
Machine Learning
Improving Transparency in AI Language Models: A Holistic Evaluation
Rishi Bommasani, Daniel Zhang, Percy Liang, Tony Lee
Feb 28, 2023
issue brief

In this brief, Stanford scholars introduce Holistic Evaluation of Language Models (HELM) as a framework for evaluating commercial applications of AI.

Machine Learning
Foundation Models