Responsibility & Safety

We want to build AI responsibly to benefit humanity.

Our approach

AI can provide extraordinary benefits, but like all transformational technology, it could have negative impacts unless it’s developed and deployed responsibly.

Guided by our AI Principles, we work to anticipate and evaluate our systems against a broad spectrum of AI-related risks, taking a holistic approach to responsibility, safety and security. Our approach is centered on responsible governance, research and impact.

The Responsibility and Safety Council (RSC), our longstanding internal review group, is co-chaired by our COO Lila Ibrahim and our VP of Responsibility, Helen King. To empower teams to pioneer responsibly and safeguard against harm, the RSC evaluates Google DeepMind’s research, projects and collaborations against our AI Principles, advising and partnering with research and product teams on our highest-impact work.

Our AGI Safety Council, led by our Co-Founder and Chief AGI Scientist Shane Legg, works closely with the RSC to safeguard our processes, systems and research against extreme risks that could arise from powerful AGI systems in the future.

We’re also collaborating with researchers across industry and academia to make breakthroughs in AI, while engaging with governments and civil society to address challenges that can’t be solved by any single group.


We also have world-class teams focusing on technical safety, ethics, governance, security, and public engagement, who work to grow our collective understanding of AI-related risks and potential mitigations. Leading the industry forward, our recent research includes developing stronger security protocols on the path to AGI, creating a new benchmark for evaluating the factuality of large language models, and exploring the promise and risks of a future with more advanced AI assistants. We also introduced, and continue to update, our Frontier Safety Framework: a set of protocols to help us stay ahead of possible severe risks from powerful frontier AI models.

Our interdisciplinary teams are committed to understanding the full spectrum of AI opportunities and risks, helping to advance the entire field of AI safety by investing in, and prioritising, cutting-edge research and best practices.

Secure and privacy-preserving AI

As AI capabilities expand, so does the potential for misuse. At Google DeepMind, we recognize the critical importance of safeguarding user security and privacy. This includes, for example, investing in mitigations that limit the potential for misuse once a model is deployed, and in threat-modelling research that helps identify capability thresholds where heightened security is necessary.

As AI becomes more agentic, it can help users in a more proactive and continuous way, which introduces risks around collecting and misusing user data. We aim to invest in both privacy-preserving infrastructure and privacy-preserving models, and to work across Google DeepMind to bring these techniques into Gemini and our products.

By prioritizing these principles, we aim to foster an AI ecosystem where we can unlock the advanced capabilities of AI without sacrificing user security, privacy and trust.

AI that benefits everyone

Our teams work with many brilliant non-profits, academics, and other companies to apply AI to solve problems that underpin global challenges, while proactively mitigating risks.

To help prevent the misuse of our technologies, in 2023 we helped establish the cross-industry Frontier Model Forum to support the safe and responsible development of frontier AI models.

We collaborate with other leading research labs, as well as with the Partnership on AI, which we co-founded to bring together academics, charities, and company labs to solve common challenges.

We believe we have the opportunity to demonstrate that AI can and should be deployed for the greater good. We’re driven to enable equitable access and adoption of our AI models, so that these developments can empower, impact and benefit us all.

For example, we’ve developed AlphaFold Server to broaden access to our breakthrough model, AlphaFold 3, along with accompanying educational materials to support best practice in the community.

In addition, we collaborate with many brilliant partners to catalyse the use of AI in fighting key global issues, including driving progress in the fight against antimicrobial resistance (AMR).

Beyond that, we also work to broaden access to AI education: for example, by supporting grassroots efforts like the African Deep Learning Indaba, providing funding for scholarships and fellowships, and partnering with the Raspberry Pi Foundation to launch Experience AI, which equips teachers to educate and inspire 11- to 14-year-olds about AI.

Since its launch in April 2023, Experience AI has been accessed by educators across 130 countries. With $10 million from Google.org, the Raspberry Pi Foundation now aims to bring the program to more than 2 million young people across Europe, the Middle East and Africa, so they can become forward-thinking, responsible and safe users of AI.

Latest responsibility and safety news

Discover our latest AI breakthroughs and updates from the lab

View all posts

  • Responsibility & Safety

    Advancing Gemini's security safeguards

    We’ve made Gemini 2.5 our most secure model family to date.

    20 May 2025
  • Responsibility & Safety

    Taking a responsible path to AGI

    We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community.

    2 April 2025
  • Responsibility & Safety

    Evaluating potential cybersecurity threats of advanced AI

    Our framework enables cybersecurity experts to identify which defenses are necessary, and how to prioritize them.

    2 April 2025

Responsibility

Our principles

Learn more
