Responsible AI: Putting our principles into action | Google

Jeff Dean, Kent Walker

June 28, 2019 - Jeff Dean, Google Senior Fellow and SVP, Google AI, and Kent Walker, SVP, Global Affairs, posted: Every day, we see how AI can help people around the world and make a positive difference in our lives—from helping radiologists detect lung cancer, to increasing literacy rates in rural India, to conserving endangered species. These examples are just scratching the surface—AI could also save lives through natural disaster mitigation with our flood forecasting initiative and research on predicting earthquake aftershocks.

As AI expands our reach into the once-unimaginable, it also sparks conversation around topics like fairness and privacy. This is an important conversation and one that requires the engagement of societies globally. A year ago, we announced Google’s AI Principles that help guide the ethical development and use of AI in our research and products. Today we’re sharing updates on our work.

Internal education

We’ve educated and empowered our employees to understand the important issues of AI and think critically about how to put AI into practice responsibly. This past year, thousands of Googlers have completed training in machine learning fairness. We’ve also piloted ethics trainings across four offices and organized an AI ethics speaker series hosted on three continents.

Tools and research

Over the last year, we’ve focused on sharing knowledge, building technical tools and product updates, and cultivating a framework for developing responsible and ethical AI that benefits everyone. This includes releasing more than 75 research papers on topics in responsible AI, including machine learning fairness, explainability, privacy, and security, and developing and open-sourcing 12 new tools. For example:

  • The What-If Tool is a new feature that lets users analyze an ML model without writing code. It enables users to visualize biases and the effects of various fairness constraints as well as compare performance across multiple models.
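The What-If Tool itself is interactive and requires no code, but the kind of analysis it surfaces can be illustrated with a minimal sketch: slicing a model's predictions by a subgroup feature and comparing accuracy per slice. All names and data here are hypothetical, not the tool's actual API.

```python
# Hedged sketch of the analysis the What-If Tool visualizes:
# per-subgroup model performance, computed without retraining.
def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def slice_accuracy(examples, predict):
    """Break accuracy down by a (hypothetical) 'group' feature."""
    by_group = {}
    for features, label in examples:
        by_group.setdefault(features["group"], []).append((predict(features), label))
    return {g: accuracy(*zip(*pairs)) for g, pairs in by_group.items()}

# Toy threshold model and toy labeled examples.
model = lambda f: int(f["score"] > 0.5)
data = [
    ({"group": "A", "score": 0.9}, 1),
    ({"group": "A", "score": 0.2}, 0),
    ({"group": "B", "score": 0.6}, 0),
    ({"group": "B", "score": 0.4}, 1),
]
print(slice_accuracy(data, model))  # {'A': 1.0, 'B': 0.0}
```

A gap like the one between groups A and B above is exactly the sort of disparity the tool lets users spot visually, then probe by adjusting thresholds or fairness constraints.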

  • Google Translate reduces gender bias by providing feminine and masculine translations for some gender-neutral words on the Google Translate website.

  • We expanded our work in federated learning, a new approach to machine learning that allows developers to train AI models and make products smarter without raw data ever leaving users’ devices. It’s also now open-sourced as TensorFlow Federated.
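The core idea of federated learning can be sketched in a few lines of plain Python (this is a conceptual illustration of federated averaging, not TensorFlow Federated's actual API; the linear model and client data are hypothetical): each client trains on its own private data, and only model weights, never the data itself, travel to the server for averaging.

```python
# Minimal federated averaging sketch: clients share weights, not data.
def local_train(weights, data, lr=0.1):
    """One SGD pass on a client's private data for a linear model y = w*x."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # gradient of squared error
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """Each client trains locally; the server averages the resulting weights."""
    client_weights = [local_train(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Three simulated clients, each holding private samples of y = 3x.
clients = [[(x, 3.0 * x) for x in (1.0, 2.0)] for _ in range(3)]
w = 0.0
for _ in range(50):  # 50 communication rounds
    w = federated_average(w, clients)
print(round(w, 2))  # converges toward 3.0
```

The server only ever sees the averaged weight `w`; the (x, y) samples stay on each client, which is the privacy property the approach is built around.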

  • Our People + AI Guidebook is a toolkit of methods and decision-making frameworks for how to build human-centered AI products. It launched in May and includes contributions from 40 Google product teams.

We continue to update the Responsible AI Practices quarterly, as we reflect on the latest technical ideas and work at Google.