Like other powerful technologies, AI and machine learning present significant opportunities. To reap the full benefits of ML, organizations must also mitigate the considerable risks it presents. This report outlines a set of actionable best practices for people, processes, and technology that can enable organizations to innovate with ML in a responsible manner.
Authors Patrick Hall, Navdeep Gill, and Ben Cox focus on the technical aspects of ML as well as human-centered issues such as security, fairness, and privacy. The goal is to promote human safety in ML practice so that, in the near future, there will be no need to distinguish between the general practice and the responsible practice of ML.
This report explores:
- People: Humans in the Loop—Why an organization’s ML culture is an important aspect of responsible ML practice
- Processes: Taming the Wild West of Machine Learning Workflows—Suggestions for changing or updating your processes to govern ML assets
- Technology: Engineering ML for Human Trust and Understanding—Tools that can help organizations build human trust and understanding into their ML systems
- Actionable Responsible ML Guidance—Core considerations for companies that want to drive value from ML