
Responsible AI in ISE

Microsoft’s Responsible AI principles

Every ML project in ISE goes through a Responsible AI (RAI) assessment to ensure that it upholds Microsoft’s six Responsible AI principles:

- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability

Every project goes through the RAI process, whether we are building a new ML model from scratch, or putting an existing model in production.

ISE’s Responsible AI process

The process begins as soon as we start a prospective project. We begin completing a Responsible AI review document and an impact assessment, which provide a structured way to explore topics such as the system’s intended use, its potential benefits and harms, and the stakeholders it affects.

At this point we also research available tools and resources, such as InterpretML or Fairlearn, that we may use on the project. If necessary, we may change the project scope or re-frame the ML problem.
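To illustrate the kind of analysis these tools support, the sketch below computes a demographic parity difference by hand — the disparity metric that Fairlearn exposes as `fairlearn.metrics.demographic_parity_difference`. All predictions and group labels here are fabricated for the example; on a real project you would use Fairlearn itself against your model’s outputs and a genuine sensitive attribute.

```python
# Minimal sketch (fabricated data): demographic parity difference, the
# disparity metric computed by fairlearn.metrics.demographic_parity_difference.

def selection_rate(y_pred, mask):
    """Fraction of positive predictions within one group."""
    group = [p for p, m in zip(y_pred, mask) if m]
    return sum(group) / len(group)

def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in selection rates across sensitive groups.

    0.0 means every group receives positive predictions at the same rate;
    larger values indicate a bigger disparity.
    """
    rates = [
        selection_rate(y_pred, [s == g for s in sensitive])
        for g in set(sensitive)
    ]
    return max(rates) - min(rates)

# Fabricated model outputs and a fabricated sensitive attribute.
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" selection rate: 0.75; group "b": 0.25; difference: 0.5.
print(demographic_parity_difference(y_pred, sensitive))
```

A result like 0.5 would flag a large disparity between groups and prompt further investigation during the RAI review — for example, re-examining the training data or the problem framing.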

The Responsible AI review documents remain living documents that we revisit and update throughout the project: during the feasibility study, as the model is developed and prepared for production, and as new information unfolds. Once the model is deployed and monitored in production, the documents can continue to be used and expanded.