Skeptical about explainability of AI-derived recommendations? – IT World Canada

As a senior executive or CIO, how can you assure yourself that Artificial Intelligence (AI) or Machine Learning (ML)-derived recommendations are reasonable and flow logically from the project work that has been performed?

You want to support and encourage your team's work, but you don't want to be unwittingly misled, and you want to confirm that the data science team has not misled itself.

"Organizations need explainable AI/ML results to build confidence in the technology and to minimize the risk of being misled by subtle biases or catastrophic recommendations," says Amy Hodler, AI Technical Evangelist at Fiddler AI in Palo Alto, California. "Explainability applies to both design and operation of AI/ML applications."

Here are some high-level questions that you can ask the team about explainability. They're designed to raise everyone's assurance that the AI/ML recommendations are sound and can be confidently implemented, even though everyone knows you're not an expert. Start by selecting the one question that concerns you most and that you're most comfortable asking.

Explainability

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human end-users to comprehend and trust the results and output created by machine learning algorithms. The confidence you can have in AI/ML-derived recommendations is dependent on the project design, including explainability features.
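To make the idea concrete, here is a minimal sketch of one common explainability technique: additive, per-feature attribution for a single prediction. It assumes a simple linear scoring model; the feature names and weights are illustrative only, not drawn from any real system.

```python
# Minimal sketch: per-feature attribution for a linear model.
# Feature names and weights are hypothetical, for illustration only.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
bias = 0.1

def predict(applicant):
    # Linear score: bias plus weighted sum of feature values.
    return bias + sum(weights[f] * v for f, v in applicant.items())

def explain(applicant):
    # Each feature's contribution is weight * value; summed with the
    # bias, the contributions reproduce the score exactly. This is the
    # core idea behind additive attribution methods for linear models.
    return {f: weights[f] * v for f, v in applicant.items()}

applicant = {"income": 1.2, "debt_ratio": 0.5, "years_employed": 0.8}
score = predict(applicant)
contributions = explain(applicant)
```

Because the contributions sum (with the bias) to the score, a reviewer can see exactly which inputs pushed the recommendation up or down. Real-world models are rarely this simple, which is why dedicated attribution tooling exists, but the question for the team is the same: can they produce this kind of breakdown for any individual recommendation?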

Here are some related questions that will illuminate potential issues in your projectโ€™s explainability of results:

  1. What steps did you take to enhance trust and confidence in the modelโ€™s results?
  2. How would you characterize your modelโ€™s accuracy, explainability, and transparency?
  3. Have you performed a model risk assessment?
  4. To what extent did the model design incorporate traceability?
  5. Does the AI/ML application you want to deploy trigger an alert when the model deviates from the expected results?
  6. What is your strategy for monitoring the AI/ML application to ensure that it continues to deliver the expected results?
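Questions 5 and 6 both come down to ongoing monitoring. As a minimal sketch of what an answer might look like, the fragment below compares the mean of recent model scores against a baseline window and raises an alert when the deviation exceeds a threshold; the function name, window data, and threshold are illustrative assumptions, not a prescription.

```python
# Minimal sketch of a model-deviation alert: compare recent scores
# against a baseline window. Names and threshold are illustrative.
from statistics import mean

def drift_alert(baseline_scores, recent_scores, threshold=0.1):
    """Return True when the mean recent score deviates from the
    baseline mean by more than the threshold -- a crude proxy for
    model drift that real monitoring tools refine considerably."""
    return abs(mean(recent_scores) - mean(baseline_scores)) > threshold

baseline = [0.50, 0.52, 0.48, 0.51]   # scores observed at deployment
stable   = [0.49, 0.51, 0.50]         # recent scores, no alert
drifting = [0.72, 0.75, 0.70]         # recent scores, alert fires
```

Production monitoring would track many more signals (input distributions, per-segment accuracy, prediction confidence), but even this toy version illustrates what you are asking the team to confirm: that deviation is defined, measured, and wired to an alert rather than discovered by accident.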

Evaluate answers

Here's how to evaluate the answers that you'll receive to these questions from your data science team:

  1. If you receive blank stares, the topic of your question has not been addressed and requires more attention before the recommendations are adopted. It may be necessary to add missing skills to the team, or even to replace the team entirely.
  2. If you receive a lengthy answer filled with data science jargon or techno-chatter, the topic has likely been addressed insufficiently or not at all. Your team may lack the critical skills required to deliver trustworthy recommendations, and your confidence in the recommendations should decrease or even disappear.
  3. Your confidence in the work should increase if you receive a thoughtful response that points to uncertainties and risks associated with the recommendations.
  4. If you receive a response that describes potential unanticipated consequences, your confidence in the recommendations should increase.
  5. If additional slides support the answers you receive with relevant figures and charts, your confidence in the team should increase significantly.
  6. If the project team acknowledges that the topic of your question should receive more attention, your confidence in the team should increase. To remedy the shortfall, it will probably be necessary to allocate more resources, such as external data science consultants.

For a summary discussion of the topics you should consider as you seek to assure yourself that AI/ML recommendations are reasonable, please read this article: Skeptical about AI-derived recommendations? Here are some tips to get you started.

Now what's left is to ask yourself: what ideas can you contribute to help senior executives assure themselves that the AI/ML-derived recommendations are reasonable and flow logically from the project work performed?