It can be difficult to discern how a mathematical model trained by deep learning arrives at a particular prediction, recommendation, or decision. A black box, even one that does what it’s supposed to, may have limited utility, especially where the predictions or decisions impact society and hold ramifications that can affect individual well-being. In such cases, users sometimes need to know the “whys” behind the workings, such as why an algorithm reached its recommendations—from making factual findings with legal repercussions to arriving at business decisions, such as lending, that have regulatory repercussions—and why certain factors (and not others) were so critical in a given instance.
Content: Quotation
Authors: James Manyika, Mehdi Miremadi, Michael Chui
Source: McKinsey Quarterly
Subject: IT / Technology / E-Business