Michael Chui, James Manyika, and Mehdi Miremadi

It can be difficult to discern how a mathematical model trained by deep learning arrives at a particular prediction, recommendation, or decision. A black box, even one that does what it is supposed to, may have limited utility, especially where its predictions or decisions affect society and bear on individual well-being. In such cases, users sometimes need to know the "whys" behind the workings: why an algorithm reached its recommendation, whether in factual findings with legal consequences or in business decisions, such as lending, that carry regulatory repercussions, and why certain factors (and not others) were so critical in a given instance.
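
One common way to probe the "why certain factors were so critical" question is model-agnostic feature attribution. The sketch below is purely illustrative and is not the authors' method: it assumes scikit-learn, a synthetic stand-in for a lending-style dataset, and hypothetical feature names, and uses permutation importance to see which inputs a black-box model relied on.

```python
# Illustrative sketch (assumed setup): explain a black-box classifier with
# permutation feature importance. The dataset, feature names, and model are
# hypothetical; the technique measures how much the model's held-out score
# drops when each input feature is shuffled.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a lending-style dataset (purely illustrative).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=0)
feature_names = ["income", "debt_ratio", "credit_history_len",
                 "num_open_accounts", "recent_inquiries", "employment_years"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network stands in for the opaque "black box" model.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and record how much held-out accuracy drops;
# larger drops indicate features the model leaned on for its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for name, mean_drop in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
    print(f"{name:>22}: {mean_drop:.4f}")
```

A ranking like this does not open the black box itself, but it offers one hedged, after-the-fact account of which inputs mattered most for a given model, which is often the kind of answer regulators or affected individuals are asking for.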
