Which is a strongly misleading tradeoff. For a ton of tasks, deep learning methods are no better than white-box regression or tree-ensemble methods.
And there is no reason to expect that a deep learning model has to be unexplainable. He's setting up a mystical interpretability-vs-accuracy tradeoff that does not exist.
There is a ton of research out there making deep neural networks slightly to significantly more interpretable.
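As one illustration of the kind of technique that research covers, here is a minimal sketch of input-gradient saliency, one of the simplest post-hoc ways to probe what a neural network is sensitive to. The network, weights, and data here are all toy assumptions, not anyone's actual model:

```python
import numpy as np

# Toy one-hidden-layer network: y = w2 . tanh(W1 x)
# (all weights and the input are random placeholders for illustration)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-by-input weight matrix
w2 = rng.normal(size=4)        # output weights
x = rng.normal(size=3)         # a single input example

h = np.tanh(W1 @ x)            # hidden activations
y = w2 @ h                     # scalar output

# Saliency = dy/dx via the chain rule:
# dy/dh = w2, dh/dz = 1 - tanh(z)^2, dz/dx = W1
saliency = (w2 * (1 - h**2)) @ W1

# |saliency[i]| ranks how strongly the output responds to feature i
ranking = np.argsort(-np.abs(saliency))
print(ranking)
```

The same idea scales to real networks via autodiff (e.g. one backward pass per input), which is why gradient-based attribution is a common starting point in the interpretability literature.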