Posted By: David Levy
Date: Wednesday, 15 November 2017, at 4:57 a.m.
Since retiring from a career in IT, I don't do much reading about technology. But there is one author, Martin Fowler, who taught me a lot professionally and continues to blog engagingly. He just wrote an essay touching on an aspect of AI that I find intriguing: explainability. When we train neural nets to outperform us at a task, can we get the net to explain why it did what it did? Machine learning, yes, but then machine justification.
Now, if you're Nack, you can come up with contrasting positions and tease out explanations from XG. For the rest of us, it's not so easy.
Take a look at the Fowler article, and perhaps the linked article from the MIT Technology Review. I find it thought-provoking.