DARPA wants artificial intelligence to be able to explain conclusions and reasoning to humans

DARPA wants artificial intelligence systems that can explain their conclusions, decisions, and reasoning, and help humans trace how those conclusions were reached.

Dramatic success in machine learning has led to an explosion of new AI capabilities. Continued advances promise to produce autonomous systems that perceive, learn, decide, and act on their own. These systems offer tremendous benefits, but their effectiveness will be limited by the machine's inability to explain its decisions and actions to human users. This issue is especially important for the Department of Defense (DoD), which is facing challenges that demand the development of more intelligent, autonomous, and symbiotic systems. Explainable AI will be essential if users are to understand, appropriately trust, and effectively manage this incoming generation of artificially intelligent partners.

The problem of explainability is, to…


Link to Full Article: DARPA wants artificial intelligence to be able to explain conclusions and reasoning to humans
