Raytheon works on artificial intelligence that explains itself


Raytheon's BBN Technologies unit said it is developing a neural network that can explain its findings to users. The defense contractor believes the work-in-progress software -- called the Explainable Question Answering System (EQUAS) -- will have a variety of applications ranging from defense to medical imaging.

Still in the early stages of development, the EQUAS project is part of the U.S. Defense Advanced Research Projects Agency's (DARPA) Explainable Artificial Intelligence (XAI) program, which aims to create a suite of machine-learning techniques that produce more explainable models, according to Raytheon. With EQUAS, users will be able to review the data that mattered the most in the AI decision-making process and explore the system's recommendations to understand why it chose one answer over another. This can increase confidence in the AI system's findings, the company said.

The technology could help address one of the criticisms of using AI in radiology -- namely, that most algorithms use a "black box" process with little to no transparency into how they reach a decision.

With EQUAS, "say a doctor has an x-ray image of a lung and her AI system says that it's cancer," said lead scientist and EQUAS principal investigator Bill Ferguson in a statement. "She asks why and the system highlights what it thinks are suspicious shadows, which she had previously disregarded as artifacts of the x-ray process. Now the doctor can make the call -- to diagnose, investigate further, or, if she still thinks the system is in error, to let it go."
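The highlighting Ferguson describes resembles a saliency map, a common explainability technique that scores each pixel by how strongly it influenced the model's output. Raytheon has not published how EQUAS produces its explanations; the sketch below shows only the general idea, using plain input gradients in PyTorch, with `model`, `image`, and `target_class` as hypothetical stand-ins for a trained classifier and its inputs.

```python
# A minimal sketch of gradient-based saliency -- one common way an XAI
# system can highlight the image regions that most influenced a prediction.
# Illustrative only: EQUAS's actual method is not public.
import torch

def saliency_map(model, image, target_class):
    """Return per-pixel importance: |d score(target_class) / d pixel|."""
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. pixels
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()                             # gradients flow back to the input
    # Collapse the channel dimension; large values mark influential regions,
    # e.g. the "suspicious shadows" a radiologist could then inspect.
    return image.grad.abs().amax(dim=0)
```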

Raytheon said the system will be able to monitor itself and share factors that limit its ability to make reliable recommendations. Developers will be able to leverage this self-monitoring capability to refine their AI systems, enabling them to add more data or change how the information is processed, according to the vendor.
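Raytheon has not detailed how this self-monitoring works. One simple way to illustrate the idea is a model that reports its predictive entropy and abstains when uncertainty is too high; the `predict_or_abstain` helper and `max_entropy` threshold below are hypothetical illustrations, not the EQUAS design.

```python
# A sketch of one simple self-monitoring check: refuse to answer when the
# model's predictive distribution is too uncertain, and say so. Assumed
# names (predict_or_abstain, max_entropy) are illustrative only.
import torch
import torch.nn.functional as F

def predict_or_abstain(model, image, max_entropy=0.5):
    """Return (class, note), or (None, note) when the model is too uncertain."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image.unsqueeze(0)), dim=1)[0]
    # Predictive entropy: 0 for a one-hot (certain) output, higher when
    # probability mass is spread across many classes.
    entropy = float(-(probs * probs.clamp_min(1e-12).log()).sum())
    if entropy > max_entropy:
        return None, f"abstaining: entropy {entropy:.2f} exceeds {max_entropy}"
    return int(probs.argmax()), f"entropy {entropy:.2f}"
```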
