Project Details
Description
In recent years, AI systems have made great strides using deep learning techniques. These systems have proven efficient and outperform previous approaches in many domains. However, these techniques are opaque to further analysis. The result of training a neural network is a set of weights: a vast collection of floating-point numbers. It is therefore impossible to elicit knowledge from the trained network or to analyze the limits and pitfalls of a given system. Because the computations in a deep learning network depend on subtle changes in the activation patterns of many neurons, the network cannot be converted into representations that are more amenable to further analysis, such as decision trees or rule-based systems. This blind spot can have many detrimental consequences. For example, it is impossible to predict whether a self-driving car will be able to adapt to a snow-covered road. Even more insidious problems can occur in seemingly benign applications: researchers have found, for example, that AI face recognition systems are biased and are far better at recognizing the faces of Whites than those of Blacks or Asians.

We use a hybrid approach that combines white-box tests (i.e., tests that use knowledge of the internal structure of the system and the deep learning architecture) and black-box tests (i.e., tests without any knowledge of the internal structure of the system) to convert aspects of, or even the entire, robot system into an approximately equivalent behavior tree program (main project and Sub-project 3). Our approach is tested in two domains: self-driving cars (Sub-project 1) and robots operating in nuclear power plants (Sub-project 2). Both domains are highly safety sensitive, since errors in the AI system can lead to the loss of many lives.

In the self-driving car domain, we focus on the perception problem in robot systems. Deep learning, especially convolutional neural networks, is a very popular approach to computer vision in self-driving cars. Our system will provide methods for visualizing and analyzing the trained network. An adversarial neural network is used to create counterexamples that expose deficiencies in the trained network (a minimal sketch of the counterexample idea follows this description).

In the nuclear power plant domain, one of the tasks that a plant maintenance robot must accomplish is to safely dispose of radioactive waste. It is important that the waste is handled properly to avoid further contamination. We therefore focus on the motion planner component in this domain. Whereas vision is a classification problem (e.g., is there a stop sign in this image?), in manipulation tasks the robot needs to learn a sequence of steps. Reinforcement learning is a common technique for solving this problem, and it, too, has seen increased use of deep learning. In this domain, the goal is to decompose the learned sequence into behaviors and to organize those behaviors into a tree (see the behavior tree sketch below). The resulting behavior tree program can then be used to analyze the quality of the motion planner and to find the limits of its performance.
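The project text names generative adversarial networks for counterexample generation, but the details are not given here. As a simpler stand-in, the sketch below uses the fast gradient sign method (FGSM) in PyTorch to illustrate how a small perturbation of an input can expose a deficiency in a trained perception network. The `model`, `image`, and `label` arguments are hypothetical placeholders, not artifacts of this project.

```python
import torch
import torch.nn.functional as F

def fgsm_counterexample(model, image, label, epsilon=0.03):
    """Return a slightly perturbed copy of `image` that pushes the
    classifier `model` toward misclassification (a counterexample)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss fastest.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: `net` is a trained traffic-sign classifier,
# `batch` and `labels` come from its evaluation set.
# counterexample = fgsm_counterexample(net, batch, labels)
# If net(counterexample).argmax(1) != labels, the network has a blind spot.
```

If such a counterexample differs from the original image by an imperceptible amount yet flips the prediction, it documents a concrete limit of the trained network.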
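The behavior tree representation itself is standard; the extraction procedure is the project's contribution and is not specified here. The following minimal Python sketch (the node types are conventional, while the waste-disposal fragment and all leaf names are invented for illustration) shows the kind of program the decomposition would target:

```python
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node wrapping one primitive behavior extracted from the policy."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self, blackboard):
        return self.fn(blackboard)

class Sequence:
    """Composite node: run children in order, fail as soon as one fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status is not Status.SUCCESS:
                return status
        return Status.SUCCESS

class Selector:
    """Composite node: try children in order, succeed as soon as one succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, blackboard):
        for child in self.children:
            status = child.tick(blackboard)
            if status is not Status.FAILURE:
                return status
        return Status.FAILURE

# Invented waste-disposal fragment: grasp the container, then either
# place it in the storage cask or retreat to a safe pose.
tree = Sequence(
    Action("grasp", lambda bb: Status.SUCCESS if bb["grasped"] else Status.FAILURE),
    Selector(
        Action("place_in_cask",
               lambda bb: Status.SUCCESS if bb["cask_reachable"] else Status.FAILURE),
        Action("retreat", lambda bb: Status.SUCCESS),
    ),
)

print(tree.tick({"grasped": True, "cask_reachable": False}))  # Status.SUCCESS (via retreat)
```

Each leaf can be inspected and tested in isolation, which is what makes a tree extracted from a learned policy more amenable to analysis than the raw network weights.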
Status | Finished
---|---
Effective start/end date | 2020/01/01 → 2020/07/31
Keywords
- Safety and AI
- Explainable AI
- Deep learning
- Convolutional neural networks
- Generative adversarial networks
- Behavior-tree programming language
- Self-driving cars
- Plant maintenance robots