Department of Computer Hardware

Multimedia Devices Laboratory

Qiangfu Zhao
Professor
Yong Liu
Associate Professor
The main research stream of our lab is computational intelligence. There are three key words: recognition, learning, and understanding. The goal of our research is to develop learning models that are flexible enough to adapt to changing environments, yet simple enough to be realized, interpreted, and re-used. The final goal is to design a system that can think, and can decide what to do and how to grow based on its own thinking. For this purpose, many approaches have been studied, e.g., neuro-computation, evolutionary computation, reinforcement learning, and so on. Of course, results from conventional symbol-based artificial intelligence are also included.
  So far we have used or proposed the following learning models:
  • Neural network trees,
  • Neural network ensemble,
  • Modular neural networks,
  • Cellular automata, and
  • Recurrent neural networks.
Based on the above learning models, we have proposed many new algorithms.
Examples include:
  • IEA: individual evolutionary algorithm,
  • CoopCEA: cooperative co-evolutionary algorithms,
  • EPNet: evolutionary programming neural net, and
  • Evolutionary design of neural network trees.
To verify and improve the models and learning algorithms proposed so far, we have been studying on-line growing of neural network trees, evolution of neural network ensembles, evolutionary design of decision trees, and so on. The main applications we are considering include pattern recognition, robot control, and medical diagnosis. Currently, we are developing a health advisory intelligent robot based on the results of our study.
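
To make the NNTree idea above concrete, the following minimal Python sketch (ours, not the lab's implementation; all names are illustrative) shows how a tree whose non-terminal nodes hold small neural networks routes a pattern to a class label:

    # Minimal sketch of a neural network tree (NNTree): a decision tree whose
    # non-terminal nodes each contain a small neural network that routes an
    # input pattern to one of the node's children.
    import numpy as np

    class Node:
        def __init__(self, weights=None, bias=None, label=None):
            self.weights = weights   # parameters of the node's small NN
            self.bias = bias
            self.label = label       # set only on terminal (leaf) nodes
            self.children = []       # one child per NN output

        def is_leaf(self):
            return self.label is not None

    def route(node, x):
        """Recursively route pattern x down the tree until a leaf is reached."""
        if node.is_leaf():
            return node.label
        # The node's one-layer NN scores each child; the child with the
        # highest activation receives the pattern.
        scores = np.tanh(node.weights @ x + node.bias)
        return route(node.children[int(np.argmax(scores))], x)

    # Tiny hand-built tree over 2-D inputs: the root NN splits the plane,
    # and each branch ends in a class label.
    leaf0, leaf1 = Node(label=0), Node(label=1)
    root = Node(weights=np.array([[1.0, -1.0], [-1.0, 1.0]]), bias=np.zeros(2))
    root.children = [leaf0, leaf1]
    print(route(root, np.array([2.0, 0.5])))   # -> 0
    print(route(root, np.array([0.1, 3.0])))   # -> 1

In a learned NNTree the node networks would of course be trained (e.g., by a GA or by BP, as in the papers below) rather than hand-set.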

Refereed Journal Papers

[yliu-01:2003] X. Yao and Y. Liu. Evolving Neural Network Ensembles by Minimization of Mutual Information. International Journal of Hybrid Intelligent Systems, 1(1), 2004.

Learning and evolution are two fundamental forms of adaptation. There has been great interest in combining learning and evolution with neural networks in recent years. This paper presents a hybrid learning system for learning and designing neural network ensembles based on negative correlation learning and evolutionary learning. The idea of the hybrid learning system is to regard the population of neural networks as an ensemble, and the evolutionary process as the design of neural network ensembles. Two fitness sharing techniques have been used in the evolutionary process. One is based on the covering set. The other uses the concept of mutual information. The effectiveness of this hybrid learning approach was tested on two real-world problems.
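
The negative correlation learning mentioned here trains each ensemble member on its own squared error plus a penalty that couples it to the rest of the ensemble. A minimal numpy sketch of the standard penalised error (variable names are ours) is:

    # Negative correlation learning (NCL) penalty for ensemble training:
    # e_i = (F_i - d)^2 + lam * p_i,
    # with p_i = (F_i - Fbar) * sum_{j != i} (F_j - Fbar).
    import numpy as np

    def ncl_errors(outputs, target, lam=0.5):
        """outputs: shape (M,), one scalar output per ensemble member."""
        fbar = outputs.mean()
        dev = outputs - fbar
        # sum_{j != i} (F_j - Fbar) = -(F_i - Fbar), since deviations sum to 0
        p = -dev * dev
        return (outputs - target) ** 2 + lam * p

    members = np.array([0.9, 0.4, 0.7])   # outputs of a 3-member ensemble
    print(ncl_errors(members, target=1.0))

With lam = 0 each network is trained independently; a larger lam rewards members whose outputs deviate from the ensemble mean, encouraging diversity.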

Refereed Proceeding Papers

[qf-zhao-01:2003] T. Takeda, Q. F. Zhao, and Y. Liu. A study on on-line learning of NNTrees. In M. E. Hasselmo, editor, International Joint Conference on Neural Networks, pages 513-517, Portland, July 2003. IEEE and INNS, IEEE.

A neural network tree (NNTree) is a hybrid learning model with the overall structure being a decision tree (DT), and each non-terminal node containing a neural network (NN). Using NNTrees, it is possible to learn new knowledge on-line by adjusting the NNs in the non-terminal nodes. It is also possible to understand the learned knowledge on-line, because the NNs in the non-terminal nodes are usually very small and can be interpreted easily. So far, we have studied retraining of NNTrees by adjusting the NNs in the non-terminal nodes, with the structure of the tree fixed during retraining. We found that this kind of retraining is good for size reduction in off-line learning if the training set is highly redundant. However, updating the NNs alone is not enough for on-line learning. In this paper, we introduce two methods for on-line learning of NNTrees. The first is SGU (simple growing up), and the second is GUWL (growing up with learning). The effectiveness of these methods is compared through experiments with several public databases.
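
The abstract does not spell out SGU and GUWL, but the general shape of on-line growth can be sketched: a pattern that reaches a leaf with the wrong label triggers a split of that leaf, optionally followed by local retraining. The following hedged sketch (reusing Node and route from the sketch in the overview; the paper's actual procedures may differ) turns a conflicting leaf into a two-way NN node that separates the leaf's stored prototype from the new pattern:

    import numpy as np

    def split_leaf(leaf, x, y):
        """Turn a leaf into a non-terminal node with two child leaves."""
        proto, old_label = leaf.proto, leaf.label
        # Nearest-prototype discriminant: argmax of p.x - |p|^2/2 picks the
        # child whose prototype is closest to x.
        leaf.weights = np.stack([proto, x])
        leaf.bias = -0.5 * np.array([proto @ proto, x @ x])
        old = Node(label=old_label)
        old.proto = proto
        new = Node(label=y)
        new.proto = x
        leaf.children = [old, new]
        leaf.label = None            # no longer a leaf

    def online_update(node, x, y):
        """Route (x, y); grow the tree at the leaf if the label disagrees."""
        if node.is_leaf():
            if node.label != y:
                split_leaf(node, x, y)
            return
        scores = np.tanh(node.weights @ x + node.bias)
        online_update(node.children[int(np.argmax(scores))], x, y)

    root = Node(label=0)
    root.proto = np.array([0.0, 0.0])
    online_update(root, np.array([3.0, 3.0]), 1)   # conflicting label -> split
    print(route(root, np.array([2.5, 2.9])))       # -> 1

A GUWL-like variant would additionally retrain the new node's NN on locally stored patterns rather than setting its weights directly.
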
[qf-zhao-02:2003] Q. F. Zhao. Recognition, learning and understanding. In R. Sadykhov, editor, International Conference on Neural Networks and Artificial Intelligence, pages 29-34, Minsk, Belarus, Nov. 2003. Belarus State University of Informatics and Radioelectronics, INNS.

Symbolic and non-symbolic approaches are two popular categories in machine learning. In general, symbolic approaches can provide comprehensible rules, but cannot adapt to changing environments efficiently. On the contrary, non-symbolic approaches can adapt to changing environments but cannot provide comprehensible rules. In this study, we introduce a hybrid learning model called the neural network tree (NNTree). An NNTree is a decision tree (DT) with each non-terminal node being an expert neural network (ENN). Experimental results have shown that NNTrees can be used both for learning and for understanding. In this paper, we first summarize our results obtained in recent years, and then propose some topics for future study.
[qf-zhao-03:2003] C. Lu, Q. F. Zhao, W. J. Pei, and Z. Y. He. A Multiple Objective Optimization Based GA for Designing Interpretable and Comprehensible Neural Network Trees. In Z. Y. He, editor, IEEE International Conference on Neural Networks and Signal Processing, pages 518-521, Nanjing, China, Dec. 2003. IEEE CASS Shanghai Chapter, IEEE.

The neural network tree (NNTree) is a hybrid model for machine learning. The overall structure is a decision tree (DT), and each non-terminal node is an expert neural network (ENN). Generally speaking, NNTrees outperform conventional DTs with fewer nodes, and the performance can be improved through incremental learning. In addition, NNTrees can be interpreted in polynomial time if the number of inputs for each ENN is limited. In this paper, we propose a multiple objective optimization based genetic algorithm (MOO-GA) for designing interpretable and comprehensible NNTrees. The efficiency of the proposed algorithm is validated by experimental results.
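
The objectives of the MOO-GA are not listed in the abstract; a typical pair for interpretable trees is classification error versus tree size. At the core of any such GA is a Pareto-dominance test for ranking candidates, sketched below with made-up objective values:

    # Pareto dominance for a two-objective GA over candidate NNTrees.
    # Objectives (both minimised): classification error, number of nodes.
    def dominates(a, b):
        """True if a is no worse than b everywhere and strictly better somewhere."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    candidates = [(0.08, 15), (0.10, 7), (0.12, 20)]   # (error, nodes)
    front = [c for c in candidates
             if not any(dominates(o, c) for o in candidates if o is not c)]
    print(front)   # (0.12, 20) is dominated by (0.08, 15); the other two survive
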
[qf-zhao-04:2003] K. Sakamoto and Q. F. Zhao. Generation of good training data for extracting DTs from evolved NN robot controllers. In Z. Y. He, editor, IEEE International Conference on Neural Networks and Signal Processing, pages 33-36, Nanjing, China, Dec. 2003. IEEE CASS Shanghai Chapter, IEEE.

Neural networks (NNs) have been widely accepted as a good model for robot controllers. One reason is that NNs are good both for batch learning and for incremental learning. Batch learning is important for obtaining an initial controller using existing data, while incremental learning is useful for refining the controller using newly observed data. One drawback of NN controllers is that the knowledge learned by an NN is difficult to understand. The goal of this study is to interpret an evolved NN controller using a decision tree (DT). For this purpose, it is necessary to generate a good training set from which the most consistent DT can be induced. This paper introduces several simple methods for generating the training set. The efficiency and efficacy of these methods are verified through experiments.
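
The pipeline described here (roll out the evolved controller, log sensor-action pairs, induce a DT from the log) can be outlined as follows; the controller, the random sensor stream, and any filtering of good samples are stand-ins of ours:

    # Sketch: generate DT training data from an evolved NN controller by
    # recording (sensor, action) pairs during rollouts.
    import numpy as np

    rng = np.random.default_rng(0)

    def nn_controller(sensors, W):
        """Stand-in evolved controller: one linear layer, argmax action."""
        return int(np.argmax(W @ sensors))

    W = rng.normal(size=(3, 4))            # 4 sensors -> 3 actions
    dataset = []
    for _ in range(1000):                  # one synthetic "episode"
        sensors = rng.uniform(0, 1, size=4)
        action = nn_controller(sensors, W)
        dataset.append((sensors, action))  # training pair for DT induction

    X = np.array([s for s, _ in dataset])
    y = np.array([a for _, a in dataset])
    print(X.shape, y.shape)                # (1000, 4) (1000,)

The paper's methods presumably differ in which states get sampled and kept; the sketch only shows the overall rollout-then-induce structure.
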
[qf-zhao-05:2003] T. Takeda and Q. F. Zhao. A two-step algorithm for designing small neural network trees. In Z. Y. He, editor, IEEE International Conference on Neural Networks and Signal Processing, pages 513-517, Nanjing, China, Dec. 2003. IEEE CASS Shanghai Chapter, IEEE.

There are mainly two approaches to machine learning. One is the symbolic approach, and the other is the sub-symbolic approach. The decision tree (DT) is a typical model for symbolic learning, and the neural network (NN) is a popular model for sub-symbolic learning. The neural network tree (NNTree) is a DT with each non-terminal node being an expert NN. The NNTree is a learning model that may combine the advantages of both DTs and NNs. Through experiments, we found that the size of an NNTree is usually proportional to the number of training data. Thus, we can produce small trees by using partial training data. In most cases, however, this will decrease the performance of the tree. In this paper, we propose a two-step algorithm to produce small NNTrees. The first step is to get a small NNTree using partial data, and the second step is to increase the performance through retraining. The effectiveness of this algorithm is verified through experiments with public databases.
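
Read literally, the two-step recipe is: (1) build a small NNTree from a subsample, then (2) recover accuracy by retraining the node parameters on the full set with the structure frozen. A toy, runnable outline of that control flow (the "tree" is replaced by a single linear unit, so only the two-step structure mirrors the paper):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    def build(X, y):
        # Step 1: fit parameters on a small subsample (this would fix the
        # tree structure in the real algorithm).
        return np.linalg.lstsq(X, 2.0 * y - 1.0, rcond=None)[0]

    def retrain(w, X, y, lr=0.1, epochs=20):
        # Step 2: refine the same parameters on the full training set.
        for _ in range(epochs):
            w -= lr * X.T @ (X @ w - (2.0 * y - 1.0)) / len(X)
        return w

    w = build(X[:50], y[:50])             # small model from 10% of the data
    w = retrain(w, X, y)                  # retrain with structure fixed
    print(((X @ w > 0).astype(int) == y).mean())   # accuracy after step 2
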
[qf-zhao-06:2003] T. Takeda and Q. F. Zhao. Growing Neural Network Trees Efficiently and Effectively. In A. Abraham, M. Koppen, and K. Franke, editors, International Conference on Hybrid Intelligent Systems, pages 107-115, Melbourne, Dec. 2003. IEEE, IOS Press.

The neural network tree (NNTree) is a hybrid learning model with the overall structure being a decision tree (DT), and each non-terminal node containing an expert neural network (ENN). Generally speaking, NNTrees outperform conventional DTs because better features can be extracted by the ENNs, and the performance can be improved further through incremental learning. In addition, as we have shown recently, NNTrees can always be interpreted in polynomial time if we restrict the number of inputs for each ENN. Recently, we proposed an algorithm which can grow the tree automatically and provides very good results. However, the algorithm is not efficient, because a GA is used both in retraining the ENNs and in creating new nodes. In this paper, we propose a way to replace the GA with the back propagation (BP) algorithm in the growing algorithm. Experiments with several public databases show that the improved algorithm can grow better NNTrees at much lower computational cost.
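
The claimed cost advantage of BP over a GA is easy to see in miniature: per generation a GA must evaluate a whole population, while BP needs one gradient evaluation per pass over the data. A small side-by-side on a single sigmoid unit (a stand-in of ours for one node's ENN; the paper's actual setup is larger):

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 2))
    t = (X @ np.array([1.0, -1.0]) > 0).astype(float)   # target split

    def loss(w):
        return np.mean((1.0 / (1.0 + np.exp(-X @ w)) - t) ** 2)

    # GA-style search: 20 candidates evaluated in each of 50 generations.
    pop = rng.normal(size=(20, 2))
    for _ in range(50):                                  # 1000 loss evaluations
        best = pop[np.array([loss(w) for w in pop]).argmin()]
        pop = best + 0.1 * rng.normal(size=pop.shape)    # mutate around the best

    # BP-style search: one gradient evaluation in each of 50 epochs.
    w = rng.normal(size=2)
    for _ in range(50):                                  # 50 gradient evaluations
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= 2.0 * X.T @ ((p - t) * p * (1.0 - p)) / len(X)

    print(f"GA loss: {loss(best):.4f}   BP loss: {loss(w):.4f}")
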
[yliu-02:2003] Y. Liu and X. Yao. Optimisation and Learning in Neural Network Learning. In M. H. Hamza, editor, Proceedings of the IASTED International Conference on Modelling, Simulation and Optimization, pages 167-172. IASTED, ACTA Press, July 2003.

This paper introduces a supervised learning model and surveys related research work. The paper is organised as follows. A supervised learning model is first described. The bias-variance trade-off is then discussed for the supervised learning model. Based on the bias-variance trade-off, both single neural network approaches and neural network ensemble approaches are reviewed, and the problems in the existing approaches are indicated. Finally, the paper concludes by specifying potential future research directions.
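
For reference, the bias-variance trade-off the paper is organised around is the standard decomposition of the expected squared error of a learner \hat{f} trained on random data sets D, at a fixed input x with noisy target y = f(x) + \epsilon, E[\epsilon] = 0:

    \mathbb{E}_{D,\epsilon}\big[(y - \hat{f}(x;D))^2\big]
      = \underbrace{\big(f(x) - \mathbb{E}_D[\hat{f}(x;D)]\big)^2}_{\text{bias}^2}
      + \underbrace{\mathbb{E}_D\big[(\hat{f}(x;D) - \mathbb{E}_D[\hat{f}(x;D)])^2\big]}_{\text{variance}}
      + \underbrace{\sigma_\epsilon^2}_{\text{noise}}

Flexible single networks reduce bias at the cost of variance; ensembles attack the variance term by averaging, which is what motivates the ensemble approaches the paper surveys.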

Chapters in Books

[yliu-03:2003] X. Yao and Y. Liu. Machine learning. In Introductory Tutorials in Optimisation, Decision Support and Search Methodologies. Kluwer Academic Publishers, 2004.

Grants

[yliu-04:2003] Y. Liu. Research Subsidy Program for Young Faculty Members to Conduct Research Overseas Related to Improvement of Education [Ministry of Education and Science], 2003.

Academic Activities

[yliu-05:2003] Y. Liu, 2004-.

Editor, International Journal of Hybrid Intelligent Systems

Ph.D. and Other Theses

[qf-zhao-07:2003] Masamitsu Ikarashi. Master Thesis: Acquisition of robot moving strategies based on co-evolution, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-08:2003] Nobuhiro Okamoto. Master Thesis: A study on PGAs for evolutionary learning of neural network robot controllers, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-09:2003] Chuanfeng Lu. Master Thesis: VQ+IFS: a new approach to image compression, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-10:2003] Kouichi Sakamoto. Master Thesis: A study on extraction of rules from evolved neural network robot controllers, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-11:2003] Takaharu Kawatsure. Graduation Thesis: Generation of comprehensible decision rules through evolution, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-12:2003] Hiromoto Hayashi. Graduation Thesis: Designing robot controllers that are both learnable and comprehensible, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-13:2003] Makito Shigihara. Graduation Thesis: Learning of neural network with unbalanced training set, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-14:2003] Masanori Takano. Graduation Thesis: Finding suitable sensors for evolving mobile robots, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.

[qf-zhao-15:2003] Masaaki Takahashi. Graduation Thesis: Stock prediction using neural network trees, University of Aizu, 2003.

Thesis Advisor: Zhao, Q. F.