◆ Annual Review 2002

Multimedia Devices Laboratory


Qiangfu Zhao
Professor

Yong Liu
Associate Professor

The main stream of research in our lab is computational intelligence. There are three key words: recognition, learning, and understanding. The goal of our research is to develop learning models that are flexible enough to adapt to changing environments, and also simple enough to be realized, interpreted, and re-used. The final goal is to design a system that can think, and decide what to do and how to grow based on its own thinking. For this purpose, many approaches have been studied, e.g., neuro-computation, evolutionary computation, reinforcement learning, and so on. Of course, results from conventional symbol-based artificial intelligence are also included.

    So far we have used or proposed the following learning models:

  • Neural network trees,
  • Neural network ensemble,
  • Modular neural networks,
  • Cellular automata, and
  • Recurrent neural networks.

    Based on the above learning models, we have proposed many new algorithms. Examples include:

  • IEA: individual evolutionary algorithm,
  • Coop CEA: cooperative co-evolutionary algorithms,
  • EPNet: evolutionary programming neural net, and
  • Evolutionary design of neural network trees.

To verify and improve the models and learning algorithms proposed so far, we have been studying on-line growing of neural network trees, evolution of neural network ensembles, evolutionary design of decision trees, and so on. The main applications we are considering include pattern recognition, robot control, and data mining.

Refereed Journal Papers
[yliu-01:2002]Y. Liu and X. Zou. Analysis of negative correlation learning. Wuhan University Journal of Natural Sciences, 8(1), 2003.
[yliu-02:2002]Y. Liu and X. Zou. From designing a single neural network to designing neural network ensembles. Wuhan University Journal of Natural Sciences, 8(1), 2003.
Refereed Proceedings Papers
[qf-zhao-01:2002]N. Okamoto and Q. F. Zhao. Synchronous parallel GA with nearly zero sequential computations. In S. Peng, V. V. Savchenko, and S. Yukita, editors, Proc. International Symposium on Cyber Worlds (CW2002), pages 296-300, Tokyo, Nov. 2002. Hosei University, IEEE.
Generally speaking, genetic algorithms (GAs) can find globally optimal solutions with higher probability. However, compared with conventional approaches, GAs are usually more time-consuming. To solve this problem, different parallel GAs (PGAs) have been proposed in the literature. A direct method is to employ many slave processors for individual evaluation, and a master processor to synchronize the whole evolution process. This method is good in the sense that it does not change any properties (say, convergence) of GAs. However, according to Amdahl's law, this method cannot speed up the evolution process very much, because the percentage of sequential computation is relatively large. In this paper, we make two proposals: one is to keep all genetic material inside the slave processors, and the other is to ask the slaves to perform crossover and mutation as well. The former reduces data transmission time, and the latter directly reduces sequential computing time. With these proposals, we have succeeded in achieving almost m-fold speedup with m slaves, as verified through experiments.
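    The master-slave arrangement can be illustrated with a minimal sketch (a toy example of ours, not the implementation used in the paper): each slave keeps its own genetic material and performs selection, crossover, and mutation locally, so the master's sequential work shrinks to dispatching and gathering.

    import random
    from multiprocessing import Pool

    GENES, POP, GENS, SLAVES = 20, 16, 30, 4

    def fitness(ind):
        # toy objective: maximize the number of 1s in the bit string
        return sum(ind)

    def evolve_one_generation(subpop):
        # one generation run entirely inside a slave process:
        # selection, one-point crossover and point mutation are all local
        scored = sorted(subpop, key=fitness, reverse=True)
        parents = scored[:len(scored) // 2]
        children = []
        while len(children) < len(subpop):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENES)
            child = a[:cut] + b[cut:]
            child[random.randrange(GENES)] ^= 1
            children.append(child)
        return children

    if __name__ == "__main__":
        random.seed(0)
        subpops = [[[random.randint(0, 1) for _ in range(GENES)]
                    for _ in range(POP)] for _ in range(SLAVES)]
        with Pool(SLAVES) as pool:
            for _ in range(GENS):
                # the master's only sequential work: dispatch and gather
                subpops = pool.map(evolve_one_generation, subpops)
        best = max((ind for sp in subpops for ind in sp), key=fitness)
        print("best fitness:", fitness(best))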
[qf-zhao-02:2002]S. Mizuno and Q. F. Zhao. Neural Network Trees with Nodes of Limited Inputs are Good for Learning and Understanding. In L. Wang et al., editors, Proc. 4th Asia-Pacific Conference on Simulated Evolution And Learning (SEAL2002), pages 573-576, Singapore, Nov. 2002. Nanyang Technological University, IEEE.
A neural network tree (NNTree) is a decision tree (DT) in which each non-terminal node is an expert neural network (ENN). Compared with conventional DTs, an NNTree can achieve better performance with fewer nodes, and the performance can be increased further by incremental learning. Recently, we have found that if we limit the number of inputs of each ENN, we can interpret the NNTree very efficiently. In general, interpreting a neural network is an NP-complete problem. If we adopt NNTrees with nodes of limited inputs (L-NNTrees for short), however, the time complexity of extracting comprehensible rules becomes polynomial. In this paper, we propose several approaches for the evolutionary design of L-NNTrees, and show, through experiments with several public databases, that L-NNTrees are good both for learning and for understanding.
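    The toy sketch below (hypothetical structure and weights, not the code of the paper) shows why limiting the fan-in of each expert network matters: every non-terminal node tests only a small, fixed subset of the input features, so the rules extractable from any one node stay small.

    import math

    class ENNNode:
        # a non-terminal NNTree node: a tiny one-neuron "expert" that looks
        # only at a limited subset of the input features
        def __init__(self, feature_ids, weights, bias, left, right):
            self.feature_ids = feature_ids
            self.weights, self.bias = weights, bias
            self.left, self.right = left, right   # children or class labels

        def route(self, x):
            z = self.bias + sum(w * x[i]
                                for w, i in zip(self.weights, self.feature_ids))
            return self.left if 1.0 / (1.0 + math.exp(-z)) < 0.5 else self.right

    def classify(node, x):
        while isinstance(node, ENNNode):
            node = node.route(x)
        return node                               # a leaf is just a class label

    # toy two-level tree over 4-dimensional inputs; each node uses only 2 features
    tree = ENNNode([0, 1], [1.0, -1.0], 0.0,
                   ENNNode([2, 3], [0.5, 0.5], -0.4, "A", "B"),
                   "C")
    print(classify(tree, [0.2, 0.9, 0.1, 0.3]))   # -> A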
[qf-zhao-03:2002]S. Haruyama and Q. F. Zhao. Designing smaller decision trees using multiple objective optimization based GPs. In A. E. Kamel et al., editors, Proc. IEEE International Conference on Systems, Man and Cybernetics (SMC'02), CD-ROM (no page numbers), Tunisia, Oct. 2002. IEEE.
The decision tree (DT) is a good model for machine learning. Many methods have been proposed in the literature for designing DTs from training data. Most existing methods, however, are single-path search algorithms that provide only one of the possible solutions. In our research, we have tried to design DTs using genetic programming (GP). Theoretically speaking, GP can generate many different DTs, and thus might be able to design smaller DTs with the same performance. In practice, however, the DTs obtained by GP are usually very large and complex. To solve this problem, we have proposed several methods for evolving smaller DTs. In this paper, we examine several multiple objective optimization (MOO) based GPs, and verify their effectiveness through experiments.
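    As a simple illustration of the MOO idea (a minimal example of ours, not the paper's exact method), candidate trees can be compared by Pareto dominance over the two objectives, classification error and tree size:

    def dominates(a, b):
        # a, b are (error, size) pairs; smaller is better in both objectives
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates):
        # keep the candidate trees that no other candidate dominates
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates if o is not c)]

    trees = [(0.10, 41), (0.10, 17), (0.12, 9), (0.25, 5), (0.08, 60)]
    print(pareto_front(trees))   # (0.10, 41) is dominated and drops out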
[qf-zhao-04:2002]T. Endou and Q. F. Zhao. Generation of comprehensible decision trees through evolution of training data. In X. Yao et al., editors, Proc. IEEE Congress on Evolutionary Computation (CEC'2002), pages 1221-1225, Hawaii, May 2002. IEEE.
In machine learning, decision trees (DTs) are usually considered comprehensible because a reasoning process can be given for each conclusion. When the data set is large, however, the DTs obtained may become very large and are no longer comprehensible. To increase the comprehensibility of DTs, we have proposed several methods. For example, we have tried to evolve DTs using genetic programming (GP) with tree size as a secondary fitness measure; we have tried to initialize GP using results obtained by C4.5; and we have also tried to introduce the divide-and-conquer concept into GP, but the results obtained are still not good enough. Up to now we have tried to design good DTs from given, fixed data. In this paper, we look at the problem from a different point of view. The basic idea is to evolve a small data set that covers the domain knowledge as well as possible. From this data set, a small but good DT can be designed. The validity of the new algorithm is verified through several experiments.
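    A minimal sketch of the idea, under assumed details (scikit-learn's DecisionTreeClassifier stands in for the DT induction algorithm, and the GA operators are placeholders), evolves a small subset of a public data set and scores it by the accuracy of the tree induced from it:

    import random
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    SUBSET, POP, GENS = 15, 20, 30
    random.seed(0)

    def fitness(idx):
        # accuracy, on the whole data set, of a tree trained on the subset idx
        clf = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
        return clf.score(X, y)

    population = [random.sample(range(len(X)), SUBSET) for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            child = list(dict.fromkeys(a[:SUBSET // 2] + b))[:SUBSET]   # crossover
            child[random.randrange(SUBSET)] = random.randrange(len(X))  # mutation
            children.append(child)
        population = parents + children

    best = max(population, key=fitness)
    print("subset size:", len(best), "accuracy of induced tree:", round(fitness(best), 3))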
[yliu-03:2002]Y. Liu and X. Yao. How to Control Search Step Size in Fast Evolutionary Programming. In Proceedings of the 2002 Congress on Evolutionary Computation, pages 652-656. IEEE Press, May 2002.
This paper investigates three approaches to controlling the search step size in fast evolutionary programming (FEP). The first approach is based on using different parameters in the Cauchy mutation, the second on mixing different mutation operators, and the third on applying cooperative coevolution in FEP. Experimental studies on some function optimization problems have shown that the three approaches can help FEP find better solutions while maintaining its fast convergence rate.
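    For illustration (assumed settings, not the experiments in the paper), the sketch below shows how the scale of the Cauchy noise used in FEP-style mutation controls the typical step size:

    import math, random
    random.seed(1)

    def cauchy(scale=1.0):
        # Cauchy variate via the inverse-CDF method; the scale sets the step size
        return scale * math.tan(math.pi * (random.random() - 0.5))

    def fep_mutate(x, eta, scale=1.0):
        # FEP-style mutation: add Cauchy noise, component-wise, scaled by eta
        return [xi + ei * cauchy(scale) for xi, ei in zip(x, eta)]

    x, eta = [0.0] * 5, [0.3] * 5
    for scale in (0.1, 1.0, 3.0):       # a larger scale gives longer typical jumps
        print(scale, [round(v, 3) for v in fep_mutate(x, eta, scale)])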
[yliu-04:2002]Y. Liu and X. Yao. Maintaining Population Diversity By Minimizing Mutual Information. In Proceedings of the 2002 Genetic and Evolutionary Computation Conference, pages 652-656. Morgan Kaufmann, July 2002.
Based on negative correlation learning and evolutionary learning, evolutionary ensembles with negative correlation learning (EENCL) was proposed for learning and designing neural network ensembles. The idea of EENCL is to regard the population of neural networks as an ensemble, and the evolutionary process as the design of the neural network ensemble. EENCL used fitness sharing based on the covering set; such fitness sharing does not measure similarity in the population accurately. In this paper, a fitness sharing scheme based on mutual information is introduced into EENCL to evolve a diverse and cooperative population. The effectiveness of this evolutionary learning approach was tested on two real-world problems.
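    One simple way to realize such a scheme, sketched below under a Gaussian assumption (an illustration of ours, not necessarily the exact formulation in the paper), is to estimate the pairwise mutual information from output correlations as I = -0.5 ln(1 - rho^2) and to discount the fitness of individuals whose outputs share much information with the rest of the population:

    import math

    def correlation(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
        va = sum((x - ma) ** 2 for x in a) / n
        vb = sum((y - mb) ** 2 for y in b) / n
        return cov / math.sqrt(va * vb)

    def mutual_information(a, b):
        # Gaussian estimate: I = -0.5 * ln(1 - rho^2), guarded against rho = 1
        rho = min(abs(correlation(a, b)), 0.999999)
        return -0.5 * math.log(1.0 - rho * rho)

    def shared_fitness(raw, outputs, i):
        # discount individual i by its average MI with the other individuals
        others = [j for j in range(len(outputs)) if j != i]
        mi = sum(mutual_information(outputs[i], outputs[j]) for j in others) / len(others)
        return raw[i] / (1.0 + mi)

    # three networks' outputs on the same five inputs; nets 0 and 1 are near-copies
    outputs = [[0.1, 0.9, 0.2, 0.8, 0.5],
               [0.1, 0.8, 0.2, 0.9, 0.5],
               [0.3, 0.4, 0.9, 0.2, 0.6]]
    raw = [0.9, 0.9, 0.9]
    # the diverse net 2 keeps most of its fitness; the near-duplicates are discounted
    print([round(shared_fitness(raw, outputs, i), 3) for i in range(3)])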
[yliu-05:2002]X. Yao and Y. Liu. Getting Most Out of Evolutionary Approaches. In Proceedings of the 2002 NASA/DoD Conference on Evolvable Hardware, pages 8-14. IEEE Computer Society, 2002.
[yliu-06:2002]Y. Liu and X. Yao. Learning and Evolution by Minimization of Mutual Information. In Lecture Notes in Computer Science, Vol. 2439, pages 495-504. Springer-Verlag, September 2002.
This paper presents a new method, ensemble learning with minimization of mutual information (ELMMI), for designing neural network ensembles. The idea of minimizing mutual information is to encourage the individual networks in an ensemble to learn different parts or aspects of the training data so that the ensemble can learn the whole training data better. Through minimization of mutual information, ELMMI tends to create uncorrelated networks by means of a novel correlation penalty term in the error function that encourages such specialization. In ELMMI, individual networks are trained simultaneously rather than sequentially. This provides an opportunity for different networks to cooperate with each other and to specialize. This paper analyses ELMMI in terms of mutual information on a regression task.
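    For comparison, the related negative correlation learning penalty analyzed in [yliu-01:2002] augments each network's error with a correlation term; a commonly cited form (shown here for reference only, not the ELMMI penalty itself) is

    E_i = \frac{1}{N}\sum_{n=1}^{N}\Big[\tfrac{1}{2}\big(F_i(n)-d(n)\big)^2
          + \lambda\,\big(F_i(n)-\bar{F}(n)\big)\sum_{j\neq i}\big(F_j(n)-\bar{F}(n)\big)\Big],

    where F_i is the output of network i, \bar{F} is the ensemble output, d is the target, and \lambda controls the penalty strength; ELMMI replaces this correlation term with a mutual-information-based penalty.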
Chapters in Books
[yliu-07:2002]Y. Liu, X. Yao, and T. Higuchi. Designing neural network ensembles by minimizing mutual information. In Computational Intelligence in Control, chapter 1, pages 1-21. Idea Group Publishing, Hershey, PA 17033-1117, USA, 2003.
[yliu-08:2002]X. Yao, Y. Liu, K.-H. Liang, and G. Lin. Optimal Search Step Size and Fast Evolutionary Algorithms. In Advances in Evolutionary Computing: Theory and Applications, chapter 2, pages 45-94. Springer-Verlag, 2002.
Ph.D. and Other Theses
[qf-zhao-05:2002]Daisuke Ogino. Graduation Thesis: Neural network learning with unbalanced training set, University of Aizu, 2002.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-06:2002]Takashi Murakami. Graduation Thesis: Improvement of a simulation environment for robotic study, University of Aizu, 2002.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-07:2002]Tomoyuki Ito. Graduation Thesis: Rule extraction from neural networks using decision trees, University of Aizu, 2002.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-08:2002]Satoru Maeda. Graduation Thesis: Prediction of Topix based on neural networks, University of Aizu, 2002.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-09:2002]Taichirou Endou. Master's Thesis: A study on extraction of comprehensible decision trees, University of Aizu, 2002.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-10:2002]Yuki Takemura. Master's Thesis: Develop a simulation environment for robotic studies, University of Aizu, 2002.
Thesis Advisor: Qiangfu Zhao
[yliu-09:2002]Yo Takahashi. Graduation Thesis: Sarsa learning with replacing eligibility traces for robot control, University of Aizu, 2002.
Thesis Advisor: Yong Liu.
[yliu-10:2002]Yukitaka Nishizawa. Graduation Thesis: Reinforcement learning for game playing, University of Aizu, 2002.
Thesis Advisor: Yong Liu.
[yliu-11:2002]Takashi Kaneko. Graduation Thesis: Neural network learning for stock market index forecast, University of Aizu, 2002.
Thesis Advisor: Yong Liu.
[yliu-12:2002]Shinichi Ooba. Graduation Thesis: Black Jack with SARSA algorithm, University of Aizu, 2002.
Thesis Advisor: Yong Liu.