◆ Annual Review 2001

Multimedia Devices Laboratory


Qiangfu Zhao
Professor

Jie Huang
Associate Professor

Yong Liu
Associate Professor

The main stream of research in our lab is computational intelligence. There are three key words: recognition, learning, and understanding. The final goal is to design a system that can think, and decide what to do and how to grow based on its own thinking. For this purpose, many approaches have been studied, e.g., neuro-computation, evolutionary computation, reinforcement learning, and so on. Of course, results from conventional symbol-based artificial intelligence are also included.

1. Neuro-computation and evolutionary computation:

The goal of this research is to develop learning models that are flexible enough to adapt to changing environments, and also simple enough to be realized, interpreted, and re-used. Of course, a model cannot be used unless we also study the learning algorithms that correspond to it. Possible models include:

  • Modular neural networks
  • Neural network trees
  • Neural network ensemble
  • Cellular automata
  • Recurrent neural networks
  • Hybrid models

As for learning, we have proposed the following algorithms:

  • IEA: individual evolutionary algorithm
  • CoopCEA: cooperative co-evolutionary algorithms
  • EPNet: evolutionary programming neural net
  • Evolutionary design of neural network trees

To verify and improve the models and learning algorithms we have proposed, we have been studying automatic design of decision rules and decision trees, pattern recognition, robot control, and data mining.

2. Supporting Research

To apply our methods to pattern recognition and robot control, it is also necessary to get support from many other research areas, such as signal and image processing, development of virtual environments, learning and evolution of agents living in them, and so on. Specifically, we are studying localization and recognition of sounds in natural environments, and trying to apply our results to the automatic acquisition of robot control strategies. We have also developed a robot simulator that can be used by many users simultaneously through the Internet. We are trying to release this simulator as free software, so that any researcher in the world can use it for stand-alone or joint study.

Refereed Journal Papers
[j-huang-001:2001]J. Huang. Developing a Multimodal Telerobot: Spatial Auditory Processing. J. Shanghai Univ., 5(Suppl.):147-151, 2001.
[j-huang-002:2001]Q. Jin, J. Huang, and Z. Peng. Design of Multilingual Agents for Community-Based Collaborative Virtual Universities. J. Shanghai Univ., 5(Suppl.):248-253, 2001.
[j-huang-003:2001]Y. Asai, H. Nakashima, T. Yamamura, J. Huang, and N. Ohnishi. Acquiring the Ability of Object Localization by Vision and Audition Through Motion. IEICE Trans. D-II, J83-D-II(7):1676-1684, 2000.
Refereed Proceedings Papers
[j-huang-004:2001]Y. Yamazaki, M. Cohen, J. Huang, and T. Yanagi. Augmented Audio Reality: Compositing Mobile Telerobotic and Virtual Spatial Audio. In Proc. 11th Int. Conf. Artificial Reality and Telexistence, Dec. 2001.
[j-huang-005:2001]J. Huang. Computational Evaluation for the Echo-avoidance Model of the Precedence Effect. In Proc. First Int. Sym. Measurement, Analysis and Modeling of Human Functions, pages 38-42. IMEKO/SICE/IEEE, Sep. 2001.
[j-huang-006:2001]J. Huang. Developing a Multimodal Telerobot. In Proc. 2001 Int. Conf. Imaging Science, Systems, and Technology, pages 539-545. CISST'2001, Jun. 2001.
[j-huang-007:2001]J. Huang. Spatial Sound Processing for a Hearing Robot. In Proc. Int. Conf. Information Society in the 21st Century, pages 281-287. IS2000, Nov. 2000.
[qf-zhao-001:2001]Taichirou Endou and Qiangfu Zhao. Generation of Comprehensible Decision Trees Through Evolution of Training Data. In Proc. IEEE Congress on Evolutionary Computation, pages 1221-1225, Hawaii, May 2002. IEEE.
In machine learning, decision trees (DTs) are usually considered comprehensible because a reasoning process can be given for each conclusion. When the data set is large, however, the DTs obtained may become very large, and they are no longer comprehensible. To increase the comprehensibility of DTs, we have proposed several methods. For example, we have tried to evolve DTs using genetic programming (GP), with tree size as the secondary fitness measure; we have tried to initialize GP using results obtained by C4.5; and we have also tried to introduce the divide-and-conquer concept into GP, but the results obtained are still not good enough. Up to now, we have tried to design good DTs from given fixed data. In this paper, we look at the problem from a different point of view. The basic idea is to evolve a small data set that covers the domain knowledge as well as possible. From this data set, a small but good DT can be designed. The validity of the new algorithm is verified through several experiments.
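The idea can be illustrated with a minimal sketch. The following is a hypothetical rendering, not the paper's actual algorithm: a simple GA evolves a small index subset of the training data, with scikit-learn's DecisionTreeClassifier standing in for C4.5, and fitness rewarding subsets whose induced tree classifies the full data well while staying small.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def fitness(subset_idx, X, y, size_penalty=0.001):
        # Induce a tree from the small subset, then score it on the full data,
        # penalizing large trees so small data sets that generalize are favored.
        tree = DecisionTreeClassifier().fit(X[subset_idx], y[subset_idx])
        return tree.score(X, y) - size_penalty * tree.tree_.node_count

    def evolve_subset(X, y, subset_size=50, pop_size=20, generations=100, seed=0):
        rng = np.random.default_rng(seed)
        pop = [rng.choice(len(X), subset_size, replace=False) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda s: fitness(s, X, y), reverse=True)
            elite = pop[: pop_size // 2]
            children = []
            for parent in elite:
                child = parent.copy()
                # Mutation: swap one sample in the subset for a random one
                # (duplicates possible; acceptable for a sketch).
                child[rng.integers(subset_size)] = rng.integers(len(X))
                children.append(child)
            pop = elite + children
        return max(pop, key=lambda s: fitness(s, X, y))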
[qf-zhao-002:2001]S. Haruyama and Qiangfu Zhao. Designing Smaller Decision Trees Using Multiple Objective Optimization Based GPs. In Proc. IEEE International Conference on Systems, Man and Cybernetics, Tunisia, Oct. 2002. IEEE.
The decision tree (DT) is a good model for machine learning. Many methods have been proposed in the literature for designing DTs from training data. Most existing methods, however, are single-path search algorithms which provide only one of the possible solutions. In our research, we have tried to design DTs using genetic programming (GP). Theoretically speaking, GP can generate many different DTs, and thus might be able to design smaller DTs with the same performance. In practice, however, the DTs obtained by GP are usually very large and complex. To solve this problem, we have proposed several methods for evolving smaller DTs. In this paper, we examine several multiple objective optimization (MOO) based GPs, and verify their effectiveness through experiments.
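The comparison underlying such MOO-based GPs can be sketched in a few lines. This is my own simplification, treating each candidate tree as an (error, size) pair with both objectives minimized:

    def dominates(a, b):
        # a, b are (error, size) tuples; a dominates b if it is no worse in
        # both objectives and strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates):
        # Return the non-dominated (error, size) pairs from a population.
        return [c for c in candidates
                if not any(dominates(d, c) for d in candidates if d is not c)]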
[qf-zhao-003:2001]S. Mizuno and Qiangfu Zhao. Evolutionary Design of Neural Network Trees with Nodes of Limited Number of Inputs. In Proc. IEEE International Conference on Systems, Man and Cybernetics, Tunisia, Oct. 2002. IEEE.
A neural network tree (NNTree) is a decision tree (DT) in which each non-terminal node is an expert neural network (ENN). Compared with conventional DTs, NNTrees can achieve good performance with fewer nodes, and the performance can be improved further by incremental learning with new data. Currently, we find that it is also possible to extract comprehensible rules more easily from NNTrees than from conventional neural networks if the number of inputs of each ENN is limited. Usually, the time complexity for interpreting a neural network increases exponentially with the number of inputs. If we adopt NNTrees with nodes of a limited number of inputs, the time complexity for extracting rules becomes polynomial. In this paper, we introduce three methods for feature selection when the number of inputs is limited. The effectiveness of these methods is verified through experiments with four databases taken from the machine learning repository of the University of California at Irvine.
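The structure described above can be sketched as follows. The perceptron-style split is an illustrative stand-in for the paper's expert neural networks; the key point is that each non-terminal node sees only a limited subset of the inputs:

    import numpy as np

    class NNTreeNode:
        def __init__(self, feature_idx=None, weights=None, bias=0.0,
                     left=None, right=None, label=None):
            self.feature_idx = feature_idx  # limited input subset for this node's ENN
            self.weights, self.bias = weights, bias
            self.left, self.right = left, right
            self.label = label              # set only at leaf nodes

        def classify(self, x):
            if self.label is not None:      # leaf: return the class directly
                return self.label
            # Non-terminal: the small network routes the example left or right.
            score = float(np.dot(self.weights, x[self.feature_idx]) + self.bias)
            return (self.left if score < 0 else self.right).classify(x)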
[qf-zhao-004:2001]N. Okamoto and Qiangfu Zhao. Synchronous Parallel GA with Nearly Zero Sequential Computations. In S. Peng et al., editors, Proc. International Symposium on Cyber Worlds, pages 296-300, Tokyo, Nov. 2002. Hosei University, IEEE.
Generally speaking, genetic algorithms (GAs) can provide globally optimal solutions with higher probability. However, compared with conventional approaches, GAs are usually more time consuming. To solve this problem, different parallel GAs (PGAs) have been proposed in the literature. A direct method is to employ many slave processors for individual evaluation, and a master processor to synchronize the whole evolution process. This method is good in the sense that it does not change any properties (say, convergence) of GAs. However, according to Amdahl's law, this method cannot speed up the evolution process very much, because the percentage of sequential computation is relatively large. In this paper, we make two proposals: one is to keep all genetic materials inside the slave processors, and the other is to ask the slaves to perform crossover and mutation as well. The former reduces data transmission time, and the latter reduces sequential computing time directly. With these proposals, we have succeeded in achieving an almost m-fold speedup when we use m slaves. This is verified through experiments.
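The division of labour proposed here can be sketched schematically. The toy fitness and operators below are my own; only the allocation of work follows the abstract: each slave keeps its subpopulation and performs selection, crossover, and mutation locally, so the master merely synchronizes generations and gathers the best fitness.

    from multiprocessing import Pool
    import numpy as np

    def slave_generation(args):
        subpop, seed = args
        rng = np.random.default_rng(seed)
        fitness = subpop.sum(axis=1)                   # toy fitness: count of ones
        parents = subpop[np.argsort(fitness)[::-1][: len(subpop) // 2]]
        point = rng.integers(1, subpop.shape[1])       # local one-point crossover
        children = np.concatenate([parents[:, :point], parents[::-1, point:]], axis=1)
        flips = rng.random(children.shape) < 0.01      # local bit-flip mutation
        children = np.where(flips, 1 - children, children)
        return np.concatenate([parents, children]), fitness.max()

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        subpops = [rng.integers(0, 2, (20, 32)) for _ in range(4)]  # 4 "slaves"
        with Pool(4) as pool:
            for gen in range(50):                      # master: synchronize only
                results = pool.map(slave_generation,
                                   [(s, 1000 * gen + i) for i, s in enumerate(subpops)])
                subpops = [r[0] for r in results]
                best = max(r[1] for r in results)      # tiny sequential fraction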
[yliu-001:2001]Y. Liu and X. Yao. Scaling up fast evolutionary programming with cooperative coevolution. In Proceedings of the 2001 Conference on Evolutionary Computation, pages 1101-1108. IEEE Press, May 2001.
Evolutionary programming (EP) has been applied with success to many numerical and combinatorial optimization problems in recent years. However, most analytical and experimental results on EP have been obtained using low-dimensional problems. It is interesting to know whether the empirical results obtained from the low-dimensional problems still hold for high-dimensional cases. It was discovered that neither classical EP (CEP) nor fast EP (FEP) performed satisfactorily for some large-scale problems. This paper shows empirically that FEP with cooperative coevolution (FEPCC) can speed up convergence rates on the large-scale problems whose dimension ranges from 100 to 1000.
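The cooperative-coevolution scheme can be illustrated with a simplified sketch (my own reduction of the idea, using a toy sphere function; FEPCC's actual population mechanics are richer): the n variables are split into groups, and each group is improved in the context of the best-known values of the others, using Cauchy-mutated trial components in the spirit of FEP.

    import numpy as np

    def sphere(x):
        return float(np.sum(x ** 2))                  # toy high-dimensional objective

    def coevolve(n=100, groups=10, iters=500, seed=0):
        rng = np.random.default_rng(seed)
        best = rng.normal(size=n)                     # shared context vector
        for _ in range(iters):
            for idx in np.array_split(np.arange(n), groups):
                trial = best.copy()                   # evaluate component in context
                trial[idx] += 0.1 * rng.standard_cauchy(len(idx))
                if sphere(trial) < sphere(best):
                    best = trial                      # accept improving component
        return best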
[yliu-002:2001]Y. Liu, X. Yao, Q. Zhao, and T. Higuchi. Evolving a cooperative population of neural networks by minimizing mutual information. In Proceedings of the 2001 Conference on Evolutionary Computation, pages 384-389. IEEE Press, May 2001.
Evolutionary ensembles with negative correlation learning (EENCL) is an evolutionary learning system for learning and designing neural network ensembles. The fitness sharing used in EENCL was based on the idea of covering the same training patterns by shared individuals. This paper explores the connection between fitness sharing and information theory, and introduces mutual information into EENCL. Through minimization of mutual information, a diverse and cooperative population of neural networks can be evolved by EENCL.
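One standard way to make the mutual information between two member networks computable is the Gaussian approximation, under which it reduces to a function of the correlation coefficient; whether this matches the paper's exact estimator is my assumption.

    import numpy as np

    def pairwise_mutual_information(yi, yj):
        # Under a joint-Gaussian assumption, I(yi; yj) = -0.5 * ln(1 - rho^2),
        # where rho is the correlation coefficient of the two output series.
        rho = np.clip(np.corrcoef(yi, yj)[0, 1], -0.999999, 0.999999)
        return -0.5 * np.log(1.0 - rho ** 2)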
[yliu-003:2001]Y. Liu and X. Yao. Evolving neural networks for Hang Seng stock index forecast. In Proceedings of the 2001 Conference on Evolutionary Computation, pages 256-260. IEEE Press, May 2001.
This paper describes an evolutionary neural network approach to Hang Seng stock index forecast. In this approach, a feedforward neural network is evolved using an evolutionary programming algorithm. Both the weights and architectures (i.e., the connectivity of the network) are evolved in the same evolutionary process. The network may grow as well as shrink. The experimental results show that the evolutionary neural network approach can produce very compact neural networks with good prediction performance.
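One illustrative encoding that lets a single evolutionary process change both weights and connectivity (not necessarily the encoding used in the paper) pairs a weight matrix with a 0/1 connection mask, so mutation can perturb weights or add and delete connections, letting the network grow as well as shrink:

    import numpy as np

    def mutate(weights, mask, rng, w_sigma=0.1, p_toggle=0.02):
        # Perturb the weights of existing connections...
        weights = weights + rng.normal(0.0, w_sigma, weights.shape) * mask
        # ...and occasionally add or delete a connection.
        toggles = rng.random(mask.shape) < p_toggle
        return weights, np.where(toggles, 1 - mask, mask)

    def forward(x, weights, mask):
        return np.tanh(x @ (weights * mask))          # masked feedforward layer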
[yliu-004:2001]Y. Liu, X. Yao, and T. Higuchi. Ensemble Learning by Minimizing Mutual Information. In Proceedings of the Second International Conference on Software Engineering, Artificial Intelligence, Networking & Parallel/Distributed Computing, pages 457-462. International Association for Computer and Information Science, August 2001.
This paper presents an experimental comparison of different kinds of neural network ensemble learning methods on a pattern classification problem.
[yliu-005:2001]X. Yao and Y. Liu. Evolving Neural Networks for Chlorophyll-a Prediction. In Proc. of the Fourth International Conference on Computational Intelligence and Multimedia Applications, pages 185-189. IEEE Computer Society Press, Nov. 2001.
This paper studies the application of evolutionary artificial neural networks to chlorophyll-a prediction in Lake Kasumigaura. Unlike previous applications of artificial neural networks in this field, the architecture of the artificial neural network is evolved automatically rather than designed manually. The evolutionary system is able to find a near optimal architecture of the artificial neural network for the prediction task. Our experimental results have shown that evolved artificial neural networks are very compact and generalise well.
[yliu-006:2001]Y. Liu and X. Yao. How to Control Search Step Size in Fast Evolutionary Programming. In Proceedings of the 2002 Congress on Evolutionary Computation, pages 652-656. IEEE Press, May 2002.
This paper investigates three approaches to controlling the search step size in fast evolutionary programming (FEP): the first is based on using different parameters in the Cauchy mutation, the second on mixing different mutation operators, and the third on applying cooperative coevolution in FEP. Experimental studies on some function optimization problems have shown that the three approaches can help FEP find better solutions while maintaining its fast convergence rate.
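The first of the three approaches can be made concrete with a short sketch (parameter names are illustrative): in FEP the offspring is produced by Cauchy mutation, so scaling the Cauchy variate directly controls how often long search steps occur.

    import numpy as np

    def fep_mutation(x, eta, t=1.0, rng=None):
        # x'_i = x_i + eta_i * t * C_i, with C_i drawn from a standard Cauchy;
        # larger t gives heavier tails, hence more frequent long jumps.
        rng = rng or np.random.default_rng()
        return x + eta * t * rng.standard_cauchy(x.shape)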
[yliu-007:2001]Y. Liu, X. Yao, Q. Zhao, and T. Higuchi. An Experimental Comparison of Ensemble Learning Methods on Decision Boundaries. In Proceedings of the 2002 International Joint Conference on Neural Networks, pages 221-226. IEEE Press, May 2002.
This paper presents an experimental comparison of different kinds of neural network ensemble learning methods on a pattern classification problem. To summarize, there are three ways of designing neural network ensembles in these methods: independent training, sequential training, and simultaneous training. The purpose of such a comparison is not only to illustrate the learning behavior of different neural network ensemble learning methods, but also to cast light on how to design more effective neural network ensembles.
[yliu-008:2001]Y. Liu and X. Yao. Maintaining Population Diversity By Minimizing Mutual Information. In Proceedings of the 2002 Genetic and Evolutionary Computation Conference, pages 652-656. Morgan Kaufmann, July 2002.
Based on negative correlation learning and evolutionary learning, evolutionary ensembles with negative correlation learning (EENCL) was proposed for the learning and design of neural network ensembles. The idea of EENCL is to regard the population of neural networks as an ensemble, and the evolutionary process as the design of neural network ensembles. EENCL used a fitness sharing based on the covering set. Such fitness sharing did not accurately measure the similarity in the population. In this paper, a fitness sharing scheme based on mutual information is introduced into EENCL to evolve a diverse and cooperative population. The effectiveness of such an evolutionary learning approach was tested on two real-world problems.
[yliu-009:2001]X. Yao and Y. Liu. Getting Most Out of Evolutionary Approaches. In Proceedings of the 2002 NASA/DoD Conference on Evolvable Hardware, pages 8-14. IEEE Computer Society, 2002.
[yliu-010:2001]Y. Liu and X. Yao. Learning and Evolution by Minimization of Mutual Information. In Lecture Notes in Computer Science, Vol.2439, pages 495-504. Springer-Verlag, September 2002.
This paper presents a new ensemble learning method based on minimization of mutual information (ELMMI) for designing neural network ensembles. The idea of minimizing mutual information is to encourage different individual networks in an ensemble to learn different parts or aspects of the training data so that the ensemble can learn the whole training data better. Through minimization of mutual information, ELMMI tends to create uncorrelated networks with a novel correlation penalty term in the error function to encourage such specialization. In ELMMI, individual networks are trained simultaneously rather than sequentially. This provides an opportunity for different networks to cooperate with each other and to specialize. This paper analyses ELMMI in terms of mutual information on a regression task.
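A correlation-style penalty in the spirit described here can be sketched as follows. The exact form of ELMMI's penalty is not reproduced; this follows the negative-correlation-learning pattern of training each member on its own error plus a term discouraging co-movement with the rest of the ensemble.

    import numpy as np

    def penalized_error(outputs, i, target, lam=0.5):
        # outputs: all members' outputs on one example (numpy array); i: index
        # of the member being trained. The penalty p_i grows when member i's
        # deviation from the ensemble mean points the same way as the others'.
        f_ens = outputs.mean()
        others = outputs.sum() - outputs[i] - (len(outputs) - 1) * f_ens
        p_i = (outputs[i] - f_ens) * others
        return 0.5 * (outputs[i] - target) ** 2 + lam * p_i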
Book Chapters
[j-huang-008:2001]J. Huang. Enabling Society with Information Technology, chapter Spatial Sound Processing for a Hearing Robot. Springer-Verlag, 2001.
[yliu-012:2001]X. Yao and Y. Liu. From Evolving a Single Neural Network to Evolving Neural Network Ensembles. In Advances in the Evolutionary Synthesis of Intelligent Agents, pages 383-427. MIT Press, 2001.
Books
[yliu-011:2001]Y. Liu, K. Tanaka, M. Iwata, T. Higuchi, and M. Yasunaga, editors. Evolvable Systems: From Biology to Hardware (Proc. ICES 2001). Number 2210 in Lecture Notes in Computer Science. Springer-Verlag, Berlin, 2001.
Grants
[qf-zhao-005:2001]Qiangfu Zhao and Yong Liu. Learning, understanding, integration and re-use of robot strategies. Ministry of Education Grant-in-Aid for Scientific Research, Contract No. 14580426, 2002-2003.
Academic Activities
[yliu-013:2001]Yong Liu, October 2001. Co-Chair, ICES 2001 Conference Committee, the 4th International Conference on Evolvable Systems: From Biology to Hardware.
Ph.D. and Other Theses
[j-huang-009:2001]T. Yanagi. Multimedia Streaming Server for a Telerobot, Univ. Aizu, 2002.
Master's thesis advisor: Jie Huang
[j-huang-010:2001]K. Kume. Spatial Sound Localization Using a Four Microphone Spherical Head, Univ. Aizu, 2002.
Master's thesis advisor: Jie Huang
[j-huang-011:2001]Y. Ootake. Robot Position Calibration by Rectangle Sign Boards, Univ. Aizu, 2002.
Graduation thesis advisor: Jie Huang
[j-huang-012:2001]H. Satou. Interaction Between Different Primary Cues for Sound Integration and Segregation: Psychological Investigation Part II, Univ. Aizu, 2002.
Graduation thesis advisor: Jie Huang
[j-huang-013:2001]M. Nakayama. First-step Implementation for a Robot Operation Server by Using Java RMI, Univ. Aizu, 2002.
Graduation thesis advisor: Jie Huang
[j-huang-014:2001]A. Saji. 3D Sound Reproduction by a Sphere Four-Channel Microphone Head, Univ. Aizu, 2002.
Graduation thesis advisor: Jie Huang
[qf-zhao-006:2001]Gaku KEYAKIDANI. Graduation Thesis: Effective Pruning to Generate Balanced Decision Tree, University of Aizu, 2001.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-007:2001]Shingo MITA. Graduation Thesis: Performance Evaluation of Multilayer Neural Networks, University of Aizu, 2001.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-008:2001]Shigeru HARUYAMA. Graduation Thesis: Create Small Decision Trees with Multiple Objective Programming Based Genetic Algorithm, University of Aizu, 2001.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-009:2001]Takaharu TAKEDA. Graduation Thesis: Performance Evaluation Experiment of NNTree, University of Aizu, 2001.
Thesis Advisor: Qiangfu Zhao
[qf-zhao-010:2001]Nobuhiro OKAMOTO. Graduation Thesis: Synchronous Parallel GA with Nearly Zero Sequential Computations, University of Aizu, 2001.
Thesis Advisor: Qiangfu Zhao
[yliu-014:2001]Keiji Aita. Graduation Thesis: A Hybrid Learning Algorithm for Feedforward Neural Networks, University of Aizu, 2001.
Thesis Advisor: Yong Liu.
[yliu-015:2001]Hisanori Sakaihara. Graduation Thesis: Improve the Performance of Neural Network Learning with Genetic Algorithms, University of Aizu, 2001.
Thesis Advisor: Yong Liu.
[yliu-016:2001]Mayumi Sakai. Graduation Thesis: Design of Optimal Feedforward Neural Networks by Pruning, University of Aizu, 2001.
Thesis Advisor: Yong Liu.
[yliu-017:2001]Takashi Harada. Graduation Thesis: Evolutionary Learning for Robot Navigation, University of Aizu, 2001.
Thesis Advisor: Yong Liu.
[yliu-018:2001]Takehiko Sone. Graduation Thesis: SARSA Learning for Robot Control, University of Aizu, 2001.
Thesis Advisor: Yong Liu.