/ Qiangfu Zhao / Professor
/ Yong Liu / Associate Professor
/ Jie Huang / Assistant Professor
The main research stream in our lab is computational intelligence. The final goal is to design a system that can think, and decide what to do and how to grow based on its own thinking. For this purpose, many approaches have been studied, such as neuro-computation and evolutionary computation. Of course, results from conventional symbol-based artificial intelligence are also included.
The goal of this research is to develop a neural network model that is flexible enough to adapt to a changing environment, yet simple enough to be realized and used in practical applications. Of course, a model is not usable unless we also study the learning algorithms corresponding to it. Possible models include:
As for learning, we have proposed the following algorithms:
To verify and improve the models and learning algorithms proposed by us, we are also studying automatic design of decision rules and decision trees, pattern recognition, robot control, and data mining.
To apply our methods to pattern recognition and robot control, we also need support from many other research areas, such as signal and image processing, the development of virtual environments, and the learning and evolution of agents living in them. Specifically, we are studying the localization and recognition of sounds in natural environments, and trying to apply our results to the automatic acquisition of robot control strategies.
Refereed Journal Papers
This paper presents evolutionary ensembles with negative correlation learning (EENCL) to address the issues of automatic determination of the number of individual neural networks (NNs) in an ensemble and the exploitation of the interaction between individual NN design and combination. Experiments on two real-world problems demonstrate that EENCL can produce NN ensembles with good generalization ability.
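For reference, negative correlation learning of this kind typically trains the i-th ensemble member on an error of roughly the following form (a sketch of the standard formulation; the exact weighting used in EENCL may differ):

    E_i(n) = \frac{1}{2}\bigl(F_i(n) - d(n)\bigr)^2 + \lambda\, p_i(n),
    \qquad
    p_i(n) = \bigl(F_i(n) - \bar{F}(n)\bigr)\sum_{j \ne i}\bigl(F_j(n) - \bar{F}(n)\bigr)

where F_i(n) is the output of the i-th network on example n, \bar{F}(n) is the ensemble average output, d(n) is the target, and \lambda controls the strength of the decorrelation penalty.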
In robot learning, many papers are concerned with neural networks as controllers. In this paper, evolvable hardware (EHW) is designed as a robot controller. EHW is based on the idea of combining reconfigurable hardware devices, such as programmable logic arrays (PLAs) and field-programmable gate arrays (FPGAs), with evolutionary algorithms so that reconfiguration can be executed autonomously.
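As an illustration only (not the encoding, environment, or fitness function used in the paper), a gate-level EHW controller of this kind can be viewed as a PLA configuration bitstring evolved by a genetic algorithm. The sketch below assumes hypothetical sizes and a hypothetical set of desired sensor-to-action rules standing in for feedback from the robot:

    import random

    # Illustrative sizes only: 4 sensor bits, 8 AND-plane product terms, 2 motor bits.
    N_IN, N_TERMS, N_OUT = 4, 8, 2
    TERM_LEN = 2 * N_IN + N_OUT        # per term: true/complement input links + output links
    CHROM_LEN = N_TERMS * TERM_LEN

    def pla_eval(chrom, x):
        """Outputs of the PLA encoded by the bitstring chrom for input bits x."""
        y = [0] * N_OUT
        for t in range(N_TERMS):
            g = chrom[t * TERM_LEN:(t + 1) * TERM_LEN]
            active = all(not ((g[2 * i] and not x[i]) or (g[2 * i + 1] and x[i]))
                         for i in range(N_IN))
            if active:
                y = [y[o] | g[2 * N_IN + o] for o in range(N_OUT)]
        return y

    # Hypothetical desired sensor-to-action pairs standing in for feedback from the robot.
    RULES = [([0, 0, 0, 0], [1, 0]), ([1, 0, 0, 0], [0, 1]),
             ([0, 0, 0, 1], [1, 1]), ([1, 1, 0, 0], [0, 0])]

    def fitness(chrom):
        return sum(pla_eval(chrom, x) == y for x, y in RULES)

    def evolve(pop_size=30, gens=100):
        pop = [[random.randint(0, 1) for _ in range(CHROM_LEN)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, CHROM_LEN)
                child = a[:cut] + b[cut:]        # one-point crossover
                if random.random() < 0.2:        # occasional bit-flip mutation
                    pos = random.randrange(CHROM_LEN)
                    child[pos] ^= 1
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best), "of", len(RULES), "rules matched")

In actual EHW, the evolved bitstring would be downloaded to the reconfigurable device itself rather than simulated in software.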
When we talk about co-evolution, we often mean competitive co-evolution (CompCE). Examples include co-evolution of training data and neural networks, co-evolution of game players, and so on. Recently, several researchers have studied another kind of co-evolution: cooperative co-evolution (CoopCE). While CompCE tries to obtain more competitive individuals through evolution, the goal of CoopCE is to find individuals from which better systems can be constructed. The basic idea of CoopCE is divide and conquer: divide a large system into many modules, evolve the modules separately, and then combine them again to form the whole system. Depending on how the system is divided and recombined, different cooperative co-evolutionary algorithms (CoopCEAs) have been proposed in the literature. Results obtained so far strongly support the usefulness of CoopCEAs. To study CoopCEAs systematically, we proposed a society model, which is a common framework for most existing CoopCEAs. From this model, we can see that there are still many open problems related to CoopCEAs. To make CoopCEAs generally useful, it is necessary to study and solve these problems. In this paper, we focus the discussion on the evaluation of the modules, which is one of the key points in using CoopCEAs. Concretely, we apply the model to the evolutionary learning of RBF neural networks, and show the effectiveness of different evaluation methods through experiments.
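One widely used evaluation scheme, in the spirit of the methods compared in the paper though not necessarily identical to any of them, scores a candidate module by assembling it with representatives (e.g., the current best members) of the other subpopulations and measuring the performance of the whole system. The toy sketch below uses real-valued "modules" and a made-up system error purely for illustration; in the RBF-network setting, each module would be one hidden unit:

    import random

    # Toy stand-in for a modular system: each "module" is just a real parameter, and
    # the assembled system's error is how far the parameter sum is from a target.
    N_MODULES, POP_SIZE, GENS, TARGET = 3, 20, 50, 10.0

    def system_error(modules):
        return (sum(modules) - TARGET) ** 2

    def evaluate(candidate, slot, representatives):
        """Fitness of one candidate module: plug it into the system together with
        the current representatives of the other subpopulations."""
        team = list(representatives)
        team[slot] = candidate
        return -system_error(team)      # higher fitness = lower system error

    pops = [[random.uniform(-5, 5) for _ in range(POP_SIZE)] for _ in range(N_MODULES)]
    reps = [pop[0] for pop in pops]     # one representative per subpopulation

    for _ in range(GENS):
        for slot, pop in enumerate(pops):
            scored = sorted(pop, key=lambda c: evaluate(c, slot, reps), reverse=True)
            reps[slot] = scored[0]                          # update this slot's representative
            survivors = scored[:POP_SIZE // 2]              # truncation selection
            pops[slot] = survivors + [s + random.gauss(0, 0.5) for s in survivors]

    print(reps, system_error(reps))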
For pattern recognition, decision trees (DTs) are more efficient than neural networks (NNs) for two reasons. First, the computations in making decisions are simpler. Second, important features can be selected automatically during the design process. On the other hand, NNs are adaptable, and thus have the ability to learn in changing environments. Noting that there is a simple mapping from a DT to an NN, we can design a DT first and then map it to an NN. By doing so, we can integrate the symbolic (DT) and sub-symbolic (NN) approaches, and gain the advantages of both. For this purpose, we should design DTs that are as small as possible. In this paper, we continue our study on the evolutionary design of decision trees based on genetic programming, and propose two new methods to reduce the tree sizes. The effectiveness of the new methods is tested through experiments with a character recognition problem.
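The two size-reduction methods themselves are not reproduced here; a common baseline for keeping GP-designed trees small is simply to add a parsimony term to the fitness, for example

    fitness(T) = accuracy(T) - \alpha \cdot size(T)

where size(T) counts the nodes of the tree T and \alpha trades recognition accuracy against compactness.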
C4.5 is one of the standard tools for designing decision trees (DTs) from training examples. In most cases, C4.5 can generate near-optimal DTs when the training data are given all together. However, if the training data arrive incrementally, C4.5 cannot be used. In this case, genetic programming (GP) might be a better choice. GP can be considered a DT breeder, in which good DTs are generated automatically through evolution. In GP-based DT design, the training examples can be given all together or incrementally, provided that the fitness of a tree is properly defined. This does not mean, of course, that the GP-based approach is better than C4.5, because DTs obtained by GP are usually very large and complex. In this paper, we integrate C4.5 and GP in such a way that each individual is initialized by C4.5 using part of the training examples. By doing so, we have relatively good DTs from the very beginning, and can use them while waiting for better DTs to emerge. To show the effectiveness of this kind of integration, we conducted experiments with a digit recognition problem. Experimental results show that smaller DTs with higher recognition rates can always be obtained through the integration of C4.5 and GP. However, as the evolution continues, DTs obtained by GP with random initialization tend to reach almost the same recognition ability as those obtained by C4.5+GP.
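The following sketch illustrates only the seeding idea: each initial individual is built from a tree induced on a random part of the training data. scikit-learn's DecisionTreeClassifier (a CART-style learner) and its digits data set are used merely as stand-ins for C4.5 and the digit recognition problem, and the conversion of the seeded trees into the GP's own representation is omitted:

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_digits(return_X_y=True)
    rng = np.random.default_rng(0)

    POP_SIZE, SUBSET = 20, 300
    initial_population = []
    for _ in range(POP_SIZE):
        idx = rng.choice(len(X), size=SUBSET, replace=False)
        # "Run C4.5 on part of the training examples" (stand-in learner here).
        initial_population.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

    # These seeded trees are usable immediately, while GP crossover and mutation
    # continue to search for smaller and more accurate trees.
    print(np.mean([t.score(X, y) for t in initial_population]))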
GABIL is a genetic algorithm (GA) based concept learner. It is a batch (off-line) learner because all training data are given together; however, it is not exactly a traditional batch learner, because the training data are provided to the GA incrementally. In GABIL, the same data set is used for both training and testing. When the number of training examples is large, GABIL can be more efficient than a traditional batch learner, in the sense that a good rule set can be obtained in a more stable way. GABIL can also be easily extended to on-line learning. In this paper, we point out some problems in using GABIL, and then propose solutions to them. Experimental results show that the improved GABIL is much better than the original one for multi-class and unbalanced concept learning.
In the last decade, a number of neural network models have been proposed in the literature. Some of them have been successfully incorporated into different intelligent information processing systems. Among these models, some of the most successful are the modular neural networks (MNNs). This paper introduces a general model of MNNs, and proposes a neural network tree (NNTree) model. An evolutionary algorithm is also given for designing NNTrees. The usefulness of the NNTrees and the effectiveness of the learning algorithm are verified through experiments with a digit recognition problem.
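As a minimal sketch of the idea rather than the model or learning algorithm of the paper, an NNTree can be pictured as a decision tree whose internal nodes each hold a small expert network that routes the input to one of the children, while the leaves carry class labels. The weights below are hand-picked placeholders instead of learned or evolved ones:

    import numpy as np

    class NNTreeNode:
        """Illustrative NNTree node: a one-layer expert routes the input to a child;
        leaf nodes simply return their stored class label."""
        def __init__(self, weights=None, children=None, label=None):
            self.weights = weights          # shape: (n_children, n_features + 1)
            self.children = children or []
            self.label = label              # set only for leaves

        def classify(self, x):
            if self.label is not None:
                return self.label
            scores = self.weights @ np.append(x, 1.0)   # linear expert with bias
            return self.children[int(np.argmax(scores))].classify(x)

    # Tiny hypothetical tree: the root expert splits 2-D inputs between two leaves.
    root = NNTreeNode(
        weights=np.array([[1.0, 0.0, 0.0],      # favours child 0 when x[0] > 0
                          [-1.0, 0.0, 0.0]]),   # favours child 1 when x[0] < 0
        children=[NNTreeNode(label="A"), NNTreeNode(label="B")],
    )
    print(root.classify(np.array([0.7, -0.2])))  # -> "A"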
This paper presents an integrated on-line learning system that evolves programmable logic array (PLA) controllers for navigating an autonomous robot in a two-dimensional environment. The system consists of two learning modules: a reinforcement learning module based on temporal-difference methods, and an evolutionary learning module based on genetic algorithms. The control rules extracted from the reinforcement learning module can be fed to the evolutionary learning module and quickly implemented on the PLA through on-line evolution.
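The abstract does not spell out the update rule; temporal-difference modules of this kind are commonly built on updates such as the standard Q-learning rule

    Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \bigl[ r_{t+1} + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t) \bigr]

where \alpha is the learning rate and \gamma the discount factor. The greedy actions learned in this way can then be read off as condition-action rules and handed to the evolutionary module for implementation on the PLA.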
This paper presents ensemble learning with minimization of mutual information (ELMMI), a new method for designing neural network ensembles. By minimizing the mutual information between the variables extracted by two neural networks, the networks are forced to convey different information about features of their input.
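For reference, the mutual information between the two extracted variables Y_1 and Y_2 is

    I(Y_1; Y_2) = H(Y_1) + H(Y_2) - H(Y_1, Y_2)

and, under a joint-Gaussian assumption with correlation coefficient \rho, it reduces to I(Y_1; Y_2) = -\frac{1}{2}\ln(1 - \rho^2); the exact estimator minimized in ELMMI may differ from this simplification.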
This paper analyzes negative correlation learning in terms of mutual information on a regression task under different noise conditions. Unlike independent training, which creates larger mutual information among the ensemble members, negative correlation learning produces smaller mutual information among them.
This paper studies the application of evolutionary artificial neural networks to chlorophyll-a prediction in Lake Kasumigaura. Our experimental results have shown that evolved artificial neural networks are very compact and generalize well. Accurate prediction of chlorophyll-a and other blue-green algae is very useful in protecting the fresh-water environment.
In this paper, we describe how to design a gate-level evolvable hardware chip. The chip consists of genetic algorithm hardware, reconfigurable hardware logic, and the control logic. With this chip, we have successfully executed genetic learning and hardware reconfiguration. The paper describes the architecture, functions, and a performance evaluation of the chip. We show that its learning speed is considerably faster than with software.
Evolutionary artificial neural networks (EANNs) refer to a special class of artificial neural networks (ANNs) in which evolution is another fundamental form of adaptation and learning. This chapter describes evolutionary programming-based EANNs that learn both their weights and architectures simultaneously using a hybrid algorithm. We have tested our evolutionary programming-based EANNs on a wide range of problems, including parity problems of various sizes, the two-spiral problem, four different medical diagnosis problems, the Australian credit card problem, and several time-series prediction problems. Very good results have been achieved.
Program co-chair of the 4th International Conference on Evolvable Systems: From Biology to Hardware, October 2001, Tokyo.