Professor
Associate Professor
The main research stream of our lab is computational intelligence. So far, our studies have focused on three key words: recognition, learning, and understanding. The goal of our research is to develop learning models that are flexible enough to adapt to changing environments, yet simple enough to be realized, interpreted, and re-used. The ultimate goal is to design a system that can think, and decide what to do and how to grow based on its own thinking. For this purpose, many approaches have been studied, e.g., neuro-computation, evolutionary computation, and reinforcement learning. Results from conventional symbol-based artificial intelligence are also incorporated. So far we have used or proposed the following learning models:

[qf-zhao-01:2005]
T. Takeda, Q. F. Zhao, and Y. Liu. Incremental Learning with the Neural Network Trees. Neural, Parallel and Scientific Computations, 13(3):287-296, 2005.
A neural network tree (NNTree) is a decision tree (DT) in which each non-terminal node contains an expert neural network (ENN). Generally speaking, NNTrees can outperform conventional DTs because the ENNs can extract more complex and better features. NNTrees also provide a way to resolve the plasticity-stability dilemma, because information can be integrated through localized retraining. Recently, we have studied several GA-based algorithms for inducing NNTrees. In this paper, we propose a back-propagation based algorithm that can induce smaller and better NNTrees from data provided incrementally. The algorithm is verified through experiments on several public databases.
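
As a concrete illustration of the model shared by several NNTree papers in this list, the minimal Python sketch below shows how a sample descends an NNTree: each non-terminal node holds a small expert network that routes the sample to one of the node's children. The class names, fields, and one-layer ENN are illustrative assumptions, not the authors' implementation.

    import numpy as np

    class NNTreeNode:
        """One NNTree node: a tiny expert neural network (ENN) plus children."""
        def __init__(self, weights=None, bias=None, children=None, label=None):
            self.weights = weights        # ENN weights, shape (n_children, n_features)
            self.bias = bias              # ENN biases, shape (n_children,)
            self.children = children or []
            self.label = label            # class label held by a terminal node

    def classify(node, x):
        """Descend the tree; the ENN at each non-terminal node picks a child."""
        while node.children:
            scores = np.tanh(node.weights @ x + node.bias)
            node = node.children[int(np.argmax(scores))]
        return node.label

    # Usage: a one-level NNTree whose root ENN separates two classes.
    root = NNTreeNode(weights=np.array([[1.0, -1.0], [-1.0, 1.0]]),
                      bias=np.zeros(2),
                      children=[NNTreeNode(label=0), NNTreeNode(label=1)])
    print(classify(root, np.array([0.9, 0.1])))   # -> 0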

[qf-zhao-02:2005]
Hazem M. El-Bakry and Q. F. Zhao. Fast Time Delay Neural Networks. International Journal of Neural Systems, 15(6):445-455, 2005.
This paper presents a new approach to speeding up the operation of time delay neural networks. The entire data set is collected into one long vector and tested as a single input pattern. The proposed fast time delay neural networks (FTDNNs) use cross correlation in the frequency domain between the test data and the input weights of the neural networks. It is proved mathematically that the number of computation steps required by the presented time delay neural networks is much smaller than that needed by conventional time delay neural networks (CTDNNs). Simulation results using MATLAB confirm the theoretical results.
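
The speed-up rests on a standard identity: cross-correlation in the time domain becomes pointwise multiplication in the frequency domain, reducing the cost from O(n·m) sliding-window products to O(n log n). The sketch below is plain NumPy, not the paper's code, and checks the circular version of that identity for one neuron's weight vector.

    import numpy as np

    def fast_cross_correlation(x, w):
        """Circular cross-correlation of x with w via the FFT, O(n log n)."""
        w_padded = np.zeros(len(x))
        w_padded[:len(w)] = w
        return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(w_padded))))

    x = np.random.randn(1024)    # the whole input collected as one long vector
    w = np.random.randn(16)      # input weights of one hidden neuron

    fast = fast_cross_correlation(x, w)
    # Direct sliding-window computation at a few lags, for comparison.
    direct = [np.dot(np.roll(x, -k)[:len(w)], w) for k in range(4)]
    print(np.allclose(fast[:4], direct))   # -> True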

[qf-zhao-03:2005]
Hazem M. El-Bakry and Q. F. Zhao. Fast pattern detection using normalized neural networks and cross-correlation in the frequency domain. EURASIP Journal on Applied Signal Processing, 2005(13):2054-2060, 2005.
Neural networks have shown good results for detecting a given pattern in an image. In our previous paper [1], a fast algorithm for object/face detection was presented. That algorithm was designed based on cross correlation in the frequency domain between the input image and the weights of the neural networks. Our previous work also solved the problem of local subimage normalization in the frequency domain. In this paper, the effect of image normalization on the speed-up ratio of pattern detection is presented. Simulation results show that local subimage normalization through weight normalization is faster than subimage normalization in the spatial domain. Moreover, the overall speed-up ratio of the detection process is increased because the normalization of the weights is done off-line.

[qf-zhao-04:2005]
Hazem M. El-Bakry and Q. F. Zhao. Speeding-up normalized neural networks for face/object detection. Machine Graphics & Vision, 14(1):29-59, 2005.
Finding an object or a face in the input image is a search problem in the spatial domain. Neural networks have shown good results for detecting a certain face/object in a given image. In this paper, faster neural networks for face/object detection are presented. Such networks are designed based on cross correlation in the frequency domain between the input image and the input weights of the neural networks. This approach is developed to reduce the computation steps required by these faster neural networks for the search process. The principle of the divide-and-conquer strategy is applied through image decomposition: each image is divided into small sub-images, and each one is then tested separately using a single faster neural network. Furthermore, the fastest face/object detection is achieved by using parallel processing techniques to test the resulting sub-images at the same time, using the same number of faster neural networks. In contrast to using only faster neural networks, the speed-up ratio increases with the size of the input image when faster neural networks are combined with image decomposition. Moreover, the problem of local subimage normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio for face/object detection is discussed. Simulation results show that local subimage normalization through weight normalization is faster than subimage normalization in the spatial domain. The overall speed-up ratio of the detection process is increased because the normalization of the weights is done off-line.

[yliu-01:2005]
Y. Liu and X. Yao. Nature Inspired Neural Network Ensemble Learning. Journal of Intelligent Systems, 2005.
Learning and evolution are two fundamental forms of adaptation in nature. This paper presents a nature-inspired neural network ensemble learning method, EENCL, in which evolution is another fundamental form of adaptation in addition to learning. Learning in EENCL is conducted at the individual level, based on negative correlation learning, to encourage each individual neural network to learn differently. Evolution in EENCL is carried out by evolutionary programming at the population level in order to evolve a population of diverse neural networks. One distinct feature of EENCL is its adaptability to a dynamic environment; that is, EENCL can make neural network ensembles adapt to an environment as well as to changes in the environment. The evolution and learning in EENCL make the adaptation of neural network ensembles to a dynamic environment much more effective and efficient.
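
A minimal sketch of the negative correlation penalty, assuming the usual NCL formulation: each network i minimizes its own error plus lam * p_i, where p_i = (F_i - F_bar) * sum over j != i of (F_j - F_bar) pushes the network's output away from the ensemble mean. The simplified gradient below follows the common NCL treatment; the parameter names are illustrative.

    import numpy as np

    def ncl_output_gradients(outputs, target, lam=0.5):
        """Per-network output gradients on one training example under NCL.

        outputs: shape (M,), the M individual network outputs F_i
        target:  the desired output d
        """
        f_bar = outputs.mean()                    # ensemble output
        # Since sum_j F_j is fixed, sum_{j != i}(F_j - f_bar) = -(F_i - f_bar).
        p_grad = -(outputs - f_bar)
        return (outputs - target) + lam * p_grad  # backpropagated through each net

    print(ncl_output_gradients(np.array([0.2, 0.5, 0.9]), target=0.4))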

[yliu-02:2005]
X. Yao, Y. Liu, J. Li, J. He, and C. Frayn. Current developments and future directions of bio-inspired computation and implications for ecoinformatics. Ecological Informatics, 1(1):9-22, 2006.
Evolutionary and neural computation have been widely used in solving various problems in biological ecosystems. This paper reviews some of our recent work in evolutionary computation and neural network ensembles that could be explored further in the context of ecoinformatics. Although these bio-inspired techniques were not developed specifically for ecoinformatics, their successes in solving complex problems in other fields demonstrate how they could be adapted for tackling difficult problems in ecoinformatics. Firstly, we review our work in modelling and model calibration, which is an important topic in ecoinformatics. Secondly, one example is given to illustrate how co-evolutionary algorithms could be used in problem solving. Thirdly, we describe our work on neural network ensembles, which can be used for various classification and prediction problems in ecoinformatics. Finally, we discuss ecosystem-inspired computational models and algorithms that could be explored as directions of future research.

[qf-zhao-05:2005]
Q. F. Zhao. Learning with Data Streams - An NNTree Based Approach. In T. Enokido et al., editors, EUC Workshops 2005, Lecture Notes in Computer Science 3823, pages 519-528, Nagasaki, Japan, Dec. 2005. IFIP International Federation for Information Processing, Springer.
The plasticity-stability dilemma is a key problem for learning with data streams. On the one hand, the learner must be plastic enough to adapt to new data; on the other hand, it must be stable enough to integrate information. In this paper, we address this problem using neural network trees (NNTrees). An NNTree is a decision tree (DT) in which each non-terminal node contains an expert neural network (ENN). NNTrees are plastic because they can adapt to new data through retraining of the ENNs and/or through generation of new nodes. They are also stable because retraining is performed partially and locally. In this paper, we propose an algorithm that can grow NNTrees effectively and efficiently. Experiments with several public databases show that the NNTrees obtained by the proposed method are comparable with the NNTrees or DTs obtained when all data are provided at once.

[qf-zhao-06:2005]
K. Sakamoto and Q. F. Zhao. Generating Smart Robot Controllers through Co-evolution. In T. Enokido et al., editors, EUC Workshops 2005, Lecture Notes in Computer Science 3823, pages 529-537, Nagasaki, Japan, Dec. 2005. IFIP International Federation for Information Processing, Springer.
To evolve robot controllers that generalize well, we should evaluate the controllers using as many environment patterns (evaluation patterns) as possible. On the other hand, to evolve the controllers faster, we should use as few evaluation patterns as possible. It is difficult to know in advance which patterns can produce good controllers. To solve this problem, this paper studies co-evolution of the robot controllers and the evaluation patterns. To improve the effectiveness of co-evolution, we introduce fitness sharing in the population of evaluation patterns, and an inter-generation fitness for selecting good controllers. Simulation results show that the proposed method can produce much better robot controllers than a standard co-evolutionary algorithm.
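
A minimal sketch of the fitness-sharing step on the evaluation-pattern population: each pattern's raw fitness is divided by its niche count, so near-duplicate patterns are penalized and diversity is preserved. The distance measure, sharing radius, and triangular kernel are illustrative choices, not the paper's exact settings.

    import numpy as np

    def shared_fitness(raw_fitness, patterns, sigma=1.0):
        """raw_fitness: (N,) raw scores; patterns: (N, D) pattern encodings."""
        dists = np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=-1)
        sh = np.where(dists < sigma, 1.0 - dists / sigma, 0.0)  # sharing kernel
        return raw_fitness / sh.sum(axis=1)       # niche count includes self (sh = 1)

    patterns = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
    print(shared_fitness(np.ones(3), patterns))
    # Near-duplicate patterns split their fitness; the isolated one keeps it.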

[qf-zhao-07:2005]
C. F. Lv and Q. F. Zhao. A universal PCA for image compression. In T. Enokido et al., editors, EUC Workshops 2005, Lecture Notes in Computer Science 3824, pages 910-919, Nagasaki, Japan, Dec. 2005. IFIP International Federation for Information Processing, Springer.
In recent years, principal component analysis (PCA) has attracted great attention in image compression. However, since the compressed image data include both the transformation matrix and the transformed coefficients, PCA cannot match the compression ratio of the DCT (Discrete Cosine Transform). With the DCT, only the coefficients after transformation need to be preserved, because the transformation matrix is universal in the sense that it can be used to compress all images. In this paper, we consider building a universal PCA through a hybrid method called k-PCA. The basic idea is to construct k sets of eigenvectors for different image blocks with distinct characteristics using some training data. The k sets of eigenvectors are then used to compress all images. Vector quantization (VQ) is adopted here to split the training data space. Experimental results show that the proposed approach, although simple, is very efficient.
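
A minimal sketch of the k-PCA idea under stated assumptions: plain k-means stands in for the VQ step, a separate PCA basis (via SVD) is fitted per cluster, and a new block is coded with whichever basis reconstructs it best. The block size, cluster count, and number of components are illustrative, not the paper's settings.

    import numpy as np

    def fit_kpca(blocks, k=4, n_components=8, iters=20):
        """blocks: (N, D) flattened image blocks. Returns k (mean, basis) pairs."""
        rng = np.random.default_rng(0)
        centroids = blocks[rng.choice(len(blocks), k, replace=False)]
        for _ in range(iters):                    # plain k-means as the VQ step
            labels = np.argmin(((blocks[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
            centroids = np.array([blocks[labels == j].mean(0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        bases = []
        for j in range(k):                        # per-cluster eigenvectors via SVD
            members = blocks[labels == j]
            if len(members) == 0:
                continue                          # skip an empty VQ cell
            _, _, vt = np.linalg.svd(members - centroids[j], full_matrices=False)
            bases.append((centroids[j], vt[:n_components]))
        return bases

    def encode(block, bases):
        """Code the block with the basis giving the smallest reconstruction error."""
        def err(j):
            mean, vt = bases[j]
            d = block - mean
            return np.linalg.norm(d - d @ vt.T @ vt)
        best = min(range(len(bases)), key=err)
        return best, (block - bases[best][0]) @ bases[best][1].T

    blocks = np.random.rand(200, 64)              # e.g. 8x8 blocks, flattened
    idx, coeffs = encode(blocks[0], fit_kpca(blocks))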

[qf-zhao-08:2005]
H. Hayashi and Q. F. Zhao. A comparative study on GA based and BP based induction of neural network trees. In Proc. IEEE International Conference on Systems, Man and Cybernetics, pages 822-826, Hawaii, Oct. 2005. IEEE.
A neural network tree (NNTree) is a special multivariate decision tree (DT) in which each non-terminal node contains an expert neural network (ENN). Generally speaking, NNTrees can outperform standard DTs because the ENNs can extract more complex features. However, induction of multivariate DTs is very difficult: even if each non-terminal node contains a simple oblique hyperplane, the induction problem can be NP-complete. To address this, we previously introduced an evolutionary algorithm that follows the same recursive procedure as for inducing a standard DT and designs an ENN for each non-terminal node using a genetic algorithm (GA). This algorithm, however, is very time-consuming and cannot be used easily. In this paper, we propose two new methods. One is to evolve the whole tree instead of evolving the ENNs recursively. The other is to define group labels for the examples assigned to each non-terminal node using a heuristic method, and to design the ENNs with the back propagation (BP) algorithm. Experimental results with 10 public databases show that the BP-based algorithm is much more efficient than the GA-based algorithms.

[qf-zhao-09:2005]
T. Kawatsure and Q. F. Zhao. Inducing Multivariate Decision Trees with the R4-rule. In Proc. IEEE International Conference on Systems, Man and Cybernetics, pages 3593-3598, Hawaii, Oct. 2005. IEEE.
A decision tree (DT) is often considered a comprehensible learning model. If the data set is large, however, the induced DT may be too large to understand. Recently, we proposed a non-genetic evolutionary algorithm called the R4-rule for producing the smallest nearest neighbor classifiers (NNCs). In this paper, we propose two new approaches for inducing DTs with the R4-rule. The DTs considered here are multivariate, with an NNC of two or more prototypes in each non-terminal node. In the first method, the prototypes are found directly from the training set. In the second method, the prototypes are found from the data assigned to each non-terminal node. Using these methods, we can induce more compact and more comprehensible DTs. The efficiency and efficacy of the methods are verified through experiments with several public databases.
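
A minimal sketch of the tree model these methods induce: each non-terminal node holds a small nearest neighbor classifier, and a sample descends to the child associated with the closest prototype. Finding a smallest adequate prototype set is what the R4-rule addresses; here the prototypes are simply given, and all names are illustrative.

    import numpy as np

    class PrototypeNode:
        """A multivariate DT node holding one prototype per child."""
        def __init__(self, prototypes=None, children=None, label=None):
            self.prototypes = prototypes   # (P, D) array, one row per child
            self.children = children or []
            self.label = label             # class label at a terminal node

    def classify(node, x):
        while node.children:
            nearest = int(np.argmin(np.linalg.norm(node.prototypes - x, axis=1)))
            node = node.children[nearest]
        return node.label

    root = PrototypeNode(prototypes=np.array([[0.0, 0.0], [1.0, 1.0]]),
                         children=[PrototypeNode(label='A'), PrototypeNode(label='B')])
    print(classify(root, np.array([0.8, 0.9])))   # -> 'B'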

[qf-zhao-10:2005]
K. Sakamoto and Q. F. Zhao. Co-evolving robot controllers that generalize well. In Proc. IEEE International Conference on Systems, Man and Cybernetics, pages 2311-2316, Hawaii, Oct. 2005. IEEE.
To evolve robot controllers that generalize well, we should evaluate the controllers using as many environment patterns, or test cases, as possible. On the other hand, to evolve the controllers faster, we should use as few environment patterns as possible. It is difficult to know in advance which environment patterns can produce good controllers. To solve this problem, this paper studies co-evolution of the robot controllers and the environment patterns. To improve the effectiveness of co-evolution, we introduce fitness sharing in the population of environment patterns, and an inter-generation fitness for selecting the robot controllers. Simulation results show that the improved method can produce much better robot controllers than a standard co-evolutionary algorithm.

[qf-zhao-11:2005]
C. F. Lv and Q. F. Zhao. A simplified MPC for image compression. In Proc. IEEE International Conference on Computer and Information Technology (CIT05), pages 580-584, Shanghai, Sept. 2005. Fudan University, IEEE.
In recent years, principal component analysis (PCA) has attracted great attention in the image compression field. However, due to its linear nature, PCA cannot simultaneously capture the global and local characteristics of the input image. To achieve a high compression rate, only a few basis vectors should be used; the fewer the basis vectors used, the more local information is lost. To solve this problem, a number of improved PCA approaches have been proposed, the basic idea being to reduce the error by using different basis vectors for different sub-spaces of the problem space. These algorithms are non-linear, but very time-consuming and cannot be used easily. In this paper, a VQ-based mixture of principal components (MPC) is proposed. Experimental results show that the proposed approach, although simpler, is actually better than existing PCA-based approaches.

[yliu-03:2005]
Y. Liu. How to Find Different Neural Networks by Negative Correlation Learning. In Proceedings of the 2005 International Joint Conference on Neural Networks (IJCNN'05), pages 3330-3333. The International Neural Network Society and IEEE Computational Intelligence Society, IEEE Press, July 2005.
Two penalty functions are introduced into negative correlation learning for finding different neural networks in an ensemble. One is based on the average output of the ensemble; the other is based on the classification. The idea of the penalty function based on the average output is to make each individual network produce an output value different from that of the ensemble on the same input. In comparison, the penalty function based on the classification leads each individual network to output a class different from that of the ensemble on the same input. Experiments on a classification task show how negative correlation learning generates different neural networks with the two different penalty functions.

[yliu-04:2005]
Y. Liu. Generate Different Neural Networks by Negative Correlation Learning. In L. Wang, K. Chen, and Y. S. Ong, editors, Advances in Natural Computation, Lecture Notes in Computer Science, pages 149-156. Springer, August 2005.
This paper describes two methods for generating different neural networks in an ensemble. One is based on negative correlation learning; the other is based on cross-validation with negative correlation learning, i.e., bagging with negative correlation learning. In negative correlation learning, all individual networks are trained simultaneously on the same training set. In bagging with negative correlation learning, the individual networks are trained on different data sets sampled with replacement from the training set. The performance and correct response sets are compared between the two learning methods. The purpose of this paper is to find out how to design more effective neural network ensembles.

[yliu-05:2005]
Y. Liu. Create Stable Neural Networks by Cross-Validation. In Proceedings of the IEEE World Congress on Computational Intelligence (WCCI'06). IEEE Computational Intelligence Society, IEEE Press, 2006.
This paper studies how to learn a stable neural network through the use of cross-validation. Cross-validation has been widely used for estimating the performance of neural networks and for early stopping of training. Although cross-validation can give a good estimate of the generalisation errors of the trained neural networks, the question of which neural network to select for use remains. This paper proposes a new method to train a stable neural network by approximately mapping the output of the average of a set of neural networks obtained from cross-validation. Two experiments were conducted to show how different the generalisation errors of the neural networks trained by cross-validation can be, and how stable a neural network becomes by learning the average output of a set of neural networks.
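
A minimal sketch of the averaging step, with tiny least-squares models standing in for neural networks: K models are fitted on K cross-validation splits, their mean output on the training inputs becomes the target, and one final "stable" model is fitted to that average. The data and model class are illustrative assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=100)

    K = 5
    folds = np.array_split(np.arange(len(X)), K)
    models = []
    for k in range(K):                    # one model per held-out fold
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        models.append(w)

    avg_target = np.mean([X @ w for w in models], axis=0)      # ensemble average output
    w_stable, *_ = np.linalg.lstsq(X, avg_target, rcond=None)  # final model mimics it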

[yliu-06:2005]
Y. Liu. Evolving Neural Network Ensembles by Fitness Sharing. In Proceedings of the IEEE World Congress on Computational Intelligence (WCCI'06). IEEE Computational Intelligence Society, IEEE Press, 2006.
The difference between evolving neural networks and evolving neural network ensembles is that the solution of the former is a single evolved neural network, while the solution of the latter is an evolved population of neural networks. In the practice of evolving neural network ensembles, it is common that each individual, rather than the whole population, is evaluated. During the evolution, the solution of evolving neural networks gets better and better, while this might not be the case for the solution of evolving neural network ensembles. This suggests that the final evolved population might be worse, so it is not wise to choose the final population as the solution. Through experimental studies, this paper gives ideas on how to evolve better populations.
[yliu-07:2005]
X. Yao and Y. Liu. Machine learning, page 32. In Introductory Tutorials in Optimisation, Decision Support and Search Methodologies. Kluwer Academic Publishers, 2005.
[yliu-08:2005]
T. Higuchi, Y. Liu, and X. Yao. Introduction to Evolvable Hardware, page 18. In Evolvable Hardware. Springer, 2006.
[yliu-09:2005]
Y. Liu. How to Generate Different Neural Networks, page 16. In Trends in Neural Computation, Lecture Notes in Computer Science. Springer, 2006.
[yliu-10:2005]
Y. Liu, 2004-. Editor, International Journal of Hybrid Intelligent Systems.
[yliu-11:2005]
Y. Liu, 2005. Co-Chair of the 2005 International Symposium on Intelligence Computation and Applications (ISICA2005).
[yliu-12:2005]
Y. Liu, 2005-2006. One of the editors of the book Evolvable Hardware (Springer).
[yliu-13:2005]
Y. Liu, 2006-2007. Co-Chair of the joint conference of the 7th International Conference on Evolvable Systems: From Biology to Hardware (ICES2007) and the 2nd International Symposium on Intelligence Computation and Applications (ISICA2007).
[qf-zhao-12:2005]
Kenta Kawaii. Graduation Thesis: Face recognition based on PCA, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[qf-zhao-13:2005]
Kazumi Tashiro. Graduation Thesis: Image restoration based on ICA, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[qf-zhao-14:2005]
Hiroyuki Nakamura. Graduation Thesis: Induction of SVMTrees, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[qf-zhao-15:2005]
Ryo Watababe. Graduation Thesis: Study on stock prediction based on neural network trees, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[qf-zhao-16:2005]
Tatsuya Fujita. Graduation Thesis: Expression recognition based on PCA, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[qf-zhao-17:2005]
Satoshi Kondo. Master Thesis: Information hiding technique based on morphing technology, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[qf-zhao-18:2005]
Takaharu Kawatsure. Master Thesis: Inducing comprehensible multivariate decision trees with the R4-rule, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[qf-zhao-19:2005]
Horotomo Hayashi. Master Thesis: A comparative study on different approaches for induction of NNTrees, University of Aizu, 2005. Thesis Advisor: Q. F. Zhao
[yliu-14:2005]
Takeshi Sato. Graduation Thesis: An Experimental Study on Weight Decay in Neural Network Learning, University of Aizu, 2005. Thesis Advisor: Y. Liu
[yliu-15:2005]
Takafumi Satou. Graduation Thesis: Selection of Neural Networks by Early Stopping, University of Aizu, 2005. Thesis Advisor: Y. Liu
[yliu-16:2005]
Eiji Watanabe. Graduation Thesis: Cross-Validation for Better Generalization, University of Aizu, 2005. Thesis Advisor: Y. Liu