Professor
Associate Professor
The main stream of research in our lab is computational intelligence. So far we have focused our study on three keywords: recognition, learning, and understanding. The goal of our research is to develop learning models that are flexible enough to adapt to changing environments, yet simple enough to be realized, interpreted, and re-used. The ultimate goal is to design a system that can think, and decide what to do and how to grow based on its own thinking. For this purpose, many approaches have been studied, e.g., neuro-computation, evolutionary computation, reinforcement learning, and so on. Of course, results from conventional symbol-based artificial intelligence are also included. In 2009, we successfully organized the 19th Symposium on Intelligent Systems (FAN2009) and the 1st International Workshop on Aware Computing (IWAC2009). We also established a new technical committee on awareness computing under the IEEE SMC Society, with 26 colleagues from around the world. Currently, we are promoting awareness computing by organizing the International Symposium on Aware Computing (ISAC2010, Tainan). Our dream is to propose a new and better approach to the realization of artificial intelligence. So far we have used or proposed the learning models described in the publications below.
[qf-zhao-01:2009]
N. Tominaga and Q. F. Zhao. Speed-up of the R4-rule for distance-based neural network learning. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pages 3489–3494, Texas, Oct. 2009. IEEE.
The R4-rule is a heuristic algorithm for distance-based neural network (DBNN) learning. Experimental results show that the R4-rule can obtain the smallest or nearly smallest DBNNs. However, the computational cost of the R4-rule is relatively high because the learning vector quantization (LVQ) algorithm is used iteratively during learning. To reduce the cost of the R4-rule, we investigate three approaches in this paper. The first is the distance preservation (DP) approach, which tries to reduce the number of distance calculations; the other two are based on the attentional learning concept and try to reduce the amount of data used for learning. The efficiency of these methods is verified through experiments on several public databases.
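The cost bottleneck above is the nearest-prototype search inside each LVQ pass. As a minimal sketch (assuming a plain LVQ1 update over NumPy arrays; the function name and learning rate are illustrative, not the authors' implementation):

    import numpy as np

    def lvq1_epoch(prototypes, proto_labels, X, y, lr=0.05):
        # One LVQ1 pass: attract the winning prototype toward
        # same-class data and repel it from different-class data.
        for x, label in zip(X, y):
            # Nearest-prototype search: the distance computation whose
            # repetition the DP approach tries to reduce.
            d = np.linalg.norm(prototypes - x, axis=1)
            w = int(np.argmin(d))
            if proto_labels[w] == label:
                prototypes[w] += lr * (x - prototypes[w])
            else:
                prototypes[w] -= lr * (x - prototypes[w])
        return prototypes

Because this distance computation runs once per datum per pass, avoiding redundant recomputation (DP) or shrinking the training set (attentional learning) both attack the dominant cost term directly.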
[qf-zhao-02:2009]
H. Hayashi and Q. F. Zhao. Induction of compact neural network trees through centroid based dimensionality reduction. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pages 974–979, Texas, Oct. 2009. IEEE.
The neural network tree (NNTree) is a hybrid model for machine learning. Compared with single fully connected neural networks, NNTrees are more suitable for structural learning and faster for decision making. To increase the realizability of NNTrees, we have tried to induce more compact NNTrees through dimensionality reduction. So far, we have used principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction, and confirmed that in most cases the LDA-based approach can produce very compact NNTrees without degrading performance. One drawback of the LDA-based approach is that the cost of finding the transformation matrix can be very high for large databases. To solve this problem, in this paper we investigate the efficiency and efficacy of two centroid-based approaches for NNTree induction. One maps each datum directly to the class centroids; the other finds the least-squares approximation of each datum using the centroids. Experimental results show that both approaches, although simple, are comparable to the LDA-based approach in most cases.
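The two centroid-based mappings can be sketched compactly. This is our reading of the abstract rather than the authors' code; the distance-based form of the first mapping is an assumption:

    import numpy as np

    def centroid_features(X, y):
        # Stack the class centroids into a (k, d) matrix, one row per class.
        C = np.vstack([X[y == c].mean(axis=0) for c in np.unique(y)])
        # Mapping 1: represent each datum by its distance to every centroid.
        dist_feats = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        # Mapping 2: least-squares coefficients a minimizing ||aC - x||^2,
        # computed for all data at once via the pseudo-inverse of C.
        lsq_feats = X @ np.linalg.pinv(C)
        return dist_feats, lsq_feats

Either mapping reduces the input dimensionality from d to the number of classes k, which is what keeps the induced NNTrees small.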
[qf-zhao-03:2009]
H. Hayashi and Q. F. Zhao. Model reduction of neural network trees based on dimensionality reduction. In Proceedings of the International Joint Conference on Neural Networks, pages 1171–1176, Atlanta, June 2009. IEEE/INNS.
The neural network tree (NNTree) is a hybrid model for machine learning. Compared with single fully connected neural networks, NNTrees are more suitable for structural learning and faster for decision making. Recently, we proposed an efficient algorithm for inducing NNTrees based on a heuristic grouping strategy. In this paper, we try to induce smaller NNTrees based on dimensionality reduction. The goal is to induce NNTrees that are compact enough to be implemented in a VLSI chip. Two methods are investigated for dimensionality reduction: one is principal component analysis (PCA), and the other is linear discriminant analysis (LDA). We conducted experiments on several public databases and found that the NNTree obtained after dimensionality reduction usually has fewer nodes and far fewer parameters, while its performance is comparable with the NNTree obtained without dimensionality reduction.
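For concreteness, the preprocessing step can be approximated with off-the-shelf scikit-learn transforms; this is a stand-in sketch under that assumption, not the authors' code, and the reduced data would then be passed to the NNTree induction algorithm:

    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def reduce_dimensions(X, y, n_dims):
        # Unsupervised reduction: keep the directions of largest variance.
        X_pca = PCA(n_components=n_dims).fit_transform(X)
        # Supervised reduction: keep the directions that best separate the
        # classes (LDA allows at most n_classes - 1 components).
        X_lda = LinearDiscriminantAnalysis(n_components=n_dims).fit_transform(X, y)
        return X_pca, X_lda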
[qf-zhao-04:2009]
N. Tominaga and Q. F. Zhao. Fast structural learning of distance-based neural networks. In Proceedings of the International Joint Conference on Neural Networks, pages 3124–3131, Atlanta, June 2009. IEEE/INNS.
The R4-rule is a structural learning algorithm for obtaining the smallest or nearly smallest distance-based neural networks. However, the computational cost of the R4-rule is relatively high because the learning vector quantization (LVQ) algorithm is used iteratively. To reduce the cost of the R4-rule, we investigate two approaches in this paper. The first is attentional learning (AL), which tries to reduce the amount of data used for learning. The second is distance preservation (DP), which tries to reduce the number of distance calculations during learning. The efficiency of these two approaches, as well as their combination, is verified through experiments on several public databases.
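One plausible form of the AL data-selection step, sketched under our own assumptions (the margin criterion below is illustrative; the paper's exact rule may differ):

    import numpy as np

    def attention_subset(prototypes, proto_labels, X, y, margin=0.2):
        # Keep a datum only if its nearest same-class and nearest
        # different-class prototypes are almost equally distant,
        # i.e. it lies near the current decision boundary.
        keep = []
        for i, (x, label) in enumerate(zip(X, y)):
            d = np.linalg.norm(prototypes - x, axis=1)
            d_same = d[proto_labels == label].min()
            d_diff = d[proto_labels != label].min()
            if d_diff - d_same < margin:
                keep.append(i)
        return X[keep], y[keep]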
[yliu-01:2009]
Y. Liu. New Discoveries in Fast Evolutionary Programming. In Proceedings of the 1st International Workshop on Aware Computing (IWAC 2009), Sept. 2009.
Through carefully examining the relationship between the step sizes of mutations and their performance, it has been discovered that it was not the long jumps but the short jumps with large variance among Cauchy mutations that contributed to the better performance of fast evolutionary programming (FEP) compared with classical evolutionary programming (CEP). Experimental results show that effective Cauchy mutations in FEP often had even shorter step sizes on average than effective Gaussian mutations in CEP, although the average step sizes of Cauchy mutations were much longer than those of Gaussian mutations. Effective mutations here refer to those mutations that generated better offspring than their parent solutions among all mutations at each generation. These new discoveries shed light on why the shorter effective Cauchy mutations performed better than the longer effective Gaussian mutations, and how effective Cauchy mutations came to have shorter step sizes than effective Gaussian mutations.
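The contrast between the two operators is easy to state in code. A minimal sketch (the fixed scale sigma and the function names are ours):

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_mutation(x, sigma):
        # CEP-style step: x' = x + sigma * N(0, 1).
        return x + sigma * rng.standard_normal(x.shape)

    def cauchy_mutation(x, sigma):
        # FEP-style step: x' = x + sigma * C(0, 1). The heavy tail yields
        # rare long jumps that inflate the average step size, even though
        # a typical Cauchy step is comparable to or shorter than a
        # Gaussian one.
        return x + sigma * rng.standard_cauchy(x.shape)

The Cauchy distribution has no finite variance, so its sample-average step length is dominated by rare outliers; the discovery above is that the effective mutations are the short-step majority, not those outliers.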
[yliu-02:2009]
Y. Liu. Self-Adaptation in Fast Evolutionary Programming. In Computational Intelligence and Intelligent Systems, Communications in Computer and Information Science, Vol. 51, pages 232–239. Springer, Oct. 2009.
It has been discovered that the same self-adaptation scheme used in fast evolutionary programming (FEP) and classical evolutionary programming (CEP) shows quite different behaviors when optimizing the same test functions from the same initial populations. The experimental results presented in this paper suggest that the strategy parameters in FEP generally could not rise as high as their values in CEP in the rising stage, but dropped faster than their values in CEP in the falling stage. These different behaviors are caused by the strong correlation between mutation and self-adaptation in CEP and FEP.
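The self-adaptation shared by CEP and FEP is the lognormal update of the strategy parameters; only the mutation distribution differs. A sketch using the usual rule from the EP literature (our formulation, not code from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    def self_adaptive_mutation(x, sigma, use_cauchy):
        # Lognormal self-adaptation of the strategy parameters sigma,
        # followed by mutation of the object variables x.
        n = x.size
        tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))   # per-dimension rate
        tau_p = 1.0 / np.sqrt(2.0 * n)          # global rate
        sigma_new = sigma * np.exp(tau_p * rng.standard_normal()
                                   + tau * rng.standard_normal(n))
        step = rng.standard_cauchy(n) if use_cauchy else rng.standard_normal(n)
        return x + sigma_new * step, sigma_new

Because sigma_new multiplies the sampled step, the mutation distribution feeds back into which sigma values survive selection, which is the correlation referred to above.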
[yliu-03:2009]
Y. Liu. Balanced Learning for Ensembles with Small Neural Networks. In Advances in Computation and Intelligence, Lecture Notes in Computer Science, Vol. 5821, pages 163–170. Springer, Oct. 2009.
By introducing an adaptive error function, balanced ensemble learning has been developed from negative correlation learning. In this paper, balanced ensemble learning is used to train a set of small neural networks, each with only one hidden node. The experimental results suggest that balanced ensemble learning is able to create a strong ensemble by combining a set of weak learners. Unlike bagging and boosting, where learners are trained on randomly re-sampled data from the original set of patterns, learners in balanced ensemble learning can be trained on all available data. Interestingly, the learners produced by balanced ensemble learning can be only slightly better than random guessing even though they have been trained on the whole data set. Another difference among these ensemble learning methods is that learners are trained simultaneously in balanced ensemble learning, whereas they are trained independently in bagging and sequentially in boosting.
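Balanced ensemble learning makes the error function adaptive; the fixed negative-correlation gradient it starts from can be sketched as follows (one-pattern, scalar-output form; the penalty weight lam is illustrative):

    import numpy as np

    def ncl_gradient(outputs, target, lam=0.5):
        # Gradient of E_i = (F_i - d)^2 / 2 + lam * p_i, where
        # p_i = (F_i - F_bar) * sum_{j != i}(F_j - F_bar), which gives
        # dE_i/dF_i = (F_i - d) - lam * (F_i - F_bar).
        F_bar = outputs.mean()
        return (outputs - target) - lam * (outputs - F_bar)

The second term pushes each learner's output away from the ensemble mean, which is what lets a set of very weak individual networks still combine into a strong ensemble.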
[yliu-04:2009]
Z. Cai, Z. Li, Z. Kang, and Y. Liu, editors. Advances in Computation and Intelligence. Number 5821 in Lecture Notes in Computer Science. Springer, 2009.
[yliu-05:2009]
Z. Cai, Z. Li, Z. Kang, and Y. Liu, editors. Computational Intelligence and Intelligent Systems. Number 51 in Communications in Computer and Information Science. Springer, 2009.
[yliu-07:2009]
Y. Liu, 2008–2009. Program Co-Chair of the 4th International Symposium on Intelligence Computation and Applications (ISICA 2009).
[yliu-08:2009]
Y. Liu, 2009. Program Co-Chair of the 1st International Workshop on Aware Computing (IWAC 2009).
[yliu-09:2009]
Y. Liu, 2009–2010. Program Co-Chair of the 5th International Symposium on Intelligence Computation and Applications (ISICA 2010).
[yliu-10:2009]
Y. Liu. Program Co-Chair of the 2nd International Symposium on Aware Computing (ISAC 2010).
[qf-zhao-05:2009]
Gun-Ho Na. Master's Thesis: A study on face expression recognition based on linear approaches, University of Aizu, 2009. Thesis Advisor: Qiangfu Zhao.
[qf-zhao-06:2009]
Yuki Aiba. Graduation Thesis: Pose recognition based on linear algorithms, University of Aizu, 2009. Thesis Advisor: Qiangfu Zhao.
[qf-zhao-07:2009]
Yoshitomo Ishibashi. Graduation Thesis: A study on IC card security, University of Aizu, 2009. Thesis Advisor: Qiangfu Zhao.
[qf-zhao-08:2009]
Noriaki Ishiyama. Graduation Thesis: Pose recognition in dim lighting conditions, University of Aizu, 2009. Thesis Advisor: Qiangfu Zhao.
[qf-zhao-09:2009]
Kei Sato. Master's Thesis: Posture recognition with NNC-Trees, University of Aizu, 2009. Thesis Advisor: Qiangfu Zhao.
[qf-zhao-10:2009]
Mayuko Akatsuka. Graduation Thesis: Generating natural cluster centers with image morphing, University of Aizu, 2009. Thesis Advisor: Qiangfu Zhao.
[qf-zhao-11:2009]
Chiemi Hanyu. Graduation Thesis: A study on generating virtual databases for expression recognition, University of Aizu, 2009. Thesis Advisor: Qiangfu Zhao.
[yliu-12:2009]
Taiga Watanabe. Master's Thesis: Development of a Product Recommendation System Based on Text Mining, University of Aizu, 2009. Thesis Advisor: Y. Liu.