Professor 
Associate Professor 
The main stream of research in our lab is computational intelligence. So far we have focused our study on three key words: recognition, learning, and understanding. The goal of our research is to develop learning models that are flexible enough to adapt to changing environments, and also simple enough to be realized, interpreted, and reused. The ultimate goal is to design a system that can think, and decide what to do and how to grow based on its own thinking. For this purpose, many approaches have been studied, e.g., neurocomputation, evolutionary computation, reinforcement learning, and so on. Of course, results from conventional symbol-based artificial intelligence are also included. In 2008, we proposed to hold the 1st International Workshop on Aware Computing (IWAC 2009) at the University of Aizu. IWAC 2009 is sponsored by the University of Aizu and the Japan Society for Fuzzy Theory and Intelligent Informatics, and supported by 11 other societies. The ultimate goal of aware computing is to create a computing machine/system that is able to be aware. An aware system may not be as intelligent as a human, but it will certainly be more autonomous and more human-like than conventional systems. Application examples of aware computing include context awareness, situation/background awareness, power awareness, location/position awareness, weakness/risk/danger awareness, safety/security awareness, chance/opportunity awareness, etc. So far we have used or proposed the following learning models:

[qfzhao01:2008] 
H. Hayashi and Q. F. Zhao. A fast algorithm for inducing neural
network trees. Journal of Information Processing, 49(8):2878–2889, Aug. 2008. 
A neural network tree (NNTree) is a decision tree (DT) in which each internal node contains
a small neural network (NN). Although the NNTree is a good model for structural learning
and for hardware implementation, induction of NNTrees is difficult: even if each NN contains
only one neuron, the problem of finding the optimal test function in each internal
node is NP-complete. To solve this problem, we have tried to induce NNTrees using
a genetic algorithm (GA). The GA-based approach, however, is very time-consuming and
cannot be used easily. In this paper, we propose a new algorithm for inducing NNTrees
quickly. The basic idea is to define group labels for the data assigned to an internal
node based on some heuristic rules, and then find the test function through supervised
learning. The efficiency of the proposed algorithm is demonstrated through experiments on
several public databases. 
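The basic idea above, replacing the NP-complete search for an optimal test function with heuristically defined group labels followed by supervised learning, can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions, not the paper's algorithm: the heuristic rule (splitting classes by the first coordinate of their centroids) and the single-perceptron node are stand-ins chosen for brevity.

```python
# Toy sketch: heuristic group labels + supervised learning of a node's
# test function. The heuristic and the perceptron node are illustrative.

def heuristic_group_labels(data, labels):
    """Assign each class to group 0 or 1 by its centroid's first coordinate."""
    classes = sorted(set(labels))
    centroids = {}
    for c in classes:
        pts = [x for x, y in zip(data, labels) if y == c]
        centroids[c] = sum(p[0] for p in pts) / len(pts)
    median = sorted(centroids.values())[len(centroids) // 2]
    group_of = {c: (0 if centroids[c] < median else 1) for c in classes}
    return [group_of[y] for y in labels]

def train_perceptron(data, groups, epochs=50, lr=0.1):
    """Supervised learning of the node's test function (one neuron)."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, g in zip(data, groups):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = g - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# toy data: four classes, two on each side of the split
data = [(0.0, 0.0), (0.0, 1.0), (5.0, 0.0), (5.0, 1.0)]
labels = [0, 1, 2, 3]
groups = heuristic_group_labels(data, labels)   # -> [0, 0, 1, 1]
w, b = train_perceptron(data, groups)
```

Once every internal node learns its test function this way, the tree can be grown recursively on the two resulting subsets, avoiding any GA-style search.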
[qfzhao02:2008] 
Q. F. Zhao, H. Hayashi, and H. Takahashi. Face recognition based on
minor components of the within-class scatter matrix. In Proc. of the Joint
International Conference on Soft Computing and Intelligent Systems,
and International Symposium on Advanced Intelligent Systems, pages
1078–1083, Nagoya, Sept. 2008. Nagoya University, IEEE. 
Face recognition is one of the most fundamental functions for surveillance, information
retrieval, robot vision, and so on. Among many methods proposed for face
recognition, linear approaches such as principal component analysis (PCA) and linear
discriminant analysis (LDA) have attracted great attention because of their simplicity.
Generally speaking, LDA outperforms PCA because LDA may preserve the
discriminative information better than PCA. However, when the number of training
examples is small, LDA cannot be used directly because the within-class scatter matrix
Sw might be singular. This is known as the small sample size (SSS) problem. To
solve this problem, we can use PCA first to reduce the dimensionality of the feature
space, and then adopt LDA. Another approach to solve the SSS problem is the null
space (NS) approach. In this approach, only information contained in the null space
of Sw is used for recognition. In our study, however, we found through experiments
that some important discriminative information is also contained in the range space
of Sw. Based on this observation, we propose a rough null space (RNS) approach.
Experimental results on three public face databases show that the RNS approach is
more effective than PCA, LDA and NS. 
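The null-space idea that the paper builds on can be illustrated with a tiny example. The sketch below is not the RNS method itself; it only shows why the null space of the within-class scatter matrix Sw is attractive: directions in that null space carry no within-class variation, so projected class members collapse exactly onto their class means.

```python
# Toy 2-D illustration of the null-space idea (not the RNS method itself).

def within_class_scatter_2d(data, labels):
    """2x2 within-class scatter matrix Sw for 2-D points."""
    sw = [[0.0, 0.0], [0.0, 0.0]]
    for c in set(labels):
        pts = [p for p, y in zip(data, labels) if y == c]
        mx = sum(p[0] for p in pts) / len(pts)
        my = sum(p[1] for p in pts) / len(pts)
        for px, py in pts:
            dx, dy = px - mx, py - my
            sw[0][0] += dx * dx
            sw[0][1] += dx * dy
            sw[1][0] += dy * dx
            sw[1][1] += dy * dy
    return sw

# Two classes whose within-class spread is purely horizontal:
data = [(0, 0), (2, 0), (0, 3), (2, 3)]
labels = [0, 0, 1, 1]
sw = within_class_scatter_2d(data, labels)
# Sw = [[4, 0], [0, 0]]; its null space is the vertical direction (0, 1).
null_proj = [y for _, y in data]  # projection onto the null direction
```

Projected onto the null direction, class 0 maps entirely to 0 and class 1 entirely to 3, so the classes separate perfectly; the paper's observation is that discriminative information can also survive in the range space of Sw, which motivates the rough null space.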

[qfzhao03:2008] 
J. Ji, Q. F. Zhao, R. Shindo, and Y. Kunishi. A study on criteria for
extracting key terms in document clustering. In Proc. IEEE
International Conference on Systems, Man and Cybernetics, pages 3674–
3679, Singapore, Oct. 2008. IEEE. 
Document clustering is the process of partitioning a set of unlabelled documents into
categories or clusters. To analyze the documents based on the clustering results,
it is expected that all documents in a cluster share some concept. This
shared concept is often represented as the centroid. Intuitively, the centroid may not
be able to represent a concept clearly because it is just the average of all documents
in the same cluster. To represent a cluster more clearly, we expect that each cluster
has a small set of representative key terms. Although many document clustering
methods have been proposed in the literature, few of them deal with the key terms
explicitly. In this paper, we study three criteria for extracting key terms through
clustering. The first one is the mean squared error (MSE) function. It is well known
that clusters obtained based on MSE are good in the sense that all documents in
each cluster are similar. In addition to MSE, we introduce two new criteria. The
basic idea of both criteria is to encourage each cluster to use a different set of key
terms. Experimental results with three databases show that MSE is surprisingly
good for generating representative key terms. The proposed new criteria can make
some improvement, but the improvement is not significant. 
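The centroid-based view of key terms discussed above can be sketched as follows. This is an illustration, not the paper's exact criteria: it simply takes the highest-weighted terms of a cluster's centroid (the mean of the documents' term-weight vectors) as that cluster's key terms.

```python
from collections import defaultdict

# Illustrative sketch: key terms of a cluster = top-weighted terms of its
# centroid. Term weights here are assumed to be pre-computed (e.g. tf-idf).

def centroid_key_terms(cluster_docs, k=2):
    total = defaultdict(float)
    for doc in cluster_docs:
        for term, weight in doc.items():
            total[term] += weight
    centroid = {t: w / len(cluster_docs) for t, w in total.items()}
    return sorted(centroid, key=centroid.get, reverse=True)[:k]

docs = [{"neural": 0.9, "network": 0.4, "tree": 0.1},
        {"neural": 0.7, "network": 0.6, "data": 0.2}]
terms = centroid_key_terms(docs)   # -> ["neural", "network"]
```

Criteria like the two proposed in the paper would additionally penalize clusters for sharing key terms, pushing each cluster toward a distinct representative set.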

[qfzhao04:2008] 
J. Ji, Q. F. Zhao, R. Shindo, and Y. Kunishi. A new document clustering
method based on comparative advantage. In Proc. of the Joint
International Conference on Soft Computing and Intelligent Systems, and
International Symposium on Advanced Intelligent Systems, pages 1084–1089,
Nagoya, Sept. 2008. Nagoya University, IEEE. 
Document clustering is the process of partitioning a set of unlabelled documents into
categories or clusters. To analyze the documents based on the clustering results,
it is expected that all documents in each cluster share some concept. This shared
concept is often represented as the centroid. K-means is a well-known algorithm for
unsupervised clustering, and it partitions the document set by minimizing the
mean squared error (MSE). However, intuitively speaking, the centroid
may not be able to represent a concept clearly because it is just the average of all
documents in the same cluster. To represent a cluster more clearly, we expect that
each cluster has a small set of representative key terms. Although many document
clustering methods have been proposed in the literature, few of them deal with the
key terms explicitly. In this study, we propose a new method for classifying the
documents based on the concept of comparative advantage, and a new clustering
algorithm for extracting important key terms. Experimental results show that the
proposed method can generate better results in the sense that the overlap between
the sets of representative terms of the clusters is smaller. 
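One way to read "comparative advantage" in a clustering context is sketched below. This is an illustrative interpretation borrowed from the economics notion, not necessarily the paper's formulation: a document is assigned to the cluster where its similarity relative to that cluster's average similarity is largest, rather than to the cluster of highest absolute similarity.

```python
# Hypothetical sketch of comparative-advantage assignment (an illustrative
# reading of the concept; the paper's actual algorithm may differ).

def assign_by_comparative_advantage(sims):
    """sims[d][c]: similarity of document d to cluster c.
    Each document goes to the cluster where its similarity relative to
    that cluster's average similarity is largest."""
    n_docs, n_clusters = len(sims), len(sims[0])
    avg = [sum(sims[d][c] for d in range(n_docs)) / n_docs
           for c in range(n_clusters)]
    return [max(range(n_clusters), key=lambda c: sims[d][c] / avg[c])
            for d in range(n_docs)]

sims = [[0.9, 0.6],
        [0.5, 0.45]]
# Document 1's absolute best match is cluster 0, but its *comparative*
# advantage (similarity relative to the cluster average) is in cluster 1.
assignment = assign_by_comparative_advantage(sims)   # -> [0, 1]
```

Relative assignment of this kind tends to spread documents across clusters, which is consistent with the paper's goal of reducing the overlap between clusters' representative term sets.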

[yliu01:2008] 
Y. Liu. Reduction of difference among trained neural networks by relearning.
In Proceedings of 2008 IEEE International Joint Conference on
Neural Networks, pages 1881–1885. IEEE Press, June 2008. 
Trained neural networks often end up with different decision boundaries
under variations of training data, learning algorithms, architectures, and initial
random weights. Such variations are helpful in designing neural network ensembles,
but are harmful when they lead to unstable performance, i.e., large variance among
different learning runs. This paper discusses how to reduce such variance for trained
neural networks by letting them relearn on those data points on which they disagree
with each other. Experiments have been conducted on four real-world applications
to explain how and when such relearning works. 
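The relearning idea can be sketched in miniature. In this toy, 1-D threshold classifiers stand in for trained neural networks (an assumption made purely for brevity): the models are compared on the data, the points where they disagree are collected, and each model is then retrained on exactly those disputed points.

```python
# Toy sketch of relearning on disagreement points. Threshold classifiers
# on 1-D data are stand-ins for trained neural networks.

class Threshold:
    """Stand-in for a trained network: a 1-D threshold classifier."""
    def __init__(self, t):
        self.t = t
    def predict(self, x):
        return 1 if x > self.t else 0
    def relearn(self, xs, ys, lr=0.5, epochs=20):
        # nudge the decision boundary until the disputed points are fit
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                self.t -= lr * (y - self.predict(x))

def disagreement_points(data, models):
    """Points on which the trained models disagree with each other."""
    return [x for x in data if len({m.predict(x) for m in models}) > 1]

m1, m2 = Threshold(2.0), Threshold(4.0)
data, true_label = [1, 3, 5], {1: 0, 3: 1, 5: 1}
disputed = disagreement_points(data, [m1, m2])   # -> [3]
for m in (m1, m2):
    m.relearn(disputed, [true_label[x] for x in disputed])
```

After relearning on the single disputed point, the two boundaries agree on all of the data, which is the variance reduction the paper aims at.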

[yliu02:2008] 
Y. Liu. A balanced ensemble learning with adaptive error functions. In
L. Kang, Z. Cai, X. Yan, and Y. Liu, editors, Intelligence Computation
and Applications, Lecture Notes in Computer Science, Vol. 5370, pages 1–8.
Springer, Dec. 2008. 
In the practice of designing neural network ensembles, it is common that a certain
learning error function is defined and kept the same, or fixed, for each individual neural
network throughout the learning process. Such a fixed learning error function not only
is likely to lead to overfitting, but also makes learning slow on hard-to-learn data points
in the data set. This paper presents a novel balanced ensemble learning approach
that can make learning fast and robust. The idea of balanced ensemble learning is
to define adaptive learning error functions for different individual neural networks
in an ensemble, in which different individuals can have different forms of error
function in the learning process, and these error functions can be changed as
well. By shifting away from well-learned data and focusing on not-yet-learned
data through changing the error function of each individual in the ensemble,
well-balanced learning can be achieved for the ensemble. 
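A minimal sketch of the adaptive error function idea follows. The weighting scheme is illustrative only (the paper's actual error function formats differ): each epoch, points the ensemble already fits well are given weight near zero, so the next epoch's error function is dominated by the not-yet-learned points.

```python
# Illustrative sketch of an adaptive, per-epoch error function for
# balanced ensemble learning. The 0/1 weighting rule is an assumption.

def adaptive_weights(point_errors, margin=0.1):
    """Weight ~0 for well-learned points, 1 for not-yet-learned ones."""
    return [0.0 if e < margin else 1.0 for e in point_errors]

def balanced_error(preds, targets, weights):
    """Weighted squared error used for the next learning epoch."""
    return sum(w * (p - t) ** 2
               for w, p, t in zip(weights, preds, targets))

preds, targets = [0.95, 0.4], [1.0, 1.0]
point_errors = [abs(p - t) for p, t in zip(preds, targets)]
weights = adaptive_weights(point_errors)   # -> [0.0, 1.0]
```

Because the weights are recomputed each epoch, the effective error function changes over time and differs between individuals, which is the mechanism the paper exploits.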

[yliu03:2008] 
Y. Liu. Correlation between mutations and self-adaptation in evolutionary
programming. In L. Kang, Z. Cai, X. Yan, and Y. Liu, editors, Intelligence
Computation and Applications, Lecture Notes in Computer Science,
Vol. 5370, pages 58–66. Springer, Dec. 2008. 
It has been taken for granted that the long jumps of Cauchy mutation in fast evolutionary
programming (FEP) increase the probability of finding a near-optimum when
the distance between the current search point and the optimum is large, but decrease
that probability when the distance is small. By explicitly measuring the search step
sizes, this paper gives sound evidence that it is not long jumps but large variances in
Cauchy mutation that have contributed to the better performance of FEP over
classical evolutionary programming (CEP). It has been discovered that smaller step-size
mutations among the Cauchy mutations led to the faster convergence of FEP
on some test functions, and that these helpful Cauchy mutations could actually have
shorter search step sizes than the Gaussian mutations used in CEP. The reason that
Cauchy mutations can have shorter step sizes than Gaussian mutations is that
Cauchy mutations and Gaussian mutations can radically alter self-adaptation in
FEP and CEP. This paper further discusses the correlation between mutations and
self-adaptation in CEP and FEP. 
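The kind of step-size measurement the paper relies on is easy to reproduce in miniature. The experiment below is illustrative, not the paper's setup: it samples standard Cauchy and Gaussian mutations and shows that, despite Cauchy's famous long jumps, a large share of Cauchy steps are actually shorter than the typical Gaussian step.

```python
import math
import random

# Illustrative measurement of mutation step sizes (not the paper's
# experimental setup). Standard Cauchy samples via the inverse CDF.

random.seed(0)
N = 10000
gauss_steps = [abs(random.gauss(0.0, 1.0)) for _ in range(N)]
cauchy_steps = [abs(math.tan(math.pi * (random.random() - 0.5)))
                for _ in range(N)]

def median(xs):
    return sorted(xs)[len(xs) // 2]

gauss_med = median(gauss_steps)    # about 0.67 for |N(0, 1)|
cauchy_med = median(cauchy_steps)  # about 1.0 for |Cauchy(0, 1)|
# fraction of Cauchy steps shorter than the typical Gaussian step
short_frac = sum(c < gauss_med for c in cauchy_steps) / N
```

Roughly 38% of the Cauchy mutations are shorter than the median Gaussian step, so the distribution's large variance, not long jumps alone, shapes the search behavior.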

[yliu04:2008] 
H. Wang, Y. Liu, Z. Wu, S. Zeng, and L. Kang. An improved particle
swarm optimization with adaptive jumps. In Proceedings of 2008 IEEE
Congress on Evolutionary Computation, pages 392–397. IEEE Press, June
2008. 
Particle Swarm Optimization (PSO) has shown fast search speed in many complicated
optimization and search problems. However, PSO can easily fall
into local optima. This paper presents an improved PSO with adaptive jumps. The
proposed method combines a novel jump strategy and an adaptive Cauchy mutation
operator to help escape from local optima. The new algorithm was tested on a suite
of well-known benchmark functions with many local optima. Experimental results
were compared with those of similar PSO algorithms based on the Gaussian and
Cauchy distributions, and showed better performance on those test functions. 

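A minimal PSO with a stagnation-triggered Cauchy jump can be sketched as follows. The parameter values and the jump rule (perturb the global best with a Cauchy step when it has stalled) are illustrative assumptions; the paper's adaptive strategy is more elaborate.

```python
import math
import random

# Minimal PSO sketch with an illustrative Cauchy jump on stagnation.
# Parameters (0.7 inertia, 1.5/1.5 acceleration) are conventional choices.

def cauchy():
    """Standard Cauchy sample via the inverse CDF."""
    return math.tan(math.pi * (random.random() - 0.5))

def pso_with_jumps(f, dim=2, n=10, iters=100, stall_limit=5):
    random.seed(1)  # deterministic for the demo
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pfit = [f(x) for x in xs]
    g_i = min(range(n), key=lambda i: pfit[i])
    gbest, gfit = pbest[g_i][:], pfit[g_i]
    stall = 0
    for _ in range(iters):
        improved = False
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * r1 * (pbest[i][d] - xs[i][d])
                            + 1.5 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fit = f(xs[i])
            if fit < pfit[i]:
                pfit[i], pbest[i] = fit, xs[i][:]
                if fit < gfit:
                    gfit, gbest = fit, xs[i][:]
                    improved = True
        stall = 0 if improved else stall + 1
        if stall >= stall_limit:
            # jump: Cauchy-perturb the stagnating global best
            cand = [gi + 0.1 * cauchy() for gi in gbest]
            if f(cand) < gfit:
                gfit, gbest = f(cand), cand
            stall = 0
    return gfit

sphere = lambda x: sum(xi * xi for xi in x)
```

On a unimodal function like the sphere the jump rarely fires; its value appears on multimodal benchmarks, where a heavy-tailed perturbation of a stalled global best can relocate the swarm to a better basin.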
[yliu05:2008] 
L. Kang, Z. Cai, X. Yan, and Y. Liu. Intelligence Computation and Applications.
Number 5370 in Lecture Notes in Computer Science. Springer,
2008. 
[yliu06:2008] 
Y. Liu, 2008–2009. Program Co-Chair of the 1st International Workshop on Aware Computing (IWAC 2009) 
[yliu07:2008] 
Y. Liu, 2007–2008. General Co-Chair of the 3rd International Symposium on Intelligence Computation and Applications (ISICA2008) 
[yliu08:2008] 
Y. Liu, 2008–2009. Program Co-Chair of the 4th International Symposium on Intelligence Computation and Applications (ISICA2009) 
[yliu09:2008] 
Y. Liu, 2008–2009. Co-Chair of Award Evaluation Committee, FAN 2009 
[qfzhao05:2008] 
Yuto Takabatake. Graduation Thesis: A study on generating new face
images for pose recognition, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[qfzhao06:2008] 
Sayaka Takano. Graduation Thesis: A study on neural network based
face detection, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[qfzhao07:2008] 
Naoki Tominaga. Master Thesis: Fast structural learning of distance-based
neural networks, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[qfzhao08:2008] 
Takashi Yazawa. Graduation Thesis: A study on improving the generalization
ability of the NNTrees, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[qfzhao09:2008] 
Ryohei Shindo. Master Thesis: Development of an extendable system
for document classification, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[qfzhao10:2008] 
Daiichi Kunita. Graduation Thesis: A comparative study on state-of-the-art
algorithms for document classification, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[qfzhao11:2008] 
Kazuhiro Haga. Graduation Thesis: Face recognition based on rough
null space approach, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[qfzhao12:2008] 
Yoshihiko Watanabe. Master Thesis: Extracting understandable
knowledge based on data fuzzification, University of Aizu, 2008. Thesis Advisor: Zhao, Q. F. 
[yliu10:2008] 
Jiahao Lu. Master Thesis: A Soccer Simulator and its Application for
RoboCup, University of Aizu, 2008. Thesis Advisor: Liu, Y. 
[yliu11:2008] 
Yuji Masuya. Graduation Thesis: Stock Price Prediction Using a Neural
Network, University of Aizu, 2008. Thesis Advisor: Liu, Y. 
[yliu12:2008] 
Masayuki Shiraiwa. Graduation Thesis: Simulation of Robot Soccer Game
in Soccer Server, University of Aizu, 2008. Thesis Advisor: Liu, Y. 
[yliu13:2008] 
Aiichiro Yamahara. Graduation Thesis: Improvement of Reinforcement
Learning in Maze Problem, University of Aizu, 2008. Thesis Advisor: Liu, Y. 