


The main stream of research in our lab is computational intelligence. So far we have focused our study on three keywords: recognition, learning, and understanding. The goal of our research is to develop learning models that are flexible enough to adapt to changing environments, and simple enough to be realized, interpreted, and reused. The ultimate goal is to design a system that can think, and decide what to do and how to grow up based on its own thinking. For this purpose, many approaches have been studied, e.g., neurocomputation, evolutionary computation, reinforcement learning, awareness computing, and so on. Of course, results proposed in conventional symbol-based artificial intelligence are also included.
In 2010, we successfully organized the International Symposium on Aware Computing (ISAC2010). We also prepared the IEEE International Conference on Awareness Science and Technology (iCAST2011). We are trying to promote awareness technology through collaboration with Chaoyang University of Technology (CYUT) and Shandong Academy of Sciences (SDAS). Our dream is to propose a new and better approach to the realization of artificial intelligence.
So far we have used or proposed the following learning models:
Neural network trees (NNTrees),
Nearest neighbor classification trees (NNCTrees),
Support vector machines (SVMs),
Neural network ensembles,
Modular neural networks,
Cellular automata, and
Recurrent neural networks.
Based on the above learning models, we have proposed many new algorithms. Examples include:
IEA: individual evolutionary algorithm (also called the R4rule),
CoopCEA: cooperative coevolutionary algorithms,
EPNet: evolutionary programming neural net,
Evolutionary design of neural network trees,
Induction of NNCTrees with the R4rule,
Fast neural network for face detection,
Rough null space approach to face recognition,
Card holder authentication based on image fusion.
To verify and improve the models and learning algorithms proposed so far, we have been studying online growing of neural network trees, evolution of neural network ensembles, evolutionary design of decision trees, and so on. Currently, we are very interested in applying our models and algorithms to solving practical problems related to producing a "safe, secure and healthy" society. Examples include: face detection, face/expression recognition, image compression and protection, text mining, IC card holder authentication, and so on.
Recently, we started new research related to steganography. We have proposed a morphing-based steganography technology that is useful for holder authentication and communication. We have filed several patents related to this research, and are trying to make some useful products in the near future.
J. Ji and Q. F. Zhao. Applying Naive Bayes classifier to document clustering. Journal of Advanced Computational Intelligence and Intelligent Informatics, 14(6):624–630, Sept. 2010.
Document clustering is the process of partitioning a set of unlabeled documents into clusters such that documents in each cluster share some common concepts. A naive Bayes classifier (BC) is a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions. BC requires a small amount of training data to estimate the parameters necessary for classification. However, the training data must be labeled. To apply BC to classification of unlabeled data, we introduce an iterative Bayes clustering (IBC) method in this paper. To improve the performance of IBC, we propose to combine IBC with a comparative advantage based initialization method. Experimental results show that the proposed method can improve the performance significantly compared with the classical clustering methods.
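The alternation between parameter estimation and reassignment described above might be sketched as follows. This is a simplified illustration, assuming multinomial term counts and Laplace smoothing; initialization here is random or caller-supplied rather than the paper's comparative-advantage initialization, and the function name and parameters are our own.

```python
import numpy as np

def iterative_bayes_clustering(X, k, init_labels=None, n_iter=20, seed=0):
    """Cluster term-count documents X (n_docs x n_terms) into k groups by
    alternating naive Bayes parameter estimation and reassignment.
    A simplified sketch of the IBC idea; the paper's comparative-advantage
    initialization is replaced by a random or caller-supplied one."""
    n, m = X.shape
    if init_labels is None:
        init_labels = np.random.default_rng(seed).integers(0, k, size=n)
    labels = np.asarray(init_labels)
    for _ in range(n_iter):
        # estimate class priors and Laplace-smoothed term probabilities
        log_prior = np.empty(k)
        log_theta = np.empty((k, m))
        for c in range(k):
            members = X[labels == c]
            log_prior[c] = np.log(max(len(members), 1) / n)
            tf = members.sum(axis=0) + 1.0    # Laplace smoothing
            log_theta[c] = np.log(tf / tf.sum())
        # reassign each document to its most probable cluster
        scores = X @ log_theta.T + log_prior
        new_labels = scores.argmax(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```

On clearly separable term-count data the iteration settles quickly; in practice the quality of the final clusters depends strongly on the initialization, which is why the paper pairs IBC with a better starting point.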
J. Ji, Y. T. Tony, and Q. F. Zhao. Clustering large sparse text data: a comparative advantage approach. Journal of Information Processing Society of Japan, 18(9):1234–1243, Sept. 2010.
Document clustering is the process of partitioning a set of unlabeled documents into clusters such that documents within each cluster share some common concepts. To analyze the clusters easily, it is convenient to represent the concepts using some key terms. However, by using terms as features, text data are represented in a very high-dimensional vector space, and the computational cost is high. Note that the text data are of high sparsity, and not all weights in the centers are important for classification. Based on this observation, we propose in this study a comparative advantage-based clustering algorithm which can find the relative strength between clusters, as well as keep and enlarge that strength. Since the vectors are represented by term frequency, the clustering results are more comprehensible compared with dimensionality reduction methods. Experimental results show that the proposed algorithm keeps the characteristics of the k-means algorithm, but the computational cost is much lower. Moreover, we also found that the proposed method has a higher chance of obtaining better results.
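One way to picture the observation that not all centroid weights matter is a k-means-style loop whose centroids are truncated to their strongest terms. This is a loose sketch of the idea, not the paper's exact algorithm; the truncation rule, the `top_terms` parameter, and the seeding of centroids from the first k documents are all assumptions made for illustration.

```python
import numpy as np

def sparse_centroid_kmeans(X, k, top_terms=2, n_iter=20):
    """k-means-style clustering of term-frequency vectors in which each
    centroid keeps only its top_terms largest weights, reflecting the
    observation that not all centroid weights matter for classification.
    A loose sketch only; centroids are seeded from the first k documents."""
    n = X.shape[0]
    centroids = X[:k].astype(float).copy()
    labels = np.full(n, -1)
    for _ in range(n_iter):
        # sparsify: zero out all but each centroid's strongest terms
        for c in range(k):
            keep = np.argsort(centroids[c])[-top_terms:]
            mask = np.zeros_like(centroids[c])
            mask[keep] = 1.0
            centroids[c] *= mask
        sims = X @ centroids.T                 # dot-product similarity
        new_labels = sims.argmax(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels
```

Because each similarity computation touches only a handful of nonzero centroid weights, the cost per iteration can be far below that of dense k-means on the full vocabulary, which matches the abstract's claim of much lower computational cost.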
H. Hayashi and Q. F. Zhao. Inducing compact NNTrees through discriminant multiple centroid based dimensionality reduction. International Journal of Innovative Computing, Information and Control, 7(5):2971–2985, May 2011.
The neural network tree (NNTree) is a hybrid model for machine learning. Experimental results have shown that, compared with fully connected neural networks (NNs), NNTrees are more suitable for structural learning; and compared with standard decision trees (DTs), NNTrees are more accurate. To reduce the implementation cost of NNTrees, we have tried to induce compact NNTrees through dimensionality reduction. We first studied the linear discriminant analysis (LDA) approach and found that LDA can reduce the implementation cost greatly without degrading the system performance significantly in most cases. However, when the number Nt of data is large and the dimensionality Nd of the feature space is high, using LDA can be computationally very expensive. One method for solving this problem is to use the direct centroid (DC) approach. Unfortunately, through experiments we found that features extracted by DC are not informative enough for obtaining good NNTrees. In this paper, we propose the discriminant multiple centroid (DMC) approach. In DMC, all data are first mapped to a lower dimensional space based on the centroids, and LDA is then conducted in the mapped space. Experimental results with several public databases show that DMC is as effective as LDA, but is more efficient when both Nt and Nd are large.
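The two-step structure of DMC, mapping the data onto the subspace spanned by the class centroids and then running LDA there, might be sketched roughly as follows. The QR-based projection, the small ridge term that keeps the within-class scatter invertible, and all names are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def dmc_transform(X, y):
    """Sketch of the discriminant multiple centroid (DMC) idea:
    project data onto the subspace spanned by the class centroids,
    then apply Fisher LDA in that low-dimensional space."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    # orthonormal basis of the centroid subspace (QR on centroid columns)
    Q, _ = np.linalg.qr(centroids.T)          # d x k basis
    Z = X @ Q                                 # map to k-dim space
    # Fisher LDA in the mapped space
    mean_all = Z.mean(axis=0)
    k = Z.shape[1]
    Sw = np.zeros((k, k))
    Sb = np.zeros((k, k))
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        diff = (mc - mean_all)[:, None]
        Sb += len(Zc) * (diff @ diff.T)
    # small ridge keeps Sw invertible in the tiny mapped space
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(k), Sb))
    order = np.argsort(evals.real)[::-1][:len(classes) - 1]
    W = evecs.real[:, order]
    return Z @ W, Q, W
```

The point of the two-step design is that LDA's scatter matrices are built in a space whose dimension equals the number of classes rather than Nd, so the eigen-decomposition stays cheap even when both Nt and Nd are large.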
Y. Liu. New discoveries in fast evolutionary programming. International Journal of Innovative Computing, Information & Control, 7(5(B)):2881–2896, 2011.
It had been stated in both the theoretical analysis and the empirical results on fast evolutionary programming (FEP) that long jumps of Cauchy mutations were the cause of the better performance of FEP on optimizing both unimodal and multimodal functions. This statement about long jumps of Cauchy mutations has been so widely held in applications of evolutionary programming (EP) that the effectiveness of long jumps of Cauchy mutations has seldom been put in doubt. Through carefully examining the relationship between the step sizes of mutations and their performance, it has been discovered that not long jumps but short jumps with large variances among Cauchy mutations led to the better performance of FEP over classical EP (CEP). Experimental results given in this paper show that effective Cauchy mutations in FEP often had even shorter step sizes on average than effective Gaussian mutations in CEP, although the average step sizes of Cauchy mutations were much longer than those of Gaussian mutations. Effective mutations here refer to those mutations that generated better offspring than their parent solutions among all mutations at each generation. It has been further discovered that the same self-adaptation used in CEP and FEP showed quite different behaviors on optimizing the same test functions from the same initial populations. These two discoveries shed light on why the shorter effective Cauchy mutations performed better than the longer effective Gaussian mutations, and how effective Cauchy mutations came to have shorter step sizes than effective Gaussian mutations.
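The distinction between average step sizes and effective (improving) step sizes can be computed directly. The toy setup below mutates a fixed parent on the sphere function and collects step-size and survival-rate statistics for Gaussian and Cauchy mutations; the function name, the parent, and the scale `eta` are illustrative assumptions, not the paper's experimental protocol.

```python
import numpy as np

def mutation_step_stats(n=20000, dim=30, eta=0.1, seed=1):
    """Compare Gaussian (CEP-style) and Cauchy (FEP-style) mutation
    step sizes on the sphere function f(x) = sum(x_i^2), mutating one
    fixed parent. 'Effective' mutations are those improving the parent."""
    rng = np.random.default_rng(seed)
    parent = np.full(dim, 2.0)
    f_parent = np.sum(parent ** 2)
    stats = {}
    for name in ("gaussian", "cauchy"):
        if name == "gaussian":
            delta = eta * rng.standard_normal((n, dim))
        else:
            delta = eta * rng.standard_cauchy((n, dim))
        steps = np.linalg.norm(delta, axis=1)          # mutation step sizes
        f_child = np.sum((parent + delta) ** 2, axis=1)
        effective = f_child < f_parent                 # improving mutations
        stats[name] = {
            "mean_step": steps.mean(),
            "mean_effective_step": steps[effective].mean(),
            "survival_rate": effective.mean(),
        }
    return stats
```

Running this shows the heavy Cauchy tail in the average step size immediately; separating out only the effective mutations is exactly the kind of measurement the paper uses to question whether the long jumps are what actually help.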
C. H. Hsieh, F. J. Chang, and Q. F. Zhao. Content aware image retrieval with partition-based color feature. In Proc. of the International Joint Conference on Neural Networks (IJCNN2010), pages 1392–1398, Barcelona, Spain, July 2010. IEEE.
In this paper, we present a content aware approach to image retrieval with partitioned color features only. Given a query image, the proposed approach consists of four stages. First, partition the query image into sub-images. Second, calculate the mean of each component in the partitioned sub-images as the color features. Third, find weights for the R, G, and B components based on their energies for similarity evaluation. Fourth, retrieve images from the image database using a weighted similarity measure. Though the approach is simple, it is effective in image retrieval even with only partitioned color features. The simulation results for the given database indicate that the overall average precision of ten retrieved images is as high as 0.86. Thus the proposed approach can be used in applications where light computation is sought.
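The first, second, and fourth stages can be sketched in a few lines. This is an illustration under stated assumptions: the grid size is arbitrary, and the paper's energy-based channel weights are replaced here by caller-supplied weights.

```python
import numpy as np

def partition_color_feature(img, grid=4):
    """Mean R, G, B of each cell in a grid x grid partition of the image
    (img: H x W x 3 array), flattened into one feature vector."""
    h, w, _ = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            feats.append(cell.reshape(-1, 3).mean(axis=0))
    return np.concatenate(feats)              # length 3 * grid * grid

def retrieve(query_feat, db_feats, weights=(1.0, 1.0, 1.0), top=10):
    """Rank database feature vectors by weighted Euclidean distance to the
    query. The per-channel weights stand in for the paper's energy-based
    weights (an assumption for this sketch)."""
    w = np.tile(np.asarray(weights), query_feat.size // 3)
    dists = [np.sqrt(np.sum(w * (query_feat - f) ** 2)) for f in db_feats]
    return np.argsort(dists)[:top]
```

Each image reduces to just 3 × grid² numbers, which is what makes the method light enough for the low-computation settings the paper targets.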
Y. Liu. Are long jumps of Cauchy mutations effective in fast evolutionary programming? In Proceedings of the 2010 IEEE Congress on Evolutionary Computation, pages 1174–1181. IEEE, July 2010.
It had been discovered that not long jumps but short jumps with large variances among Cauchy mutations contributed to the better performance of fast evolutionary programming (FEP) over classical evolutionary programming (CEP). One strong force pushing the effective Cauchy mutations toward shorter step sizes came from the different behaviors of the same self-adaptation used in CEP and FEP on optimizing the same test functions from the same initial populations. This paper explores this correlation between mutation step sizes and self-adaptation, and suggests that it is as necessary to set an upper bound as to have a lower bound on the strategy parameters in self-adaptation.
Y. Liu. Measurements in fast evolutionary programming. In Computational Intelligence and Intelligent Systems, Communications in Computer and Information Science, Volume 107, pages 79–86. Springer, October 2010.
A number of mutation operators have been developed in evolutionary programming, such as Gaussian mutations, Cauchy mutations, Lévy mutations, and some mixed mutations. Many results have been obtained only on comparisons of performance among different mutations. Instead of merely measuring the performance, this paper discusses how to examine the behaviors of Gaussian mutations and Cauchy mutations based on nine measurements, including five measurements from fitness distributions, one measurement on survival rate, and three measurements on mutation step sizes. The relationships among these nine measurements are further explored.
Y. Liu. Evaluations of mutations in evolutionary programming. In The Proceedings of the 2nd International Symposium on Aware Computing, pages 154–158. IEEE, November 2010.
Different mutation operators have been developed in evolutionary programming, such as Gaussian mutations, Cauchy mutations, Lévy mutations, and some mixed mutations. Experimental results have often been focused on comparisons of performance among different mutations. Besides measuring the performance, this paper discusses how to evaluate Gaussian mutations, Cauchy mutations, and Lévy mutations using nine measurements, including five measurements from fitness distributions, one measurement on survival rate, and three measurements on mutation step sizes. The relationships among these nine measurements are further explored on these mutations.
P. Li, Z. Li, and Y. Liu. Band structures of multilayer films with randomness. In Advances in Computation and Intelligence, Lecture Notes in Computer Science, Volume 6382, pages 496–504. Springer, October 2010.
Some randomness is likely to occur in one way or another in the fabrication of photonic crystals. It is essential to understand how such randomness would affect the performance of the produced photonic crystals. In this paper, the band structures of multilayer films with different levels of randomness were analyzed by plane wave expansion (PWE) with supercells. The results clearly show how the lowest band gap decreases with increased randomness. Meanwhile, a number of slim band gaps can appear in the perturbed multilayer films.
J. Li, Y. Liu, and L. Gu. DDoS attack detection based on neural network. In The Proceedings of the 2nd International Symposium on Aware Computing, pages 196–199. IEEE, November 2010.
A DDoS attack is a major Internet security problem in which many clients simultaneously send service requests to a server on the Internet so that the server becomes too busy to provide normal services to others. Attackers use legitimate packets and often change packet information, so traditional detection methods based on feature descriptions find such attacks difficult to detect. This paper presents a DDoS attack detection method based on neural networks. By analyzing server resources and network traffic, the network is trained to distinguish normal from abnormal behavior, and it achieves good results in detecting DDoS attacks.
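A minimal version of the idea, training a network on traffic statistics to label behavior normal or abnormal, can be sketched with a single logistic unit. The features, network size, and training settings below are illustrative assumptions; the paper's actual inputs are server-resource and network-traffic measurements.

```python
import numpy as np

def train_detector(X, y, lr=0.5, epochs=500, seed=0):
    """Train a single logistic unit by gradient descent to separate
    normal (y=0) from attack (y=1) traffic-feature vectors."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid output
        grad = p - y                              # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, X):
    """Label feature vectors: 1 = suspected attack, 0 = normal."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```

A real deployment would feed the unit (or a larger multilayer network, as in the paper) with windowed statistics such as request rates and resource usage, retraining as traffic patterns drift.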
Z. Cai, C. Hu, Z. Kang, and Y. Liu. Advances in Computation and Intelligence, page 535. Number 6382 in Lecture Notes in Computer Science. Springer, 2010.
Z. Cai, H. Tong, Z. Kang, and Y. Liu. Computational Intelligence and Intelligent Systems, page 289. Number 107 in Communications in Computer and Information Science. Springer, 2010.
Y. Liu. FY2010 University of Aizu Competitive Research Funding, 2010.
Y. Liu, 2009–2010.
Program CoChair of the 5th International Symposium on Intelligence Computation and Applications (ISICA2010)
Y. Liu, 2009–2010.
Program CoChair of the Second International Symposium on Aware Computing (ISAC 2010)
Yamahara Aiichiro. Master Thesis: Multiagent Reinforcement Learning with Selective Voting, Graduate School of Computer Science and Engineering, March 2011.
Thesis Adviser: Y. Liu
Jin Li. Master Thesis: Intelligent Detection of DoS Attack, Graduate School of Computer Science and Engineering, September 2010.
Thesis Adviser: Y. Liu