/ Tetsuhiko Ikegami / Vice President
/ Atsushi Kara / Professor
/ Hamza A. Ali / Visiting Professor
/ Masamichi Hashimoto / Visiting Professor
/ Vitaliy V. Kluev / Visiting Associate Professor
/ Aiguo He / Assistant Professor
As represented by the Japan Gigabit Network (JGN), a national project for research and development of advanced information network technologies, research on high-speed networking is actively promoted worldwide. In response to these developments, the Core and Information Technology Center (CITEC) was founded at the University of Aizu in April 1999. At the center, headed by Dr. T. Ikegami, intensive research and development is conducted over a five-year period, focusing on the basic and applied technologies of next-generation super-high-speed networks. In particular, as one of the connection sites of the JGN national project, the University of Aizu is working with various research institutions connected to JGN on research and development of basic and applied network technologies, including applications for distance education, medical and nursing care, and e-commerce, toward the realization of the advanced information and communication society. The university also collaborates with the Fukushima Technology Center (Hitech Plaza) and Nishi Aizu Machi CATV, which are connected through Asynchronous Transfer Mode (ATM) links (135 Mbps and 20 Mbps, respectively).
Refereed Journal Papers
Tetsuhiko Ikegami, Real-Time in Society by Nonreal-Time in Network. IEEE Journal of Selected Topics in Quantum Electronics, vol. 6, no. 6, pp. 1216--1219, invited paper, 2000.
Finding useful information on the Web is frequently a tedious and difficult task. An information retrieval system must somehow interpret the contents of the documents in a collection and rank them according to their degree of relevance to the user's query. This interpretation of document content involves extracting semantic information from the document text and using this information to match the user's information need. The difficulty lies not only in knowing how to extract this information but also in knowing how to use it to decide relevance. In this paper we propose a new approach to extracting semantic information from a given document text and using the obtained information to search for similar documents. The paper describes a search method for finding similar documents inside a static thematic collection. The basis of our approach is computing the degree of thematic similarity between pairs of documents. An algorithm for constructing these thematic similarity estimates is proposed and discussed. The algorithm uses clustering of the documents in the collection and the automatic construction of keyword sets.
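The abstract above describes thematic similarity computed from automatically constructed keyword sets. As a minimal illustration of that idea (not the paper's actual algorithm, which also involves document clustering), one could estimate similarity as the Jaccard overlap of per-document keyword sets:

```python
from collections import Counter

def keyword_set(doc, top_n=10):
    """Pick the top-N most frequent longer terms of a document as its keyword set."""
    words = [w.lower() for w in doc.split() if len(w) > 3]
    return {w for w, _ in Counter(words).most_common(top_n)}

def thematic_similarity(doc_a, doc_b):
    """Jaccard overlap between the two documents' keyword sets (0.0 .. 1.0)."""
    ka, kb = keyword_set(doc_a), keyword_set(doc_b)
    if not ka or not kb:
        return 0.0
    return len(ka & kb) / len(ka | kb)
```

In practice the keyword sets would be weighted (e.g., by TF-IDF) and restricted to cluster-representative terms, but the pairwise-overlap structure is the same.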
Domain-specific search engines are becoming popular because they offer greater accuracy than general-purpose search engines. In this study, a method for collecting domain-specific documents from the Web was developed for the purpose of improving search results. The main thrust of our approach is to use several metrics to estimate the relevance, with respect to a topic of interest, of every document automatically discovered by a crawler. This type of search yielded two important findings. First, the time required for manual analysis of the documents retrieved by the crawler was significantly reduced; second, the content quality of the selected documents was improved. These results suggest that the rough estimates of precision and recall calculated in this study show great promise.
This paper describes a means of video-based respiration monitoring to detect crises in an experimental telecare monitoring system [1]. The method can detect periodic respiratory motion as well as sudden motion, disorder, and apnea. It is based on computing and analyzing the edges of an image of a patient and a quilt, and on automatically identifying the region of respiratory motion. We examined the effect of video quality on the ability of the procedure to accurately identify specific regions. The results showed that Source Input Format (SIF) class video compressed to a 0.5 Mbps bandwidth is sufficiently reliable for the algorithm to perform well. One implementation runs at a near real-time video rate on a PC.
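The abstract describes the method only at a high level. A rough sketch of the general technique — edge maps, inter-frame change, and an FFT-based estimate of the breathing frequency — might look as follows; all function names and thresholds here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def edge_map(frame, thresh=20.0):
    """Crude edge detector: gradient magnitude thresholded to a binary map."""
    gy, gx = np.gradient(frame.astype(float))
    return (np.hypot(gx, gy) > thresh).astype(np.uint8)

def motion_signal(frames):
    """Per-frame count of edge pixels that changed since the previous frame."""
    edges = [edge_map(f) for f in frames]
    return np.array([np.sum(e1 != e0) for e0, e1 in zip(edges, edges[1:])])

def dominant_period(signal, fps=10.0):
    """Estimate the dominant motion frequency (Hz) via the FFT peak."""
    s = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(len(s), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
```

A steady peak in the normal breathing range would indicate periodic respiration; a flat signal (possible apnea) or a large burst (sudden motion) could then trigger an alert.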
Physically disabled patients are often unable to give respite to their attendants (carers). In most cases, the carer needs to monitor the safety of the patient 24 hours a day to check whether the patient is in discomfort. The emerging universal access to the Internet is going to bring a paradigm shift in home health care. This study aims to investigate the utility of remote monitoring of patients at home through video and sound over the Internet. The goal is to provide a TV-quality monitoring service that reduces the excessive burden on carers [1]. Although the current MPEG-2 encoders and the associated running costs of high-bandwidth telecommunications (2 Mbps or more) are not affordable for the majority, we anticipate that, in the near future, PC-based technologies and high-speed access to the Internet will provide the most cost-effective solution for our goal.
With the exponential growth of the Web, it is becoming increasingly difficult to collect and analyze documents from the Internet that are relevant to a particular topic. Domain-specific search tools are becoming popular because they offer greater accuracy. To address this problem, we propose a goal-directed approach to crawling the Web for topic-specific documents. To begin a search, a topic is specified using exemplary documents. We describe new research in information extraction that enables efficient spidering of the Web. The results of our preliminary experiments show that the proposed approach has promise.
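A goal-directed crawl of this kind is commonly structured as best-first search over the link graph, expanding pages most similar to the exemplary documents first. The sketch below is a generic focused-crawler skeleton under that assumption; `fetch` and `extract_links` are hypothetical callbacks, and the scoring is plain term-frequency cosine similarity rather than the paper's metrics:

```python
import heapq
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector of a text as a Counter."""
    return Counter(w.lower() for w in text.split() if w.isalpha())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def crawl(seed_urls, exemplar_texts, fetch, extract_links, limit=100):
    """Best-first crawl: pages most similar to the exemplars are expanded first."""
    topic = tf_vector(" ".join(exemplar_texts))
    frontier = [(-1.0, u) for u in seed_urls]   # max-heap via negated scores
    heapq.heapify(frontier)
    seen, results = set(seed_urls), []
    while frontier and len(results) < limit:
        _, url = heapq.heappop(frontier)
        text = fetch(url)
        score = cosine(topic, tf_vector(text))
        results.append((url, score))
        for link in extract_links(url):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))  # inherit parent's score
    return results
```

A real focused crawler would combine several relevance signals (anchor text, URL structure, link context) rather than a single cosine score, which is what the abstract's "several metrics" alludes to.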
This paper presents a system model for a distance computer exercise system and discusses the requirements for constructing the Tele-Exercise Classroom.
This paper describes the design of a distance education system for computer exercises. In order to realize the system, we present a Systems Model that shows the configuration of the system, user computer terminal types, system modes, and action modes. In addition, the extendibility of the system is well considered in the model.
This paper proposes a Tele-Exercise Classroom for computer exercises over the Japan Gigabit Network. The main features of the classroom are that a teacher can (1) monitor a student's operations, (2) give instructions to the student, and (3) show demonstrations to the student. In addition, exercise materials and/or problems arising during the exercise class can be presented using a synchronous/asynchronous whiteboard and discussed between the teacher and one or more students through interactive real-time video/audio.
This paper reports the results of NTCIR experiments on Japanese and English text retrieval carried out with the OASIS system. The OASIS system is designed to support multiple languages, and the main aim of these experiments is to test particular methods of supporting Japanese. The results showed that some improvements to the search engine need to be considered. More experiments are needed to test the effect of automatic query expansion using the top-ranked relevant documents.
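Query expansion from top-ranked documents is commonly implemented as pseudo-relevance feedback. As an illustrative sketch (not the OASIS implementation), one might add the most frequent previously unseen terms from the top-ranked documents to the query:

```python
from collections import Counter

def expand_query(query_terms, top_docs, n_new=5):
    """Pseudo-relevance feedback: augment the query with the most frequent
    terms from the top-ranked documents that are not already in the query."""
    counts = Counter()
    for doc in top_docs:
        counts.update(w.lower() for w in doc.split() if len(w) > 3)
    existing = {t.lower() for t in query_terms}
    new_terms = [t for t, _ in counts.most_common() if t not in existing][:n_new]
    return list(query_terms) + new_terms
```

For Japanese text, the document terms would first have to come from morphological segmentation rather than whitespace splitting, which is one of the language-support issues the experiments test.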
Chapters in Books
V. Kluev, V. Dobrynin, and S. Garnaev, Intelligent Construction of Thematic Collections. In Recent Advances in Applied and Theoretical Mathematics, pp. 103--106, World Scientific and Engineering Society Press, 2000.
We describe an experimental semi-automatic technology for the construction of thematic collections. A crawler (a program that retrieves Web pages) is used as the basis of this technology. The main principles of the crawler are discussed, and the use of this program for gathering topic-specific documents from the Internet is reported. This technology can be used for the construction of collections of mathematical Web material. The results obtained for some topics indicate that additional work is needed to improve this technology.
Academic Activities
Atsushi Kara, Chairman, IEICE Communication System Workshop, November 2000.