/ Masahide Sugiyama / Professor
/ Michael Cohen / Associate Professor
/ Susantha Herath / Associate Professor
/ Minoru Ueda / Assistant Professor
Using our communication channels (sense organs: ears, mouth, eyes, nose, skin, etc.) we can communicate with each other: human to human, human to machine, and human to any information source. When these channels are impaired, in either a software or a hardware sense, communication can become difficult. The research area of the Human Interface Laboratory covers the enhancement and generation of various human interface channels.
In order to advance this research on human interfaces, we adopt the following research principle:
We organized the second workshop, IWHIT98 (International Workshop on Human Interface Technology 1998), on Nov. 11th-13th, sponsored by the International Affairs Committee of the University of Aizu. The workshop had 5 sessions (1. Object Location and Tracking in Video Data, 2. Subjective Factors in Handling Images, 3. Visual Interfaces, 4. Visual and Body Perception, 5. Tools for Language Generation) and 15 lectures.
We promoted 5 SCCPs for students (``Speech Processing and Multimedia'', ``Sign Language Processing System'', ``GAIA -- Planet Management'', ``Computer Music'', ``Aizu Virtual City on InterNet'') and 2 research projects (``Object Location and Tracking in Video Data'', ``Spatial Media: Sound Spatialization''). We received 4 commissioned research funds: from IPA on ``Development of Japanese Dictation Software'', from HITOCC on ``Study on Computer Security using Speaker Recognition'', from the Fukushima Prefectural Foundation for the Advancement of Science and Education on ``Environment Computer Activity Project'', and from the Telecommunication Advancement Organization of Japan on ``Sign Language Communication Between Different Languages''.
We exhibited our research activities at the open campus during the University Festival (Oct. 31st and Nov. 1st) and at the Fukushima Sangyo Fair (Nov. 29th and 30th). We also held a Lab Open House for freshmen on April 3rd.
From our research activity, we published 6 papers in academic journals and presented 10 refereed papers at international conferences.
One of our members organized a working group on ``Blind and Computer''; about 30 people attended, and the group received support from the NHK Wakaba Fund.
We maintain the Human Interface Lab homepage to open our research and education activities to the world:
http://www.u-aizu.ac.jp/labs/sw-hi/.
Refereed Proceeding Papers
In recent years, there has been extensive research on collaborative virtual environments. It is important to construct a robust system for developing a multimodal, multiclient environment. In this paper, the basic planned structure of a collaborative virtual environment is described. To realize the system, {\sc Java} with the {\sc jsdt} (Java Shared Data Toolkit) is used, leveraging its ease of constructing collaborative virtual environments. The system, which has the potential for multicasting, is described. A {\sc 2.5d} dynamic map client is presented as an example of an interface. Legacy clients can also be integrated using a proxy between server and client. The framework through which multiple clients and users interact with the shared space is described. We are developing a suite of multicasting clients to support audio, video, and text chat applications, including multiple views of shared spaces, featuring spatialized audio (via both a speaker array system and nearphones), haptic input/output (via a swivel chair with a servomotor and force sensors), and photorealistic immersive interfaces (via {\sc qtvr} [QuickTime Virtual Reality] panoramic movies). Applications are expected to include distance learning and collaborative computing (internet telephony complemented by video and a distributed whiteboard feature), telepresence and telerobotics (through a mobile robot), and gaming.
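To make the {\sc jsdt}-based wiring concrete, the following is a minimal sketch of a text-chat client in the spirit of the paper. The host, port, session, and channel names are hypothetical, the sketch assumes a JSDT registry is already running on the server host, and this is not the paper's actual code.

\begin{verbatim}
import com.sun.media.jsdt.*;

// Minimal JSDT chat-client sketch (hypothetical host/port/session names).
// Assumes a JSDT registry is already running on the server host.
public class ChatClientSketch implements Client, ChannelConsumer {

    public String getName() { return "chatUser"; }

    // No authentication in this sketch; a real manager would check credentials.
    public Object authenticate(AuthenticationInfo info) { return null; }

    // Called by JSDT whenever another client sends data on the channel.
    public void dataReceived(Data data) {
        System.out.println(data.getSenderName() + ": " + data.getDataAsString());
    }

    public static void main(String[] args) throws JSDTException {
        ChatClientSketch client = new ChatClientSketch();

        // Join (or create) a shared session on a hypothetical server.
        URLString url = URLString.createSessionURL("vrserver.example", 4461,
                                                   "socket", "sharedSpace");
        Session session = SessionFactory.createSession(client, url, true);

        // Create a reliable, ordered channel and subscribe to incoming data.
        Channel chat = session.createChannel(client, "chat", true, true, true);
        chat.addConsumer(client, client);

        // Broadcast a message to every client joined to the channel.
        chat.sendToAll(client, new Data("hello, shared space"));
    }
}
\end{verbatim}

A legacy client, as mentioned above, would sit behind a proxy that speaks this channel protocol on its behalf.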
ISBN 4-9980886-1-0 and 4-9980886-3-7.

We have programmed a Java applet to display and control the state of `The Internet Chair,' a multimedia-enhanced swivel seat rotated by a force-feedback servomotor. As a user interface to the Internet Chair, this applet graphically displays a virtual spiral spring, dynamically calculating its projected force using a natural extension of Hooke's Law (stress proportional to strain), whose values are used to set the resistance of the servomotor. Further, by using spectral colors in the 2D display, virtual depth is visually suggested through chromastereoptic eyewear. By adjusting the audio panning (intensity stereo) of a virtual source, through the Java media layer, with respect to the orientation of the Internet Chair, a virtual soundscape may be rotated and stabilized, registrable with the actual physical space (for eventual alignment of auditory cues with real-life events or locations). Currently capable of rendering HDD-resident .au and .wav sound files in either `one-shot' or looped mode, the applet will be extended to also allow audio display of network-streamed signals. Another visual display will be used as a viewport into a 3D scene (using modeling and rendering technology like QTVR, Java3D, or VRML), also fixable against the real space across yawing rotation, complementing and reinforcing the stabilized soundscape. A Java framework integrates these three different spatial media modalities: audio, visual, and haptic. Groupware deployment will allow multiple users across the internet to share the sights, sounds, and physical sensations of virtual and real worlds.
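As a back-of-the-envelope illustration of the spring and panning computations described above, the sketch below applies Hooke's Law to the chair's angular displacement and derives intensity-stereo gains from the chair's azimuth. The constants and method names are invented for illustration, and the constant-power pan law is a standard substitute, since the abstract does not specify the applet's actual law.

\begin{verbatim}
// Illustrative sketch of the Internet Chair's spring and panning math;
// constants and names are hypothetical, not taken from the actual applet.
public class ChairModelSketch {

    static final double SPRING_CONSTANT = 0.5;  // N*m per radian (assumed)

    // Hooke's Law for the virtual spiral spring: restoring torque is
    // proportional to angular displacement from the rest orientation.
    // Its magnitude would be mapped onto the servomotor's resistance.
    static double springTorque(double chairAzimuth, double restAzimuth) {
        return -SPRING_CONSTANT * (chairAzimuth - restAzimuth);
    }

    // Intensity-stereo gains that keep a virtual source fixed in room
    // coordinates: as the chair yaws, the source's azimuth relative to
    // the listener shifts by the opposite amount. sin() folds the source
    // onto a left-right axis (front/back ambiguity is ignored in this
    // simple two-speaker, constant-power pan).
    static double[] panGains(double sourceAzimuth, double chairAzimuth) {
        double relative = sourceAzimuth - chairAzimuth;           // radians
        double p = Math.sin(relative);                            // -1..+1
        double angle = (p + 1.0) * Math.PI / 4.0;                 // 0..pi/2
        return new double[] { Math.cos(angle), Math.sin(angle) }; // {L, R}
    }

    public static void main(String[] args) {
        System.out.println("torque at +30 deg: "
            + springTorque(Math.toRadians(30), 0.0));
        double[] g = panGains(Math.toRadians(45), Math.toRadians(10));
        System.out.println("gains L=" + g[0] + " R=" + g[1]);
    }
}
\end{verbatim}

Because the gains satisfy L\(^2\) + R\(^2\) = 1, the perceived loudness of the source stays roughly constant as the chair rotates, while its apparent direction stays fixed in the room.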
We are integrating the Internet Chair with a telerobot. The goal is high-level radio- and \initials{ip}-based tele\"{e}xistence with an advanced, virtual-reality-based frontend (control system) featuring realtime audio and video telepresence and force rendering. Azimuthal sensors and effectors in the chair will allow it to be used to steer the robot, or, when the robot runs autonomously, to monitor its orientation propriocentrically, a sensation which complements the visual and auditory telepresence modalities. We plan to deploy synesthetic displays, compositing information cross-modally, like inferred auditory scene details rendered graphically.
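One plausible, purely illustrative way the chair's azimuth sensor could steer the robot is sketched below; the transport, host name, and command format are assumptions, not the system's actual protocol.

\begin{verbatim}
import java.io.PrintWriter;
import java.net.Socket;

// Hypothetical sketch: forward the chair's yaw to the telerobot as a
// heading setpoint over a plain TCP text protocol. The host, port, and
// command syntax are illustrative assumptions, not the actual system.
public class ChairTeleopSketch {
    public static void main(String[] args) throws Exception {
        try (Socket robot = new Socket("robot.example", 9000);
             PrintWriter out =
                 new PrintWriter(robot.getOutputStream(), true)) {
            double chairYawDegrees = readChairAzimuth();
            // Command the robot to match the chair's orientation;
            // in monitoring mode the flow would reverse, with the robot's
            // heading driving the chair's servomotor instead.
            out.println("HEADING " + chairYawDegrees);
        }
    }

    // Stand-in for the real azimuth-sensor readout.
    static double readChairAzimuth() { return 42.0; }
}
\end{verbatim}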
A sound reproduction apparatus which reproduces sound by driving a plurality of loudspeakers using a sound signal, comprising: head-related transfer function filtering parts, which simulate the transfer functions of the sound transmission paths from a sound source to a human head; sound field correction filtering parts, which alleviate the effects of the sound field in the reproduction parts; a chair, which comprises a tilting-angle-variable back with a set of loudspeakers and a turnable seat; an azimuth detecting part, which detects the azimuthal angle due to turning of said seat; an elevation angle detecting part, which detects the elevation angle due to tilting of said tilting-angle-variable back; and a transferring part, which transfers the outputs from both said detecting parts to said head-related transfer function filtering parts.
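The claimed signal path reduces to: detect the chair's azimuth and elevation, select the corresponding head-related transfer function filters, and apply them to the source signal. The sketch below illustrates that chain for azimuth only; the table granularity, filter length, and names are illustrative assumptions, not the patent's parameters.

\begin{verbatim}
// Minimal sketch of the claimed signal path: the chair's detected angle
// selects an HRTF filter pair, which is applied to the source by FIR
// convolution. Table granularity and filter length are assumptions.
public class HrtfChairSketch {

    // Hypothetical measured head-related impulse responses, indexed by
    // azimuth in 30-degree steps. Elevation (from the tilting back) would
    // index a second table dimension, omitted here for brevity.
    static final int AZ_STEP_DEGREES = 30;
    static final double[][] LEFT_HRIRS  = new double[12][64];
    static final double[][] RIGHT_HRIRS = new double[12][64];

    // Pick the nearest measured filter for the detected chair azimuth.
    static int nearestIndex(double azimuthDegrees) {
        int idx = (int) Math.round(azimuthDegrees / AZ_STEP_DEGREES);
        return ((idx % 12) + 12) % 12;  // wrap into 0..11
    }

    // Direct-form FIR convolution of the source with one impulse response.
    static double[] fir(double[] x, double[] h) {
        double[] y = new double[x.length + h.length - 1];
        for (int n = 0; n < x.length; n++)
            for (int k = 0; k < h.length; k++)
                y[n + k] += x[n] * h[k];
        return y;
    }

    // One block of the reproduction chain for the detected azimuth;
    // the two outputs would feed the loudspeakers in the chair's back
    // (after the sound field correction filtering the claim describes).
    static double[][] render(double[] source, double chairAzimuthDegrees) {
        int i = nearestIndex(chairAzimuthDegrees);
        return new double[][] { fir(source, LEFT_HRIRS[i]),
                                fir(source, RIGHT_HRIRS[i]) };
    }
}
\end{verbatim}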