Professor |
Most of the courses taken by engineering and computer science students emphasize scientific discipline and the accumulation of "truth." The Computer Arts Lab.'s activities include such technically objective factors, but also encourage original expression, subjectively motivated by aesthetics rather than "correctness," sometimes "putting the art before the course!" Unlike many other labs' activities, which try to converge on a "right answer" sharable by everyone else, artistic disciplines encourage originality, in which the best answer is one that is like no one else's! The Computer Arts Lab., through its resident Spatial Media Group,1 is researching projects including practical and creative applications of virtual reality and mixed (augmented, enhanced, hybrid, mediated) reality and virtuality; panoramic interfaces and spatially-immersive displays (especially stereotelephonics, spatial sound, and stereography); wearable and mobile applications, computing, and interfaces; and networked multimedia, with related interests in cve (collaborative virtual environments), groupware and cscw (computer-supported collaborative work); hypermedia; digital typography and electronic publishing; force-feedback displays; telecommunication semiotics (models of teleconferencing selection functions); information furniture; way-finding and navigation; entertainment computing; ubicomp (ubiquitous computing), calm (ambient), and pervasive technology. We are particularly interested in narrowcasting commands, conference selection functions for adjusting groupware situations in which users have multiple presence, virtually existing in more than one space simultaneously. 
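As a rough illustration of narrowcasting selection functions (a minimal sketch, not the group's implementation; the class and function names here are invented), a source can be modeled with an explicit exclusion attribute (mute) and an explicit inclusion attribute (select, i.e. solo): a source is heard unless it is muted, or unless some peer is selected while it is not.

```python
from dataclasses import dataclass

@dataclass
class Source:
    """A media source in a shared space (illustrative model)."""
    name: str
    muted: bool = False      # explicit exclusion
    selected: bool = False   # explicit inclusion ("solo")

def audible(source, sources):
    """A source is heard unless it is muted, or unless some peer
    is selected (soloed) while this source is not."""
    if source.muted:
        return False
    if any(s.selected for s in sources) and not source.selected:
        return False
    return True

# Example: soloing bob implicitly excludes carol; muting alice is explicit.
alice = Source("alice", muted=True)
bob = Source("bob", selected=True)
carol = Source("carol")
roster = [alice, bob, carol]
heard = [s.name for s in roster if audible(s, roster)]
```

With no selections active, every unmuted source is heard; a select acts as a solo that suppresses unselected peers, matching the asymmetric mute/select semantics the narrowcasting papers below elaborate.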
We explore realtime interactive multimedia interfaces -- auditory, visual, haptic, and multimodal.

Auditory: We are exploring interfaces for multichannel sound, including stereo, quadraphonic, and nearphones (mounted on our rotary motion platform), two separate speaker array systems in the University-Business Innovation Center 3d Theater,2 and an octophonic system in our lab. Working with Dr. Durand Begault of NASA, we deployed online courseware, "Sonic,"3 organizing an introduction to desktop audio and presenting many sound samples. We use these contents in the "Audio Interfaces" graduate school course,4 which is a prerequisite for "Spatial Hearing and Virtual 3D Sound,"5 taught jointly with Prof. Jie Huang of the Human Interface Lab. With Profs. Robert Fujii and Satoshi Nishimura, we host a Computer Music Studio, featuring about fifteen computer music workstations -- each comprising a personal computer (Windows or Macintosh) with sequencing software and a keyboard synthesizer -- plus assorted amplifiers, racks, mixers, and effects processors. Ongoing student projects explore dynamic microtuning systems, quadraphonic spatialization of tetraphonic bass guitar strings, and control of narrowcasting using sip (Session Initiation Protocol). We annually conduct a Student Cooperative Class Project.6 In the past we have sponsored sccps on Digital Compositing (using Photoshop and the Gimp7), but recently the sccp has focused on Computer Music,8 studying basic music theory and using a midi sequencer9 to compose and perform student-authored songs. This sccp segues into a graduate-level computer music course.10

Visual: We promote creative applications of scientific visualization, encouraging the use of Mathematica11 and stereoscopy,12 including chromastereoscopy13 (3d images with depth layers cued by color). We enjoy exploiting the unique large-format immersive stereographic display in the UBIC 3d Theater. 
Some group members are using visual input techniques through a webcam to sense user mood, for kansei interpretation, or for control of narrowcasting chatspace interfaces. We are also exploring creative applications of panoramic imaging and object movies.14

Haptic: We are also exploring the use of haptic interfaces, including force-display joysticks and a rotary motion platform (the "Schaire [for 'shared chair'] Internet Chair"). We conduct annual Creative Factory Seminars. Past cfss explored advanced audio interfaces, panoramic imaging, and haptic modeling. This year, in conjunction with the Distributed and Parallel Processing Lab., we conducted a workshop on Haptic Modeling and 3d Printing, using force-feedback CAD workstations15 to make models that were then rapid-prototyped (as stereolithograms) with the dppl's personal fabricator,16 closing the "idea (stored in brain neurons) - information (stored as bits) - matter (atoms)" pathway.

Multimodal: Using such multimodal interfaces, our students have crafted driving simulators; location-based games featuring the rotary motion platform;17 and synaesthetic (cross-sensory modality) visual and haptic music players (rendering songs as light shows18 or dancing chairs19). Using the aforementioned visual sensing technique, narrowcasting postures can be recognized and used to control distributed chatspaces or virtual concerts. A recent student project deployed a microphone vector to track a moving sound source, using its network interface to trigger internet appliances (like lights that follow the source). We are also developing a driving simulator using collision-detection modulation of the force-feedback steering wheel and the rotary motion platform. The most recent version of the project features a dual-steering (front and back) fire truck, racing through a 3d model of our campus to reach a fire, piloted by two drivers, and featuring spatial sound effects. 
We are also exploring mobile (nomadic, portable) computing, working in conjunction with university spin-offs The Designium,20 Eyes,21 and GClue.22 Other activities: We host an annual symposium, the Int. Symposium on Spatial Media,23 inviting experts to share their knowledge and passion regarding such themes as "Spatial Sound and Spatial Telepresence" ('01), "Magic in Math and Music" ('02), "Advanced Multimedia and Virtual Reality" ('03), "Spatial Sound" ('04), "Hearing and Sound Installations" ('05), "Sound, Audio, and Music" ('06), "Interactive Media, Security, and Stereography" ('06), and "Music XML and the Structure of Swing, Understanding Color Media, Media Grid, and Visualization Tools" ('07). Our lab sponsors several student performance circles, including the Yasakoi Dance Circle,24 Disco Mix Club, and Project Mermaid. Through the research & development and the deployment & integration of stereographic, spatial sound, and haptic applications, we nurture scientific and artistic interest in advanced computer-human and human-human communication. Our ultimate domain is the exploration of interfaces and artifacts that are literally sensational. Some relevant links: Audio Courseware: http://sonic.u-aizu.ac.jp Spatial Media: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohea.html
Japanese: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohejp.mpg QuickTime Virtual Reality: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/
Object Movie: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/shoe.mov Hideo Noguchi + Akabeko: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Noguchi+Akabeko.mov Rotational Degrees of Freedom: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Rotational-DsoF.mov |
[mcohen-01:2006] |
Owen Noel Newton Fernando, Kazuya Adachi, Uresh Duminduwardena,
Makoto Kawaguchi, and Michael Cohen. Audio Narrowcasting
and Privacy for Multipresent Avatars on Workstations and Mobile
Phones. Ieice Trans. on Information and Systems, E89-D(1):73-87, 2006. |
Our group is exploring interactive multi- and hypermedia, especially regarding
virtual and mixed reality groupware systems. The apparent paradoxes of multipresence |
[mcohen-02:2006] |
Tatuya Nagai, Michael Cohen, Youko Murakami, and Yoshinori
Moriguchi. Networked driving simulator with force display, rotating
motion platform, and dual steering. In Proc. HC: Ninth Int. Conf. on
Human and Computer, pages 25-30, Aizu-Wakamatsu, September
2006. |
This article describes the force display and dual steering in the Networked
Driving Simulator, which can share a virtual space using the cve (Collaborative
Virtual Environment), our groupware. The Networked Driving
Simulator has two types of force display, the Schaire and the Force Feedback
Wheel (ffbw). The Schaire (Share-Chair), a rotary motion platform being developed
by our Spatial Media Group, is rotated with a servomotor, the rotation
angle controlled via the internet through the cve. The ffbw displays force
when a vehicle in the Networked Driving Simulator collides. The technique for
feeding back the effect employs programs using C++ and DirectInput25,
launching a separate executable, called ForceManager, from the Networked Driving
Simulator, which is implemented with Java3D. The pattern of the effect is
changed by the arguments of ForceManager. Moreover, our Networked Driving
Simulator allows selection of the driving vehicle, such as a normal car or a long-style
fire truck, which has seats for drivers in front and in rear (tiller), as its front
and rear wheels are steered independently. The dual steering is useful for turning narrow
corners rapidly and smoothly in case of emergencies. Keywords: Force
Display, Rotating Motion Platform, Schaire, Dual Steering |
|
[mcohen-03:2006] |
Kunihiro Doi and Michael Cohen. Visual affective sensing of
rotary chair. In Proc. KEIS: First Int. Conf. on Kansei Engineering
and Intelligent Systems, pages 257-258, Aizu-Wakamatsu, September
2006. |
This research adopts a rotary chair as a novel input device for an additional
dimension, as "information furniture." This approach is similar to a motion
capture system with an azimuth sensor. A swivel chair's gesture is tracked via a
video stream of a wing of colored objects (two round green stickers) attached
to it |
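The marker-tracking idea above can be sketched geometrically (an illustrative reconstruction, not the actual video-processing code; the function name and coordinate convention are assumptions): given the image centroids of the two colored stickers, the chair's rotation angle is the orientation of the segment joining them.

```python
import math

def chair_azimuth(marker_a, marker_b):
    """Estimate swivel-chair heading from the image coordinates of the
    two round stickers on the wing attached to the chair: the angle of
    the segment joining the marker centroids, in degrees [0, 360)."""
    (xa, ya), (xb, yb) = marker_a, marker_b
    return math.degrees(math.atan2(yb - ya, xb - xa)) % 360.0
```

Tracking the two centroids frame by frame thus turns an ordinary webcam into a contactless azimuth sensor for the chair.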
|
[mcohen-04:2006] |
Mohammad Sabbir Alam, Michael Cohen, and Ashir Ahmed.
Media Privacy in SIP Conferencing Systems. In Proc. ICDT-2006:
Int. Conf. on Digital Telecommunication, pages 257-258, Cap Esterel,
Côte d'Azur, France, August 2006. |
We propose a new focus of research for multimedia conferencing systems
which allows each user to flexibly select another participant or a group of
participants to control media transmission. In a traditional conference system,
participants' voices might by default be shared with all others, but a
participant might want to select a subset of the conference members to send
his/her media to or receive streams from. We review the concept of narrowcasting,
a technique for limiting such information streams in a multimedia
conference, and propose a design to use existing standard protocols (sip and
sdp) for controlling fine-grained narrowcasting sessions. Keywords: Narrowcasting,
sip, conferencing, policy control, and privacy. |
|
[mcohen-05:2006] |
Kayoko Kanno, Newton Fernando, Alam Bolhassan, Sho Narita,
and Michael Cohen. "Personal Practically Panoramic" Multimodal
Interfaces. In Proc. EDT: IEEE VR6 Workshop on Emerging Display
Technologies, pages 37-40, Alexandria, Virginia, March 2006. |
We have developed second-generation prototypes of the Internet Chair, a
novel internet appliance. The first generation explored using the chair as an
input device; "Schaire," the prototype employed here, is a pivot (swivel, rotating)
chair deployed as an output device, a rotary motion-platform information
appliance |
|
[mcohen-06:2006] |
Mohammad Sabbir Alam, Michael Cohen, and Ashir Ahmed.
Narrowcasting: Controlling Media Privacy in SIP Multimedia Conferencing.
In Proc. IEEE CCNC: 4th Consumer Communications and
Networking Conf., Las Vegas, January 2007. |
Media and the vectors of its transmission are private information and should
be made available only to authorized participants in a conference. In a traditional
conference system, users' voices might by default be shared with
all others, but a participant might want to select a subset of the conference
members to send his/her media to or receive streams from. We review the
concept of narrowcasting, a technique for limiting such information streams
in a multimedia conference, and propose manipulation of media policies in
centralized conferencing systems in a sip framework. This system allows each
participant to flexibly select another participant or group of participants for
media transmission, using an existing standard protocol (sip) to configure
fine-grained narrowcasting sessions. |
|
[mcohen-07:2006] |
Kunihiro Doi and Michael Cohen. Control of Navigable
Panoramic Imagery with Information Furniture: Chair-Driven 2.5d
Steering through Multistandpoint Qtvr Multinode Panoramas. In
3DUI: Proc. 3D User Interfaces Symposium (Poster Demonstration),
Charlotte, NC, USA, March 2007. |
We have implemented a browser, "Extended VR4U2C (Virtual Reality for
(4) You (U) to (2) See (C))," that performs track- and dolly-enhanced qtvr
(QuickTime Virtual Reality) and window dilation. Extended VR4U2C can be
used interactively to explore and examine detailed multidimensional virtual
environments (photorealistic or otherwise) using a computer and conventional
input devices, including mouse, trackball, rotary controller, track pad, and
keyboard, as well as more exotic interfaces |
|
[mcohen-08:2006] |
Michael Cohen. Articulated modeling of distributed privacy:
Transitive closure of composition of narrowcasting and multipresence.
In Proc. CIT: Sixth Int. Conf. on Computer and Information
Technology, Seoul, September 2006. |
Our group is exploring models and interfaces for advanced conferencing |
|
[mcohen-09:2006] |
Owen Noel Newton Fernando and Michael Cohen. Narrowcasting
attributes for presence awareness in collaborative virtual environments.
In Proc. CIT: Sixth Int. Conf. on Computer and Information
Technology, Seoul, September 2006. VRSJ (Virtual Reality Society
of Japan). |
Our group is investigating interactive multi- and hypermedia, especially
applied to virtual and mixed reality multimodal groupware systems. We are
exploring user interfaces to control source→sink transmissions in synchronous
groupware (like teleconferences, chatspaces, virtual concerts,
etc.). We have developed two interfaces for narrowcasting (deliberate filtering
of multicast duplex information streams) functions for presence awareness
in collaborative virtual environments (cves): for a workstation wimp-style
(windows/icon/menu/pointer) gui (graphical user interface) |
|
[mcohen-10:2006] |
Julián Villegas and Michael Cohen. DSP-based Real-time Harmonic
Stretching. In Proc. HC: Ninth Int. Conf. on Human and
Computer, pages 164-168, Aizu-Wakamatsu, September 2006. |
Musical harmonic stretching is a contraction or dilation of the overtone spacing
of a complex tone. This article introduces a new way to express it in realtime.
This approach, based on Pd (Pure Data), overcomes the limitations
imposed by the midi protocol in previous solutions (applicable to synthesized
sounds only) and allows one to harmonically stretch musical instrument
sounds (preferably non-percussive) in realtime without changing their physical
dimensions by analyzing a signal and additively resynthesizing it with
microtuned harmonics. |
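The stretching described above can be sketched numerically (a simplified illustration, not the Pd patch itself; the function name and parameterization are assumptions): if a chosen octave ratio r replaces the usual 2, the nth partial of a tone with fundamental f0 is placed at f0 · n^(log2 r), which reduces to the ordinary harmonic series f0, 2f0, 3f0, ... when r = 2.

```python
import math

def stretched_partials(f0, n_partials, octave_ratio=2.0):
    """Frequencies of a complex tone whose overtone spacing is
    contracted (octave_ratio < 2) or dilated (octave_ratio > 2).
    With the usual octave ratio of 2 this reduces to the ordinary
    harmonic series f0, 2*f0, 3*f0, ..."""
    exponent = math.log2(octave_ratio)
    return [f0 * n ** exponent for n in range(1, n_partials + 1)]
```

An additive resynthesizer would then drive one oscillator per entry of this list, which is how microtuned "harmonics" can be imposed on an analyzed instrument sound.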
|
[mcohen-11:2006] |
Kayoko Kanno and Michael Cohen. Visual narrowcasting interface
using posture recognition. In Proc. HC: Ninth Int. Conf. on
Human and Computer, pages 50-55, Aizu-Wakamatsu, September
2006. |
In ubiquitous computing environments, there arise various situations of
computer use. A controller should have a form suitable to the situation,
allowing any user to operate it easily and intuitively. It is most intuitive to use
human pose as input. In our group, we have developed "Multiplicity," an
application to increase one's presence and control such presence in virtual
spaces (collaborative virtual environments). The interface supports generalization
of narrowcasting |
|
[mcohen-12:2006] |
Takenori Nakamura and Michael Cohen. Sound localization and
collaborative virtual environment control using a microphone vector.
In Proc. HC: Ninth Int. Conf. on Human and Computer, pages 181-
186, Aizu-Wakamatsu, September 2006. |
Detecting the location of sound sources is a very important preliminary step
toward the development of spatial audio, speech recognition, stereophony,
security systems, and so on. Sound is the audible vibration of a medium that
carries it; the intensity of that vibration is felt as the volume of the sound, and the frequency of the vibration is perceived as its pitch. Humans
recognize the direction of a sound source using differences of the vibration
between the ears: the Interaural Intensity Difference (iid), the Interaural
Time Difference (itd), and the Head-Related Transfer Function (hrtf). This
thesis describes a method for detecting the location of a sound source using
a vector of microphones, deriving the iid between the microphones. The iid describes
the difference in intensity of sound vibration between the ears for recognizing
the location of the sound source. When the location of the sound source
is detected by the microphone vector, it is possible to record or ignore a
particular sound source, to chase the sound source with a spotlight or microphone,
and so on. In this thesis, we describe a program named "Sound
Locator" that detects the location of a sound source, implemented with Java
and the Java Sound api, cooperating with our Collaborative Virtual Environment
(cve) clients ("lan-tap,"26 Schaire, etc.). lan-tap [Sawahata:Concento], an
ethernet-controllable power strip, and some electrical appliances are used to
show the suitability of Sound Locator for integrating information regarding real
spaces with information regarding virtual spaces. |
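The intensity-difference principle can be sketched as follows (a toy illustration, not the Java "Sound Locator" itself; the function names and the loudest-microphone heuristic are assumptions): since intensity falls off with distance, comparing RMS levels across the microphone vector gives a coarse estimate of which microphone the source is nearest.

```python
import math

def rms(samples):
    """Root-mean-square level of one signal frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def locate(mic_positions, frames):
    """Coarse source localization from intensity differences across a
    microphone vector: the loudest microphone is taken as nearest to
    the source. Returns its position and the per-mic levels."""
    levels = [rms(f) for f in frames]
    nearest = max(range(len(levels)), key=levels.__getitem__)
    return mic_positions[nearest], levels
```

A real system would smooth levels over time and interpolate between adjacent microphones before triggering appliances (such as a light following the source).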
[mcohen-13:2006] |
Michael Cohen, Owen Noel Newton Fernando, Kensuke
Shimizu, and Tatsuya Nagai. Back-seat driver: Spatial sound for vehicular
way-finding and situation awareness. In Proc. FCST: Japan-
China Joint Workshop on Frontier of Computer Science and Technology,
pages 109-115, Aizu-Wakamatsu, November 2006. |
We are exploring idss (intelligent driver support systems), especially including
way-finding presented via spatial audio. ("Way-finding" refers to giving a driver
directions |
[mcohen-14:2006] |
Julián Villegas, Michael Cohen, and Yuuta Kawano. Harmonic
Stretching with the Helical Keyboard, volume 20, pages 29-34. 2006. |
We describe a new technique to visually and auditorily express harmonic
stretching in real-time, based on the Helical Keyboard (a Java3D application), the midi protocol, and Java Sound synthesis capabilities. The result is
a rich multimodal interface that allows changing the stretching ratio while the
user plays back midi sequences, or plays directly through a keyboard controller
(hardware or widget). A helix representing the keys of the keyboard is visually
rendered using chromastereoptic techniques enabling depth perception when
viewing with special eyewear. The notes in the helix dynamically change their
azimuth and elevation according to the stretching ratio, and simultaneously, the
notes played are synthesized using their corresponding frequencies (fundamental
and partials) in the newly specified octave ratio. |
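The helix geometry can be sketched as follows (an illustrative reconstruction, not the Java3D source; the function name, 30° step, and reference-pitch parameters are assumptions): each semitone advances the helix by 30°, so an octave completes one full turn and one unit of height, while the synthesis frequency follows the user-specified octave ratio instead of the fixed 2:1.

```python
import math

def helix_position(midi_note, octave_ratio=2.0, ref_note=69, ref_freq=440.0):
    """Place a note on the helical keyboard: azimuth advances 30 degrees
    per semitone (one turn per octave), height rises one unit per octave,
    and frequency follows the chosen octave ratio (2.0 gives ordinary
    equal temperament; other ratios stretch or shrink the spiral)."""
    semitones = midi_note - ref_note
    azimuth = (semitones * 30.0) % 360.0
    height = semitones / 12.0
    freq = ref_freq * octave_ratio ** (semitones / 12.0)
    return azimuth, height, freq
```

Changing the octave ratio at runtime thus moves both the visual positions (azimuth, elevation) and the synthesized fundamentals and partials together, which is what yields the multimodal stretching effect.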
[mcohen-15:2006] |
Michael Cohen. Japan Science and Technology Agency (Kagaku
Gijutsu Sinkoukikou), 2006-7. |
[mcohen-16:2006] |
Michael Cohen, 2006. Program Committee, HC-2006: Ninth Int. Conf. on Human and Computer (3D Forum) |
[mcohen-17:2006] |
Michael Cohen, 2006. Reviewer, Computer Graphics and Applications |
[mcohen-18:2006] |
Michael Cohen, 2006. Reviewer and Scientific Committee, J. of Virtual Reality and Broadcasting |
[mcohen-19:2006] |
Michael Cohen, 2006. Program Committee, CIT: Sixth Int. Conf. on Computer and Information Technology |
[mcohen-20:2006] |
Michael Cohen, 2006. Reviewer, Multimedia |
[mcohen-21:2006] |
Michael Cohen, 2006. Reviewer, Int. J. of Virtual Reality |
[mcohen-22:2006] |
Arai Yutaka. Graduation Thesis: Visual Music Using G-Force,
the iTunes Visualizer, and ChromaDepth 3D, University of Aizu, 2006-7. Thesis Advisor: Michael Cohen |
[mcohen-23:2006] |
Imae Masatoshi. Graduation Thesis: Analysis of flying disc
throwing gesture with motion capture, University of Aizu, 2006-7. Thesis Advisor: Michael Cohen |
[mcohen-24:2006] |
Mine Yorinobu. Graduation Thesis: Optical Road Sign Recognition
System, University of Aizu, 2006-7. Thesis Advisor: Michael Cohen |
[mcohen-25:2006] |
Kanno Kayoko. Master Thesis: Realtime Visual Gestural Privacy
Control in Collaborative Virtual Environments, University of Aizu,
2006-7. Thesis Advisor: Michael Cohen |
[mcohen-26:2006] |
Yokomatsu Yoshiyuki. Master Thesis: gPrimassah: Polyphonic
Spatial Audio System with Matrix Mixer and Speaker Array Integrated
with CVE, University of Aizu, 2006-7. Thesis Advisor: Michael Cohen |
[mcohen-27:2006] |
Doi Kunihiro. Master Thesis: Video Tracking of Rotary Chair
for Control in Collaborative Virtual Environments and Virtual Reality
Experience With Window Dilation, University of Aizu, 2006-7. Thesis Advisor: Michael Cohen |
[mcohen-28:2006] |
Owen Noel Newton Fernando. Doctoral Dissertation: Narrowcasting
and Privacy for Multipresent Avatars on Workstations and Mobile
Phones in Collaborative Virtual Environments, University of Aizu,
2006-7. Thesis Advisor: Michael Cohen |
[mcohen-29:2006] |
Atsushi Marui. Doctoral Dissertation: Perceptually-Based Control
of Multiparameter Distortion Effects Processing for Musical Applications,
University of Aizu, 2006-7. Thesis Advisor: Michael Cohen |
[mcohen-30:2006] |
Murakami Youko. Graduation Thesis: Dual driver with 3-
dimensional effects for networked driving simulator, University of Aizu,
2006-7. Thesis Advisor: Michael Cohen with Tatsuya Nagai |
[mcohen-31:2006] |
Michael Cohen. Opponent, Doctoral Defense of Tommi Ilmonen,
Helsinki University of Technology, Dec 2006. |
[mcohen-32:2006] |
Michael Cohen. Invited Lecture at Helsinki University of Technology,
"Whereware, Wearware, & Everyware: Distributed Mobile Presence",
Dec 2006. |