Most of the courses taken by engineering and computer science students emphasize scientific discipline and the accumulation of "truth." The Computer Arts Lab.'s activities include such technically objective factors, but also encourage original expression, subjectively motivated by aesthetics rather than "correctness," sometimes "putting the art before the course!" Unlike many other labs' activities, which try to converge on a "right answer" sharable by everyone else, artistic disciplines encourage originality, in which the best answer is one that is like no one else's!

The Computer Arts Lab., through its resident Spatial Media Group,1 is researching projects including practical and creative applications of virtual reality and mixed (augmented, enhanced, hybrid, mediated) reality and virtuality; panoramic interfaces and spatially immersive displays (especially stereotelephonics, spatial sound, and stereography); wearable and mobile applications, computing, and interfaces; and networked multimedia, with related interests in cves (collaborative virtual environments), groupware, and cscw (computer-supported collaborative work); hypermedia; digital typography and electronic publishing; force-feedback displays; telecommunication semiotics (models of teleconferencing selection functions); information furniture; way-finding and navigation; entertainment computing; and ubicomp (ubiquitous computing), calm (ambient), and pervasive technology. We are particularly interested in narrowcasting commands, conference selection functions for adjusting groupware situations in which users have multiple presence, virtually existing in more than one space simultaneously.

We explore realtime interactive multimedia interfaces-- auditory, visual, haptic, and multimodal.

Auditory: We are exploring interfaces for multichannel sound, including stereo, quadraphonic, and nearphones (mounted on our rotary motion platform), two separate speaker array systems in the University-Business Innovation Center 3d Theater,2 and an octophonic system in our lab. Working with Dr. Durand Begault of NASA, we deployed online courseware, "Sonic,"3 organizing an introduction to desktop audio and presenting many sound samples. We use this content in the "Audio Interfaces" graduate school course,4 which is a prerequisite for "Spatial Hearing and Virtual 3D Sound," taught jointly with Prof. Jie Huang in the Human Interface Lab. With Profs. Robert Fujii and Satoshi Nishimura, we host a Computer Music Studio, featuring about fifteen computer music workstations-- each comprising a personal computer (Windows or Macintosh) with sequencing software and a keyboard synthesizer-- plus assorted amplifiers, racks, mixers, and effects processors. Ongoing student projects explore dynamic microtuning systems, quadraphonic spatialization of tetraphonic bass guitar strings (sketched below), and control of narrowcasting using sip (Session Initiation Protocol). We annually conduct a Student Cooperative Class Project.5 In the past we have sponsored sccps on Digital Compositing (using Photoshop and the Gimp6), but recently the sccp has focused on Computer Music,7 studying basic music theory and using a midi sequencer8 to compose and perform student-authored songs. This sccp segues into a graduate-level computer music course.9
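A minimal illustration of the kind of arithmetic behind such quadraphonic spatialization is constant-power pairwise amplitude panning, sketched here in Java (the group's usual language); the speaker layout and all names are our assumptions, not lab code:

// Constant-power pairwise amplitude panning for a quadraphonic rig,
// assuming speakers at azimuths 45, 135, 225, and 315 degrees
// (clockwise from front). Illustrative sketch only.
public class QuadPanner {
    static final double[] SPEAKERS = {45, 135, 225, 315};

    // One gain per speaker for a source at the given azimuth (degrees).
    static double[] gains(double azimuthDeg) {
        double[] g = new double[4];
        for (int i = 0; i < 4; i++) {
            int j = (i + 1) % 4;                      // adjacent pair i -> j
            double a = ((azimuthDeg - SPEAKERS[i]) % 360 + 360) % 360;
            if (a < 90) {                             // source lies between i and j
                double t = (a / 90) * (Math.PI / 2);
                g[i] = Math.cos(t);                   // constant power:
                g[j] = Math.sin(t);                   // g[i]^2 + g[j]^2 == 1
            }
        }
        return g;
    }

    public static void main(String[] args) {
        for (double x : gains(60))                    // source at 60 degrees
            System.out.printf("%.3f ", x);            // ~0.966 0.259 0.000 0.000
    }
}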
Visual: We promote creative applications of scientific visualization, encouraging the use of Mathematica10 and stereoscopy,11 including chromastereoscopy12 (3d images with depth layers cued by color). We enjoy exploiting the unique large-format immersive stereographic display in the UBIC 3d Theater. Some group members are using visual input through a web cam to sense user mood, for kansei interpretation, or for control of narrowcasting chatspace interfaces. We are also exploring creative applications of panoramic imaging and object movies.13

Haptic: We are also exploring the use of haptic interfaces, including force-display joysticks and a rotary motion platform (the "Schaire" [for "shared chair"] Internet Chair). We conduct annual Creative Factory Seminars. Past cfss explored advanced audio interfaces, panoramic imaging, and haptic modeling. This year, in conjunction with the Distributed and Parallel Processing Lab., we conducted a workshop on Haptic Modeling and 3d Printing, using force-feedback cad workstations14 to make models that were then rapidly prototyped (as stereolithograms) with the dppl's personal fabricator,15 completing the "idea (stored in brain neurons) - information (stored as bits) - matter (atoms)" pathway.

Multimodal: Using such multimodal interfaces, our students have crafted driving simulators, location-based games featuring the rotary motion platform,16 and synaesthetic (cross-sensory modality) visual and haptic music players (rendering songs as light shows or dancing chairs17). Using the aforementioned visual sensing technique, narrowcasting postures can be recognized and used to control distributed chatspaces or virtual concerts. A recent student project deployed a microphone vector to track a moving sound source, using its network interface to trigger internet appliances (like lights that follow the source). We are also developing a driving simulator using collision-detection modulation of the force-feedback steering wheel and the rotary motion platform. The most recent version of the project features a dual-steering (front and back) fire truck, racing through a 3d model of our campus to reach a fire, piloted by two drivers, and featuring spatial sound effects. We are also exploring mobile (nomadic, portable) computing, working in conjunction with university spin-offs The Designium,18 Eyes,19 and GClue.20

Other activities: We host an annual symposium, the Int. Symposium on Spatial Media,21 inviting experts to share their knowledge and passion regarding such themes as "Spatial Sound and Spatial Telepresence" ('01), "Magic in Math and Music" ('02), "Advanced Multimedia and Virtual Reality" ('03), "Spatial Sound" ('04), "Hearing and Sound Installations" ('05), "Sound, Audio, and Music" ('06), and "Interactive Media, Security, and Stereography" ('07). Our lab sponsors several student performance circles, including the Yosakoi Dance Circle,22 Disco Mix Club, and Project Mermaid.

Through the research & development and the deployment & integration of stereographic, spatial sound, and haptic applications, we nurture scientific and artistic interest in advanced computer-human and human-human communication. Our ultimate domain is the exploration of interfaces and artifacts that are literally sensational.

Some relevant links:
Audio Courseware: http://sonic.u-aizu.ac.jp
Spatial Media: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohea.html
Japanese: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohejp.mpg
"Personal Panoramic Displays for a Rotary Motion Platform": http://sonic.u-aizu.ac.jp/spatial-media/mixedreality/VideoClips/keitai+Schaire2.mov
Object Movie: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/shoe.mov
Hideo Noguchi + Akabeko: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Noguchi+Akabeko.mov
Rotational Degrees of Freedom: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Rotational-DsoF.mov

1 sonic.u-aizu.ac.jp/spatial-media/Videos/coheen.mpg
2 www.ubic-u-aizu.pref.fukushima.jp/pub/system.file/system.html#3D
3 sonic.u-aizu.ac.jp
4 www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/graduate/Audio_Interfaces/syllabus.html
5 www.u-aizu.ac.jp/official/curriculum/curr04-3_j.htm
6 www.gimp.org
7 www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/undergraduate/Computer_Music
8 www.pgmusic.com/band.htm
9 www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/graduate/Computer_Music/syllabus.html
10 www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/Mma.html
11 www.u-aizu.ac.jp/~mcohen/spatial-media/stereograms.html
12 www.chromatek.com
13 sonic.u-aizu.ac.jp/spatial-media/QTVR/
14 www.sensable.com/products/3ddesign/freeform
15 www.zcorp.com/products/printersdetail.asp?ID=1
16 sonic.u-aizu.ac.jp/spatial-media/mixedreality/VideoClips/keitai+Schaire2.mov
17 sonic.u-aizu.ac.jp/spatial-media/mixedreality/VideoClips/keitai+Schaire2.mov
18 www.thedesignium.com
19 www.aizu.com
20 www.gclue.com
21 www.u-aizu.ac.jp/~mcohen/welcome/ISSM/06-07
22 www.u-aizu.ac.jp/circles/yosakoi
[mcohen-01:2005]
Wenxi Chen, Daming Wei, Shuxue Ding, Michael Cohen, Hui Wang, Shigeru Tokinoya, and Naotoshi Takeda. A scalable mobile phone-based system for multiple vital signs monitoring and healthcare. JPCC: J. of Pervasive Computing and Communications, 1(2):157-163, 2005.
The design and implementation of a universal platform, based upon mobile telephony and internet infrastructure, for healthcare monitoring and other related applications is described. The platform hierarchy comprises three layers, for sensing, communication, and management. The core of the sensing layer is a wearable sensor unit suitable for vital signs monitoring without discomfort or constraint during daily activities. The communication layer conveys bidirectional data/command signals via either wired or wireless means, bridging the sensing and management layers. The management layer performs data mining and functional assessment. This article describes the architectural design considerations and systemic implementation that meet various practical needs and provide scalable solutions, not only for home healthcare but also for other applications driven by vital signs. Three applications built on this architecture are described.
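The three-layer hierarchy is the load-bearing design decision here. Purely as an illustration (in Java; every interface and method name below is hypothetical, not the authors' API), the layers might be typed like this:

// Hypothetical typing of the three-layer platform described above.
interface SensingLayer {                  // wearable sensor unit
    byte[] readVitalSigns();              // e.g., sampled vital-sign data
}
interface CommunicationLayer {            // wired or wireless transport
    void upload(byte[] data);             // sensing -> management
    byte[] downloadCommand();             // management -> sensing
}
interface ManagementLayer {               // back-end server
    void store(byte[] data);              // data mining over stored signs
    String assess();                      // functional health assessment
}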
[mcohen-02:2005]
Owen Noel Newton Fernando, Michael Cohen, Uresh Chanaka Duminduwardena, and Makoto Kawaguchi. Duplex narrowcasting operations for multipresent groupware avatars on mobile devices. IJWMC: Int. J. of Wireless and Mobile Computing (Special Issue on Mobile Multimedia Systems and Applications), 1(5), 2005.
Our group is exploring interactive multimedia, especially for virtual and mixed reality groupware systems. The apparent paradoxes of multipresence, having avatars in multiple places or spaces simultaneously, are resolvable by an "autofocus" feature, which uses reciprocity, the logical exchangeability of source and sink, to project overlaid soundscapes and simulate the precedence effect to consolidate the audio display. We have developed an interface for narrowcasting (selection) functions for a networked mobile device deployed in a collaborative virtual environment (cve). Featuring a variable number of icons in a "2.5d" application, the interface can be used to control motion, sensitivity, and audibility of avatars in a teleconference or chatspace. The interface is integrated with other cve clients through a "servent" (server/client hybrid) http-tcp/ip gateway, and interoperates with a heterogeneous groupware suite to interact with other clients, including stereographic panoramic browsers, spatial audio backends, and speaker arrays. Novel features include mnemonic conference selection function keypad operations, multiply encoded graphical display of such non-mutually exclusive attributes, and explicit multipresence features.
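A minimal sketch of the narrowcasting rule as we read it (Java; the names and the exact predicate are our interpretation, not the published formalism): an explicit exclude (mute) always silences an avatar, while an explicit include (select) implicitly excludes its unselected peers.

import java.util.List;

class Avatar {
    boolean muted;     // explicit exclude
    boolean selected;  // explicit include
}

class Narrowcasting {
    // A source is heard iff it is not muted, and either nothing in the
    // space is selected or this source is among the selected ones.
    static boolean audible(Avatar a, List<Avatar> space) {
        boolean anySelected = space.stream().anyMatch(x -> x.selected);
        return !a.muted && (!anySelected || a.selected);
    }
}

The dual operations for sinks (deafen/attend) follow the same pattern with sources and sinks exchanged.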
[mcohen-03:2005]
Owen Noel Newton Fernando, Guo Saito, Uresh Duminduwardena, Yoshie Tanno, and Michael Cohen. Cloning and teleporting avatars across workstations and mobile devices in collaborative virtual environments: Clipboard operations for virtual reality. In Proc. ICIA'05: Int. Conf. on Information and Automation, Colombo, Sri Lanka, December 2005.
Audio windows are, in analogy to graphical windowing systems, an idiom for managing multiple streams of information, as in teleconferences, chatspaces, or virtual concerts. Audio windowing systems are especially appropriate on small-screen devices for practical applications in ubiquitous computing. We have developed a multiplatform implementation of audio windowing systems to control source→sink transmissions in collaborative virtual environments (cves), for workstations and mobile devices. Clipboard operations have been implemented to allow teleporting (cut/paste) and cloning (copy/paste) avatars in both interfaces. The workstation- and mobile-based interfaces support multiple spaces, enabling multipresence conferencing features in collaborative virtual environments. These interfaces are integrated with other cve clients, interoperating with a heterogeneous groupware suite, including stereographic panoramic browsers, spatial audio backends, and speaker arrays. Our main goal is to explore how to extend audio window systems with narrowcasting (selection) and multipresence to support useful conferencing capabilities.
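A sketch of the clipboard idiom (Java; names hypothetical): teleport = cut + paste, so the avatar leaves its old space; clone = copy + paste, so another instance appears, yielding multipresence.

import java.util.*;

class AvatarClipboard {
    // Each avatar may be present in several spaces at once (multipresence).
    private final Map<String, Set<String>> presences = new HashMap<>();

    void teleport(String avatar, String from, String to) {
        Set<String> s = presences.computeIfAbsent(avatar, k -> new HashSet<>());
        s.remove(from);                               // cut
        s.add(to);                                    // paste
    }

    void cloneTo(String avatar, String to) {          // copy + paste
        presences.computeIfAbsent(avatar, k -> new HashSet<>()).add(to);
    }

    Set<String> spacesOf(String avatar) {
        return presences.getOrDefault(avatar, Collections.emptySet());
    }
}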
[mcohen-04:2005]
Julián Villegas and Michael Cohen. Melodic Stretching with the Helical Keyboard. In Proc. Enactive: 2nd Int. Conf. on Enactive Interfaces, Genoa, November 2005. VRSJ (Virtual Reality Society of Japan).
We describe a new technique to visually and aurally express melodic stretching in real-time, based on the Helical Keyboard (a Java 3D application) and the midi protocol. In traditional music, the scale is constructed by dividing the octave, which spans a 2:1 ratio between the frequencies of the notes in the interval. However, it is possible to define octaves using different frequency ratios and, as a consequence, construct new scales. We extend our exotic 3D model keyboard to render such stretched octaves visually and aurally. By interacting with this rich interface, the user can experiment with the dual nature of the musical scale (linear and cyclical) as well as the principles of musical scale construction. Our application provides spatial audio, interactive control, and immersive experiences using chromastereoptic techniques. Keywords: octave stretching, midi, micro-tuning, visual music, representation of musical pitch.
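The underlying arithmetic is compact: a 12-step equal-tempered scale built on a pseudo-octave of ratio r instead of 2:1. A minimal sketch (Java; the reference pitch and names are assumptions):

public class StretchedScale {
    // Frequency of the note n steps above base, for pseudo-octave ratio r;
    // with r = 2 this reduces to ordinary 12-tone equal temperament.
    static double frequency(double base, double r, int n) {
        return base * Math.pow(r, n / 12.0);
    }

    public static void main(String[] args) {
        double a = 440;                               // assumed reference pitch
        System.out.println(frequency(a, 2.0, 12));    // ~880: true octave
        System.out.println(frequency(a, 2.1, 12));    // ~924: stretched octave
    }
}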
[mcohen-05:2005]
Michael Cohen. Multipresence narrowcasting operations comprise a media meta-mixer exponentiating interface value.
[mcohen-06:2005]
Uresh Chanaka Duminduwardena and Michael Cohen. Controlling the Schaire Internet Chair with a Mobile Device. In Proc. CIT: Fourth Int. Conf. on Computer Information Technology, pages 215-220, Wuhan, China, September 2004.
We have developed second-generation prototypes of the Internet Chair, a novel internet appliance. The first generation explored using the chair as an input device; the "Schaire," the prototype described here, is a pivot (swivel, rotating) chair deployed as an output device, a rotary motion-platform information appliance. Its haptic display modality is yaw, dynamically synchronized with wireless visual displays and spatial audio in a rotation-invariant virtual space. In groupware situations, like teleconferencing, chatspaces, or multiplayer gaming, such orientation is also used to twist iconic representations of a seated user, avatars in a virtual world, enabling social situation awareness. Using its audio display modality, transaural speakers (without crosstalk), "nearphones" embedded in the seat headrest, the system can present unencumbered binaural sound with soundscape stabilization for multichannel sound image localization. As a haptic output modality, chairs with servomotors render kinesthetic and proprioceptive cues, directing the attention of seated users like a "dark ride" amusement park attraction or under active user control, local and/or distributed. The Schaire, manifesting as personal lbe (location-based entertainment), can be used in both stand-alone and networked applications. Keywords: {augmented, enhanced, hybrid, mediated, mixed} reality/virtuality, haptic interface, information furniture, location-based entertainment (lbe), mobile communication, motion platform, networked appliance, pervasive applications, soundscape stabilization.
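Soundscape stabilization reduces to a subtraction: the chair's yaw is removed from each source's world azimuth, so virtual sources stay fixed in the room while the seat (and listener) rotates. A minimal sketch (Java; names ours):

public class SoundscapeStabilizer {
    // Azimuth at which to render a source relative to the listener's head,
    // given the source's world azimuth and the chair's yaw (degrees).
    static double renderAzimuth(double sourceAzimuth, double chairYaw) {
        return ((sourceAzimuth - chairYaw) % 360 + 360) % 360;
    }

    public static void main(String[] args) {
        // Source fixed straight ahead in the room, chair yawed 90 degrees
        // clockwise: the source should now be heard at the listener's left.
        System.out.println(renderAzimuth(0, 90));  // 270.0
    }
}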
[mcohen-07:2005]
Shūhei Ishikawa, Yu Saito, and Michael Cohen. Synchronization of Networked Avatars and Appliances with Mobile Phone Ringtones: Integrating Java3D and Lan-tap Roomware with J2ME. In Proc. CIT: Fifth Int. Conf. on Computer and Information Technology, pages 553-557, Shanghai, September 2005.
This article describes research on a mobile interface that allows remote control of electric appliances and of avatars rendered on a workstation or personal computer, synchronized with a mobile phone's ringtone. The interface-- implemented with DoJa, one of the profiles of J2ME (Java 2, Micro Edition)-- controls electric appliances via a "lan-tap," an internet-connected power supply, and controls avatars by changing position parameters and attributes on a mobile phone. In a mobile phone network, transmission delay frequently occurs because of radio interference. To synchronize appliances and avatars with a ringtone, our interface sends parameters anticipating the network delay. When a short delay occurs, the interface adjusts the time lag between the avatar and the ringtone automatically; a long delay can be adjusted by pushing the mobile phone's keys. We used a 2.5g (2.5th-generation) mobile phone for this research. The interface was able to synchronize networked avatars with a slow-tempo ringtone for a short time, but the connection speed of the 2.5g mobile phone was not fast enough for quick realtime synchronization. As a proof-of-concept, we deployed such phase-locked musical animation roomware, with 3d avatars in a virtual space enhanced by entertainment effects, displayed alongside similar effects in the real space.
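A sketch of the delay-anticipation idea (Java; this is our reading, not the authors' DoJa code): commands are dispatched ahead of each ringtone beat by the estimated network delay, so events land on the beat despite transmission lag.

public class BeatScheduler {
    long estimatedDelayMs = 250;   // assumed figure, e.g. from round trips

    // Send early so the command arrives approximately on the beat.
    long sendTimeFor(long beatTimeMs) {
        return beatTimeMs - estimatedDelayMs;
    }

    // Short lags are absorbed automatically (simple smoothing, assumed);
    // in the described system, long lags were corrected via keypad.
    void adjust(long observedLagMs) {
        estimatedDelayMs += observedLagMs / 2;
    }
}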
[mcohen-08:2005]
Wenxi Chen, Daming Wei, Masayuki Uchida, Shuxue Ding, Michael Cohen, Shigeru Tokinoya, and Naotoshi Takeda. A Mobile Phone-based Wearable Vital Signs Monitoring System.
[mcohen-09:2005]
Mohammad Sabbir Alam, Michael Cohen, and Ashir Ahmed. Design of Narrowcasting Implementation in Sip. In Proc. HC-2005: Eighth Int. Conf. on Human and Computer, pages 255-260, Aizu-Wakamatsu, August 2005.
This article proposes a new area of research for multimedia, multiparty conferencing systems, allowing users flexible control during a conference over media streams and their direction. Controlling the media and their direction from an endpoint has been a challenging issue. As a simple example, a user's voice might by default be shared with all others in a chatspace, but an appropriate interface would allow a secret to be shared with some select subset of the members. Current commercially available conference systems do not generally support such features. In our research, we introduce the concept of narrowcasting, a technique for limiting information streams, using the existing standard protocols (sip and sdp) to negotiate device capabilities for establishing, modifying, and terminating a session. Keywords: Narrowcasting, sip, sdp, Conferencing, Device capability, Media direction control
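One plausible mapping, offered as an assumption rather than as the authors' design, ties narrowcasting state to the standard SDP direction attributes (sendrecv, sendonly, recvonly, inactive) that a SIP offer/answer exchange can negotiate:

public class SdpDirection {
    // Map narrowcasting state to an SDP media-direction attribute line.
    static String attribute(boolean muted, boolean deafened) {
        if (muted && deafened) return "a=inactive";  // neither send nor receive
        if (muted)             return "a=recvonly";  // suppress outgoing media
        if (deafened)          return "a=sendonly";  // suppress incoming media
        return "a=sendrecv";                         // ordinary duplex default
    }
}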
[mcohen-10:2005]
Julián Villegas, Michael Cohen, and Yuuta Kawano. Harmonic Stretching with the Helical Keyboard. In Proc. HC-2005: Eighth Int. Conf. on Human and Computer, pages 261-266, Aizu-Wakamatsu, August 2005.
We describe a new technique to visually and auditorily express harmonic stretching in real-time, based on the Helical Keyboard (a Java3D application), the midi protocol, and Java Sound synthesis capabilities. The result is a rich multimodal interface that allows changing the stretching ratio while the user plays back midi sequences, or plays directly through a keyboard controller (hardware or widget). A helix representing the keys of the keyboard is visually rendered using chromastereoptic techniques, enabling depth perception when viewed with special eyewear. The notes in the helix dynamically change their azimuth and elevation according to the stretching ratio, and, simultaneously, the notes played are synthesized using their corresponding frequencies (fundamental and partials) in the newly specified octave ratio. Keywords: visual music, harmonic stretching, additive synthesis, multimodal interfaces, microtuning, Java3D, Java Sound
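The stretched-partial arithmetic implied above can be made explicit: the k-th harmonic of f0 ordinarily lies at f0 * 2^(log2 k); replacing 2 with the stretching ratio r relocates each partial to f0 * r^(log2 k), so partials align with the stretched octaves. A minimal sketch (Java; this formulation is our assumption, not necessarily the authors'):

public class StretchedPartials {
    // Frequency of partial k of fundamental f0 under octave ratio r.
    static double partial(double f0, double r, int k) {
        return f0 * Math.pow(r, Math.log(k) / Math.log(2));
    }

    public static void main(String[] args) {
        System.out.println(partial(220, 2.0, 2));  // ~440: ordinary harmonic
        System.out.println(partial(220, 2.1, 2));  // ~462: stretched octave
    }
}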
[mcohen-11:2005]
Kazuya Adachi, Ken'ichiro Iwai, Eiji Yamada, and Michael Cohen. Multimodal wayfinding in a driving simulator for the Schaire Internet Chair, a networked rotary motion platform. In Proc. ICEC: Int. Conf. on Entertainment Computing, Kobe Sanda, September 2005.
We are exploring idss (intelligent driver support systems), especially including way-finding presented via spatial audio. ("Way-finding" refers to giving a driver directions, as via car navigation ["Car-Navi"] gps/gis systems.) We have developed a networked driving simulator as a virtual-reality-based interface (control/display system) featuring integration with the Schaire rotary motion platform for azimuth display, stereographic display for 3d graphics, and spatial audio (sound spatialization) way-finding cues. As a haptic output modality, chairs with servomotors can render force-display, turning themselves under networked control to respond to driving control. Our chairs are deployed with augmented reality visual scenes (via qtvr-enabled browsers, Swing-conformant dynamic maps, and Java3D) and sounds, using laptops integrated via wireless communication (Wi-Fi, 802.11). As a visual output modality, a mixed perspective or stereoscopic rendering of a scene, fusible via special eyewear, allows spatial graphics. As an audio output modality, transaural speakers (without crosstalk), "nearphones" embedded in the seat headrest, can present unencumbered binaural sound with soundscape stabilization for multichannel sound image localization.
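The way-finding cue geometry is a two-step computation: the bearing from the car to the next waypoint, minus the car's heading, gives the azimuth at which the guidance sound is spatialized (and toward which the chair can turn). A minimal sketch (Java; names ours):

public class Wayfinder {
    // Compass-style azimuth of the cue (0 = straight ahead, clockwise).
    static double cueAzimuth(double carX, double carY, double headingDeg,
                             double wpX, double wpY) {
        double bearing = Math.toDegrees(Math.atan2(wpX - carX, wpY - carY));
        return ((bearing - headingDeg) % 360 + 360) % 360;
    }

    public static void main(String[] args) {
        // Waypoint due east of a north-bound car: cue rendered at 90 degrees.
        System.out.println(cueAzimuth(0, 0, 0, 100, 0));  // 90.0
    }
}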
[mcohen-12:2005]
Michael Cohen and Owen Noel Newton Fernando. Narrowcasting Attributes for Presence Awareness. In Proc. CHI'05 Workshop on Awareness Systems: Known Results, Theory, Concepts and Future Challenges, Portland, April 2005.
Our group is exploring interactive multi- and hypermedia, especially applied to virtual and mixed reality multimodal groupware systems. We are exploring user interfaces to control source→sink transmissions in synchronous groupware (like teleconferences, chatspaces, virtual concerts, etc.). The apparent paradoxes of multipresence, having avatars in multiple places or spaces simultaneously, are resolvable by an "autofocus" feature, which uses reciprocity, the logical exchangeability of source and sink, to project overlaid soundscapes and simulate the precedence effect to consolidate the audio display. We have developed two interfaces for narrowcasting (selection) functions for presence awareness in collaborative virtual environments (CVEs): for workstations, a WIMP-style (windows/icon/menu/pointer) GUI (graphical user interface); and for networked mobile devices, 2.5- and 3rd-generation mobile phones. The narrowcasting operations comprise an idiom for selective attention, presence, and privacy-- an infrastructure for rich computer-supported collaborative work (CSCW).
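A heavily hedged sketch of the "autofocus" idea as we read it (Java; the nearest-sink rule and all names are our assumptions): when a listener has several sinks (multipresence), each source can be rendered from the sink where it would be most prominent, imitating the precedence effect so that overlaid soundscapes stay coherent.

import java.util.List;

public class Autofocus {
    // For one source, pick the sink (by index) giving the highest level.
    static int focusSink(double[] src, List<double[]> sinks) {
        int best = 0;
        double bestGain = -1;
        for (int i = 0; i < sinks.size(); i++) {
            double[] s = sinks.get(i);
            double d = Math.hypot(src[0] - s[0], src[1] - s[1]);
            double gain = 1 / (1 + d);                // simple attenuation
            if (gain > bestGain) { bestGain = gain; best = i; }
        }
        return best;
    }
}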
[mcohen-13:2005]
Michael Cohen, 2005. Program Committee, HC-2005: Eighth Int. Conf. on Human and Computer (3D Forum)
[mcohen-14:2005]
Michael Cohen, 2005. Reviewer, Computer Graphics and Applications (IEEE)
[mcohen-15:2005]
Michael Cohen, 2005. Reviewer and scientific committee, J. of Virtual Reality and Broadcasting
[mcohen-16:2005]
Nagai Tatsuya. Graduation Thesis: Feeding back collision force to drivers: The steering wheel as an output device for a Networked Driving Simulator, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen, with Kazuya Adachi and Kayoko Kanno
[mcohen-17:2005]
Tanno Yoshie. Graduation Thesis: Narrowcasting and Clipboard Operations Across Multiple Spaces for Collaborative Virtual Environments Using Mobile Phones, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen, with Newton Fernando
[mcohen-18:2005]
Kawano Yuuta. Graduation Thesis: Harmonic Stretching with the Helical Keyboard Featuring Dynamic Microtuning and Chromastereoptic Visualization, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen, with Julián Villegas
[mcohen-19:2005]
Yamaki Akira. Graduation Thesis: `M & M': Audio Filters for Muffle and Muzzle, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen, with Sabbir Alam
[mcohen-20:2005]
Nakamura Takenori. Graduation Thesis: Sound Localization and CVE Control Using a Microphone Vector, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen, with Ishikawa Shuhei
[mcohen-21:2005]
Matsumoto Noriyoshi. Graduation Thesis: LabanReader: A Choreography Interface to a Collaborative Virtual Environment Featuring Spatial Sound, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen
[mcohen-22:2005]
Midorikawa Junko. Graduation Thesis: Spatial Sound with Spiral Spring Interface on Workstation and Mobile Phone, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen, with Newton Fernando
[mcohen-23:2005]
Suzuki Kazuyuki. Graduation Thesis: Firefox/Mozilla Application for HyperFun, University of Aizu, 2005-6. Thesis Advisor: Michael Cohen