Professor |
Most of the courses taken by engineers and computer science students emphasize
scientific discipline and the accumulation of “truth.” The Computer Arts Lab. activities
include such technically objective factors, but also encourage original expression, subjectively
motivated by æsthetics rather than “correctness,” sometimes “putting the art
before the course!” Unlike many other labs’ activities that try to converge on a “right
answer” sharable by everyone else, artistic disciplines encourage originality, in which the
best answer is one that is like no one else’s.

The Computer Arts Lab., through its resident Spatial Media Group,1 researches projects including practical and creative applications of virtual reality and mixed (augmented, enhanced, hybrid, mediated) reality and virtuality; panoramic interfaces and spatially immersive displays (especially stereotelephonics, spatial sound, and stereography); wearable and mobile applications, computing, and interfaces; and networked multimedia, with related interests in cve (collaborative virtual environments), groupware and cscw (computer-supported collaborative work); hypermedia; digital typography and electronic publishing; force-feedback displays; telecommunication semiotics (models of teleconferencing selection functions); information furniture; way-finding and navigation (including using a Segway personal transporter); entertainment computing; and ubicomp (ubiquitous computing), calm (ambient), and pervasive technology. We are particularly interested in narrowcasting commands, conference selection functions for adjusting groupware situations in which users have multiple presence, existing virtually in more than one space simultaneously. We explore realtime interactive multimedia interfaces: auditory, visual, haptic, and multimodal.

Auditory: We are exploring interfaces for multichannel sound, including stereo, quadraphonic, and nearphones (mounted on our Schaire rotary motion platform), as well as two separate speaker-array systems in the University-Business Innovation Center 3d Theater.2 The Helical Keyboard,3 refined and extended by Julián Villegas and featuring realtime visual music with spatial sound and stereographic graphics, is on permanent exhibition there. Working with Dr. Durand Begault of NASA, we deployed online courseware, “Sonic,”4 organizing an introduction to desktop audio and presenting many sound samples. We use these contents in the “Intro.
to Sound and Audio” graduate school course,5 which is a prerequisite for “Spatial Hearing and Virtual 3D Sound,”6 taught jointly with Prof. Jie Huang in the Human Interface Lab. With Profs. Robert Fujii and Satoshi Nishimura, we host a Computer Music Studio featuring about fifteen computer music workstations, each comprising a personal computer (Windows or Mac) with sequencing software and a keyboard synthesizer, plus assorted amplifiers, racks, mixers, and effects processors. Ongoing student projects explore dynamic microtuning systems and visual music. We annually conduct a Student Cooperative Class Project.7 In the past we sponsored sccps on Digital Compositing (using Photoshop and the Gimp8), but in recent years the sccp has focused on Computer Music,9 studying basic music theory and dtm (desk-top music) software, including samplers and midi sequencers,10 to compose and perform student-authored songs.11 This sccp segues into a graduate-level computer music course.12

Visual: We promote creative applications of scientific visualization, encouraging the use of Mathematica13 and stereoscopy,14 including chromastereoscopy15 (3d images with depth layers cued by color). We enjoy exploiting the unique large-format immersive stereographic display in the Ubic 3d Theater. We are experimenting with various cad authoring tools, such as 3DStudioMax, Blender, Maya, SketchUp, and Softimage. Various group members are exploring the application of cad techniques to the design and animation of exotic fashions, and the construction of a model of the university from photographs. We are also exploring creative applications of panoramic imaging and object movies,16 including a virtual tour of the university,17 compiled by Mamoru Ishikawa.

Haptic: We are also exploring the use of haptic interfaces, including force-display joysticks and a rotary motion platform (the “Schaire [for ‘shared chair’] Internet Chair”).
A recently finished project uses the Sudden Motion Sensor in a laptop as a gyroscopic control of avatars in a virtual environment.18 We also convene annual Creative Factory Seminars. Past cfss explored advanced audio interfaces and panoramic imaging, but in recent years, in conjunction with Prof. Rentaro Yoshioka of the Distributed and Parallel Processing Lab., we have conducted a workshop on Haptic Modeling and 3d Printing, using force-feedback cad workstations19 to make models that are then rapid-prototyped (as stereolithograms) with the dppl’s personal fabricator,20 closing the “idea (stored in brain neurons) – information (stored as bits) – matter (atoms)” pathway.

Multimodal: Using such multimodal interfaces, our students have crafted driving simulators, location-based games featuring the rotary motion platform,21 and synæsthetic (cross-sensory modality) visual and haptic music players (rendering songs as light shows22 or dancing chairs23). Using visual sensing techniques,24 narrowcasting postures can be recognized and used to control distributed chatspaces or virtual concerts. A recent student project deployed a microphone vector to track a moving sound source, using its network interface to trigger internet appliances (like lights that follow the source). We are also developing a driving simulator using collision-detection modulation of the force-feedback steering wheel and the rotary motion platform. The most recent version of the project features a dual-steering (front and back) fire truck, racing through a 3d model of our campus to reach a fire, piloted by two drivers, and featuring spatial sound effects. We are interested in exploring the use of figurative interfaces to express emotion and to control narrowcasting privacy, using a media-mixing system based on the Session Initiation Protocol for advanced conferencing features. We are also exploring extensions of Wonderland,25 an open-source framework for developing virtual reality environments.
This year, group members have developed windshield wipers that dance, featuring beat detection, a digital phase-locked loop, and articulated wiper gestures.26 We are also exploring mobile (nomadic, portable) computing, working in conjunction with university spin-offs The Designium,27 Eyes,28 and GClue.29 Such keitai-based interfaces can be used to design kaleidoscopic “wallpaper” screen savers, or to control internet appliances, panoramic imaging, spatial sound, or motion platforms. An exciting project combines spatial sound with way-finding, using gps tracking, the Segway personal transporter,30 and directional transfer functions.

Other activities: We host an annual symposium, the Int. Symposium on Spatial Media,31 inviting experts to share their knowledge and passion regarding such themes as “Spatial Sound and Spatial Telepresence” (’01), “Magic in Math and Music” (’02), “Advanced Multimedia and Virtual Reality” (’03), “Spatial Sound” (’04), “Hearing and Sound Installations” (’05), “Sound, Audio, and Music” (’06), “Interactive Media, Security, and Stereography” (’06), “Music XML and the Structure of Swing, Understanding Color Media, Media Grid, and Visualization Tools” (’07), and “Multimedia Computing” (’08). Our lab sponsors several student performance circles, including the Yasakoi Dance Circle,32 Disco Mix Club, and Project Mermaid. We also sponsor a couple of other student circles, the Dual Boot (Ultimate Frisbee) Flying Disc Club33 and the Furiten Mah Jongg Circle.

Through the research & development and the deployment & integration of stereographic, spatial sound, haptic, and mobile applications, including virtual and mixed reality, we nurture scientific and artistic interest in advanced computer–human and human–human communication. Our ultimate domain is the exploration of interfaces and artifacts that are literally sensational.
Some relevant links:
Audio Courseware: http://sonic.u-aizu.ac.jp
Spatial Media: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohea.html
Japanese: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohejp.mpg
QuickTime Virtual Reality: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/
Object Movie: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/shoe.mov
Hideo Noguchi + Akabeko: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Noguchi+Akabeko.mov
Rotational Degrees of Freedom: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Rotational-DsoF.mov |
[mcohen-01:2008] |
Michael Cohen and Hiroshi Saito. Existential Quantifiers in Mathematica
for Narrowcasting Predicate Calculus. 3D Forum: J. of Three Dimensional
Images, 22(2):55–58, 2008. |
In analogy to broad-, multi-, and any-casting, narrowcasting is a technique for limiting
and focusing information streams. For modern groupware situations like teleconferences,
in which everyone can have presence across the global network, users
want to shift and distribute attention (apathy) and accessibility/availability/exposure
(privacy); narrowcasting organizes source and sink exclusion and inclusion filters: mute, select, deafen, and attend. To realize an elegant predicate-calculus notation formalizing such attributes using the logic-programming features of Mathematica, the existential quantifiers were reimplemented to reference a list-expressed finite domain, working around the non-trivial finite domain of their application: avatars in a virtual-reality-style user interface. Keywords: Predicate Calculus, Existential Quantifiers,
Narrowcasting |
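The filter semantics summarized above can be sketched over a list-expressed finite domain. The following is an illustrative Python rendering (the paper works in Mathematica; the function names and the set-based representation here are ours, not the paper's):

```python
# Sketch of narrowcasting inclusion predicates over a finite domain of
# avatars, with the existential quantifier restricted to a list.
# Illustrative only; names and representation are assumptions.

def exists(pred, domain):
    """Existential quantifier referencing a list-expressed finite domain."""
    return any(pred(x) for x in domain)

def source_active(x, avatars, muted, selected):
    """A source is included iff it is not muted and either nothing is
    selected or it is itself selected (select implicitly excludes others)."""
    return (x not in muted) and (
        not exists(lambda z: z in selected, avatars) or x in selected)

def sink_active(x, avatars, deafened, attended):
    """Dually for sinks: deafen excludes; attend, if used, focuses."""
    return (x not in deafened) and (
        not exists(lambda z: z in attended, avatars) or x in attended)

avatars = ["alice", "bob", "carol"]
# With "bob" selected, only "bob" is heard; muting always excludes.
heard = [a for a in avatars if source_active(a, avatars, set(), {"bob"})]
```

Restricting the quantifier to an explicit list is the point of the reimplementation: the domain of discourse is the finite roster of avatars in the interface, not an unbounded logical universe.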
|
[mcohen-02:2008] |
Mohammad Sabbir Alam, Michael Cohen, and Ashir Ahmed. Articulated
Narrowcasting for Privacy and Awareness in Multimedia Conferencing
Systems and Design for Implementation Within a SIP Framework. JVRB: J.
of Virtual Reality and Broadcasting, 5(14), 2008. |
This article proposes a new focus of research for multimedia conferencing systems
which allows a participant to flexibly select another participant or a group for media
transmission. For example, in a traditional conference system, participants’ voices
might by default be shared with all others, but one might want to select a subset of
the conference members to send his/her media to or receive media from. We review
the concept of narrowcasting, a model for limiting such information streams in a multimedia
conference, and describe a design to use existing standard protocols (SIP and
SDP) for controlling fine-grained narrowcasting sessions. |
[mcohen-03:2008] |
Michael Cohen and Norbert Győrbíró. Personal and Portable, Plus
Practically Panoramic: Mobile and Ambient Display and Control of Virtual
Worlds. Innovation: The Magazine of Research & Technology, 8(3):33–35, 2008. |
The diffusion of powerful mobile telephones as global personal communication and processing
tools creates new synergies for multimedia applications. With network capability,
mobile interfaces can also be integrated and synchronized with fixed resources, including
location-based displays such as speaker arrays, information furniture, and motion
platforms. Ultimately mobile phones could end up as the tools that people use to navigate
and interact in virtual worlds, exploiting seamless integration of audio, visual, and
haptic data streams. |
[mcohen-04:2008] |
Michael Cohen. Integration of Laptop Sudden Motion Sensor as Accelerometric
Control for Virtual Environments. In VRCAI: Proc. ACM SIGGRAPH
Int. Conf. on Virtual-Reality Continuum and Its Applications in
Industry, Singapore, December 2008. |
We have developed a gestural controller for a multimodal client suite using a sudden
motion sensor (sms) deployed with many modern laptop computers. Interpreted
commands inferred from the sms accelerometer can be used to adjust the position (orientation and location) of egocentric perspectives and exocentric avatars to control
panoramic browsing and spatialized sound, adjusting the lateralization, directionalization,
and spatialization of musical and audio channels. |
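The control pathway described above can be sketched as follows. Axis conventions, gains, and function names are our assumptions for illustration, not the paper's implementation:

```python
import math

# Illustrative sketch: mapping a laptop sudden motion sensor's 3-axis
# accelerometer reading to avatar orientation. At rest the sensor reads
# gravity, so static tilt can be inferred from the acceleration vector.
# Axis/sign conventions below are assumed, not taken from the paper.

def tilt_from_accelerometer(ax, ay, az):
    """Return (pitch, roll) in degrees inferred from a gravity-dominated
    accelerometer sample."""
    pitch = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    roll = math.degrees(math.atan2(-ax, az))
    return pitch, roll

def steer_avatar(yaw, roll, gain=0.5, deadband=5.0):
    """Turn the avatar proportionally to roll, ignoring small jitter
    within the deadband."""
    if abs(roll) < deadband:
        return yaw
    return (yaw + gain * roll) % 360.0

# Level laptop: gravity entirely on the z axis, so no inferred tilt.
level = tilt_from_accelerometer(0.0, 0.0, 1.0)
```

The deadband keeps sensor noise from jittering the avatar; the gain sets how aggressively tilting the laptop rotates the egocentric perspective or exocentric avatar.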
|
[mcohen-05:2008] |
Rasika Ranaweera, Ishara Jayasingha, Senaka Amarakeerthi, Chamila
Karunathilake, and Michael Cohen. Event Script Interpreter for Synchronized
“Roller-Cam” Graphical Display and Rotary Motion Platform. In
Ashu Marasinghe, editor, Proc. HC-2008: 11th Int. Conf. on Humans and
Computers, pages 91–98, Nagaoka, Japan, November 2008. |
In the field of virtual and mixed reality, vision, audition, olfaction (smell), gustation
(taste), and touch sensations are important for a human to immerse in a synthetic
environment. Toward the virtual display of some of these, we have developed a
script processor, a client which synchronizes rotational imperatives with sophisticated
graphics and sound generated by deterministic output from a modeling game
called “Roller Coaster Tycoon.” The game itself has egocentric perspective visual
display and soundscape. Yaw of the rotary motion platform can be synchronized
with these displays for an integrated first-person multimodal display. For the synchronization, we use a Collaborative Virtual Environment (CVE) platform, a client-server architecture developed by our group. Our newly built client, the CVE
script interpreter, parses choreographed motions and connects to the server which
broadcasts to clients in the same session (in particular, the rotary motion platform).
Positions are stored in an XML resource file. Each track of the game has a corresponding
XML file. It contains location and orientation details in position nodes,
each with a time stamp relative to the previous node. The Document Object Model
is used for the XML parser since it is fast in navigating the tree structure. This
architecture allows us freedom to add new tracks and use the same client for other
games. |
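The XML resource format described above (position nodes carrying time stamps relative to the previous node) can be sketched in Python. The element and attribute names here are assumed for illustration, not the project's actual schema:

```python
# Illustrative sketch: DOM-parsing an XML track resource whose position
# nodes carry a time stamp relative to the previous node, accumulating
# absolute times for choreographed playback. Schema names are assumptions.
from xml.dom.minidom import parseString

TRACK_XML = """<track>
  <position dt="0.0" yaw="0"/>
  <position dt="0.5" yaw="30"/>
  <position dt="0.5" yaw="90"/>
</track>"""

def schedule(xml_text):
    """Return [(absolute_time, yaw_degrees)] from relative time stamps."""
    dom = parseString(xml_text)
    t, events = 0.0, []
    for node in dom.getElementsByTagName("position"):
        t += float(node.getAttribute("dt"))  # relative to previous node
        events.append((t, float(node.getAttribute("yaw"))))
    return events
```

Each game track maps to one such resource file, so adding a new track means adding a new XML file rather than changing the interpreter, which is what lets the same client serve other games.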
|
[mcohen-06:2008] |
Michael Cohen. Modeling the Foldable Torus Network and Irregular
Network Variation. In Ashu Marasinghe, editor, Proc. HC-2008: 11th Int.
Conf. on Humans and Computers, pages 55–56, Nagaoka, Japan, November
2008. |
For multiprocessors, a “folded torus” topology has been proposed to richly connect
processors on a backplane while minimizing interconnection costs. Recently some
extensions to this architecture have been proposed, including the so-called “Irregular
Network.” We have modeled these topologies in Mathematica, including techniques
to contrast the original and extended architectures, and also rapid prototyped such
a model using a 3D printer. We discuss the implementation of this model, including
techniques used to distinguish the disused and new links between the topologies,
both in the colored computer-viewable versions and the actual stereolithographic
“fabject” (fabricated object). |
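A minimal sketch of the underlying idea, with assumed dimensions and ordering (not the paper's Mathematica model): a torus connects each node to its wraparound neighbors, and "folding" interleaves the physical layout so the wraparound link is no longer than the others.

```python
# Illustrative sketch of torus connectivity and a 1-D folded layout.
# Parameters and node ordering are assumptions for illustration.

def torus_neighbors(x, y, w, h):
    """Four wraparound neighbors of node (x, y) in a w-by-h torus."""
    return {((x - 1) % w, y), ((x + 1) % w, y),
            (x, (y - 1) % h), (x, (y + 1) % h)}

def folded_order(n):
    """1-D folded layout: 0, 2, 4, ..., 5, 3, 1 -- interleaving the two
    traversal directions so that on the backplane every link, including
    the wraparound, spans at most two layout positions."""
    evens = list(range(0, n, 2))
    odds = list(range(1, n, 2))
    return evens + odds[::-1]

layout = folded_order(6)  # ring neighbors sit near each other physically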
|
[mcohen-07:2008] |
Juli´an Villegas and Michael Cohen. ‘Roughometer’: Realtime Roughness
Calculation and Profiling. In Audio Engineering Society Conv.,
Preprint #7516, San Francisco, October 2008. |
A software application capable of determining auditory roughness in real-time is
presented. This application, based on Pure-Data (Pd), calculates the roughness of
incoming audio streams using a spectral method originally proposed by Vassilakis.
The processing speed is adequate for many realtime applications, and results indicate
limited but significant agreement with an online implementation of the chosen
model. Finally, the usage of this tool is illustrated by roughness profile of a musical
composition that can be compared to its perceived patterns of ‘tension’ and
‘relaxation. |
|
[mcohen-08:2008] |
Takayuki Hattori, Yorinobu Mine, Michael Cohen, and Jun Yamadera.
Gesture Interpretation for Domestic Appliance Control. In Proc. ICAT:
Eighteenth Int. Conf. Artificial Reality and Telexistence, pages 343–346,
Hiyoshi, Yokohama, December 2008. |
We present a gestural interpretation interface that optically senses hand positions
and motions. These postures and gestures are recognized and processed, closing
the interaction loop by controlling computer media— such as Apple tv or Google
Earth— as well as physical devices such as domestic appliances— including adjusting
curtains, changing television channel and volume, turning lights and fans on and
off, and piloting a cleaning robot. Keywords: ambient information systems, calm
technology, computer vision, disappearing computer, image processing, multimodal
interaction, remote controls, roomware, smart spaces and aware environments. |
[mcohen-09:2008] |
Michael Cohen. Integration of Laptop Sudden Motion Sensor as Accelerometric
Control for Virtual Musical Environments and Spatial Audition.
In Denis Baggi and Goffredo Haus, editors, Simposio di Musica, Matematica e
Multimedia: Proc. IEEE Int. Conf. “The Use of Symbols to Represent Music
and Multimedia Objects”, Lugano, Switzerland, October 2008. |
We have developed a gestural controller for a multimodal client suite using a sudden
motion sensor (sms) deployed with many modern laptop computers. Interpreted
commands inferred from the sms accelerometer can be used to adjust position— orientation
and location— of egocentric perspectives and exocentric avatars to control
panoramic and/or stereoscopic browsing, and spatial sound, as well as location-based
displays. This interface can be used to control the position of avatars in a virtual
concert, adjusting the lateralization, directionalization, or spatialization of musical
channels. |
|
[mcohen-10:2008] |
Michael Cohen. Keynote Address: Spatial Media Arts. In Pane
on Culture Technology and Asian Pop Culture: Spatial Media and Popular
Culture, Singapore, July 2008. |
Recently there has been a great interest amongst artists, computer scientists, and engineers
to merge culture with technology in order to produce a new kind of technology
media which can inform, enhance, and entertain society. Related to this is the way in
which Asian countries, and some Asian governments, who were previously obsessed
with manufacturing and technology have suddenly seen the importance and benefit of
culture and content, in particular Pop Culture. The impact of Asian content is being
strongly felt not only in Asia, but also in Europe and America and the panel will
discuss and analyze this. The panelists are Adrian David Cheok, Michael Cohen, Sam
Furukawa, Masa Inakage, Ken Mogi, Ryohei Nakatsu, Francis Quek, Marc Tuters |
|
[mcohen-11:2008] |
Michael Cohen. Keynote address: “Spatial Media at the University
of Aizu”. In 135th SIG-DPS: Proc. 135th Meeting Special Interest Group on
Distributed Processing System, Aizu-Wakamatsu, June 2008. |
Many disciplines encountered in engineering or science assume some accumulation of
“truth,” ideally converging on some common result. Art and design projects include
such technically objective factors, but also explicitly encourage creative expression,
subjectively motivated by aesthetics rather than “correctness.” Unlike domains that
try to arrive upon a “right answer” shared by everyone, artistic disciplines (including
temporal arts like music and theatre) encourage originality, in which the best work
is unique. Spatial media explore both our real world (of three physical dimensions
and six degrees of freedom) and also artificial spaces of information. New technologies
offer opportunities for innovative design and advanced applications, both creative and
“re-creative,” simulating virtual and real spaces. Some examples of spatial media at
the University of Aizu–targeted for distribution traditional (print, etc.) or electronic
(internet-delivered) ムare presented, including panoramic photography, stereoscopy
and chromastereoscopy, sqtvr (stereoscopic panoramic photography), 3d-printing
(stereolithography, a.k.a. “rapid prototyping”), digital typography, fashion design,
mobile computing, and dynamic arts like virtual concerts with spatial sound (presented
via our UBIC 3d Theater’s discrete speaker array). The feedback between
co-potentiating hardware manufacturers and software content providers–ムdriven by
“after-market” sales of applications and services, the composition of infrastructure and
applications–ムwill continue to blur the distinction between designers & engineers,
art & science, and invention & discovery. |
[mcohen-12:2008] |
Michael Cohen. Japan Society for Promotion of Science (Kakenhi ):
“Personal Panoramic Multimodal Interface”, 2008–9. |
[mcohen-13:2008] |
Japan Eyes and Michael Cohen. Augmented Reality, 2008–9.
Fukushima Sangyo Oen Fund |
[mcohen-14:2008] |
Japan Eyes and Michael Cohen. Seminar Series for Intelligent Driver
Support Systems, 2008–9. |
[mcohen-15:2008] |
Michael Cohen, 2008. Reviewer and Scientific Committee, J. of Virtual Reality and Broadcasting |
[mcohen-16:2008] |
Michael Cohen, 2008. Program Committee, HC-2008: Eleventh Int. Conf. on Human and Computer (Nagaoka) |
[mcohen-17:2008] |
Michael Cohen, 2008. Program Committee, CIT: Seventh Int. Conf. on Computer and Information Technology, http://attend.it.uts.edu.au/cit2008/ (Sydney) |
[mcohen-18:2008] |
Michael Cohen, 2008. Program Committee, ICAT: 18th Int. Conf. on Artificial Reality and Telexistence, http://www.vrsj.org/ic-at/ICAT2008/ (Hiyoshi, Yokohama) |
[mcohen-19:2008] |
Michael Cohen, 2007. Program Committee, ICMC: Int. Computer Music Conf, www.icmc2009.org (Montr´eal, Qu´ebec) |
[mcohen-20:2008] |
Michael Cohen, 2008. Reviewer, Elsevier Brain Research, http://ees.elsevier.com/bres/ |
[mcohen-21:2008] |
Michael Cohen, 2008. Reviewer, Singapore Interactive Digital Media R&D Program |
[mcohen-22:2008] |
Hattori Takayuki. Masters Thesis: “Hand Gesture Tracking and
Recognition to Control Mouse Pointer as GT4U2C and Evaluation”, University
of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-23:2008] |
Moriguchi Yoshinori. Graduation Thesis: “Segwayfinding with 3D
Sensor”, University of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-24:2008] |
Ando Ryu. Graduation Thesis: “Authoring Flash Through Adobe
AIR”, University of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-25:2008] |
Inoue Akira. Graduation Thesis: “WonderFL Web Music Creator”,
University of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-26:2008] |
Ishikawa Mamoru. Graduation Thesis: “Simulated Stereophonic System
Using PC with R/GPS Receiver”, University of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-27:2008] |
Sato Hiromitsu. Graduation Thesis: “Using Motion Capture to Produce
Augmented Reality Drum-Playing Movies”, University of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-28:2008] |
Nojima Ayumu. Graduation Thesis: “Dynamic Composition and
Decomposition of Musical Clusters”, University of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-29:2008] |
Yorinobu Mine. Masters Thesis: “Gestural Domestic Appliance Control”,
University of Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-30:2008] |
Nojima Ayumu. Graduation Thesis: “Automatic Generation of Hexapod
Character Ambulation: Development of 3ds Max Plug-In”, University of
Aizu, 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-31:2008] |
Mark William Grandjean. Visiting Student, Rose-Hulman Institute
of Technology (Terre Haute, Indiana; USA), 2008. Thesis Advisor: Michael Cohen |
[mcohen-32:2008] |
Daniel Sabin Jr. Visiting Student, Rose-Hulman Institute of Technology
(Terre Haute, Indiana; USA), 2008. Thesis Advisor: Michael Cohen |
[mcohen-33:2008] |
Sebastian Boettner. Masters Thesis: “Chromostereopsis In Interactive
Walkthroughs”, University of Potsdam (Germany), 2008–9. Thesis Advisor: Michael Cohen |
[mcohen-34:2008] |
Matthias Flasko. Masters Research, Fachhochschule Duesseldorf (Germany),
2008–9. Thesis Advisor: Michael Cohen |
[mcohen-35:2008] |
Michael Cohen and Juli´an Villegas. “Spatial Media at the University
of Aizu”: Research Seminar for Guests from Peru. Aizu Wakamatsu 6th Junior
High School, June 22 2008. |
[mcohen-36:2008] |
Mikiko Cohen and Michael Cohen. “Special people and its street
around castle town Aizu”: Historical Map, 2008. Japan Scenic By-Way “Castle Town Aizu Avenue” Promotion Group |
[mcohen-37:2008] |
Michael Cohen. Rainbow Spot Lecture for Gakuho Junior High School,
March 10, 2009. |