Professor
Assistant Professor
Visiting Researcher
Most of the courses taken by engineering and computer science students emphasize scientific discipline and the accumulation of “truth.” The Computer Arts Lab. activities include such technically objective factors, but also encourage original expression, subjectively motivated by æsthetics rather than “correctness,” sometimes “putting the art before the course!” Unlike many other labs’ activities, which try to converge on a “right answer” sharable by everyone else, artistic disciplines encourage originality, in which the best answer is one that is like no one else’s.

The Computer Arts Lab., through its resident Spatial Media Group, is researching projects including practical and creative applications of virtual reality and mixed (augmented, enhanced, hybrid, mediated) reality and virtuality; panoramic interfaces and spatially-immersive displays (especially stereotelephonics, spatial sound, and stereography); wearable and mobile applications, computing, and interfaces; and networked multimedia, with related interests in CVEs (collaborative virtual environments), groupware and CSCW (computer-supported collaborative work); hypermedia; digital typography and electronic publishing; force-feedback displays; telecommunication semiotics (models of teleconferencing selection functions); information furniture; way-finding and navigation (including using a Segway personal transporter); entertainment computing; and ubicomp (ubiquitous computing), calm (ambient), and pervasive technology. We are particularly interested in narrowcasting commands, conference selection functions for adjusting groupware situations in which users have multiple presence, virtually existing in more than one space simultaneously. We explore realtime interactive multimedia interfaces: auditory, visual, haptic, and multimodal.

Auditory: We are exploring interfaces for multichannel sound, including stereo, quadraphonic, and nearphones (mounted on our Schaire rotary motion platform), as well as two separate speaker array systems in the University-Business Innovation Center (UBIC) 3D Theater. An installation there of the Helical Keyboard, refined and extended by Julián Villegas and featuring realtime visual music with spatial sound and stereographic graphics, is on permanent exhibition. Working with Dr. Durand Begault of NASA, we deployed online courseware, “Sonic,” organizing an introduction to desktop audio and presenting many sound samples. We use these contents in the “Intro. to Sound and Audio” graduate school course, which is a prerequisite for “Spatial Hearing and Virtual 3D Sound,” taught jointly with Prof. Jie Huang in the Human Interface Lab. With Profs. Robert Fujii and Satoshi Nishimura, we host a Computer Music Studio, featuring about fifteen computer music workstations, each comprising a personal computer (Windows and Mac) with sequencing software and a keyboard synthesizer, plus assorted amplifiers, racks, mixers, and effects processors. Ongoing student projects explore dynamic microtuning systems and visual music. We annually conduct a Student Cooperative Class Project (SCCP). In the past we sponsored SCCPs on Digital Compositing (using Photoshop and the Gimp), but in recent years the SCCP has focused on Computer Music, studying basic music theory and DTM (desk-top music) software, including samplers and MIDI sequencers, to compose and perform student-authored songs. This SCCP segues into a graduate-level computer music course.

Visual: We promote creative applications of scientific visualization, encouraging the use of Mathematica and stereoscopy, including chromastereoscopy (3D images with depth layers cued by color). We enjoy exploiting the unique large-format immersive stereographic display in the UBIC 3D Theater. We are experimenting with various CAD authoring tools, such as 3D Studio Max, Blender, Maya, and SketchUp, and encourage “M-Project,” a student-run CAD circle. Various group members are exploring the application of CAD techniques to the design and animation of exotic fashions, or the construction of a model of the university from photographs. We are also exploring creative applications of panoramic imaging and object movies, including a virtual tour of the university compiled by Mamoru Ishikawa.

Haptic: We are also exploring the use of haptic interfaces, including force-display joysticks and a rotary motion platform (the “Schaire” [for “shared chair”] Internet Chair). A recently finished project uses the Sudden Motion Sensor in a laptop as a gyroscopic control of avatars in a virtual environment. We also convene annual Creative Factory Seminars (CFSs). Past CFSs explored advanced audio interfaces and panoramic imaging, but in recent years, in conjunction with Prof. Rentaro Yoshioka of the Distributed and Parallel Processing Lab. (DPPL), we conduct a workshop on Haptic Modeling and 3D Printing, using force-feedback CAD workstations to make models that are then rapid-prototyped (as stereolithograms) with the DPPL’s personal fabricator, closing the “idea (stored in brain neurons) → information (stored as bits) → matter (atoms)” pathway.

Multimodal: Using such multimodal interfaces, our students have crafted driving simulators, location-based games featuring the rotary motion platform, and synæsthetic (cross-sensory modality) visual and haptic music players (rendering songs as light shows or dancing chairs). Using visual sensing techniques, narrowcasting postures can be recognized and used to control distributed chatspaces or virtual concerts. A recent student project deployed a microphone vector to track a moving sound source, using its network interface to trigger internet appliances (like lights that follow the source). We are also developing a driving simulator using collision-detection modulation of the force-feedback steering wheel and the rotary motion platform. A recent version of the project features a dual-steering (front and back) fire truck, racing through a 3D model of our campus to reach a fire, piloted by two drivers and featuring spatial sound effects. We are interested in exploring figurative interfaces to express emotion and to control narrowcasting privacy, using a media mixing system based on the Session Initiation Protocol for advanced conferencing features. We are also exploring extensions of Project Wonderland, an open-source framework for developing virtual reality environments. This year, group members have developed windshield wipers that dance, featuring beat detection, a digital phase-locked loop, and articulated wiper gestures. We are also exploring mobile (nomadic, portable) computing, working in conjunction with university spin-offs The Designium, Eyes, JAPAN, and GClue. Such keitai-based interfaces can be used to design kaleidoscopic “wallpaper” screen savers, or to control internet appliances, panoramic imaging, spatial sound, or motion platforms. An exciting project combines spatial sound with way-finding, using GPS tracking, the Segway personal transporter, and directional transfer functions.

Other activities: We host an annual symposium, the Int. Symposium on Spatial Media, inviting experts to share their knowledge and passion regarding such themes as “Spatial Sound and Spatial Telepresence” (’01), “Magic in Math and Music” (’02), “Advanced Multimedia and Virtual Reality” (’03), “Spatial Sound” (’04), “Hearing and Sound Installations” (’05), “Sound, Audio, and Music” (’06), “Interactive Media, Security, and Stereography” (’06), “Music XML and the Structure of Swing, Understanding Color Media, Media Grid, and Visualization Tools” (’07), “Multimedia Computing” (’08), and “Systems and Applications” (’09–’10). Our lab sponsors several student performance circles, including the Yosakoi Dance Circle, Disco Mix Club, and Project Mermaid. We also sponsor a couple of other student circles, the Dual Boot (Ultimate Frisbee) Flying Disc Club and the Furiten Mah Jongg Circle.

Through the research and development, and the deployment and integration, of stereographic, spatial sound, haptic, and mobile applications, including virtual and mixed reality, we nurture scientific and artistic interest in advanced computer-human and human-human communication. Our ultimate domain is the exploration of interfaces and artifacts that are literally sensational.

Some relevant links:
Audio Courseware: http://sonic.u-aizu.ac.jp
Spatial Media: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohea.html
Japanese: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohejp.mpg
Dual Driving Simulator: http://sonic.u-aizu.ac.jp/spatial-media/Videos/DualDrivingSimulator.mov
“VMP My Ride”: http://sonic.u-aizu.ac.jp/spatial-media/Videos/VMPMyRide.mp4
Sudden Motion Sensor Control of Collaborative Virtual Environment: http://sonic.u-aizu.ac.jp/spatial-media/Videos/SMS-CVE.mov
Object Movie: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/shoe.mov
Rotational Degrees of Freedom: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Rotational-DsoF.mov
[mcohen-01:2009]
Owen Noel Newton Fernando, Michael Cohen, Uresh Chanaka Dumindawardana, and Makoto Kawaguchi. Duplex Narrowcasting Operations for Multipresent Groupware Avatars on Mobile Devices. IJWMC: Int. J. of Wireless and Mobile Computing (Special Issue on Mobile Multimedia Systems and Applications), 4(2), 2009.
ISSN 1741-1084, 1741-1092; www.inderscience.com/browse/index.php?journalID=46&year=2009&vol=4&issue=2
[mcohen-02:2009]
Mohammad Sabbir Alam, Michael Cohen, Julián Villegas, and Ashir Ahmed. Narrowcasting for Articulated Privacy and Attention in SIP Audio Conferencing. JMM: J. of Mobile Multimedia (Special Issue on Multimedia Networking and Applications), 5(1):12–28, 2009.
http://www.rinton.net/xjmm5/jmm-5-1/012-028.pdf
[wmpang-01:2009]
Jing Qin, Yim-Pan Chui, Wai-Man Pang, Kup-Sze Choi, and Pheng-Ann Heng. Learning Blood Management in Orthopedic Surgery through Gameplay. IEEE Computer Graphics and Applications, pages 45–57, March/April 2010.
Orthopedic surgery treats the musculoskeletal system, in which bleeding is common
and can be fatal. To help train future surgeons in this complex practice, researchers
designed and implemented a serious game for learning orthopedic surgery. The game
focuses on teaching trainees blood management skills, which are critical for safe
operations. Using state-of-the-art graphics technologies, the game provides an interactive
and realistic virtual environment. It also integrates game elements, including
task-oriented and time-attack scenarios, bonuses, game levels, and performance evaluation
tools. To study the system’s effect, the researchers conducted experiments on player completion time and off-target contacts to test their learning of psychomotor skills in blood management.
[mcohen-04:2009]
Rasika Ranaweera, Nicholas Nagel, and Michael Cohen. Wonderland–CVE Bridge. In Proc. HC-2009: 12th Int. Conf. on Humans and Computers, pages 174–179, Hamamatsu, Japan, Dec. 2009.
ISBN 978-4-900721-01-2.
[mcohen-05:2009]
Michael Cohen, Norbert Győrbiró, and Kamen Kanev. Print-based Interfaces for Multimodal Virtual Tours: Cryptosteganographic affordances for multimodal interfaces and immersive experiences. In Proc. HC-2009: 12th Int. Conf. on Humans and Computers, pages 26–32, Hamamatsu, Japan, Dec. 2009.
ISBN 978-4-900721-01-2. “CLUSPI” is a watermarking system capable of yielding rectangular coordinates and orientation when a camera captures a section of appropriately printed material. We have deployed CLUSPI-encoded materials, starting with panoramic photographs, as an affordance to our locally-developed multimodal display suite. Because of the cyclic graphical nature (2D depthlessness) of a “pano,” it can be displayed just as easily, if somewhat unnaturally, inside-out. In particular, such panos can be printed on a CLUSPI-watermarked background and mounted on a cylindrical accessory, a conveniently sized, light-weight manipulable about the size of a can or bottle. Since the CLUSPI codes are subtle, they do not interfere with the watermarked panos. We experiment with CLUSPI encoding as a tangible interface for virtual tour applications, especially when deployed in conjunction with a multimodal interface suite featuring panoramic browsing, spatial sound, optional stereographic display, and a rotary motion platform.
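Since the watermark decode yields printed coordinates and orientation, steering a panorama from the cylindrical accessory is essentially a change of units. A hypothetical glue sketch, assuming a decoder that reports position in millimeters and in-plane rotation in degrees (the class and its parameters are illustrative assumptions; CLUSPI's actual API is not reproduced here):

```java
/** Hypothetical glue code for the tangible pano interface described above:
 *  a watermark decode gives printed-sheet coordinates and orientation, which
 *  are mapped to a panorama view direction. All names are assumptions. */
public class CylinderPanoControl {
    private final double circumferenceMm; // printed width wrapped around the cylinder

    public CylinderPanoControl(double circumferenceMm) {
        this.circumferenceMm = circumferenceMm;
    }

    /** Map a decoded horizontal position (mm from the seam) and in-plane
     *  camera rotation (degrees) to a pano browsing azimuth in [0, 360). */
    public double panoAzimuth(double xMm, double cameraRotationDeg) {
        double fromPosition = 360.0 * (xMm / circumferenceMm); // where on the cylinder we look
        double az = (fromPosition + cameraRotationDeg) % 360.0;
        return az < 0 ? az + 360.0 : az;
    }
}
```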
[mcohen-06:2009]
Rasika Ranaweera, Michael Cohen, Nick Nagel, and Michael Frishkopf. (Virtual [World) Music]: Virtual World, World Music – Folkways in Wonderland. In Yôiti Suzuki, Douglas Brungart, Hiroaki Kato, Kazuhiro Iida, Densil Cabrera, and Yukio Iwaya, editors, Proc. IWPASH: Int. Wkshp. on the Principles and Applications of Spatial Hearing, Zao, Miyagi, Japan, Nov. 2009.
ISBN 978-981-4299-31-2, http://eproceedings.worldscinet.com/9789814299312/9789814299312.shtml. As immersive virtual environments and online music networks become increasingly popular, it behooves researchers to explore their convergence: groupware music browsers populated by figurative avatars. Collaborative virtual environments (CVEs), like Second Life, offer immersive experiential network interfaces to online worlds and media. We are developing a virtual environment, based on and similar to a “Music in Wonderland” proof-of-concept provided by Sun Microsystems, that provides a place where avatar-represented users can browse musical databases. At first this environment will support simple virtual explorations of music, but we have ambitions to map geotags to projected spaces, enabling location-aware browsing. A dome-shaped sphere whose nodes are populated with representative album art is an attractive idiom, since it leverages our natural spatial intuition and experience while allowing a large number of samples to be scalably displayed and juxtaposed. A user “teleports” into this dome and can browse data in multiple fashions.
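One way to realize the scalable dome idiom described above is to spread the album-art nodes along a golden-angle spiral over the upper hemisphere, which spaces any number of samples roughly evenly. A small illustrative sketch, with the layout rule an assumption rather than the project's documented algorithm:

```java
/** Illustrative sketch of scalably arranging N album-art nodes on a dome,
 *  using a golden-angle spiral over the upper hemisphere. */
public class DomeLayout {
    private static final double GOLDEN_ANGLE = Math.PI * (3 - Math.sqrt(5));

    /** Returns N unit-sphere positions (x, y, z), z >= 0, roughly evenly spread. */
    public static double[][] positions(int n) {
        double[][] p = new double[n][3];
        for (int i = 0; i < n; i++) {
            double z = (i + 0.5) / n;            // heights evenly spaced in (0, 1)
            double r = Math.sqrt(1 - z * z);     // radius of the horizontal circle at height z
            double theta = GOLDEN_ANGLE * i;     // spiral around the vertical axis
            p[i][0] = r * Math.cos(theta);
            p[i][1] = r * Math.sin(theta);
            p[i][2] = z;
        }
        return p;
    }
}
```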
[mcohen-07:2009]
Michael Cohen and Norbert Győrbiró. Mobile Narrowcasting Spatial Sound. In Yôiti Suzuki, Douglas Brungart, Hiroaki Kato, Kazuhiro Iida, Densil Cabrera, and Yukio Iwaya, editors, Proc. IWPASH: Int. Wkshp. on the Principles and Applications of Spatial Hearing, Zao, Miyagi, Japan, Nov. 2009.
e-ISBN 978-981-4299-31-2, http://eproceedings.worldscinet.com/9789814299312/9789814299312.shtml. Features integrated into mobile phones are increasing in both quantity and quality. Audio capabilities started with monaural sound, but some recent models include 3D spatial sound rendering. This rapid development enables novel applications for mobile users. We introduce a software framework for mobile audio which supports creation of spatialized sound. Virtual presence is supported in the form of avatars with controllable attendance and privacy features. Utilizing this framework, a mobile musical concert application is presented with a customizable configuration of the soundscape. We have designed and implemented a mobile telephone interface for controlling spatial audio features. Mosaic (an acronym for MObile Spatial Audio Interface and Control) is developed in DoJa, the Java environment for NTT DoCoMo’s i-mode mobile phones.
[mcohen-08:2009]
Julián Villegas and Michael Cohen. Mapping topological representations of musical scales onto virtual 3D spaces. In Yôiti Suzuki, Douglas Brungart, Hiroaki Kato, Kazuhiro Iida, Densil Cabrera, and Yukio Iwaya, editors, Proc. IWPASH: Int. Wkshp. on the Principles and Applications of Spatial Hearing, Zao, Miyagi, Japan, Nov. 2009.
e-ISBN 978-981-4299-31-2, http://eproceedings.worldscinet.com/9789814299312/9789814299312.shtml. We have developed a Collaborative Virtual Environment (CVE) client that allows one to directionalize audio streams using a Head-Related Transfer Function (HRTF) filter. The CVE is a suite of multimedia and multimodal clients, mostly authored in Java by members of our laboratory, for sharing information regarding the location and position of virtual objects among several applications. This new client has been deployed in conjunction with the Helical Keyboard, an interactive installation displaying three-dimensional musical scales aurally and visually, to offer a more realistic user experience and musical immersion. The new CVE client allows spatialization of audio sources with elevations between −40 and +90 degrees and azimuths between 0 and 355 degrees. In this fashion we could overcome previous limitations on the auditory display of our objects, for which we usually displayed only azimuth.
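The quoted ranges suggest an HRTF set sampled on a bounded grid, with azimuths at 5-degree increments. A minimal sketch of snapping a requested direction onto such a grid (the helper and the grid spacing are assumptions inferred from those ranges, not the published client):

```java
/** Hypothetical helper for snapping a requested source direction onto an
 *  HRTF measurement grid with elevations in [-40, +90] degrees and azimuths
 *  in {0, 5, ..., 355}. */
public class HrtfGrid {
    /** Clamp elevation to the supported range. */
    public static int snapElevation(double elevationDeg) {
        double clamped = Math.max(-40, Math.min(90, elevationDeg));
        return (int) Math.round(clamped);
    }

    /** Wrap azimuth into [0, 360) and round to the nearest 5-degree step. */
    public static int snapAzimuth(double azimuthDeg) {
        double wrapped = ((azimuthDeg % 360) + 360) % 360;
        int snapped = (int) Math.round(wrapped / 5) * 5;
        return snapped % 360; // 357.5..360 wraps back to 0
    }
}
```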
[mcohen-09:2009]
Michael Cohen, Julián Villegas, Mamoru Ishikawa, Akira Inoue, Hiromitsu Sato, Hiroki Tsubakihara, and Jun Yamadera. “VMP My Ride”: windshield wipers that swing. In Proc. Asiagraph, volume 3, pages 126–130, Tokyo, Oct. 2009.
ISBN 978-4-904490-01-3, http://www.asiagraph.jp. “VMP” (pronounced /vimp/) is acronymic for “Visual Music Player,” a synæsthetic audio-visual renderer. Inspired by the hit TV show “Pimp My Ride,” in which cars are outrageously customized, we recast the windshield wipers of an automobile with advanced multimedia technology, allowing wipers to dance to music. We have programmed a beat detector using “Pure Data.” This musical beat signal is filtered to drive a phase-locked loop (PLL), which triggers choreographed, articulated gestures in virtual, actual, and model windshield wipers. We use a cyclic buffer to implement a moving (“windowed”) average of inter-event intervals, effectively a low-pass filter, feeding a digital phase-locked loop (DPLL) that propels the selected rhythm. A choreography module articulates the wiper sweeping gesture, mapping the DPLL beats into segmented gestures used to drive windshield wipers in three varieties: virtual, through a driving simulator; real, pulsing a standard wiper motor; and miniaturized, through a stepping-motor actuated model wiper. The driving simulator, implemented in Java3D, renders the virtual gestures. Mechatronic interfaces use signals generated by the DPLL to trigger the physical wiper cycles; signals sent via USB to custom circuits driving Peripheral Interface Controllers motivate the motors.
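A minimal sketch of the cyclic-buffer smoothing and DPLL scheme described above; the class name, buffer length, and loop gain are illustrative assumptions, not the published implementation:

```java
/** Sketch: a cyclic buffer low-pass filters inter-onset intervals, and the
 *  smoothed period drives a free-running oscillator whose phase is nudged
 *  toward each detected onset. */
public class WiperDpll {
    private final long[] ioi = new long[8]; // cyclic buffer of inter-onset intervals (ms)
    private int head = 0, count = 0;
    private long lastOnset = -1;
    private double periodMs = 500;          // smoothed beat period; 120 bpm initially
    private double nextBeatMs = 0;          // predicted time of the next wiper stroke
    private static final double GAIN = 0.25; // fraction of phase error corrected per onset

    /** Feed one onset time (ms) from the beat detector. */
    public void onset(long tMs) {
        if (lastOnset >= 0) {
            ioi[head] = tMs - lastOnset;
            head = (head + 1) % ioi.length;
            if (count < ioi.length) count++;
            long sum = 0;
            for (int i = 0; i < count; i++) sum += ioi[i];
            periodMs = (double) sum / count;          // moving ("windowed") average
        }
        lastOnset = tMs;
        if (nextBeatMs == 0) nextBeatMs = tMs + periodMs; // first onset seeds the oscillator
        else nextBeatMs += GAIN * (tMs - nextBeatMs);     // pull the prediction toward the music
    }

    /** Poll from the render loop; returns true when a wiper stroke should fire. */
    public boolean tick(long nowMs) {
        if (nextBeatMs != 0 && nowMs >= nextBeatMs) {
            nextBeatMs += periodMs;          // schedule the following stroke
            return true;                     // caller triggers the articulated gesture
        }
        return false;
    }
}
```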
[mcohen-10:2009]
Michael Cohen, Julián Villegas, Mamoru Ishikawa, Akira Inoue, Hiromitsu Sato, Hiroki Tsubakihara, and Jun Yamadera. ArTech: “VMP My Ride”: windshield wipers that swing. In Proc. Asiagraph, volume 3, page 144, Tokyo, Oct. 2009.
ISBN 978-4-904490-01-3, http://www.dcexpo.jp/exhibition/asiagraph.html. Instead of adding to the driving cacophony, windshield wipers can enhance musical audition, reinforcing a beat by hiding their cabin noise in the rhythm, increasing the signal-to-noise ratio by increasing the signal and masking the noise, and providing “visual music,” the dance of the wipers, as a welcome side-effect. We have implemented such a visual music player, which takes an arbitrary musical source and renders its beats as pulses of windshield wipers, both virtual and real. “VMP” (pronounced /vimp/) is acronymic for Visual Music Player, a synæsthetic audio-visual renderer. Inspired by the hit TV show “Pimp My Ride,” in which cars are outrageously customized, we recast the windshield wipers of an automobile with advanced multimedia technology, allowing the wipers to dance to music.
[mcohen-11:2009]
Senaka Amarakeerthi, Michael Cohen, and Nicholas Nagel. A Speech Emotion-Driven Avatar Animation System for Collaborative Virtual Environments. In Proc. Audio Mostly: A Conference on Interaction with Sound, Glasgow, Sep. 2009.
Representing emotions in collaborative virtual environments (CVEs) is an important feature for making them realistic. People engage with computer-simulated virtual environments mostly with the aid of keyboard, mouse, voice, and video. Most CVEs are equipped with avatar systems to represent each user logged into a CVE session. Some avatar systems are capable of expressing some emotions with gestures, postures, and facial expressions. In previous studies, various approaches have been explored to convey emotional states of humans to the computer, including voice, facial movements, and emoticons. This paper discusses a method to extract emotions from the voice of the speaker and animate an avatar to reflect extracted emotions in realtime. The system has been developed in “Wonderland,” a Java-based open source framework for creating collaborative 3D virtual worlds. In our prototype, six primitive emotional states (anger, dislike, fear, joy, sadness, and surprise) were considered. An existing emotion classification system, which uses short-time log frequency power coefficients (LFPC) to represent the speech signals and a discrete hidden Markov model (HMM) as the classifier, was modified to build the emotion-extraction unit. Extracted emotions were used to activate existing avatar gestures in Wonderland.
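The classifier-selection step described above amounts to an argmax over per-emotion model likelihoods. A skeleton sketch follows, where the Hmm interface and its logLikelihood method are assumed placeholders rather than a real library API:

```java
import java.util.Map;

/** Skeleton of the selection step: each of six emotions has a trained
 *  discrete HMM, and the recognized emotion is the one whose model best
 *  explains the LFPC observation sequence. */
public class EmotionClassifier {
    public interface Hmm { double logLikelihood(int[] observations); }

    private final Map<String, Hmm> models; // "anger", "dislike", "fear", "joy", "sadness", "surprise"

    public EmotionClassifier(Map<String, Hmm> models) { this.models = models; }

    /** observations: vector-quantized LFPC frames; returns the argmax emotion. */
    public String classify(int[] observations) {
        String best = null;
        double bestLl = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Hmm> e : models.entrySet()) {
            double ll = e.getValue().logLikelihood(observations);
            if (ll > bestLl) { bestLl = ll; best = e.getKey(); }
        }
        return best;
    }
}
```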
[mcohen-12:2009]
Senaka Amarakeerthi, Rasika Ranaweera, Michael Cohen, and Nicholas Nagel. Mapping Selected Emotions to Avatar Gesture. In Proc. IWAC: 1st Int. Workshop on Aware Computing, Aizu-Wakamatsu, Japan, Sep. 2009. Japan Society for Fuzzy Theory and Intelligent Informatics.
www.u-aizu.ac.jp/misc/fan09/iwac09.html. Representing emotions in collaborative virtual environments is important to make them realistic. To display emotions in such environments, facial expressions of avatars have been previously deployed. A major drawback of such an approach is the difficulty of seeing facial expressions when an avatar is far away. This paper describes a method to express emotions of avatars by body postures, as an alternative when facial expressions are not convenient. A prototype was developed in Project Wonderland, a Java-based open source framework for creating collaborative 3D virtual worlds. Although emotions are not discrete entities, to reduce system complexity six primitive exclusive emotions (anger, dislike, fear, joy, sadness, and surprise) were considered for classification. Emotions are input via popular character combinations (emoticons), as used in email and textual chat spaces. These emoticons were mapped to a predefined set of gestures and applied to the Wonderland avatars in realtime.
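A minimal sketch of the described emoticon-to-gesture mapping, with the emoticon set and gesture names as illustrative assumptions:

```java
import java.util.Map;

/** Hypothetical emoticon-to-gesture table for the six primitive emotions. */
public class EmoticonGestureMapper {
    private static final Map<String, String> GESTURES = Map.of(
        ">:(", "AngerPosture",
        ":P",  "DislikePosture",
        "D:",  "FearPosture",
        ":)",  "JoyPosture",
        ":(",  "SadnessPosture",
        ":O",  "SurprisePosture");

    /** Scan a chat message; return the gesture for the first emoticon found, or null. */
    public static String gestureFor(String message) {
        for (Map.Entry<String, String> e : GESTURES.entrySet())
            if (message.contains(e.getKey()))
                return e.getValue(); // caller plays this animation on the avatar
        return null;
    }
}
```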
[wmpang-02:2009]
Wai-Man Pang. An intuitive texture picker. In Proc. of the 14th Int. Conf. on Intelligent User Interfaces (IUI), pages 365–368. ACM, New York, NY, USA, 2010.
Color and texture are basic elements in digital graphics. Selection of color with a picker is convenient in many image-editing applications. However, a more organized and intelligent GUI for texture-pattern selection is still missing. In this paper, we attempt to fill this gap by introducing several robust techniques for building an intuitive texture-picking GUI. By arranging patterns according to their visual similarities, texture pickers with planar and circular layouts are presented. Additional functionality includes content-based texture search, which can quickly find patterns similar to a given sample. Preliminary response to the proposed interface is positive in general, while further improvements are required, for example, building a hierarchy to facilitate high-to-low-level selection for huge numbers of texture patterns.
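As a sketch of how content-based texture search can rank candidates, the following toy example compares crude grayscale-histogram descriptors; the descriptor choice is an assumption for illustration, not necessarily the paper's feature:

```java
import java.awt.image.BufferedImage;

/** Toy content-based texture similarity via 32-bin grayscale histograms. */
public class TextureSimilarity {
    /** Normalized grayscale histogram as a crude texture descriptor. */
    static double[] histogram(BufferedImage img) {
        double[] h = new double[32];
        for (int y = 0; y < img.getHeight(); y++)
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int gray = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                h[gray * 32 / 256]++;
            }
        double n = img.getWidth() * (double) img.getHeight();
        for (int i = 0; i < h.length; i++) h[i] /= n;
        return h;
    }

    /** L1 distance between descriptors; smaller means more similar. */
    static double distance(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }
}
```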
[mcohen-03:2009]
Julián Villegas and Michael Cohen. Exploring Tonal Music Through Operational Research Methodology. JORSJ: J. of the Operations Research Society of Japan, 54(9):554–562, Sep. 2009.
http://www.orsj.or.jp/english/index.html, ISSN 0030-3674. Two operational research applications in music are presented. Initially, the mapping of musical scales into multi-dimensional topologies is discussed, and the advantages of projecting these structures into simple spaces are explained. We also present the Helical Keyboard, an interactive installation displaying three-dimensional musical scales aurally and visually. Subsequently, the problem of minimizing musical dissonance between audio streams in realtime is discussed, and a solution based on local minima search is described. Keywords: mathematical modeling, OR practice, discrete optimization, musical scales, automatic reintonation.
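To make the second application concrete, a bounded local search over small retuning adjustments might look like the following sketch; the dissonance measure here is a crude stand-in for the paper's model, and all names are hypothetical:

```java
/** Toy bounded local search: retune one frequency within a small window to
 *  minimize a dissonance measure against a fixed reference. */
public class Reintonation {
    /** Placeholder roughness curve: zero at unison, peaking at small offsets. */
    static double dissonance(double f1, double f2) {
        double df = Math.abs(f1 - f2);
        return df * Math.exp(-df / 30.0); // crude Plomp-Levelt-like bump
    }

    /** Search f2's neighborhood (within +/- maxHz, in stepHz increments) for
     *  the least dissonant retuning; bounded, so it always terminates. */
    static double retune(double f1, double f2, double maxHz, double stepHz) {
        double best = f2, bestD = dissonance(f1, f2);
        for (double c = f2 - maxHz; c <= f2 + maxHz; c += stepHz) {
            double d = dissonance(f1, c);
            if (d < bestD) { bestD = d; best = c; }
        }
        return best;
    }
}
```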
[mcohen-13:2009]
Michael Cohen. Tutorial: Spatial Sound and Entertainment Computing. In ICEC: Int. Conf. on Entertainment Computing, Paris, Sep. 2009.
[mcohen-14:2009]
Michael Cohen. Narrowcasting, Multipresence, and Autofocus: Advanced Functionality for Next-Generation Conferencing and Chatspaces – Figurative Interface Controlling SIP-Based Media Server. In Aaron Walsh, editor, iED: Immersive Education Initiative London Summit, London, England, Apr. 2009.
[mcohen-15:2009]
Michael Cohen and Owen Noel Newton Fernando. Awareness Systems: Advances in Theory, Methodology and Design, chapter Narrowcasting Attributes for Presence Awareness, pages 259–289. Human-Computer Interaction Series. Springer, Apr. 2009.
Our group is exploring interactive multi- and hypermedia, especially applied to virtual and mixed reality multimodal groupware systems. We are exploring user interfaces to control source → sink transmissions in synchronous groupware (like teleconferences, chatspaces, virtual concerts, etc.). The apparent paradoxes of multipresence, having avatars in multiple places or spaces simultaneously, are resolvable by an “autofocus” feature, which uses reciprocity, the logical exchangeability of source and sink, to project overlaid soundscapes and simulate the precedence effect to consolidate the audio display. We have developed two interfaces for narrowcasting (selection) functions for presence awareness in collaborative virtual environments (CVEs): for a workstation, a WIMP-style (windows/icons/menus/pointer) GUI (graphical user interface), and for networked mobile devices, 2.5- and 3rd-generation mobile phones. The narrowcasting operations comprise an idiom for selective attention, presence, and privacy: an infrastructure for rich computer-supported collaborative work (CSCW).
[mcohen-16:2009]
Sabbir Alam, Michael Cohen, Julián Villegas, and Ahmed Ashir. Narrowcasting in SIP: Articulated Privacy Control, chapter 14, pages 323–345. CRC Press: Taylor and Francis, 2009.
ISBN-10 1-4200-6603-X, ISBN-13 978-1-4200-6603-6, http://www.crcpress.com/product/isbn/9781420066036, http://www.amazon.com/SIP-Handbook-Services-Technologies-Initiation/dp/142006603X. In multimedia conferencing, media streams are exchanged between participants upon session establishment by setting up communication channels within a group. By default, each participant receives a combined stream obtained by mixing the other participants’ media. Situations arise when one wants to select a subset of the conference participants to whom one’s media is sent or from whom streams are received. Media filters are necessary to configure the privacy of the participants in the conference. In analogy to broad-, multi-, any-, and swarm-casting, narrowcasting is a technique for limiting and focusing information streams. Narrowcasting systems extend broad- and multicasting systems by allowing media streams to be filtered for relevancy control, privacy, and user-interface optimization. We describe four narrowcasting commands (mute, deafen, select, and attend) to provide distributed privacy in SIP-based conferencing.
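As a concrete reading of the four commands, the stated idiom (an explicit mute or deafen excludes a channel, while any select or attend implicitly excludes unselected peers) can be summarized in a small predicate sketch; the class and field names are illustrative assumptions, not the chapter's SIP implementation:

```java
import java.util.List;

/** Minimal sketch of the four narrowcasting predicates described above. */
public class Narrowcasting {
    public static class Channel {
        boolean muted, selected;     // source attributes (mute, select)
        boolean deafened, attended;  // sink attributes (deafen, attend)
    }

    /** A source is audible iff it is not muted, and either nothing is
     *  selected or this source is among the selected. */
    public static boolean sourceActive(Channel src, List<Channel> all) {
        boolean anySelected = all.stream().anyMatch(c -> c.selected);
        return !src.muted && (!anySelected || src.selected);
    }

    /** Dually, a sink receives iff it is not deafened, and either nothing
     *  is attended or this sink is among the attended. */
    public static boolean sinkActive(Channel snk, List<Channel> all) {
        boolean anyAttended = all.stream().anyMatch(c -> c.attended);
        return !snk.deafened && (!anyAttended || snk.attended);
    }
}
```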
[mcohen-17:2009]
Michael Cohen. Japan Society for Promotion of Science (Kakenhi): “Personal Panoramic Multimodal Interface,” 2008–.
[mcohen-18:2009]
Michael Cohen, 2009. Program Committee, HC-2009: 12th Int. Conf. on Humans and Computers (Hamamatsu)
[mcohen-19:2009]
Michael Cohen, 2009–10. Reviewer and Scientific Committee, J. of Virtual Reality and Broadcasting
[mcohen-20:2009]
Michael Cohen, June 2009. Reviewer, Elsevier Brain Research, http://ees.elsevier.com/bres/
[mcohen-21:2009]
Michael Cohen, June 2009. Reviewer (tertiary), SIGGRAPH Asia Courses, http://www.siggraph.org/asia2009/
[mcohen-22:2009]
Michael Cohen, June 2009. Reviewer, ICMC (Int. Conf. on Computer Music), http://www.icmc2009.org
[mcohen-23:2009]
Michael Cohen, June 2009. Reviewer, FAN2009/IWAC2009, http://web-ext.u-aizu.ac.jp/misc/fan09/fan09-e.html, http://web-ext.u-aizu.ac.jp/misc/fan09/iwac09.html
[mcohen-24:2009]
Michael Cohen, June 2009. Jury, Best Paper Award, Int. Conf. on IT in Asia (CITA09), http://www.cita09.org
[mcohen-25:2009]
Thesis Advisor: Michael Cohen
[mcohen-26:2009]
Thesis Advisor: Michael Cohen
[mcohen-27:2009]
Thesis Advisor: Michael Cohen
[mcohen-28:2009]
Thesis Advisor: Michael Cohen
[mcohen-29:2009]
Thesis Advisor: Michael Cohen
[mcohen-30:2009]
Thesis Advisor: Michael Cohen
[mcohen-31:2009]
Thesis Advisor: Michael Cohen
[mcohen-32:2009]
Thesis Advisor: Michael Cohen
[mcohen-33:2009]
Thesis Advisor: Michael Cohen
[mcohen-34:2009]
Michael Cohen. Narrowcasting, Multipresence, and Autofocus: Advanced Functionality for Next-Generation Conferencing and Chatspaces – Figurative Interface Controlling SIP-Based Media Server. Electronic, Electrical, & Computer Engineering, Heriot-Watt University (Edinburgh, Scotland); http://www.eeehw.com/Researchers/DeptSeminars/Cohen.php, 1 May 2009.