Professor |
Most of the courses taken by engineering and computer science students emphasize scientific discipline and the accumulation of "truth." The Computer Arts Lab.'s activities include such technically objective factors, but also encourage original expression, subjectively motivated by aesthetics rather than "correctness," sometimes "putting the art before the course!" Unlike many other labs' activities, which try to converge on a "right answer" sharable by everyone, artistic disciplines encourage originality, in which the best answer is one that is like no one else's! The Computer Arts Lab., through its resident Spatial Media Group, researches projects including practical and creative applications of virtual reality and mixed (augmented, enhanced, hybrid, mediated) reality and virtuality; panoramic interfaces and spatially immersive displays (especially stereotelephonics, spatial sound, and stereography); wearable and mobile applications, computing, and interfaces; and networked multimedia, with related interests in cves (collaborative virtual environments), groupware, and cscw (computer-supported collaborative work); hypermedia; digital typography and electronic publishing; force-feedback displays; telecommunication semiotics (models of teleconferencing selection functions); information furniture; way-finding and navigation; entertainment computing; and ubicomp (ubiquitous computing), calm (ambient), and pervasive technology. We are particularly interested in narrowcasting commands: conference selection functions for adjusting groupware situations in which users have multiple presence, virtually existing in more than one space simultaneously.
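Narrowcasting extends muting with four filters: sources can be muted or selected, and sinks deafened or attended. As a minimal sketch of the idea (our own simplified illustration, with hypothetical names, not the group's published formalism), an explicit exclusion always wins, and any explicit inclusion implicitly excludes the non-included peers:

```python
# Hedged sketch of narrowcasting inclusion/exclusion filters:
# mute/select act on sources, deafen/attend act on sinks.
# Rule: an explicit exclude always wins; if any peer is explicitly
# included, everyone not included is implicitly excluded.

def audible(source, sink, muted, selected, deafened, attended):
    """Return True if `source` should be rendered to `sink`."""
    # Source side: mute excludes; select implicitly excludes non-selected.
    if source in muted:
        return False
    if selected and source not in selected:
        return False
    # Sink side: deafen excludes; attend implicitly excludes non-attended.
    if sink in deafened:
        return False
    if attended and sink not in attended:
        return False
    return True

# Example: selecting only "alice" as a source implicitly mutes "bob".
print(audible("alice", "carol", set(), {"alice"}, set(), set()))  # True
print(audible("bob", "carol", set(), {"alice"}, set(), set()))    # False
```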
We explore realtime interactive multimedia interfaces: auditory, visual, haptic, and multimodal.

Auditory: We are exploring interfaces for multichannel sound, including stereo, quadraphonic, and nearphones (mounted on our rotary motion platform); two separate speaker array systems in the University-Business Innovation Center (UBIC) 3d Theater; and an octophonic system in our lab. Working with Dr. Durand Begault of NASA, we deployed online courseware, "Sonic," organizing an introduction to desktop audio and presenting many sound samples. We use these contents in the "Audio Interfaces" graduate school course, which is a prerequisite for "Spatial Hearing and Virtual 3D Sound," taught jointly with Prof. Jie Huang of the Human Interface Lab. With Profs. Robert Fujii and Satoshi Nishimura, we host a Computer Music Studio featuring about fifteen computer music workstations, each comprising a personal computer (Windows or Macintosh) with sequencing software and a keyboard synthesizer, plus assorted amplifiers, racks, mixers, and effects processors. Ongoing student projects explore dynamic microtuning systems, quadraphonic spatialization of tetraphonic bass guitar strings, and control of narrowcasting using sip (Session Initiation Protocol). We annually conduct a Student Cooperative Class Project. In the past we have sponsored sccps on Digital Compositing (using Photoshop and the Gimp), but recently the sccp has focused on Computer Music, studying basic music theory and using a midi sequencer to compose and perform student-authored songs. This sccp segues into a graduate-level computer music course.

Visual: We promote creative applications of scientific visualization, encouraging the use of Mathematica and stereoscopy, including chromastereoscopy (3d images with depth layers cued by color). We enjoy exploiting the unique large-format immersive stereographic display in the UBIC 3d Theater.
Some group members are using visual input techniques through a web cam to sense user mood, for kansei interpretation, or for control of narrowcasting chatspace interfaces. We are also exploring creative applications of panoramic imaging and object movies.

Haptic: We are also exploring the use of haptic interfaces, including force-display joysticks and a rotary motion platform (the "Schaire [for 'shared chair'] Internet Chair"). We also conduct annual Creative Factory Seminars. Past cfss explored advanced audio interfaces, panoramic imaging, and haptic modeling. This year, in conjunction with Prof. Rentaro Yoshioka of the Distributed and Parallel Processing Lab., we conducted a workshop on Haptic Modeling and 3d Printing, using force-feedback CAD workstations to make models that were then rapid-prototyped (as stereolithograms) with the dppl's personal fabricator, closing the "idea (stored in brain neurons) - information (stored as bits) - matter (atoms)" pathway.

Multimodal: Using such multimodal interfaces, our students have crafted driving simulators, location-based games featuring the rotary motion platform, and synaesthetic (cross-sensory modality) visual and haptic music players (rendering songs as light shows or dancing chairs). Using the aforementioned visual sensing technique, narrowcasting postures can be recognized and used to control distributed chatspaces or virtual concerts. A recent student project deployed a microphone vector to track a moving sound source, using its network interface to trigger internet appliances (like lights that follow the source). We are also developing a driving simulator using collision-detection modulation of the force-feedback steering wheel and the rotary motion platform.
The most recent version of the project features a dual-steering (front and back) fire truck, racing through a 3d model of our campus to reach a fire, piloted by two drivers, and featuring spatial sound effects. We are also exploring mobile (nomadic, portable) computing, working in conjunction with university spin-offs The Designium, Eyes, and GClue. Such keitai-based interfaces can be used to control internet appliances, panoramic imaging, spatial sound, or motion platforms. An exciting project combines spatial sound with way-finding, using gps tracking, the Segway personal transporter, and directional transfer functions.

Other activities: We host an annual symposium, the Int. Symposium on Spatial Media, inviting experts to share their knowledge and passion regarding such themes as "Spatial Sound and Spatial Telepresence" ('01), "Magic in Math and Music" ('02), "Advanced Multimedia and Virtual Reality" ('03), "Spatial Sound" ('04), "Hearing and Sound Installations" ('05), "Sound, Audio, and Music" ('06), "Interactive Media, Security, and Stereography" ('06), and "Music XML and the Structure of Swing, Understanding Color Media, Media Grid, and Visualization Tools" ('07). Our lab sponsors several student performance circles, including the Yasakoi Dance Circle, Disco Mix Club, and Project Mermaid. We also sponsor a couple of other student circles, the Dual Boot (Ultimate Frisbee) Flying Disc Club and the Furiten Mah Jongg Circle. Through the research & development and the deployment & integration of stereographic, spatial sound, and haptic applications, we nurture scientific and artistic interest in advanced computer-human and human-human communication. Our ultimate domain is the exploration of interfaces and artifacts that are literally sensational.

Some relevant links:
Audio Courseware: http://sonic.u-aizu.ac.jp
Spatial Media: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohea.html
Japanese: http://sonic.u-aizu.ac.jp/spatial-media/Videos/cohejp.mpg QuickTime Virtual Reality: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/
Object Movie: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/shoe.mov Hideo Noguchi + Akabeko: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Noguchi+Akabeko.mov Rotational Degrees of Freedom: http://sonic.u-aizu.ac.jp/spatial-media/QTVR/Rotational-DsoF.mov |
[mcohen-01:2007] |
Michael Cohen, Noor Alamshah Bolhassan, and Owen Newton Fernando.
A Multiuser Multiperspective Stereographic QTVR Browser Complemented
by Java3D Visualizer and Emulator. Presence: Teleoperators and Vir-
tual Environments, 16(4):414-438, August 2007. |
To support multiperspective and stereographic image display systems intended for
multiuser applications, we have developed and integrated two multiuser multiperspective
stereographic "browsers," featuring ibr-generated egocentric and cg exocentric
perspectives. The first one described uses Apple's QuickTimeVR technology and
the Java programming language together with the support of the `QuickTime for
Java' application programming interface (api). This unique qtvr browser allows
coordinated display of multiple views of a virtual environment, limited only by the
size and number of monitors or projectors assembled around or among users (for
panoramas or turnoramas) in various viewing locations. Named "VR4U2C" (`virtual
reality for you to see'), the browser also provides a novel solution to limitations
associated with interactive stereographic display of qtvr imagery. A multinode
implementation providing stereographic qtvr (which we dub `sqtvr') enables
display of pairs of images exhibiting binocular parallax, and the stereoscopic depth
percept that results is enhanced by motion parallax inherent in displacement of the
viewpoint through a displayed 3d space coupled with rotation of the view through
a complete 360° horizontal panorama. Our navigable approach to sqtvr allows
proper occlusion/disocclusion as the virtual standpoint shifts, as well as natural
looming of closer objects compared to more distant ones. We have integrated this
stereographic panoramic browsing client with a sibling client, named "Just Look at
Yourself!", which is based on Java3d and allows realtime exocentric visualization
of the dollying and viewpoint adjustment, and juxtaposition and combination of
stereographic cg (computer graphics) and ibr (image-based rendering) displays.
The Java3d application visualizes and emulates VR4U2C, integrating cylinders
wrapped around the stereo standpoints and dynamically texture-mapped with a
set of panoramic scenes into a 3d cg model of the same space as that captured
by the set of panoramic scenes. The transparency of the 3d cg space and the
photorealistic stereographic panoramic scene, as well as the size of the `stereo goggles'
upon which it is texture-mapped, can be adjusted at runtime to understand the
relationship of the spaces. Keywords: collaborative virtual environments (cves),
image-based rendering (ibr), binocular parallax, stereographic panoramic navigation.
First Prize, Experienced Division, University of Malaysia Faculty of Computer
Science and Information Technology Outstanding Research and Development Awards
(www.fcsit.unimas.my/index.php?option=com_content&task=view&id=664&Itemid=248) |
|
[mcohen-02:2007] |
Mohammad Sabbir Alam, Michael Cohen, and Ashir Ahmed. Articulated
Narrowcasting for Privacy and Awareness in Multimedia Conferencing Systems and Design for Implementation Within a SIP Framework. JVRB: J.
of Virtual Reality and Broadcasting, 4(9), 2007. |
This article proposes a new focus of research for multimedia conferencing systems
which allows a participant to flexibly select another participant or a group for media
transmission. For example, in a traditional conference system, participants' voices
might by default be shared with all others, but one might want to select a subset of
the conference members to send his/her media to or receive media from. We review
the concept of narrowcasting, a model for limiting such information streams in a
multimedia conference, and describe a design to use existing standard protocols (SIP
and SDP) for controlling fine-grained narrowcasting sessions. |
[mcohen-03:2007] |
Julián Villegas and Michael Cohen. Synæsthetic Music, or the Ultimate
Ocular Harpsichord. In Toshiaki Miyazaki, Incheon Paik, and Daming
Wei, editors, Proc. CIT: 7th Int. Conf. on Computer and Information
Technology, pages 523-527, Aizu-Wakamatsu, October 2007. |
We address the problem of visualizing microtuned scales and chords such that each
representation is unique and therefore distinguishable. Using colors to represent the
different pitches, we aim to capture aspects of the musical scale that are impossible
to represent with numerical ratios. Inspired by the neurological phenomenon known
as synæsthesia, we built a system to reproduce microtuned midi sequences aurally
and visually. This system can be related to Castel's historic idea of the `Ocular
Harpsichord.' |
|
[mcohen-04:2007] |
Julián Villegas and Michael Cohen. Möbius tones and Shepard geometries:
an alternative Synæsthetic analogy. In NPAR: 5th Int. Symp. on
Non-Photorealistic Animation and Rendering, San Diego, August 2007. |
Traditionally, Shepard tones are compared to the optical illusion given by the Penrose
stairs. However, for such three-dimensional objects, the illusion is only achieved
from certain points of view. By displacing the position of the viewer, it's possible
to see the gaps between the extremes of the staircase. This artifact doesn't have an
aural counterpart, as the concept of 'point of view' has no analog. To circumvent
this problem, we use a Möbius strip, which has a non-orientable topology with
only one surface and one edge. One can think of the edge of the surface as the
tonic of a scale, and points across the strip as notes in the scale. Due to the nature
of this object, every point over the surface has two representations in the musical
scale, making it impossible to play a single tone (with the exception of the unison).
In our Java3D application, the position of the ball is sampled at regular intervals,
and its distance from the edge is sent to a built-in synthesizer based on the Java
Sound API. The user can navigate through the space or interact with the model by specifying the position sampling rate, the randomness of the path, the tonic of the
scale, the color interpolator, etc. |
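The Möbius-strip mapping described above, in which each transverse position represents two notes that collapse to one at the edge (the tonic), can be sketched as follows (our own simplification; the scale size and rounding are assumptions, not the paper's mapping):

```python
# Hedged sketch: a point across a Möbius strip maps to a scale degree.
# Because the strip has a single surface, one traversal swaps the two
# "sides," so each transverse position corresponds to two notes --
# except at the edge, which represents the tonic alone.

def notes_for_position(t, scale_size=12):
    """t in [0, 1]: normalized distance from the edge across the strip.
    Returns the pair of scale degrees the point represents."""
    degree = round(t * scale_size)   # nearest note on one side
    mirror = scale_size - degree     # same point after one traversal
    return sorted({degree % scale_size, mirror % scale_size})

print(notes_for_position(0.0))   # edge/tonic: unison -> [0]
print(notes_for_position(0.25))  # interior point: two notes -> [3, 9]
```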
|
[mcohen-05:2007] |
Mohammad Sabbir Alam, Michael Cohen, and Ashir Ahmed. Narrowcasting:
Implementation of Privacy Control in Sip Conferencing. In Proc.
ICME: Int. Conf. on Multimedia & Expo, Beijing, July 2007. |
In traditional conferencing systems, participants' voices might by default be shared
with all others. However a VOIP user might want to select a subset of session
members to selectively exchange media streams. In this article, we describe an
implementation of narrowcasting, a model for limiting such information streams
in multimedia conferences, as a class of policies, prototyping a system using
existing standard Session Initiation Protocol (SIP) methods for controlling fine-grained
narrowcasting sessions. |
|
[mcohen-06:2007] |
Owen Noel Newton Fernando, Michael Cohen, and Adrian David
Cheok. Mobile Spatial Audio Interfaces. In Adrian David Cheok, editor,
Proc. MobileHCI: Proc. 9th Int. Conf. on Human Computer Interaction with
Mobile Devices and Services, pages 259-261, Singapore, September 2007. |
For an increasing number of people (especially young people), the mobile phone,
rather than the computer, is becoming their main media device and portal to the virtual
world. Mobile phones have become a ubiquitous technology and for many people an
important tool for communication and information access. Mobile telephony offers
an interesting platform for building multipresence-enabled applications that utilize
the phone as a social or commercial assistant. The main objective of this research is
to develop multipresence-enabled audio windowing systems for visualization, attention,
and privacy awareness of narrowcasting (selection) functions in collaborative
virtual environments (CVEs) for mobile devices such as 3rd- and 4th-generation
mobile phones. A mobile audio windowing system enhances auditory information on
mobile phones and encourages modernization of office- and mobile-based conferencing. |
|
[mcohen-07:2007] |
Mohammad Sabbir Alam, Michael Cohen, Julián Villegas, and Ashir
Ahmed. Figurative Privacy Control of SIP-based Narrowcasting. In Proc.
AINA: 22nd Int. Conf. on Advanced Information Networking and Applications, Gino-wan City, Okinawa, March 2008. |
In traditional conferencing systems, participants have |
|
[mcohen-08:2007] |
Julián Villegas and Michael Cohen. Local Dissonance Minimization
in Realtime. In Proc. SIGMAP: Int. Conf. on Signal Processing and
Multimedia Applications, Barcelona, July 2007. |
This article discusses the challenges of applying the tonotopic consonance theory
to minimize the dissonance of concurrent sounds in real-time. It reviews previous
solutions, proposes an alternative model, and presents a prototype programmed in
Pd that aims to surmount the difficulties of prior solutions. |
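The tonotopic consonance theory mentioned above is commonly operationalized with the Plomp-Levelt dissonance curve; the following sketch uses Sethares' well-known parameterization (an assumption on our part; the paper's actual model may differ):

```python
import math

# Hedged sketch: Sethares' parameterization of the Plomp-Levelt
# dissonance curve (our assumption; the paper's model may differ).
# Dissonance of two partials peaks at a fraction of the critical band.

def dissonance(f1, f2, a1=1.0, a2=1.0):
    """Estimated sensory dissonance of two sine partials (f in Hz)."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.0207 * f_lo + 18.96)   # critical-band scaling
    x = s * (f_hi - f_lo)
    return a1 * a2 * (math.exp(-3.5 * x) - math.exp(-5.75 * x))

# A unison is perfectly consonant; a narrow interval beats more
# roughly than a wide one.
print(dissonance(440, 440))                          # 0.0
print(dissonance(440, 460) > dissonance(440, 880))   # True
```

Minimizing the sum of such pairwise terms over all concurrent partials is the usual route to "local dissonance minimization" by retuning.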
|
[mcohen-09:2007] |
Hiroshi Saito and Michael Cohen. Existential Quantifiers in Mathematica
for Narrowcasting Predicate Calculus. In Minetada Osano, editor,
Proc. HC-2007: Tenth Int. Conf. on Human and Computer, pages 31-32,
Aizu-Wakamatsu, December 2007. |
In analogy to broad-, multi-, and any-casting, narrowcasting is a technique for limiting
and focusing information streams. For modern groupware situations like teleconferences,
in which everyone can have presence across the global network, users want
to shift and distribute attention (apathy) and accessibility/availability/exposure
(privacy), and narrowcasting organizes source and sink exclusion and inclusion filters:
mute, select, deafen, and attend. To realize an elegant predicate calculus
notation formalizing such attributes using the logic programming features of Mathematica,
the existential quantifiers were reimplemented to reference a list-expressed
finite domain, working around the non-trivial finite domain of their application:
avatars in a virtual-reality-style user interface. Keywords: Predicate Calculus, Existential
Quantifiers, Narrowcasting. |
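The list-expressed finite-domain quantification described above targets Mathematica; the idea can be illustrated outside it. This sketch (our own illustration, not the paper's code) evaluates a narrowcasting rule fragment over an explicit list of avatars:

```python
# Hedged sketch: existential/universal quantifiers over an explicit
# finite domain (a list of avatars), used to evaluate a narrowcasting
# predicate such as "some source is selected."

def exists(domain, predicate):
    """True iff predicate holds for some element of the finite domain."""
    return any(predicate(x) for x in domain)

def forall(domain, predicate):
    """True iff predicate holds for every element of the finite domain."""
    return all(predicate(x) for x in domain)

avatars = [{"name": "alice", "select": True},
           {"name": "bob", "select": False}]

# Rule fragment: if some source is selected, a source is active
# only if it is selected.
some_selected = exists(avatars, lambda a: a["select"])
active = [a["name"] for a in avatars
          if (not some_selected) or a["select"]]
print(some_selected, active)  # True ['alice']
```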
|
[mcohen-10:2007] |
Tatsuya Nagai, Michael Cohen, Youko Murakami, and Yoshinori
Moriguchi. Networked Driving Simulator with Force Display, Rotating Motion
Platform, and Dual Steering. In Proc. TICE: 2nd Int. Wkshp. on
Tangible and Interaction in Collaborative Environments, at WET ICE,
the 16th Int. IEEE Wkshps. on Enabling Technologies: Infrastructures for
Collaborative Enterprises, pages 424-429, Paris, June 2007. |
We describe the integration of two pairs of force displays comprising a force-feedback
wheel (ffbw) and the Schaire (`Share Chair'), a rotary motion platform, in a dual-driver
networked driving simulator which can share a virtual space using cve
(collaborative virtual environment) groupware. We have developed a double-driver
(long ladder-style) fire-truck simulation with a tiller (rear steering), driven via an
integrated pair of networked driving simulator stations. The dual-driver system is
useful for turning narrow corners rapidly and smoothly in case of (simulated) emergencies.
Its ffbw steering wheels display simple collision force to drivers when the
vehicle collides with walls or other vehicles. The technique of feeding back the effect
employs programs using C++ and DirectInput, escaping to an execution file called
ForceManager from the networked driving simulator, which is implemented with
Java3d. The effect pattern is changed by arguments to ForceManager. The Schaire
is rotated with a servomotor, the rotation angle controlled via internet through the
cve. |
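The "escape to an execution file" pattern described above, in which the Java3d simulator launches a native force-effect program whose behavior is selected by command-line arguments, can be sketched as follows; only the name ForceManager comes from the abstract, and the flag names are hypothetical:

```python
import subprocess

# Hedged sketch of escaping from a simulator to a native force-effect
# program. "ForceManager" is named in the abstract; the --effect and
# --magnitude flags are hypothetical illustrations.

def force_command(effect="wall-collision", magnitude=50):
    """Build the external command line that selects an effect pattern."""
    return ["ForceManager", "--effect", effect, "--magnitude", str(magnitude)]

def play_collision_effect(**kw):
    # Launch the native program; in the real system this plays a
    # DirectInput effect on the force-feedback wheel.
    return subprocess.run(force_command(**kw), check=False)
```

Keeping the command construction in a pure function makes the effect-selection logic testable without the native executable present.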
|
[mcohen-11:2007] |
Akira Sasaki, Kazuaki Yamauchi, Wenxi Chen, Michael Cohen,
Daming Wei, and Zixue Cheng. Innovative Mobile Phone Services Based on
Next Generation Infrastructure in Japan: A Survey. In Toshiaki Miyazaki,
Incheon Paik, and Daming Wei, editors, Proc. CIT: 7th Int. Conf. on
Computer and Information Technology, pages 491-496, Aizu-Wakamatsu,
September 2007. |
In this survey, we review current mobile phone technology and markets in Japan
in various fields. We also consider next-generation mobile phone infrastructure and
innovative services, describing how future mobile infrastructure works by introducing
some cutting-edge technology differing from the current mobile infrastructure.
We explain why new services such as CGM (Communication Generated Market)
content and media type content will become available in the future mobile network.
New services are based upon the advanced infrastructure built into next-generation
mobile phones. Because of advancements in hardware specifications and new technology,
features of mobile phone terminals will evolve drastically from 2007 through
2010, and our daily lives will also be altered accordingly. |
|
[mcohen-12:2007] |
Michael Cohen and Ishara Jayasingha. Auditory and Haptic Disambiguation
of Browsing Models: Turnoramas and Virtual Viewpoints.
In Proc. HAID: 2nd Int. Workshop on Haptic and Auditory Interaction
Design; Lecture Notes in Computer Science 4813 (isbn 978-3-540-76701-5),
Seoul, November 2007. VRSJ (Virtual Reality Society of Japan). |
We are exploring the experience of a QTVR "object movie" by contrasting the
sensation when such contents are browsed with and without synchronized rotary
chair motion and panning sound. The haptic sensation induced by the vestibular
system and the binaural auditory stimulation can potentially disambiguate the
apparent mode of a virtual object examination: either "turnorama," in which the
subject is fixed but the object rotates, or "spin-around," in which the object is
fixed but the subject rotates and revolves in orbit around it. |
|
[mcohen-13:2007] |
Michael Cohen, Ishara Jaysingha, and Julián Villegas. Spin-Around:
Phase-locked Synchronized Rotation and Revolution in a Multistandpoint
Panoramic Browser. In Toshiaki Miyazaki, Incheon Paik, and Daming
Wei, editors, Proc. CIT: 7th Int. Conf. on Computer and Information
Technology, pages 511-516, Aizu-Wakamatsu, October 2007. |
Using our multistandpoint panoramic browser as a display, we have developed a
control function that synchronizes revolution and rotation of a visual perspective
around a designated point of regard, selected from a point embedded in a constellation
of omnidirectional images bundled into a multinode qtvr "pano" movie. The
phase-locked orbit is uniquely determined by the focus and the start point, and the
user can parameterize direction, step size, and cycle speed, and invoke an animated or
single-stepped gesture. The images can be monoscopic or stereoscopic, and the rendering
supports the usual scaling functions (zoom/unzoom). Additionally, via sibling
clients that can directionalize realtime audio streams, spatialize hdd-resident audio
files, or render rotation via a personal rotary motion platform, spatial sound and
proprioceptive sensations can be synchronized with such gestures, providing complementary
multimodal displays. |
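Phase-locking rotation to revolution means the viewpoint orbits the point of regard while its heading stays slaved to the orbit angle, always facing the focus. A minimal sketch of the geometry (our own illustration, not the browser's implementation):

```python
import math

# Hedged sketch: a viewpoint revolving around a point of regard at
# radius r, with rotation phase-locked to revolution so the view
# always faces back toward the focus.

def orbit_pose(focus, r, theta):
    """Return (x, y, heading) for orbit angle theta (radians)."""
    x = focus[0] + r * math.cos(theta)
    y = focus[1] + r * math.sin(theta)
    heading = theta + math.pi   # face back toward the focus
    return x, y, heading

x, y, h = orbit_pose((0.0, 0.0), 2.0, 0.0)
print(round(x, 3), round(y, 3), round(math.degrees(h), 1))  # 2.0 0.0 180.0
```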
[mcohen-14:2007] |
Michael Cohen. Invited Talk: Wearware, Whereware, and Awareware.
In ISUC: Proc. First Int. Symp. on Universal Communication, pages 259-264,
Kyoto, June 2007. |
The `4-play' convergence of telephony, television, internet, and wireless is driving
a remarkable proliferation of new devices and services. Mobile terminals, as intimate
as clothing, are a kind of wearable computer. A diversity of "next-generation"
form factors of evolving "smart phones" is surveyed, including mobile stereotelephony,
spawned from cyberspatial audio and augmented audio models. Meanwhile, location-based
services, along with seamless handoff and heterogeneous roaming via mimo
(multiple input/multiple output) smart antennas leading to software-defined radio
and cognitive radio will leverage geolocation and portable gps/gis. Such advanced
sensing will enable ubicomp and ambient intelligence, including an awareness of user
status and availability, and articulated models of privacy, like narrowcasting, that
allow users to distribute their attention and virtual presence. Multipresence and persistent
channels, encouraged by abc (always best connected) networks, might bid
"sayonara" to "hello," as lightweight circuit-based chatspaces obviate verbal handshaking. |
|
[mcohen-15:2007] |
Michael Cohen. Keynote Address: Spatial Media Arts. In Proc.
ICACA: Second Int. Conf. on Applied and Creative Arts, Sarawak, Malaysia,
January 2008. |
Many disciplines encountered in engineering or science assume some accumulation of
"truth," ideally converging on some common result. Art and design projects include
such technically objective factors, but also explicitly encourage creative expression,
subjectively motivated by aesthetics rather than "correctness." Unlike domains that
try to arrive upon a "right answer" shared by everyone, artistic disciplines (including
temporal arts like music and theatre) encourage originality, in which the best work is
unique. Spatial media explore both our real world (of 3 dimensions and 6 degrees of
freedom) and also artificial spaces of information. New technologies offer opportunities
for innovative design and advanced applications, both creative and "re-creative,"
simulating virtual and real spaces. Some examples of spatial media, targeted for
traditional (print, etc.) or electronic (internet-delivered) distribution, are presented,
including panoramic photography, stereoscopy, 3d-printing, typography, fashion design,
multimodal interfaces via information appliances, and dynamic arts like games
and virtual concerts. The feedback between co-potentiating hardware manufacturers
and software content providers, driven by "after-market" sales of applications and
services (the composition of infrastructure and applications), will continue to blur the
distinction between designers & engineers, art & science, and invention & discovery. |
|
[mcohen-16:2007] |
Michael Cohen. Keynote Address: Spatial Media Arts at the University
of Aizu. In Hiroyuki Sagawa, editor, CSIP (Center for Strategy of International Programs) Symposium: Future Perspectives of International Cooperation Programs in University of Aizu, Aizu-Wakamatsu, March 2008. |
Many disciplines encountered in engineering or science assume some accumulation of
"truth," ideally converging on some common result. Art and design projects include
such technically objective factors, but also explicitly encourage creative expression,
subjectively motivated by aesthetics rather than "correctness." Unlike domains that
try to arrive upon a "right answer" shared by everyone, artistic disciplines (including
temporal arts like music and theatre) encourage originality, in which the best work is
unique. Spatial media explore both our real world (of 3 dimensions and 6 degrees of
freedom) and also artificial spaces of information. New technologies offer opportunities
for innovative design and advanced applications, both creative and "re-creative,"
simulating virtual and real spaces. Some examples of spatial media, targeted for
traditional (print, etc.) or electronic (internet-delivered) distribution, are presented,
including panoramic photography, stereoscopy, 3d-printing, typography, fashion design,
multimodal interfaces via information appliances, and dynamic arts like games
and virtual concerts. The feedback between co-potentiating hardware manufacturers
and software content providers, driven by "after-market" sales of applications and
services (the composition of infrastructure and applications), will continue to blur the
distinction between designers & engineers, art & science, and invention & discovery. |
|
[mcohen-17:2007] |
Michael Cohen. Invited Talk: Narrowcasting for Presence Awareness.
In Norbert Streitz, editor, InterLink Workshop on Ambient Computing
and Communication Environments, Eltville-Hattenheim, Germany, November
2007. |
[mcohen-18:2007] |
Michael Cohen. Keynote Address: Wearware, Whereware, Everyware,
and Awareware: Mobile interfaces for location-based services and presence. In
Adrian David Cheok, editor, Proc. MobileHCI: Proc. 9th Int. Conf. on Human
Computer Interaction with Mobile Devices and Services, Singapore, September
2008. |
The `4-play' convergence of telephony, television/video, internet, and wireless is driving
a remarkable proliferation of new devices and services. Mobile terminals, as intimate
as clothing, are a kind of wearable computer. A diversity of "next-generation"
form factors of evolving "smart phones" is surveyed, including mobile stereotelephony,
spawned from cyberspatial audio and augmented audio models. Meanwhile, location-based
services, along with seamless handoff and heterogeneous roaming via mimo
(multiple input/multiple output) smart antennas leading to software-defined radio
and cognitive radio, will leverage geolocation and portable gps/gis for "hyperlocality,"
geo-referenced links. Such advanced sensing will enable ubicomp and ambient intelligence,
including an awareness of user status and availability, and articulated models
of privacy, like narrowcasting, that allow users to distribute their attention and virtual
presence. Multipresence and persistent channels, encouraged by abc (always best connected) networks, might bid "sayonara" to "hello," as light-weight circuit-based
chatspaces obviate verbal hand-shaking. |
[mcohen-19:2007] |
Michael Cohen. Fukushima Foundation for the Advancement of Science
and Education (Kenzaidan): "Back-Seat Driver: Spatial Sound for Vehicular
Wayfinding and Situation Awareness", 2007-8. |
[mcohen-20:2007] |
Michael Cohen. Japan Society for Promotion of Science (Kakenhi):
"Personal Panoramic Multimodal Interface", 2007-8. |
[mcohen-21:2007] |
Michael Cohen, 2007. Program Committee, HC-2007: Tenth Int. Conf. on Human and Computer (3D Forum) |
[mcohen-22:2007] |
Michael Cohen, 2007. Reviewer, Computer Graphics and Applications |
[mcohen-23:2007] |
Michael Cohen, 2007. Program Committee, CIT: Seventh Int. Conf. on Computer and Information Technology, http://www.u-aizu.ac.jp/conference/cit07 |
[mcohen-24:2007] |
Michael Cohen, 2007. Publicity Co-Chair, CIT: Seventh Int. Conf. on Computer and Information Technology, http://www.u-aizu.ac.jp/conference/cit07 |
[mcohen-25:2007] |
Michael Cohen, 2007. Reviewer and Scientific Committee, J. of Virtual Reality and Broadcasting |
[mcohen-26:2007] |
Michael Cohen, 2007. Jury Member, CHI Interactive Exhibits Jury |
[mcohen-27:2007] |
Michael Cohen, 2007. Reviewer, IEEE Multimedia |
[mcohen-28:2007] |
Michael Cohen, 2007. Reviewer, Int. J. of Virtual Reality |
[mcohen-29:2007] |
Michael Cohen, Sabbir Alam, and Julián Villegas. Figurative Narrowcasting
Interface Controlling SIP-Based Media Server, March 2008. |
[mcohen-30:2007] |
Mohammad Sabbir Alam. Doctoral Dissertation: "Narrowcasting
Attributes for Articulated Media Privacy and Awareness in SIP Audio Conferencing",
University of Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-31:2007] |
Tatsuya Nagai. Master's Thesis: "Segwayfinding": Personal transporter
situation awareness using GPS, Google Maps, and Spatial Sound, University
of Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-32:2007] |
Takenori Nakamura. Master's Thesis: Narrowcasting Position in Mobile
Groupware: Privacy of Mobile Phones Tracking Using Collaborative Virtual
Environment, University of Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-33:2007] |
Katsuhito Koakutsu. Graduation Thesis: Using Google SketchUp to
Build the University of Aizu in Google Earth, University of Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-34:2007] |
Charith Niranjen Wegodapola Giragama. Doctoral Dissertation:
"Multilingual Similarity Between Perceptual and Semantic Structures of Audio-
Visual Modalities", University of Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-35:2007] |
Yuko Seki. Graduation Thesis: Helix Software (Fashion Design) in
Maya with Animation, University of Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-36:2007] |
Yuki Konno. Graduation Thesis: Music Spatial Gestures with a MIDI
Breath Controller, University of Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-37:2007] |
Junya Kobayashi. Graduation Thesis: Integration of a Laptop Sudden
Motion Sensor as Gyroscopic Control for Virtual Environments, University of
Aizu, 2007-8.
Thesis Advisor: Michael Cohen |
[mcohen-38:2007] |
Michael Cohen, Julián Villegas, Yuki Konno, Katsuhito Koakutsu,
Tatsuya Nagai, Takenori Nakamura, Yorinobu Mine, and Takayuki Hattori.
Introduction to Spatial Media: Computer Music, Audio Editing, Digital
Photography, Compositing, and Chromastereopsis. Aizu Wakamatsu 6th
Junior High School, September 2007. |
[mcohen-39:2007] |
Fachbereich Medien (Department of Media), Fachhochschule
Düsseldorf (University of Applied Sciences). Science presentation at
the University of Aizu, 2007.
http://www.medien.fh-duesseldorf.de/showroom/projekte/duesseldorf_fbmedien_en.html |
[mcohen-40:2007] |
Michael Cohen. IT Specialists Lectures: Introduction to Sound and
Audio, March 2008.
http://borealis.u-aizu.ac.jp/itic/syllabus.html#IT07_6 |