|
|
Most of the courses taken by engineers and computer science students emphasize scientific discipline and the accumulation of “truth.” The Computer Arts Lab. activities include such technically objective factors, but also encourage original expression, subjectively motivated by aesthetics rather than “correctness,” sometimes “putting the art before the course!” Unlike many other labs' activities that try to converge on a “right answer” sharable by everyone else, artistic disciplines encourage originality, in which the best answer is one that is like no one else's.
The Computer Arts Lab., through its resident Spatial Media Group,1 researches practical and creative applications of virtual reality and mixed (augmented, enhanced, hybrid, mediated) reality and virtuality; panoramic interfaces and spatially immersive displays (especially stereotelephonics, spatial sound, and stereography); wearable and mobile applications, computing, and interfaces; and networked multimedia, with related interests in CVEs (collaborative virtual environments), groupware and CSCW (computer-supported collaborative work); hypermedia; digital typography and electronic publishing; force-feedback displays; telecommunication semiotics (models of teleconferencing selection functions); information furniture; way-finding and navigation (including using a Segway personal transporter); entertainment computing; and ubicomp (ubiquitous computing), calm (ambient), and pervasive technology. We are particularly interested in narrowcasting commands: conference selection functions for adjusting groupware situations in which users have multiple presence, virtually existing in more than one space simultaneously. We explore realtime interactive multimedia interfaces — auditory, visual, haptic, and multimodal:
Auditory We are exploring interfaces for multichannel sound, including stereo, quadraphonic, and nearphones (mounted on our Schaire rotary motion platform), as well as two separate speaker array systems in the University-Business Innovation Center 3d Theater.2 The Helical Keyboard,3 refined and extended by Julián Villegas and featuring realtime visual music with spatial sound and stereographic graphics, is permanently installed there. Working with Dr. Durand Begault of NASA, we deployed online courseware, “Sonic,”4 organizing an introduction to desktop audio and presenting many sound samples. We use these contents in the “Intro. to Sound and Audio” graduate school course,5 which is a prerequisite for “Spatial Hearing and Virtual 3D Sound,”6 taught jointly with Prof. Jie Huang of the Human Interface Lab.
With Profs. Robert Fujii and Satoshi Nishimura we host a Computer Music Studio, featuring computer music workstations complemented by assorted amplifiers, racks, mixers, and effects processors.
We annually conduct a Student Cooperative Class Project.7 In the past we sponsored SCCPs on Digital Compositing (using Photoshop and the Gimp8), but in recent years the SCCP has been focused on Computer Music,9 studying basic music theory and DTM (desk-top music) software, including samplers and MIDI sequencers,10 to compose and perform student-authored songs. This SCCP segues into a graduate-level computer music course.11
Visual We promote creative applications of scientific visualization, encouraging the use of Mathematica12 and stereoscopy,13 including chromastereoscopy14 (3d images with depth layers cued by color). We enjoy exploiting the unique large-format immersive stereographic display in the UBIC 3d Theater. The “M-Project” student CAD and CG circle15 is hosted in our lab, under the supervision of Profs. Satoshi Nishimura and Michael Cohen. We are experimenting with various CAD authoring tools, such as 3D Studio Max, Blender, Maya, and SketchUp. Various group members are exploring the application of CAD techniques to the design and animation of exotic fashions, or the construction of a model of the university from photographs. We are also exploring creative applications of panoramic imaging and object movies,16 including a virtual tour of the university.17
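Chromastereoscopy encodes depth as hue: diffractive glasses shift warm colors toward the viewer and cool colors away, so a renderer only needs to color each layer according to its distance. As a minimal illustration (not the ChromaTek encoder itself), the following sketch maps a depth value assumed to be normalized to [0, 1] onto a hue running from red (near) to blue (far):

    import java.awt.Color;

    class ChromaDepth {
        // Map a normalized depth (0 = nearest, 1 = farthest) to a fully saturated
        // color: hue 0.0 (red) for near layers through roughly 0.67 (blue) for far ones.
        static Color depthToColor(double depth) {
            double d = Math.max(0.0, Math.min(1.0, depth));   // clamp to [0, 1]
            return Color.getHSBColor((float) (d * 0.67), 1.0f, 1.0f);
        }
    }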
Haptic We are also exploring the use of haptic interfaces, including force-display joysticks and a rotary motion platform (the “Schaire [for ‘shared chair’] Internet Chair”). A recently finished project uses the Sudden Motion Sensor in a laptop as a gyroscopic control of avatars in a virtual environment.18 We also convene annual Creative Factory Seminars.19 Past CFSs explored advanced audio interfaces and panoramic imaging, but in recent years, in conjunction with Prof. Rentaro Yoshioka20 of the Active Knowledge Engineering Lab., we have conducted a workshop on Haptic Modeling and 3d Printing,21 using force-feedback CAD workstations22 to make models that are then rapid-prototyped (as stereolithograms) with the DPPL's personal fabricator,23 closing the “idea (stored in brain neurons) → information (stored as bits) → matter (atoms)” pathway.
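The Sudden Motion Sensor reports the laptop's gravity vector, from which pitch and roll can be derived and mapped onto an avatar's orientation. The sketch below shows only that mapping, with hypothetical variable names and sample values; reading the sensor itself and updating the virtual environment are omitted.

    class TiltControl {
        // Pitch (radians): rotation about the lateral axis, from accelerometer components in g units.
        static double pitch(double ax, double ay, double az) {
            return Math.atan2(-ax, Math.sqrt(ay * ay + az * az));
        }

        // Roll (radians): rotation about the longitudinal axis.
        static double roll(double ay, double az) {
            return Math.atan2(ay, az);
        }

        public static void main(String[] args) {
            double ax = 0.20, ay = 0.10, az = 0.97;   // hypothetical reading: tilted forward, slightly right
            System.out.printf("pitch = %.2f rad, roll = %.2f rad%n", pitch(ax, ay, az), roll(ay, az));
            // These angles would then drive the avatar's orientation in the virtual environment.
        }
    }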
Multimodal Using such multimodal interfaces, our students have crafted driving simulators, location-based games featuring the rotary motion platform,24 and synaesthetic (cross-sensory modality) visual and haptic music players (rendering songs as light shows25 or dancing chairs26). Using visual sensing techniques, narrowcasting postures can be recognized and used to control distributed chatspaces or virtual concerts. A student project deployed a microphone vector to track a moving sound source, using its network interface to trigger internet appliances (like lights that follow the source). We are also developing a driving simulator whose collision detection modulates a force-feedback steering wheel and the rotary motion platform. A recent version of the project features a dual-steering (front and back) fire truck, racing through a 3d model of our campus to reach a fire, piloted by two drivers, and featuring spatial sound effects. We are interested in using figurative interfaces to express emotion and to control narrowcasting privacy, via a media mixing system based on the Session Initiation Protocol, for advanced conferencing features. We are also exploring extensions of Open Wonderland,27 an open-source framework for developing virtual reality environments. This year, group members developed windshield wipers that dance, featuring beat detection, a digital phase-locked loop, and articulated wiper gestures.28
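Narrowcasting, mentioned above as one of the group's central themes, reduces to an inclusion/exclusion rule over sources and sinks: a source is silenced if it is explicitly muted, or implicitly if some peer is selected (soloed) and it is not, and sinks behave symmetrically with deafen and attend. The following minimal sketch expresses that rule with hypothetical class and field names; it is not the group's actual CVE API.

    import java.util.List;

    class Narrowcasting {
        static class Source { boolean muted, selected; }
        static class Sink   { boolean deafened, attended; }

        // A source is active if it is not muted, and is selected whenever any peer is selected.
        static boolean sourceActive(Source s, List<Source> peers) {
            boolean anySelected = peers.stream().anyMatch(p -> p.selected);
            return !s.muted && (!anySelected || s.selected);
        }

        // Symmetrically, a sink is active if not deafened, and attended whenever any peer is attended.
        static boolean sinkActive(Sink k, List<Sink> peers) {
            boolean anyAttended = peers.stream().anyMatch(p -> p.attended);
            return !k.deafened && (!anyAttended || k.attended);
        }

        // A stream is rendered only when both its source and its sink are active.
        static boolean rendered(Source s, List<Source> sources, Sink k, List<Sink> sinks) {
            return sourceActive(s, sources) && sinkActive(k, sinks);
        }
    }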
We are also exploring mobile (nomadic, portable) computing, working in conjunction with university spin-offs The Designium,29 Eyes, JAPAN,30 and GClue.31 Such keitai (mobile phone)-based interfaces can be used to design kaleidoscopic “wallpaper” screen savers, or to control internet appliances, panoramic imaging, spatial sound, or motion platforms. An exciting project combines spatial sound with way-finding, using GPS tracking, the Segway personal transporter,32 and directional transfer functions.
An advanced undergraduate course on “Human Interface and Virtual Reality”33 surveys many of these topics, contextualized by “machinima” (machine cinema) using “Alice,”34 featuring student-designed and -programmed, computer-generated interactive stories with 3d animation (including texture maps, photographic compositing, audio effects, speech synthesis, and background music), plus segments on panoramic and turnoramic imagery, stereopsis, and groupware.
Other activities:
We host an annual symposium, the Int. Symposium on Spatial Media,35 inviting experts to share their knowledge and passion regarding such themes as “Spatial Sound and Spatial Telepresence” ('01), “Magic in Math and Music” ('02), “Advanced Multimedia and Virtual Reality” ('03), “Spatial Sound” ('04), “Hearing and Sound Installations” ('05), “Sound, Audio, and Music” ('06), “Interactive Media, Security, and Stereography” ('06), “Music XML and the Structure of Swing, Understanding Color Media, Media Grid, and Visualization Tools” ('07), “Multimedia Computing” ('08), “Systems and Applications” ('09–'10), “Distributed, Mobile, and Ubiquitous Multimodal Interfaces” ('10–'11), and “Social Multimedia” ('11–'12).
Our lab sponsors several student performance circles, including the Yosakoi IT Dance Circle36 and the Disco Mix Club. We also sponsor a couple of other student circles, the Dual Boot (Ultimate Frisbee) Flying Disc Club37 and the Furiten Mah Jongg Circle.38
We are working with Aizu Yougo Gakko,39 the special education school next to the university, to develop multimedia interfaces that can encourage and entertain students with special needs. We have consulted on deployment of switch-adapted media players, have deployed some iPad accessibility applications, and have developed a song-selection program using a “step-scan, select” affordance.
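In a “step-scan, select” affordance a single switch does all the work: one action (or a timer) steps a highlight through the available songs, and a second action plays the highlighted item. A minimal sketch of that interaction logic, with hypothetical names rather than the deployed program's code, might look like this:

    import java.util.List;

    class StepScanSelector {
        private final List<String> songs;
        private int highlighted = 0;

        StepScanSelector(List<String> songs) { this.songs = songs; }

        // Advance the scan highlight to the next song, wrapping around;
        // the caller would speak or enlarge the returned title as feedback.
        String step() {
            highlighted = (highlighted + 1) % songs.size();
            return songs.get(highlighted);
        }

        // Select (play) whatever is currently highlighted.
        String select() {
            return songs.get(highlighted);
        }
    }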
Through the research & development and the deployment & integration of stereographic, spatial sound, haptic, and mobile applications, including virtual and mixed reality, we nurture scientific and artistic interest in advanced computer-human and human-human communication. Our ultimate domain is the exploration of interfaces and artifacts that are literally sensational.
Some relevant links:
1 http://www.u-aizu.ac.jp/~mcohen/spatial-media/welcome.html
2 http://www.ubic-u-aizu.jp/shisetsu/kengaku.html
3 http://www.u-aizu.ac.jp/~mcohen/spatial-media/groupware/clients/HKB/welcome-J.html, http://www.ubic-u-aizu.jp/images/stories/pdf/coh-02-01.pdf
4 http://sonic.u-aizu.ac.jp
5 http://www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/graduate/Sound+Audio/syllabus.html
6 http://web-int/~j-huang/Lecture/3DSound/3dsound.html
7 http://www.u-aizu.ac.jp/official/curriculum/syllabus/3_E_000.html
8 http://www.gimp.org
9 http://www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/undergraduate/Computer_Music
10 http://www.apple.com/ilife/garageband, http://www.pgmusic.com/band.htm
11 http://sonic.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/graduate/Computer_Music/syllabus.html
12 http://www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/Mma.html
13 http://www.u-aizu.ac.jp/~mcohen/spatial-media/stereograms.html
14 http://www.chromatek.com
15 http://mpro-aizu.blogspot.com
16 http://sonic.u-aizu.ac.jp/spatial-media/QTVR/
17 http://www.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/undergraduate/HI&VR/VirtualTour/
18 http://www.u-aizu.ac.jp/~mcohen/welcome/publications/SMS-CVE.mov
19 http://www.u-aizu.ac.jp/official/curriculum/syllabusCFS/curr04-cfs_e.html
20 http://www.u-aizu.ac.jp/~rentaro
21 http://www.u-aizu.ac.jp/official/curriculum/syllabusCFS/curr04-cfs-1_e.html#CFS1-1
22 http://www.sensable.com/products-freeform-systems.htm
23 http://www.zcorp.com/Products/3D-Printers/spage.aspx
24 http://sonic.u-aizu.ac.jp/spatial-media/mixedreality/VideoClips/KuruKuru-pitcher-long.mov
25 http://sonic.u-aizu.ac.jp/spatial-media/mixedreality/VideoClips/CITMixedReality_Demo.wmv
26 http://sonic.u-aizu.ac.jp/spatial-media/mixedreality/VideoClips/keitai+Schaire2.mov
27 http://openwonderland.org
28 http://www.u-aizu.ac.jp/~mcohen/spatial-media/VMPMyRide
29 http://www.thedesignium.com
30 http://www.aizu.com
31 http://www.gclue.com
32 http://www.segway.com
33 http://web-int.u-aizu.ac.jp/~mcohen/welcome/courses/AizuDai/undergraduate/HI&VR
34 http://www.alice.org
35 http://www.u-aizu.ac.jp/~mcohen/welcome/ISSM/10-12/
36 http://www.u-aizu.ac.jp/circles/yosakoi
37 http://www.u-aizu.ac.jp/circles/dualboot
38 http://www.u-aizu.ac.jp/circles/furiten
39 http://www.aizu-sh.fks.ed.jp
Rasika Ranaweera, Michael Frishkopf, and Michael Cohen. Folkways in Wonderland: a laboratory for the ethnomusicology of cyberworlds. In Proc. Int. Conf. on Cyberworlds, pages 106-112, Banff, Alberta, Canada, oct 2011.
cw2011.cpsc.ucalgary.ca. In this paper, we outline a musical cyberworld (a collaborative, immersive virtual environment), together with an experimental design, for the purposes of initiating the ethnomusicology of controlled musical cyberspace. Ethnomusicology, the ethnographic study of music in its social environment, has typically been conducted through qualitative fieldwork in uncontrolled, real-world, socio-cultural environments. Recently, ethnomusicologists have begun to attend to the study of virtual environments, including pre-existing cyberworlds (such as videogames). But we adopt an unprecedented approach by designing our own custom musical cyberworld to serve as a virtual laboratory for the ethnographic study of music. By constructing an immersive social cyberworld suitable for ethnomusicological fieldwork, we aim for much greater control than has heretofore been possible in ethnomusicological research, leading to results that may suggest better ways of designing musical cyberworlds for research, discovery, learning, entertainment, and e-commerce, as well as pointing towards broader principles underlying the role of music in human interaction and community-formation. Such controlled research can usefully supplement—though never replace—traditional real-world fieldwork.
Senaka Amarakeerthi, Tin Lay Nwe, Liyanage C De Silva, and Michael Cohen. Emotion Classification Using Two-Layered Cascaded Subband Filters. In Proc. Interspeech, page (none), Florence, aug 2011.
http://www.interspeech2011.org. Speech is one of the most important signals that can be used to detect human emotions. Speech is modulated by different emotions by varying frequency- and energy-related acoustic parameters such as pitch, energy, and formants. In this paper, we describe research on analyzing inter- and intra-subband energy variations to differentiate five emotions. The emotions considered are anger, fear, dislike, sadness, and neutral. We employ a Two-Layered Cascaded Subband (TLCS) filter to study the energy variations for extraction of acoustic features. Experiments were conducted on the Berlin Emotional Data Corpus (BEDC). We achieve average accuracy of 76.4% and 69.3% for speaker-dependent and -independent emotion classifications, respectively.
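As a generic illustration of subband-energy features (not the paper's TLCS filter itself), the sketch below computes log energies in a few equal-width frequency bands per analysis frame; intra-band variation is then the spread of one band's energy across frames, and inter-band variation the spread across bands within a frame. The frame length, band count, and names are assumptions.

    class SubbandFeatures {
        // Log energy of signal[offset .. offset+frame) in `bands` equal-width frequency bands,
        // computed with a naive DFT for clarity (a real system would use an FFT or filter bank).
        static double[] bandLogEnergies(double[] signal, int offset, int frame, int bands) {
            double[] energy = new double[bands];
            for (int k = 1; k <= frame / 2; k++) {               // bins up to the Nyquist frequency
                double re = 0, im = 0;
                for (int n = 0; n < frame; n++) {
                    double phase = 2 * Math.PI * k * n / frame;
                    re += signal[offset + n] * Math.cos(phase);
                    im -= signal[offset + n] * Math.sin(phase);
                }
                int band = Math.min(bands - 1, (k - 1) * bands / (frame / 2));
                energy[band] += re * re + im * im;
            }
            for (int b = 0; b < bands; b++) energy[b] = Math.log(energy[b] + 1e-12);
            return energy;
        }

        // Standard deviation, usable for both intra-band (over frames) and inter-band (over bands) variation.
        static double stddev(double[] x) {
            double mean = 0, var = 0;
            for (double v : x) mean += v / x.length;
            for (double v : x) var += (v - mean) * (v - mean) / x.length;
            return Math.sqrt(var);
        }
    }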
Senaka Amarakeerthi, Kithsiri Liyanage, and Michael Cohen. Delay Pools-Based Uninterrupted and Dynamic Bandwidth Distribution of Squid Proxy Cache. In Vitaly Klyuev and Alexander Vazhenin, editors, Proc. HCCE: Int. Conf. on Human-Centered Computer Environments, pages 244–246, March 2012.
http://sparth.u-aizu.ac.jp/hcce2012, isbn 978-1-4503-1191-5, doi 10.1145/2160749.2160801, http://dl.acm.org/citation.cfm?id=2160801. Squid proxy server is a popular web content caching service trusted by many network administrators. In this paper, we describe a method of managing the bandwidth of a Squid proxy cache through the World Wide Web, allowing Squid administrators and authorized users to allocate a percentage of bandwidth to a particular computer or group of computers without disturbing connected clients. This approach is useful for Squid administrators with low-bandwidth internet connections who must dynamically prioritize existing bandwidth for bandwidth-hungry applications such as videoconferencing and bulk file downloading and uploading. Delay pools are a widely used technique for controlling users' bandwidth utilization, but in a Squid proxy cache the delay pools settings must normally be made by editing a configuration file manually and restarting the service. The proposed framework instead exposes delay pool parameters through a web interface; PHP and C++ programs were used in implementing the system. With this approach, existing bandwidth can be allocated among hosts or subnets dynamically without disturbing existing connections. By analyzing realtime bandwidth distribution graphs, we concluded that the proposed framework can distribute bandwidth as required without affecting connected clients.
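For context, Squid's delay pools are ordinary squid.conf directives; the fragment below generates the kind of stanza (here a single class-1 pool with an aggregate cap in bytes per second) that such a framework rewrites before asking Squid to reload. It is only a sketch standing in for the paper's PHP/C++ tools, and the include-file path and cap value are assumptions.

    import java.io.FileWriter;
    import java.io.IOException;

    class DelayPoolWriter {
        // Append a class-1 delay-pool stanza (single aggregate bucket) to a configuration file.
        static void writePool(String path, int bytesPerSecond) throws IOException {
            String stanza =
                    "delay_pools 1\n" +
                    "delay_class 1 1\n" +
                    "delay_access 1 allow all\n" +
                    "delay_parameters 1 " + bytesPerSecond + "/" + bytesPerSecond + "\n";
            try (FileWriter out = new FileWriter(path, true)) {
                out.write(stanza);
            }
        }

        public static void main(String[] args) throws Exception {
            writePool("/etc/squid/delay-pools.conf", 64_000);                 // hypothetical include file
            new ProcessBuilder("squid", "-k", "reconfigure").inheritIO().start().waitFor();
        }
    }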
Rasika Ranaweera, Michael Cohen, and Shun Endo. Smartphone as Articulated Conductor's Baton for Virtual Concerts. In Naotoshi Osaka, editor, ACMP: Proc. of the Asia Computer Music Project, page (none), Tokyo, dec 2011. JSSA: Japanese Society for Sonic Arts.
www.acmp.asia/acmp2011sche.html. We describe how smartphones can be used as a simplified baton for an ensemble conductor in virtual concerts. Alice, a 3D programming environment, can be used to develop interactive virtual spaces: one can simply drag-and-drop 3D graphics and transform/arrange them in a WYSIWYG editor, and Alice also provides a rich API to control and display objects, including some audio-related functions. We have created a virtual concert application in which instruments are arranged around a conductor (in this case the user) located at their center. A workstation running Alice can render such scenes, including displaying auditory cues via stereo speakers. A user-conductor with a smartphone can simply point at a preferred instrument and tap to select or start playing. By tilting the device or sliding a control on the screen, the volume of a selection can be adjusted. We used our own collaborative virtual environment (CVE) session server as the communication link between the smartphone or tablet and Alice: a middleware program reads the orientation of the smartphone, sending such events to the CVE server, and another program, the Alice-CVE Bridge, retrieves these events through the server and selects/plays/adjusts the instruments in the Alice virtual environment. Such explorations suggest the power of emerging mobile devices as generalized remote controls for interactive multimedia and ubiquitous computing.
Kensuke Nishimura and Michael Cohen. Multitouch Media Player for Accessibility. In Vitaly Klyuev and Alexander Vazhenin, editors, Proc. HCCE: Int. Conf. on Human-Centered Computer Environments, pages 184–189, Aizu-Wakamatsu, mar 2012.
http://sparth.u-aizu.ac.jp/hcce2012, isbn 978-1-4503-1191-5, doi 10.1145/2160749.2160787, http://dl.acm.org/citation.cfm?id=2160787. We have developed two versions of a media player designed for differently abled users. The media player, suitable for selecting and playing songs or videos, runs either on computer-hosted “Alice” or on an iOS smartphone or tablet. Even though special-needs users have trouble with an ordinary mouse and keyboard, such accessibility allows them to enjoy selected media.
Rasika Ranaweera, Michael Cohen, and Shun Endo. iBaton: Conducting Virtual Concerts Using Smartphones. In Vitaly Klyuev and Alexander Vazhenin, editors, Proc. HCCE: Int. Conf. on Human-Centered Computer Environments, pages 178-183, Aizu-Wakamatsu, mar 2012.
http://sparth.u-aizu.ac.jp/hcce2012, isbn 978-1-4503-1191-5, doi 10.1145/2160749.2160786, http://dl.acm.org/citation.cfm?id=2160786. With the emergence of virtual environments, even general computer users can drag-and-drop 3d objects (cities, buildings, furniture, instruments, controls, animals, and avatars) from built-in galleries and arrange them to create attractive cyberworlds. We have created a virtual concert application using Alice, a 3d programming environment, in which instruments are arranged around a virtual conductor (in this case the user) located at their center. A user-conductor with a smartphone can use it as a simplified baton, pointing at a preferred instrument and tapping to select or start playing. When selected, an instrument is jiggled or its components dilated and contracted, and a spotlight appears until the instrument is muted, providing the conductor and audience with visual cues about the ensemble. One can also adjust the volume/panning of a selected instrument by sliding corresponding controls on the smartphone screen. Alice 3 (Beta) provides a rich API to control and display objects, but its native audio capabilities are limited to playing an audio file and adjusting volume before playing. Using a plugin for NetBeans, Alice programs (scenarios) can be edited as Java code outside Alice, so with the Java Sound API, which provides more complete control of sampled audio and MIDI, we implemented the audio-related functions of the virtual concert. Smartphones have magnetometers that can be used to detect yaw; programmed through a compatible language, a smartphone's spatial information can be accessed using the platform's API and transmitted to a collaborative virtual environment (CVE) session server via Wi-Fi. A middleware program, the Alice-CVE Bridge, retrieves these events through the server and selects/plays/adjusts instruments in the Alice virtual environment. Such explorations suggest the power of emerging mobile devices as generalized remote controls for interactive multimedia and ubiquitous computing.
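The communication pattern described above (a phone publishing orientation events to a channel on a session server, and a bridge subscribing in order to steer Alice) can be pictured with a tiny client sketch. The host, port, channel name, and line-oriented message format here are assumptions for illustration, not the group's actual CVE protocol.

    import java.io.PrintWriter;
    import java.net.Socket;

    class YawPublisher {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("localhost", 12345);               // hypothetical session server
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                out.println("SUBSCRIBE concert");                              // join a shared channel
                for (double yaw = 0; yaw < 360; yaw += 15) {                   // stand-in for magnetometer samples
                    out.println("EVENT concert yaw " + yaw);                   // e.g., point at an instrument
                    Thread.sleep(100);
                }
            }
        }
    }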
Prabath Weerasinghe and Michael Cohen. Beat Detection Animating Virtual Environment Models. In Vitaly Klyuev and Alexander Vazhenin, editors, Proc. HCCE: Int. Conf. on Human-Centered Computer Environments, pages 200-202, Aizu-Wakamatsu, mar 2012.
http://sparth.u-aizu.ac.jp/hcce2012, isbn 978-1-4503-1191-5, doi 10.1145/2160749.2160790, http://dl.acm.org/citation.cfm?id=2160790. Alice is an innovative 3D programming environment that makes it easy to create an animation. Various virtual environment (VE) models are available in the Alice 3D environment. We created VE models using the Alice 3D IDE (integrated development environment). We deploy a beat detector, based on PD (Pure Data, a free dataflow programming environment similar to Max/MSP), that can extract the beat of a song while it is playing. Using our Alice-CVE (Collaborative Virtual Environment) Bridge and CVE-PD Bridge, we create a communication link between the beat detector and the Alice 3D environment. The CVE is a Java client-server protocol developed by the Spatial Media Group of the University of Aizu: clients connect to a session server host via channels, and when clients need to communicate with each other they subscribe to the same channel. The Alice-CVE Bridge allows any device that can connect to the CVE server to communicate with Alice without regard for architectural differences. When a song is played, its beat can be detected and the data sent to the CVE to animate objects and avatars: an avatar can dance (admittedly poorly) in the Alice 3D environment while receiving a realtime rhythm data stream from a CVE session server, and stage color can also be rhythmically changed according to the beat over the same communication link.
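As a rough picture of what such a beat detector does before handing events to the CVE, the sketch below flags a beat whenever a frame's energy rises well above the recent average energy. It is a generic energy-flux onset detector, not the project's Pure Data patch, and the history length and threshold are assumptions.

    class BeatDetector {
        private final double[] history = new double[43];   // roughly 1 s of 1024-sample frames at 44.1 kHz
        private int filled = 0, index = 0;
        private static final double THRESHOLD = 1.4;       // assumed sensitivity

        // Feed one frame of samples; returns true if this frame looks like a beat.
        boolean onFrame(double[] frame) {
            double energy = 0;
            for (double s : frame) energy += s * s;

            double average = 0;
            for (int i = 0; i < filled; i++) average += history[i] / Math.max(filled, 1);

            history[index] = energy;                        // update the ring buffer of recent energies
            index = (index + 1) % history.length;
            filled = Math.min(filled + 1, history.length);

            // On true, a rhythm event would be sent to the CVE session server to animate the avatar.
            return filled > 10 && energy > THRESHOLD * average;
        }
    }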
Michael Cohen, Rasika Ranaweera, Hayato Ito, Shun Endo, Sascha Holesch, and Julián Villegas. Whirled Worlds: Pointing and Spinning Smartphones and Tablets to Control Multimodal Augmented Reality Displays. In HotMobile: Proc. Int. Wkshp. on Mobile Computing Systems and Applications, page (online archive), San Diego, feb 2012.
http://www.hotmobile.org/2012/papers/dandp/abstract.pdf. Modern smartphones and tablets have magnetometers that can be used to detect yaw, and this data can be distributed to modulate ambient media. We have implemented such functionality for both a Google Android smartphone and Apple iOS iPhone & iPad. A client-server architecture synchronizes distributed displays across shared channels, including image-based renderings of panoramic photos and object movies, spatial sound (periphonic) speaker arrays, rotary motion platforms, and the position of avatars or other objects in virtual environments such as Alice and Open Wonderland.
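On Android, the yaw (azimuth) driving these whirling interfaces can be derived from the rotation-vector sensor. The fragment below is a minimal sketch of that reading only; registering the listener and forwarding the value to the client-server architecture are omitted, and it is not the group's production code.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    class YawListener implements SensorEventListener {
        private final float[] rotation = new float[9];
        private final float[] orientation = new float[3];

        @Override public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
            SensorManager.getRotationMatrixFromVector(rotation, event.values);
            SensorManager.getOrientation(rotation, orientation);
            float yaw = orientation[0];   // azimuth (radians) about the vertical axis
            // ...forward yaw to the shared channel to spin panoramas, avatars, or motion platforms
        }

        @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }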
Prabath Weerasinghe and Michael Cohen. Beat Detection Animating Virtual Environment Models. In Naotoshi Osaka, editor, ACMP: Proc. of the Asia Computer Music Project, in Conjunction with the Joint Meeting, page (none), Tokyo, dec 2011. JSSA: Japanese Society for Sonic Arts.
www.acmp.asia/acmp2011sche.html. Alice is an innovative 3D programming environment that makes it easy to create an animation. Various virtual environment (VE) models are available in the Alice 3D environment. We created VE models using the Alice 3D IDE (integrated development environment). We deploy a beat detector, based on PD (Pure Data, a free dataflow programming environment similar to Max/MSP), that can extract the beat of a song while it is playing. Using our Alice-CVE (Collaborative Virtual Environment) Bridge and CVE-PD Bridge, we create a communication link between the beat detector and the Alice 3D environment. The CVE is a Java client-server protocol developed by the Spatial Media Group of the University of Aizu: clients connect to a session server host via channels, and when clients need to communicate with each other they subscribe to the same channel. The Alice-CVE Bridge allows any device that can connect to the CVE server to communicate with Alice without regard for architectural differences. When a song is played, its beat can be detected and the data sent to the CVE to animate objects and avatars: an avatar can dance (admittedly poorly) in the Alice 3D environment while receiving a realtime rhythm data stream from a CVE session server, and stage color can also be rhythmically changed according to the beat over the same communication link.
Michael Cohen, Rasika Ranaweera, Hayato Ito, Shun Endo, Sascha Holesch, and Julián Villegas. Whirling Interfaces: Smartphones & Tablets as Spinnable Affordances. In ICAT: Proc. Int. Conf. on Artificial Reality and Telexistence, page 155, Osaka, nov 2011.
www.ic-at.org/2011, issn 1345-1278. Interfaces featuring smartphones and tablets that use magnetometer-derived orientation sensing can be used to modulate virtual displays. Embedding such devices into a spinnable affordance allows a “spinning plate”-style interface, a novel interaction technique. Either static (pointing) or dynamic (whirled) mode can be used to control multimodal display, including panoramic and turnoramic images, the positions of avatars in virtual environments, and spatial sound.
Michael Cohen, October 2011.
Program Committee, NIME 2011 (Int. Conf. on New Interfaces for Musical Expression), http://www.eecs.umich.edu/nime2012/
Michael Cohen, October 2011.
Program Committee, IEEE ICEC 2011 (Int. Conf. on Entertainment Computing), http://www.icec2011.org
Michael Cohen, 2011.
ICAT (Int. Conf. on Artificial Reality and Telexistence) Best Paper Award Committee Co-Chair, http://www.ic-at.org/2011/
Michael Cohen, March 2011-12. Executive Committee, IEEE Computer Society Technical Committee on Computer-Generated Music
Michael Cohen, March 2012.
Program Committee, HC: Fourteenth Int. Conf. on Human and Computer (Hamamatsu and Aizu-Wakamatsu and Duesseldorf), http://sparth.u-aizu.ac.jp/hcce2012, http://ktm11.eng.shizuoka.ac.jp/HC2011/
Michael Cohen, 2011.
Reviewer, Haptics Symposium, 2012.hapticssymposium.org
Michael Cohen, 2011-12. Reviewer, Entertainment Computing, http://www.journals.elsevier.com/entertainment-computing/#description
Michael Cohen, 2011-12. Voting Member, IEEE MMTC (Multimedia Communications Technical Committee), http://community.comsoc.org/groups/ieee-mmtc
Michael Cohen, 2011-12. Editorial Review Board, ACM Computers in Entertainment (CiE), http://www.acm.org/pubs/cie/
Michael Cohen, 2011-12. Reviewer and Scientific Committee, JVRB, The Journal of Virtual Reality and Broadcasting, http://www.jvrb.org
Shohei Abe (s1160005). USB Interface for Driving Simulator Controls. Graduation thesis, School of Computer Science and Engineering, 2011-12.
Thesis Adviser: Michael Cohen
Koichirou Amitou (s1160008). Developing Driving Simulator with Alice 3.0 and Cockpit of a Real Vehicle. Graduation thesis, School of Computer Science and Engineering, 2011-12.
Thesis Adviser: Michael Cohen
Masaki Okano (s1160049). Narrowcasting Interface for Alice. Graduation thesis, School of Computer Science and Engineering, 2011-12.
Thesis Adviser: Michael Cohen
Kensuke Nishimura (s1160160). Multimedia for Accessibility: Media Players for Special Users. Graduation thesis, School of Computer Science and Engineering, 2011-12.
Thesis Adviser: Michael Cohen
Shun Endo (s1160037). Whirling Interface: iPhone and iPad as Spinnable Affordances. Graduation thesis, School of Computer Science and Engineering, 2011-12.
Thesis Adviser: Michael Cohen
Hayato Ito (s1160049). Whirling Interface: Spinnable Interface for Android Smartphone. Graduation thesis, School of Computer Science and Engineering, 2011-12.
Thesis Adviser: Michael Cohen
Prabath Weerasinghe (m5141110). Animating Virtual Environments with Voice-Based Emotion and Beat-Tracked Rhythm. Master's thesis, Graduate School of Computer Science and Engineering, 2010-12.
Thesis Adviser: Michael Cohen
Kensuke Nishimura, Rasika Ranaweera, and Michael Cohen. “Yowme” Cybereye-exam. Health 2.0 Hackathon; Codethon, Koriyama, February 2012.
Combining modern desktop virtual reality with remote control and gestural interpretation of a smartphone, we reimagine the traditional eye exam in a digital format. http://health2con.jp/hackathon/