ISSM'18-'19: Program
The Nineteenth International Symposium on Spatial Media
in conjunction with
the 2018 AES International Conference on Spatial Reproduction: Aesthetics and Science
- Dates
- Friday-Saturday, Aug. 3-4, 2018
(Schedule under construction and subject to adjustment.)
- Demonstrations and Technical Session: Friday, Aug. 3, Room S1
-
- Time
- 1320-1410
- Speaker
- César Daniel Salvador Castañeda; セサル ダニエル サルバドル カスタニェダ
- Title
- “Synthesis of head-related transfer functions in the peripersonal space”
- Abstract
- Head-related transfer functions (HRTFs) are essential filters in 3D audio playback devices for personal use. HRTFs describe the transmission of sound from a point in space to the eardrums of a listener. Studies on auditory distance perception highlight the need to obtain HRTFs for positions within reaching distance, so as to increase realism when binaurally rendering sounds in the peripersonal space. However, mainstream techniques are oriented toward obtaining HRTFs for sound sources beyond 1 m from the head's center. This talk overviews state-of-the-art methods for synthesizing near-distance HRTFs from far-distance ones. Special attention is given to space-transform-domain methods, and the possibilities they offer for adding individual distance-dependent features are discussed.
- Bio
-
César D. Salvador received the Graduate degree from the Pontifical Catholic University of Peru, Lima, Peru, in 2005, and the M.Sc. and Ph.D. degrees in 2013 and 2016, respectively, both from the Graduate School of Information Sciences, Tohoku University, Sendai, Japan. He is currently an Assistant Professor with the Research Institute of Electrical Communication, Tohoku University. His research interests include theoretical acoustics, spatial hearing, array signal processing, and 3D audio technology.
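Space-transform-domain methods of the kind mentioned in the abstract typically represent far-field HRTFs in spherical harmonics and translate them to a nearer radius with per-order radial filters. As a minimal, hypothetical sketch of that radial step only (a free-field point-source model, not necessarily the speaker's actual method; the frequencies and radii are illustrative), using SciPy's spherical Bessel functions:

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def spherical_hankel2(n, x):
    """Spherical Hankel function of the second kind, h_n^(2)(x)."""
    return spherical_jn(n, x) - 1j * spherical_yn(n, x)

def radial_translation_filter(n, k, r_near, r_far):
    """Per-order filter that moves an order-n spherical-harmonic
    HRTF coefficient from radius r_far to r_near (point-source model)."""
    return spherical_hankel2(n, k * r_near) / spherical_hankel2(n, k * r_far)

c = 343.0                       # speed of sound (m/s)
k = 2 * np.pi * 1000.0 / c      # wavenumber at 1 kHz
# Filters for orders 0..4, translating from 1.5 m into the peripersonal space at 0.25 m
filters = [radial_translation_filter(n, k, 0.25, 1.5) for n in range(5)]
```

For order 0 this ratio reduces to the familiar inverse-distance gain (here 1.5 m / 0.25 m = 6, about 15.6 dB); the higher orders add the distance-dependent spectral shaping that plain gain scaling misses.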
-
- Time
- 1410-1500
- Speaker
- Juan Camilo Arévalo Arboleda
- Title
- “Towards an Efficient Algorithm to Get the Chorus of a Salsa Song”
- Abstract
- Salsa is a well-known musical genre and part of Latin-American cultural identity. The first step toward a scientific analysis of this genre is to analyze the structure of Salsa songs; moreover, the most representative part of a Salsa song is its chorus. In this research, we detail the design and implementation of an algorithm for extracting the chorus of any Salsa song.
- Bio
- Juan Camilo Arévalo Arboleda is a fifth-year student at Pontificia Universidad Javeriana Cali in Colombia, where he is pursuing a degree in Computer Science. In his studies, he focuses on audio processing, music information retrieval, and video games. He is a member of the DESTINO research group, where he has worked as a research assistant and where he is completing his work experience on the project "Historical research and accessible digital representation: the artistic heritage during the war and post-war periods in Spain," a project of the Complutense University of Madrid and the Ministry of Science, Innovation and Universities of Spain, in collaboration with Pontificia Universidad Javeriana Cali and the Institute of Cultural Heritage of Spain.
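The abstract does not spell out the algorithm; one common family of chorus detectors scores every candidate segment by how strongly it repeats elsewhere in the song, e.g. via diagonal sums of a self-similarity matrix over per-frame features. A minimal, hypothetical sketch of that repetition score (NumPy only; the function name and toy data are mine, not the author's):

```python
import numpy as np

def most_repeated_segment(features, seg_len):
    """Return the start frame of the segment that best matches some
    other, non-overlapping segment, via a cosine self-similarity matrix."""
    F = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    S = F @ F.T                                  # cosine self-similarity
    n = len(F)
    best_start, best_score = 0, -np.inf
    for start in range(n - seg_len + 1):
        for other in range(n - seg_len + 1):
            if abs(other - start) < seg_len:     # skip trivial self-match
                continue
            score = np.trace(S[start:start + seg_len,
                               other:other + seg_len])
            if score > best_score:
                best_start, best_score = start, score
    return best_start

# Toy check: frames 10..19 recur exactly at 40..49
rng = np.random.default_rng(0)
feats = rng.normal(size=(60, 12))                # stand-in for chroma frames
feats[40:50] = feats[10:20]
start = most_repeated_segment(feats, 10)         # -> 10
```

In a real system the feature rows would be chroma (or similar) vectors extracted from the audio, and genre-specific cues could be layered on top of this generic repetition score.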
-
- Time
- 1510-1600
- Speaker
- Gerardo Mauricio Sarria
- Title
- “From Roughness Values to Chords”
- Abstract
-
The analysis of perceived musical tension has a long history in psychoacoustics. In particular, the concept of roughness, the buzzing sound quality of close intervals, has been correlated with the multidimensional perception of consonance/dissonance. Building on this, models have been proposed to measure the roughness of sound aggregates such as chords, based on approximations of experimentally derived curves of perceived "raspiness" as a function of the frequency separation of chord pitches. These models provide measures for computing the roughness values of given complex sound aggregates. In this work we take the inverse path: computing chords from given roughness values. We provide musicians with a constraint-programming-based tool to infer chord sequences, each chord with a given number of notes, that follow a prescribed evolution of roughness values. This offers the composer the possibility of exploring different tension/relaxation musical alternatives. We consider two different roughness models and implement them as real-number equations in Gecode, coupling them with constraints on pitch values to construct the chords. Finally, we use the OpenMusic software to display and test the results.
- Bio
-
Gerardo Sarria is a computer scientist from Pontificia Universidad Javeriana Cali, Colombia, and holds a doctorate in engineering from Universidad del Valle, Colombia. He has worked on several research and development projects in different areas of computing, such as constraint programming, process calculi, semantics of programming languages, concurrency models, machine learning, and data mining. He has a great affection for computer music and for the development of theories and tools to model multimedia interaction scenarios. He is currently an Associate Professor in the School of Engineering at Pontificia Universidad Javeriana Cali, Colombia.
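For a sense of the "inverse path" the abstract describes, here is a small, hypothetical sketch (not the authors' Gecode implementation): it evaluates one well-known roughness model, Sethares' approximation of the Plomp-Levelt curves for pure tones, and then inverts it by exhaustive search over intervals instead of constraint programming, picking the dyad above C4 whose roughness best matches a target value.

```python
import math

def sethares_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sethares' approximation of the Plomp-Levelt roughness curve
    for two pure tones of frequencies f1, f2 and amplitudes a1, a2."""
    d_star, s1, s2 = 0.24, 0.021, 19.0          # curve-fit constants
    b1, b2 = 3.5, 5.75
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = d_star / (s1 * f_lo + s2)               # frequency-dependent scaling
    x = s * (f_hi - f_lo)
    return min(a1, a2) * (math.exp(-b1 * x) - math.exp(-b2 * x))

def chord_roughness(freqs):
    """Total roughness of a chord: sum over all tone pairs."""
    return sum(sethares_dissonance(freqs[i], freqs[j])
               for i in range(len(freqs)) for j in range(i + 1, len(freqs)))

def dyad_for_roughness(target, base=261.63):
    """Inverse path, brute-force style: the interval (in semitones
    above `base`) whose dyad roughness is closest to `target`."""
    return min(range(1, 13),
               key=lambda n: abs(chord_roughness([base, base * 2 ** (n / 12)])
                                 - target))

rough = chord_roughness([261.63, 277.18])   # minor second: rough
smooth = chord_roughness([261.63, 392.00])  # perfect fifth: smoother
```

The talk's tool replaces this exhaustive search with Gecode constraints over pitch variables, which scales to sequences of many-note chords rather than single dyads.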
-
- Time
- 1600-1700
- Demonstrations
-
- Speaker arrays (Julián Villegas; ジュリアン ヴィジェガス)
- Narrowcasting (Michael Cohen; 公園 マイケル)
- Driving simulator (Kawabe Akihisa; 川部 暁久)
-
- Time
- 1700-1730
- Tour
-
- Anechoic chamber (Julián Villegas; ジュリアン ヴィジェガス)
- Computer Music Studio (Michael Cohen; 公園 マイケル)
-
- Time
- 1800-2000
- Welcome Banquet
-
Felice?
- Technical Session: Saturday, Aug. 4, North Lounge
-
- Time
- 1030-1130
- Speaker
- William L. Martens; ウイリアム L. マーテンス
- Title
- “The design of auditory augmented reality”
- Abstract
-
The design of auditory augmented reality (AR) applications requires attention to details of how virtual auditory events are superimposed upon actual events in the listener's environment. For example, AR-based navigational aids for visually challenged users will not be adopted readily if conventional headphones are used, since these can interfere with natural spatial hearing. Therefore, "nearphones" and other "off-the-ear" transducers are a good choice for applications that benefit from such unobtrusive display of virtual auditory events. This paper examines a solution that has been relatively successful, but for which challenges remain. For example, if low-latency head-tracking technology is not employed, then listener motion can create mismatches between the behavior of virtual and actual events, causing discomfort for the user of an AR system. In extreme cases, such mismatches can even cause postural instability.
- Bio
-
Dr. William Martens is a perceptual psychologist specializing in spatial hearing research and the simulation of the acoustical cues used in human sound localization. He has contributed to the development of commercial spatial sound processing technologies, such as the 3D Positional Audio software for controlling the EMU-8000 DSP chip on the AWE-32 SoundBlaster sound card for Creative Labs (U.S. patent awarded in 1999). He received a Ph.D. in Psychology from Northwestern University in 1991, and currently holds the position of Associate Professor in Audio and Acoustics at the University of Sydney.
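The head-tracking point in the abstract can be quantified with simple kinematics: during the tracker's end-to-end latency, a turning head sweeps through an angle that becomes the localization mismatch of a world-anchored virtual source. A back-of-the-envelope sketch (the latency and head-rate figures are illustrative, not from the talk):

```python
def angular_mismatch_deg(latency_s, head_rate_deg_per_s):
    """Worst-case angular error of a head-tracked virtual source:
    the angle the head turns during one end-to-end latency period."""
    return latency_s * head_rate_deg_per_s

# A brisk 200 deg/s head turn with 50 ms total system latency:
err = angular_mismatch_deg(0.050, 200.0)   # 10.0 degrees
```

Since frontal minimum audible angles are on the order of a degree or two, even modest latencies can produce perceptible mismatch during head turns, which is why the abstract stresses low-latency tracking.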
- Sight-seeing Excursion (via charter bus): Ura-Bandai
- 1130
- leave the University, stopping at a convenience store for lunch
- 1300-1430
- Morohashi Museum of Modern Art (Salvador Dalí); 諸橋近代美術館 (サルバドール・ダリ)
- 1500-1630
- Goshiki-numa (Five-Colored Ponds) walk; 五色沼
- 1700
- Stop by Inawashiro Station
- 1800
- Return to the University
- 1830-2000
- Optional: Shimei Dori Festival
mcohen@u-aizu.ac.jp