Date and time: Oct. 13, 2015, from 14:50 (4th period).
In this talk we will discuss the opportunities, challenges, benefits, and practical tricks of building sound projects on Teensy and Arduino microcontrollers. Specifically, we will demonstrate how to assemble the hardware, use the Arduino software, choose casings, and so on. We will also show a functional prototype based on these ideas. Note that although there are many online resources for learning Arduino programming, few explain how to select and assemble the hardware for such projects. If you are interested in sound projects, or in Arduino programming in general, please join us.
Rob Oudendijk of YR-Design is a Dutch engineer/dancer with extensive experience building software and hardware solutions for the Netherlands Embassy in Japan, the Luxembourg Embassy in Japan, and several companies in the US, Europe, and Japan. He is currently involved in the development of next-generation Geiger counters at Safecast.
Handouts of the presentation
We are pleased to sponsor a seminar this week about spatial sound.
All are welcome.
Date: Friday, Sept. 27
Location: S1 (275)
Time: 14:40-15:40 (2:40-3:40 pm)
Speaker: César D. Salvador
Affiliation: Research Institute of Electrical Communication and Graduate School of Information Sciences, Tohoku University, Sendai
Among 3D audio techniques, binaural synthesis aims to reproduce auditory scenes with a high level of realism by incorporating the physical features of human spatial hearing. The basic perceptual cues for a spatial listening experience arise from the scattering, reflections, and resonances introduced by the pinnae, head, and torso of the listener. These phenomena can be described by the so-called Head-Related Transfer Functions (HRTFs). Typical measured sets of HRTFs, however, neither account for the motion of the head nor provide the listener with enough spatial resolution, characteristics that are required, for example, for the accurate reproduction of moving sound sources. Several approaches that sidestep these two limitations have been proposed. They are based on recordings made with microphones placed on the surface of a rigid sphere, and on the angular interpolation of HRTFs from a representative set of sound sources in the distal region (beyond one meter from the head). However, the optimal arrangement of the representative sound sources, and the synthesis of binaural signals for sources in the proximal region (less than one meter from the head), have not yet been addressed. We introduce a novel method to synthesize the left- and right-ear signals for sound sources in both the distal and proximal regions. The synthesis is performed from the sound field captured by a rigid spherical microphone array. The proposal exploits the directional structure of the captured sound field by decomposing it into spherical harmonic functions with high directivity (high order).
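The spherical-harmonic decomposition mentioned in the abstract can be illustrated numerically. The sketch below (a hypothetical illustration, not the speaker's actual code) samples a synthetic "sound field" at points on a sphere, as a rigid spherical microphone array would, and recovers its spherical-harmonic coefficients up to a chosen maximum order by least squares:

```python
import numpy as np
from scipy.special import sph_harm

N = 4  # maximum spherical-harmonic order; (N+1)^2 basis functions in total

# Random sampling grid on the sphere: azimuth in [0, 2*pi),
# colatitude drawn so points are uniform on the sphere's surface.
rng = np.random.default_rng(0)
num_pts = 400
az = rng.uniform(0.0, 2.0 * np.pi, num_pts)
col = np.arccos(rng.uniform(-1.0, 1.0, num_pts))

# Matrix of basis functions Y_n^m evaluated at the sample points.
# SciPy's convention is sph_harm(m, n, azimuth, colatitude).
cols, orders = [], []
for n in range(N + 1):
    for m in range(-n, n + 1):
        cols.append(sph_harm(m, n, az, col))
        orders.append((n, m))
Y = np.column_stack(cols)  # shape: (num_pts, (N+1)^2)

# Synthetic field: a single harmonic of order 3, degree 1.
true_coeffs = np.zeros((N + 1) ** 2, dtype=complex)
true_coeffs[orders.index((3, 1))] = 1.0
field = Y @ true_coeffs

# Least-squares fit recovers the coefficients from the sampled field.
coeffs, *_ = np.linalg.lstsq(Y, field, rcond=None)
print(np.abs(coeffs[orders.index((3, 1))]))  # close to 1.0
```

Higher orders give more directive basis functions, which is why the abstract's high-order decomposition can resolve the fine directional structure needed for proximal-region synthesis; in practice the achievable order is limited by the number of microphones on the array.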
César D. Salvador received the B.Sc. degree in Electrical Engineering from the Pontifical Catholic University of Peru in 2005 and the M.Sc. degree in Information Sciences from Tohoku University in 2013, and in early October he will begin the doctoral course in the Graduate School of Information Sciences at Tohoku University. He was a Researcher with the Faculty of Communication Sciences at the University of San Martin de Porres, Peru, from 2008 to 2010, where he led an immersive soundscape project. His research interests include spherical acoustics and spatial hearing.