Date and time: Oct. 13, 2015, from 14:50 (4th period).
In this talk we will discuss opportunities, challenges, benefits, and tricks of building sound projects on Teensy Arduino microcontrollers. Specifically, we will demonstrate how to assemble the hardware, use the Arduino software, choose casings, etc. We will also show a functional prototype based on these ideas. Note that although there are several sources on the Internet for learning how to program Arduino, few of them explain how to select and assemble the hardware for such projects. If you are interested in sound projects, or in Arduino programming in general, please join us.
Rob Oudendijk of YR-Design is a Dutch engineer/dancer with vast experience building software and hardware solutions for the Netherlands Embassy in Japan, the Luxembourg Embassy in Japan, and several companies in the US, Europe, and Japan. He is currently involved in the development of next-generation Geiger counters at Safecast.
Handouts of the presentation
The Support Association for International Students of the University of Aizu (SAISUA) is happy to announce the availability of scholarships for the 1st semester of the academic year 2015. We welcome you to apply by January 16th, 2015.
A) Eligibility
1. An international full-time regular undergraduate or graduate student at the University of Aizu
2. Not currently receiving a scholarship of more than 50,000 yen per month
3. Demonstrated high academic achievement
B) Scholarship Support
1. Amount: 20,000 yen per month for six months
C) Application procedure (How to apply)
1. Online application form: http://saisua.u-aizu.ac.jp/wordpress/
2. Upload the required supporting documents at the SAISUA homepage (the link above). These include your latest transcript (REQUIRED; issued by UoA or another institution) and a document explaining your reasons for applying. If your transcript is in a language other than English, attach a translation (your own translation is acceptable). It may help your case to provide other supporting documents such as your CV. These should be sent directly to Prof. Rockell (RQ 266).
3. Sealed letter of recommendation from your supervisor with hanko and/or signature. Please ask your supervisor to send the letter directly to Prof. Rockell.
D) Requirements for scholarship winners
Each winner of a SAISUA scholarship is expected to participate in at least two international university or community events during the year of the scholarship. Such activities might include:
– Giving an international talk about your country
– Making a local school visit to talk about your country to schoolchildren
– Giving a campus tour to visitors
– Working at an international booth or activity during the Campus Festival
– Attending the bi-annual University Welcome Party to greet new international students
– Other activities suggested by CSIP (Center for Strategy of International Programs) office
E) Application due date: January 16th, 2015
If interested, please visit our homepage: http://saisua.u-aizu.ac.jp/wordpress/
Center for Language Research
University of Aizu
Call for Papers: Special Issue of Springer “Virtual Reality” journal on Spatial Sound in Virtual and Augmented Reality
Guest Editors: Michael Cohen, Julián Villegas, & Woodrow Barfield
Please direct questions to: Michael Cohen (email@example.com). For general and miscellaneous enquiries contact VirtualReality@brunel.ac.uk.
Springer's Virtual Reality journal announces a call for papers for a special issue on the use of spatialized sound in virtual and augmented reality environments. The journal seeks original, high-quality papers on the design and use of spatialized sound in virtual reality, including augmented and mixed reality. Each paper should be classifiable as mainly covering research, applications, or systems, according to the following guidelines:
- Research papers should describe results that contribute to advances in state-of-the-art software, hardware, algorithms, interaction, or human factors.
- Application papers should explain how the authors built upon existing ideas and applied them to solve an interesting problem in a novel way. Each paper should include an evaluation of the success of the use of VR/AR/MR in the given application domain.
- System papers should indicate how the implementers integrated known techniques and technologies to produce an effective system, along with any lessons learned in the process. Each paper should include an evaluation of the system, including any benchmarking performed.
Topics of interest include:
- Spatialized audio interfaces
- Perception and cognition (specifically in the context of spatialized sound in VR and AR)
- Navigation and way-finding
- Applications of spatialized sound for VR/AR/MR
Submission and Review Process
All papers should be submitted through the website www.editorialmanager.com/vire/ using the category of ‘Spatial Sound in Virtual and Augmented Reality.’
Paper submission deadline: 05 Jan 2015
Notification of acceptance: 04 Apr 2015
Submissions will be reviewed by at least three independent referees, with an opportunity to revise papers based on reviewers' comments.
Paper length and format: Papers should typically be about 20 pages long. MS Word and LaTeX templates are available here: http://www.springer.com/computer/image+processing/journal/10055?detailsPage=pltci_1060360
More information: http://arts.u-aizu.ac.jp/cfp-vr/
We are pleased to sponsor a seminar next week about spatial sound.
All are welcome.
Date: Thursday, Jan. 16
Location: room S8
Time: 13:10 — 14:40
Speaker: Sungyoung Kim
Affiliation: Electrical, Computer, and Telecommunication Engineering
Department, Rochester Institute of Technology, NY, USA.
To provide consumers with a more immersive sound experience, most of the new multi-channel reproduction formats highlight the significance of height-related information. In this talk, we investigate the influence of height-related room impulse responses when reproduced via various “height loudspeakers,” including a virtual loudspeaker. Test participants listened to the corresponding sound fields and rated their perceived quality in terms of spaciousness and integrity. The results showed that perceived quality was affected by the positions of the height loudspeakers but not by the height signals, which were convolved with specific room impulse responses.
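The rendering step described in the abstract (convolving a signal with a room impulse response before feeding it to a height loudspeaker) can be sketched as follows. This is a minimal illustration, not the presenter's actual code; the impulse response here is synthetic, whereas the experiment used measured responses.

```python
import numpy as np

def render_height_channel(dry_signal, rir):
    """Render a height-loudspeaker feed by convolving a dry signal
    with a room impulse response (RIR)."""
    return np.convolve(dry_signal, rir)

# Synthetic example: 1 s of noise at 48 kHz and a toy exponentially
# decaying RIR (stand-ins for real program material and measured RIRs).
fs = 48000
rng = np.random.default_rng(0)
dry = rng.standard_normal(fs)
t = np.arange(int(0.3 * fs)) / fs                  # 300 ms RIR
rir = np.exp(-t / 0.05) * rng.standard_normal(t.size)

height_channel = render_height_channel(dry, rir)
print(height_channel.size)  # len(dry) + len(rir) - 1
```

Full linear convolution lengthens the signal by the RIR length minus one sample; a real-time system would instead use block-wise (partitioned) convolution.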
Sungyoung Kim received a B.S. degree from Sogang University, Korea, in 1996, and a Master of Music and Ph.D. degree from McGill University, Canada, in 2006 and 2009 respectively. His professional experience includes work as a recording/balance engineer at Korea Broadcasting System (KBS), Seoul, Korea (1995–2001) and as a research associate at Yamaha Corporation, Hamamatsu, Japan (2007–2012). He is now an assistant professor in the Electrical, Computer, and Telecommunication Engineering Department, Rochester Institute of Technology. His research interests include spatial audio, human perception, and efficient ear-training methods. He is a member of the IEEE and the Audio Engineering Society (AES).
Do you want to be part of a world-wide experiment that will help develop the algorithms used in the next generation of mobile telephony?
We’re recruiting now! Your task would be to rate a series of sounds, heard via headphones, using specialized software in the computer music studio of the University of Aizu.
– No experience needed
– Must be Japanese
– Any age
– Any gender
– Must not have hearing problems
– Expected date for the experiments: February
– We’ll pay ¥2,000 per hour (¥4,000 per 2 hour experiment)
– Personal information is not required
Your participation would be greatly appreciated.
Please contact Prof. Julián Villegas (Japanese OK), indicating your name, telephone number, and email address.
Telephone: (0242) 37-2608
Saturday, Oct. 12 & Sunday, Oct. 13, 11:00-13:00
- 328-E, 11:00-12:00
- Julián Villegas: Range-Modulated Transfer Functions for Spatial Sound
- Nakada Anzu & Nishimura Kensuke: CVE-Mathematica Interface Supporting Mobile Control
- Bektur Rysekeldiev: Spatial Sound on Mobile Devices
- Sasamoto Yuya: Reactable Spatial Sound Control
- 328-F, 11:00-12:00
- Michael Cohen: Schaie Internet Chair
- UBIC 3D Theater, 12:00-13:00
- Michael: Helical Keyboard
- Ohashi & Oyama: Musical Control with Spinning Affordances
We are pleased to sponsor a seminar this week about spatial sound.
All are welcome.
Date: Friday, Sept. 27
Location: S1 (275)
Time: 14:40–15:40 (2:40–3:40 pm)
Speaker: César D. Salvador
Affiliation: Research Institute of Electrical Communication and Graduate School of Information Sciences, Tohoku University, Sendai
Among 3D audio techniques, binaural synthesis aims to reproduce auditory scenes with high levels of realism by including the effects of human spatial hearing. Basic perceptual cues for a spatial listening experience arise from the scattering, reflections, and resonances introduced by the listener's pinnae, head, and torso. These phenomena can be described by the so-called Head-Related Transfer Functions (HRTFs). Typical measured sets of HRTFs, however, neither account for the motion of the head nor provide the listener with enough spatial resolution, characteristics that are required, for example, for the accurate reproduction of moving sound sources. Several approaches that sidestep these two limitations have been proposed. They are based on recordings made with microphones placed on the surface of a rigid sphere, and on the angular interpolation of HRTFs from a representative set of sound sources in the distal region (beyond one meter from the head). However, the optimal arrangement of the representative sound sources, and the synthesis of binaural signals for sources in the proximal region (less than one meter from the head), have not yet been addressed. We introduce a novel method to synthesize the left- and right-ear signals for sound sources in both the distal and proximal regions. The synthesis is performed from the sound field captured by a rigid spherical microphone array. The proposal exploits the directional structure of the captured sound field by decomposing it into spherical harmonic functions with high directivity (high order).
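The core operation of binaural synthesis mentioned above (filtering a mono source with the head-related impulse responses, the time-domain counterparts of HRTFs, for each ear) can be sketched as follows. This is a schematic illustration, not the speaker's method: the toy HRIRs below encode only an interaural time and level difference, whereas real HRIRs are measured or, as in this talk, derived from spherical-array recordings.

```python
import numpy as np

def binaural_synthesis(mono, hrir_left, hrir_right):
    """Synthesize left/right ear signals by convolving a mono source
    with the head-related impulse responses (HRIRs) for one source
    direction. Returns a (2, N) array: row 0 = left, row 1 = right."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs: a source on the listener's left, modeled as a delayed,
# attenuated impulse at the far (right) ear.
fs = 48000
itd_samples = 30                  # ~0.6 ms interaural time difference
hrir_l = np.zeros(64); hrir_l[0] = 1.0
hrir_r = np.zeros(64); hrir_r[itd_samples] = 0.6

mono = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)  # 1 s, 440 Hz tone
ears = binaural_synthesis(mono, hrir_l, hrir_r)
print(ears.shape)  # (2, 48063)
```

Played over headphones, the delay and level difference between the two rows would be heard as a source displaced toward the left.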
César D. Salvador received the B.Sc. degree in Electrical Engineering from the Pontifical Catholic University of Peru in 2005 and the M.Sc. degree in Information Sciences from Tohoku University in 2013, and in early October he will begin the doctoral course in the Graduate School of Information Sciences at Tohoku University. He was a Researcher with the Faculty of Communication Sciences at the University of San Martin de Porres, Peru, from 2008 to 2010, where he led an Immersive Soundscape Project. His research interests include spherical acoustics and spatial hearing.