Author Archives: arts-admin

Guest lecture on “Developing sound projects with Arduino”

Place: iLab2

Date and time: Oct. 13, 2015, from 14:50 (4th period).

Summary

In this talk we will discuss the opportunities, challenges, benefits, and tricks of building sound projects on Teensy Arduino microcontrollers. Specifically, we will demonstrate how to assemble the hardware, use the Arduino software, choose casings, etc. We will also show a functional prototype based on these ideas. Note that although there are several online resources for learning how to program Arduino boards, few of them explain how to select and assemble the hardware for such projects. If you are interested in sound projects, or in Arduino programming in general, please join us.

Profile

Rob Oudendijk of YR-Design is a Dutch engineer/dancer with vast experience building software and hardware solutions for the Netherlands Embassy in Japan, the Luxembourg Embassy in Japan, and several companies in the US, Europe, and Japan. He is currently involved in the development of next-generation Geiger counters at Safecast.

Handouts of the presentation

SAISUA scholarship: call for applicants

Dear students,

The Support Association for International Students of the University of Aizu (SAISUA) is happy to announce the availability of scholarships for the 1st semester of the academic year 2015. We welcome you to apply by January 16th, 2015.

A) Eligibility
1. An international full-time regular undergraduate or graduate student at the University of Aizu
2. Not currently receiving a scholarship of more than 50,000 yen per month
3. Demonstrated high academic achievement

B) Scholarship Support
1. Amount: 20,000 yen per month for six months

C) Application procedure (How to apply)
1. Online application form: http://saisua.u-aizu.ac.jp/wordpress/
2. Upload the required supporting documents at the SAISUA homepage (the link above). These include your latest transcript (REQUIRED, issued either by UoA or another institution) and a document explaining your reasons for applying. If your transcript is in a language other than English, attach a translation (your own translation is acceptable). It may help your case to provide other supporting documents, such as your CV. These should be sent directly to Prof. Rockell (RQ 266).
3. Sealed letter of recommendation from your supervisor with hanko and/or signature. Please ask your supervisor to send the letter directly to Prof. Rockell.

D) Requirements for scholarship winners
Each winner of a SAISUA scholarship is expected to participate in at least two international university or community events during the year of the scholarship. Such activities might include:

– Giving an international talk about your country
– Making a local school visit to talk about your country to schoolchildren
– Giving a campus tour to visitors
– Working at an international booth or activity during the Campus Festival
– Attending the bi-annual University Welcome Party to greet new international students
– Other activities suggested by CSIP (Center for Strategy of International Programs) office

E) Application due date: January 16th, 2015

If interested, please visit our homepage: http://saisua.u-aizu.ac.jp/wordpress/

Sincerely,

Younghyon Heo
Associate Professor
Center for Language Research
University of Aizu

Compose Music on a Computer: An Introduction to Pure Data (TRY series lectures)

To prospective students:

  • You do not need to be a musician or a programmer
  • All you need is the desire to make music using innovative, computer-based methods

About the course:

  • This is not a course on commercial applications such as GarageBand or Cakewalk SONAR
  • You will build interactions between sound and the computer using the visual programming language Pure Data
  • Through the course, you will learn the basics of Pure Data, as well as recording, creating sounds, and using external devices to control sound
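In Pure Data, a basic synthesis patch connects an oscillator object (osc~) through a gain (*~) to the audio output (dac~). As a rough textual analogue of that signal flow, the sketch below computes one second of a 440 Hz sine tone sample by sample (Python is used here purely for illustration; the course itself uses Pure Data):

```python
import math

# Parameters analogous to a tiny Pd patch: [osc~ 440] -> [*~ 0.5] -> [dac~]
sample_rate = 44100   # samples per second
frequency = 440.0     # Hz (concert A)
amplitude = 0.5       # gain, like Pd's [*~ 0.5]

# One second of a sine tone, one sample at a time.
tone = [amplitude * math.sin(2 * math.pi * frequency * i / sample_rate)
        for i in range(sample_rate)]

print(len(tone))  # → 44100 samples in one second
```

The list `tone` holds raw sample values in the range [-0.5, 0.5]; writing them to a sound card or WAV file is the step that dac~ performs inside Pd.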

Schedule:

  • Session 1: Introduction
  • Session 2: Sound synthesis
  • Session 3: Sequencers
  • Session 4: Control and input/output

Lecturer profiles:

  • Nobuo Koizumi: Born in Kyoto Prefecture. Ph.D. in Engineering, specializing in acoustic engineering. At NTT Laboratories he worked on the development of voice-communication equipment and on research in acoustic signal processing. He became a professor at Tokyo University of Information Sciences in 1999 and, after serving as chair of the graduate school committee and dean of the Faculty of Informatics, is now a professor in the audio-visual course of the Department of Informatics. At the university he teaches courses on media devices, information media, and computer music, and he promotes education and research on digital audio, sound synthesis, and physical computing. In academic societies, he has served as chair of the technical committee on applied (electro-)acoustics of the IEICE and the Acoustical Society of Japan. His books include 「音のコミュニケーション工学」 (co-author, Corona Publishing, 1996), 「基礎音響・オーディオ学」 (Corona Publishing, 2005), and 「サウンドシンセシス」 (co-author, Kodansha, 2011).

  • Julián Villegas: Professor at the University of Aizu, specializing in computer arts. He teaches computer graphics, spatial sound, sound and music, and computer music. His research interests include speech intelligibility, interdisciplinary studies of music and sound, psychoacoustics, experimental psychology, real-time programming, visual and auditory illusions, and binaural sound. He received a bachelor's degree in Electrical Engineering from Valle University (Colombia) and master's and doctoral degrees in Computer Science from the University of Aizu, and completed a postdoctorate at The University of Basque Country (Spain). He has published six journal articles and two book chapters, has contributed to more than 40 conferences, and is also active in non-refereed publications, music production, and software development.

  • Taku Nagasaka (長坂 卓): Born in Akita Prefecture. Fourth-year undergraduate at the University of Aizu, majoring in computer science and engineering and affiliated with the Human Interface Laboratory. He is currently researching the frequency bands involved in sound-image localization in the elevation direction.

Class materials:

Results:

Pd reference books:

  • Pure Data -チュートリアル&リファレンス- (in Japanese, paperback), by Chikashi Miyama (美山千香士)
  • Pd Recipe Book -Pure Dataではじめるサウンドプログラミング- (in Japanese, paperback), by Seiichiro Matsumura (松村 誠一郎)

Reference links:

CFP-VR

Call for Papers: Special issue of the Springer journal Virtual Reality on Spatial Sound in Virtual and Augmented Reality

Guest Editors: Michael Cohen, Julián Villegas, & Woodrow Barfield

Please direct questions to Michael Cohen (mcohen@u-aizu.ac.jp). For general and miscellaneous enquiries, contact VirtualReality@brunel.ac.uk.

The journal Virtual Reality (Springer) announces a call for papers for a special issue focusing on the use of spatialized sound in virtual and augmented reality environments. The journal seeks original, high-quality papers on the design and use of spatialized sound in virtual reality, including augmented reality and mixed reality. Each paper should be classifiable as mainly covering research, applications, or systems, using the following guidelines:

  • Research papers should describe results that contribute to advances in state-of-the-art software, hardware, algorithms, interaction, or human factors.
  • Application papers should explain how the authors built upon existing ideas and applied them to solve an interesting problem in a novel way. Each paper should include an evaluation of the success of the use of VR/AR/MR in the given application domain.
  • System papers should indicate how the implementers integrated known techniques and technologies to produce an effective system, along with any lessons learned in the process. Each paper should include an evaluation of the system, including benchmarking that was performed.

Topics

  • Spatialized audio interfaces
  • Perception and cognition (specifically in the context of spatialized sound in VR and AR)
  • Navigation and way-finding
  • Applications of spatialized sound for VR/AR/MR
  • Presence

Submission and Review Process

All papers should be submitted through the website www.editorialmanager.com/vire/ using the category ‘Spatial Sound in Virtual and Augmented Reality.’

Paper submission deadline: 05 Jan 2015
Notification of acceptance: 04 Apr 2015

Submissions will be reviewed by at least three independent referees, with an opportunity to revise papers based on the reviewers’ comments.

Paper length and format: Papers shall have a typical length of 20 pages. MS Word and LaTeX templates are available from here: http://www.springer.com/computer/image+processing/journal/10055?detailsPage=pltci_1060360

More information: http://arts.u-aizu.ac.jp/cfp-vr/


Rise and shine: Investigating the influence of height channels on multichannel audio reproduction

We are pleased to sponsor a seminar next week about spatial sound.
All are welcome.

Date: Thursday, Jan. 16

Location: room S8

Time: 13:10 — 14:40

Speaker: Sungyoung Kim

Affiliation: Electrical, Computer, and Telecommunication Engineering
Department, Rochester Institute of Technology, NY, USA.

Abstract:
In order to provide consumers with a more enhanced and immersive sound experience, most new multichannel reproduction formats highlight the significance of height-related information. In this talk, we investigate the influence of height-related room impulse responses when reproduced via various "height loudspeakers," including a virtual loudspeaker. Test participants listened to the corresponding sound fields and rated their perceived quality in terms of spaciousness and integrity. The results showed that perceived quality was affected by the height-loudspeaker positions but not by the height signals, which were convolved with specific room impulse responses.
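The height signals discussed in the abstract are produced by convolving a source signal with a room impulse response (RIR). The sketch below illustrates that basic operation with a synthetic RIR; the signal, sampling rate, and reflection values here are illustrative assumptions, not data from the study:

```python
import numpy as np

# Illustrative source: 0.5 s of a 440 Hz tone at an 8 kHz sampling rate.
fs = 8000
t = np.arange(int(0.5 * fs)) / fs
source = np.sin(2 * np.pi * 440 * t)

# Synthetic room impulse response: direct path plus two decaying echoes.
rir = np.zeros(int(0.1 * fs))
rir[0] = 1.0                # direct sound
rir[int(0.02 * fs)] = 0.5   # early reflection at 20 ms
rir[int(0.05 * fs)] = 0.25  # later reflection at 50 ms

# The "height signal" is the convolution of the source with the RIR.
height_signal = np.convolve(source, rir)

print(len(height_signal))  # len(source) + len(rir) - 1
```

In a listening test like the one described, one such convolved signal per loudspeaker feed would be played back over the corresponding height loudspeaker.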

Biography:
Sungyoung Kim received a B.S. degree from Sogang University, Korea, in 1996, and Master of Music and Ph.D. degrees from McGill University, Canada, in 2006 and 2009, respectively. His professional experience includes work as a recording/balance engineer at the Korean Broadcasting System (KBS), Seoul, Korea (1995–2001), and as a research associate at Yamaha Corporation, Hamamatsu, Japan (2007–2012). He is now an assistant professor in the Electrical, Computer, and Telecommunication Engineering Department at the Rochester Institute of Technology. His research interests are spatial audio, human perception, and efficient ear-training methods. He is a member of the IEEE and the Audio Engineering Society (AES).

We need Japanese subjects

Do you want to be part of a world-wide experiment that will help develop the algorithms used in the next generation of mobile telephony?
We’re recruiting now! Your task would be to rate a series of sounds, heard via headphones, using specialized software in the computer music studio of the University of Aizu.

– No experience needed
– Must be Japanese
– Any age
– Any gender
– Must not have hearing problems
– Expected date for the experiments: February
– We’ll pay ¥2,000 per hour (¥4,000 per 2-hour experiment)
– Personal information is not required

Your participation would be greatly appreciated.
Please contact Prof. Julián Villegas (Japanese OK):

julian@u-aizu.ac.jp
(0242) 37-2608

indicating your name, telephone number, and email address.



Open Campus demonstrations, Fall 2013

Saturday, Oct. 12 & Sunday, Oct. 13, 11:00-13:00

  • 328-E, 11:00-12:00
    1. Julián Villegas: Range-Modulated Transfer Functions for Spatial Sound
    2. Nakada Anzu & Nishimura Kensuke: CVE-Mathematica Interface Supporting Mobile Control
    3. Bektur Rysekeldiev: Spatial Sound on Mobile Devices
    4. Sasamoto Yuya: Reactable Spatial Sound Control
  • 328-F, 11:00-12:00
    1. Michael Cohen: Schaie Internet Chair
  • UBIC 3D Theater, 12:00-13:00
    1. Michael: Helical Keyboard
    2. Ohashi & Oyama: Musical Control with Spinning Affordances

Seminar: Binaural synthesis based on spherical harmonic analysis with rigid microphone arrays

We are pleased to sponsor a seminar this week about spatial sound.
All are welcome.

Date: Friday, Sept. 27

Location: S1 (275)

Time: 14:40–15:40 (2:40–3:40 pm)

Speaker: César D. Salvador

Affiliation: Research Institute of Electrical Communication and Graduate School of Information Sciences, Tohoku University, Sendai

Abstract:
Among 3D audio techniques, binaural synthesis aims to reproduce auditory scenes with a high level of realism by incorporating the external features of human spatial hearing. Basic perceptual cues for a spatial listening experience arise from the scattering, reflections, and resonances introduced by the pinnae, head, and torso of the listener. These phenomena can be described by the so-called head-related transfer functions (HRTFs). Typical measured sets of HRTFs, though, neither account for the motion of the head nor provide the listener with enough spatial resolution, characteristics that are required, for example, for the accurate reproduction of moving sound sources. Several approaches that sidestep these two limitations have been proposed. They are based on recordings made with microphones placed on the surface of a rigid sphere, and on the angular interpolation of HRTFs from a representative set of sound sources in the distal region (beyond one meter from the head). However, the optimal arrangement of the representative sound sources, and the synthesis of binaural signals for sources in the proximal region (less than one meter from the head), have not yet been addressed. We introduce a novel method to synthesize the left- and right-ear signals for sound sources in both the distal and proximal regions. The synthesis is performed from the sound field captured by a rigid spherical microphone array. The proposed method exploits the directional structure of the captured sound field by decomposing it into spherical harmonic functions with high directivity (high order).
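The decomposition mentioned in the abstract projects the captured sound field onto spherical harmonic basis functions, which are orthonormal over the sphere. As a small, self-contained illustration (a toy numerical check with two hand-coded real harmonics, not the speaker's implementation), the sketch below verifies their orthonormality by quadrature:

```python
import numpy as np

# Real spherical harmonics of orders 0 and 1 (closed forms):
def Y00(theta, phi):
    return np.full_like(theta, 0.5 / np.sqrt(np.pi))

def Y10(theta, phi):
    return 0.5 * np.sqrt(3.0 / np.pi) * np.cos(theta)

# Midpoint quadrature grid over the sphere (theta: colatitude, phi: azimuth).
n = 400
theta = (np.arange(n) + 0.5) * np.pi / n          # (0, pi)
phi = (np.arange(2 * n) + 0.5) * np.pi / n        # (0, 2*pi)
T, P = np.meshgrid(theta, phi, indexing="ij")
dA = np.sin(T) * (np.pi / n) ** 2                 # surface element

# Orthonormality: <Y00,Y00> = <Y10,Y10> = 1 and <Y00,Y10> = 0.
ip_00 = np.sum(Y00(T, P) * Y00(T, P) * dA)
ip_11 = np.sum(Y10(T, P) * Y10(T, P) * dA)
ip_01 = np.sum(Y00(T, P) * Y10(T, P) * dA)
```

Because of this orthonormality, the expansion coefficients of a captured sound field are obtained by the same kind of inner product between the field and each harmonic, which is what the spherical-microphone-array analysis computes in practice.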

Biography:
César D. Salvador received the B.Sc. degree in Electrical Engineering from the Pontifical Catholic University of Peru in 2005 and the M.Sc. degree in Information Sciences from Tohoku University in 2013, and in early October he will begin the doctoral course in the Graduate School of Information Sciences at Tohoku University. He was a researcher with the Faculty of Communication Sciences at the University of San Martin de Porres, Peru, from 2008 to 2010, where he led an immersive-soundscape project. His research interests include spherical acoustics and spatial hearing.