By Eduardo Patricio
In this post we present two videos in different formats, but edited from the same source material captured on the 20th of June 2018, at Barigui park (Curitiba, Brazil).
The audio was recorded with the ZYLIA ZM-1 3rd order Ambisonics spherical microphone array while the video was captured by a 360-degree camera (Gear 360).
Below, you can watch both videos and find some information on how to achieve the two different results, with focus on preparing the audio recorded with the ZM-1 microphone for each scenario.
A. Interactive, immersive video with full 3D sound
(media components: 360-degree video + Ambisonics audio)
B. Non-interactive video with fixed-perspective 3D sound
(media components: “tiny planet” video + binaural audio)
The microphone and the camera were placed on a single camera stand with a small clamped extension arm (see picture below). Both devices were aligned vertically with a small horizontal offset. We made sure the microphone and the camera always had the same relative facing direction (front of the microphone aligned with the camera side where the recording button is found).
For scenario B, we used the video from Gear 360 in ‘tiny planet’ format and a binaural audio track.
Since the source material is the same as in scenario A, we’ll list here only the steps that differ.
Scenario B steps:
Choosing the binaural preset in ZYLIA Studio PRO within REAPER
#ambiencerecording #ambisonics #binaural #soundscapes #immersiveaudio #360recording
We are happy to announce a new version of ZYLIA Ambisonics Converter. We introduced a few changes based on your input and suggestions.
We added batch processing. It is now possible to process multiple 19-channel WAV files within a single session.
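As a small companion to the batch-processing feature, the sketch below shows one way to gather the raw ZM-1 recordings in a folder before queuing them for conversion. The folder layout and function name are assumptions for illustration; the actual batch processing happens inside ZYLIA Ambisonics Converter itself.

```python
import wave
from pathlib import Path

def find_zm1_recordings(folder: str) -> list:
    """Return all 19-channel WAV files (raw ZM-1 captures) in a folder."""
    matches = []
    for path in sorted(Path(folder).glob("*.wav")):
        with wave.open(str(path), "rb") as wf:
            if wf.getnchannels() == 19:  # the ZM-1 records 19 capsule channels
                matches.append(path)
    return matches
```

Any stereo mixes or other WAV files sitting in the same folder are simply skipped, so only the raw microphone captures end up in the conversion queue.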
There are also quality improvements and bug fixes for 2nd and 3rd order HOA. This update significantly improves the perceptual effect of rotation in the HOA domain and corrects the spatial resolution for 2nd and 3rd order. We recommend updating to this new version.
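The rotation mentioned above is performed directly in the Ambisonics domain. As a minimal illustration (first order only, and not ZYLIA’s actual implementation), a yaw rotation of a B-format signal leaves the omni channel W and the vertical channel Z untouched and mixes only X and Y:

```python
import math

def rotate_bformat_yaw(w, x, y, z, angle_rad):
    """Rotate a first-order B-format sample about the vertical (yaw) axis.

    Only X and Y mix under a yaw rotation; W (omni) and Z (vertical)
    are unchanged. Higher orders need larger rotation matrices.
    """
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return w, c * x - s * y, s * x + c * y, z
```

For example, rotating a source sitting straight ahead (X = 1, Y = 0) by 90 degrees moves all of its energy into the Y (left–right) channel.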
By Jakub Zamojski & Lukasz Januszkiewicz
Recording and mixing surround sound is becoming more and more popular. Among the popular multichannel surround formats such as 5.1, 7.1 or cinematic 22.2, the Ambisonics format is especially worthy of note: it is a full-sphere spatial audio technique that delivers a truly immersive 3D sound experience. You can find more details about Ambisonics here (What is the Ambisonics format?).
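As a point of reference for the orders mentioned in this post: a full-sphere Ambisonics signal of order N uses (N + 1)² channels, so first-order B-format needs 4 channels and the ZM-1’s 3rd order needs 16. A one-liner makes the relationship explicit:

```python
def ambisonics_channel_count(order: int) -> int:
    """Number of channels in a full-sphere Ambisonics signal of order N: (N+1)^2."""
    return (order + 1) ** 2

# Channel counts for orders 1 through 3: 4, 9, 16
counts = [ambisonics_channel_count(n) for n in (1, 2, 3)]
```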
Our previous blog post “2nd order Ambisonics Demo VR” described the process of combining audio and the corresponding 360-degree video into a polished 360 movie on Facebook. The approach presented there takes the 8-channel TBE signal from ZYLIA Ambisonics Converter and converts the audio into the Ambisonics domain. As a result, we get a nice 3D sound image that rotates and adapts together with the virtual movement of our position. However, it is still not possible to adjust parameters (gain, EQ correction, etc.) or change the relative positions of the individual sound sources present in the recorded sound scene.
In this tutorial we introduce another approach to using the ZYLIA ZM-1 to create a 3D sound recording, one that gives much more flexibility in sound source manipulation. It allows us not only to adjust the positions of instruments in the recorded 3D space around the ZYLIA microphone, but also to control the gain or apply additional effects (EQ, compression, etc.). In this way we can create a rich spatial mix using only one microphone instead of several spot mics!
Spatial Encoding of Sound Sources – Tutorial
At the end of July 2017, we used the ZYLIA ZM-1 microphone to record a band called “Trelotechnika”. All band members were located around the ZM-1 microphone: four musicians plus one additional sound source, drums played back from a loudspeaker. During post-production, we applied the ZYLIA Studio PRO VST plug-in (within the REAPER DAW) to the recorded 19-channel audio track. This allowed us to separate the previously recorded instruments and transfer them to individual tracks in the DAW. Those tracks were then routed to the FB360 plug-ins, where encoding to the Ambisonics domain was performed.
“Spatial Encoding of Sound Sources” - a step-by-step description
Below you will find a detailed description of how to run a demo session presenting our approach to recording and spatial encoding of sound sources. The demo works on macOS and Windows.
After opening the session, you will see several tracks:
3. The separated signals from ZYLIA Studio PRO are passed to 5 individual tracks. You can adjust the gain, mute or solo instruments, or apply audio effects. A good practice is to use a high-pass filter on non-bass instruments and a low-pass filter on bass instruments to reduce spill between them. We applied these filters in our session:
4. Spatialiser track – receives the 5 signals from the tracks with separated instruments. The Spatialiser lets you place the sound sources at the desired positions in 3D space.
a) Click on FX and choose FB360 Spatialiser.
d) Back in the Spatialiser view, you will see an equirectangular picture and five numbered circles. Each circle represents a sound source position in space. By default, the sources are located at positions corresponding to the real positions of the instruments in the picture, but you can adjust them by clicking on a circle and dragging it around the picture.
6. The video is now synchronized with the audio. Adjusting the location of the play-head in REAPER’s timeline will change the video’s time. Tap the space bar to play audio and video. The decoded and binauralized Ambisonics sound follows the rotation of the video in the player.
7. It is good practice to play the video from the beginning of the file to keep the synchronization. In some cases, it is necessary to close the VideoClient + VideoPlayer and load the 360 video again to recover the synchronization.
8. You can now rotate the video around the pitch and yaw axes. Your demo is ready to run.
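The filtering advice in step 3 can be sketched in code, purely as a toy illustration: in the session we used plug-in EQs inside REAPER, not this code. The idea is just that a high-pass filter strips low-frequency spill from non-bass tracks, while a low-pass keeps the bass track focused. A minimal one-pole version:

```python
import math

def low_pass(samples, cutoff_hz, sample_rate=48000.0):
    """One-pole low-pass: keeps bass, attenuates highs (for bass instruments)."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, state = [], 0.0
    for s in samples:
        state = a * state + (1.0 - a) * s  # exponential smoothing
        out.append(state)
    return out

def high_pass(samples, cutoff_hz, sample_rate=48000.0):
    """One-pole high-pass: the input minus its low-passed version."""
    lp = low_pass(samples, cutoff_hz, sample_rate)
    return [s - l for s, l in zip(samples, lp)]
```

Feeding a constant (DC) signal through `high_pass` drives the output toward zero, while `low_pass` passes it through untouched, which is exactly the complementary behaviour you want between the bass and non-bass tracks.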
By Jakub Wasilewski
Nowadays the entertainment industry is more and more interested in immersive experiences, whether in movies, video games or music. Along with the already quite common 360-degree videos, there is also a growing demand for 3D audio. Surround systems like 5.1, 7.1 or cinematic 22.2 can certainly deliver impressive effects, but none of them is a true full spatial audio system, mostly because once the sources have been recorded there is no way to change their positions freely.
Around the same time the music audience was being thrilled for the first time by “The Dark Side of the Moon”, the idea of Ambisonics was conceived by Michael Gerzon of the University of Oxford. He realised that using three figure-of-eight microphones and an additional omnidirectional one, he could make a soundfield recording that can be expressed as four speaker-independent channels, the so-called B-format. It allows soundfield information to be stored in four channels and decoded later into a specific speaker setup.
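The mapping from those four microphone patterns to the four channels follows a simple set of textbook formulas. As a sketch (using the traditional FuMa convention, where W carries a 1/√2 gain), a mono source at a given azimuth and elevation is encoded like this:

```python
import math

def encode_bformat(sample: float, azimuth: float, elevation: float):
    """Encode a mono sample into traditional (FuMa) first-order B-format.

    Returns (W, X, Y, Z): W is the omnidirectional component (scaled by
    1/sqrt(2) by convention); X, Y, Z are the figure-of-eight components
    along the front, left and up axes. Angles are in radians.
    """
    w = sample / math.sqrt(2.0)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z
```

A source straight ahead (azimuth 0, elevation 0) lands entirely in W and X, with nothing in the left–right or up–down channels, which matches the intuition behind Gerzon’s microphone arrangement.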
As you can probably imagine, representing a whole soundfield using only four channels is not optimal. Spatial audio resolution is not very impressive with B-format, which is only 1st-order Ambisonics. In the 1970s it was difficult to achieve higher resolution given the level of technical advancement at the time. So despite being grounded on solid technical and mathematical foundations, the Ambisonics technique did not gain much attention or commercial success.
In order to obtain higher orders of Ambisonics, which provide better soundfield representation and spatial resolution, a dense microphone array and advanced signal processing algorithms are needed. Both of these advancements are the cornerstones of the new recording system developed by ZYLIA.
ZYLIA 3rd order Ambisonics microphone
Need an Ambisonics microphone? Just try ZYLIA!