We are happy to announce the new release of ZYLIA Studio (v2.1).
With this version, we introduce the following features:
We are happy to announce the release of ZYLIA 6DoF HOA Renderer for Max/MSP v2.0 (macOS, Windows).
This software is a key element of the ZYLIA 6DoF Navigable Audio system. It allows you to reproduce the sound field at a given location based on Ambisonics signals recorded with ZM-1S microphones. The plugin works in the Max/MSP environment, so you can use this tool directly in your project. Please refer to the example project provided with the plugin and to the ZYLIA 6DoF HOA Renderer manual.
The newest ZYLIA 6DoF HOA Renderer for Max/MSP brings a number of improvements.
The new user interface opens in a separate window and lets you set up the whole configuration directly in the plugin, without using the Max/MSP message mechanism. The UI also allows you to lock the microphones’ positions, preventing unintentional changes to the scene configuration.
We have also increased the number of supported signals: you can now pass up to 30 HOA signals to the plugin and create 6DoF experiences on much larger scenes.
If you would like to test this plugin, you can use the 7-day free trial and play around with our 6DoF audio rendering algorithm. Test recordings for the plugin can be found on our webpage.
We are happy to announce the new release of ZYLIA ZM-1 drivers for macOS (v2.9.1). This driver supports the latest release of macOS 11.1 Big Sur.
We are happy to announce the new release of ZYLIA ZM-1 drivers for macOS and Linux.
macOS driver v2.9.0
Linux driver v2.5.0
by Pedro Firmino
This tutorial is based on the solution developed by professor Angelo Farina for preparing a 360 video with 3rd Order audio (source http://www.angelofarina.it/Ambix+HL.htm).
In this adaptation, we will show you how to create a 360 video with 3rd Order Ambisonics audio using:
This tutorial consists of two parts:
A: Preparing the 360 content with 16 channels
B: Injecting metadata using the Spatial Media Metadata Injector version modified by Angelo Farina.
At the moment, the HOAST library (https://hoast.iem.at/) is the only platform that allows online video playback of 3rd Order Ambisonics, so the content created in this tutorial is meant to be watched locally using the VLC player.
For this tutorial, basic Python knowledge is advised.
For preparing a 360 video with 1st order Ambisonics, visit the link:
1. As usual, start by recording your 360 video with the ZYLIA ZM-1 microphone, and remember to keep the front of the ZM-1 aligned with the front of the 360 camera.
2. After recording, import the 360 video and the 19-channel audio file into Reaper and synchronize the audio and video.
3. On the ZM-1 audio track, insert ZYLIA Ambisonics Converter and select 3rd Order Ambisonics. This will convert your 19-channel track into 16 channels (3rd Order Ambisonics).
4. On the Master track, click on the Route button and set the track channels to 16. The Master track now receives all 16 channels from the audio track.
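The 16-channel count in steps 3 and 4 follows directly from the Ambisonics order: a full-sphere signal of order N carries (N + 1)² channels. A quick illustration (not part of any ZYLIA tool):

```python
def ambisonic_channels(order: int) -> int:
    """Channel count of a full-sphere Ambisonics signal of order N: (N + 1) ** 2."""
    return (order + 1) ** 2

# 1st order -> 4 channels (W, X, Y, Z); 3rd order -> 16 channels, as used in this tutorial.
print(ambisonic_channels(1), ambisonic_channels(3))  # 4 16
```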
5. Once the video is ready for exporting, click File – Render.
As for the settings:
Sample rate: 48000
Channels: 16 (click on the space and manually type 16)
Output format: Video (ffmpeg/libav encoder)
Size: 3840 x 1920 (or use Get width/height/framerate from current video item)
Video Codec: H.264
Audio Codec: 24 bit PCM
Render the video.
Once you have the 360 video with 16 channels, it is necessary to inject the Spatial Audio metadata.
In order to do this, Python is required. Python comes preinstalled on macOS, but you can also download Python 2.7 here: https://www.python.org/download/releases/2.7/
Afterward, download Angelo Farina’s modified version of the Spatial Media Metadata Injector, located at:
Then follow these steps:
1. With the downloaded file located on your Desktop, run the macOS Terminal application.
2. Using the “cd” command, go to the folder where you have the Spatial Media Metadata Injector (e.g. “cd ~/Desktop/spatial-media-2/”).
3. Run the setup script with “sudo python setup.py install” and type your password when prompted.
4. After the build is complete, type the command “cd build/lib/spatialmedia”.
5. Enter “python gui.py” and the application should run.
With the Spatial Media Metadata Injector open, simply open the created 360 video file and check the boxes for the 360 format and spatial audio. Inject the metadata and your video will be ready for playback with 3rd Order Ambisonics audio.
Behind the scenes of the orchestra recording made with 30 Ambisonics microphones. How did we create a virtual stage with navigable audio?
Zylia, in collaboration with the Poznań Philharmonic Orchestra, presented the world’s first navigable audio in a live-recorded performance of a large classical orchestra. 34 musicians on stage and 30 ZYLIA 3rd order Ambisonics microphones allowed us to create a virtual concert hall where each listener can chart their own audio path and get a real being-there sound experience.
ZYLIA 6 Degrees of Freedom Navigable Audio is a solution based on Ambisonics technology that allows recording of the entire sound field around and within any performance imaginable. For the listener, it means that while listening to a live-recorded concert they can walk through the audio space freely. For instance, they can approach the stage, or even step onto it to stand next to a musician. At every point, the sound they hear will be slightly different, just as in real life. Right now, it is the only technology of its kind in the world.
The 6 Degrees of Freedom in the name of Zylia’s solution refers to the six directions of possible movement: up and down, left and right, forward and backward, rotation left and right, tilting forward and backward, and rolling sideways. In post-production, the exact positions of the microphones placed in the concert hall are mirrored in virtual space by the ZYLIA software. Once this is done, the listener can create their own audio path by moving in the six directions mentioned above and choose any listening spot they want.
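The six directions listed above can be summarized as three translations plus three rotations. A minimal sketch of such a listener pose (hypothetical names, not part of Zylia’s software):

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """A listener pose with six degrees of freedom (illustrative only)."""
    x: float = 0.0      # left/right translation (metres)
    y: float = 0.0      # up/down translation (metres)
    z: float = 0.0      # forward/backward translation (metres)
    yaw: float = 0.0    # rotation left/right (degrees)
    pitch: float = 0.0  # tilt forward/backward (degrees)
    roll: float = 0.0   # roll sideways (degrees)

# A listener standing 2 m in front of the stage, turned 90 degrees to the left:
listener = Pose6DoF(z=2.0, yaw=-90.0)
```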
6DoF sound can be produced with an object-based approach – by placing pre-recorded mono or stereo files in a virtual space and then rendering the paths and reflections of each wave in this synthetic environment. Our approach, by contrast, uses multiple Ambisonics microphones, which allows us to capture sound in almost every place in the room simultaneously. Thus, it provides 6DoF sound comprised solely of real-life audio recorded in a real acoustic environment.
How was it recorded?
* Two MacBook Pros for recording
* A single PC Linux workstation serving as a backup for recordings
* 30 ZM-1S mics – 3rd order Ambisonics microphones with synchronization
* 600 audio channels – 20 channels from each ZM-1S mic multiplied by 30 units
* 3 hours of recordings, 700 GB of audio data
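The channel arithmetic above is easy to sanity-check. Assuming the 48 kHz / 24-bit PCM format used elsewhere in our tutorials (an assumption; the session format is not stated here), the raw data rate works out as follows:

```python
# Back-of-the-envelope data rate for the 30-microphone session
# (48 kHz / 24-bit PCM is assumed, not confirmed for this session).
mics = 30
channels_per_mic = 20
sample_rate = 48_000   # samples per second
bytes_per_sample = 3   # 24-bit PCM

total_channels = mics * channels_per_mic                          # 600 channels
bytes_per_second = total_channels * sample_rate * bytes_per_sample
print(total_channels, bytes_per_second / 1e6)                     # 600 86.4 (MB/s)
```

At roughly 86 MB/s, 700 GB of stored audio corresponds to a bit over two hours of actual rolling takes, which is consistent with a three-hour session including pauses.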
Microphone array placement
The placement of 30 ZM-1S microphones on the stage and in front of it.
To be able to choose the best versions of the performances, the Orchestra played the Overture nine times and the Aria eight times, with three additional overdubs.
In parallel with the audio recording, we captured video to document the event. The film crew placed four static cameras in front of the stage and on the balconies, and one cameraman moved along a pre-planned path on the stage. Additionally, we placed two 360-degree cameras among the musicians.
Our chief recording engineer made sure that everything was ready – static cameras, the moving camera operator, 360 cameras, and recording engineers – and then gave a sign to the conductor to begin the performance. When the LED rings on the 30 arrays turned red, everybody knew that the recording had started.
This large amount of data makes it possible to explore the same moment in endless ways. Recording all 19 takes of the two music pieces resulted in 700 GB of stored audio. The entire recording and preparation process was documented on film with several cameras; around 650 GB of video was captured. In total, we gathered almost 1.5 TB of data.
Post-processing and preparing data for the ZYLIA 6DoF renderer
First, we had to prepare the 3D model of the stage. The model of the concert hall was redesigned to match its real-life dimensions. Then we placed the microphones and musicians according to accurate measurements. When this was done, specific parameters of the interpolation algorithm in the ZYLIA 6DoF HOA Renderer had to be set. The next task was the most difficult part of post-production: matching the real camera sequences with the sequences from the VR environment in Unreal Engine. After this painstaking process of matching the paths of the virtual and real cameras, a connection between Unreal and Wwise was established. This gave us the possibility to render the sound of a defined path in Unreal, just as if someone were walking there in VR. Last but not least, we had to synchronize and combine the real and virtual video with the desired audio.
The outcome of this project is presented in “The Walk Through The Music” movie, where we can enter the music spectacle from the audience position and move around artists on the stage.
You can also watch the “Making of” movie for more detailed information on what the setup looked like.
We are happy to announce the new release of ZYLIA ZM-1/ZM-1S Driver (v2.8.0) for macOS.
The new driver supports ZYLIA ZM-1S microphone arrays and enables connecting up to 18 such microphones to a single macOS computer.
We are happy to announce the new release of ZYLIA ZR-1 Firmware v1.3 with Remote Control.
The newest firmware version brings a totally new feature to the ZR-1 recorder: Remote Control. From now on, you can connect to your ZR-1 device over WiFi and control the recording process directly from the web browser of your smartphone or tablet.
With the ZR-1 Remote Control application you can:
ZR-1 FIRMWARE UPGRADE PROCEDURE
The ZYLIA ZR-1 Portable Recorder firmware can be updated with specially prepared files provided by ZYLIA. To perform the firmware update, the user is required to upload the provided update files to a USB flash drive. The procedure is as follows:
In these crazy times, we musicians face many new challenges. We spend more time creating at home – we play in here, write new songs, record and mix. It is, however, a good time to learn new audio techniques and polish the old ones.
There is a tool that will let you take your first easy steps in sound post-processing and bring you closer to the world of professional musicians.
Record and mix with ZYLIA Music!
The ZYLIA Music set consists of one spherical microphone array (with 19 microphones hidden inside!) and the easy-to-use ZYLIA Studio software. That’s it! This is all you need to make home recordings in studio quality.
How does it work?
Everything is simple and intuitive. The software will guide you step by step through the recording process. You don’t need to be familiar with all the cables, recording interfaces, and techniques, and you don’t need to know how to position microphones to capture the sound correctly. This recording studio will do everything for you. You just have to play well ;-)
ZYLIA Studio workflow.
If you want to give your musical creativity free rein and explore the topic of sound design, we have a PRO plugin for you. It will take you to the next level of sound post-processing. You will be able to experiment with Ambisonics sound and mix the 360-degree scene using virtual microphone technology. You also have a wide range of spatial presets at your disposal. Invaluable, especially nowadays, is the possibility to stream 3D audio in binaural format, which mirrors the way our ears hear.
Don't waste your time scrolling through boring videos on the Internet. Get started making beautiful music!
In this tutorial, we describe the process of converting a 360 video with 3rd order Ambisonics into a 2D video with binaural audio, with linked rotation parameters.
This allows us to prepare a standard 2D video while keeping the focus on the action from the video and audio perspective.
It also allows us to control the video and audio rotation in real time using a single controller.
We used the Reaper DAW to create the automated rotation of 360 audio and video. The audio was recorded with a ZYLIA ZM-1 microphone array.
Below you will find our video and text tutorial which demonstrate the setup process.
Thank you Red Bull Media House for providing us with the Ambisonics audio and 360 video for this project.
Ambisonics audio and 360 video is Copyrighted by Red Bull Media House Chief Innovation Office and Projekt Spielberg, contact: cino (@) redbull.com
Created by Zylia Inc. / sp. z o.o. https://www.zylia.co
Requirements for this tutorial:
We will use Reaper as a DAW and video editor, as it supports video and multichannel audio from the ZM-1 microphone.
Before recording the 360 video with the ZM-1 microphone, make sure the front of the camera points in the same direction as the front of the ZM-1 (the red dot on the equator marks the front of the ZM-1 microphone). This prevents future problems and tells you in which direction to rotate the audio and video.
Step 1 - Add your 360 video to a Reaper session.
The video file format may be .mov, .mp4, .avi, or other.
From our experience, we recommend working on a compressed version of the video and replacing this media file later for rendering (step 14).
To open the Video window click on View – VIDEO or press Control + Shift + V to show the video.
Step 2 - Add the multichannel track recorded with the ZM-1 and sync the Video with the ZM-1 Audio track.
Import the 19 channel file from your ZM-1 and sync it with the video file.
Step 3 – Disable or lower the volume of the Audio track from the video file.
Since we will not use the audio from the video track, we need to remove it or set its volume to the minimum.
To do so, right click on the Video track – Item properties – move the volume slider to the minimum.
Step 4 – Merge video and audio on the same track.
Select both the video and audio tracks, then right click – Take – Implode items across tracks into takes.
This will merge video and audio to the same track but as different takes.
Step 5 – Show both takes.
To show both takes, click on Options – Show all takes in lanes (when room) or press Ctrl + L
Step 6 – Change the number of channels to 20.
Click on the Route button and change the number of track channels from 2 to 20; this is required to use all 19 channels of the ZM-1.
Step 7 - Play both takes simultaneously.
If we press play right now, only the selected take will play, so we need to enable playing both takes simultaneously:
Right click on the track – Item settings – Play all takes.
Step 8 – Change 360 video to standard video.
Next, we will flatten the 360 video to a standard view so that we can visualize and control the rotation of the camera.
To do so, open the FX window on our main track and search for Video processor.
On the preset selection, choose Equirectangular/spherical 360 panner. This will flatten your 360 video, allowing you to control camera parameters such as field of view, yaw, pitch, and roll.
Step 9 – As FX, add ZYLIA Ambisonics Converter plugin and IEM binaural Converter.
On the FX window, also add the ZYLIA Ambisonics Converter plugin and the IEM binaural converter.
You should now hear binaural audio, which you can test by changing the rotation and elevation parameters in the ZYLIA Ambisonics Converter plugin.
Step 10 – Link the rotation of both audio and video.
The next steps will be dedicated to linking the Rotation of the ZYLIA Ambisonics Converter and the YAW parameter from the Video Processor.
On the main track, click on the Track Envelopes/Automation button and enable the UI for the YAW (in Equirectangular/spherical 360 panner) and Rotation (in ZYLIA Ambisonics Converter plugin).
Step 11 – Control Video yaw with the ZYLIA Ambisonics Converter plugin.
On the same window, on the YAW parameters click on Mod… (Parameter Modulation/Link for YAW) and check the box Link from MIDI or FX parameter.
Select ZYLIA Ambisonics plugin: Rotation
Step 12 – Align the position of the audio and video using the Offset control.
On the Parameter Modulation window you are able to fine-tune the rotation of the audio with the video.
Here we changed the ZYLIA Ambisonics plugin Rotation Offset to -50 % so that the front of the video matches the front of the ZM-1 microphone.
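The -50 % value makes sense if you think of the parameter link as operating on normalized 0–1 values: shifting a full-circle rotation by half its range is a 180-degree turn. A rough sketch of that idea (our assumption about the link math, not Reaper’s actual implementation):

```python
def linked_yaw_degrees(rotation_norm: float, offset: float = -0.5) -> float:
    """Map a normalized rotation parameter (0..1) plus a link offset to a yaw angle
    in degrees. Illustrative only; the real math lives in Reaper's Parameter Modulation."""
    linked = (rotation_norm + offset) % 1.0  # wrap back into the 0..1 range
    return linked * 360.0                    # scale to a full circle

print(linked_yaw_degrees(0.5))   # 0.0  -> a -50 % offset cancels a half-turn
print(linked_yaw_degrees(0.75))  # 90.0
```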
Step 13 – Change the Envelope mode to Write.
To record the automation of this rotation effect, right-click on the Rotation parameter and select Envelope to make the envelope visible.
Then right click the Rotation envelope Arm button (the green button) and change the mode to Write.
By pressing play you will record the automation of video and audio rotation in real time.
Step 14 – Prepare for Rendering
After writing the automation, change the envelope to Read mode instead of Write mode.
Disable the parameter modulation from the YAW control:
Right click on Yaw and uncheck “Link from MIDI or FX parameter”
OPTIONAL: Replace your video file with the uncompressed version.
If you have been working with a compressed video file, this is the time to replace it with the original media file. To do this, right click on the video track and select item properties.
Scroll to the next page and click Choose new file.
Then select your original uncompressed video file.
Step 15 – Render!
You should now have your project ready for Rendering.
Click on File – Render and set Channels to Stereo.
On the Output format choose your preferred Video format.
We exported our clip as a .mov file with the H.264 video codec and 24-bit PCM audio.
Thank you for reading and don’t hesitate to contact us with any feedback, questions or your results from following this guide.