
How to prepare a 360 video with 3rd Order Ambisonics audio

11/26/2020

by Pedro Firmino
This tutorial is based on the solution developed by Professor Angelo Farina for preparing a 360 video with 3rd Order Ambisonics audio (source: http://www.angelofarina.it/Ambix+HL.htm).

In this adaptation, we will show you how to create a 360 video with 3rd Order Ambisonics audio using:
  • macOS
  • ZYLIA ZM-1 microphone
  • ZYLIA Ambisonics Converter plugin
  • IEM Binaural Decoder
  • Reaper
  • Modified version of Google’s Spatial Media Metadata Injector, created by Professor Angelo Farina.

This tutorial consists of two parts:
A: Preparing the 360 content with 16 channels
B: Injecting metadata using the Spatial Media Metadata Injector version modified by Angelo Farina.


At the moment, the HOAST library (https://hoast.iem.at/) is the only platform that allows online playback of video with 3rd Order Ambisonics audio; the content created in this tutorial is therefore meant to be watched locally using the VLC player.

For this tutorial, basic Python knowledge is advised.

For preparing a 360 video with 1st order Ambisonics, visit the link:
https://www.zylia.co/blog/how-to-prepare-a-360-video-with-spatial-audio

PART A
1. As usual, start by recording your 360 video and your ZYLIA ZM-1 audio, and remember to keep the front of the ZM-1 aligned with the front of the 360 camera.

2. After recording, import the 360 video and the 19-channel audio file into Reaper, then synchronize the audio and video.
3. On the ZM-1 audio track, insert ZYLIA Ambisonics Converter and select 3rd Order Ambisonics. This will convert your 19-channel track into 16 channels (3rd Order Ambisonics).
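
As a quick sanity check on the channel math: an Ambisonics stream of order N carries (N + 1)^2 channels, so 3rd order needs exactly the 16 channels mentioned above. A minimal sketch:

    # Channels required for Nth-order Ambisonics: (N + 1)^2
    def ambisonics_channels(order):
        return (order + 1) ** 2

    assert ambisonics_channels(1) == 4   # 1st order, as used for YouTube 360
    assert ambisonics_channels(3) == 16  # 3rd order, as used in this tutorial
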
4. On the Master track, click the Route button and set the track channels to 16. The master is now receiving all 16 channels from the audio track.
5. Once the video is ready for exporting, click File – Render.

As for the settings:
Sample rate: 48000
Channels: 16 (click on the space and manually type 16)
Output format: Video (ffmpeg/libav encoder)
Size: 3840 x 1920 (or Get width/height/framerate from current video item)
Format: QT/Mov/MP4
Video Codec: H.264
Audio Codec: 24 bit PCM

Render the video.

PART B
Once you have the 360 video with 16 channels, it is necessary to inject metadata for spatial audio.
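
Before injecting, it is worth confirming that the rendered file really carries 16 audio channels. Here is a minimal check using ffprobe, assuming the ffmpeg tools are installed (the file name is illustrative; the snippet works with Python 2 or 3):

    # Ask ffprobe for the channel count of the file's first audio stream.
    import subprocess

    channels = subprocess.check_output([
        "ffprobe", "-v", "error", "-select_streams", "a:0",
        "-show_entries", "stream=channels",
        "-of", "default=noprint_wrappers=1:nokey=1",
        "rendered_360.mov",  # illustrative file name
    ]).decode().strip()
    assert channels == "16", "expected a 16-channel (3rd order) render"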

Python is required for this step. Python comes preinstalled on macOS, but you can also download Python 2.7 here: https://www.python.org/download/releases/2.7/

Afterward, download Angelo Farina’s modified version of Spatial Media Metadata Injector, located at:
http://www.angelofarina.it/Ambix+HL.htm
Then, install and run the injector:

1. With the downloaded files on your Desktop, open the macOS Terminal application.
2. Using the “cd” command, go to the folder containing the Spatial Media Injector (e.g. “cd ~/Desktop/spatial-media-2/”).
3. Run “sudo python setup.py install” and type your password when prompted.
4. After the build is complete, run “cd build/lib/spatialmedia”.

5. Run “python gui.py” and the application should start.
With the Spatial Media Metadata Injector open, simply load the rendered 360 video file and check the boxes for the 360 format and spatial audio. Inject the metadata and your video will be ready for playback with 3rd Order Ambisonics audio.


How to convert 360 to 2D video with linked Ambisonics rotation for binaural audio

8/11/2020

In this tutorial we describe the process of converting 360 video and 3rd order Ambisonics audio into 2D video with binaural audio, with linked rotation parameters.
This allows us to prepare a standard 2D video while keeping the focus on the action from both the video and audio perspectives.
It also allows us to control the video and audio rotation in real time using a single controller.

The Reaper DAW was used to create the automated rotation of 360 audio and video.
The audio was recorded with the ZYLIA ZM-1 microphone array.

Below you will find our video and text tutorial demonstrating the setup process.
Thank you Red Bull Media House for providing us with the Ambisonics audio and 360 video for this project.

Ambisonics audio and 360 video is Copyrighted by Red Bull Media House Chief Innovation Office and Projekt Spielberg, contact: cino (@) redbull.com
Created by Zylia Inc. / sp. z o.o. https://www.zylia.co
Requirements for this tutorial:
  •  Video and audio recorded with 360 video camera and ZM-1 microphone
  •  ZYLIA Ambisonics Converter plugin
  •  IEM Binaural Decoder
  •  Reaper

We will use Reaper as a DAW and video editor, as it supports video and multichannel audio from the ZM-1 microphone.

Before recording the 360 video with the ZM-1 microphone, make sure the front of the camera points in the same direction as the front of the ZM-1 (the red dot on the equator marks the front of the ZM-1 microphone). This prevents alignment problems later and tells you in which direction to rotate the audio and video.

Step 1 - Add your 360 video to a Reaper session.

The video file format may be .mov, .mp4, .avi, or another format.
From our experience, we recommend working on a compressed version of the video and replacing the media file later for rendering (step 14).

To open the Video window, click View – Video or press Control + Shift + V.
Step 2 - Add the multichannel track recorded with the ZM-1 and sync the Video with the ZM-1 Audio track.

Import the 19-channel file from your ZM-1 and sync it with the video file.

Step 3 – Disable or lower the volume of the audio track from the video file.

Since we will not use the audio from the video track, we need to remove it or set its volume to the minimum.
To do so, right click on the Video track – Item properties – move the volume slider to the minimum.
Step 4 – Merge video and audio on the same track.

Select both the video and audio tracks, then right-click – Take – Implode items across tracks into takes.
This will merge video and audio onto the same track as separate takes.
Step 5 – Show both takes.

To show both takes, click Options – Show all takes in lanes (when room) or press Ctrl + L.
Step 6 – Change the number of channels to 20.

Click on the Route button and change the number of track channels from 2 to 20; this is required to use the 19 channels of the ZM-1.

Step 7 - Play both takes simultaneously.

If we press play right now, only the selected take will play. To play both takes simultaneously:

Right click on the track – Item settings – Play all takes.
Step 8 – Change 360 video to standard video.

Next, we need to flatten the 360 (equirectangular) video into a standard view so we can visualize and control the rotation of the camera.

To do so, open the FX window on the main track and search for Video processor.

In the preset selection, choose Equirectangular/spherical 360 panner. This will flatten your 360 video, letting you control camera parameters such as field of view, yaw, pitch, and roll.
Step 9 – As FX, add the ZYLIA Ambisonics Converter plugin and the IEM Binaural Decoder.

In the FX window, also add:
  • ZYLIA Ambisonics Converter plugin, set to 3rd Order Ambisonics. Make sure to set the microphone orientation to match how you recorded.
  • IEM Binaural Decoder. Here you can choose headphone equalization to your liking.
You should now have binaural audio, which you can test by changing the Rotation and Elevation parameters in the ZYLIA Ambisonics Converter plugin.
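
For intuition about what the Rotation parameter does to the sound field, here is a minimal first-order sketch; the 3rd order case used in this session works the same way, only with larger rotation matrices. This is an illustration assuming AmbiX (ACN/SN3D) conventions, not Zylia’s or IEM’s actual code:

    import math

    def rotate_first_order(w, y, z, x, yaw_deg):
        # A yaw rotation about the vertical axis mixes only the X and Y
        # components; W (omni) and Z (height) are unchanged. Sign
        # conventions vary between tools (scene vs. listener rotation).
        th = math.radians(yaw_deg)
        x_rot = x * math.cos(th) - y * math.sin(th)
        y_rot = x * math.sin(th) + y * math.cos(th)
        return w, y_rot, z, x_rot

    # A source hard left (X=0, Y=1) rotated by -90 degrees ends up in front:
    w, y, z, x = rotate_first_order(1.0, 1.0, 0.0, 0.0, -90.0)
    assert abs(x - 1.0) < 1e-9 and abs(y) < 1e-9
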
Step 10 – Link the rotation of both audio and video.

The next steps will be dedicated to linking the Rotation of the ZYLIA Ambisonics Converter and the YAW parameter from the Video Processor.

On the main track, click on the Track Envelopes/Automation button and enable the UI for the YAW (in Equirectangular/spherical 360 panner) and Rotation (in ZYLIA Ambisonics Converter plugin).
Step 11 – Control video yaw with the ZYLIA Ambisonics Converter plugin.

In the same window, on the YAW parameter, click Mod… (Parameter Modulation/Link for YAW) and check the box Link from MIDI or FX parameter.
Select ZYLIA Ambisonics plugin: Rotation
Step 12 – Align the position of the audio and video using the Offset control.

In the Parameter Modulation window you can fine-tune the rotation of the audio against the video.
Here we set the ZYLIA Ambisonics plugin Rotation offset to -50 % to make the front of the video match the front of the ZM-1 microphone.
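
To see why -50 % corresponds to a half-turn: Reaper links normalized parameter values (0 to 1), and the Rotation control spans 360 degrees, so a 50 % offset shifts the linked value by 180 degrees. A rough sketch of our reading of that mapping (an assumption about Reaper’s behavior, not official documentation; your exact offset depends on how the mic and camera were aligned):

    # Hypothetical mapping from a linked, normalized parameter plus a
    # percentage offset to a rotation angle on a -180..180 degree control.
    def linked_rotation_deg(source_norm, offset_pct):
        norm = (source_norm + offset_pct / 100.0) % 1.0  # wrap-around assumed
        return -180.0 + norm * 360.0

    # With a -50 % offset, a centred source value is shifted by a half-turn:
    assert linked_rotation_deg(0.5, -50.0) == -180.0
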
Step 13 – Change the Envelope mode to Write.

To record the automation of this rotation effect, right-click on the Rotation parameter and select Envelope to make the envelope visible.
Then right-click the Rotation envelope Arm button (the green button) and change the mode to Write.
By pressing play, you will record the automation of the video and audio rotation in real time.
Step 14 – Prepare for Rendering

After writing the automation, change the envelope to Read mode instead of Write mode.
Disable the parameter modulation on the YAW control: right-click on YAW and uncheck “Link from MIDI or FX parameter”.
OPTIONAL: Replace your video file with the uncompressed version.

If you have been working with a compressed video file, this is the time to replace it with the original media file. To do this, right-click on the video track and select Item properties.
Scroll to the next page and click Choose new file.
Then select your original uncompressed video file.
Step 15 – Render!

Your project should now be ready for rendering.
Click File – Render and set Channels to Stereo.
Under Output format, choose your preferred video format.
We exported our clip as a .mov file with the H.264 video codec and 24-bit PCM audio.

Thank you for reading and don’t hesitate to contact us with any feedback, questions or your results from following this guide.

ZYLIA Studio PRO presets for Dolby Atmos

2/23/2020

To support our customers and their workflow we have prepared several presets of ZYLIA Studio PRO for Dolby Atmos.
Simply download the zip package, extract files and import the appropriate surround preset into your Reaper session.

What is Dolby Atmos?

Wikipedia - "Dolby Atmos is a surround sound technology developed by Dolby Laboratories. It expands on existing surround sound systems by adding height channels, allowing sounds to be interpreted as three-dimensional objects."

Read more at Wikipedia.

How to prepare a 360 video with spatial audio

1/21/2020

by Eduardo Patricio
In general, VR-related workflows can be complex, and everyone seems to be looking for standard solutions. Here we will show you, step by step, possibly the shortest way to prepare a 360 video with spatial audio!
Required gear:
  • ZYLIA ZM-1 microphone array;
  • A 360 camera (e.g. Insta360 One X);
  • A computer.

After following steps A, B and C, you’ll have a video file with 1st order Ambisonics spatial audio that can be played on your computer with compatible video players (e.g. VLC) or uploaded to YouTube.

OK, let’s have a close look at each step. 
Set up your ZM-1 mic and a 360 camera (we’ve been using an Insta360 One X here). Ideally, keep the camera on top and the mic on the bottom, aligned to a single vertical axis.

There are many ways to achieve this, depending on the available gear. A simple, flexible, and sturdy option is a standard microphone stand and two articulated “magic arms” like this one:
Amazon.com
Having said that, a small horizontal offset is not the end of the world; see this arrangement here, and the final result:
With the gear in place, start recording both audio and video and clap in between the mic and the camera. The clap sound spike can be used to sync the footage later.
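
If you prefer to find the clap programmatically rather than by eye, here is a minimal sketch, assuming the numpy and soundfile packages (the file name is illustrative):

    # Locate the loudest transient (the clap) in the 19-channel ZM-1 file.
    import numpy as np
    import soundfile as sf

    audio, rate = sf.read("zm1_recording.wav")  # shape: (frames, 19)
    peak_per_frame = np.abs(audio).max(axis=1)  # peak across all channels
    clap_sample = int(np.argmax(peak_per_frame))
    print("Clap at %.3f s" % (clap_sample / float(rate)))
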
Here’s a pic of one of our recording setups:
ZYLIA ZM-1, ZYLIA ZR-1, Kodak Pixpro SP360 and mic stand with ‘magic arms’

If you’re new to recording with a ZM-1, here’s a useful link:
After recording, it’s time to put sound and image together, do a simple edit, and render the file. Make sure you have all the software tools (download links below*).
  • Bring your files (the 19-channel audio from the ZM-1 and the video from your camera) into the DAW;
  • Lower the video volume;
  • Align the clap position;
  • Trim the ends;
  • Adjust the volume of the audio file;
  • Add ZYLIA Ambisonics Converter Plugin to the audio track and adjust the settings;
  • Change the master output to 4 channels;
  • Render!

Here’s a video showing all the sub-steps in Reaper:
Note: if you need to check how the recording sounds, add a binaural decoder plugin (e.g. IEM Binaural Decoder) to the audio track, after ZYLIA Ambisonics Converter.
 
This is the last step. Just load the file rendered/exported from Reaper into Google’s Spatial Media Metadata Injector, check the appropriate box for your kind of 360 video, and check the bottom option: “My video has spatial audio”.

Click on “Inject metadata” and save the new injected file.
That’s it!
Now you can enjoy the spatial audio
  • By playing it on your computer, using VLC player, for example (very few players will handle 360 video + Ambisonics correctly); or…
  • By uploading it to YouTube. 
*Software tools used:
  • ZYLIA Ambisonics Converter plugin
  • Google’s Spatial Media Metadata Injector
  • Reaper digital audio workstation
  • IEM Binaural Decoder – part of a suite of Ambisonics plugins
Mounting:
https://www.amazon.com/Stage-MY550-Microphone-Extension-Attachment/dp/B0002ZO3LK/ref=sxbs_sxwds-stvp?keywords=microphone+clamp+arm&pd_rd_i=B0002ZO3LK&pd_rd_r=6860690f-2adc-4b00-a80e-de436939ed2b&pd_rd_w=GlE2J&pd_rd_wg=qrTTx&pf_rd_p=a6d018ad-f20b-46c9-8920-433972c7d9b7&pf_rd_r=GGS60M0DGQ5DQF44594V&qid=1575529629
https://www.amazon.com/Aluminum-Microphone-Swivel-Camera-Monitor/dp/B07Q2V6CBC/ref=sr_1_186?keywords=microphone+boom+clamp&qid=1575529342&sr=8-186
https://www.amazon.com/Neewer-Adjustable-Articulating-Mirrorless-Camcorders/dp/B07SV6NVDS/ref=sr_1_205?keywords=microphone+clamp+arm&qid=1575531393&sr=8-205
Allegro generic alternative (for us to test): https://allegro.pl/oferta/ramie-przegubowe-11-magic-arm-do-kamery-8505530470
 

New release of ZYLIA Ambisonics Converter plugin v1.4.0

5/28/2019

We are happy to announce the new release of ZYLIA Ambisonics Converter plugin v1.4.0. 
New features and improvements:
  • Automation of plug-in parameters for Digital Audio Workstations.
    This allows for programmed and automatic adjustment of Rotation and Elevation parameters directly from your DAW. Users can now emulate different movements of ZYLIA ZM-1, such as rotation. This adds new creative possibilities to the sound design and post-production processes.
Bug fixes:
  • ZYLIA Ambisonics Converter plugin now properly loads the microphone array version saved in personal presets.
  • ZYLIA Ambisonics Converter plugin now ignores incorrect typed-in values of the Rotation and Elevation parameters. Rotation values outside the [-180, 180] range and Elevation values outside the [-90, 90] range do not trigger any changes (see the sketch below).
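
For reference, the accepted ranges behave like this simple check (an illustration of the rule above, not Zylia’s actual code):

    # Out-of-range values are ignored rather than clamped.
    def accepts(rotation, elevation):
        return -180.0 <= rotation <= 180.0 and -90.0 <= elevation <= 90.0

    assert accepts(180.0, -90.0)
    assert not accepts(200.0, 0.0)  # ignored: no parameter change is triggered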

The power of audio in 6 Degrees-of-Freedom (6DoF)

10/16/2018


Recordings made with 6DOF Development Kit by Zylia

What would happen if, on a rainy and cloudy day, during a walk along a forest path, you could move to a completely different place thousands of kilometers away? Putting goggles on would take you into a virtual reality world: you would find yourself on a sunny island in the Pacific Ocean, on the beach, admiring the scenery and walking among the palm trees, listening to the sound of waves and colorful parrots screeching over your head.

It sounds unrealistic, but such goals are set by the latest trends in the development of Augmented/Virtual Reality (AR/VR) technology. Technology and content for full VR, or 6DoF (6 Degrees-of-Freedom), rendered in real time will give users the opportunity to interact with and navigate through virtual worlds. To experience the feeling of "full immersion" in the virtual world, realistic sound must match the high-quality image. This means each individual sound source in the virtual audio landscape should be delivered to the user as a separate object signal, so that both the environment and the way the user interacts with it can be rendered reliably.

What are Six Degrees of Freedom (6DoF)?

"Six degrees of freedom" is a specific parameter count for the number of degrees of freedom an object has in three-dimensional space, such as the real world. It means that there are six parameters or ways that the object can move.

The six degrees of freedom consist of the following movement parameters (a minimal data-structure sketch follows the figure):
  • Translation – Moving along the different axes X, Y and Z
    • Moving up and down along the Y axis is called heaving.
    • Moving forwards and backwards along the X axis is called surging.
    • Moving left and right along the Z axis is called swaying.
  • Rotation – Turning in order to face a different axis
    • Moving between X and Y is called pitch.
    • Moving between X and Z is called yaw.
    • Moving between Z and Y is called roll.
Fig. Six degrees of freedom (Wikimedia Commons).
* Source – techopedia.
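
As a data structure, a 6DoF pose is just these six numbers. A minimal sketch following the axis naming above:

    from dataclasses import dataclass

    @dataclass
    class Pose6DoF:
        # Translation along the three axes
        surge: float  # forwards/backwards, X axis
        heave: float  # up/down, Y axis
        sway: float   # left/right, Z axis
        # Rotation (degrees)
        pitch: float  # between X and Y
        yaw: float    # between X and Z
        roll: float   # between Z and Y
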
There are many possible uses of 6DoF VR technology. Imagine exploring a movie set at your own pace: you could stroll between the actors, look at the action from different sides, listen to any conversation, and pay attention to whatever interests you. Such technology would provide truly unique experiences.

A wide spectrum of virtual reality applications drives the development of technology in the audio-visual industry. Until now, image-related technologies have been developing much faster, leaving sound far behind. We have made the first attempts to show that 6DoF for sound is also achievable.

Read more about 6DOF Development Kit


How to record audio in 6DoF?

It's extremely challenging to record high-quality sound from many sources present in the sound scene at the same time. We managed to do this using nine ZYLIA ZM-1 multi-track microphone arrays evenly spaced in the room.
In our experiment the sound field was captured using two different spatial arrangements of ZYLIA ZM-1 microphones placed within and around the recorded sound scenes. In the first arrangement, nine ZYLIA ZM-1 microphones were placed on a rectangular grid. The second configuration consisted of seven microphones placed on a grid composed of equilateral triangles.
Fig. Setup of 9 and 7 ZYLIA ZM-1 microphone arrays
Microphone signals were captured using a personal computer running the GNU/Linux operating system. Signals originating from the individual ZM-1 arrays were recorded with specially designed software.
We recorded a few takes of a musical performance with instruments such as an Irish bouzouki (a stringed instrument similar to the mandolin), a tabla (Indian drums), acoustic guitars, and a cajon.
Fig. Three ZYLIA ZM-1 mics in a line. ZYLIA ZM-1 with synchronization mechanism attached
Fig. Recorded musicians – from left – Przemyslaw Sledziuha Sledz, Michal Obrebski and Kasia Mizerny

Unity and 3D audio

To present the interesting possibilities of audio recorded with multiple microphone arrays, we have created a Unity project with 7 Ambisonics sources. In this simulated environment you will find three sound sources (our musicians) represented by bonfires, among which you can move around. The fluent immersive audio becomes so natural that you can actually feel you are inside the scene.

MPEG Standardization Committee

During its exploration experiments, the MPEG Standardization Committee realized that there was a lack of test material for audio and video experiments on MPEG-I Immersive Video. Our recordings have been accepted as reference material by the MPEG Standardization Committee. Every expert involved in MPEG-I standardization work can now use the Zylia recordings to develop and test audio technologies for VR/AR applications.
Our informal listening tests show that it is possible to achieve a satisfactory 6DoF effect (with a restricted listening position) by probing the space with 3rd order Ambisonics microphones like the ZYLIA ZM-1. We are getting closer to making real VR more real 😊
6DOF Development Kit by Zylia

How to add binaural and Ambisonics sound to a video for YouTube 360?

9/14/2018

By Eduardo Patricio

In this post we present two videos in different formats, both edited from the same source material captured on the 20th of June 2018 at Barigui park (Curitiba, Brazil).

The audio was recorded with the ZYLIA ZM-1 3rd order Ambisonics spherical microphone array while the video was captured by a 360-degree camera (Gear 360).
 
Below, you can watch both videos and find some information on how to achieve the two different results, with focus on preparing the audio recorded with the ZM-1 microphone for each scenario.

Scenario A

Interactive, immersive video with full 3D sound
(media components: 360-degree video + Ambisonics audio)

Scenario B

Non-interactive video with fixed-perspective 3D sound
(media components: “tiny planet” video + binaural audio)
Equipment used
 
  • ZYLIA ZM-1 microphone
  • Samsung Gear 360 camera (2017 model) + micro SD card
  • Laptop (software list below)
  • Camera stand
  • Extension support arm with clamp
  • USB cable (for the ZM-1)
  • Large microphone windshield 
Software used

  • ZYLIA Studio
  • ZYLIA Studio PRO
  • ZYLIA Ambisonics Converter
  • REAPER
  • Insta360studio
  • Adobe Premiere
The microphone and the camera were placed on a single camera stand with a small clamped extension arm (see picture below). Both devices were aligned vertically with a small horizontal offset. We made sure the microphone and the camera always had the same relative facing direction (front of the microphone aligned with the camera side where the recording button is found).

ZM-1 and Gear 360 placement on a single stand with a clamped extension
Camera and microphone alignment
Chosen location at Barigui park
The exact location at Barigui park was carefully chosen for its proximity to the lake, which always attracts various birds, and also for being next to a rather busy highway and a helicopter landing pad.
For scenario A, we used the regular stitched video from the Gear 360 and a 1st order Ambisonics audio file.

Scenario A - Basic steps taken:
  • Simultaneously record video footage on Gear 360 and audio on laptop (running ZYLIA Studio);
  • Convert the raw 19-channel file from ZYLIA Studio to Ambisonics, using ZYLIA Ambisonics Converter;
  • Edit 360-degree video and Ambisonics audio on Adobe Premiere.
 
Here are the detailed steps taken for the conversion to Ambisonics:

  • Input (ZM-1’s raw) audio file was selected
  • Microphone model selected: ZYLIA ZM-1 model 1D
  • Orientation selected: Upright, 0 degrees
  • Output format selected: AmbiX*
 
* Standard currently (August 2018) used on YouTube.
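
For reference, first-order AmbiX carries its four components in ACN order with SN3D normalization; a one-line sketch of the channel layout:

    # First-order AmbiX: ACN channel order 0..3, SN3D normalization.
    AMBIX_FIRST_ORDER = ("W", "Y", "Z", "X")
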
For scenario B, we used the video from Gear 360 in ‘tiny planet’ format and a binaural audio track.

Since the source material is the same as in scenario A, we’ll list only the steps that differ.

Scenario B steps:

  • Process stereoscopic video from Gear 360 on Insta360 Studio to have the ‘tiny planet’ effect;
  • Convert the raw 19-channel file from ZYLIA Studio to binaural, using ZYLIA Studio PRO running in REAPER.
  • Edit the ‘tiny planet’ video and binaural audio on Adobe Premiere.
Choosing binaural preset on ZYLIA Studio PRO in REAPER
#ambiencerecording #ambisonics #binaural #soundscapes #immersiveaudio #360recording

Yao Wang creating immersive 360 audio and visual experience

4/26/2018

We had the great pleasure of meeting Yao Wang during our visit to Berklee College of Music. A few days ago Yao published her project 'Unraveled', a phenomenal immersive 360 audio and visual experience. As a listener, you find yourself at the center of all the elements, surrounded by choir, strings, synths, and imagery; you can experience being in the middle of the music scene.

Get to know more about this project and read our interview with Yao Wang.
Art work by @cdelcastillo.art
Zylia: Tell us, what is the story behind your project?
Yao: Last spring, I was 9 months away from graduating from Berklee College of Music, and the panic of post-graduation uncertainty was becoming unbearable. I was struggling to plan my career and I wanted to do something different. I spent a whole summer researching the ins and outs of spatial audio and decided to do my Senior Portfolio Project around my research. What I have found is that spatial audio is often found in VR games and films - recreating a 3D environment. It is rarely used as a tool for music composition and production. I saw my opportunity. 

With the help and hard work of my team (around 60 students involved), we succeeded in creating ‘Unraveled’, an immersive 360 audio and visual experience, where the audience would find themselves at the center of all elements, being surrounded by choir, strings, synths and imagery. My role was the project leader, composer, and executive producer. I found a most talented team of friends to work on this together: Gareth Wong and Deniz Turan as co-producers, Carlos Del Castillo as visual designer, Ben Knorr as music contractor, Paden Osburn as music contractor and conductor, Jeffrey Millonig as lead engineer and Sherry Li as lead vocalist and lyricist. Not to mention the wonderful musicians and choir members.​ I am truly grateful for their hard work, dedication and focus. 

‘Unraveled’ also officially kickstarts my company ICTUS, a company that provides music and sound design content specializing in spatial audio solutions. For immersive experiences such as VR, AR and MR, we are your one-stop audio shop for a soundscape that completes the reality. We provide music composition, sound design, 360 recording, mixing, mastering, post-production, spatialization and visualizing services tailored to your unique project. 
We are incredibly humbled that 'Unraveled' has been officially selected for the upcoming 44th Seattle International Film Festival, which runs May 17 to June 10, and to have been accepted for the Art and Technology Exhibition at the Boston Cyberarts Gallery, from Saturday May 26 to Sunday July 1.
"Get a pair of headphones. Cave in somewhere quiet. Alone. Empty your thoughts and… Allow yourself to dive into something new. A pristine place that will truly disconnect you from the daily frenzy of life."
‘Unraveled’ has been officially selected for the upcoming 44th Seattle International Film Festival, which runs May 17 - June 10, with more than 400 films from 80 countries, running 25 days, and with over 155,000 attendees!
Zylia: Recording so many people at once must have been challenging. How did you organize this?
Yao: I worked very closely with Paden Osburn, the conductor and music contractor, to schedule, revise, coordinate and plan the session. Paden is a dear to work with, basically allowing me to focus on the music while she coordinated with the rest of the amazing choir members. We had developed a great workflow.

I also had many meetings with the team of engineers, as well as many professors, to figure out the simplest, most efficient way to record. It was indeed very challenging and stressful to pull off, but it was also one of the most magical nights of my life.
Behind the scene, photo by @jamiexu0528.
Zylia: Tell us more about the technical part of this project.
Yao: On October 27, 2017, we had a recording session of the choir parts with 40 students from Berklee College of Music. The recording was done using three Ambisonics microphones (Zylia, Ambeo, TetraMic). We tried forging a 320-piece choir by asking the 40 students to shift their positions around the microphones for every overdub. We also recorded 12 close-miked singers to have some freedom spatializing individual mono sources.

The spatialization was achieved through Facebook360 Spatial Workstation in REAPER. Many sound design elements were created in Ableton and REAPER. The visuals were done in Unity. We basically created a VR game and recorded a 360 video of the performance. Carlos Del Castillo did an outstanding job creating an abstract world that had many moments syncing with the musical cues.
Zylia: What do you think about using the ZYLIA ZM-1 mic for recording immersive 360 audio?

Yao: I clearly remember meeting Tomasz Zernicki on the Sunday prior to our choir session. The Zylia team came to Berklee and demonstrated the capabilities of their awesome microphone, and I thought I had nothing to lose, so I asked for a potential (and super last minute!) collaboration that has proven to be fruitful. This has also brought me great friendship with Edward C. Wersocki who operated the microphone at our session. Unfortunately, he couldn't stay for the whole session, so only partial lines were recorded with the ZYLIA. He also guided me with the A to B format conversion which was extremely easy and user-friendly. I loved the collaboration and will only keep pursuing and exploring more possibilities with spatial audio. Hopefully, this will be the first of many collaborations.
Behind the scene, ZYLIA ZM-1, photo by @jamiexu0528.
Zylia: What are your plans for the future? Any interesting projects on your mind?

Yao: My long-term goal would be to establish my company ICTUS as one of the leading experts in the field of spatial audio. We are currently working on an interactive VR music experience called ‘Flow’ with an ethnic ensemble, GAIA, and the visuals are influenced by Chinese water paintings. The organic nature of this project will be a nice contrast to ‘Unraveled’s futuristic space vibe.
Another segment of the company is focused on creating high-quality, cinematic spatial audio for VR films and games. We are producing a 3D audio series featuring short horror/thriller stories with music, descriptive narration, dialogue, SFX, and soundscapes. Empathy is truly at the heart of this project: some of our stories will have a humanitarian purpose, and we will be associated with many organizations that are fighting to end domestic abuse, human trafficking, rape, abuse, and other violent crimes. We hope to bring more awareness and traffic to these causes with our art. Spatial audio is incredibly powerful; it really allows you to be in the shoes of the victims, and without the visuals, I swear your imagination will go crazy!


Finally, I aspire to always stay at the forefront of technology and use it as a tool to elevate art, without ever overshadowing the core message and values that it brings.

Yao Wang

Yao is a composer, sound designer, producer, and artist. She recently graduated from Berklee College of Music with a Bachelor of Music degree in Electronic Production & Design and Film Scoring. Passionate about immersive worlds and storytelling, Yao has made it her mission to pursue a career combining her love for music, sound, and technology. With this mission in mind, she is now the CEO and founder of ICTUS, a company that provides spatial audio solutions for multimedia.
Facebook   |   Instagram   |   ICTUS

ZYLIA Ambisonics Converter v. 1.1

3/21/2018

We are happy to announce a new version of ZYLIA Ambisonics Converter. We introduced a few changes based on your input and suggestions.
ZYLIA Ambisonics Converter converts the ZM-1 multi-channel recordings to Higher Order Ambisonics (HOA) and enables you to prepare 3D audio recordings for playback on the 'Facebook 360' and 'YouTube 360' platforms.

We added batch processing. It is now possible to process multiple 19-channel WAVE files within a single session.

There are also quality improvements and bug fixes for 2nd and 3rd order HOA. This update significantly increases the perceptual effect of rotation in the HOA domain and corrects the spatial resolution for 2nd and 3rd order. We recommend updating to this new version.

Create your own spatial Ambisonics mix with ZYLIA!

1/18/2018

By Jakub Zamojski & Lukasz Januszkiewicz

Recording and mixing surround sound is becoming more and more popular. Among the popular multichannel representations of surround sound, such as 5.1, 7.1, or cinematic 22.2, the Ambisonics format is especially worth noting: it is a full-sphere spatial audio technique that delivers a truly immersive 3D sound experience. You can find more details about Ambisonics here (What is the Ambisonics format?).

Our previous blog post, “2nd order Ambisonics Demo VR”, described the process of combining audio and the corresponding 360 video into a fine 360 movie on Facebook. The presented approach uses the 8-channel TBE signal from ZYLIA Ambisonics Converter and converts the audio into the Ambisonics domain. As a result, we get a nice 3D sound image that rotates and adapts to the virtual movement of our position. However, it is still not possible to adjust parameters (gain, EQ correction, etc.) or change the relative positions of the individual sound sources present in the recorded sound scene.
In this tutorial we introduce another approach to using the ZYLIA ZM-1 to create a 3D sound recording, one which gives much more flexibility in sound source manipulation. It allows us not only to adjust the positions of instruments in the recorded 3D space around the ZYLIA microphone, but also to control their gain and apply additional effects (EQ, compression, etc.). In this way we can create a fancy spatial mix using only one microphone instead of several spot mics!

Spatial Encoding of Sound Sources – Tutorial

At the end of July 2017, we recorded a band called “Trelotechnika” using the ZYLIA ZM-1 microphone. All band members were located around the ZM-1: four musicians and one additional sound source, drums (played from a loudspeaker). During post-production, we applied the ZYLIA Studio PRO VST plug-in (within the Reaper DAW) to the recorded 19-channel audio track. This allowed us to separate the previously recorded instruments and transfer them to individual tracks in the DAW. Those tracks were then routed to the FB360 plug-ins, where encoding into the Ambisonics domain was performed.

“Spatial Encoding of Sound Sources” - a step-by-step description

Below, you will find a detailed description of how to run a demo session presenting our approach to recording and spatially encoding sound sources. The demo works on Mac OS X and Windows.
DOWNLOAD STAGE

  1. Download and install REAPER software. The evaluation version is fully functional and perfect to run with our demo.
  2. Download and install Facebook 360 Spatial Workstation software. It is a free bundle of spatial audio VST plug‑ins. In the session we used version 3.1 beta1.
  3. Download and install ZYLIA Studio PRO (VST3 Mac, Windows). It’s possible to run the demo in trial mode. In the session we used the VST3 plug-in.
  4. Download the REAPER session prepared by Zylia Team. It is already configured with our audio tracks and all required effects. Unzip it.
  5. Download 360 movie – two versions are available: high quality [3840 x 1920] and medium quality [1920 x 960].  High quality version sometimes tends to pause in FB360 Video Player on slower CPUs.
  6. Run REAPER and open the session (ZYLIA-Ambisonics-sources-enc.rpp).
TUTORIAL
 
After opening the session, you will see several tracks:
1. Very important! Please ensure that Reaper is working at a sample rate of 48 kHz.

2. ZS_pro track – contains the 19-channel WAVE file recorded with the ZYLIA ZM-1. Click the FX button on the ZS_pro track. If everything is correct, you will see the ZYLIA Studio PRO VST plug-in. By default there will be 5 virtual microphones, each already assigned to one of the instruments: bass, drums, guitar, synth, and pad. By clicking on a specific virtual microphone you can adjust its azimuth, elevation, width, and separation mode. Master send in the track’s routing panel should be unchecked.
3. The separated signals from ZYLIA Studio PRO pass to 5 individual tracks. You can adjust the gain, mute or solo instruments, or apply audio effects. A good practice is to use a high-pass filter on non-bass instruments and a low-pass filter on bass instruments to reduce the spill between them. We applied such filters in our session; a sketch of the idea is shown below.
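
Outside the DAW, the same spill-reduction idea looks like this. A minimal sketch, assuming numpy/scipy, the session’s 48 kHz sample rate, and illustrative cutoff frequencies:

    # High-pass for non-bass tracks, low-pass for bass tracks.
    from scipy.signal import butter, sosfilt

    RATE = 48000  # session sample rate (see step 1)

    def high_pass(signal, cutoff_hz=120.0):
        sos = butter(4, cutoff_hz, btype="highpass", fs=RATE, output="sos")
        return sosfilt(sos, signal)

    def low_pass(signal, cutoff_hz=250.0):
        sos = butter(4, cutoff_hz, btype="lowpass", fs=RATE, output="sos")
        return sosfilt(sos, signal)
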
4. Spatialiser track – receives the 5 signals from the tracks with separated instruments. The Spatialiser lets you place sound sources at the desired positions in 3D space.
   a) Click on FX and choose FB360 Spatialiser.
   b) Click the Load button on the video grid. Choose Slave mode and load the provided video clip. You will see a message box “H264 is not a recommended codec” – click X.
   c) Set the video format to Mono and the display mode to Desktop. In Connect to DAW you should be able to choose your computer’s name. If not, try restarting Reaper and repeating the steps. Now click the Open button; the video box will appear.
   d) Go back to the Spatialiser view. You will see an equirectangular picture and five numbered circles. Each circle represents a sound source position in space. By default, the sources are located at positions corresponding to the real positions of the instruments in the picture, but you can adjust them by clicking on a circle and dragging it around the picture.
5. Control track – receives the multichannel signal from the Spatialiser. Control provides the connection with the Video Player, rotates the audio scene, and applies binauralization.

   a) Click on FX and choose FB360 Control.

   b) Ensure that Listener Roll, Listener Pitch, and Listener Yaw are properly received from the video – the controls should be darkened. Open the video box and try to rotate the image – the Pitch and Yaw sliders should follow the image.

   c) JS: Master Limiter boosts the volume and protects against clipping/distortion.
   d) Master send in the track’s routing panel should be checked.
6. Now the video is synchronized with the audio. Moving the playhead in REAPER’s timeline will change the video’s time. Tap the space bar to play audio and video. Rotation of the video in the player is tracked by the decoded and binauralized Ambisonics sound.

7. A good practice is to play the video from the beginning of the file to keep synchronization. In some cases it is necessary to close the VideoClient + VideoPlayer and load the 360 video again to recover synchronization.

8. Now you can rotate the video across the pitch and yaw axes. Your demo is ready to run.