We are happy to announce the new release of ZYLIA ZR-1 Firmware v1.3 with Remote Control.
The newest firmware version brings a brand-new feature to the ZR-1 recorder: Remote Control. From now on you can connect to your ZR-1 over Wi-Fi and control the recording process directly from the web browser of your smartphone or tablet.
With the ZR-1 Remote Control application you can:
ZR-1 FIRMWARE UPGRADE PROCEDURE
The ZYLIA ZR-1 Portable Recorder firmware can be updated with specially prepared files provided by ZYLIA. To perform the firmware update, copy the provided update files to a USB flash drive. The procedure is as follows:
In these crazy times, we musicians face many new challenges. We spend more time creating at home: playing, writing new songs, recording and mixing. It is, however, a good time to learn new audio techniques and polish the old ones.
There is a tool that will let you take your first easy steps in sound post-processing and bring you closer to the world of professional musicians.
Record and mix with ZYLIA Music!
The ZYLIA Music set consists of one spherical microphone array (with 19 microphones hidden inside!) and the easy-to-use ZYLIA Studio software. That’s it! This is all you need to make studio-quality recordings at home.
How does it work?
Everything is simple and intuitive. The software guides you step by step through the recording process. You don't need to be familiar with cables, recording interfaces or recording techniques, and you don't need to know how to position microphones to capture the sound correctly. This recording studio does everything for you. You just have to play well ;-)
ZYLIA Studio workflow.
If you want to set your musical creativity fully free and explore sound design, we have a PRO plugin for you. It will take you to the next level of sound post-processing: you will be able to experiment with Ambisonics and mix a 360-degree scene using virtual microphone technology, with a wide range of spatial presets at your disposal. Especially invaluable nowadays is the ability to stream 3D audio in binaural format, the format that mirrors the way our ears hear.
Don't waste your time scrolling through boring videos on the Internet. Get started making beautiful music!
In this tutorial we describe the process of converting 360 video and 3rd order Ambisonics into 2D video with binaural audio, with linked rotation parameters.
This allows us to prepare a standard 2D video while keeping the focus on the action from the video and audio perspective.
It also allows us to control the video and audio rotation in real time using a single controller.
The Reaper DAW was used to create the automated rotation of 360 audio and video.
The audio was recorded with a ZYLIA ZM-1 microphone array.
Below you will find our video and text tutorial, which demonstrates the setup process.
Thank you Red Bull Media House for providing us with the Ambisonics audio and 360 video for this project.
Ambisonics audio and 360 video is Copyrighted by Red Bull Media House Chief Innovation Office and Projekt Spielberg, contact: cino (@) redbull.com
Created by Zylia Inc. / sp. z o.o. https://www.zylia.co
Requirements for this tutorial:
We will use Reaper as a DAW and video editor, as it supports video and multichannel audio from the ZM-1 microphone.
Before recording the 360 video with the ZM-1 microphone, make sure the front of the camera points in the same direction as the front of the ZM-1 (the red dot on the equator marks the front of the ZM-1 microphone). This prevents problems later and tells you in which direction to rotate the audio and video.
Step 1 - Add your 360 video to a Reaper session.
The video file format may be .mov, .mp4, .avi or other.
From our experience, we recommend working on a compressed version of the video and replacing this media file later for rendering (step 14).
To open the Video window click on View – VIDEO or press Control + Shift + V to show the video.
Step 2 - Add the multichannel track recorded with the ZM-1 and sync the Video with the ZM-1 Audio track.
Import the 19 channel file from your ZM-1 and sync it with the video file.
Step 3 – Disable or lower the volume of the Audio track from the video file.
Since we will not use the audio from the video file, we need to remove it or set its volume to the minimum.
To do so, right click on the Video track – Item properties – move the volume slider to the minimum.
Step 4 – Merge video and audio on the same track.
Select both the video and audio tracks, then right-click – Take – Implode items across tracks into takes.
This will merge video and audio to the same track but as different takes.
Step 5 – Show both takes.
To show both takes, click on Options – Show all takes in lanes (when room) or press Ctrl + L
Step 6 – Change the number of channels to 20.
Click on the Route button and change the number of track channels from 2 to 20; this is required to accommodate the 19 channels of the ZM-1 (Reaper track channel counts come in multiples of two, hence 20).
Step 7 - Play both takes simultaneously.
If we press play right now, only the selected take will play, so we need to make both takes play simultaneously:
Right click on the track – Item settings – Play all takes.
Step 8 – Change 360 video to standard video.
Next we need to convert the 360 (equirectangular) video into a flat view so we can visualize and control the rotation of the camera.
To do so, open the FX window on our main track and search for Video processor.
On the preset selection, choose Equirectangular/spherical 360 panner. This will flatten your 360 video, allowing you to control camera parameters such as field of view, yaw, pitch and roll.
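As a side note on what this preset does: in an equirectangular frame, longitude maps linearly to the horizontal pixel axis, so a pure yaw rotation amounts to a circular horizontal shift of the image. A minimal sketch in Python (the function name and NumPy-based framing are illustrative, not part of Reaper):

```python
import numpy as np

def rotate_equirectangular_yaw(frame: np.ndarray, yaw_deg: float) -> np.ndarray:
    """Rotate an equirectangular video frame around the vertical axis.

    Longitude maps linearly to the horizontal pixel axis, so yaw is a
    circular horizontal shift; pitch and roll would need a full
    spherical remapping, which Reaper's Video processor handles.
    """
    height, width = frame.shape[:2]
    shift = int(round(yaw_deg / 360.0 * width))
    return np.roll(frame, shift, axis=1)
```

Shifting by a quarter of the frame width, for example, corresponds to a 90-degree turn of the view.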
Step 9 – As FX, add ZYLIA Ambisonics Converter plugin and IEM binaural Converter.
On the FX window add as well:
You should now have the binaural audio which you can test by changing the rotation and elevation parameters in ZYLIA Ambisonics Converter plugin.
Step 10 – Link the rotation of both audio and video.
The next steps will be dedicated to linking the Rotation of the ZYLIA Ambisonics Converter and the YAW parameter from the Video Processor.
On the main track, click on the Track Envelopes/Automation button and enable the UI for the YAW (in Equirectangular/spherical 360 panner) and Rotation (in ZYLIA Ambisonics Converter plugin).
Step 11 – Control Video yaw with the ZYLIA Ambisonics Converter plugin.
On the same window, on the YAW parameters click on Mod… (Parameter Modulation/Link for YAW) and check the box Link from MIDI or FX parameter.
Select ZYLIA Ambisonics plugin: Rotation
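For context, the Rotation parameter we just linked turns the Ambisonics sound field around the vertical axis, which for B-format signals is a small matrix operation. A first-order sketch in Python (the ZM-1 delivers 3rd order, where the same idea uses larger rotation matrices; the function name and sign convention are illustrative):

```python
import math

def rotate_foa_yaw(w, x, y, z, yaw_deg):
    """Rotate one sample of first-order Ambisonics (B-format) around
    the vertical axis. W (omni) and Z (up-down) are unaffected by
    yaw; X and Y rotate like an ordinary 2D vector."""
    a = math.radians(yaw_deg)
    x_rot = x * math.cos(a) - y * math.sin(a)
    y_rot = x * math.sin(a) + y * math.cos(a)
    return w, x_rot, y_rot, z
```

Because the whole scene rotates as one rigid sound field, linking this single parameter to the video yaw keeps audio and picture aligned.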
Step 12 – Align the position of the audio and video using the Offset control.
On the Parameter Modulation window you are able to fine-tune the rotation of the audio with the video.
Here we changed the ZYLIA Ambisonics plugin Rotation Offset to -50 % so that the front of the video matches the front of the ZM-1 microphone.
Step 13 – Change the Envelope mode to Write.
To record the automation of this rotation effect, right-click on the Rotation parameter and select Envelope to make the envelope visible.
Then right-click the Rotation envelope Arm button (the green button) and change the mode to Write.
By pressing play you will record the automation of video and audio rotation in real time.
Step 14 – Prepare for Rendering
After writing the automation, change the envelope to Read mode instead of Write mode.
Disable the parameter modulation from the YAW control:
Right click on Yaw and uncheck “Link from MIDI or FX parameter”
OPTIONAL: Replace your video file with the uncompressed version.
If you have been working with a compressed video file, this is the time to replace it with the original media file. To do this, right-click on the video track and select Item properties.
Scroll to the next page and click Choose new file.
Then select your original uncompressed video file.
Step 15 – Render!
You should now have your project ready for Rendering.
Click on File – Render and set Channels to Stereo.
On the Output format choose your preferred Video format.
We exported our clip as a .mov file with the H.264 video codec and 24-bit PCM audio.
Thank you for reading and don’t hesitate to contact us with any feedback, questions or your results from following this guide.
We are happy to announce the new release of ZYLIA Studio PRO v1.6.0.
• We have improved our processing engine which provides higher quality sound with a wider spectrum up to 20 kHz.
• We have also introduced some improvements to the energy map algorithm. A new higher resolution (360 points in the horizontal plane) allows for sound source localization with higher precision.
• For users who need Dolby Atmos, we introduced 15 predefined presets that will expand your 3D audio recording workflow.
We are happy to announce the new release of ZYLIA Ambisonics Converter (Standalone application v1.6.0 and Plugin v1.5.0).
New ZR-1 Firmware v1.2 has been released. The new firmware introduces the following features:
ZR-1 firmware upgrade procedure
The ZR-1 firmware can be updated with specially prepared files provided by ZYLIA. To perform the firmware update, the user is required to upload the provided update files to a USB flash drive. The procedure is as follows:
If you've already done an update with the Support team, you don't have to do anything. If not, check your inbox and make an appointment with a Support person to run this update together.
Zylia: What is your background?
Florian Grond: I am an artist and researcher with a particular interest in audio technology and sound art. Due to my natural-science background, I have always been curious about new technology trends, and my artistic side tries to put technology to good use for topics that capture my imagination. In recent years, many of the art and research projects I was involved in had to do with how immersive sound solutions can help us connect with new or unfamiliar perspectives and help translate them to a broader public. To give two examples: in the past I conceived and implemented a multi-speaker soundscape simulator for urban planners in Montreal, so that they can better experience and understand the impact of noise pollution in public spaces. Together with the blind literary scholar Dr. Piet Devos, I have also investigated binaural recordings as a means to capture the immersiveness of soundscapes and to use these recordings as a point of departure for a conversation on how blind people experience and navigate their environment.
Zylia: How would you describe Ambisonics technology?
Florian Grond: With my science background, I was always intrigued by the underlying theoretical concepts. Still, I also like that for first-order Ambisonics, you can practically understand it as an extension of the MS recording technique. I think it is this dual perspective as an artist and researcher, having something palpable and concrete but also something that has a solid theoretical foundation.
But for someone working with sound, the key question is, of course, what does it sound like? First-order recordings give you a great sense of envelopment. Still, only recent advances in affordable higher-order microphone arrays as spearheaded by ZYLIA and the ZM-1 make Ambisonics interesting for spatial sound recordings and reproduction. Another important aspect is that we see more and more plugin collections that target multichannel signals and, more specifically, the b-format. So, everyone working with Ambisonics will soon enjoy many of the tools they are used to for traditional audio formats.
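The MS analogy Florian mentions can be made concrete: the omnidirectional W channel behaves like the mid signal and the left-right figure-8 Y channel like the side signal, so a basic stereo fold-down of horizontal first-order material is just an MS decode. A sketch in Python (ignoring normalization conventions such as SN3D vs. FuMa gain factors):

```python
def foa_to_stereo_ms(w: float, y: float) -> tuple:
    """Decode first-order Ambisonics to stereo MS-style: W acts as
    the mid (omni) signal, Y as the side (figure-8) signal."""
    left = w + y
    right = w - y
    return left, right
```

A source dead ahead has Y = 0 and lands equally in both channels, just as with a coincident MS pair.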
One of my favourite sound programming environments for creative coding and prototyping is SuperCollider, and some years ago, I decided to implement higher-order Ambisonics for this platform based on the Faust library ambitools by Pierre Lecomte. I am pleased to see that artists and sound creatives are increasingly using this library in many projects.
The second project is a continuation of binaural recordings with blind workshop participants. Here I plan on using the ZM-1 in addition to the binaural microphones. I’ll probably find a way to have it stick out from the top of a backpack to get a 3rd order b-format recording close to the listener’s perspective. I will then convert it into binaural and compare it with the signal captured close to the ears. This 3rd order Ambisonics perspective will also make it possible to add head tracking. For this project, I am particularly looking forward to using the ZR-1 recorder, which I can hide in the backpack too. I have made many field recordings with the ZM-1 connected to my laptop, which worked great. Still, I was desperately waiting for completely mobile 3rd order recording equipment, as I now have with the ZR-1.
Zylia: You had the opportunity to work with the ZYLIA 6DoF VR / AR Set. What do you think about such technology and approach for 6dof audio recording and processing?
Florian Grond: The 6DoF VR / AR Set is undoubtedly a unique tool, and what excites me from a research perspective is that it opens up a field of possible applications where many new things wait to be discovered. One of the things that impressed me from the start was how effortlessly 171 channels of 24-bit, 48 kHz audio streamed into a single laptop over the USB interface when we tested the 6DoF set here at McGill University in the sound recording department.
The 6DoF VR / AR Set will also help bring together technology from various sectors (gaming, AR/VR, sound recording) so that 6DoF recordings can be distributed and experienced correctly. I think it is an exciting moment because the technology is stable and mature, so innovators can dive right into it. By being brand new, it creates opportunities to discover new fields of application and to define the field of immersive audio. I am curious to see in which ways we will be able to experience 6DoF soon: will it be through game engines, or maybe even over the internet in the browser?
Zylia: How does this solution differ from others available on the market? What is its potential?
Florian Grond: Several key features set the 6DoF VR / AR Set apart from any other solution. First, as a 3rd order microphone array, the ZM-1 provides a high spatial resolution and captures the information of the diffuse field accurately. The second aspect is that the ZM-1 is not only a microphone but also a USB audio interface; this means that the 6DoF VR / AR Set can be relatively easily extended with more microphones. If you had analogue equipment, the numbers of channels of your audio interface would limit the nodes in your 6DoF grid. Of course, you might find a workaround with complex audio networks, but it needs to be practical too.
Third, ZYLIA has implemented an excellent interpolation algorithm for the b-format audio streams. While 6DoF interpolation has been developed over the past years, this is the first time the results of this research have been translated into a practical application.
Zylia: How can this solution be used in different projects? How would you recommend using it?
Florian Grond: I am convinced the 6DoF VR / AR Set will see many different uses. Navigable sound recordings of orchestras, such as the ones we are working on right now, are an exciting field of application. The music event industry in general will likely enjoy working with 6DoF for the immersive broadcasting of events to a fanbase much larger than the audience in place. The 6DoF VR / AR Set will also find applications in sports event broadcasting. In the game industry, 6DoF will complement the object-audio approach in situations where you need a rich atmosphere and where not all sound sources are necessarily visible, such as forests or urban soundscapes; here 6DoF can create sonic richness at a fraction of the computing power. I can also see small-scale versions of the 6DoF VR / AR Set being used in teleconferencing situations for large boardrooms. There are plenty of applications for rapid prototyping of urban soundscape design as well as academic research: for instance, research on how to capture and document complex in situ sound works, or room acoustics, where multiple distributed room impulse responses can be recorded from only one source.
Immersive art is a constantly evolving and developing branch of art which, in principle, has a simple definition: the creation of a world around a person in a way that makes them feel part of it and inside it. In practice, the label of immersive art touches on everything from illusory world-building to simply including a piece of interactivity within a larger, traditional art show. By that description, the only requirement for something to be labeled “immersive art” is that the audience no longer exists as a passive group of onlookers. Viewers become “participants”, and no two people experience the same thing. This can be done in almost any medium of art.
Our ZYLIA Brand Ambassador – Przemysław Danowski – is highly involved in creating different immersive art experiences. He’s a spatial and algorithmic audio designer and producer working at the Sound Engineering Department of the Fryderyk Chopin University of Music (Warsaw, Poland) and a sound design consultant at the Visual Narratives Laboratory (vnLab) of the Film School in Łódź. His fields of interest are spatial audio, sound in game engines (Unreal Engine 4, Unity3D, FMOD, Wwise), sound for VR/AR, and procedural and generative sound (Max MSP, pd, SuperCollider, etc.). He is also interested in audio/video codecs, compression, authoring, and streaming, and is a director and producer of immersive music documentaries.
Z: What are you most proud of in your work?
PD: I’m really proud when I see the successes of my students, and the biggest success for me now is being part of an artistic group with my colleagues from the Fine Arts Academy in Warsaw. This is why my teaching is focused on connecting students from both schools I work with. Being able to contribute to the work of a talented team is the most satisfying thing.
Z: Can you reveal any tricks or tips for beginners in the field of immersive art?
PD: Yes. First - networking. Do networking! Gather knowledge from people and share your knowledge with people. Second – experiment and try every tool that is available. Read the documentation! (RTFM! ;-) Don’t give up when something isn’t working – ask for help. Find workarounds and again – share your experiences with the community.
Z: What do you think is the future of 3D audio recording? What technological changes can occur or are already taking place in the industry?
PD: There is a gap between hardware and software now. There is a lot of great free or inexpensive software for 3D audio. On the other hand, the equipment is very expensive and the selection is small. I hope this will change, and that with the mass adoption of immersive formats the hardware for 3D audio will become more diverse and affordable.
I believe that a big part of audio/video production will take place in virtual worlds or extended realities rather than on flat screens or in physical studios. I think that in the near future avatar equipment will replace the software we know today. There is a big shift coming with cloud architecture: you will not need computing power on board, as all the computing will be done in the cloud and user devices will be just terminals. That will bring mass adoption of XR, VR, and 3D audio along with it.
Z: Which industries can benefit from spatial audio and immersive solutions?
PD: Spatial audio will be an integral part of every industry, because XR will be the interface for all of the IT, and IT is now a big part of every industry that I know.
Z: You had the opportunity to get familiar with the ZYLIA 6DoF VR/AR Set. What do you think about such technology and approach for 6DoF audio recording and processing?
PD: I think that this is the beginning of a whole new era for sound. Volumetric audio and video - this is not sci-fi, it’s real and it will become a big part of the entertainment and art industry in the next years.
Z: How does this solution differ from others available on the market? What is its potential?
PD: There are countless possible applications for this; I will not even try to list them all. For me, it is a great medium for capturing theater, opera, concerts, exhibitions, sports. It’s like sampling reality in 3D. It is very easy to apply: you don’t need as much “handcraft” work as in object audio to build a 3D soundscape. You just choose an adequate resolution, that is, how many sampling points (microphones) you apply per area. And it doesn’t need to be uniform: you can decide which parts you sample with high resolution and which parts will be more generalized. I believe it is also more efficient in terms of resources, and it’s easy to plan. I am really looking forward to doing some experiments with it.
Z: What are your next career plans? What would you like to do this year?
PD: Due to the coronavirus outbreak, my primary plans changed. I was scheduled to give presentations of the Connexion project at the klingt gut! conference in Hamburg and at AES in Vienna. My Polyphony project was about to premiere at festivals, but all of that had to change in the face of this new situation. I hope I will be able to have these works presented in the second half of this year, and in the meantime I’m working with my team on new projects and continuations of previous ones.
I’m working on the second edition of my Connexion project (codename: Re:Connexion). It’s a virtual sound art installation that exists in XR: an interactive music/sound interface on which you will be able to perform. It is going to be a multiplayer application, so the performance will be collaborative. The installation will be placed in venues such as physical museums (in a room designated for it) and on the internet as well, so you will be able to perform with a friend standing right next to you and with people from all around the world at the same time.
by Pedro Firmino
In continuation of our previous blog post “How to stream 3D audio in Binaural format with ZYLIA ZM-1 and ZYLIA Ambisonics Converter plugin”, here is how you can accomplish the same effect on a Windows system using a DAW with ReaStream and the ZYLIA Ambisonics Converter plugin.
What you require:
- ZYLIA ZM-1 microphone array
- ZYLIA Ambisonics Converter plugin
- BinauralDecoder plugin by IEM ( https://plugins.iem.at/ )
- ReaStream plugin (https://www.reaper.fm/reaplugs/ )
- DAW of your choice.
- OBS (as a streaming application)
Step 1: Receive the input of the ZM-1 into your DAW
As the first step, we need to receive the input from the 19 channels of the ZM-1 on a track in the DAW. Connect the ZM-1 microphone, open your DAW and select the ZM-1 as your audio device.
Afterwards, create a track, change the number of channels from 2 to 20 and add the ZM-1 as a 20 channel input source.
By arming the track to record, you should now be receiving the 19-channel signal from the ZM-1 in your DAW.
Step 2: Achieving Binaural sound using ZYLIA Ambisonics Converter plugin and IEM Binaural Decoder
On the FX chain of the ZM-1 input track you will have to add the following plugins in this specific order:
On the BinauralDecoder, you may choose to add some headphone equalization if you believe it’s necessary for the streamed audio.
Finally, in the ReaStream plugin, remember to keep your track armed and enable live monitoring.
To send the output to your streaming application, enable “send audio/MIDI” in the ReaStream plugin and select “local broadcast” from the dropdown list.
Step 3: Receive the signal in OBS
On the right side of OBS, open the Settings and click the Audio settings.
In the Desktop Audio choose your output device. In this case Speakers. Confirm with OK.
With the Desktop Audio added, click on the cog Icon and select Filters.
Click the + icon and add a new VST 2.x plug-in: choose ReaJS (it is included in the ReaPlugs VST pack).
Click the Open plugin interface, click Load – Utility – Volume. Set the volume to the lowest.
In the Filters window, also add the ReaStream-standalone plugin. Open the plug-in interface and set it to receive. Make sure the identifier is the same as in the Reaper session.
You will now be using the ZM-1 as an input audio device and are ready to start streaming!
by Tomasz Ciotucha
You are probably sitting at home, taking advantage of the extra time to learn new things. If you are interested in gaining new skills in binaural streaming, this blog post is for you.
Binaural audio takes advantage of psychoacoustic principles to mimic the human hearing system. It forms an immersive projection that appears to be around your head, not between your ears and your headphones.
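At the signal level, that head-surrounding effect is produced by filtering the source with a pair of head-related impulse responses (HRIRs), one per ear. A minimal sketch in Python with placeholder impulse responses (real decoders such as the IEM BinauralDecoder apply measured HRIR sets per Ambisonics channel):

```python
import numpy as np

def binauralize(mono: np.ndarray, hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Render a mono source binaurally by convolving it with a
    left/right HRIR pair; returns a (2, N) stereo signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])
```

The interaural time and level differences encoded in the HRIR pair are what make the result appear to come from around your head rather than from inside it.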
Here you have a popular example of binaural audio (listen on headphones):
Using ZYLIA ZM-1 you can capture high-resolution binaural audio. This can occur in real-time, so you can stream it online to other people. To achieve this, you will need ZYLIA Ambisonics Converter Plugin, Soundflower, IEM Binaural Decoder, and, of course - a ZYLIA ZM-1 microphone array. In this post, you will find out how to set everything up on your Mac system.
We have also prepared a Reaper template project for you, but you can do this in any DAW which supports 19-channel tracks. The template project is available for download just below this paragraph.
1. IEM Binaural Decoder
You will have to install the whole IEM suite. Visit the page: https://plugins.iem.at/
Download the suite and install the software. Now you can proceed to the next steps.
2. Soundflower
To download the software, use the link below:
Scroll down to the bottom of the site and click on “Soundflower-2.0b2.dmg” in the “Assets” area to start downloading.
After a successful installation, “Soundflower (2ch)” and “Soundflower (64ch)” devices should show up in “Audio MIDI Setup”. This step simply checks that Soundflower is installed correctly.
If you want to use the Reaper template project provided by us, you can jump to point 3.b after completing the steps described in this paragraph. You can download the project from this site. If you want to set up the Reaper session on your own, proceed through point 3.a.
Make sure that your ZYLIA ZM-1 microphone array is connected to your computer. If you are using the template, open it; if not, create a new project. First, go to Reaper’s preferences and select Zylia as the input device and Soundflower (2ch) as the output device. Tick the box marked in the image below to allow the use of different input and output devices.
3.a Setting up a new Reaper project
In the new project, create an audio track by hitting ⌘ + T. Then configure the track’s routing by hitting the “Route” button next to the track’s fader in the mixer (marked “1” in the image). A new window should appear; there, set the number of track channels to 20 (marked “2” in the image). Make sure that “Master send” is ticked.
Then you have to arm the track by clicking the red circle in the track panel (marked “1” in the image). Before the next step, you may have to make the track a bit wider by dragging its bottom line down (marked with an arrow in the image) so the “IN FX” shows up (orange rectangle in the image).
Now, select input channels to “Input 1..Input 20 [20 chan]” by clicking on the area highlighted in the image.
Two new windows should show up. Search for the ZYLIA Ambisonics Converter Plugin and click OK. Then add another plugin, the IEM Binaural Decoder. If you haven’t downloaded the IEM plugins yet, go back to the first step of this tutorial. If you managed to set everything up as described here, you can now proceed to the steps described in point 3.b.
3.b Using template project
4. Setting up streaming software (in this case OBS)
In OBS, go to preferences, set the number of input channels to Stereo and select Soundflower (2ch) as the input device (Mic/Auxiliary Audio). Disable the Desktop Audio.
Click OK to save the settings. You might have to restart the OBS.
Now everything should be set up and you should get input signal from your ZYLIA ZM-1, as marked here in the image below.
#zylia #3Daudio #stream #streaming #audio #binaural #OBS #reaper #IEM #soundflower #music #concert #gig #homerecording #homeconcert #concertathome