Discussion in 'Assets and Asset Store' started by olix4242, Jun 25, 2015.
It's working!!! Thank you!!!
Can I record audio from this?
I need to know before buying stuff...
I'd like to do a real-time capture of a VR scene. I plan to buy 2 Blackmagic Intensity cards and use VR Panorama RT. Has anyone tried this and can confirm that it works well?
Yes, this can be done. With VR Panorama RT you can unwrap your scene into an equirectangular format. But you will need good GFX hardware. Of course, it all depends on the complexity of your scene, your GFX hardware and your requirements (resolution, framerate, stereo or mono).
Btw, you don't need 2 Blackmagic cards - one Intensity Pro 4K should be enough. (I've tested one and I'm using it, but if you choose two, I don't know if you can even start two of them together.) But VR Panorama RT is also capable of outputting to multiple screens. You can see it here in action:
Also, you will probably have to customize VR Panorama a little bit for your resolution and preferred layout - but this can be done without programming, by simply editing the layout of the provided prefabs.
You can record audio with spatializer plugins. But most of them will output only stereo (spatialized) sound. They usually don't work with multichannel formats, as they are made to be used with headphones. This means that when you watch your movie you won't get any rotation of your audio space.
Here is a nice article that somewhat explains the "Align With Horizont" function that is also available in VR Panorama.
Ah alright haha
So then what about the default Unity plugin? would it work for that?
Any advice for this?
I have some issues, not 100% because of the VR Panorama plugin. My main issue right now is that I have a quad that is playing an mp4 video on it using the AVPro Video plugin. In normal play mode it's fine, but during the rendering the video goes bad.
One problem is that the video gets played at the normal framerate, but when I'm building the screenshots (frames), I'm building them much, much slower, so the video gets repeated a couple of times, plus it judders a bit.
Any advice on how to deal with this?
The second problem is with TENKOKU Water: during play mode I see some weird reflections in the water. This is more related to the water plugin, but again, do you have any advice?
Thanks Oli <3
New user of your awesome plugin here. Love it, and everything works great, just having some weird audio syncing issues when the ffmpeg adds the audio to the video file. The audio plays about 2 seconds slower than it should in the video. I could easily fix this in premiere pro or aftereffects myself if need be, since I have both the audio file and the video file, so it isn't a major problem. Still, it would be great if I could get the audio synced up automatically via the ffmpeg plugin to save time, as I would have to manually sync audio on every new render I want feedback on.
I was wondering if this is an isolated issue on my end, or if you have seen it happen before? In either case, do you have any ideas on why this might happen?
I too had some problems, but... At first I thought it was the audio, but it was the scene itself. My game actors were moving a little bit slower, but it was a slowdown over time and not a fixed 2-second delay. I had some actors that I was triggering with PlayMaker. It turns out that when I was building the video, the Audio Action was being ignored and my states were being played one after another.
Yep, I figured it could be a problem with my scene too, so trying to debug through that as well. I should probably look into my transition timings and look into the audio transitions when the video is building too, thanks for your reply!
Hi, this problem was discussed before with users having the same issue with AVPro. You should look at page 8 of this discussion and you will find a script I've posted that (partially) solves this problem. Unfortunately, this is more of a hack from my side than a real solution, which should come from the AVPro team. So, try it, and please do as others did - write a request to AVPro to solve this problem.
This is described in the manual. Apparently your audio starts on Awake (with the Play On Awake box ticked). This isn't a correct way to start audio, because the Unity scene actually starts on Update, and Awake runs before that. So you will have to use the script provided with VRPanorama in the folder Assets/VRPanorama/Scripts/AudioSyncWithVRCapture.cs. Add this script to your audio clip, and deactivate Play On Awake on this audio clip. Note that you only have to do this for clips that start from the scene beginning. Other audio clips that are triggered by script should work just fine.
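To illustrate the idea, here is a simplified sketch (not the actual script shipped with the plugin - just the principle of deferring playback from Awake to the first Update):

```
// Simplified sketch of the sync idea: don't Play On Awake,
// start the clip from the first Update, when capture has begun.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class AudioStartOnFirstUpdate : MonoBehaviour
{
    AudioSource source;
    bool started;

    void Awake()
    {
        source = GetComponent<AudioSource>();
        source.playOnAwake = false; // make sure Play On Awake stays off
    }

    void Update()
    {
        if (!started)
        {
            started = true;
            source.Play(); // first Update, in sync with the captured frames
        }
    }
}
```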
Ok thanks! Just bought the plugin, will buy the Intensity card soon. If I want to capture a stereo VR scene, I thought I needed two cards, because I would have to capture 2x HD outputs at the same time. If I understand correctly, VR Panorama is capable of combining the two outputs into one single 4K output through HDMI? That's why I wouldn't need to buy 2 cards...?
I would like to use this tool to export an HDR sequence so I can use that to generate reflections and lighting for prerendering in VRAY. I only see support for PNG and JPG. Is there a way to add 16 bit float support?
Unfortunately, there is no support for HDR formats yet. Unity can natively export only 24 or 32 bit images (and also mp4 videos don't support higher depths).
I'm currently testing a method to also export in 32-bit-per-channel EXR format - but this is something that can take some time, as it requires a lot of work.
Hi oli, I want to ask before buying your plugin. I want to create 360 video from my scene. Can your plugin do this in real time, from a built .exe?
I am actually quite interested in this post by metallizard.
We want to build a desktop application (in .exe format) that can produce 360 videos. Is it possible to do this with this plugin, without the Unity Editor? We plan to preset all parameters, for example the quality, and so on.
I'm also concerned about the audio, which is still experimental. Could you tell us more about the audio limitations (if any)? I plan for the application to produce 360 videos of only 10-15 seconds.
Just had two quick questions oli:
1) For some reason it seems like any object in my scene that is marked as "static" gets left out of the render. Is that intended behavior? EDIT: Actually I was wrong about this one, it's anything that is using a "mobile/Bumped Specular" shader. EDIT2: after removing and re-adding the camera object this issue has gone away. Not sure what caused that.
2) For one scene i'm using different skyboxes for each eye and a two camera left/right eye rig during normal playback. Is there any way to capture that with VRPanorama?
Hi, I've used your plugin successfully in the past, however for the current project I have a separate camera rendering some worldspace UI (to avoid it being affected by some postprocesing effects)
I thought I could use VRCompositor for this, but it results in the world space UI being rendered into each cube map face, which obviously isn't right.
What is the correct way of doing this?
Oh fixed it by making the UI camera a child of the main camera, just like in the demo scene. Not sure how that changes things but I'll worry about that later
Actually I do get it now, if it's a child of main camera it will get cloned as part of the setup, and the orientation of it will be adjusted correctly. Nothing to see here folks!
Is there some documentation on how to use the VRCompositor.cs script? I think it's what I need, because I have one camera for each eye (I'm using layers to show different objects in each camera).
Sorry to be the bearer of bad news, but I don't think this is easily possible.
VRPanorama creates the eye cameras internally when rendering in Equidistant Stereo mode. So it will basically duplicate the one camera you have the VRCapture script on.
Having two active VRCapture scripts in the scene at the same time is not possible.
However, to answer your actual question, this is how VRCompositor works for me (there is very little documentation about it; I've worked this out from the source code and the demo scene provided):
Make sure there is only one top-level camera (i.e., move any additional cameras to be children of it)
Add VRCapture and VRCompositor to the top level camera.
Add a reference for each child level camera in the VRCompositor CameraLayers property. Don't include the top level camera itself.
Make sure the child cameras have the right clear settings (most likely None, or Depth Only, depending on what you want to achieve)
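To double-check my own setup I use a little helper like this (just my own sanity check, nothing official - the CameraLayers name comes from the component, the rest is my own code):

```
// Sanity check for the setup steps above: child cameras must sit
// under the top-level camera and must not clear the color buffer.
using UnityEngine;

public class CompositorSetupCheck : MonoBehaviour
{
    public Camera topLevelCamera;  // the one with VRCapture + VRCompositor
    public Camera[] childCameras;  // the ones referenced in CameraLayers

    void Start()
    {
        foreach (Camera child in childCameras)
        {
            if (child.transform.parent != topLevelCamera.transform)
                Debug.LogWarning(child.name + " is not a child of the top-level camera");

            if (child.clearFlags == CameraClearFlags.Skybox ||
                child.clearFlags == CameraClearFlags.SolidColor)
                Debug.LogWarning(child.name + " clears color; use Depth Only or Don't Clear");
        }
    }
}
```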
PS: What are you trying to achieve? I've not yet seen a VR experience that works well if the left and right eye are showing something different.
I want to use stereoscopic movie texture inside the scene to incorporate "real" stereo content inside the virtual scene. It works well with the oculus. The challenge now is to find a way to capture this in real-time to create a VR video...
Will the VRCompositor understand that one camera is the right and the other the left?...
hum, just realized that VRCompositor doesn't work with VR Panorama RT...
Will try to use 2 instances of Panorama RT, one for each eye and assigning each to a different display... I'll let you know if it works... might be too expensive for a single computer...
No, for now it isn't possible. But this is a feature I'm working on. For now it's difficult to make it behave in a stable manner, as you can never know what system it will be used on - and you surely don't want to block an end user's computer for a long time.
Let me know your thoughts about this; maybe it can help me decide what to do next.
It is quite difficult to write any documentation for this one, as multilayered setup combinations differ greatly from case to case.
Audio support isn't experimental anymore. It should output what you hear. But you have to set your audio hardware to the configuration that you want. This is due to a Unity limitation.
Be aware that (for now) I can't give any good support for spatialized audio for YouTube (First Order Ambisonic (FOA) audio channel layouts). It uses an encoding that isn't documented very well, so I don't know where to start. (It uses live recording techniques that are difficult to simulate in-engine.) But you can export multichannel audio that can be transformed with plugins like this one:
Be aware that this requires a good audio production knowledge.
Also, any audio that isn't triggered by scripts (background music) shouldn't start on Awake (the default option for audio). For this task you should use the script that is provided with VRPanorama - AudioSyncWithVRCapture.cs
Contact me in private with an invoice number, and I can send you a new version that supports culling masks for both eyes. This should work for you.
Does it work on Android?
If the engine captures all frames, but fails on the ffmpeg rendering process, is there a way to restart the render without having to capture each frame again? I see all of them sitting in the folder.
Of course. There is a button "Encode H.264 video from existing sequence" that does exactly this.
No, panorama rendering requires modern hardware and lots of VRAM (1 GB at least) to render high-resolution panoramas.
I've just started using VR Panorama and it's great, useful in so many ways. I have noticed that the exported video/frames are much lighter and have less contrast than the game view. I'm sure there is a reason - is there a way to counter that, or something I'm missing?
Hi, you should use the Linear color space, while you are probably using Gamma. VR Panorama defaults to Linear lighting and deferred rendering, and it's best when used with HDRI lighting. So, you should switch your color space to Linear in Player Settings/Other.
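If you prefer doing it from code, something like this editor script should work too (a minimal sketch; the menu path is just an example I made up):

```
// Editor-only sketch: same as Player Settings > Other Settings >
// Color Space: Linear, but done from a menu item.
using UnityEditor;
using UnityEngine;

public static class ColorSpaceSwitcher
{
    [MenuItem("Tools/Switch to Linear Color Space")]
    static void SwitchToLinear()
    {
        PlayerSettings.colorSpace = ColorSpace.Linear;
        Debug.Log("Color space set to " + PlayerSettings.colorSpace);
    }
}
```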
So we can record from the Unity player without building out the game?
Hi, I'm using VR Panorama and I've run into an issue. I will try to explain it.
My project is divided into several scenes, which are all loaded asynchronously and additively into a main scene.
I have setup my VR Capture camera on my main scene.
I put a DontDestroyOnLoad script on the VR Capture camera in order to keep it when I unload the main scene.
I have modified the code in VRCapture.cs in order to always keep all instantiated objects between scenes (using DontDestroyOnLoad):
renderHead = (GameObject)Instantiate(Resources.Load("360RenderHead"));
renderHead.hideFlags = HideFlags.HideInHierarchy;
DontDestroyOnLoad(renderHead);
renderPanorama = (GameObject)Instantiate(Resources.Load("360Unwrapped"));
DontDestroyOnLoad(renderPanorama);
After this modifications, I don't have any errors but all the pictures taken look like this screenshot
Do you have any idea what I'm doing wrong?
Is it possible to make it work with additive scenes?
When I upload the video to Youtube I'm getting this strange result:
It's like only 180 degrees..
Am I missing some configuration?
Thanks in advance!
It looks like you are rendering normal video capture. You should use Equidistant Capture (from a drop down menu in VR Capture component)
Hi, VR Panorama uses a quite complex rig for its rendering, and it looks like some objects are getting deleted (probably temporary render textures, I'm not sure). Unfortunately, right now I'm unable to get my hands on a PC.
My suggestion is to render scene by scene for the moment. Also, contact me to get a new version that has simplified code and should make it simpler to implement what you want.
Working now! Thanks!
I've been using your VRPanoramaRT component for real-time rendering into equirectangular output into TouchDesigner for a 360 projection - it's working great, so thank you for your work on this.
At the moment we have two separate workstations: one for rendering the Unity app, and the other for TouchDesigner to map the 360 video to our projectors. I'm using a Blackmagic 4K vidcap card in between to get the video stream from renderer to projector workstations, but we're hoping to bump up the resolution to 8K, and this rules out vidcap card for now.
What we're trying to do now is to try using Spout2 so that we run both the Unity app and TouchDesigner on one workstation and they share a video texture/buffer between the two apps. There is a Unity4Spout plugin available, but since VRPanoramaRT creates a camera rig with six cameras, I think we're going to need to do some tweaking to make it work with VRPanoramaRT - and I was hoping you could help us figure out how to make the two work together.
Both Spout2 and Spout4Unity are free to download:
Here's a description from the readme file about how to set up Unity4Spout:
Sending a Spout texture from Unity:
Setup a camera in your Unity scene: Select a RenderTexture in the 'Target Texture' property to render into.
Add the SpoutSender component to your Camera:
Use a sharing name for your texture. (Has to be unique on your system so Spout can provide this texture to other clients under this name)
Select a RenderTexture. (You have to select the RenderTexture that your Camera uses for rendering!)
(RenderTexture settings: Color Format: ARGB32, Depth Buffer: No depth buffer)
Depending on Unity's current DirectX mode and the graphics card of your system you have to change the Texture Format of your Spout Senders.
DX9(Normal Unity rendering mode): Should work with all modes
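For reference, here's roughly how I understand those readme steps in script form (my own sketch - the SpoutSender specifics may differ, and the resolution is just an example):

```
// Sketch of the readme steps above: render the camera into an
// ARGB32 RenderTexture with no depth buffer, which the SpoutSender
// component then shares under a unique name (set in the inspector).
using UnityEngine;

public class SpoutOutputSetup : MonoBehaviour
{
    public Camera outputCamera;

    void Start()
    {
        // ARGB32 color, no depth buffer (depth = 0), per the readme
        var rt = new RenderTexture(1920, 1080, 0, RenderTextureFormat.ARGB32);
        rt.Create();

        // The camera renders into the shared texture instead of the screen
        outputCamera.targetTexture = rt;

        // The SpoutSender on this object must reference this same
        // RenderTexture under a system-unique sharing name.
    }
}
```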
Any ideas or suggestions are most welcome! Thank you.
- mars -
Does it support the mp4 VR player format?
Can record interactivity?
Hi. No, it works as an offline renderer (not real-time). So all movements and actions should be animated/scripted.
But you could capture your interactive animations with this free tool: https://www.assetstore.unity3d.com/en/#!/content/19622
And afterwards use that captured animation to render with VRPanorama.
Or, you could use the VRPanorama RT component to unwrap panoramas in real time, and use hardware video capture devices (like the Blackmagic Intensity Pro 4K) to create your interactive captures.
Sorry, I don't quite understand your question
That's quite an interesting approach.
But are you sure that an 8K texture could be easily transferred without hiccups? I think it would require a really large bandwidth.
As for implementing it: you don't want to use this rig for transferring the RT texture. You would only have to render into an RT view and then transfer that RT view via Spout. You can easily change one of the "360UnwrappedRT" prefabs by changing the target RT texture on their PanoramaCamera objects, and send that texture via Spout.
Let me know how it works and what performance you are getting.
Thanks for the reply.
Thank you that's great to know! I knew I was doing something wrong.
Hi Olix242, I changed the target RT texture of the PanoramaCamera object on the "360UnwrappedRTMono" prefab as you suggested, and that did the trick. However, the output is upside down - do you flip the cubemap in the code somewhere? Thanks! - mars -
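In the meantime I'm working around it by blitting into a second RenderTexture with the Y axis flipped before sending it out (a sketch, not sure it's the right fix - the field names here are my own):

```
// Workaround sketch for the upside-down output: mirror the panorama
// vertically into a second RenderTexture and send that one onward.
using UnityEngine;

public class FlipRenderTexture : MonoBehaviour
{
    public RenderTexture source;   // the PanoramaCamera's target texture
    public RenderTexture flipped;  // same size/format, sent to Spout instead

    void LateUpdate()
    {
        // scale (1, -1) with offset (0, 1) mirrors the texture vertically
        Graphics.Blit(source, flipped, new Vector2(1f, -1f), new Vector2(0f, 1f));
    }
}
```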