Stereoscopy and 3D Vision

Discussion in 'Scripting' started by lesfundi, Mar 16, 2010.

  1. lesfundi

    lesfundi

    Joined:
    Jan 10, 2009
    Posts:
    628
    http://www.nvidia.com/object/3D_Vision_Main.html
    -> this link will show you the setup of the nvidia for stereoscopy.

    1) Can I use this with Unity3D Standard? What setup or scripting do I need to make this work with Unity3D Standard or Pro?

    2) The only thing I need is to walk through a 3D environment and use this hardware to see it in stereoscopy. Is there any help out there for that?

    lesfundi
     
  2. cerebrate

    cerebrate

    Joined:
    Jan 8, 2010
    Posts:
    261
    I've gotten 3D Vision to work with Unity. It only works if you compile a game and then view it in fullscreen. Windowed games do not work with 3D Vision.

    GUI elements are rendered as close to the viewer as possible, not actually partway into the screen. I don't think this is changeable.
     
  3. lesfundi

    lesfundi

    Joined:
    Jan 10, 2009
    Posts:
    628
    Thanks for the information.

    1) Did you have to get/make a script to make this work?
    2) Do you need to use a specific scaling?
    3) Did you need to set any parameters?

    carl
     
  4. cerebrate

    cerebrate

    Joined:
    Jan 8, 2010
    Posts:
    261
    No, you don't need to do anything in Unity to get it to work. You merely compile the game, make sure 3D Vision is enabled in the control panel, then start the game in fullscreen. 3D Vision should automatically start rendering the game in 3D.
     
  5. Wolfram

    Wolfram

    Joined:
    Feb 16, 2010
    Posts:
    261
    Confirming that. Our project is quite complex, and I didn't even have to recompile it to use the nvidia stereo stuff (as long as it's using fullscreen).

    The only problem is the stereo settings. You can adjust them on the fly using hotkeys, but of course they depend on scene scale, and unfortunately not all of our scenes are modeled at the same scale, so we need to think about tweaking that.
    If you enable the extended keys in the driver, you can adjust the "zero parallax" with ctrl+F5/F6, which should help you place your GUI elements at the depth you want. However, it is likely that all GUI elements are always rendered at screen level. Since we use actual 3D billboard geometry for all our GUI stuff, instead of UnityGUI or GUIText/GUITexture, we can adjust the depth of our GUI nicely.
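    For what it's worth, the relationship between scene depth and on-screen parallax can be sketched with generic symbols (this is just the standard off-axis geometry, not the driver's actual parameters):

```python
def screen_parallax(eye_sep, convergence, depth):
    """On-screen horizontal offset between the two eye views for a point
    at `depth`, under a simple off-axis model. Zero at the convergence
    ("zero parallax") plane; positive behind it, negative in front."""
    return eye_sep * (1.0 - convergence / depth)
```

    So a GUI billboard placed exactly at the convergence distance lands at screen level, which is what the ctrl+F5/F6 adjustment lets you line up.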
     
  6. dgutierrezpalma

    dgutierrezpalma

    Joined:
    Dec 22, 2009
    Posts:
    404
    There are two kinds of stereo 3D, and both of you are talking about only one of them.

    You can have "native" stereo 3D, where the game generates two views of the same scene and puts them side by side (or in any other similar format), so the developer has complete control over the 3D effect. The display device doesn't have to generate the second view of the scene; it only has to show each view to the corresponding eye.

    You can also have "fake" stereo 3D, where the game only generates one view of the scene and the 3D driver has to generate the second view dynamically. The gamer has more control over the 3D experience, but this system has a very big problem: since the driver doesn't have all the information it needs to completely reconstruct the second view, it might generate some visual glitches.


    Both of you have talked about "fake" stereo 3D, but I'm far more interested in "native" stereo 3D. We have the same problem with 3D movies: there are lots of "2D-to-3D quick conversions" that can't compete in quality with "real 3D movies".
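    As an illustration of the "native" path: once the game has rendered both views itself, packing them into one side-by-side frame is trivial. A toy sketch, with rows of pixels standing in for the two renders (not a Unity API):

```python
def pack_side_by_side(left, right):
    # Concatenate each row of the left view with the matching row of the
    # right view; a side-by-side-capable display splits the frame per eye.
    assert len(left) == len(right), "views must have the same height"
    return [l_row + r_row for l_row, r_row in zip(left, right)]
```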
     
  7. Wolfram

    Wolfram

    Joined:
    Feb 16, 2010
    Posts:
    261
    Not quite. The method nvidia uses actually intercepts the rendering commands before they are fully processed by the GPU, internally modifies the camera position (I believe by tweaking the fragment/vertex shader pipeline), renders the scene twice with different camera positions, and merges the result into a stereo signal, which can be active-shutter stereo (frame interleaved) or red/cyan anaglyph (this is selectable in the graphics driver).

    So the result when using the nvidia stereo driver is identical to what you would get by actually placing two distinct cameras in your scene. There are no gaps in the resulting image (although the driver might have problems with complex, non-standard shaders); it is "real" stereo.
    Since every (mono) 3D app uses the same principle (define a camera, render the scene from that view), the driver can transform the output of these apps into stereo images without the need to modify the app itself.

    The disadvantage, however, is that you can't really influence the stereo parameters (eye separation, zero parallax) - you can adjust these settings with the nvidia driver, but only on a per-application basis, not on a per-scene or even per-camera basis.

    EDIT: So according to your terminology, there would be three types of stereo: application-based (giving you full control over all stereo parameters), "fake" stereo reconstructed from pre-rendered 2D material (which is expensive/tricky to do and *will* produce artefacts), and the nvidia style (resulting in full stereo, but with limited control over the stereo parameters).
     
  8. dgutierrezpalma

    dgutierrezpalma

    Joined:
    Dec 22, 2009
    Posts:
    404
    I have tried NVidia 3D Vision and it does a very good job with simple games, but it has some problems with post-processing effects (some GUI elements, advanced shadows, mirror reflections...) where some elements are drawn at the wrong depth. It is a good trick that is useful for old games, but new games should use native stereo 3D to avoid this kind of problem.

    EDIT:
    When I said "fake" stereo 3D I meant "everything that is not native stereo 3D". If you wish, we can consider NVidia 3D Vision a different category; however, I still think new games should use native stereo 3D.
     
  9. Wolfram

    Wolfram

    Joined:
    Feb 16, 2010
    Posts:
    261
    Ah, true, that's what I meant by "complex, non-standard shaders". I'm not sure *what* kinds of shaders are supported/correctly transformed, and I'm not sure that is documented anywhere. Most multi-pass shaders (which includes shadows and mirror effects) might fail, depending on how they actually implemented their 3D driver.

    Of course it's always best to implement the stereo on your own, so you can control everything that happens. But for stereo at absolutely zero additional implementation costs, the nvidia stereo driver does a pretty good job.

    Creating a stereo signal manually within Unity is somewhat tricky, and/or is hindered by the I-don't-give-a-f*ck-about-our-users mentality of Micro$oft:
    - You can create an anaglyph stereo signal using a simple camera script that is published somewhere on this forum. But nobody wants to use anaglyph stereo nowadays.
    - You can NO LONGER create a signal for passive polarized stereo, because since Vista nvidia dropped the "Horizontal Span" mode that created one large desktop split into two monitor signals, and because the Windoze Display Manager concept apparently is incapable of emulating this behaviour - it can only do "DualView", which means two separate graphics contexts, making it impossible for a single-window application to use the second monitor. (I won't even mention that nvidia moved the "Horizontal Span" stuff into the Quadro drivers before Vista, removing it from the GeForce drivers altogether, although it doesn't depend on Quadro hardware - which is pretty much identical to GeForce anyway - and would run on GeForce just as easily.)
    - As for the third and only other possible way of creating a stereo signal - an active-shutter 120Hz frame-interleaved signal - I couldn't tell how you would actually create one in Unity. Normally you would do this with quad-buffer stereo, but I have no idea whether you can do something like that with Unity.
    - Yes, there are some exotic devices using line-interlaced stereo etc., which you could also drive with a simple script, but none of these methods are widespread.
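    For the anaglyph and line-interlaced options, the actual merging of the two eye views is simple enough to sketch. Toy pixel math, assuming you already have both views rendered (this is not the forum script itself):

```python
def anaglyph_pixel(left_rgb, right_rgb):
    # Red channel from the left eye, green/blue from the right,
    # for viewing with red/cyan glasses.
    return (left_rgb[0], right_rgb[1], right_rgb[2])

def interlace_rows(left, right):
    # Even rows from the left view, odd rows from the right, as a
    # line-interlaced (e.g. polarized-line) display expects.
    return [l if i % 2 == 0 else r
            for i, (l, r) in enumerate(zip(left, right))]
```

    In Unity you would do the per-pixel part in a shader or on the GPU, of course; the point is just that the signal formats themselves are trivial once you have two views.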

    So what other real options do you really have, except using the nvidia stereo driver? :-(
     
  10. tomvds

    tomvds

    Joined:
    Oct 10, 2008
    Posts:
    1,028
    Just out of interest (as I didn't get it from your post), when you say two distinct cameras, do these two distinct cameras have separate off-axis projection matrices, based on some sort of (driver settings-specified) focal point? Whenever I hear 'separate cameras' without the magic 'off-axis' word, I get worried about stereo quality :p.
     
  11. Wolfram

    Wolfram

    Joined:
    Feb 16, 2010
    Posts:
    261
    I don't know how they implemented it, and I was unable to find any documentation on that so far. Since they are messing with the very intestines of their graphics pipeline anyways, they would be in a unique position to do correct off-axis projection with a "shear" matrix. But maybe they were just lazy and did the simple move-cameras-apart-a-bit-and-rotate-around-zero-parallax-point trick :D
    They call the two parameters that can be adjusted "depth" and "convergence", which is just "eye separation" and "zero parallax", but these terms could be used for both methods. :?
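    The difference between the two methods can actually be checked numerically. In this sketch (generic geometry, not the driver's actual math) both eyes face a screen plane at distance c; the off-axis version keeps the two asymmetric frusta sharing one window, while the toe-in version rotates each camera toward the convergence point:

```python
import math

def offaxis_screen_point(p, eye_x, c):
    # Ray from an eye at (eye_x, 0, 0) through p = (x, y, z), intersected
    # with the shared screen plane z = c (asymmetric-frustum projection).
    x, y, z = p
    t = c / z
    return (eye_x + (x - eye_x) * t, y * t)

def toein_screen_point(p, eye_x, c):
    # Rotate the camera about the y axis so it looks at (0, 0, c), then
    # project symmetrically; this is the "move cameras apart and rotate
    # around the zero-parallax point" shortcut.
    x, y, z = p
    a = math.atan2(eye_x, c)                    # toe-in angle
    xr = (x - eye_x) * math.cos(a) + z * math.sin(a)
    zr = -(x - eye_x) * math.sin(a) + z * math.cos(a)
    t = c / zr
    return (xr * t, y * t)
```

    A point on the convergence plane projects to identical screen coordinates for both eyes with the off-axis version, while the toe-in version scales y differently per eye (keystone distortion), producing exactly the kind of vertical misalignment you can spot in stereo pairs made with rotated cameras.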

    EDIT: I'll try to create a test scene for that tomorrow, and get back to you.
     
  12. monark

    monark

    Joined:
    May 2, 2008
    Posts:
    1,598
  13. nawash

    nawash

    Joined:
    Jan 29, 2010
    Posts:
    166
    Hi all
    Among the methods quoted here, is there one that actually uses quad-buffer rendering (OpenGL ARB GL_STEREO)?
    As far as I understand, the NVidia active-stereo hardware method is based on intercepting API draw calls, NOT on the GL_STEREO "norm".

    What I would need is control over the two 3D parameters from within the Unity application (for per-user settings, not per-application settings).

    Nobody has answered http://forum.unity3d.com/viewtopic.php?t=33304

    I hope my question is clear, if not, please let me know.

    N
     
  14. Wolfram

    Wolfram

    Joined:
    Feb 16, 2010
    Posts:
    261
    It appears the nvidia driver is creating "correct" stereo: the perspective is computed with off-axis rendering (bottom snapshot, created by the driver), instead of a simple rotation (top snapshot, created using the forum script at http://forum.unity3d.com/viewtopic.php?p=102313#102313 ). Note the misalignment of the bar at the top.
     

    Attached Files:

  15. monark

    monark

    Joined:
    May 2, 2008
    Posts:
    1,598
    If you read on in that thread, you will come across some code that uses the shearing method too 8)
     
  16. Wolfram

    Wolfram

    Joined:
    Feb 16, 2010
    Posts:
    261
    Argh, I didn't notice that thread had more than one page... -.- Thanks!
    Here's the scripted off-axis version, which is pretty much identical to the nvidia output (except that I didn't use identical stereo settings, and I also reduced the cyan intensity compared to the forum script):
     

    Attached Files:

  17. GantZ_Yaka

    GantZ_Yaka

    Joined:
    Apr 26, 2013
    Posts:
    10
    I tried my Unity game with nVidia 3D Vision, and on the edges of some models I see a slightly offset "ghost" image. This effect appears in all of my Unity scenes. When I increase the 3D depth amount in the nVidia 3D Vision driver, I see four ghosts for each object. In other, non-Unity games with a large depth amount, only two ghosts appear per object - one for the left eye and one for the right.

    Please help me fix this problem. $3V2B5035.JPG