
Resolved Perspective Rendering Distortion On Side Of Wide Screens. Solution New Scriptable Render Pipeline?

Discussion in 'Graphics Experimental Previews' started by AlanMattano, Feb 15, 2018.

  1. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
    SOLUTION: I asked NVIDIA for this improvement here; please vote:
    https://www.nvidia.com/en-us/geforc...e-new-correct-3d-perspective-projection-not-/


    SOLUTION UNITY
    HDRP

    URP: https://docs.unity3d.com/Packages/c...manual/Post-Processing-Panini-Projection.html
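    A minimal sketch of turning this on from a script, assuming a URP project with a global Volume already in the scene (the PaniniProjection override and its distance parameter come from the URP post-processing package; the script and field names here are otherwise my own):

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering;
        using UnityEngine.Rendering.Universal;

        // Sketch: enables URP's Panini Projection post-process on an existing
        // global Volume to counter the stretching at the edges of wide screens.
        public class EnablePanini : MonoBehaviour
        {
            [SerializeField] Volume volume; // assumed: a global Volume already in the scene

            void Start()
            {
                // Fetch the Panini Projection override, adding it if it is missing.
                if (!volume.profile.TryGet(out PaniniProjection panini))
                    panini = volume.profile.Add<PaniniProjection>();

                // 0 = regular rectilinear projection, 1 = full Panini compression.
                panini.distance.overrideState = true;
                panini.distance.value = 0.7f;
            }
        }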

    ___________________________________________________________________________

    PROBLEM DESCRIPTION

    The perspective rendering is distorted at the sides of a widescreen monitor, as shown in this screenshot:


    The perspective of these 1 m cubes looks quite wrong in the Game view; they are extremely distorted, as is the rest of the scene.
    [Attached image: upload_2018-2-14_20-54-27.png]

    Sometimes zooming in or changing the camera's FOV is not an option.

    This perspective distortion is produced because flat rectangular clipping planes are used to start and stop rendering the scene. Rendering starts and finishes at a fixed distance along the camera's forward axis, which makes the middle rendering ray (through the center of the camera) shorter than the rendering rays at the sides.
    [Attached image: upload_2018-2-14_20-58-54.png]

    I'm not talking about the culling advantage of a rounded, spherical clipping surface. I'm talking about changing the distance at which each ray starts and stops rendering, in order to resolve the perspective distortion.

    The near clipping distance is different for each ray because it is measured at a fixed value along the camera's forward axis. With a fixed start and end along that axis, a ray at the side of the screen has to travel farther and is much longer than the ray in the center of the screen.
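    To put a rough number on it (my own illustration): with a flat plane at a fixed forward distance and a horizontal FOV of 100 degrees, the edge ray has to travel 1 / cos(50°) ≈ 1.56 times farther than the center ray before it reaches that plane. A tiny sketch of that ratio:

    Code (CSharp):
        using UnityEngine;

        // Illustration only: how much longer the edge ray is than the center ray
        // when both are cut off by a flat plane at the same forward distance.
        public static class FlatPlaneRayStretch
        {
            public static float EdgeToCenterRatio(float horizontalFovDegrees)
            {
                float halfAngle = horizontalFovDegrees * 0.5f * Mathf.Deg2Rad;
                // Center ray length to the plane = d; edge ray length = d / cos(halfAngle).
                return 1f / Mathf.Cos(halfAngle);
            }
        }
        // EdgeToCenterRatio(60f)  ≈ 1.15
        // EdgeToCenterRatio(100f) ≈ 1.56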

    These two game objects at the same distance from the camera are rendered as if they were in different positions.

    [Attached image: upload_2018-2-14_21-5-14.png]

    Instead, they should be rendered as if they were at the same distance from the camera.
    With fixed near and far rendering distances measured from the camera, every ray would have the same rendering length, so there would be clipping spheres instead of clipping planes.

    [Attached image: upload_2018-2-14_21-11-6.png]

    I do not think this solution is computationally more expensive: it renders a smaller overall volume, it uses fixed distance values, LODs would not be triggered just by rotating the camera, and game objects would no longer pop in and out of the far clipping zone as the camera turns.

    Here is a link to old feedback: Solution please Vote

    Can the new Scriptable Render Pipeline be the solution? How?

    I know this is a big topic for a novice programmer with intermediate skills.
    Can you point me to where to look or where to start? Or add code here so we can discuss it.
     
    Last edited: Nov 9, 2020
    oAzuehT, AntonioModer and Prodigga like this.
  2. Prodigga

    Prodigga

    Joined:
    Apr 13, 2011
    Posts:
    1,123
    I hope this is possible; something like this would really show how flexible the new SRP is!
     
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    Not really. This is a limitation imposed by the hardware itself. All modern consumer GPUs use rasterization to render triangles. Rasterization uses a pinhole camera model. A pinhole camera has a flat perspective plane.

    The solutions are:

    Don’t use rasterization.
    Raytracing is possible on modern GPUs, but rendering arbitrary mesh data remains expensive and difficult, and most of today's tools and technology are centered around the creation of mesh-based assets. Real-time raytracing at a quality level and resolution similar to rasterization remains an unsolved problem in computer graphics. The closest that's been publicly shown is the OTOY Brigade Engine. That requires multiple Nvidia Titan-class GPUs to run 1080p at 30 fps, and is visually very noisy.

    Certainly you could use SRP to make a raytracing-based pipeline, but you could do that just as easily before SRP, since almost none of the new features added with SRP would help with this.

    Image effect distortion.
    You can render the image at a higher resolution and use an image effect to distort the image into the perfect spherical projection you’re after. VR does this already, though it does it to counter the effects of distortion from the headset’s lenses. For FOVs of < 140 degrees or so this is a straightforward solution that doesn’t require anything particularly special.
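    As a rough illustration of what such an image effect does (my own CPU-side sketch; in practice the same remap runs per pixel in a full-screen shader), here is the lookup from an output fisheye pixel back into an oversampled rectilinear source image:

    Code (CSharp):
        using UnityEngine;

        // Sketch: for each pixel of a fisheye (equidistant) output image, find the UV
        // to sample in a normal rectilinear render. In a real effect this runs per
        // pixel in a full-screen shader; the math is the same.
        public static class FisheyeRemap
        {
            // uvOut in [0,1]^2, fisheyeFovDeg = FOV of the curved output image,
            // srcFovDeg = horizontal FOV the rectilinear source was rendered with.
            // Returns true and the source UV if the direction lands inside the source image.
            public static bool OutputToSourceUV(Vector2 uvOut, float fisheyeFovDeg,
                                                float srcFovDeg, float srcAspect,
                                                out Vector2 uvSrc)
            {
                uvSrc = Vector2.zero;

                // Output pixel -> angle from the view center (equidistant fisheye).
                Vector2 p = (uvOut - new Vector2(0.5f, 0.5f)) * 2f;     // [-1,1]
                float r = p.magnitude;
                if (r > 1f) return false;                               // outside the fisheye circle
                float theta = r * 0.5f * fisheyeFovDeg * Mathf.Deg2Rad; // angle off axis
                float phi   = Mathf.Atan2(p.y, p.x);

                // Angle -> view-space direction (+Z forward).
                Vector3 dir = new Vector3(Mathf.Sin(theta) * Mathf.Cos(phi),
                                          Mathf.Sin(theta) * Mathf.Sin(phi),
                                          Mathf.Cos(theta));
                if (dir.z <= 0f) return false;                          // behind the pinhole camera

                // Project the direction with the rectilinear (pinhole) source camera.
                float tanHalfH = Mathf.Tan(0.5f * srcFovDeg * Mathf.Deg2Rad);
                float tanHalfV = tanHalfH / srcAspect;
                float x = (dir.x / dir.z) / tanHalfH;                   // [-1,1] inside the source
                float y = (dir.y / dir.z) / tanHalfV;
                if (Mathf.Abs(x) > 1f || Mathf.Abs(y) > 1f) return false;

                uvSrc = new Vector2(x, y) * 0.5f + new Vector2(0.5f, 0.5f);
                return true;
            }
        }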

    Multi-view / cube map.
    Similar to the image effect distortion, but using multiple views to reduce the amount of oversampling, or to allow FOVs over 170 degrees without ridiculously high-resolution renders. Some older demoscene productions use this technique, as does FishEyeQuake. Some high-end VR rendering uses this as an optimization as well. This can be done with the existing rendering paths and doesn't need SRP, though it could potentially be done more efficiently with SRP.
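    A minimal sketch of the C# side of that idea (my own example, not a complete effect): render the scene into a cubemap every frame and hand it to a hypothetical full-screen material, which would then resample it per pixel with whatever wide-angle projection you want (for example using the same direction math as the fisheye sketch above).

    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Rendering;

        // Sketch: renders the scene into a cubemap so a full-screen material can
        // resample it with any projection (fisheye, Panini, >170 degree FOV, ...).
        // "fullScreenMaterial" and its "_SceneCube" property are placeholders.
        public class CubemapCapture : MonoBehaviour
        {
            [SerializeField] Camera captureCamera;        // camera that renders the scene
            [SerializeField] Material fullScreenMaterial; // hypothetical resampling material
            [SerializeField] int faceSize = 1024;

            RenderTexture cube;

            void OnEnable()
            {
                cube = new RenderTexture(faceSize, faceSize, 24)
                {
                    dimension = TextureDimension.Cube
                };
            }

            void LateUpdate()
            {
                // 63 = render all six faces; fewer faces are enough for smaller FOVs.
                captureCamera.RenderToCubemap(cube, 63);
                fullScreenMaterial.SetTexture("_SceneCube", cube);
            }

            void OnDisable()
            {
                if (cube != null) cube.Release();
            }
        }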

    The biggest problem with all of these is they greatly increase the cost of rendering.
     
    Last edited: Feb 16, 2018
    AlanMattano, one_one and elbows like this.
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    Here’s an implementation of the multi-view technique.
    https://github.com/charlesveasey/UnityDomemaster

    I did forget one other method: vertex displacement. If your meshes have a high enough poly count, or you use tessellation, you can distort the meshes themselves to match the expected distortion. I've seen some people do this, and it can be quite fast, but it might not look as good unless the tessellation is unreasonably high, and it brings a ton of other issues with it. It's similar to the various round-world shaders out there.

    Here’s an example, again for the purposes of VR pre-distortion.
    https://www.gamasutra.com/blogs/Bri...tion_Correction_using_Vertex_Displacement.php
     
    Last edited: Feb 16, 2018
    one_one and elbows like this.
  5. rsodre

    rsodre

    Joined:
    May 9, 2012
    Posts:
    229
    None of these options are really good in terms of performance or quality.
    The best solution would be to have a camera where we can define the ray for each pixel.
    I always wanted a camera like that in Unity, that would be amazing.
    Can we do that with SRP?
     
    AlanMattano likes this.
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    Correct.

    That's called raytracing. See my above comments on that.
     
    Harinezumi likes this.
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    I'll reiterate some of the information above and add a bit more detail.

    All modern consumer GPUs are designed around rasterization. Rasterization as it exists on modern GPUs is a technique for quickly and efficiently calculating the pixel coverage of a triangle (i.e. knowing what pixels to render). Part of the efficiency comes from being able to make assumptions about the kind of projection being used, specifically a projection matrix which has those flat clipping planes you described above. If you calculate the position of each vertex of a triangle using a projection matrix, you can easily calculate all of the pixels that the triangle covers, but this only works if the projection matrix is consistent across the entire screen.
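    A tiny sketch of that per-vertex step (my own example): one projection matrix takes a view-space vertex to normalized device coordinates and then to a pixel position, and the coverage test only works because that same matrix applies to every pixel on screen.

    Code (CSharp):
        using UnityEngine;

        // Sketch: the per-vertex part of rasterization's coverage test. A single
        // projection matrix maps a view-space vertex to normalized device
        // coordinates; the same matrix must apply to the entire screen.
        public static class ProjectVertex
        {
            public static Vector3 ToNdc(Matrix4x4 projection, Vector3 viewPos)
            {
                Vector4 clip = projection * new Vector4(viewPos.x, viewPos.y, viewPos.z, 1f);
                return new Vector3(clip.x, clip.y, clip.z) / clip.w; // perspective divide
            }

            public static Vector2 ToPixel(Vector3 ndc, int width, int height)
            {
                // NDC [-1,1] -> pixel coordinates.
                return new Vector2((ndc.x * 0.5f + 0.5f) * width,
                                   (ndc.y * 0.5f + 0.5f) * height);
            }
        }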

    If you want an arbitrary ray direction for each pixel, with rasterization you would need to change the projection matrix used for each pixel, and recalculate every single vertex in the scene with this new matrix to test for coverage. This would be significantly slower to render than traditional rasterization.

    The alternative is raytracing (or path tracing, which for the purposes of this topic is the same thing). These work by tracing a line from the camera out until it hits a triangle, basically just like using a raycast in Unity C#. This is much, much slower if done naively because, like arbitrary ray directions with rasterization, it requires testing every single triangle in the scene for every ray. There are ways to speed this up with various kinds of spatial partitioning setups. However, generating the spatial partitioning data in the first place can itself be slow. It also means a lot more data is needed to render the scene. In the end, even with all of the known optimizations used today with raytracing, the tracing by itself against a highly optimized spatial partition is still slower than rasterization.
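    To make that cost concrete, here's a deliberately naive sketch (my own illustration, with a made-up Triangle struct): every ray tests every triangle, which is exactly the O(pixels × triangles) work that spatial partitioning structures exist to avoid.

    Code (CSharp):
        using UnityEngine;

        // Naive raytracing sketch: O(pixels * triangles) with no acceleration structure.
        public static class NaiveTracer
        {
            public struct Triangle { public Vector3 a, b, c; }

            // Closest hit distance along the ray, or infinity if nothing is hit.
            public static float TraceClosest(Vector3 origin, Vector3 dir, Triangle[] tris)
            {
                float closest = float.PositiveInfinity;
                foreach (var t in tris)
                    if (RayTriangle(origin, dir, t, out float d) && d < closest)
                        closest = d;
                return closest;
            }

            // Möller–Trumbore ray/triangle intersection.
            static bool RayTriangle(Vector3 o, Vector3 d, Triangle t, out float dist)
            {
                dist = 0f;
                Vector3 e1 = t.b - t.a, e2 = t.c - t.a;
                Vector3 p = Vector3.Cross(d, e2);
                float det = Vector3.Dot(e1, p);
                if (Mathf.Abs(det) < 1e-8f) return false; // ray parallel to triangle
                float inv = 1f / det;
                Vector3 s = o - t.a;
                float u = Vector3.Dot(s, p) * inv;
                if (u < 0f || u > 1f) return false;
                Vector3 q = Vector3.Cross(s, e1);
                float v = Vector3.Dot(d, q) * inv;
                if (v < 0f || u + v > 1f) return false;
                dist = Vector3.Dot(e2, q) * inv;
                return dist > 0f;
            }
        }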

    Modern GPUs have gotten a lot faster over the years, and as such GPU-accelerated raytracing has started to become more plausible. This is especially true now that we have things like OpenCL and compute shaders. Basically, this lets you bypass the rasterization-specific hardware completely and use the GPU's raw number-crunching power to do those expensive spatial partition traversals and ray/triangle intersections. It also means you don't get to make use of one of the biggest benefits of GPUs, which is that highly optimized rasterization-specific hardware.


    If you really want that kind of rendering in Unity, check out the OTOY Octane Renderer for Unity. It’s less of a rendering path that runs in Unity and more of an external application that can read data out of Unity and convert it into what it needs to render in their path tracer.
     
    one_one and AlanMattano like this.
  8. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    The multi-monitor-supporting games shown off (badly) in that video aren't doing anything terribly fancy. There's just a unique camera view per monitor. There's a ton of "stretching" on the sides of the 32:9 monitors they're using that's unavoidable without the above-mentioned methods.

    Also, since real-time raytracing is now a "thing" people are talking about with the Nvidia RTX GPUs and DirectX Raytracing APIs, it should be noted that there still aren't any truly raytraced games. Everything shown so far is a hybrid raster/raytrace solution where the main view is still rasterized, and only the reflections and/or ambient lighting, shadows, AO, etc. are being raytraced. So we're still confined to the pinhole camera model. I would guess we're still a hardware generation or two away from fully raytraced solutions being plausible at modern resolutions.
     
    AlanMattano and bac9-flcl like this.
  10. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
    @bgolus And why doesn't VR have this issue?
     
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    It does have this issue.

    The Rift, Vive, and PSVR all recommend rendering to a resolution of at least ~1.4x the actual physical display, and the rendered image is then distorted to counter the lenses' barrel distortion, so the image you see does not appear distorted. This maximizes the panel's resolution. All of the mobile VR solutions do the same distortion, though most render at less than the panel's display resolution due to GPU performance limitations.

    VR applications using Nvidia VRWorks along with the Oculus Go and Quest use some form of multi-projection rendering. Essentially the equivalent of rendering to multiple cameras with different resolutions and projection matrices, but with only a single draw call issued from the CPU.
    https://developer.nvidia.com/vrworks/graphics/multiresshading
    https://developer.nvidia.com/vrworks/graphics/lensmatchedshading
    https://developer.oculus.com/blog/optimizing-oculus-go-for-performance/

    However, that is for displays that are really only doing 90~110 degree FOVs; both Oculus and HTC quote their diagonal FOV, which no gamer ever uses when talking about FOV. For the super-wide HMDs, like the StarVR or Pimax, it really is like a multi-monitor setup, with additional cameras pointing to the left and right to cover the periphery. Though Nvidia has a solution for that too.
    https://developer.nvidia.com/vrworks/graphics/multiview

    For all of these techniques, some amount of the distortion needed to correct for the lens barrel distortion is done as part of the rendering, but each "view" is still a pinhole camera model, and another layer of post-processing is needed to convert that into essentially the exact same image you would have gotten before, just with less wasted rendering time.

    AMD has talked about similar tech, but I don't know of any titles that have used it like this. The only game I know of that uses AMD's LiquidVR multi-view rendering renders both eyes in a single draw call, but doesn't do anything fancy with the projections like the above.

    The OSVR is an odd duck in that no special image post-processing is done; instead it uses additional optical layers to do the distortion, but this means most of the panel's display resolution is wasted. The image on the HMD's display is the same image you'd see on your monitor. Old VR HMDs from the '70s through the '90s sometimes did something very similar, but employed an additional hidden CRT with a lens and camera in front of it to do the same pre-distortion before piping that camera's view into the HMD.
     
    Last edited: Apr 10, 2019
    AlanMattano likes this.
  12. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
  13. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
    @bgolus I made a request to NVIDIA, hoping that is the right place for the feature request. We need to push for a fix.
    I presume there could also be an improvement in frames per second, but I'm not sure.
     
  14. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,336
    This isn't something that can "just get added". This is a fundamental limitation of rasterization and projection matrix math that is the standard for real time 3D rendering.

    "Can you add support for non-pinhole camera models" is the same request as "can you fundamental replace the entirety of hardware accelerated 3D rendering from the ground up with new technology that doesn't exist?"

    There are multi-decade research projects into this kind of thing that have basically ended with "it's cheaper to use rasterization and warp the results".

    The closest we have is the recent advances in hardware raytracing, which honestly have more to do with advances in spatial acceleration structure generation than with the raytracing itself. But we're still a few generations of GPUs away from being able to use raytracing to entirely replace traditional rendering.
     
    adamgolden and AlanMattano like this.
  15. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
  16. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    Correct. Render with a pinhole camera, then distort the result. Look up Panini projection:

    http://paulbourke.net/dome/fish2pannini/

    Basically, spherical projections would require the GPU to be able to rasterize curved triangles, which is impossible at a fundamental level, as @bgolus already explained, unless you render fully using raytracing. Ergo, it's much more efficient to just render at a slightly higher resolution and then distort the resulting image. Heck, Unreal 4 even has this built-in:

    https://docs.unrealengine.com/en-US/Engine/Rendering/PostProcessEffects/PaniniProjection/index.html
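    For the curious, the forward mapping described on that page boils down to a few lines; a sketch (my own, with d as the Panini "distance" parameter):

    Code (CSharp):
        using UnityEngine;

        // Sketch of the general Panini projection: maps a view direction
        // (azimuth/elevation in radians) to image-plane coordinates.
        public static class Panini
        {
            // d = 0 gives ordinary rectilinear projection, d = 1 the classic Panini.
            public static Vector2 Project(float azimuth, float elevation, float d)
            {
                float s = (d + 1f) / (d + Mathf.Cos(azimuth));
                float x = s * Mathf.Sin(azimuth);
                float y = s * Mathf.Tan(elevation);
                return new Vector2(x, y);
            }
        }
        // Vertical lines stay straight; the horizontal FOV is compressed toward the edges.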

     
    AlanMattano likes this.
  17. rz_0lento

    rz_0lento

    Joined:
    Oct 8, 2013
    Posts:
    2,361
    AlanMattano and Neto_Kokku like this.
  18. AlanMattano

    AlanMattano

    Joined:
    Aug 22, 2013
    Posts:
    1,501
  19. Neto_Kokku

    Neto_Kokku

    Joined:
    Feb 15, 2018
    Posts:
    1,751
    There you have it, Panini for everyone.

     
    AlanMattano likes this.