
Lightfield in VR

Discussion in 'AR/VR (XR) Discussion' started by dannox, Feb 12, 2018.

  1. dannox

    dannox

    Joined:
    Oct 20, 2010
    Posts:
    12
    Hi All,
    is there a way to render a Lightfield in Unity with Oculus or Vive?

    thanks
     
  2. bradweiers

    bradweiers

    Joined:
    Nov 3, 2015
    Posts:
    59
    Could you be a bit more specific?
     
  3. dannox

    dannox

    Joined:
    Oct 20, 2010
    Posts:
    12
Basically, I'd like to have something like this in Unity:

I'd import a lightfield (either a synthetic one or one captured by a camera like a Lytro) and navigate it in Unity with a Camera (VR would really just be an addition on top). Since I'm fairly new to this, I don't know whether you first need a reconstruction of the scene and then rendering, or whether you can render it directly. Another example is this one:
    https://www.roadtovr.com/lytro-pick...light-fields-real-time-rendering-game-engine/

As I understand it, Octane (which was recently released as a rendering engine for Unity as well) should be able to do that, providing real-time rendering. Is that true? Is there a sample or tutorial for using it and importing a lightfield as an example?
     
    Last edited: Feb 13, 2018
  4. pachermann

    pachermann

    Joined:
    Dec 18, 2013
    Posts:
    133
I am searching for a similar solution for the Vive with Octane for Unity.
     
  5. RoguePrimitive

    RoguePrimitive

    Joined:
    Sep 11, 2017
    Posts:
    3
Hard to find info about this Octane lightfield approach. A certain project could definitely make use of this. I saw this old video when I was searching for realistic diamonds in Unity. The developer seems to have disappeared, and I'm now looking for other options.

     
  6. CommotionTheory

    CommotionTheory

    Joined:
    Apr 16, 2018
    Posts:
    2
I'm also interested in finding out how to render/generate light fields in Unity. In my case it would be from a large set of photos covering a full 360° sphere of directions. I can't find any light field plugin/software that is publicly available.
     
  7. noemis

    noemis

    Joined:
    Jan 27, 2014
    Posts:
    76
Same here. I'm finding many tests and approaches, but nothing really usable. To be more specific for @bradweiers: the "Welcome to Light Fields" demo from Google was done in Unity. Oculus, Otoy, Nvidia, Lytro, ... are all talking about lightfields or showing demos. Is there something for Unity we've overlooked, or something on the roadmap?

Lightfields could be especially useful for VR.
     
  8. PatHightree

    PatHightree

    Joined:
    Aug 18, 2009
    Posts:
    297
Otoy is the world's biggest tease; they show spectacular stuff, but hardly any of it ends up becoming available.

    Fortunately, Google is investing in lightfields as well.
    The content from the Welcome to Lightfields app can be viewed in Unity by means of this github repo
    https://github.com/PeturDarri/Fluence-Unity-Plugin

    But the biggest news is Seurat by Google and ILM (frankly I'm surprised there's not more noise about this).
In short, Seurat is a way to capture volumes in a DCC app (Unity/3ds Max) by making numerous RGBD panoramas,
then processing these into a simplified mesh plus lightfield data encoded into a texture.
This data can then be viewed as a lightfield volume in Unity.
AND ALL THE TOOLS ARE FREE, AVAILABLE AND OPEN SOURCE!
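
To give a feel for what the capture step boils down to, here's a rough sketch of the idea. This is NOT the plugin's actual code, just my own illustration; the class and field names are made up, and the depth half of RGBD plus the camera-manifest export are left out.

```csharp
using UnityEngine;

// Hedged sketch of the Seurat capture idea (not the googlevr/seurat-unity-plugin code):
// render the scene from a grid of positions inside the "headbox" the viewer will move in.
// The real pipeline also records per-view depth and a camera manifest, which the seurat
// executable turns into the simplified mesh + texture atlas.
public class HeadboxCaptureSketch : MonoBehaviour
{
    public Camera captureCamera;                           // camera used for the captures
    public Vector3 headboxSize = new Vector3(1f, 1f, 1f);  // volume the viewer can move in
    public int samplesPerAxis = 2;                         // 2x2x2 = 8 capture positions
    public int faceSize = 1024;                            // cubemap face resolution

    [ContextMenu("Capture Headbox")]
    public void CaptureHeadbox()
    {
        var cube = new RenderTexture(faceSize, faceSize, 24)
        {
            dimension = UnityEngine.Rendering.TextureDimension.Cube
        };

        for (int x = 0; x < samplesPerAxis; x++)
        for (int y = 0; y < samplesPerAxis; y++)
        for (int z = 0; z < samplesPerAxis; z++)
        {
            // Spread the capture positions evenly through the headbox volume.
            Vector3 t = new Vector3(x, y, z) / Mathf.Max(samplesPerAxis - 1, 1);
            Vector3 pos = transform.position + Vector3.Scale(t - 0.5f * Vector3.one, headboxSize);

            captureCamera.transform.position = pos;
            captureCamera.RenderToCubemap(cube);           // 360-degree RGB capture at this point
            // ... the real tool would also save this view's colour, depth and pose to disk here.
        }
    }
}
```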

    Anticleric made a great intro video


    Github repo for capturing and rendering seurat meshes in Unity https://github.com/googlevr/seurat-unity-plugin
    Github repo of 3dsMax export plugin https://github.com/superrune/3dsmaxSeuratExport
    Github repo of the processing pipeline https://github.com/googlevr/seurat
    Github repo of prebuilt binaries of the processing pipeline https://github.com/ddiakopoulos/seurat

    Have fun playing with lightfields :D !
     
  9. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Can someone explain how lightfield texture works?


edit: okay, they are literally just doing a kind of photogrammetry trick on cubemap samples to create a decimated mesh ... not what I expected, but clever
     
    Last edited: Jan 27, 2019
  10. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
Okay, after checking the tech a bit more, it seems it could be cool if you're doing a complex world on a budget with constant streaming, since it has a predictable cost.

For example, if we assume uncompressed 4K textures (64 MB each), 9 tiles with 5 tiles of buffer (loading tiles for a diagonal movement, 14 tiles at worst, loading the new tiles before unloading the old ones), we have (rough numbers worked out in the snippet after this list):
- a texture memory budget for the static environment of 896 MB at worst and 576 MB while not loading, with the streaming overhead being 320 MB at worst and 192 MB at best.
- if the textures are compressed, I think it's 25 MB per texture, which brings the worst case to 350 MB and the usual case to 225 MB, with loading at 125 MB at worst and 75 MB at best.
- given a 10,000-poly budget and assuming uncompressed 32-bit xyzuv data, it's less than 3 MB.
- also, since each "tile view" has a complete scene representation, you only pay to render a single tile at a time.
- it will probably suffer from some temporal inconsistency when swapping the visible tile, due to different mesh simplification? IDK
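
Here is the same budget as a throwaway snippet so the arithmetic is easy to check. The 64 MB / 25 MB per-texture sizes and the tile counts are just the assumptions above, not measured figures, and the 3-tile "best case" is my reading of a straight (non-diagonal) move.

```csharp
using System;

// Throwaway arithmetic for the streaming budget in the list above; all inputs are assumptions.
class LightfieldTileBudget
{
    static void Main()
    {
        const int visibleTiles = 9;   // 3x3 neighbourhood of tile views
        const int diagonalLoad = 5;   // new tiles streamed in for a diagonal move (worst case)
        const int straightLoad = 3;   // new tiles for a straight move (assumed best case)
        const int worstTiles = visibleTiles + diagonalLoad; // 14: new tiles arrive before old ones unload

        foreach (var (label, mb) in new[] { ("uncompressed 4K", 64), ("compressed 4K", 25) })
        {
            Console.WriteLine($"{label}: steady {visibleTiles * mb} MB, worst {worstTiles * mb} MB, " +
                              $"streaming overhead {straightLoad * mb}-{diagonalLoad * mb} MB");
        }

        // Geometry: ~10,000 verts per tile, xyz + uv = 5 floats of 4 bytes each.
        double meshMB = 10000 * 5 * 4 / 1e6;
        Console.WriteLine($"mesh: {meshMB:F1} MB per tile, {worstTiles * meshMB:F1} MB across all 14 tiles");
    }
}
```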
     
  11. PatHightree

    PatHightree

    Joined:
    Aug 18, 2009
    Posts:
    297
  12. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Thanks!

I understand how lightfields work in theory.
Basically, the brute-force way is that each screen pixel holds an associated "cubemap" (or equivalent) that is queried using the view vector (eye to screen), but then the data is screen resolution * cubemap resolution at worst. We can probably compress the data further by bundling rays that return similar data, or by reprojecting cubemaps that are close enough and don't have disjoint parallax data (i.e. occlusion).
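
To make that brute-force idea concrete, here is a rough sketch under the assumptions above (my own illustration, not how Seurat or any shipping renderer stores data; the class and method names are made up):

```csharp
using UnityEngine;

// Hedged sketch of the brute-force storage described above: every surface pixel keeps its
// own small table of radiance indexed by view direction, so storage = screen res * angular res.
public class BruteForceLightfieldSketch
{
    // radiance[x, y, thetaBin, phiBin] : colour seen at pixel (x, y) from that direction.
    private readonly Color[,,,] radiance;
    private readonly int angularBins;

    public BruteForceLightfieldSketch(int width, int height, int angularBins)
    {
        this.angularBins = angularBins;
        radiance = new Color[width, height, angularBins, angularBins]; // huge: w * h * bins^2 samples
    }

    // Look up the colour for pixel (x, y) as seen from eyePos.
    public Color Sample(int x, int y, Vector3 pixelWorldPos, Vector3 eyePos)
    {
        Vector3 dir = (pixelWorldPos - eyePos).normalized;        // eye -> pixel view vector
        float theta = Mathf.Acos(Mathf.Clamp(dir.y, -1f, 1f));    // polar angle, 0..pi
        float phi = Mathf.Atan2(dir.z, dir.x) + Mathf.PI;         // azimuth, 0..2pi
        int t = Mathf.Min((int)(theta / Mathf.PI * angularBins), angularBins - 1);
        int p = Mathf.Min((int)(phi / (2f * Mathf.PI) * angularBins), angularBins - 1);
        return radiance[x, y, t, p];
    }
}
```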

Which means that what I'm asking about is the "???" part of the plan, i.e. "They're aligned and compressed in a custom dataset file that's read by special rendering software we've implemented as a plug-in for the Unity game engine.", as described in the article you linked!

It turns out it was just a decimated mesh reconstructed by photogrammetry inside a view volume. I thought they had a new way to store the data (by figuring out how to solve the parallax-occlusion part of the storage), but then again, that's clever and probably enough. I was thinking they would have an insight where they pick a reference cubemap and only encode the delta parallax (the extra data hidden from the reference) of the other cubemaps.

But anyway, they made me think about the problem a bit longer, so it's a win; I have a better understanding of the issue I was expecting them to solve (or at least I can express it more easily).
     
  13. PatHightree

    PatHightree

    Joined:
    Aug 18, 2009
    Posts:
    297
Dude, you're missing a big part of the picture: the geometry is NOT the end result of the process.
It serves as a base off of which the shader + texture data work their parallax magic.

Go load up the seurat-unity-plugin project and add the Seurat output data from this reddit thread.
It contains links to 2 sets of test data that somebody made of the Viking Village sample scene, a tutorial video, and an APK of a mobile build.
    Go see for yourself and be amazed.

     
  14. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
I did watch those videos; I guess I need to try it myself, but I don't see much in them. It looks like standard texture projection from a (set of) viewpoint(s).

The big idea (it seems to me) is that a UV is enough to encode the "delta" of the occlusion, and a mesh IS the occlusion, so there's no need to get fancier. It ends up removing all unnecessary information for the area. I mean, I AM amazed by the simplicity of the solution.

It doesn't help that the linked videos seem to use reduced shaders compared to the original Blacksmith scene.

However, if I'm wrong in my assessment, I would like to see it; I mean, I'm interested in the technique, I just can't run it right now.
     
  15. PatHightree

    PatHightree

    Joined:
    Aug 18, 2009
    Posts:
    297
Watching a lightfield video is like watching a VR video; it does not convey the experience.
I found a full capture-to-viewing pipeline which is free, open source, and Unity-compatible, and you react with skepticism.
Please show me more worthy alternatives.
     
  16. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
WHAT? Where is the skepticism? I have been saying it's great and clever; I'm just trying to understand the technique behind it, that's it! lol

Sorry if I confused you
     
  17. PatHightree

    PatHightree

    Joined:
    Aug 18, 2009
    Posts:
    297
    Sorry man, I totally read that wrong
    My bad
     
  18. fuzzy3d

    fuzzy3d

    Joined:
    Jun 17, 2009
    Posts:
    228
    https://www.presenzvr.com/
I think the results look better than Seurat...
...but I have 3ds Max 2013 and V-Ray 3.6... not supported/old versions... I cannot try it...

Are OTOY's lightfields RIP?