Is this ray tracing that nVidia is showing us really that good or is it just marketing?

Discussion in 'General Graphics' started by SiriusT987, Aug 23, 2018.

  1. SiriusT987

    Joined:
    Feb 28, 2017
    Posts:
    67
    Okay, so, I watched a couple of videos and I thought hey that's pretty cool, but then my inner wannabe game dev started to question this ray tracing thing.
    I'm not an expert, I don't know how lighting and reflections work fully, but I have a minimal understanding.

    In this video
    when they use ray tracing, the flame shows up in the character's eye; if it's off, there is no flame, because they use SSR, which only renders reflections of what's on screen. My question is: why don't they use real-time reflection probes together with SSR? It's probably a lot less accurate, but couldn't all of this be done with reflection probes? To me it looks like they just disabled the flame in this demo so it wouldn't reflect. I don't really think that this combo is more expensive than ray tracing either. (I've sketched the kind of probe setup I mean at the end of this post.)

    Another one is the Metro:Exodus one.
    The demo just sucks. That's not how I see my room lit at all. The camera should adapt to the darker scene like an eye does. Actually, "RTX off" looks way more realistic. As far as I know you can control the bounce count and intensity of the light. And why is using an "artificial" light source bad? Don't you need that for shadows and lens flares anyway?

    Please share your thoughts about this. Am I dumb, and should I learn more about lighting and reflections? Because I really don't get the hype they are trying to build. To me it seems like a worse, more expensive, but admittedly more accurate alternative to what we currently have.
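    Here's the kind of setup I mean; a minimal, untested sketch using Unity's standard ReflectionProbe API (the resolution, size, and refresh settings are just my assumptions for illustration):

    ```csharp
    using UnityEngine;
    using UnityEngine.Rendering;

    // Minimal sketch: a low-res realtime reflection probe refreshed one
    // cubemap face per frame, as a cheap fallback for things SSR misses
    // (like a flame that is off screen).
    public class CheapRealtimeProbe : MonoBehaviour
    {
        void Start()
        {
            var probe = gameObject.AddComponent<ReflectionProbe>();
            probe.mode = ReflectionProbeMode.Realtime;
            probe.refreshMode = ReflectionProbeRefreshMode.EveryFrame;
            // Spread the six cubemap face renders over several frames.
            probe.timeSlicingMode = ReflectionProbeTimeSlicingMode.IndividualFaces;
            probe.resolution = 32;                   // deliberately tiny
            probe.size = new Vector3(10f, 10f, 10f); // volume the probe covers
        }
    }
    ```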
     
  2. tspk91

    Joined:
    Nov 19, 2014
    Posts:
    130
    It really is that good. And the machine learning even more so than the raytracing: thanks to the tensor cores we can get by with far fewer samples per pixel, so we get the benefits of raytracing ten years earlier than previously thought. Basically the cores "get a feeling" for how a noisy shadow/reflection/etc. should look once denoised. Not to mention the AI supersampling and AA, which are incredible too.

    Raytracing bypasses a lot of the hacks game engines currently use to simulate shadows, reflections, global illumination, and more. This will help game development once it's the norm: no more baking, placing probes, adjusting shadows...

    The BF:V reflections would probably be more costly and less accurate with realtime probes. Each probe is like six cameras, if I'm not mistaken, each doing the full render loop, and that takes a lot of power (not to mention you would need a lot of realtime probes around the scene to emulate the demo). RTX raytracing is done on separate cores (so it can run in parallel with rasterization of the screen) and against a specially laid out scene data format optimized for it, so that ray queries are cheap and can be reused (see the sketch below).

    The Metro:Exodus example is just not very good; as you say, the exposure is way too low.
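    To give an idea of what that specially laid out scene data format is: the rays are tested against a bounding volume hierarchy (BVH), so whole chunks of the scene are skipped whenever a ray misses their bounding box. This is my own illustrative sketch of the idea, not NVIDIA's actual implementation; the node layout and names are invented:

    ```csharp
    using System;
    using System.Numerics;

    // Illustrative only: a tiny BVH traversal showing why a pre-built scene
    // layout makes ray queries cheap. Node layout and names are invented.
    class BvhNode
    {
        public Vector3 BoundsMin, BoundsMax; // axis-aligned bounding box
        public BvhNode Left, Right;          // null for leaves
        public int TriangleIndex = -1;       // leaf payload (one triangle here)
    }

    static class BvhTrace
    {
        // Standard "slab test": does the ray hit this node's box?
        // invDir is the componentwise reciprocal of the ray direction.
        static bool HitsBox(Vector3 origin, Vector3 invDir, Vector3 min, Vector3 max)
        {
            Vector3 t0 = (min - origin) * invDir;
            Vector3 t1 = (max - origin) * invDir;
            Vector3 tNear = Vector3.Min(t0, t1);
            Vector3 tFar = Vector3.Max(t0, t1);
            float enter = MathF.Max(tNear.X, MathF.Max(tNear.Y, tNear.Z));
            float exit = MathF.Min(tFar.X, MathF.Min(tFar.Y, tFar.Z));
            return enter <= exit && exit >= 0f;
        }

        // Only descend into children whose boxes the ray actually hits, so
        // most of the scene's triangles are never even looked at.
        public static void Traverse(BvhNode node, Vector3 origin, Vector3 invDir,
                                    Action<int> onLeaf)
        {
            if (node == null || !HitsBox(origin, invDir, node.BoundsMin, node.BoundsMax))
                return;
            if (node.TriangleIndex >= 0) { onLeaf(node.TriangleIndex); return; }
            Traverse(node.Left, origin, invDir, onLeaf);
            Traverse(node.Right, origin, invDir, onLeaf);
        }
    }
    ```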
     
    SiriusT987 likes this.
  3. BrianG12

    Joined:
    Aug 23, 2018
    Posts:
    2
    I think it helps. Even if it is (maybe) a marketing gimmick, nVidia does have good developers and a strong development team, and even if it's not good now, it will be a hit later down the line.
     
    Last edited: Aug 24, 2018
    SiriusT987 likes this.
  4. Stardog

    Joined:
    Jun 28, 2010
    Posts:
    1,886
    It's not worse than when it's off, but they are obviously hyping their tech by making the "Off" version look much worse than it would actually look in a game.

    They have always done this to confuse people who don't have graphics knowledge.
     
    SiriusT987 likes this.
  5. hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Well, it won't dramatically change visuals much, as developers have already gotten most of the way there with approximations. On a flat screen, if you were not shown the side-by-side comparisons, you really wouldn't miss it. In fact, I'm pretty sure you could fool most people into thinking something is raytraced anyway.

    However, where it will shine the most is where it currently can't really be used fully: in VR. VR absolutely requires a very stable image and proper reflections, and that is exactly where these GPUs are going to shine.

    Not so much on a screen a few feet away with lots of action going on. Seriously, it's not going to be much different there. These examples are exaggerated to hell for obvious reasons.

    The AA, denoising, GI, reflections, shadows etc - all get quite a bit lost in a real game on a TV set. With VR though... game changer (if it can render fast enough).

    Having said all that, it's not just for action games. I can see many uses for all or part of the technology in many game types and fields.
     
    JamesThornton and SiriusT987 like this.
  6. Torbach78

    Joined:
    Aug 10, 2013
    Posts:
    296
    I would imagine this eliminates static reflection/shadow maps; they become dynamic and no longer need to be loaded into memory. Perhaps improved volumetric lighting effects too.
     
    SiriusT987 and hippocoder like this.
  7. SiriusT987

    Joined:
    Feb 28, 2017
    Posts:
    67
    That is what I thought too. I said it's worse because, as far as I know, only the new Turing cards support it, and it's a really big hit on performance, so I don't know how it will work on the 2050 or even the 2060, let alone the 1030's successor.
     
  8. SiriusT987

    Joined:
    Feb 28, 2017
    Posts:
    67
    Thanks for the info. :D
    I've found this footage of the new Tomb Raider:
    http://www.pcgameshardware.de/Grafi...ormance-in-Shadow-of-the-Tomb-Raider-1263244/
    The 2080 Ti, the flagship consumer card, wasn't that happy with the ray tracing. BTW, this is only at 1080p. (The 2080 is supposed to be 50% faster than the 1080.) That is why I said that using real-time reflection probes could be an alternative.
    You would need one for every character/vehicle, and like 5 for that street, plus one for every car if you really want to go all out. I think that would work decently; not great or accurate, but it's something. You could also exclude some smaller objects and reduce the resolution of the reflections. They clearly use a low-res one in the BF demo; it looks like a 32x32 XD
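    To put rough numbers on that (my own back-of-envelope math, using the six-renders-per-probe figure mentioned earlier in the thread, with invented probe counts):

    ```csharp
    // Hypothetical probe budget for a street scene like the BF:V demo.
    int streetProbes  = 5;   // rough guess from this post
    int vehicleProbes = 10;  // one per character/vehicle; invented count
    int facesPerProbe = 6;   // a cubemap probe renders the scene once per face

    // Naive worst case: every probe refreshes fully every frame.
    int rendersPerFrame = (streetProbes + vehicleProbes) * facesPerProbe; // 90

    // With face time-slicing (one face per probe per frame) it drops to 15,
    // at the cost of reflections lagging up to six frames behind.
    int slicedPerFrame = streetProbes + vehicleProbes; // 15
    ```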
     
  9. SiriusT987

    Joined:
    Feb 28, 2017
    Posts:
    67
    I completely forgot about VR. Maybe because I haven't even tried VR :( The thing is, VR optimistically needs like 5 years to get to the graphics we had 2-3 years ago. If you add ray tracing on top of that, it doesn't look like we will see it soon. Unless they somehow use pre-rendered "videos" on the VR headset's screens while you can still control the game. That would be really cool; it would essentially be an interactive movie :D
     
  10. jcarpay

    Joined:
    Aug 15, 2008
    Posts:
    558
    Basically, raytracing is the holy grail, period. Everything else is a hacky approximation.
    Don't expect 60 FPS realtime raytracing from the first generation of RTX cards.
    But I very much welcome the direction NVIDIA has taken with their raytracing approach. The future looks bright.
     
  11. hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Over the years I've allowed myself to contemplate each new feature on the hardware rendering scene as it arrived, and so far it's always been a case of "yes, I could use that new tech, but why burn performance on it when I could have so many more particles, or so much more of other things that make a bigger impact on the visuals?" So unless the raytracing part comes at an independent cost, I would probably prefer to use it as an optimisation for existing pipelines.
     
  12. jcarpay

    Joined:
    Aug 15, 2008
    Posts:
    558
    I agree. However, I'm sure the new RTX series will outperform the previous generation when going head to head without the raytracing pipeline. I see the raytracing part as an introduction to a future rendering platform that eventually will provide unmatched visual fidelity combined with great framerates. This will take some years though. But I guess NVIDIA decided the time was right for introduction.
     
  13. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,229
    As someone who worked on real-time GPU-accelerated raytracing 10 years ago: it's been the future for a long time. It just works, but it's slow for arbitrary geometry.

    [Image: Real-time raytracing done in Unreal Engine 3 in 2008 on an Intel Larrabee GPU]

    The problem is it's slow. So we've been waiting for computers to get way faster, or for someone to figure out how to trace much more efficiently. Instead they figured out how to use fewer rays to get similar quality, along with computers getting faster.

    Also, we've been doing raytracing on GPUs for a long time now, usually against a height map, distance fields, or voxel data. Screen space ambient occlusion, screen space reflections, and parallax occlusion mapping are all types of raytracing. There are also games like Claybook, which does real-time raytracing of SDFs on PS4 (see the sketch at the end of this post). But being able to send arbitrary rays into a scene made of triangles, without converting to an intermediate format, and to trace into areas not in view of the camera has always been the end goal, and it's finally possible.

    But it's going to be a feature of high end cards for a long time.
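    For anyone curious what raytracing an SDF looks like, here is a minimal sphere-tracing sketch; my own illustration of the technique, not Claybook's actual code, with an invented one-sphere scene:

    ```csharp
    using System.Numerics;

    // Minimal sphere-tracing (raymarching) sketch against a signed distance
    // field. The "scene" is a single invented unit sphere; real games combine
    // many SDF primitives. dir is assumed normalized.
    static class SphereTrace
    {
        // SDF: signed distance from point p to a unit sphere at the origin.
        static float SceneSdf(Vector3 p) => p.Length() - 1f;

        public static bool Trace(Vector3 origin, Vector3 dir, out Vector3 hit)
        {
            float t = 0f;
            for (int i = 0; i < 128; i++)                // max step count
            {
                Vector3 p = origin + dir * t;
                float d = SceneSdf(p);
                if (d < 1e-3f) { hit = p; return true; } // close enough: hit
                t += d;                                  // safe step: can't overshoot
                if (t > 100f) break;                     // ray escaped the scene
            }
            hit = default;
            return false;
        }
    }
    ```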
     
    hippocoder likes this.
  14. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,229
    Unfortunately, I don't think any of this will be useful for VR any time soon. It's still too slow, and it's being demoed as something you can add to 1080p titles, if you use a low-resolution buffer and denoise / upscale / DLAA the sins away. The problem is that none of these techniques are necessarily temporally stable, or, more importantly, visually stable between both eyes. DLAA appears to produce very clean images, but the extra details and subpixel features it hallucinates likely won't be the same in both eye images, and that will lead to all sorts of depth discontinuities.
     
    hippocoder likes this.