Discussion in 'General Discussion' started by CDF, Aug 15, 2018.
Oh, my good old S3 ViRGE 64 days with the 3D Studio (no max)...
I remember at least one game from back then that had a better and faster software renderer than their hardware.
No comments on my hardware survey?
Actually, what's really interesting about this raytracing platform is the high-speed inter-GPU switching architecture, which has huge bandwidth and allows all of the GPUs to share memory.
300 GB/sec vs. PCI Express 5.0's 32 GB/sec. Maybe we need this technology on our motherboards to totally remove the CPU/GPU bottleneck?
We're not bottlenecked by the performance of the bus between the CPU and the GPU. Two years ago Gamers Nexus ran some benchmarks and discovered that the performance difference between x8 and x16 in the highest-end games of the time was less than one percent.
If we're not being bottlenecked by 3.0 why would you believe we would be bottlenecked by 5.0?
By the way it's 64 GB/sec. Four times the maximum bandwidth of 3.0 and eight times the bandwidth used in 2016.
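For reference, the theoretical one-direction x16 bandwidth of each PCIe generation can be worked out from the per-lane transfer rate and the line encoding. A minimal sketch (figures are spec maximums; real-world throughput is lower due to protocol overhead):

```python
# Theoretical one-direction PCIe x16 bandwidth per generation.
GENS = {
    # gen: (GT/s per lane, payload bits, total bits of the line encoding)
    "3.0": (8,  128, 130),   # 128b/130b encoding
    "4.0": (16, 128, 130),
    "5.0": (32, 128, 130),
}

def x16_bandwidth_gbps(gen: str) -> float:
    gt_s, payload, total = GENS[gen]
    per_lane = gt_s * payload / total / 8   # GB/s per lane, one direction
    return per_lane * 16

for gen in GENS:
    print(f"PCIe {gen} x16: {x16_bandwidth_gbps(gen):.2f} GB/s")
```

This gives roughly 15.75 GB/s for 3.0, 31.5 GB/s for 4.0, and 63 GB/s for 5.0, which matches the "four times the maximum bandwidth of 3.0" figure (64 GB/s is the rounded marketing number).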
totally going to set myself up with this for Doom Eternal
I saw that and my first thought was "Man, I'm glad I bought that 1080 for $400".
I'm on the 1080 TI, probably get the 2080 TI anyway
So why can't your CPU and GPU share memory?
E.g. a 3.6 GHz 8-core 64-bit CPU has a theoretical processing bandwidth of about 230 gigabytes per second, which is probably a fraction of the processing bandwidth of GPUs and many times faster than the PCIe bus.
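That ~230 GB/s figure is back-of-envelope arithmetic: one 64-bit word per core per clock cycle. A minimal sketch (a rough upper bound on how fast the cores could consume data, not a real memory-bandwidth number):

```python
# One 64-bit (8-byte) word per core per clock cycle.
clock_hz = 3.6e9        # 3.6 GHz
cores = 8
bytes_per_cycle = 8     # 64-bit word
gb_per_s = clock_hz * cores * bytes_per_cycle / 1e9
print(f"{gb_per_s:.1f} GB/s")  # 230.4 GB/s
```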
Because of how the system architecture is literally laid out. It's not a latency problem, it's a "things literally are not designed to accommodate this" problem.
Why would you want them to share memory? Consumer CPUs are designed to use a different memory standard than that of a GPU. DDR (CPU RAM) is designed to have low latencies but sacrifices bandwidth while GDDR (GPU RAM) is designed to have high bandwidth but has higher latency as a result.
If you're curious what would happen if a GPU made use of CPU memory, just look at the GeForce GT 1030. NVIDIA launched a DDR4 model and a GDDR5 model. The DDR4 card only achieves half the performance, and it isn't as if the card was powerful to begin with. Using DDR for a high-end card would likely lead to far larger losses.
While GPUs would see a performance bottleneck from having access to system memory you would think that CPUs would see a large performance increase thanks to having access to much faster memory but that's not necessarily the case either.
Benchmarks covering the difference between single and dual channel memory are difficult to find because there is very little price difference between the two, but some do exist and at least for gaming the performance improvements are negligible.
I've provided a link to the first page of results but there are four pages in total. Only one game showed a meaningful difference in performance in favor of dual channel while the others showed little to no difference. Clearly CPUs are not bottlenecked by memory for gaming purposes.
Keep in mind that while the benchmark is definitely old (2015), it's using an older memory standard (DDR3) too. DDR4 is substantially faster, and we're on the cusp of DDR5, which is currently estimated to double the performance of its predecessor. It's still far slower than GDDR5, GDDR5X, and the brand-new GDDR6, though.
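To put rough numbers on that comparison: peak theoretical bandwidth is just transfer rate times bus width. A minimal sketch, where the GDDR rates and bus widths are typical example configurations (e.g. a 352-bit bus as on the RTX 2080 Ti), not fixed properties of the standard:

```python
# Peak theoretical bandwidth = transfers/s * bus width in bytes.
def bandwidth_gbps(mt_per_s: float, bus_bits: int) -> float:
    return mt_per_s * bus_bits / 8 / 1000  # GB/s

configs = {
    "DDR3-1600 dual channel": (1600, 128),
    "DDR4-3200 dual channel": (3200, 128),
    "GDDR5 8 Gbps, 256-bit":  (8000, 256),
    "GDDR6 14 Gbps, 352-bit": (14000, 352),
}
for name, (rate, bus) in configs.items():
    print(f"{name}: {bandwidth_gbps(rate, bus):.1f} GB/s")
```

Even fast dual-channel DDR4 (~51 GB/s) is an order of magnitude behind a GDDR6 card (~616 GB/s in this example).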
CPUs are going ever more multi-core, and GPUs are becoming more like CPUs, so a higher-bandwidth memory like HBM or GDDR will probably become the go-to memory of the future.
The spec for DDR5 is in the works and aims to increase memory bandwidth.
It's a bummer that HDMI 2.1 isn't ready yet for these cards. Otherwise they'd tempt me to switch from a 1080. I’ll wait for the 7nm die shrink next year.
The main difference between CPU and GPU memory is latency vs bandwidth. CPU and GPU have different needs.
Any actual current-game benchmarks on these cards yet? How much faster than a 1080 Ti are they?
Haven't seen any benchmarks, but in the talk he said the parallel integer and float engines on the cards would accelerate standard rasterisation as well. Sounded like close to 2x from the 1080 Ti to the 2080 Ti.
I don't think anyone has announced the date when the benchmark NDA lifts.
Wait, does this mean it will change the way we write shaders in Unity when we want to enable an RTX rendering pipeline?
We don't need to care about draw calls/batching, instancing, vertex counts, or polycount with that kind of rendering?
I don't get how the hybrid rasterization + raytracing + AI will work for us.
The NVIDIA presentation focused on new features instead of the performance increase in existing games. Based on the limited information available, I'm guessing at roughly a 2X performance improvement for existing DX11/DX12 games when going from a GTX 1080 Ti to an RTX 2080 Ti. Once hardware reviewers (such as Tom's Hardware and AnandTech) post detailed reviews, we will all have a much better idea about the relative performance between hardware generations.
As I understand it, RTX is hybrid raytracing added to an existing render pipeline. I don't think RTX will allow for a purely raytraced pipeline.
You mean in September 2018, at $499, $599 and $999 (or, if you're going with NVidia's Founders Editions, $599, $699 and $1199).
I find it kind of shocking that there seems to be no word from Unity about this. If I were NVIDIA, I would make sure that this gets built into the big engines really fast. It seems like Unreal already has it: most of the demos were apparently done with Unreal, and NVIDIA reportedly collaborated closely with Epic on this one.
The thing is: If this works as advertised, it basically means that PC gaming in 2020, and console gaming whenever this is built into consoles, will no longer care about reflection probes, lightprobes, lightbaking, screenspace reflections, shadows, possibly not even anti-aliasing and certain kinds of post-processing effects ... all of this will come "for free" with the GPU.
While the bump in visual fidelity may not be that obvious to the common eye, the simplification in game development is a complete game changer.
No, I mean exactly what I meant. That context was for the film industry, where, yes, these cards will accelerate existing extremely high-resolution chained setups so they go from days to hours.
It's not all about gaming approximations. The raytraced shots from the games are, in actual fact, pretty crappy and do one local bounce. It's still good for games, but nowhere near sufficient for film. For film, it will take many of these cards chained together (which they're designed to support), and of course a lot more bounces, a lot more range, etc.
Which means hours of rendering time.
The home cards doing it in realtime are not doing the same jobs, just very limited reflections. Think of it in terms of a much better SSR or local reflections, great contact shadows, etc.
I would assume that things work very differently with this pipeline.
I don't think RTX will change that much in that regard. But all the tricks that are currently used to achieve solid physically-based shading will be obsolete. Unlimited and arbitrarily complex lightsources, perfect shadows, perfect reflections, perfect ambient occlusion. And all of that without lightbaking, traditional GI, lightprobes, reflection probes, static lightmaps, or hard vs. soft shadows.
I think this snippet explains it quite well visually:
NVidia Shadows Dynamic Occluders Reflections; 4:33:15 on Twitch
You probably saw this one already - but I think it's quite relevant: Raytracing Performance; NVidia Gamescom Keynote on Twitch, 2:18:22
And here is a pretty impressive video of where Unreal is at with this: Filming and acting in Star Wars using Virtual Reality (it's not so much about rendering technology but more about content creation ... what I find impressive is how well this has all been integrated into the engine / editor).
Ah, ok, sorry - I missed that context.
Are you sure it's just one local bounce? The presentation made it look like rays were passing through glass, refracted, and bounced two times. But I'm not perfectly sure about the two bounces, here's that part:
We can only speculate at this stage, but certainly the key word here is hybrid. There are a few ways Unity can open up DirectX Raytracing (including NVIDIA's RTX implementation), and then various effects/graphics features can be written on top of that. I kind of expect to get some of this stuff in the Unity HD render pipeline eventually, rather than the existing standard pipeline, but I am only guessing based on where all of Unity's focus on modern, improved graphics is these days.
Just wondering: RTX is good and all, but I'm assuming Unity has been in on its development. So, is there ANY news yet on how Unity will be supporting these raytracing features, ongoing?
I'm stoked about this. At least we can put to bed many of the traditional techniques.
But they will still be relevant. My bet is that, for this generation, raytracing will still need to be rationed. So expect to see games with ray-traced shadows but still using reflection maps, or vice versa. I can't see ALL possible raytracing techniques being used in one game right now.
Besides, they are not all made equal. Some take more processing than others. For instance, mirror reflections are not too expensive, whilst slightly blurred reflections are MUCH slower. So, for now, we may still need to juggle raytracing LODs.
Anyway, bring it on
Honestly, I won't put any stock in raytracing tech being mainstream until I see it on home consoles.
If we have to speculate, I think the very first step will be to change the lightmapper to use it. This would mean when you have an RTX card and you're generating lightmaps or reflection maps, you could do it super (I mean really super) fast.
You can't rely on it as an end-user card just yet, especially because it's expensive and you need to make these things compatible (switch back to maps, really) with previous generation cards and lower hardware.
Back in the day, 2001 or so, I was a beta tester for a raytracing renderer for 3ds Max called Brazil. The urge, of course, was to raytrace THIS and raytrace THAT, and soon enough you'd have a render that took two days per frame to complete.
Of course the same will hold here. The developers of Brazil were clear on this: they advocated using scanline techniques in an image where it made little difference to the end look. And it's a fair point.
IMHO raytracing so far has been mainly demos of reflections. Where I think it REALLY excels is in area shadows and reflection occlusion. I mean, currently in Unity, using shadow maps is pretty skanky, whilst reflection maps actually look pretty decent most of the time.
For me, I'd just love to have raytraced shadows for all of my lights. If RTX can deliver that, in Unity, in this generation, it's a pretty big win.
Are next-gen consoles like the PS5 getting raytracing technology?
If so, Unity should consider adding raytracing support, since SRP targets friendlier console integration.
We literally do not know.
It would be just a guess, but I would guess the PS5 would not contain an RTX card. Consoles are usually designed to be relatively inexpensive with just enough power to not completely embarrass themselves. The only way the PS5 might get some raytracing support would be if the next gen AMD APUs already included it, and it would probably not be powerful enough to actually use that feature in a production game with that hardware. But this is admittedly all guesses and speculation.
AMD's Radeon Rays only requires that the hardware be able to handle OpenCL 1.2. AMD's APUs have had support for it as far back as 2011. The PlayStation 4 and Xbox One are using a GCN-based architecture for their GPUs meaning they have had it too.
I was also thinking that new AMD cards next year will have some support for realtime raytr. with radeon rays.
While that is true, I still doubt there is enough performance in that part to actually use that feature in a production game.
Imagine having this in Unity in real time
Looks nice. We will, but not tomorrow.
I do wonder what cheap raytracing looks like, basically with a minimum number of rays.
Is it any good compared to raster-based rendering?
Any examples anywhere?
Mirror-like reflections, or sharp-edged shadows, look pretty neat with between 2 and 4 samples per pixel. And these would look better than scanline methods such as shadow maps, as raytraced shadows can be cast pin-sharp across massive scenes. And of course mirror reflections would probably render in real time at 60 fps, rather than a re-calc of reflection maps, which usually takes a minimum of 4 frames.
Once you start talking about soft rays, like area shadows or ambient occlusion, then you need 8 or more rays. The more blur, the more rays.
So chrome and midday sun shadows are both pretty cheap, whilst brushed aluminium is REALLY expensive to pull off well with no noise. The same goes for area lights of any kind. It's the same for refraction: sharp refraction, like in a glass of water, is relatively cheap, as long as ray depth is kept to a minimum, usually 3-4 bounces. But if you're talking about some kind of cloudy gel, then it gets very expensive to achieve noise-free results.
So sharp reflections, such as on water, chrome, car bodywork, glass and so on, are going to be a big use at first. And I can see scene lighting for sun shadows also being an obvious use. Maybe ambient occlusion as well.
Think of it this way. If you have a stylized game, with grey diffuse surfaces, then you could spend your raytrace budget on soft area lighting. Or ambient occlusion. Whatever effect gives the best improvement over scanline, and current hacks, like screen space Ambient Occlusion, which has quite a lot of issues (but is better than none!)
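To see why the raytrace budget needs rationing, it helps to put rough numbers on it. A minimal sketch, assuming NVIDIA's marketing figure of about 10 gigarays/s for the RTX 2080 Ti (treat the results as a theoretical upper bound, not a benchmark):

```python
# Rays available per pixel per frame at a given resolution and framerate,
# assuming the marketing figure of ~10 gigarays/s (an upper bound).
GIGARAYS_PER_S = 10e9

def rays_per_pixel(width: int, height: int, fps: int) -> float:
    return GIGARAYS_PER_S / (width * height * fps)

print(f"1080p @ 60 fps: {rays_per_pixel(1920, 1080, 60):.0f} rays/pixel")
print(f"4K    @ 60 fps: {rays_per_pixel(3840, 2160, 60):.0f} rays/pixel")
```

That works out to roughly 80 rays per pixel at 1080p/60 and roughly 20 at 4K/60, before any shading cost. Once each soft effect wants 8 or more rays, a couple of effects at 4K already exhaust the budget, hence picking one or two raytraced effects per game.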
Honestly, these are precisely my use cases, and precisely why I'll be happy to see this tech get widespread enough adoption that I can make use of it. There are a lot of really nice area lighting effects I'd like to play with.
@cthomas1970 thx for extensive description, nice to read.
Do any of you think there will ever be a need to use both rendering methods together to gain performance? Especially while early GPU support is becoming accessible. Maybe, for example, some static distant objects with baked light/shadow effects, while keeping raytracing for the near and dynamic environment, including reflections etc.?
Or using some mixed-approach tricks to save on performance for 4K screens, so they can be handled effectively. I think this could be feasible?
Any tricks that are much more efficient than raytracing, but look exactly the same, are still going to be a good idea. Think about it as if you were going to make a movie. Do you build a life-size model of ancient Rome, or can you get away with a matte painting of it? Sure, the matte painting won't be quite as nifty as re-creating the center of Rome in its heyday. But it would be an awful lot cheaper, and most people would not notice the difference.
So, you've saved X budget by using a matte painting, and now you can use that resource for something else, possibly more important.
Both. Movies frequently make use of both real and fake locations. Spartacus, for example, used some locations in Spain that had Roman architecture while filming the remainder of it in California.
Likewise we may reach the point where making use of raytracing makes sense for the techniques that see huge benefits from it while rasterization continues to be used for areas where it continues to shine. Furthermore depending on the way the hardware is designed (eg raytracing running concurrently with rasterization) we might not see performance suffer.
That's actually the point I was trying to convey. Using brute force to achieve a goal is not good in and of itself. If you can do something cheaper, with fewer resources, and no one will ever know or feel the difference, do it. That means using non-raytrace methods that are lighter on the GPU where you can.
At some point this won't be as necessary; you will be able to pick ALL current raytrace methods and newer ones down the road, and get 90 Hz on a 16K screen. But for now, we will have to pick our battles.
The war against perf is not over yet (but it's a heck of a lot easier than it once was...).
That's true - but "cheaper, with less resources" can also mean not messing with hacky workarounds that significantly increase the complexity, risk and cost of game development.
Of course, if you want to reach the widest audience, you'll have to support mobile. If you want to target VR, you have to make sure that you hit 90 FPS.
But if you want to create an amazing look and can live with a smaller audience, you might want to invest the time otherwise used for fixing the hacky approaches and making them look right into creating better art assets.
Reviews became available yesterday. Current benchmarks show the RTX 2080 Ti at only 20 to 30% faster than the GTX 1080 Ti. The RTX 2080 is basically tied with the GTX 1080 Ti.
The big speed increase on top of this seems to be DLSS. It seems like, when enabled, it only renders about 50% of the pixels and then reconstructs the image with AI during the anti-aliasing pass. Another 40% speed increase.
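A quick sanity check on that "renders ~50% of the pixels" claim: DLSS renders at a lower internal resolution and upscales. The exact ratio varies per game, so the 1440p-to-4K pairing below is just an assumed example:

```python
# Fraction of native output pixels actually rendered when upscaling
# from a lower internal resolution (assumed 1440p -> 4K pairing).
def pixel_ratio(render: tuple, output: tuple) -> float:
    rw, rh = render
    ow, oh = output
    return (rw * rh) / (ow * oh)

ratio = pixel_ratio((2560, 1440), (3840, 2160))
print(f"{ratio:.0%} of native 4K pixels rendered")  # 44%
```

So a 1440p internal resolution means rendering about 44% of a native 4K frame, in the same ballpark as the "50% of the pixels" figure.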
Yeah been watching a few perf comparisons on this. DLSS seems like the winning tech *so far*.
Don't think there's a single game/benchmark out yet that uses RTX raytracing, so gotta wait a bit longer for the real deal.
It is strange releasing a new piece of tech (RT cores) which are essentially unusable at launch. Yeah I know it's all coming, but still...
Did anyone here manage to pick one of these cards up?