Discussion in 'General Discussion' started by CDF, Aug 15, 2018.
if ($10,000) Graphics.rtx = true
if ( 2018 && $10,000 ||
2019 && $5,000 ||
2020 && $2,500 ||
2021 && $1,250 ||
2022 && $625 ) Graphics.rtx = true;
Assuming that GPUs follow a yearly doubling in performance per dollar.
So in about 5 years' time we should see "affordable" high-end GPUs with real-time Hollywood-level ray tracing.
However, if screens keep getting larger and faster, it could take a lot longer before GPUs are able to catch up to 8K, 16K, 32K, 64K... screens with 144 Hz, 240 Hz, 580 Hz, 1000 Hz refresh rates.
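For a sense of scale, each jump in resolution and refresh rate multiplies the raw pixel throughput a GPU has to deliver. A back-of-envelope sketch using two of the display modes mentioned above:

```python
# Raw pixel throughput a display mode demands from the GPU.
def pixels_per_second(width, height, hz):
    return width * height * hz

base = pixels_per_second(3840, 2160, 60)     # 4K @ 60 Hz
target = pixels_per_second(7680, 4320, 240)  # 8K @ 240 Hz
print(target / base)  # 16.0: sixteen times the raw throughput
```

So one resolution step plus one refresh-rate step already asks for 16x the fill rate, before any raytracing enters the picture.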
I'm excited. I am planning to order at least one RTX2080 the moment they are available to order.
Can't wait for the RTX 2080 Ti. My 1080 Ti feels old now.
Yeah, not sure about a 64K screen haha. Can we even see that high quality?
Gonna be a real interesting next 5 years for sure. Wonder who's gonna be the first to get a game out with realtime ray tracing? Might have to start actually checking minimum specs again
Remember you're not always sitting at a "normal" distance from the display. Think VR headsets.
Some games have already used limited ray tracing; I think it was CryEngine.
Yeah, VR is extremely demanding since the minimum framerate is 90 fps, and it will not become less straining as headsets increase resolution. Though foveated rendering powered by Tobii eye tracking will solve a lot of that.
Definitely. VR could honestly benefit from added resolution. The view in the Oculus Rift always reminds me of a really old screen door. It will be interesting to see what happens with VR headset resolutions once these next-gen Nvidia cards are readily available.
We also need more performance on the CPU side; I really hope Vulkan + graphics jobs will perform well soon. Many have 8-16 threads today.
One whole percent!
The thing is, do we need full raytracing, or just boosted raytracing for real-time GI, ambient occlusion, reflections, and refractions/caustics?
Or will raytracing GPUs be more of a workstation development platform for baking better lighting faster, e.g. Unity GI?
Most likely we will get games with boosted graphics settings, just like we have quality settings today, and games that can take advantage of hardware features, e.g. Nvidia's APEX, which boosts particle systems and cloth physics.
As a Steam publisher I can take part in those statistics, and not even 1 percent of my customers have participated, so I think those statistics say very little. Here in Sweden, Ryzen is on a march, for example. They did a survey on the most popular gaming and overclocking site, and Zen plus Zen+ got 16 percent; the 8600K and 8700K from Intel grabbed 11 percent.
Edit: also, I said 8 to 16 threads; check your statistics again, 4 cores with HT makes 8 threads.
I don't know. I vaguely remember the last time they provided the number of people using the platform and one percent of those with eight or more cores isn't too far away from the number of people in /r/pcmasterrace.
How many peasants, er, people with 4C/8T are there in your player base?
Personally I see these new GPUs being used as workstation boosters for now, especially for film/TV. I was pretty blown away to see how many GPUs are used in render farms these days (1:11:20 in the video).
I think within 1-2 years they'll enable enhanced rendering features in games where supported. Long term... the holy grail of graphics is full realtime ray tracing. I doubt we'll see realtime photorealism for a very long time. But maybe in 10 years there will be a few games running a fully ray-traced scene.
Do we need ray tracing? Technically no, but just imagine the workflow and visual improvements. No more light baking!
4C/4T, and to a lesser extent 4C/8T, is still very much the norm. If you think even 10% have more than that, you're either super subject to confirmation bias or just lying to yourself. The people buying the new hot CPU are the vocal minority.
I can check the hardware survey tomorrow and report back, but like I said it's just a portion of the entire player base. VR has players with more muscle, though, so hardware in that market will probably be better than Steam overall.
There's nothing to suggest it isn't a representative sample though which means the numbers wouldn't change
Granted VR is a subset that you can expect to be positively skewed in specs
The 6700K was the norm, then the 7700K; pretty sure the 8700K with 12 threads will follow. But even with 8 threads you will see massive gains from multi-core rendering.
The 6700K isn't the norm though. i5s are the norm, not i7s. We're talking about PC gamers as a whole and not PC master race elitists.
With the way everything is progressing - both hardware and software - it won't be for too much longer. A few of the tech reviewers I watch on YouTube have started seeing performance problems with 4C/4T with some games. It isn't until they start using 4C/8T or 6C/6T that the problems disappear.
Within the next year or two we may reach the point that you can't play current games with 4C/4T.
I don't deny that at all, but it's quite a leap from saying 8/16 is the norm.
Thanks to AMD the 8700K is practically free. The 2700X with 16 threads is also cheap. VR gamers are PC master race.
I didn't say that; I said 8 to 16 threads are the norm, and 8 threads includes all 4C/8T CPUs.
Don't start throwing phrases like 'Hollywood level raytracing' around like that, because you are way, way off when you do.
If you watch that entire Nvidia presentation you will see a section that actually deals with the demands of Hollywood-type rendering. And it is miles away from realtime, and miles away from a single $10,000 Quadro card.
Rather, in the example he gave, they looked at using one rack of 4 RTX Servers. Each RTX Server has 8 Quadro GPUs. The total cost of this rack is $500,000.
And with that setup, he speaks of 3 seconds of footage being able to be rendered in 1 hour.
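To put that figure in perspective, a quick check of how far that setup sits from realtime, using only the numbers quoted from the presentation:

```python
# 3 seconds of footage takes 1 hour of wall-clock rendering on the rack.
footage_seconds = 3
wall_clock_seconds = 3600
slowdown = wall_clock_seconds / footage_seconds
print(slowdown)  # 1200.0: roughly 1200x slower than realtime
```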
The realtime raytracing era is exciting, but it involves a hybrid approach and should not be confused with Hollywood-level applications whatsoever. That is not to say that people won't be able to use the affordable, realtime hybrid approach for certain cinematic purposes, but the scale of the numbers on display should make it clear that there are still two different worlds here, and blurring the lines between them can only get you so far.
3 secs of footage in an hour on that budget is really great; obviously it's not the dream realtime gaming thing. I think that'll come whenever it's all affordable, a few years off.
1 - 2019 $250,000
2 - 2020 $125,000
3 - 2021 $62,500
4 - 2022 $31,250
5 - 2023 $15,625
6 - 2024 $7,812
7 - 2025 $3,906
8 - 2026 $1,953
9 - 2027 $976
10 - 2028 $488
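A minimal sketch of the halving logic behind the table above (the figures appear to be truncated to whole dollars, so successive integer halving reproduces them exactly):

```python
# Price of the $500,000 rack if cost per unit of performance halves yearly.
price = 500_000
table = []
for year in range(2019, 2029):
    price //= 2  # integer halving matches the truncated figures above
    table.append((year, price))
print(table[0], table[-1])  # (2019, 250000) (2028, 488)
```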
Or about 9-10 years away from a desktop GPU, although ARM has been pushing raytracing demos, and there is the new RTX gaming GPU line from Nvidia due out this year.
Didn't Intel work on a raytracing GPU not long ago?
Depends exactly what dream people have in mind. Much like the reaction to the initial RTX & DirectX Raytracing stuff at GDC, it's clear that people's imaginations have been captured by this stuff, but it's far from clear that people have limited their dream to what the hybrid approach will actually offer with this looming generation of cards.
If we consider the upcoming RTX 2080, which may be announced in a matter of days, to be affordable, and people's dreams are safely within the boundaries of a hybrid approach, then we will probably start to get a much better sense of things as they pertain to game engines by the end of this year or start of next. The hybrid approach consists of plenty of stuff still being rasterised, with raytracing, de-noising, and low-res rendering combined with new anti-aliasing tricks, giving us nice area lights with soft shadows and nice reflections. It seems reasonable to expect usable realtime implementations of those things in game engines with higher-end cards. Whether we can also squeeze decent realtime ray-traced GI on top of that with these cards remains to be seen; maybe we will be able to with certain kinds of games/scenes, but this is the sort of area where people's dreams and expectations might start to diverge from this new generation of cards. Unclear to me at present. I'm certainly excited to get even one of these features at performant speeds in game engines I have access to, so I'm looking forward to the next 2 years a lot.
I should probably know better than to engage with your constant failures to grasp the detail of technology, but in any case:
a) price/performance will not evolve in such a simplistic manner.
b) even if it did, that's still only enough to render 3 seconds of 'Hollywood' footage in 1 hour, so nothing like realtime.
You are barking up the wrong tree. Nothing that has been announced is going to magically cause the world of realtime graphics to completely leave behind the 'rasterise and fake various things' approach in the next 10 years. What is on offer, soon, is the ability to fake slightly fewer things, including a number of things that can be very pretty indeed, and that people have every right to get excited about. But it's way too easy to use this exciting starting point to leap off into utter nonsense.
I dunno, throw some bitcoin mining in there and I don't think that decline in price is so steady.
It will also depend on what AMD get up to. AMD for a lot of years remained second fiddle because it was really cost effective. After winning several consoles worth of supply, it's evident they have enough room to compete hard and are happy to do so.
This is good for us of course.
Even if we could predict the price changes in that class of hardware, it would still not give us a good picture of the future of the realtime side of things. There are a number of reasons for this, but since I have already said a lot, I will just pick one factor for now:
For the 'Hollywood type rendering' in the nvidia presentation, a big factor was how much framebuffer memory is available to render a complex scene. In this case he makes a big deal about the new bridge that enables two 48GB cards to share their memory, giving 96GB total.
I'm reasonably confident that when we get a range of consumer realtime ray-tracing games that really start to live up to people's dreams, enough 'faking it' will still be employed that we don't need 96GB of video card memory to get results that make people happy. So that's one reason I'm not going to try to extrapolate an affordability timescale using a very expensive RTX render farm as the starting point!
If competition goes well on that front, I can see it affecting what percentage of PC gamers end up with a card capable enough to do some of this hybrid stuff at various stages. But where I think the action really is, as far as practical realtime techniques for the upcoming generation of cards goes, is what talented people on the graphics programming side are able to manage. Given that the techniques which will now be doable in realtime are mostly only becoming practical because of things like AI-based de-noising, it would probably be foolish of me to ignore the possibility that all manner of clever stuff may be done by developers in the next few years, delivering plenty of lovely results, maybe beyond the handful of possibilities that have been shown off so far. Glossy presentations that like to throw out one-liners about the end of 'fake it' techniques are a bit misleading, because I think a lot of what is going to be on offer are new forms of faking it, and there is nothing wrong with that; it will yield some great stuff despite not really being the true, pure dream of realtime raytracing that people have hankered after for many decades already.
Technically speaking, nobody mines bitcoin with a GPU. Any direct bitcoin mining currently taking place is being done using ASICs instead of GPUs. There are lots of people mining altcoins using GPUs. Luckily for gamers, altcoin values have been relatively weak this summer, so a lot of the crypto mining demand for GPUs is currently greatly decreased.
This. Between the next generation of cards coming out now (it's all but confirmed at this point) and the mining situation you mentioned, manufacturers are finding themselves with a large number of cards they suddenly need to be rid of.
I happened to be checking a subreddit dedicated to hardware sales (/r/buildapcsales) and picked up a new 1080 directly from the manufacturer for hundreds of USD less than the price they had months ago and over a hundred below the MSRP.
My 780 Ti got replaced by a GTX 980 because the Ti basically died. One of the 980's fans stopped spinning a few months ago, but it's a pretty cool card so it doesn't need all that much cooling.
Ultimately I'm probably just going to leave it and buy a new GPU when I get a new rig. I tend to prefer to replace the whole shindig at once.
Note the graph uses a logarithmic scale, but you can see a near 10x performance boost in GPUs in about 10 years.
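Note that ~10x over 10 years is a much slower rate than the yearly doubling assumed earlier in the thread; compounded, it works out to roughly 26% per year:

```python
# Annual growth factor implied by a 10x gain over 10 years.
annual_factor = 10 ** (1 / 10)
print(round(annual_factor, 3))  # 1.259, i.e. about 26% per year
```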
It's all about Gigarays now.
I checked my hardware survey page on Steamworks. It was exactly 1 percent of my customers that have done the survey. Not a lot in a small market like VR.
Anyway, here it is
It doesn't say if it's an i5 without HT or not. But I also got these statistics, though they're grouped into different CPU categories.
edit: haha, 50% of those customers have a VR headset, for a VR-only game. I think you should take Steam statistics lightly.
Honestly dunno why everyone wants realtime raytracing. It may sound like a dream, but honestly, as we approach that goal, the gains we've made should enable better algorithms that allow for better approximations... I don't know if we'll ever get it, but I can imagine we will get pretty close. If there's anything I've learned, it's that nobody can predict the future.
3D raytracing is for squares anyway. Real pros do all their work on PCs so old that they can only run rudimentary 3D operations on an (optional) vector coprocessor.
Realtime raytracing is done all the time, technically, in most games for post effects and even, I guess, contact shadows. What most people are referring to is probably doing the entire frame with classical raytracing, and I actually don't think that will ever be wanted or needed, because there is no reason to throw the baby out with the bathwater. Approximations that require multiple camera renders from different viewpoints, plus shadows and transparency, are the key solvable issues for ray (or path) tracing, so I think a hybrid is likely to become the standard rather than going all-in one way or the other, since both approaches (ray and shader) offer fantastic solutions for what they do but become much slower outside of those areas.
Didn't a few AAA studios already kick off research into hybrid systems?
It's not just rays it's AI to improve the image quality.
What's interesting is that it's estimated you would need about 36.8 petaflops to run a simulation of the human brain in real time (source).
So in theory, with only 19 of Nvidia's $399,000 DGX-2 systems, so around $7,581,000, you could run a human brain simulation in real time.
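Checking that arithmetic (the ~2 petaflops per system is Nvidia's quoted DGX-2 figure; the rest are the post's own numbers):

```python
import math

brain_pflops = 36.8   # estimated petaflops for a realtime brain simulation
dgx2_pflops = 2.0     # ~2 petaflops per Nvidia DGX-2
dgx2_price = 399_000
systems = math.ceil(brain_pflops / dgx2_pflops)
print(systems, systems * dgx2_price)  # 19 7581000
```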
We're excited because it can greatly accelerate tasks involving the tracing of light rays. Global illumination is one task that traces light rays. Imagine being able to use Enlighten in realtime rather than having to wait for a preview bake to adjust the results. Better yet imagine being able to do it at the highest setting rather than the lowest.
We have trouble simulating INSECT brains.
It's pretty clear no one's ever going to do "brute force" raytracing on the whole scene, with no hacks, optimizations or tricks to "speed it up". Any opportunity people get to squeeze more performance out of what they've got, they'll take it. This is why we're all watching heavily compressed video streams instead of downloading raw frame data. It's why digital TV uses encoding to reduce how much data has to be downloaded. It's why there are all these techniques and special human-intelligence-driven workarounds and bandaids all over the place, to create an end result using fewer resources. It would be nice, for example, if we ditched triangles and went to real mathematical geometry and splines, which then get properly raytraced with multiple rays per pixel, antialiased and all that, but the fact is, when they can produce "almost" the same quality using all these algorithmic "clever" methods, they just won't go there.
Why do I always feel like you want to lecture me and/or argue about the things I say with you? Pardon me if I am off base but it feels like anytime you reply to something I post it feels like a rebuttal.
Let's not forget that increasing multithreaded processing power means increased capabilities in other areas, not just raytracing.
It felt like you were asking a question in addition to giving your opinion (which I do agree with by the way).
Ah, ok. That makes sense.