Discussion in 'General Discussion' started by OCASM, Mar 19, 2018.
And if you combine that with an AI Raytrace Render Filter to remove the dithering like this from NVIDIA...
All this photorealism isn't adding anything to the gameplay... where is the progress on the voxel particle sandbox? I want to see soil-simulation levels of mindless entertainment being possible, damnit.
Next up.. Vulkan ray tracing... where you at?
Just keep in mind modern game engines are not being used solely for games now. Microsoft may have picked GDC more for the fact that the people who build and work with game engines are there than anything else.
According to nvidia that's coming too (pinned comment):
Epic has announced support coming to Unreal. Hoping to hear the same from UT.
As Ryiah said, we aren't all using game engines to make games
Can confirm. I'm currently working on a POC for a graphic novel with scenes built in Unity and panels rendered via Octane. Lots of use cases these days.
AMD joins the stage with their Vulkan-based real-time ray tracer:
Is it just me or does AMD's announcement feel like "Oh sheet, we missed the boat, quick, announce something to show we were working on it too!"?
I hope Unity can bring us news about Microsoft DirectX Raytracing and NVIDIA RTX.
The new HD pipeline appears to be the perfect scenario to add features like that.
Isn't Unity going with OTOY and won't OTOY adopt these?
The first demonstration of real-time raytracing in Unreal Engine 4 using Microsoft’s new DXR framework and NVIDIA RTX technology:
Just to moderate the expectations that some of the GDC hype over this might encourage to run wild:
That UE4 demo is running on multiple GPUs, and most of what is being talked about at this stage is a mixed approach, with rasterisation still responsible for plenty of stuff. E.g. reflections, AO, and shadows for rectangular area lights are being done via raytracing; the rest is not. Today's (and tomorrow's) GPUs have not suddenly gained the power to do full-on raytracing for everything in complex scenes, or even to completely tackle entire lighting aspects such as GI, without running into massive performance issues. Indeed, much of what has enabled NVIDIA to unveil and show off this stuff now is the work they have done on denoise algorithms.
As best I can tell, what we have here is the beginnings of a framework for some clever stuff that has plenty of potential over the next 5-20 years, but we are not reaching the end of 'faking it' by any means. If you hear talk of games with this tech shipping before the end of 2018, think of it being used in limited ways: either for an effect or three in complex scenes, or games with simple enough style & content to experiment with an entirely raytraced approach (not sure we'll see much of the latter, dunno).
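As a rough illustration of that mixed approach, here is a toy Python sketch of a hybrid frame: rasterization supplies primary visibility and direct lighting, and ray tracing contributes only selected terms (reflections, AO). Every function and number below is a hypothetical stand-in, not any engine's actual API:

```python
# Toy sketch of the mixed approach: rasterization provides primary
# visibility and direct lighting; ray tracing contributes only the
# reflection and AO terms. Every function here is a hypothetical stub.

def raster_pass(pixel):
    # Stand-in for the rasterizer: primary visibility + direct lighting.
    return {"albedo": 0.8, "direct": 0.5}

def rt_reflection(pixel):
    # Stand-in for a traced reflection ray (DXR/RTX in practice).
    return 0.2

def rt_ambient_occlusion(pixel):
    # Stand-in for traced AO rays; 1.0 means fully unoccluded.
    return 0.9

def shade(pixel, ambient=0.1):
    g = raster_pass(pixel)
    # Only the reflection and AO terms are ray traced; the rest is
    # conventional rasterized shading.
    return (g["albedo"] * (g["direct"] + ambient * rt_ambient_occlusion(pixel))
            + rt_reflection(pixel))

print(round(shade((0, 0)), 3))  # 0.672
```

The point is structural: the ray-traced terms slot into an otherwise rasterized shading equation, which is why current hardware can keep up.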
Yes for games. But again, many of us are using these tools to do things other than real time games.
True, and I've never been primarily interested in games myself.
But even beyond that context, if we are still talking about realtime then what I said still applies. Again taking that UE4 star wars scene, they used a mixed solution, with raytracing only responsible for certain key parts of what we see. They had the luxury of setting a 30fps 1080p target, and using multiple GPUs, and all the nvidia goodies. And they were not able to come up with a performant solution for the ambient lighting & GI side of things. The beefy hardware, nvidia denoise and a few other tricks enabled them to use raytracing for the reflections, AO and area light shadows, but even then they had a few drops below their target framerate at times.
If we aren't talking about realtime, then yes, I suppose a few of the things we've heard about today might get utilised in the sort of raytracing renderers that use the GPU and want to get results quickly, but don't need to deliver multiple frames per second. But there is hype in this space as well. E.g. I saw a recent OTOY reference in the press with some 60fps claims but no context by which to judge this accurately (and it may be the journalist's fault), but OTOY themselves are sometimes guilty of talking about multiple different technologies at the same time and blurring the lines, creating inappropriate expectations. I very much see the same potential in the glossy demos of the raytracing stuff by Microsoft/NVIDIA/Epic etc at this year's GDC. The real detail is there beneath the initial keynote demos, though, and there is a big difference.
From our point of view, even if it takes a couple seconds per frame it is still giving us nearly 90% of the quality of what would normally take us hours or days using our traditional render farm.
More than that though, it allows our artists to get a "realtime" view that is much closer to what the final rendered output will eventually be (when rendered offline traditionally at the end) than we can currently do. That is why it excites me so much.
Otoy is doing a presentation tomorrow at Unity Central and it's going to be livestreamed.
Oh don't get me wrong, I get excited about all manner of techniques in this space, and I can see the general benefit in various scenarios, including ones like yours. We still have to be very clear about what is possible and what has actually been announced and showcased at GDC this year, though.
The Microsoft and NVIDIA bits give us a GPU pipeline specific to raytracing, supported in DX12, with some additional NVIDIA research being used to provide some effective denoise stuff. This foundation, which in NVIDIA's case they have tied to their Volta architecture, can be used by others for various tasks involving raytracing. Epic tried a few in UE4 and got usable results in the realm of reflections etc. I don't think they gave much indication of actually attempting to bring this stuff to a wider audience in standard UE4 at this stage, and in their detailed presentation on these matters, the main thing they focused on actually bringing to UE4 soon was area lights (without shadows), which aren't using this tech (this tech was used to shadow those lights in the demo, but those shadows are not coming to UE 4.20).
So we have an interesting potential foundation that I'm sure people will build interesting things upon over time. However, for the scenario you mention, I would think the existing work on GPU-based offline rendering is more like where the action is (e.g. OTOY OctaneRender). Maybe they will rework their tools to use some of the raytracing stuff in DX12 and/or the NVIDIA stuff given time, but in the meantime they are much further ahead because they have already built a full rendering solution. That's not what this Microsoft & NVIDIA stuff is; the stuff unveiled at GDC is not a complete raytrace-based rendering solution, and quite a lot of it is based on the idea of a mixed strategy of combining rasterisation with some raytracing, with various tricks to make realtime applications feasible. If I've got the luxury of not needing realtime, then I probably don't need to resort to all of those tricks, though brilliant denoise stuff in general can obviously help reduce the sample count required to get reasonable results.
Good. I hope they do a good job of making clear which of the different things they are going on about apply to which of their products, to avoid some of the misplaced expectations and hype of the past. I'm a fan of their work, but it's still early days for their Unity stuff, and the blurred lines have been most unhelpful at times - e.g. making it sound like their stuff was already well usable for Unity lightmap baking when in fact it was just an early starting point that demoed the concept but was not a fully fledged solution for lightmaps.
DXR will open up ways for people to do secondary rays efficiently, such as shadows, AO, GI and the like, but it won't replace rasterization for primary rays anytime soon, not even in the near future. Even for secondary rays, it will take some time for people to adopt the API into game engines and for users to get the required hardware.
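To make the primary-vs-secondary distinction concrete, here is a minimal CPU-side Python sketch: once the primary surface point is known (from rasterization), a single secondary "shadow ray" toward the light decides whether the point is lit. The sphere occluder and all values are made up for illustration; real DXR work runs on the GPU in shader code:

```python
# Minimal illustration of a secondary (shadow) ray, evaluated on the CPU.
# The scene (one sphere occluder) and all coordinates are invented for
# the example; DXR itself is a GPU API and looks nothing like this.
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Standard quadratic ray-sphere intersection; direction assumed
    # normalized, so the quadratic's leading coefficient is 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return t > 1e-4  # hit must be in front of the ray origin

def in_shadow(point, light_pos, occluder_center, occluder_radius):
    # Cast one ray from the shaded point toward the light; any hit
    # along the way means the point is shadowed.
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    return ray_hits_sphere(point, direction, occluder_center, occluder_radius)

# Surface point at the origin, light overhead, sphere occluder in between.
print(in_shadow((0, 0, 0), (0, 10, 0), (0, 5, 0), 1.0))  # True (blocked)
print(in_shadow((0, 0, 0), (0, 10, 0), (5, 5, 0), 1.0))  # False (clear path)
```

This is exactly the kind of per-pixel query that is cheap for hardware-accelerated ray tracing but awkward for rasterization, which is why shadows and AO are the first targets.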
Thanks for putting it in a very succinct manner, unlike my lengthy attempts!
Cheers for posting that video. It does a good job of showing how important the denoise stuff is to even begin to achieve practical realtime results, albeit still with very beefy hardware at this point.
The Path Tracing & AI Denoising part of the video is certainly interesting in the context of the non-realtime stuff that I was previously suggesting the likes of OctaneRender are better placed to handle at this point. That's still my opinion, but it is good to know that NVIDIA etc have already tried taking their systems in this direction.
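A toy numeric sketch of why the denoising matters so much: a one-sample-per-pixel estimate is noisy, and even a crude box filter cuts the error substantially, which is the basic reason denoisers let ray tracing get by with very few samples. The synthetic "image" and filter below are illustrative only; the AI denoisers demonstrated are far more sophisticated:

```python
# Toy illustration: denoising trades samples for filtering. A "1 spp"
# signal with deterministic +/-0.3 noise is smoothed with a box filter,
# cutting the mean squared error. Real denoisers are far smarter.

true_value = 0.5
# Synthetic noisy 1D "image": every pixel samples the same signal.
noisy = [true_value + (0.3 if i % 2 == 0 else -0.3) for i in range(64)]

def box_filter(img, radius=2):
    # Average each pixel with its neighbours (clamped at the edges).
    out = []
    for i in range(len(img)):
        lo, hi = max(0, i - radius), min(len(img), i + radius + 1)
        out.append(sum(img[lo:hi]) / (hi - lo))
    return out

def mse(img):
    # Mean squared error against the known true signal.
    return sum((p - true_value) ** 2 for p in img) / len(img)

filtered = box_filter(noisy)
print(mse(noisy) > mse(filtered))  # True: filtering reduces the error
```

The trade-off, of course, is that naive filtering also blurs detail; the whole point of the AI denoisers is to remove noise while keeping edges.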
As you can see in the demoed video, GPU path-tracing still requires a lot of computation for a final image, but it is awesome for static light baking. However, using GPU raytracing for secondary rays provides practical real-time high-quality effects such as area-light shadows, AO, reflections (even with multiple bounces), etc.
The Unreal demo is impressive, if you and those who buy your game/interactive experience can afford a $60,000 system (actually it is now $49,900 - over a 25% discount, so hurry).
If you wish to make a Hollywood movie, it could help for light previz - or not - I do not really know.
For a low-budget movie, I doubt it is usable; for machinima etc. it is overkill.
I think that both players (Unity, Unreal) are stretching the idea of using a game engine for producing movies. They might fill some niches (low-quality, low-budget short indie films) and they might convince ($$$) some filmmakers to use the engine to produce a short film, but in terms of image quality (still and motion), pipeline integration and scalability, game engines suck compared to any DCC; even Blender's renderer Cycles is much better in these areas than Unity/Unreal.
The Octane 4 presentation just ended a few minutes ago and it was said that currently they have unbiased path tracing running at 1fps with AI lighting and denoising. They plan to have it run at 30fps by the end of the year.
Did they say on what hardware?
Full presentation on the Star Wars Unreal demo. Starts at 1:42:46.
I think it was two high-end cards.
We're rapidly reaching the point where it's no longer relevant because realtime is simply good enough.
It depends on the context actually.
For realtime applications:
Every year we see impressive demos - this one is the most impressive, hands down, from the tech enthusiast's POV.
I ask myself: what can I practically do with this tech? Nothing really.
For realtime interactive applications, we are at best 6-8 years behind the point where such high-spec tech is available, affordable and widespread. From my perspective, this tech is impressive but without any practical application.
Let's consider the HD pipeline demo - Book of the Dead - as another example.
It has the merit of running - now - on a PS4 Pro.
It is not as impressive (I find it very impressive compared with what else is available for today's tech), but it runs on today's consoles and high-end PCs.
What can I practically do with this tech, now? Nothing really.
It is interesting for big studios who create AAA games - it will be production-ready in one year or so. It might be useful in a year or so - or not; content is king, and making AAA content is out of the reach of most indies. Just consider the cost of proper motion capture technology - iPhone X, Perception Neuron etc. are cute but not production-ready; you need a real motion capture studio plus actors to achieve this level of quality.
Game engine vs renderer - game engines evolve, and so do DCC renderers. I believe that CPU and GPU DCC renderers will continue to offer better image quality, pipeline integration, scalability and user base. Game engine realtime rendering is just a GPU renderer with some clever compromises which make it "good enough" for e.g. cinematic sequences in games, machinima movies and so on. But films? DCC renderers will be better, and they evolve fast too (Blender Eevee, Cinema 4D and Modo ProRender integration, Maya Hardware Viewport 2.0 as examples of realtime previz, Houdini OpenCL solvers for GPU-accelerated VFX, Octane Cloud Build for scalability) - all of this happens now and will be more mature as time passes and hardware evolves.
DCC programs spend all their R&D budget in this direction; it is their main source of income. Game engines use tech demos as bait to impress their audience and will continue to do so - congrats to Unity for making demos that can actually be used in production.
That shiny $60,000 NVIDIA system will be used much more often with DCC apps than game engines in the coming years.
In a nutshell - I believe that DCC renderers will continue to be not just "good enough" but "better". Or not. We will see.
DCC - digital content creation, aka Blender, Maya, Houdini, 3ds Max etc.
So in a couple of years when we have raytraced 8k VR will we still need or want reality?
How come Blender, an open-source DCC, is the closest to having realistic GPU rendering that you can use right now, while all of Autodesk's ripoff subscription-based acquisition products have nothing comparable built in? Weird.
I think Blender is just pushing for PBR rendering but going through a lot of changes, as it even has a built-in game engine now.
What about Blender's Cycles? Isn't it a GPU-based renderer?
I believe Cycles can do GPU rendering alongside CPU as a hybrid. And Eevee is purely GPU, without the advanced stuff Cycles can support...
Either way, better than Arnold or what was mental ray..
It's just sad most of the Blender changes don't come along the lines of making the workflow and UX less rubbish.. the built-in GPU rendering support is simply a direction I'd expect a company like Autodesk to have been investing in a long time ago. I'm genuinely wondering about Autodesk; they've barely done anything really useful for their acquired products in years.
"changes as it even has an in built game engine now."
They can throw that away.. unless it supports some mainstream language like C# or javascrap and its web market, it's gonna be pretty useless next to game engines like Unity.. a bit like Stingray's yuck Lua scripting.. I think Blender uses Python, another poky language.
Virtual reality needs the other three senses more than it needs a very high resolution.
Can't eat in VR.
I think this has just changed: Raytracing Performance; NVIDIA Gamescom Keynote on Twitch, 2:18:22.
With the 2070, 2080 and 2080 Ti, while definitely still "early adopter" and "high end gamer", this technology is now landing at consumers.
I really hope Unity will properly support us in using this tech. The next few years will be crazy!
Amazing! Or not?
My first reaction was: this is amazing! These RTX cores must be some ASIC-like, highly specialized pieces of silicon that do one thing very fast: raytracing. But they are exclusive to the new RTX and Quadro NVIDIA cards.
So we have a new generation of Nvidia cards that
- are 50% more expensive
- have a new VR port (no compatible VR headsets announced yet, VR is not dying according to HTC)
- will probably be 10-20% faster in gaming (by the "old" fps metrics used in the thousands of existing games)
- do raytracing (remains to be seen how well)
They appear in a post-mining-craze period, when the second-hand market is full of cheap Pascal and Vega cards. There are 11 or so games that will support them, and most of them just use some new antialiasing algorithm.
There are piles of unsold Pascal cards due to recently increased production.
In a year, 7nm GPU cards will appear, probably with more significant fps gains.
NVIDIA RTX cards will sell - but the number sold will not be significant enough for mass adoption by non-sponsored devs.
AAA games, where raytracing might mean something, are console-first - no RTX in consoles until the next generation, and an AMD-to-NVIDIA shift by Sony/Microsoft is unlikely to happen in the next generation.
Chances are Unity will eventually support this RTX thing just to be on the hype train, but I doubt it will change anything for anyone. Besides YouTubers. They will have a nice time with these - free for them - cards.
My Pong clone will wait until 5nm GPUs arrive.
The tech is just not ready for it - yet.
Only if you compare them to the current prices. The GTX 1080 had an MSRP of $699 at launch and only became $499 when the GTX 1080 Ti was launched.
At least one or two YouTube personalities had the opinion that the series is basically a refresh and I have to admit it feels very much like they just made minor adjustments to Pascal, added support for GDDR6 (which isn't hard when you consider they've already had to support at least three different memory standards), and called it a new series.
Last time Unity discussed their GPU-based progressive lightmapper it was within the context of discussing AMD Radeon Rays which is the competing raytracing technology. Having greatly improved performance in heavy tasks will be a boon to content creators and it's definitely going to have a place in the industry. How much beyond that remains to be seen.
The big difference is that AMD Radeon Rays is device-agnostic - i.e. it runs on AMD and NVIDIA. This makes it a more suitable choice for a company like Unity, since game developers have either NVIDIA or AMD. Also, unless AMD decides to make a similar hardware implementation, i.e. specialized cores that significantly accelerate the raytracing process, it will be much slower than RTX - like 8 times slower.
The "problem" with RTX is that it will only run on these cards - locking the interested suppliers to a NVIDIA GPU, like CUDA did before for compute and tensor cores for AI. While these two technologies targeted a specialized demographic that could justify paying the premium because it was actually cheaper than the CPU equivalent alternatives, the current demographic aka gamers is very different.
Practically, gamers will choose between:
- fewer reflections, less accurate shadows and a couple more jaggies in 11 games (plus a few more later)
- some hundred dollars.
Btw, does anyone else remember that some months ago many tech reviewers complained about NVIDIA asking them to sign contracts that forbade them from saying things that might put NVIDIA products in a bad light if they wanted to continue receiving free samples? Strange timing.
RTX can be very important for lookdev - rendering - DCC in general, it is an amazing technological achievement.
NVIDIA, send me 4x 2080ti for free and I will become a fan too.
They are not 50% more expensive. For example, the RTX 2070 is supposed to outperform the GTX 1080ti, but is going to sell for less than the GTX 1080ti has been selling for. The RTX 2080ti seems priced to replace the Titan Xp instead of the GTX 1080ti, but it is also supposedly much faster than the Titan Xp, so that price may be justified.
As for the performance relative to the previous generation, I wished Nvidia had posted more details about that. We won't know actual performance with existing games until hardware reviewers have a chance to post some real numbers. You are saying 10-20%, and I think it will be higher than that. I am guessing the RTX 2080ti will perform twice as well as the GTX 1080ti in existing games. If you are correct and the cards only offer a 10-20% performance bump for existing games, then the cards will be doomed in the marketplace. The market is expecting a large performance improvement, and 10-20% would be a fairly trivial increase.
I'd be shocked if that were the case. The GTX 1080 Ti has 11+ TFLOPS and the RTX 2070 has 7.5 TFLOPS. Only way it would outperform it is if the GTX 1080 Ti were being held back by memory bandwidth.
The RTX 2080 has the best chance and it's slightly behind too.
I agree. On paper, the RTX 2070 beats the GTX 1070, but probably not the GTX 1080 or GTX 1080ti. I am very interested to see actual performance in existing games comparing the generations of cards. It will be interesting to see how much of a difference the extra memory bandwidth makes.
Oh that's where you're wrong! All the senses can be fooled, you just need to develop the technology! Even eating in VR has already been done
I really love VR as a medium but we have to be careful. This raytracing technology is awesome now, I really hope they implement it in Unity, but in the future we need to be careful
That's just easy ranting.
The new VR port is just an addition; it doesn't really matter if you have a non-VirtualLink VR headset.
Raytracing is a step forward in photorealistic graphics. Of course it also helps the company selling it, but that's just how capitalism works...?
It's ok if you don't care for a new rendering technique that adds a lot compared to rasterization, but complaining about the pricing doesn't really make sense, since they've developed a new chip with a different architecture specialized in raytracing. They spent a lot of money to build that, and now they get to ask for your money if you like the new juicy features; that's just how it works.
My text walls are not so easy to write and contain some arguments. Easy ranting would be something like "RTX suckszzzzzz".
I understand how capitalism works - like everyone does. My post contains criticism regarding the cost-performance ratio of these new cards, which is debatable, not economic theory.
The new VR port is a feature that adds value to the product. That value becomes less if you take into account that, by the time new VR headsets that take advantage of this port appear, a new generation of 7nm GPUs might appear as well.
I do criticise what I care about.
I believe that raytracing is not here yet (<60fps at 1080p on a $1,199+ card, albeit with early drivers and game builds).
The RTX raytracing implementation adds to the player experience (eyes need candy), but the frame drops at 1080p are significant.
DLSS is interesting (I realised today it is supersampling instead of antialiasing - sorry for the misinformation above); guessing pixels might or might not convince gamers.
The RTX seems like an overpriced but bold half-step towards raytracing; I hope that optimized drivers and game updates change my mind along the way.