Discussion in 'Graphics Experimental Previews' started by sybere, Oct 13, 2018.
Is there a solution for GrabPasses, or another workaround to replace them, in Shader Graph or in Amplify Shader Editor?
Yes, I'm hoping for a Grab Pass solution for HDRP too.
A Grab Pass capability soon would be great.
There is no grab pass in HDRP; the Lit shader already has built-in support for rough refraction.
@SebLagarde, in light of this, I was wondering if there are plans to support refracted objects appearing through other refracted objects in HDRP. I realise the back object can have Pre-refraction enabled, but then it isn't refracting itself. Currently this effect is possible with Amplify when not using HDRP (I believe it uses the Grab Pass for this), but as you can see in the second image (unless I'm doing something wrong), the built-in refraction in the HDRP/Lit shader doesn't support this?
Example using Amplify Shader - You can see the back object through the front object and both are refracting
Example 2: HDRP Lit shader - Rear object is invisible through the front object.
Is this possible, will it be supported, or are there any workarounds for it?
This is now fixed in HDRP 4.8 by selecting the "appear in refraction" option in the HDRP/Lit shader. Thank you!
>by selecting the appear in refraction option
FYI: this functionality has been there for a long time, since 2018.2. Not sure why you weren't able to make it work before.
There is no grabpass in HDRP (it is an extremely expensive operation, and we decided to keep control of it for performance reasons).
Note that we have exposed a Scene Color node in Shader Graph (it currently has a remaining issue that will be fixed in 4.9.0).
Note that I don't know what Amplify is using, but for a refracted object to see another refracted object requires two resolves of the scene color (and if you handle rough refraction, it requires convolving the color buffer into mips, i.e. two "grab passes"). In HDRP we decided to offer only two options (which is already expensive): rough refraction and distortion. (Refraction mimics the properties of the material, with IOR and absorption; distortion is an artistic effect applied to the whole screen and all objects.) The render passes in HDRP are like this:
Render transparents that can appear in refraction
Resolve the color buffer into a color pyramid (required to handle objects that do refraction)
Render regular transparents
Resolve the color buffer into a color pyramid (for the distortion effect)
Render distortion vectors
Each group of transparents (before-refraction and regular) has its own sorting, and the first group is rendered before the other. Then distortion is applied on top of everything (an object performing distortion will distort itself).
This is the decision we made for HDRP to keep control of performance (i.e. to avoid artists making unreasonable choices). It may not fit a particular project. (Also note that most engines perform only one resolve of the color buffer, as it is costly.)
It means that we don't support a refracted object seen through another refracted object, as in the example you show, but we do support distortion on top of a transparent object seen through a refracted object.
In HDRP you can disable options like distortion and refraction to remove those passes (note that the last color pyramid is reused for SSR next frame).
With transparents you will ALWAYS have sorting issues, and artists will always require a case we don't support. The best option in that case is to add an additional resolve and an additional group, or to reverse the order of passes, if the feature is important for your project.
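The pass order described here can be sketched as pseudocode (the names are illustrative, not actual HDRP source):

```
// Illustrative sketch of the HDRP transparent/refraction frame order.
RenderTransparents(group: BeforeRefraction);
pyramid = ResolveColorPyramid(colorBuffer);  // mips are pre-blurred copies, used for rough refraction
RenderTransparents(group: Regular);          // these sample the pyramid, so they "see" the first group
pyramid = ResolveColorPyramid(colorBuffer);  // second resolve, also reused for SSR next frame
RenderDistortionVectors();
ApplyDistortion(pyramid);                    // full-screen: a distorting object distorts itself too
```

This makes the limitation visible: the regular group samples a pyramid resolved before it was drawn, so a refracted object can never see another object from its own group.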
Thanks for the reply, that makes sense.
I like how, even in the worst case, HDRP finds a way to make the most of the data, for example reusing it next frame.
@SebLagarde Will it be possible to access the convolved mips in the Scene Color node? I want to blur by depth in a water shader. Basically like the distortion blur, but without the self-distortion/blurring that makes it limited in usefulness.
Hi, we have added this feature request to our todo list.
Question: is there a reason not to use refraction for this? Refraction does what you want. Or is it because you prefer the distortion vector map instead of normal map + IOR + absorption for controlling the distortion?
Thank you Seb, if I can get it to work using the refraction feature that will be great. Last time I tried I couldn't manage to get the result I wanted, but I'll try again and let you know!
Any news about how to replace GrabPasses? I would like to upgrade my shaders to use HDRP
Hi, I have answered here: https://forum.unity.com/threads/hdrp-shader-grabpass.568585/#post-4121446 — you have Refraction, Distortion,
and the Scene Color node in Shader Graph.
Can we replace grabpass with a custom-rendered RenderTexture, as in the standard pipeline?
How can we grab the correct texture to do refraction in HDRP inside a shader?
How is it possible to get a refraction texture to feed our own shaders in HDRP?
Something like camera.Render() in previous versions.
Flexibility, mostly. Also, the way refraction works is kind of weird: it shifts the entire refraction (based on the mesh normal?), causing empty areas at the screen edges (a vignetting effect) or just a cutoff at the bottom of the screen.
Using Scene Color we avoid all of those artifacts, and as a bonus we can do depth-based color.
The only problem with this is that there's an artifact at the sides of the screen, a black area; this also happens with the Scene Depth node.
Also, currently I have to connect it to the emission output, which means the distortion can sometimes become brighter or darker when using automatic exposure, unlike screen-space refraction, which stays the same.
It would be great if there were a solution for those problems. Oh, and mind you, all of my cases are about creating a water shader.
Hello, artist here!
I swear to take responsibility, feed it well, and make fewer unreasonable choices.
So please make this a toggle option, just like the other passes in the SRP settings!
Jokes aside, after all this time promoting Unity for animation, I find this decision very strange.
There are those who don't use Unity for real-time rendering, and in this case it limits a lot of things that are very hard to fake (like blend mode stacking, or rooms with huge wet/frosty transparent glass). (It would be strange if I had to render that specific scene in other software, then render the rest in Unity.)
In most cases there won't be any need for this, but this kind of extreme rendering quality is exactly why HDRP is separated from URP.
Even in real-time cases, looking back at the legacy renderer that supported this feature, I haven't seen any artist abuse the Grab Pass feature and render a whole game unplayable.
With hardware and graphics cards improving every year, this won't even be a problem in a couple of years.
Maybe you can start by believing that your core customers are sensible, responsible adults.
There are specific use cases for this, and those who specifically look for them understand the consequences well enough.
Let us take the gun and shoot our own foot if we must.
You don't democratize by taking away choice. Democratizing means giving your community the freedom to choose right or wrong for themselves.
That is their decision and voice to make.
You can grab the buffers for colour etc. and use them, but SRP in general doesn't have grabpass, and nor do AAA games in that same way, as far as I know. The grabpass concept itself is flawed and slow: it does a CPU copy of the texture and stalls the pipeline. There are much better techniques than grabpass.
You might for example want to copy a buffer at some point and re-use the contents?
Tell Unity what your goal is, not how to get there, so it can be solved in a better way than grabpass.
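As one concrete example of "copy a buffer and re-use the contents" in the built-in pipeline, a CommandBuffer can snapshot the camera target once per frame instead of doing a per-object grab pass. A minimal sketch; the event choice and the `_SceneSnapshot` texture name are illustrative assumptions, not HDRP API:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Built-in pipeline only: copies the current camera target into a global
// texture once per frame, after opaque rendering, so transparent shaders
// can sample it instead of each issuing its own grab pass.
[RequireComponent(typeof(Camera))]
public class SceneSnapshot : MonoBehaviour
{
    CommandBuffer buffer;

    void OnEnable()
    {
        var cam = GetComponent<Camera>();
        int id = Shader.PropertyToID("_SceneSnapshot"); // illustrative name
        buffer = new CommandBuffer { name = "Scene Snapshot" };
        buffer.GetTemporaryRT(id, -1, -1, 0, FilterMode.Bilinear); // -1 = camera pixel size
        buffer.Blit(BuiltinRenderTextureType.CurrentActive, id);
        buffer.SetGlobalTexture("_SceneSnapshot", id);
        cam.AddCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, buffer);
        buffer.Release();
    }
}
```

One copy shared by all objects is exactly the "keep control of it" trade-off discussed in this thread: cheaper than N grab passes, but the objects won't see each other in the snapshot.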
BTW, this stuff has absolutely nothing to do with democracy, and Unity doesn't mean it the way you think: it has only ever been a business democratisation, enabling you to use an engine that cost millions to build and still make money without paying Unity a dime. It doesn't mean a vote on features; although Unity does very much listen to and consider feedback, the ultimate result has always been their decision.
AAA games do not, but this is not about AAA games. Grabpass is flawed and slow, but not for animation rendering, which doesn't need to render every frame in real time. Sure, it's not the most efficient solution, but it is the simplest, most straightforward one that gets the work done for niche purposes. (It can be toggled off for games, so it wouldn't matter anyway, right?)
My goal is to have multi-layered transparent blend modes and multi-layered refractive materials (frosted glass). I love Shader Graph and the new VFX Graph, and I would prefer to stay with SRP.
There is no graphics engineer on the team. In this case, should I regress to the legacy renderer? That's like telling us to drill the wall with our fingers just because the mechanical drill might injure us in a very niche case. This is the most straightforward way; there is no alternative without hiring a graphics engineer.
Now you will tell me to just not have them in the scene. Yes, sure, but it's still wrong to make the engine less powerful without providing alternatives (Scene Color does not satisfy the need, because of the multiple transparent objects in this case), just because an artist might make "unreasonable" decisions. This is discriminating against artists. Why not disable multithreading just because some programmer might make an "unreasonable" decision and crash the game?
Unless you provide a way to do it, my point still stands. Unity now caters to the film & animation market, and this kind of niche use case should be considered. It is no reason to make SRP less powerful.
I can create a different scene for the animation, but Unity should at least try to make this asset (Blend Modes) work in SRP without limitations. Multiple blending is extremely useful in a lot of cases.
LWRP is supported with some limitations due to the lack of Grab Pass and multipass shaders in general, namely:
`Self With Screen` mode will always work with `Unified Grab` optimization enabled (instances of the effect won’t “see” each other).
Masking is not supported.
Camera (post-processing) effects are not supported, but you can fake them with a full-screen UI image using the blend mode effect.
UI objects will correctly render only when either `Screen Space - Camera` (with a selected camera) or `World Space` canvas render modes are used for the parent canvas, otherwise the screen texture will render upside-down.
Only the special LWRP-compatible shader families are supported (more info below).
Source : https://docs.google.com/document/d/1p9Q-fA3WeJX7la6ZkcsXEp6tKn09D08YT3FEv0Fct_0/edit#
Now, imagine rendering animations without post-processing effects just because your scene uses a stacked-blend special effect at its core. Ridiculous? Yes.
I see, yeah, it's for an asset. But Unity just said at Unite that most Editor users are artists, and there are already graphs for scripting, VFX, animation, shaders, and soon audio, I guess. So please don't make unfounded and utterly illogical statements about Unity not caring about artists.
LWRP/Universal has a very, very, very specific job: run on ALL machines Unity supports. It's not, and never will be, changed to be film-centric (that's HDRP's job).
But your pain can be eased, and Unity will read this. I think you can probably achieve it today using the camera-layering equivalent to capture the frame: basically render the frame to a texture first, then do something with that texture and render something else.
As far as I can tell, Universal can do this, but the asset author can't?
I honestly prefer for Unity to focus on the game and real-time rendering market with a highly performant render pipeline, as Unity is a game engine.
Thank you, yes, it might be achievable in dirty and gruesome ways, but that makes it even less efficient than just using Grab Pass in the legacy renderer. I do indeed hope Unity will reconsider the inclusion, even if it's a niche use case.
It is not for an asset specifically, but the best implementation, which involves creating a custom pass that the asset uses, has all the shortcomings listed above in SRP. I am using HDRP for rendering the film; that is why I ended up in this topic.
As for supporting all devices, a quote from the other thread.
And for @Reanimate_L
It shouldn't even affect games or the real-time rendering side, because it can be toggled just like Opaque Texture & Depth Texture. It will have no impact on performance when toggled off in the SRP settings.
There are some things that can only be done this way without any shortcomings other than performance, and even that is pretty negligible, though it may seem very inefficient compared to other techniques, for getting what is rendered behind an object (both opaque and transparent) dynamically, on demand.
I spent quite some time with the LWRP several months ago, and as far as I can tell, it's impossible to make a full-fledged replacement for grabpasses with it. The prebuilt render pipelines have very limited customization options compared to the current rendering system, and it's by design, I suppose:
Sure, you can build anything you want with a custom pipeline, but that's not an option for an asset store package.
But you can simply render to a texture and use that, which is something I've done since 2010 in Unity to replace grabpass on mobile, where it is cripplingly slow.
You still get the full set of drawing up to that point, so you can do anything with a manual "grabpass".
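The manual render-to-texture replacement described here can be sketched like this; the component, the `_GrabbedTex` property name, and the one-grab-per-frame design are illustrative assumptions:

```csharp
using UnityEngine;

// Attach to the main camera's GameObject. Renders the scene from a
// secondary (disabled) camera into a RenderTexture before the main
// camera draws, then feeds it to a material as a manual "grabpass".
public class ManualGrab : MonoBehaviour
{
    public Camera grabCamera;       // disabled camera matching the main one
    public Material effectMaterial; // material that samples the grabbed texture
    RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(Screen.width, Screen.height, 24);
        grabCamera.targetTexture = rt;
        grabCamera.enabled = false; // we render it manually
    }

    void OnPreRender() // runs just before the main camera renders
    {
        grabCamera.Render(); // one explicit "grab" per frame
        effectMaterial.SetTexture("_GrabbedTex", rt); // illustrative name
    }

    void OnDestroy()
    {
        rt.Release();
    }
}
```

Note this has the same limitation discussed below: everything drawn by the grab camera shares one texture, so objects using it can't "see" each other through it.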
That is roughly what I'm currently doing for the limited LWRP support, but grabpass is not just about saving the screen texture: it allows doing that within a shader pass, which enables the aforementioned "stacking", where multiple objects using a shader with grabpass "grab" each other. LWRP requires a static setup of custom passes in a config asset, so you can grab the screen, save it to a texture, and then render the objects using that screen texture, but the objects won't "see" each other on that texture.
Is SRP using multipass? They are mostly targeting single-pass shaders now, AFAIK.
Right, multipass shaders in general are not supported, which is also a major limiting factor. The customization is built around adding custom pre-defined render passes at specific points in the pipeline, e.g.: https://gist.github.com/Elringus/69f0da9c71306f1ea0f575cc7568b31a
Grab Pass needs to exist, at least as an option for advanced users.
Absolutely agree. At the very least, it should become available before the old render system gets deprecated; otherwise, users will completely lose a way to achieve some of the functionality, without any workarounds. Removing features is bad, no matter how good the new stuff is.
I don't know what they fixed there, but I still have this kind of refraction, no matter what settings I try:
Fact is, HDRP is a new renderer because the built-in pipeline cannot scale. It cannot keep up precisely because of that line of thinking and its conclusions.
So you might want to revisit that line of thinking. Also, you can achieve this without grabpass and its horrifying design, which no AAA game I know of uses (it's simply not a good idea to abuse the CPU/GPU like that).
What can be done, though, is to look for alternatives, and I'm fairly sure you can copy the frame, render more, and still get better performance than grabpass.
The fact is that a feature was removed and there is no workaround (copying frames has nothing to do with this). I won't argue that grabpass was ideal for games, but don't forget that Unity is not just for games; so maybe it's not me who has to revisit their line of thinking.
Where can one find the "appear in refraction" option? I'm using HDRP 6.2.9.
On the other hand, Unity is not focused on multimillion-dollar AAA titles, and small studios can't afford to write renderers from scratch with pipelines to get what was readily available before; so Unity has to decide whether it wants to discourage all indies with the new extreme complexity, or keep them.
As always, the truth is somewhere in between: pipelines may be good, but the way they were introduced felt rushed, as if Unity shipped them two years before they were ready for any serious use.
So, what about this "appear in refraction" option?
Use the drop-down to set the rendering pass that HDRP processes this Material in.
• Before Refraction: Draws the GameObject before the refraction pass. This means that HDRP includes this Material when it processes refraction. To expose this option, select Transparent from the Surface Type drop-down.
• Default: Draws the GameObject in the default opaque or transparent rendering pass, depending on the Surface Type.
• Low Resolution: Draws the GameObject in half resolution after the Default pass.
• After post-process: For Unlit Materials only. Draws the GameObject after all post-processing effects.
In Shader Graph you can sample the Scene Color node with mips and do your own refraction effect if you want.
Any transparent object in the "Before Refraction" rendering pass will be visible in the refraction; otherwise, it will not be visible.
Hope that helps!
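As a sketch of what "sample the Scene Color with mips" can look like in an HDRP Shader Graph custom function node: pick a color-pyramid mip from a roughness value. `SampleCameraColor` and the mip count come from HDRP's internal shader library and may differ between versions; treat this as an illustration rather than exact API:

```hlsl
// Rough-refraction-style sample of the scene color pyramid.
// Higher "roughness" -> higher mip -> blurrier background.
// Assumes HDRP's shader includes are in scope (custom function node).
void RoughSceneColor_float(float2 screenUV, float3 normalVS, float strength,
                           float roughness, out float3 color)
{
    // Offset the UV by the view-space normal to fake the refraction shift.
    float2 uv = screenUV + normalVS.xy * strength;

    // Pick a mip of the pre-convolved color pyramid from roughness.
    float lod = roughness * 6.0; // 6 is an illustrative mip-count upper bound

    // SampleCameraColor is provided by HDRP (name/signature may vary by version).
    color = SampleCameraColor(uv, lod);
}
```

Because the pyramid is resolved before regular transparents are drawn, a sample like this will not show other transparent objects unless they are in the "Before Refraction" pass, which is the behaviour described throughout this thread.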