Discussion in 'Graphics Experimental Previews' started by Tim-C, May 9, 2017.
It has threads here: https://forum.unity3d.com/forums/graphics-experimental-previews.110/
When can we expect another test release for 2017.1/2017.2? (The latest one is over two months old.)
So what's in the repository _should_ work; you just need to know the right Unity version to go with it.... I'm on vacation, but back at work next week. I'll do some tagging then.
2 New releases: https://github.com/Unity-Technologies/ScriptableRenderLoop/releases Thank you @Tim-C
@Tim-C In the LightWeightPipeline you were using a ComputeBuffer to provide per-object light list which is not the case anymore as this is done in the engine from what I understood, using unity_4LightIndices0.
Could you tell me which Unity version supports per object light list and is it planned to add this in the coming 2017.1 patches ?
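For context, a rough sketch of what consuming the engine-filled per-object indices could look like in a shader. Only unity_4LightIndices0 is an actual engine-provided constant here; the _AdditionalLightPosition / _AdditionalLightColor array names and sizes are purely illustrative, since the real LightweightPipeline names changed between revisions:

```hlsl
// Hedged sketch: evaluate up to four per-object lights via the indices the
// engine writes into unity_4LightIndices0. Array names are illustrative only.
float4 unity_4LightIndices0;           // filled per-object by the engine
float4 _AdditionalLightPosition[16];   // hypothetical light data arrays
half4  _AdditionalLightColor[16];

half3 EvaluatePerObjectLights(float3 positionWS, half3 normalWS)
{
    half3 lighting = half3(0, 0, 0);
    for (int i = 0; i < 4; ++i)
    {
        int idx = (int)unity_4LightIndices0[i];
        float3 toLight = _AdditionalLightPosition[idx].xyz - positionWS;
        half3 dir = normalize(toLight);
        half atten = 1.0 / (1.0 + dot(toLight, toLight)); // simple falloff
        lighting += _AdditionalLightColor[idx].rgb
                  * saturate(dot(normalWS, dir)) * atten;
    }
    return lighting;
}
```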
Hi, this isn't scheduled to land in 2017.1, as it requires a few extra engine changes (we need to be able to also provide reflection probe indices... and this is taking a bit longer as it requires a big refactor ;( ). If you want to revive the old code it will still work for you... it's just that the low end isn't really designed with compute in mind.
@Tim-C Thanks for the answer !
What will happen to Surface Shaders?
Back when we didn't use a custom render pipeline on our project, I liked the concept of having code that is easy to customize (example: adding normal maps without the hassle of pushing a 3x3 matrix through the vertex pipeline) and survives API updates. But in the current state of the SRP I don't see any place for surface shaders, or am I wrong?
We are investigating possibilities (including graphical tool solutions).
Graphical tool solutions seem rather fun! I believe some guy made an editor called strumpy in a parallel universe, many years ago
I think the surface shader abstraction has a lot of value. There are some things about the implementation I grumble about (magic keywords, changing vector spaces depending on code, etc.), but being able to abstract away the lighting into a simple wrapper struct of its inputs is extremely valuable.
At work, we rarely use surface shaders since we have our own lighting pipeline, and when SRP ships we will likely build our new pipeline on top of that. But we have multiple graphics coders and highly technical artists to support such a pipeline.
At home I write a lot of stuff for the asset store, and here is where having a general abstraction helps a lot. 5.6, for instance, broke my lighting pipeline for tessellated shaders in MegaSplat, because it's based on vertex/fragment shaders and most of the macros and variables were changed around. Since none of this is ever documented, I'll be digging through shader source for a week to fix this. On top of it, simply testing all the lighting modes Unity provides is a painfully slow process, and I'll likely end up with some issue that only happens in some combination of lighting settings. (I have actually spent more time on lighting issues than any other issue in the package; these only show up on tessellated shaders because I am forced to use non-surface shaders in this case due to some missing abstractions in the surface shader system.)
Surface Shaders, in contrast, just work through these changes. (MicroSplat only uses surface shaders for this reason, because anything else is a support nightmare).
So I think solving this type of an issue is pretty key for a wider adoption of SRP - without it, it will be too much of an investment for many users to make, and in many cases most of the shaders on the asset store won't work with an SRP pipeline without some way for the SRP pipeline to determine how the shaders should be translated. (Obviously there will be some SRP's where a translation isn't even possible, but I think for many uses it will be)
And while I love shader graphs, especially for teaching, they often produce horribly inefficient shader code, and I've never seen one which fully encapsulates what can be done easily in regular code. I easily see 2x speedups by converting shader graphs someone has written into hand written code.
So my suggestion would be to come up with a Scriptable Shader Pipeline. Basically, something which lets us easily parse and transform a surface shader like structure into the actual shader for a given SRL. A SRL could define the lighting input it expects (ie: like StandardOutput, etc), pass that to a generic vertex/frag helper function to fill out, much like the surface shader does now. It would also define which passes need to be generated from a given shader.
Then, build any graph based tools on top of that abstraction, such that shaders can be written in code or graph, and take advantage of the unique benefits each provide.
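To make the proposal above concrete, here is a minimal sketch of what such an abstraction could look like. None of these names are an existing Unity API; the struct, the function, and the properties are purely illustrative:

```hlsl
// Illustrative only: a pipeline-defined surface output struct that user
// code fills in, while the SRP generates the surrounding passes and
// lighting code. Nothing here is a real Unity API.
sampler2D _MainTex;
sampler2D _BumpMap;
half _Glossiness;

struct MySurfaceOutput
{
    half3 Albedo;
    half3 Normal;      // tangent space
    half  Smoothness;
    half  Occlusion;
    half  Alpha;
};

// The only code a shader author would write; the SRP would generate the
// vertex/fragment scaffolding and lighting passes around it.
void SurfaceFunction(float2 uv, inout MySurfaceOutput s)
{
    s.Albedo     = tex2D(_MainTex, uv).rgb;
    s.Normal     = UnpackNormal(tex2D(_BumpMap, uv));
    s.Smoothness = _Glossiness;
    s.Occlusion  = 1.0;
    s.Alpha      = 1.0;
}
```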
My problem is that I'm currently forced to use Amplify Shader Editor. Rather, I don't use it but the artist does. The reason for this is we have absolutely no time with such a tiny team, to roll our own shaders. This means any changes are costly in development time if we go with 3rd party solutions.
I don't agree that it's impossible to write a shader editor whose output is at least quite a bit faster than the existing surface shader design. I know there's a lot of heat against it. But I don't have time, and most customers won't have time either unless they've actually got a graphics person on the team or it's a tiny game.
But, I love speed. So, is it possible to have a visual shader editor but with the ability to replace large parts with custom blocks? Similar to SRP for shaders... for SRP. What a mouthful!
We built our art department with people who are comfortable with coding shaders as well as using graphs, and basically no longer use graphs at all. We ship on low-end mobile, so every bit counts. For us, everything starts with custom shaders, since we customize most of the pipeline top to bottom. Our lighting system for the Walking Dead game matches the Unity one nearly pixel for pixel, but is 30% faster on our mid-spec target platform. Basically, we've made this process an intrinsic part of how we operate, but I don't think most teams work that way; they are closer to what you're describing (shader graphs, shaders as customization instead of foundational).
I'm not against a shader graph- they are quite useful. But I don't want that to be my only interface to writing shaders (ala UE4) and I don't want to completely give up the lighting abstractions that are available in stock Unity, because writing raw vertex/frag shaders that support Unity's lighting feature is a freaking nightmare to maintain as Unity constantly changes things and never documents a single line of those changes. This becomes totally impossible in a SRP world, where you might have 20 more lighting pipelines in existence.
I have yet to see a shader graph which allows you to make the same kind of low level optimizations you can make in written shader code. I think it's possible, but would introduce a ton of complexity to the interface, which is one of the main things that makes shader graphs attractive.
Most shader graphs have a raw HLSL node, but that's not where I see most of the bottlenecks. Being able to control each stage of the pipeline, what gets passed between them, and easily refactor and reason about large chunks of code is where text-based solutions win. Graphs, IMO, are best used for the part of the pipeline that needs fast iteration and artist feedback, not as the framework itself.
In my view, this stuff should be built up in layers. The new scriptable asset importer being the lowest layer, the scriptable render loop being the next. Beyond that, a simple way to abstract lighting from the complexity of writing shaders. Beyond that, visual tools such as a shader graph or SRP graph can be built on top of those abstractions. (For instance, some engines like Frostbite have a graph based interface for writing scriptable render loops - no reason you can't add these on top of a lower layer).
Hi all. I've started experimenting with SRP and have some questions/requests:
1) How can I execute C# code after part of the render loop's commands and then continue executing graphics commands? (For example, I want to render some shader passes, then dispatch a compute shader that fills a buffer. After that I want to read the buffer data back and build the next commands based on it.) Yes, I can use camera.Render() inside the render loop script... but that's a very weird solution... I'm writing a custom pipeline and... can't do two dependent passes in a normal way.
2) How can I queue the standard Unity Camera callbacks (like PreRender, PostRender, PreCull, OnRenderImage) into the graphics command queue? For post-processing it would be very useful if I could just tell the render loop: execute all OnRenderImage callbacks of this camera with this render target. Yes, I can call the needed methods on components by hand... but that's not always possible: I use some third-party assets that register camera callbacks via lambdas or private methods, and they don't always come with source code.
3) How can I exclude a camera from the render loop? It would be nice if I could set a render loop asset per camera, in the old manner.
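Regarding question 1, one workaround (a sketch only, written against the 2017-era experimental API; `fillShader`, `kernelIndex`, `groupsX`, `fillBuffer`, and `elementCount` are assumed to exist, and whether multiple Submit() calls per frame behave well may depend on the Unity version) is to flush the context, read the buffer back synchronously, then record the dependent commands:

```csharp
// Hedged sketch: flush queued passes, read a ComputeBuffer back on the
// CPU (a blocking stall), then build the dependent commands from the data.
var cmd = new CommandBuffer { name = "Fill buffer" };
cmd.DispatchCompute(fillShader, kernelIndex, groupsX, 1, 1);
context.ExecuteCommandBuffer(cmd);
cmd.Release();
context.Submit();               // force the dispatch to actually execute

var results = new uint[elementCount];
fillBuffer.GetData(results);    // synchronous GPU -> CPU readback

// ...record the second batch of commands based on `results`,
// then call context.Submit() again.
```

Note GetData is a hard sync point, so this costs a pipeline stall per readback.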
I downloaded the ZIP file and put everything into the Assets folder, set the linear rendering mode, and also downloaded PostProcessV2.zip and extracted it into Assets. But I get 49 errors after opening the project, and all of the built-in scenes show purple materials. How do I solve this?
-What's your Unity version?
-Have you tried re-importing PostFX?
-Does deleting the PostFX folder fix the purple materials problem?
And please post in English.
Thanks. I tried to re-import PostFX. It doesn't work. My version is Unity 2017.1.
Did you download SRP from the GitHub releases page?
Yes, I downloaded the ZIP file from GitHub. Here is the address: https://github.com/Unity-Technologies/ScriptableRenderLoop
I chose the master branch.
It won't work; they are already using 2017.3, and the developer said it's not going to work with any public Unity version. You can read it here: https://github.com/Unity-Technologies/ScriptableRenderLoop/issues/386
You can still download 2017.1 release from here: https://github.com/Unity-Technologies/ScriptableRenderLoop/releases/tag/unity-2017.1
And the newest one for 2017.2 beta here: https://github.com/Unity-Technologies/ScriptableRenderLoop/releases/tag/unity-2017.2b4
Expected first release is 2018.1.
So this means the official versions of Unity are not supported at this stage?
Until 2018.1, yes.
I downloaded the 2017.1 version of SRP and opened it with 2017.1f3, and there are two other errors: the namespace UnityEngine.Experimental.PostProcessing in HDRenderPipeline.cs can't be found.
I changed it to UnityEngine.Rendering.PostProcessing, which turned the two errors into warnings. But the HDRenderPipeline scene is black and I can't see anything.
OK, I fixed it by removing the PostProcessLayer component from the MainCamera in the HDRenderLoop scene.
Go to Edit->Project Settings->Graphics and change your Scriptable Render Pipeline Settings asset.
Does the scriptable render pipeline allow specifying custom vertex data somehow? The use case I'm thinking about in particular is for trying to pack more data into smaller structures. I have a case where I have a bunch of 2d data that I need to pass to the shader and the 2d data is all pretty low precision (doesn't need more than 13bits per axis).
I've tried packing the position/uv/normals data all into the Vector3 vertex buffer in meshes, but there are two problems I run into with that: first, packing into floating point is pretty math-intensive, so a bit slow; second, Unity culls out data because the vertex data can end up outside the viewing frustum (and there are possibly other weird side effects that I don't know about).
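As an aside on the packing itself: two 13-bit values fit losslessly in a 32-bit word, which can then be reinterpreted as a float for the vertex stream (packing into a float's 23-bit mantissa directly cannot hold all 26 bits). A sketch of the CPU side, with the caveat that arbitrary bit patterns reinterpreted as float can form NaNs/denormals, and not every GPU path is guaranteed to pass those through untouched:

```csharp
// Hedged sketch: pack two 13-bit unsigned values into one 32-bit word and
// reinterpret it as a float for a vertex channel. In the shader, recover
// the bits (e.g. via asuint) and undo the shifts. No Unity dependency.
using System;

static class Packing13
{
    public static float Pack(uint x, uint y)
    {
        uint packed = (x & 0x1FFFu) | ((y & 0x1FFFu) << 13);
        return BitConverter.ToSingle(BitConverter.GetBytes(packed), 0);
    }

    public static void Unpack(float f, out uint x, out uint y)
    {
        uint packed = BitConverter.ToUInt32(BitConverter.GetBytes(f), 0);
        x = packed & 0x1FFFu;
        y = (packed >> 13) & 0x1FFFu;
    }
}
```

This keeps the per-vertex math down to shifts and masks instead of float arithmetic.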
Nope, at least not in the current API. UT has been saying this will come, but IMO it's not related to SRP; it would be more of a Mesh/Renderer refactor.
Maybe have a look at ComputeBuffers if your target platform supports them?
Set the Mesh.bounds yourself maybe?
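For reference, overriding the bounds is a one-liner (a sketch; `mesh` is your Mesh instance, and the size just needs to conservatively cover wherever the packed vertices can end up after decoding):

```csharp
// Hedged sketch: give the mesh a conservative bounding volume so Unity's
// frustum culling doesn't reject it based on the raw packed vertex data.
mesh.bounds = new Bounds(Vector3.zero, new Vector3(1000f, 1000f, 1000f));
```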
Is the mesh bounds the only thing Unity looks at, then? I wasn't sure, since it seems to cull things even when you tell it not to generate bounds (an option in the SetTriangles function).
I'll try experimenting with ComputeBuffers, not sure if I can get away with only supporting platforms that support those, but perhaps I can have a switch for making those a bit more efficient if they do.
So far it seems pretty awesome; great to see an SSS shader provided now, and it works decently well.
I'm curious what happened to semi-transparent shadows, though. I see now that we amazingly have shadows cast onto semi-transparent objects, and those objects casting shadows, but was the cost of doing this losing semi-transparent shadows altogether? If the shadow intensity were adjusted by the surface transparency, like it is with the default Unity setup, this solution would finally be perfect for things like fur/hair cards and other neat effects.
Hey all, sorry for some silence here; we are pretty deep in 'getting stuff done' mode. From our side that involves:
BIG performance passes (many benefiting legacy rendering also)
UX / frontend work
Ability to customise inspectors based on which pipe you have active
Reflecting correct editor state when a pipe is active (hiding some quality settings / lightmapping settings)
In terms of using the pipeline we are maintaining an active and working branch that runs against 2017.3 here:
Please read the steps on how to use it here (do not use the git release mechanism). https://github.com/Unity-Technologi...ee/Unity-2017.3#how-to-use-the-latest-version
Are you on the Unity 2017.3 beta and want to use SRP? Good news: we now maintain an active branch on git.
How to use the latest version
The repository no longer contains a complete Unity project; instead it is meant to be placed in a sub-folder of the Assets\ folder of an existing Unity project. Make sure that your project uses linear color space (Edit > Project Settings > Player).
Perform the following instructions to get a working copy of SRP:
> cd <Path to your Unity project>/Assets
> git clone https://github.com/Unity-Technologies/ScriptableRenderLoop
> cd ScriptableRenderLoop
> git submodule update --init --recursive --remote
After you launch the project, go to Graphics Settings and select a render pipeline to work with. Currently I recommend starting with the Lightweight pipeline, as it's a little simpler and more mature.
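If you prefer doing that selection from editor code, the equivalent is roughly the following (a sketch against the 2017-era experimental API; namespaces moved between versions, and the asset path below is purely illustrative):

```csharp
// Hedged sketch: assign a render pipeline asset from script instead of
// through the Graphics Settings UI. The path is a hypothetical example.
using UnityEditor;
using UnityEngine.Rendering;

var pipelineAsset = AssetDatabase.LoadAssetAtPath<RenderPipelineAsset>(
    "Assets/ScriptableRenderLoop/MyPipelineAsset.asset"); // hypothetical path
GraphicsSettings.renderPipelineAsset = pipelineAsset;
```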
Getting an error: texture.imageContentsHash unrecognized
var hashTexture = texture as RenderTexture;
var hash = hashTexture.imageContentsHash;
Looks like beta 4 of 2017.3 which came out earlier today has a change that may explain & fix that error?
There is a release for publicly available 2017.3.0b2 on Github https://unity3d.com/unity/beta/unity2017.3.0b2.
To work with the latest GitHub Unity-2017.3 version that had the "imageContentsHash" error, you need 2017.3.0b4.
That build fixes an important cubemap-refreshing bug.
As a note: when we introduce a breaking C++ change for 2017.3 like this one, we now do a release, so there is always a public Unity version able to run a corresponding GitHub release (here, 2017.3.0b2).
I've been playing around with the latest version, and maaaan, the HD pipeline is complicated to set up. Also, the specular highlights tend to disappear after a while until the next reboot of the editor. Shadows are kind of buggy, too, no matter how low I set the bias, objects still seem to float.
Performance is great, though.
> HD pipeline is complicated to set up
Expected for now; everything is still experimental, and features coming in the next release of Unity will simplify things.
>the specular highlights tend to disappear after a while until the next reboot of the editor
This is not something we observe.
Any more information on this / repro steps? What kind of light, what kind of material? Which graphics card?
There is a debug view mode that shows the specular lighting only (HDRenderPipeline -> Debug window; choose the lighting tab and select lighting mode -> specular only). Does it disappear in this mode?
Also, on the HDRenderPipelineAsset, in the inspector you can disable compute rendering (uncheck tile/cluster). See if that brings back the highlights.
>Shadows are kind of buggy, too
Sadly true; it is currently being refactored.
>Performance is great, though.
On the GPU side it is average; it could be better.
On the CPU side it is not great, but that's down to the vanilla Unity rendering system. This may evolve in the future.
Just the SSS test scene, without any modification. The problem is present with all kinds of materials and lights. Disabling tile/clustered has no effect on it. I'm using an Intel HD 630 with the latest drivers. (I know it's not an ideal GPU for development, but until my new card arrives, I don't have anything better.)
I noticed that spot and point lights have a max smoothness parameter, which is quite nice, but directional lights are missing this option. Is it intentional?
Edit: replacing the HD render pipeline asset to an other HD render pipeline asset with the exact same settings fixed the highlights
I couldn't reproduce this behaviour ever since. Probably the pipeline asset was corrupted.
Can the developers share some information about rendering transparency in the HD pipeline? I'm dreaming about order-independent transparency and Fourier opacity mapping, or something like that.
Is there any documentation on how to write custom shaders for the HDRP? I would like to test adding wrinkle map support.
No OIT work is planned currently. Of course we are looking at it, but performance is a concern.
For FOM we do plan to implement it, but it requires changes on the engine side, so it will not be implemented for several months (and sadly you will not be able to do it yourself either, as currently you need to go through the DrawShadows call), unless maybe you use a render-to-texture placed at the location of the light...
You mentioned performance right now is very "average" since it needs engine-side changes, and I can completely understand this.
However, will the initial 2018.1 planned release of the HDPipeline at least reach parity with the current rendering pipeline? (and, is it still planned to release with 2018.1? I heard that somewhere but not sure how official that information was)
No documentation yet. Writing a shader in HD is complex.
The best approach is to duplicate the Lit.shader folder and remove everything you don't need / add everything you do.
The goal for the future is to rely on a shader graph (no ETA provided, no need to ask ). In your case of adding wrinkle map support, I suggest simply adding it to Lit.shader.
You can look at what is done for _SpecularColorMap in LitDataInternal.hlsl to see how to add a texture sample, track where all the values used (texture, sampler) are declared (in the Lit.shader properties, LitProperties.hlsl, and LitUI.cs), and add your texture the same way.
Then, where the normal map is sampled in LitDataInternal.hlsl:
normalTS = ADD_IDX(GetNormalTS)(input, layerTexCoord, detailNormalTS, detailMask);
Sample your wrinkle map weight and your stretch/squish normal maps, blend everything (in tangent space), and save the result to normalTS.
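Putting those steps together, a rough sketch of the blend. All `_Wrinkle*` names here are hypothetical additions rather than anything that exists in the pipeline, `uv` stands in for whatever coordinate layerTexCoord provides at that point, and a plain lerp is shown for simplicity (a reoriented-normal blend would preserve detail better):

```hlsl
// Hedged sketch: blend a wrinkle normal over the base normal in tangent
// space, weighted by a mask. _WrinkleMap and _WrinkleMask are hypothetical.
sampler2D _WrinkleMask;
sampler2D _WrinkleMap;

float3 baseNormalTS = ADD_IDX(GetNormalTS)(input, layerTexCoord,
                                           detailNormalTS, detailMask);
half   wrinkleWeight = tex2D(_WrinkleMask, uv).r;
float3 wrinkleTS     = UnpackNormal(tex2D(_WrinkleMap, uv));
normalTS = normalize(lerp(baseNormalTS, wrinkleTS, wrinkleWeight));
```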
When I said average performance, I wasn't comparing with vanilla Unity.
Compared to vanilla Unity I think we are way better on the GPU side (way more lights... though as there are also way more features and better PBR support, it is not always obvious). However, there are several places where we waste performance that we will improve in the future (unnecessary copies, slow RT clears, unoptimized code, etc.).
On the CPU side it is controlled by vanilla Unity. We should have parity or be better. But vanilla Unity is very slow.
No ETA (Joker here ).
What specifically do you think is slow? I know you're no fan of scripting/C#, but aside from that, what do you think is a bottleneck engine-side? Some specific mechanisms (culling, sorting, collating command buffers), the general architecture, what?
One thing that surprises me with SRP so far is most things still seem very much tied to the main thread, eg CullResults.Cull() is immediate (not a command), context.Submit() is blocking, etc.
So with that in mind I don't really understand how we can ever achieve proper world/render state double buffering so the simulation can go on its way for frame N+1 while the render thread and GPU finish frame N... doesn't feel very next-gen, or even last-gen.
Is it possible to expose the renderers that passed the cull test (aka FilterResults), including stuff like LOD level / screen coverage? I want to experiment with a texel-space renderer and I need this information to build an atlas of visible objects.
If this is the case right now, then perhaps would it be worth considering splitting CullResults into CullResultsImmediate and CullResultsDeferred? I never understood the rationale behind CullResults and then the filtering step. (I get why you would break it up into two steps internally but I don't understand why that is exposed.) What would make sense to me is having a list of CullingParams that can also filter based on Render Queue and passing that into CullResults.Cull and having a list of CullResults returned.
As far as exposing Renderers, why not an ExposedRenderer component of some sort? Or maybe a checkbox for existing renderers?
There are other weird quirks in the basic API, such as _Time being set up in SetupCameraProperties. All the changes that have been added in 2017.2 and 2017.3 make perfect sense to me, but I still find the basic stuff confusing; it has a lot of "quirky magic" that's been keeping me from comfortably diving in.
Are there planned changes for these things in the future?
[EDIT] the issue i was having is a non-issue, I made a dumb late night mistake...
basically was testing the culling planes but my debug drawing wasn't accounting for non-hits.
I'm attempting to convert my stencil portals to SRP and perform the culling using ScriptableCullingParameters.SetCullingPlane(), with clipped-down frustums for the portals being viewed through.
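For anyone trying the same thing, the shape of the call is roughly as follows (a sketch against the 2017-era experimental API; `camera`, `context`, and the precomputed `portalPlanes` array are assumed to exist):

```csharp
// Hedged sketch: override the culling planes with a portal's clipped
// frustum before running the cull. portalPlanes is assumed precomputed
// from the portal geometry as seen through the current view.
ScriptableCullingParameters cullParams;
if (!CullResults.GetCullingParameters(camera, out cullParams))
    return;

int planeCount = Mathf.Min(portalPlanes.Length, cullParams.cullingPlaneCount);
for (int i = 0; i < planeCount; i++)
    cullParams.SetCullingPlane(i, portalPlanes[i]);

CullResults cull = CullResults.Cull(ref cullParams, context);
```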