So I figured that Unity must internally keep some kind of structure tracking what is being sent to the GPU. Is this collection somehow accessible? I'd like to loop through all the renderers and set some material property blocks.
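For reference, the brute-force version of what I mean; a minimal sketch assuming a shader that reads a float property, with "_ObjectID" being just an example name:

Code (CSharp):
// Sketch: give each renderer in the scene a unique ID via a
// MaterialPropertyBlock. "_ObjectID" is a hypothetical property name.
var renderers = Object.FindObjectsOfType<Renderer>();
var block = new MaterialPropertyBlock();
for (int i = 0; i < renderers.Length; i++)
{
    block.Clear();
    block.SetFloat("_ObjectID", i);
    renderers[i].SetPropertyBlock(block);
}

This avoids instantiating per-object materials, but it only covers renderers alive in the scene, not whatever Unity batches internally.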
You should describe what exactly it is you are trying to accomplish, functionality- and gameplay-wise, as there is likely a smarter way to go about it than iterating through renderers.
I'm generating a unique ID texture for post-processing purposes. I've tried playing with the instance ID in a shader, but these values are only unique per batch, not per visible renderer.
Do the IDs need to be sequential? If not, then in a shader you could merely output the object's center position XYZ to your PP buffer to use as a differentiator in the post-processing. If you are trying to save on memory, you can hash those 3 values into one. This will output a separate solid color for each object being rendered (except for the edge case of an object having the same world-space pivot as another object).

Keep in mind though, you might also be losing instance IDs because of objects being static and having static batching enabled. When that's the case, objects sharing the same material get merged into one object and thus share a single instance ID. If your objects are properly using CBuffer-based instanced rendering, they should each have a unique ID.
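Something like this in the fragment shader, as a rough sketch (the hash constants are the usual sin-dot trick, nothing Unity-specific):

Code (CSharp):
// Sketch: hash the object's world-space pivot into a single 0-1 value.
// unity_ObjectToWorld._m03_m13_m23 is the pivot in world space.
float3 pivot = unity_ObjectToWorld._m03_m13_m23;
// Cheap hash; collisions are possible but unlikely in practice.
return frac(sin(dot(pivot, float3(12.9898, 78.233, 37.719))) * 43758.5453);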
Yeah, I've experimented with center depth + a center-pos hash; so far the center depth has the best results, with only a few edge cases. Maybe I'll take another, deeper look at instancing; I think I had dynamic & static batching off... Thanks for your comment.

Code (CSharp):
#if defined(UNITY_INSTANCING_ENABLED)
    return smoothstep(unity_InstanceCount, 0.0, unity_InstanceID);
#else
    return 0;
#endif

I've tried this; do you see anything wrong with it?
The instance ID is 0-based for each draw call; it won't be cumulative between calls. So you'll want to add unity_BaseInstanceID to it as well, to account for the splitting of instance groups into separate calls (or just use it as your G value). Also, I'm not sure what the purpose of the smoothstep is there; it's going to end up returning zero for every instance from the second onward, no? I would combine the center-position value with instance + base as the most robust option, and if instancing is not enabled, fall back to just the position.
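Roughly what I mean, as a sketch; _MaxExpectedID here is a hypothetical material property you'd set from script to normalize the ID into 0-1:

Code (CSharp):
// Sketch: unique-ish per-object value, instancing-aware.
#if defined(UNITY_INSTANCING_ENABLED)
    float id = (unity_InstanceID + unity_BaseInstanceID) / _MaxExpectedID;
#else
    // Fallback: hash the world-space pivot instead.
    float3 pivot = unity_ObjectToWorld._m03_m13_m23;
    float id = frac(sin(dot(pivot, float3(12.9898, 78.233, 37.719))) * 43758.5453);
#endif
return saturate(id);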
I'm not sure if I'm trying something impossible, but I've had some results using the alpha of the actual render tex as an outline buffer. This means I need to pack some unique information into a single (8-bit?) channel. I figured this could be very cheap on mobile vs. a full separate pass (which, albeit, would make things much easier). The smoothstep is there to get a value between 0-1 that I can put into alpha.

Code (CSharp):
#if defined(UNITY_INSTANCING_ENABLED)
    return smoothstep(unity_InstanceCount + unity_BaseInstanceID, 0.0, unity_InstanceID + unity_BaseInstanceID);
#else
    return 0;
#endif

This doesn't seem to work either, and the problem with instance IDs in general is that they always "reset" when a new kind of mesh is drawn? I think at this point I'll just fold and go with the center depth, and maybe figure out a stable way of modulating that center so that I get rid of those artifacts.

ps. Also, I've tried just upgrading the precision of the render texture, but for some reason the render debug always shows that for normal opaque rendering, the texture used is the default RGBA8 UNorm. Is this intended? I would have imagined that Unity would render at whatever precision the camera's target texture is set to?
Smoothstep applies a curve to the blend between those values; you would want lerp for something like this so the rate of change is consistent. Also, once the third argument of lerp or smoothstep reaches 1.0, it returns the second value, so after one instance you've already hit a return value of 0.

Really, you should just return unity_InstanceID + unity_BaseInstanceID and ignore the smoothstep. Things are going to look mostly pure white when you preview them, but you'll have values in the texture that are above 1.0; a float is not restricted to 0.0-1.0. RGBA8 is only going to give you a 256-value range; it won't be floats, but fixed/byte basically.

Not sure why it's changing the buffer on you though; it's been a while since I messed with it. Is your Project Settings > Player color space set to Linear? Gamma might be forcing the low-precision buffer.
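If you do have to squeeze it into the 8-bit alpha, a sketch of explicit packing (assuming instancing is enabled; IDs wrap after 256 objects):

Code (CSharp):
// Sketch: wrap the combined ID into the 256 levels an 8-bit channel can hold.
uint rawId = unity_InstanceID + unity_BaseInstanceID;
return (rawId % 256u) / 255.0;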
The proper render target seems to be used only in the post-processing stage. Might this have something to do with using the Game RT that resides in the Assets folder? I'm doing the low-res cam trick you're perhaps familiar with: I render to an RT and then have a second camera that views a quad with a material that references that RT. The debugger shows that the actual rendering is done on a TempBuffer, while in post, the high-precision GameRT is used.

edit: Tested on an empty scene: proper render targets. I think I'm doing something stupid with my cameras, need to check them out.

edit: I had the HDR button on the cam off; that fixed it.
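For anyone finding this later, the setup that ended up working for me, roughly (names are illustrative):

Code (CSharp):
// Sketch: low-res RT with a float format so values above 1.0 survive.
var rt = new RenderTexture(320, 180, 24, RenderTextureFormat.ARGBFloat);
rt.filterMode = FilterMode.Point;
lowResCamera.targetTexture = rt;
lowResCamera.allowHDR = true; // this was the "HDR button" I had off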