Hi. I'm a university student who has been watching this project from the shadows for a while, and this summer I'd like to try building a couple of SRPs on weekends for the experience. I have two projects in mind that would greatly benefit from custom pipelines. One is a game in which different scenes use different visual styles with different effects, weather, etc. The other is an FPS somewhat similar to Splatoon with a "paint the world" effect (I got a prototype working in 5.6 using particles, texture arrays, and a fake lightmap, though the mechanics are different and I would eventually want to add a full fluid simulation). So if you don't mind, I have quite a few questions about things I have run into so far.

First, what exactly is the purpose of the factory system that produces pipeline instances? I couldn't figure out how that is useful, and some of the Unity pipelines don't seem to bother with it (BasicRenderPipeline just uses static functions, and the mobile deferred pipeline just passes rendering back to the asset). While I really like the idea of splitting run-time data from serialized data, Unity automatically creates and destroys pipeline instances whenever something in the asset is changed from the editor, which means any run-time data registered into the instance from script during Start() would get broken if an artist decided to modify the shadow settings during play mode. And I'm not sold on the idea of generating a new pipeline instance every frame to handle dynamic events. What am I missing here? (My understanding of the asset/instance split is sketched below, after this first batch of questions.)

Second, this is just a nitpick, but why is GetCullingParameters a method on CullResults rather than something like a GetParametersFromCamera method on CullingParameters?

Third, what exactly are your plans for managing lightmaps? Will we have access to controlling when Enlighten performs a meta pass after requesting a renderer update? Will we be able to store our own custom texture transform matrices in renderer components that work for multiple lightmaps and other kinds of world-space maps? For example, when I built the FPS prototype, I had to create a baked black directional light and specify a small lightmap resolution to squeeze the entire baked lightmap into a single atlas, so that I could use the baked lightmap UVs to index my texture array. In the future I would love to have Unity automatically pack the baked lightmap UVs into a single atlas, and then, after painting the particles into the texture array, update global illumination on either the CPU or GPU (not sure which will be easier or more performant) using either Enlighten or my own system, and then draw the scene. Are there any plans to make something like this feasible?

Fourth, regarding the discussion of callbacks within the render pipeline, I imagine it would work something like this:
1. I build up a list of CullResults for things that need callbacks. (C#)
2. I call a function in the Unity API that takes a list of CullResults and spits out a 2D list of Renderers. (C++)
3. I use this 2D list to send out the appropriate messages as I build up my rendering instructions. (C#) Would I use SendMessage for this?
4. I submit my context. (C++)
Would this be an efficient approach compared to how Unity is currently doing things?

Fifth, will we be able to customize the CameraEvents that command buffers attach to? (A snippet of what I mean is below.)

Sixth, will we be able to define our own render queue enumerations instead of simply using "Opaque", "Transparent", etc.? More specifically, will there be an easy way to specify them from shaders and such?
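Before I continue, here is roughly how I understand the asset/instance split from the first question is supposed to look. This is only my sketch of the pattern: MyPipelineAsset and MyPipeline are my own names, and the exact override names may differ between experimental builds.

```csharp
using UnityEngine;
using UnityEngine.Experimental.Rendering;

// The asset only holds serialized settings that artists edit.
[CreateAssetMenu(menuName = "Rendering/My Pipeline")]
public class MyPipelineAsset : RenderPipelineAsset
{
    public float shadowDistance = 50f;

    // The factory method: whenever the asset changes, Unity throws away the
    // old instance and asks for a new one built from the current settings.
    protected override IRenderPipeline InternalCreatePipeline()
    {
        return new MyPipeline(this);
    }
}

// The instance holds run-time state (temporary render textures, per-frame
// lists, etc.) and can be rebuilt at any time from the asset.
public class MyPipeline : RenderPipeline
{
    readonly MyPipelineAsset settings;

    public MyPipeline(MyPipelineAsset asset) { settings = asset; }

    public override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        base.Render(context, cameras);
        // ... culling, drawing, context.Submit() ...
    }
}
```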
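And for the fifth question, this snippet is how I attach command buffers in the built-in pipeline today; what I'm asking is what the SRP equivalent of these attachment points will look like.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class AttachEffect : MonoBehaviour
{
    CommandBuffer cmd;

    void OnEnable()
    {
        // In the built-in pipeline I can pick a CameraEvent to hook into.
        cmd = new CommandBuffer { name = "My effect" };
        // ... fill cmd ...
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterForwardOpaque, cmd);
    }
}
```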
Seventh, how would the following use case be possible in SRPs (assuming it is possible)? I have a particular type of enemy whose body is emitting fire. However, I want full control over the style of the fire, so I write a compute shader that takes in the deformed mesh (hopefully from GPU skinning) and outputs the fire's mesh data for that frame. I have over 100 of these fire enemies in my scene, but only about 10 of them will be on screen at once, and I only need to update the fire when an enemy is on screen. I want to render the enemy mesh during the opaque pass. Then, in the transparent pass, I want to run the compute shader on only the visible enemies and then draw the fire indirectly. I then want to use a similar technique for smoke enemies, water enemies, etc. The only way I can imagine doing this would be a custom callback that hands a command buffer to fill to every object that survived culling on a specific render queue (hence my question about custom render queue enumerations). There is a rough sketch of what I mean at the end of this post.

Eighth, I'm noticing in some of the examples the use of an AdditionalLightData script. I have a sinking feeling this is going to lead to a lot of artist frustration, as an artist could easily forget to add this script when creating a light. It could also lead to confusion when changing the Light component's parameters has no effect, because the actual data lives in the AdditionalLightData script. For example, maybe I want the light intensity to be calculated from a lightbulb type and wattage so that I can simulate a sketchy electrical system. Would it be possible to get a minimal version of a light (and probes, and maybe even cameras) that we could inherit from, one that has the normal MonoBehaviour messages (or at least the important ones)? Maybe this minimal class would only contain the info needed for culling (like a bounding box, hidden from the editor)? And could there be some way to print a warning when a user adds the regular Unity Light? (This might already be possible; I'm not very good at editor scripting, and my best guess at a workaround is sketched near the end of this post.) Are there alternative solutions far superior to this idea?

Ninth, can we get callbacks for when a pipeline asset gets assigned in the editor to that particular pipeline (as well as a callback for when a pipeline gets removed)?

Tenth, which shader variables does SetupCameraProperties actually set up?

And finally, is it a good idea for me to be trying to build my own SRPs this early? Am I asking too many questions?

I really like where Scriptable Render Pipelines are going. Aside from the things above, everything is really intuitive. It is easy to cull what you want to cull. I have full customization over shadows, light falloffs, and styles, the ability to do crazy interactions between multiple lights and cameras, to apply filtering to the skybox by drawing it first and then running compute shaders whose results I can use in other shading effects, and all sorts of stuff. And things for the most part just work. The easiest piece of evidence anyone can try is to Debug.Log the order the cameras get passed in the array (snippet at the very end): you'll find it is sorted by the cameras' depth values, just as one would expect!
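Here is the rough sketch for the fire-enemy case (question seven). Nothing in it is a real SRP API: FireEnemy, FillFireCommands, the CSMain kernel, and the buffer layout are all placeholders, and I'm assuming the pipeline would call FillFireCommands only for enemies that survived culling for the current camera.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class FireEnemy : MonoBehaviour
{
    public ComputeShader fireCompute;  // generates this frame's fire mesh
    public Material fireMaterial;      // draws the generated fire

    ComputeBuffer fireVertices;        // output vertex data (position, normal, uv, ...)
    ComputeBuffer drawArgs;            // indirect draw arguments

    void OnEnable()
    {
        fireVertices = new ComputeBuffer(65536, sizeof(float) * 8);
        drawArgs = new ComputeBuffer(1, sizeof(uint) * 4, ComputeBufferType.IndirectArguments);
        // Vertex count (written by the compute shader), instance count, start vertex, start instance.
        drawArgs.SetData(new uint[] { 0, 1, 0, 0 });
    }

    void OnDisable()
    {
        fireVertices.Release();
        drawArgs.Release();
    }

    // Hypothetically called by the pipeline during the transparent pass,
    // but only for the ~10 enemies that are actually on screen.
    public void FillFireCommands(CommandBuffer cmd)
    {
        int kernel = fireCompute.FindKernel("CSMain");
        cmd.SetComputeBufferParam(fireCompute, kernel, "_FireVertices", fireVertices);
        cmd.SetComputeBufferParam(fireCompute, kernel, "_DrawArgs", drawArgs);
        cmd.DispatchCompute(fireCompute, kernel, 64, 1, 1);
        cmd.DrawProceduralIndirect(transform.localToWorldMatrix, fireMaterial, 0,
                                   MeshTopology.Triangles, drawArgs);
    }
}
```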
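And here is my best guess at a workaround for the AdditionalLightData problem (question eight), with my limited editor-scripting knowledge. The bulb type, wattage, and the conversion are just my example of gameplay-driven light data; I'd much rather have a proper minimal light class to inherit from.

```csharp
using UnityEngine;

// RequireComponent at least stops the extra data from existing without a Light,
// but nothing stops an artist from creating a Light without the extra data.
[RequireComponent(typeof(Light))]
[ExecuteInEditMode]
public class LightbulbData : MonoBehaviour
{
    public enum BulbType { Incandescent, Fluorescent, LED }

    public BulbType bulbType = BulbType.Incandescent;
    public float wattage = 60f;

    void OnValidate()
    {
        // Push the computed intensity back to the built-in Light so the pipeline
        // (and the artist's inspector) stay consistent with the gameplay data.
        GetComponent<Light>().intensity = wattage / 60f;  // placeholder conversion
    }
}
```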
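(In case anyone wants to verify that camera ordering themselves, a couple of lines inside the pipeline's Render method, e.g. the MyPipeline sketched further up, are enough; the signature is from the experimental API as I understand it.)

```csharp
public override void Render(ScriptableRenderContext context, Camera[] cameras)
{
    base.Render(context, cameras);
    foreach (var camera in cameras)
        Debug.Log(camera.name + " depth " + camera.depth);  // prints in ascending depth order
    // ... culling, drawing, context.Submit() ...
}
```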