Yeah, you read that title correctly. I'm trying to create a method of pre-rendering 3D models in real time so that they can be used as sprites... also... in real time. Effectively, what I'm going for is that very specific pre-rendered look plus completely stable pixel placement regardless of position, rotation, and scale, while at the same time gaining all of the benefits of using 3D models: skinning, bones, cloth, animation retargeting and blending, IK, etc. I'm using a custom SRP, so I've got a little extra control over what I'm doing, but I can't help shaking the feeling that I'm walking down a long corridor when the exit I wanted was ten feet to the left of the entrance.

I've tried a few different methods, and the one I've settled on so far is to schedule a list of objects to draw and then have a dedicated phase in my SRP that swaps render targets one by one, rendering each object out onto a different one using CommandBuffer.DrawMesh(). Later, during a more "natural" phase of rendering, all of the sprites are drawn like normal using quads. Each sprite quad, of course, needs its own material with its own render texture supplied as the sprite to draw.

This leaves me with one really big issue: each sprite effectively needs its own material. I suppose I could generate these materials and render targets and do all of the linking at runtime, but it somehow seems wasteful, and I'm also slightly worried that, down the road, this could have severe technical or performance issues I'm currently not aware of.

Is there some better alternative I should consider? Perhaps rendering sprites to a single very large render target, where each sprite gets a dedicated rect of its own? Perhaps a shader-only method that doesn't involve all of these render targets and sprites in the first place?
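To make the single-atlas idea concrete, here's roughly what I'm imagining (untested sketch; the class and the sizing constants like AtlasSize/CellSize are placeholders I made up, but CommandBuffer.SetViewport, SetViewProjectionMatrices, and MaterialPropertyBlock are real Unity APIs). The key point is that every sprite quad could share one material, with its atlas rect passed through a MaterialPropertyBlock as _MainTex_ST, so no per-sprite material instances are needed:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public class SpriteAtlasPrerenderer
{
    // Placeholder sizes: a 2048x2048 atlas split into 256x256 cells.
    const int AtlasSize = 2048;
    const int CellSize = 256;
    const int CellsPerRow = AtlasSize / CellSize;

    RenderTexture atlas;          // one big render target for every sprite
    Material sharedSpriteMat;     // one material shared by every sprite quad

    // Dedicated SRP phase: draw each scheduled model into its own cell
    // of the atlas by moving the viewport instead of swapping targets.
    public void Prerender(CommandBuffer cmd, Matrix4x4 view, Matrix4x4 proj,
                          List<(Mesh mesh, Matrix4x4 trs, Material mat)> jobs)
    {
        cmd.SetRenderTarget(atlas);
        cmd.ClearRenderTarget(true, true, Color.clear);
        cmd.SetViewProjectionMatrices(view, proj);
        for (int i = 0; i < jobs.Count; i++)
        {
            int x = (i % CellsPerRow) * CellSize;
            int y = (i / CellsPerRow) * CellSize;
            cmd.SetViewport(new Rect(x, y, CellSize, CellSize));
            cmd.DrawMesh(jobs[i].mesh, jobs[i].trs, jobs[i].mat);
        }
    }

    // Point a quad at cell i without creating a new material: a
    // MaterialPropertyBlock overrides _MainTex_ST (tiling.xy, offset.zw)
    // per renderer, which works with any shader using TRANSFORM_TEX.
    public void AssignCell(MeshRenderer quad, int i)
    {
        float tiling = (float)CellSize / AtlasSize;
        float ox = (i % CellsPerRow) * tiling;
        float oy = (i / CellsPerRow) * tiling;
        var mpb = new MaterialPropertyBlock();
        mpb.SetVector("_MainTex_ST", new Vector4(tiling, tiling, ox, oy));
        quad.SetPropertyBlock(mpb);
    }
}
```

If something like this holds up, it would also keep all the quads batchable on one material, which the one-render-texture-per-sprite version can never do.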