I need a solution for drawing 3D mesh-based UI elements in the new UI system.

My first thought was to look at how image graphics work, but I noticed that the built-in geometry construction isn't ideal for this, since it expects sets of 4 UIVertex rather than 3. I then tried a standard MeshRenderer setup, but realized masking and events weren't going to work nicely, so I switched back to the Graphic system with OnFillVBO. That works, but it's pretty slow even for fairly low-res mesh geometry, and this is meant for mobile platforms.

I also noticed two raycasting issues: it seems to raycast against the actual graphic geometry rather than the RectTransform, and it only hits the back faces of the geometry instead of the front.

I'm trying to figure out the best way to get this working. One idea is to create a canvas renderer that renders the 3D elements first, and then let the standard canvas render any 2D elements like images and text on top of that. I would also need to emulate the stencil-based masking system. If I do this and use DrawMeshNow, I would have to manage batching of these meshes manually, probably per frame.

I also noticed that there is a PhysicsRaycaster/Physics2DRaycaster, in addition to the GraphicRaycaster, which I assume can be used for detecting events on non-Graphic components. Is this the best way to do this?
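For reference, here is roughly what my OnFillVBO approach looks like. This is a minimal sketch, assuming a Mesh assigned in the inspector; since OnFillVBO consumes quads (groups of 4 UIVertex) rather than triangles, I pad each triangle by duplicating its third vertex into a degenerate quad:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

// Sketch: pushing a 3D Mesh's triangles through the Graphic pipeline.
// MeshGraphic is a hypothetical component name, not a built-in class.
public class MeshGraphic : Graphic
{
    public Mesh mesh; // assumed to be assigned in the inspector

    protected override void OnFillVBO(List<UIVertex> vbo)
    {
        vbo.Clear();
        if (mesh == null) return;

        Vector3[] verts = mesh.vertices;
        Vector2[] uvs = mesh.uv;
        int[] tris = mesh.triangles;

        // OnFillVBO expects sets of 4 UIVertex, so each triangle is
        // padded to a quad by repeating its last vertex.
        for (int i = 0; i < tris.Length; i += 3)
        {
            for (int j = 0; j < 4; j++)
            {
                int index = tris[i + Mathf.Min(j, 2)];
                UIVertex v = UIVertex.simpleVert;
                v.position = verts[index];
                v.uv0 = (index < uvs.Length) ? uvs[index] : Vector2.zero;
                v.color = color;
                vbo.Add(v);
            }
        }
    }
}
```

The per-triangle duplication is where a lot of the overhead comes from on low-end devices, which is why I'm wondering whether bypassing the Graphic pipeline entirely is the better route.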