From reading the documentation and blog posts (e.g. "Real-time style transfer in Unity"), it seems that Barracuda neural networks must be invoked and evaluated separately from the render pipeline. For example, in the demo code for the project mentioned above, as far as I can see, the rendered output of the game is copied (at presumably great expense) back into CPU memory [EDIT: this is not true, see reply below] so that it can be converted into a Tensor and then submitted to the Barracuda model... which itself runs on the GPU. The blog post even mentions running CNNs in the rendering loop, but it's unclear what the actual technical limitations are, or whether any work is being done to address them.

There are clearly many potential applications for evaluating models in-pipeline. My question is: what technical limitations (if any) prevent us from running the generated compute shaders in the same way we'd run any other compute shader as part of the rendering pipeline?
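For reference, here's a minimal sketch of the round-trip pattern I'm describing, based on my understanding of the Barracuda API (this is not the actual demo code; the model asset, texture fields, and channel count are placeholders):

```csharp
using UnityEngine;
using Unity.Barracuda;

public class StyleTransferPass : MonoBehaviour
{
    public NNModel modelAsset;       // placeholder: a trained style-transfer model
    public RenderTexture source;     // placeholder: the rendered frame
    public RenderTexture destination;

    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        // Barracuda compiles the model into a sequence of compute shader dispatches.
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    void Update()
    {
        // Wrap the rendered frame in a Tensor (3 channels = RGB) and evaluate
        // the model from C# script code, outside the render pipeline itself.
        using (var input = new Tensor(source, 3))
        {
            worker.Execute(input);
            Tensor output = worker.PeekOutput(); // owned by the worker, not disposed here
            output.ToRenderTexture(destination);
        }
    }

    void OnDestroy()
    {
        worker?.Dispose();
    }
}
```

The point being: the evaluation is driven from a MonoBehaviour on the main thread rather than scheduled as a pass inside the pipeline, which is exactly the separation I'm asking about.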