Code (CSharp):

    RenderTexture.active = tex;
    // Full-size readback: allocate a 2048x2048 Texture2D and copy the whole RenderTexture back from the GPU.
    currenttexture = new Texture2D(2048, 2048, TextureFormat.RGBA32, false, true);
    currenttexture.ReadPixels(new Rect(0, 0, tex.width, tex.height), 0, 0, false);
    currenttexture.Apply(false);

I have been reading about this all day and have come to understand that it's a matter of GPU data versus CPU data. The reason I need this heavy operation is to call GetPixel on the new Texture2D to sample a single pixel; the X and Y of that pixel are derived after I create the new texture and read its width and height. I am dealing with a 2048x2048 RenderTexture, and the performance hit is noticeable.

Is it possible to determine the GetPixel X and Y first, and then define the ReadPixels Rect so it only pulls back a minimal pixel group, say 16x16 or 8x8 (I don't know how small a Texture2D can be), so that the resulting Texture2D is the minimum size and my pixel sits at (0, 0)? It feels like a waste to copy a 16 MB image for one pixel. I have seen a lot of code today for getting a pixel from a RenderTexture, and none of it suggested this. Is there a reason this would be a problem? You could essentially create a GetPixel for RenderTextures without taking the hit.
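For clarity, here is a rough sketch of what I have in mind, with a hypothetical helper name (ReadSinglePixel) and a 1x1 Texture2D; I'm assuming 1x1 is a valid size and I'm not certain about the Rect's y-axis convention when reading from a RenderTexture:

Code (CSharp):

    // Hypothetical helper sketching the idea: read only a 1x1 region at (x, y)
    // so the pixel I want ends up at (0, 0) of a tiny Texture2D.
    Color ReadSinglePixel(RenderTexture tex, int x, int y)
    {
        RenderTexture previous = RenderTexture.active;
        RenderTexture.active = tex;

        // Assuming a Texture2D can be as small as 1x1.
        Texture2D pixelTexture = new Texture2D(1, 1, TextureFormat.RGBA32, false, true);

        // Read just the 1x1 region at (x, y) from the active render target into (0, 0).
        // (The Rect's y may need flipping, e.g. tex.height - 1 - y, depending on the platform's texture origin.)
        pixelTexture.ReadPixels(new Rect(x, y, 1, 1), 0, 0, false);
        pixelTexture.Apply(false);

        RenderTexture.active = previous;
        Color result = pixelTexture.GetPixel(0, 0);
        Object.Destroy(pixelTexture); // avoid leaking a texture per call
        return result;
    }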