Hi all, I am using Unity for a research app, and this is the last step in my way at the moment. The output from the final camera (viewing content referred to as the 'Main Scene') needs to be 1-bit monochrome frames packed into a single output frame (24 bits per pixel at 60 fps). The special projector takes care of converting this back into usable frames (albeit at 1440 fps).

In the previous (non-Unity) version, this was managed in a single Cg program which packed the contents of 24 FBOs into a single 24-bit frame. The key was a colour template texture: a 1D texture 32 pixels wide, where the colour of the nth pixel was just a single bit flipped 'on' based on the index of the pixel. Pixel 0 had Blue0 turned on, pixel 8 had Red0 turned on (for some reason the texture required BRG ordering of the colours). The last 8 pixels (indices 24–31) were junk and never used. To 'bake' in colour for the nth frame, you would look at the pixel in the nth framebuffer. If it was non-zero (any bit on), you simply set the nth bit by extracting it from the nth pixel of the colour template texture and adding it to the overall colour (OUT). A code snippet of this is below:

Code (CSharp):

    const float2 stp = float2(1.0/32.0, 1.0);
    if (any(tex2D(image0, texCoord).xyz))
        OUT.color += tex2D(colTemplate, stp * 0.0f);

In the current Unity version, there's a shader attached to each object in the main scene which computes whether the object should contribute to the nth frame or not. The question is: how do I transfer this information to the nth bit position? Is it easier to use a pre-computed texture, or is it simple maths? I am worried about precision (and would therefore like to know which data types to use), as lost precision can cause random artifacts. Also, do I need to turn any specific setting on or off so that Unity doesn't muck with the output colour?

Thanks.
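For reference, here is a minimal CPU-side sketch (in Python, not shader code) of the packing scheme I'm describing, including a check of the precision concern. The function names are just illustrative, not part of any Unity API:

```python
# Sketch of the bit-packing scheme: 24 binary frames -> one 3-channel,
# 8-bit image, one bit per frame. Channel/bit layout (frame n -> bit
# n % 8 of channel n // 8) mirrors the template-texture description.

def pack_frames(frames):
    """frames: list of 24 equally sized 2D lists of 0/1 values.
    Returns an H x W image of [c0, c1, c2] byte triples."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[[0, 0, 0] for _ in range(w)] for _ in range(h)]
    for n, frame in enumerate(frames):
        channel, bit = n // 8, 1 << (n % 8)
        for y in range(h):
            for x in range(w):
                if frame[y][x]:          # non-zero pixel: set bit n
                    out[y][x][channel] |= bit
    return out

def unpack_frames(packed):
    """Recover the 24 binary frames from the packed image."""
    h, w = len(packed), len(packed[0])
    return [[[1 if packed[y][x][n // 8] & (1 << (n % 8)) else 0
              for x in range(w)] for y in range(h)]
            for n in range(24)]

# Round-trip check: packing then unpacking is lossless.
frames = [[[(n + x + y) % 2 for x in range(4)] for y in range(4)]
          for n in range(24)]
assert unpack_frames(pack_frames(frames)) == frames

# Precision check: the float colour a shader would emit for bit n,
# (1 << n) / 255.0, survives 8-bit quantisation (round to nearest).
for n in range(8):
    assert round(((1 << n) / 255.0) * 255.0) == (1 << n)
```

This at least shows the integer maths is exact; my worry is whether the GPU-side float path preserves it.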