Hello, I'm trying to do supersampled rendering (render at a higher resolution than the screen, then scale down to the screen resolution) so that a shader that stretches pixels looks better. (Otherwise, one pixel gets smeared across several when the view stretches.) I've accomplished this by rendering the effect to a RenderTexture and then drawing that texture to the screen during OnGUI, but this technique doesn't preserve colorspace properly (linear/gamma). (I suspect it's also slower...) And it doesn't draw Gizmos. I've spent all day trying to work around this, and nothing works:

- Creating a larger RenderTexture and blitting to that first (this just draws the low-res image into the texture).
- Setting the camera's target texture to the RenderTexture, blitting to it in OnRenderImage, then setting RenderTexture.active to null (and the camera's target texture to null, or else Unity crashes) and blitting to null (the screen buffer). This blits whatever fits of the larger texture onto the screen buffer and doesn't scale it to fit.

Graphics.DrawTexture has the same problems as GUI.DrawTexture. Has anyone gotten this to work?
Here's how I do it: create an empty GameObject in the scene, zero its transform, add a Camera to it (culling mask set to None, clear flags to Solid Color) and a new component (let's name it Supersampling). Disable all the cameras in your scene (either manually or in the Supersampling component, see below). In Supersampling, keep a list of all cameras that should be rendered. In Supersampling.OnRenderImage(), create/recycle your supersampling render texture and render all the cameras from the list into that RT.

Code (CSharp):

    for (int i = 0; i < Cameras.Count; i++)
    {
        Camera cam = Cameras[i];
        cam.targetTexture = supersamplingRT;
        cam.Render();
        cam.targetTexture = null;
    }

Finally, blit the RT to the screen. For 2x2 supersampling, a simple bilinear filter works fine, but with sizes bigger than 2x2 you'll want to use a lowpass or Lanczos filter. I'll try to find some time to clean up my code and post it on GitHub.
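Putting the steps above together, a minimal sketch of such a component could look like the following. Note that `downsampleMaterial` is a hypothetical material wrapping your own lowpass/Lanczos shader (not something shipped with Unity); for plain 2x2 supersampling you can leave it unset and rely on bilinear filtering.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch only: assumes this sits on the dummy camera described above
// (culling mask None, clear flags Solid Color), with the real scene
// cameras disabled and listed in Cameras.
public class Supersampling : MonoBehaviour
{
    public List<Camera> Cameras;        // cameras to render supersampled
    public Material downsampleMaterial; // hypothetical lowpass/Lanczos material; optional
    const int factor = 2;
    RenderTexture supersamplingRT;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Create the supersampled RT lazily.
        if (supersamplingRT == null)
            supersamplingRT = new RenderTexture(
                Screen.width * factor, Screen.height * factor, 24,
                RenderTextureFormat.ARGB32);

        // Render every listed camera into the large RT.
        for (int i = 0; i < Cameras.Count; i++)
        {
            Camera cam = Cameras[i];
            cam.targetTexture = supersamplingRT;
            cam.Render();
            cam.targetTexture = null;
        }

        // Bilinear filtering is fine for 2x2; larger factors want a
        // custom downsample shader supplied via downsampleMaterial.
        if (downsampleMaterial != null)
            Graphics.Blit(supersamplingRT, destination, downsampleMaterial);
        else
            Graphics.Blit(supersamplingRT, destination);
    }
}
```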
Thanks, this is awesome! So when you blit to the screen, do you use a shader to downsample for you? Whenever I've tried blitting large textures to the screen, they just copy whatever can be shown at native resolution to the screen and clip all of the extra (so for a 2x2 supersample, I only get the bottom 1/4 of the texture drawn to screen).
I thought I'd upload my script, as it should make things easier for others. So, as described, you:

1) deactivate your main camera
2) create a new camera and set its culling mask to None and clear flags to Solid Color
3) attach this script

Code (CSharp):

    using UnityEngine;
    using System.Collections;

    [ExecuteInEditMode]
    public class supersampling : MonoBehaviour
    {
        RenderTexture supersamplingRT;
        public Camera cam;
        const int factor = 2;

        void Start()
        {
            supersamplingRT = new RenderTexture(Screen.width * factor, Screen.height * factor, 24, RenderTextureFormat.ARGB32);
        }

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            cam.targetTexture = supersamplingRT;
            cam.Render();
            cam.targetTexture = null;
            Graphics.Blit(supersamplingRT, destination);
        }
    }

4) link your main camera to the script

In the script you can change the amount of supersampling, and you could use an array if you render multiple cameras. Note that this does not work well with resizing the window; the image will be distorted. Sadly, the size of a RenderTexture can't be changed later. If someone has a solution, that'd be cool.
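Regarding the resizing problem mentioned above: one way around it is to release and recreate the RenderTexture whenever the screen size no longer matches, since RTs can't be resized in place. A minimal sketch of that variant (same technique as the script above, just with the size check moved into OnRenderImage):

```csharp
using UnityEngine;

// Sketch: recreates the supersampling RT when the window is resized.
public class SupersamplingResizable : MonoBehaviour
{
    public Camera cam;
    const int factor = 2;
    RenderTexture supersamplingRT;

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        int w = Screen.width * factor;
        int h = Screen.height * factor;

        // If the screen size changed, release the old RT and make a new one.
        if (supersamplingRT == null ||
            supersamplingRT.width != w || supersamplingRT.height != h)
        {
            if (supersamplingRT != null)
                supersamplingRT.Release();
            supersamplingRT = new RenderTexture(w, h, 24,
                RenderTextureFormat.ARGB32);
        }

        cam.targetTexture = supersamplingRT;
        cam.Render();
        cam.targetTexture = null;
        Graphics.Blit(supersamplingRT, destination);
    }

    void OnDestroy()
    {
        if (supersamplingRT != null)
            supersamplingRT.Release();
    }
}
```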
Thanks for explaining how you achieved this effect. I made a very similar system for Unity 4 that no longer works in Unity 5, because reasons. Here is a sample of a system that checks the current main camera in the scene and applies a scaling factor to it; this makes it possible to keep the effect active across cameras with no work involved in switching the actual camera. It does have some performance issues right now, since a new camera is picked using Camera.main, and the dummy camera should not be the one tagged as "MainCamera". If someone has a good suggestion on how to fix this in a way that still allows the script to pick the camera automatically, I would like to hear it.

Code (CSharp):

    using UnityEngine;
    using System.Collections;

    public class ScreenIndependenRenderer : MonoBehaviour
    {
        RenderTexture viewPortRT;
        public float scale = 0.1f;
        private float lastScale;
        private Camera currentCamera;
        private Vector2 currentResolution;
        private bool shouldUpdateRenderTexture = false;
        Camera dummyCam;

        // How big or small the scale can be
        private float minScale = 0.1f;
        private float maxScale = 4.0f;

        void Awake()
        {
            // Get the current resolution
            currentResolution = new Vector2(Screen.width, Screen.height);

            // Set the last scale to the current scale to avoid creating
            // multiple render textures on create
            lastScale = scale;

            // Set up the temporary camera
            dummyCam = gameObject.AddComponent<Camera>();
            dummyCam.cullingMask = 0;
            dummyCam.backgroundColor = Color.black;
            dummyCam.clearFlags = CameraClearFlags.SolidColor;

            // Create a render texture
            CreateRenderTexture();
        }

        void Update()
        {
            // Checks if the resolution has changed
            CheckResolution();
            // Checks if the camera has changed
            CheckCamera();
            // Checks if the scale has changed
            CheckScale();

            if (shouldUpdateRenderTexture)
            {
                CreateRenderTexture();
            }
            shouldUpdateRenderTexture = false;
        }

        // Checks for resolution changes
        void CheckResolution()
        {
            // Get the current screen resolution
            Vector2 tempRes = new Vector2(Screen.width, Screen.height);

            // Check if the resolution has changed
            if (tempRes != currentResolution)
            {
                // Save the new resolution
                currentResolution = tempRes;

                // Set the correct aspect ratio for the camera (if we have one)
                if (currentCamera)
                {
                    currentCamera.aspect = currentResolution.x / currentResolution.y;
                }

                // Mark the render texture for updating
                shouldUpdateRenderTexture = true;
            }
        }

        void CheckCamera()
        {
            // Get the main camera
            Camera mainCamera = Camera.main;

            // Check if there is a main camera
            if (mainCamera)
            {
                // Check if the main camera is the same as the current camera
                if (mainCamera != currentCamera)
                {
                    // Camera is not the same, set the new camera
                    currentCamera = mainCamera;

                    // Set the correct aspect ratio for the camera
                    currentCamera.aspect = currentResolution.x / currentResolution.y;
                }
            }
            else
            {
                // Set to null
                currentCamera = null;
            }
        }

        void CheckScale()
        {
            if (lastScale != scale && currentCamera)
            {
                // Clamp the scale between minScale and maxScale
                if (scale <= minScale)
                {
                    scale = minScale;
                }
                else if (scale >= maxScale)
                {
                    scale = maxScale;
                }

                // Remember the scale
                lastScale = scale;

                // Mark the render texture for updating
                shouldUpdateRenderTexture = true;
            }
        }

        // Creates the render texture
        void CreateRenderTexture()
        {
            viewPortRT = new RenderTexture((int)(Screen.width * scale), (int)(Screen.height * scale), 24, RenderTextureFormat.ARGB32);
        }

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // Check if we have a camera and a render texture before rendering
            if (currentCamera && viewPortRT)
            {
                currentCamera.targetTexture = viewPortRT;
                currentCamera.Render();
                currentCamera.targetTexture = null;
                Graphics.Blit(viewPortRT, destination);
            }
        }
    }
Just a very small update to my previous post. I implemented a class that I attach to my scene cameras, which also handles toggling various effects on and off. Keeping a static reference to the current main camera, and using that in the screen-independent rendering system instead of Camera.main, does work. I won't share the updated solution, as it also references some of my other camera control code, but it should be very simple to implement.
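For anyone who wants to try the static-reference idea described above, a minimal sketch could look like this (the class name `SceneCamera` and its `Current` property are hypothetical, not part of the author's code):

```csharp
using UnityEngine;

// Sketch: attach to each scene camera. The supersampling component can
// then read SceneCamera.Current instead of calling Camera.main every
// frame, avoiding both the lookup cost and the dummy-camera tag issue.
public class SceneCamera : MonoBehaviour
{
    public static Camera Current { get; private set; }

    void OnEnable()
    {
        Current = GetComponent<Camera>();
    }

    void OnDisable()
    {
        // Only clear the reference if we were the active camera.
        if (Current == GetComponent<Camera>())
            Current = null;
    }
}
```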