
Crazy Idea: Multiple frames per second (hear me out)

Discussion in 'General Discussion' started by Not_Sure, Mar 22, 2016.

?

Would this work?

  1. Yes

    5 vote(s)
    18.5%
  2. No

    7 vote(s)
    25.9%
  3. Maybe

    10 vote(s)
    37.0%
  4. Corn

    5 vote(s)
    18.5%
  1. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    Some of you may remember me bringing this up before, but I'm just back on this kick.

    I've been kicking around how you might go about optimizing the render loop more.

    I would love some input from the more knowledgeable folks here.



    What if you took two cameras and ran them on two separate render loops?

    You have a camera for close objects that runs on a firm 60 FPS.

    Then you have a second camera that renders distant objects, but its loop is dynamic.



    It seems you could enjoy a rock-solid 60 FPS for objects that need a high frame rate, and then spend the leftover capacity rendering distant objects that are more or less scenery.



    Of course, this would require tweaking to ensure the two cameras line up, and that the distant camera's render field has a lip so that it has some wiggle room. But I'm sure it's nothing that couldn't be worked around.



    What do you all think?
     
  2. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    ...I vaguely remember reading about a game/simulator doing this: a primary camera at 60 fps close up, and a secondary one at, say, 30 fps for distant objects. It's comparable to rendering close-up objects at full res and distant objects at half res, which in turn is comparable to what some games do now to get around depth-buffer precision issues.

    But IMO it could be a bit problematic, as you'd alternate between frames that render primary + secondary and frames that render primary only, so every other frame 'lags' compared to the rest. Essentially 30 hitches a second.

    Unless I misunderstood you, then disregard all above.
     
  3. darkhog

    darkhog

    Joined:
    Dec 4, 2012
    Posts:
    2,218
    Not Sure if it would work. Try making a prototype using some free or purchased assets you have access to.
     
  4. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    To add to my other comment: the biggest problem will be that if you're turning rapidly, nearby objects would be updated, but the distant view would remain on screen (terrain from the left side while you're looking to the right), or simply swoosh off the side and leave you staring into the skybox.

    Edit: Just realized, but this may be a lot more usable for isometric games or similar. You could render the background/terrain to a rendertexture every few frames (with some pad out of screen), while updating moving things every frame.
     
  5. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    It's called a skybox. In theory you could have a traditional skybox and then a distant-object skybox, and you would get parallax.
     
    Ryiah and Kiwasi like this.
  6. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    Something like the old school parallax effects in ancient animated movies?

    Your best bet might be to simply skip update frames on distant objects.
     
  7. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    Like I said, this would require tweaking.

    I don't see why you couldn't render beyond the edge of the screen and move the image every time the close camera moves.


    Yes, I know what a skybox is. But if I'm not mistaken, 3D skyboxes / sky domes are still rendered every frame.

    I'm saying, do that, but then cut down on how often it's updated.


    Hmm, that sounds like a good idea to test the concept. But I would really like to see what it would look like if it was running on two separate render loops that trade off resources.
     
  8. ericbegue

    ericbegue

    Joined:
    May 31, 2013
    Posts:
    1,353
    It seems you are describing the principle of LOD (level of detail).

    But with your system, if the cameras are running at different frame rates, that would give some weird synchronization effects. Though I don't know what that would actually look like.
     
  9. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    I'm guessing the distant objects would look like they're jittering.

    That's why I'm saying it would require some tweaking.

    EDIT: You could have the distant camera render more than what's on screen and then move the frame as needed. Then you could do some math to figure out how much more should be rendered in which direction.

    Worst-case scenario, you get a single frame with unrendered pixels. And you could still do things to hide that, too.
     
    Last edited: Mar 22, 2016
  10. Neoptolemus

    Neoptolemus

    Joined:
    Jul 5, 2014
    Posts:
    52
    I can see where you're coming from, essentially render static objects that are distant only half as many times as more dominant objects on the screen.


    There are numerous issues with this approach however:


    - In DirectX and OpenGL you can only have one device context. Essentially only one camera can render at a time. You would have to render the scene first with the 60fps camera, then render with the 30fps camera, even if you tried to run the 2nd camera on a separate thread


    - Having every other frame running an extra render step will probably lead to an inconsistent framerate


    - If the end user's GPU cannot maintain 60fps then you will run into problems


    - The main performance hit in real-time rendering is in calls to the driver to bind textures and the like. By having two separate cameras, you would be increasing those calls because you'd essentially have to do a full render pipeline twice every other frame. Your solution would perform worse than if you just rendered everything with the main camera

    - You would need some trickery regarding the Z-buffer to ensure that camera B doesn't render stuff out of order with camera A. You'd need to render the whole scene first to populate the Z-buffer using camera A, then sample it while rendering from camera B

    In other words, your idea would be very complicated to implement and would probably perform worse.
     
  11. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,327
    Variations of that have been done before: "distant objects animate at a lower fps", "distant objects render at a lower resolution, then are upscaled".

    You have one GPU that does all the work, so adding one more rendering loop will slow down the first one. Also, two loops will very likely perform worse than just one loop (because of the extra overhead a second loop adds: critical sections, etc.).

    Also, you'll still need to blend images from both cameras each frame, unless both cameras are completely static.
     
    Ryiah and Deleted User like this.
  12. Deleted User

    Deleted User

    Guest

    Occlusion culling does a far better job of "optimising rendering"...

    Just to add, it's generally not far rendering you have to worry about. It's what's right in front of you, due to billboards, LODs, instance pools / occlusion, etc.
     
    Last edited by a moderator: Mar 22, 2016
    Ryiah likes this.
  13. Frednaar

    Frednaar

    Joined:
    Apr 18, 2010
    Posts:
    153
    I believe this is an interesting subject. I was considering this for a flight sim project, where you need several cameras anyway to avoid z-fighting.

    I did not have time to experiment with it, but I was thinking about rendering textures to a pseudo-skybox via an asynchronous call made a few times a second for each side, leaving the bottom side blank. The camera should follow the main camera's position but not its rotation, so you solve the fast-rotation issue...
     
  14. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Since your framerate is running at 60 fps and every frame does a full-screen clear (unless you plan to see some nasty trails of junk pixels from the previous frame), you will get one frame with foreground and background drawn, and then the next frame will be foreground only, with either junk or a blank background, which will produce a flicker/strobe.
     
  15. dogzerx2

    dogzerx2

    Joined:
    Dec 27, 2009
    Posts:
    3,960
    You might have a little problem with intersecting meshes that are rendered by different cameras.
     
  16. Neoptolemus

    Neoptolemus

    Joined:
    Jul 5, 2014
    Posts:
    52
    In theory you could get around this by storing different render targets for each camera and blending them each frame to composite the final image. The 30fps camera would update its render target half as often as the 60fps camera, so every other frame the 60fps camera would blend "stale" data from the previous render.

    Of course, you're now adding yet more complexity and additional blending which you wouldn't have with a conventional 1-camera setup.
     
  17. BIG-BUG

    BIG-BUG

    Joined:
    Mar 29, 2009
    Posts:
    457
    The idea could work for specific games with a locked camera which can't be turned, so that the far scene needs fewer updates than the near scene. I'm thinking endless runner or sidescroller here. But I don't think there are many cases where the benefits would outweigh the hassle.
    But there are in fact variations of this idea which are in use:
    - Half-Life 2, for example, uses a second camera to render a "dynamic" skybox.
    - Engines often render "impostors" of distant objects, basically a photograph of the object on a billboard. This technique is mostly used on trees, of course (like in Unity), but it also works if you want to render a city skyline in the background.
     
    Kiwasi likes this.
  18. Fera_KM

    Fera_KM

    Joined:
    Nov 7, 2013
    Posts:
    307
    Isn't this what Street Fighter V does with its current levels? Or did I misinterpret something?
     
  19. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Interesting subject but nothing new. Practically every game you'll buy renders things at different rates and sizes (which is why the fanboy faithful screeching at native 1080p are amusing). Unity renders things at different rates (reflection probes, temporally based FX).

    Basically - render only when you need to, and only render *each element* at the lowest resolution you can get away with. Some games render deferred to different sizes. For example the deferred normals can be lower res, but Unity doesn't give the option. Particles could get rendered to lower res buffers. Lots of room for aggressive optimisation without noticing visually. 900p and 1080p are virtually indistinguishable, particularly when it's not the main geometry pass being rendered smaller. This happens all the time in real games.

    Unity doesn't do it for us, and the problem is rolling our own often means not using Unity's features at times, and that's a tough decision to make.

    These are all best practises for when your game is complete and you have identified the real problems. Games like Team Ico's Shadow of the Colossus would often just render distant landscape to a cubemap. Far Cry 3 did this too for consoles (and probably low desktop quality settings).

    Rendering distant things out at lower framerates is a thing, but you'll want it *very* distant or you'll get artefacts when you rotate.

    I'm glad the subject came up because people need to accept that these are common techniques in AAA games, and Unity is often lambasted as being slow even though there's not much it can do in some situations. I would like more engine-level features to be added to allow us to gain better performance as well, such as rendering shadows to buffers you keep around instead of rendering all the shadows all the time, and this sort of thing is really quite difficult to pull off right now, lots of guesswork, lots of headaches, lots of reasons I shouldn't be using Unity for open world games, really.
     
    AcidArrow, Martin_H, dogzerx2 and 4 others like this.
  20. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,516
    Having a consistent framerate is at least as important as having a high framerate, and it strikes me that this would maximise local inconsistencies in the frame rate - you would be constantly alternating between short and long frames.

    What's more, the long frames still have to be fast enough to be playable. Having an average of 60fps doesn't really make much sense if you're constantly stuttering between 120fps and 15fps (or whatever it works out to) to get it. Plus, as soon as you consider refresh rates the actual images that make it to the user might be different again.

    Plus, the games where a high frame rate is most important - those with fast movement and rotation - are the ones where the artefacts this introduces will be most obvious. As hippo says, this kind of thing is already commonly built into systems where the artefacts it causes aren't obvious, with things like reflections or impostors being updated less frequently or only when a certain amount of movement has occurred. In combination with some kind of budgeting system this approach works very well, because instead of having long and short frames the "extra detail" rendering is being continually distributed over frames, with each doing a little work towards it.
     
    Martin_H, frosted and Ryiah like this.
  21. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    Okay, so what I'm gathering is that the best way to do something like what I'm suggesting would be to have a camera render a short distance, then generate a cubemap of the larger world asynchronously and use it as a skybox.

    Is that correct?
     
  22. Zuntatos

    Zuntatos

    Joined:
    Nov 18, 2012
    Posts:
    612
    I suppose so. Then, say, update one direction every frame so every 4 frames is a full update (assuming up/down aren't needed). That'd keep the framerate relatively steady as well.
     
    Not_Sure likes this.
  23. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,516
    I still see this introducing far more problems than it solves. Plus, a cubemap has to render each direction of the cube, not just the direction you're currently looking in, so aren't you in fact making it render more?

    And having it update only one face at a time for something like this will likely result in very obvious seams, on top of making certain types of post process effects harder, on top of dropping the frame rate for parts of your scene to 1/6th or 1/4th of the rest. If a smooth frame rate is important then isn't that the opposite of what you want to be doing?

    I don't think there's any such thing as "ONE QUICK TRICK TO BOOST YOUR PERFORMANCE - GRAPHICS PROGRAMMERS HATE HIM!" It's a matter of understanding how the rendering pipeline works and what you need to get out of it, then carefully customising and tuning it to optimally meet your needs.
     
    Martin_H likes this.
  24. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    The cubemap thing is valid. It's rendered every so often, not at anything like 30fps. Many well known titles do it.
     
    Not_Sure and Martin_H like this.
  25. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,516
    I guess it comes down to how far away "distant" is considered to be.
     
  26. hippocoder

    hippocoder

    Digital Ape Moderator

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    Open world distant :)
     
  27. TylerPerry

    TylerPerry

    Joined:
    May 29, 2011
    Posts:
    5,577
    What if you had a parallax corrected skybox that only updates when the player has moved enough for it to be obvious that it's a cubemap? If the player is standing still then it never updates but if they are moving rapidly then it would be updating every half a second or something.
     
  28. tiggus

    tiggus

    Joined:
    Sep 2, 2010
    Posts:
    1,240
    Someone should link this to the "is 2D easier" thread.
     
  29. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    I'm a little lost as to what to do to update the skybox with images from a camera.

    Would render to texture play a part?

    Does anyone have some good reading material on it?
     
  30. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    If you really wanted to go crazy you could bake a bunch of skyboxes. Render once per build, not once every xxx seconds.

    Of course this would need to be game dependent.
     
    Ryiah likes this.
  31. TylerPerry

    TylerPerry

    Joined:
    May 29, 2011
    Posts:
    5,577
    It's actually really easy. Just check out these two references:

    http://docs.unity3d.com/ScriptReference/Camera.RenderToCubemap.html
    http://docs.unity3d.com/ScriptReference/RenderSettings-skybox.html

    Though I think if you use a custom skybox material you can just update the texture and it will work? I haven't tried it myself.
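
    For what it's worth, a minimal sketch of how those two pieces could fit together (assuming a skybox material that uses the built-in Skybox/Cubemap shader, whose texture property is "_Tex"; all field names here are made up):

    Code (csharp):
        // Sketch only: a secondary camera that sees just the distant layers
        // re-renders a cubemap, which a Skybox/Cubemap material then displays.
        public Camera distantCamera;     // culling mask limited to far geometry
        public Material skyboxMaterial;  // uses the Skybox/Cubemap shader
        private Cubemap cubemap;

        void Start () {
            cubemap = new Cubemap (512, TextureFormat.RGB24, false);
            skyboxMaterial.SetTexture ("_Tex", cubemap);
            RenderSettings.skybox = skyboxMaterial;
        }

        // Call this every so often, not every frame.
        void UpdateDistantView () {
            distantCamera.RenderToCubemap (cubemap);  // renders all six faces
        }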
     
    Ryiah and Not_Sure like this.
  32. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
  33. Meredoc

    Meredoc

    Joined:
    Mar 9, 2016
    Posts:
    21
    There's a better way, and an old OpenGL engine did just that: render the distant object into a temporary low-rez sprite and put it out there instead of the object. One measly polygon will be enough, basically. I'm sure this is already implemented, here and there.
    Adding another pipeline is just painful and a massive bugfest.
     
  34. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    You know, I actually tested that once.

    Was not too keen on the results, but I was a Unity pup back then...
     
  35. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    Consequently, if I'm rendering the surrounding area to the skybox, then how do I render the actual sky?
     
  36. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,327
    The Camera has a "Render" method. Assuming you want to go with your original idea, you'll need to render the skybox to a texture, then blit the rendered texture onto the screen. You can also create a temporary camera, set its visible layers, render what it can see, etc.

    If your render target supports an alpha channel, you can render the skybox with an alpha mask, so the unpainted region will have an alpha of zero.
    That way you could draw the sky, then the "pre-rendered" skybox (while cutting out the pixels), then the scene, in turns.

    I'm not sure how much time you'll save doing that, though. You see, a full-screen 2D texture blit is not necessarily a cheap operation. On older Radeon cards you could drop from 300 fps to 30 by blitting a non-power-of-two texture to the screen.
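
    As a rough sketch of that render-to-texture path (Unity 5-era API; the layer setup and method names are hypothetical):

    Code (csharp):
        // Sketch: render only far-away layers with a secondary camera into a
        // RenderTexture every few frames; compositing it under the main image
        // would then need an alpha-aware blit shader, as described above.
        public Camera farCamera;   // culling mask set to the distant layers
        private RenderTexture farRT;

        void Start () {
            farRT = new RenderTexture (Screen.width, Screen.height, 24);
            farCamera.targetTexture = farRT;
            farCamera.enabled = false;  // disabled so we can call Render() manually
        }

        void UpdateFarView () {         // invoke every Nth frame
            farCamera.Render ();        // draws into farRT, not the screen
        }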
     
  37. Meredoc

    Meredoc

    Joined:
    Mar 9, 2016
    Posts:
    21
    What was the problem?
     
  38. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    Lots of things.

    Lots of things I could most likely fix now. But mostly, it didn't do anything for performance.
     
  39. shadiradio

    shadiradio

    Joined:
    Jun 22, 2013
    Posts:
    83
    I also thought SFV did this. There's definitely a difference between foreground and background animation frame rates (on my PC), and it's possible that it depends on system specs?
     
  40. Meredoc

    Meredoc

    Joined:
    Mar 9, 2016
    Posts:
    21
    Hmm, how could one possibly find a faster optimisation than rendering distant objects down to planes and reusing them over several frames?

    I mean, your suggested method is just a bit like dividing the polycount in two for the far plane, but doubling all the overhead? Where's the gain?
     
  41. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    So, I found this great wiki article to help me get a starting point (if I can ever get the time to try this out).

    My one hang-up now is how I can run this script periodically without causing performance "chugs".

    I'm guessing that you can't exactly render things asynchronously, huh?
     
  42. Frednaar

    Frednaar

    Joined:
    Apr 18, 2010
    Posts:
    153
    No, but you can use coroutines to create the images and then change your skybox textures in one pass... I was also looking at this post; likewise, I will try to crack it as soon as I have time. Should not be that hard (I think).
     
  43. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    So consequently, how would you break this up as a coroutine?

    Code (csharp):
        #pragma strict
        class SkyBoxGenerator extends ScriptableWizard {
            var renderFromPosition : Transform;

            var skyBoxImage = new Array ("frontImage", "rightImage", "backImage", "leftImage", "upImage", "downImage");
            var skyDirection = new Array (Vector3 (0,0,0), Vector3 (0,-90,0), Vector3 (0,180,0), Vector3 (0,90,0), Vector3 (-90,0,0), Vector3 (90,0,0));

            function OnWizardUpdate()
            {
                helpString = "Select transform to render from";
                isValid = (renderFromPosition != null);
            }

            function OnWizardCreate()
            {
                // Temporary camera to shoot the six cube faces from.
                var go = new GameObject ("SkyboxCamera", Camera);

                go.camera.backgroundColor = Color.black;
                go.camera.clearFlags = CameraClearFlags.Skybox;
                go.camera.fieldOfView = 90;
                go.camera.aspect = 1.0;

                go.transform.position = renderFromPosition.position;

                if (renderFromPosition.renderer)
                {
                    go.transform.position = renderFromPosition.renderer.bounds.center;
                }

                go.transform.rotation = Quaternion.identity;

                for (var orientation = 0; orientation < skyDirection.length; orientation++)
                {
                    renderSkyImage(orientation, go);
                }

                DestroyImmediate (go);
            }

            @MenuItem("Custom/Render Skybox", false, 4)
            static function RenderSkyBox()
            {
                ScriptableWizard.DisplayWizard ("Render SkyBox", SkyBoxGenerator, "Render!");
            }

            function renderSkyImage(orientation : int, go : GameObject)
            {
                go.transform.eulerAngles = skyDirection[orientation];
                var screenSize = 1024;
                var rt = new RenderTexture (screenSize, screenSize, 24);
                go.camera.targetTexture = rt;
                var screenShot = new Texture2D (screenSize, screenSize, TextureFormat.RGB24, false);
                go.camera.Render();
                RenderTexture.active = rt;
                screenShot.ReadPixels (Rect (0, 0, screenSize, screenSize), 0, 0);

                RenderTexture.active = null;
                DestroyImmediate (rt);
                var bytes = screenShot.EncodeToPNG();

                var directory = "Assets/Skyboxes";
                if (!System.IO.Directory.Exists(directory))
                    System.IO.Directory.CreateDirectory(directory);
                System.IO.File.WriteAllBytes (System.IO.Path.Combine(directory, skyBoxImage[orientation] + ".png"), bytes);
            }
        }
    I'm guessing that "go.camera.Render();" is going to be the biggest chunk, no matter how you cut it up.

    Maybe I could run a loop that goes:
    1) Move background camera to main camera position and set skybox to default.
    2) Render front skybox
    3) Render left skybox
    4) Render right skybox
    5) Render top skybox
    6) Render bottom skybox
    7) Render back skybox
    8) Animate Background

    Then each face would only be rendered on 1/8 of the frames, I could alter the resolution as needed, AND everything would move together on step #8, avoiding the seams @angrypenguin pointed out.
     
    Last edited: Apr 4, 2016
  44. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,516
    How do coroutines help here?
     
  45. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,541
    You would set the camera position in step #1 and nothing would move until step #8, which means that in step #2 - #7 all the objects rendered would line up perfectly.

    Something like:
    Code (csharp):
        public bool isOn = true;
        public Camera mainCamera;

        void Start () {
            StartCoroutine (DistantCameraUpdate (mainCamera.transform));
        }

        // One skybox face is rendered per frame; nothing moves until
        // AnimateDistantObjects() on the last step.
        IEnumerator DistantCameraUpdate (Transform target) {
            while (isOn) {
                SetPosition ();
                yield return null;

                RenderFront ();
                yield return null;

                RenderLeft ();
                yield return null;

                RenderRight ();
                yield return null;

                RenderTop ();
                yield return null;

                RenderBottom ();
                yield return null;

                RenderBack ();
                yield return null;

                AnimateDistantObjects ();
                yield return null;
            }
        }
     
    Last edited: Apr 4, 2016
  46. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,516
    I still don't see how a coroutine is helping. If you don't want it to move then... don't move it?

    The code does look nice, I'll admit.
     
  47. Frednaar

    Frednaar

    Joined:
    Apr 18, 2010
    Posts:
    153
    Yes, this is what I was thinking, the whole skybox should align perfectly as long as the camera does not move. You might have some distortion at the boundaries between the skybox and the normally rendered scene but as long as it is far away you should not notice it.

    Some possible improvements:
    - cache textures in memory, no need for saving to disk.
    - the whole skybox renderloop could vary based on player distance moved (i.e. every 10 meters)
    - set the texture size as a public variable (helps experimenting with performance)
    - you could nest skyboxes using layered cameras (like parallax backgrounds in 2D games) with nested near/far clipping planes... not sure it's worth the effort, though...
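
    The distance-based trigger mentioned above could be sketched like this (hypothetical names; RenderSkyboxFaces() stands in for whatever actually re-renders the cubemap):

    Code (csharp):
        public Transform player;
        public float rerenderDistance = 10.0f;  // metres moved before a refresh
        private Vector3 lastRenderPos;

        void Update () {
            // Only re-render the pseudo-skybox once the player has moved far
            // enough for the parallax error to become noticeable.
            if (Vector3.Distance (player.position, lastRenderPos) > rerenderDistance) {
                lastRenderPos = player.position;
                RenderSkyboxFaces ();  // hypothetical helper
            }
        }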
     
  48. makeshiftwings

    makeshiftwings

    Joined:
    May 28, 2011
    Posts:
    3,350
    I don't know how far you've got, but I ran into lots of problems with image effects like SSAO, AA, and Bloom, pretty much everything really. They make use of the depth buffer or the full screen of pixels, and there were endless issues trying to get them to look right when pasting two different camera images together. Or getting them to look right would require running the effect twice each frame, once on each camera, which cost more performance than whatever it was I was trying to save. I had lots of issues with transparent objects like trees on terrain and water as well. In the end, I found that sticking with just one main camera made it much easier to drop in new shaders or camera effects, and that using LODs for distant objects was good enough for performance.
     
    Kiwasi, Not_Sure and angrypenguin like this.