
Is this technique possible?

Discussion in 'General Graphics' started by ZeHgS, Oct 18, 2018.

  1. ZeHgS

    ZeHgS

    Joined:
    Jun 20, 2015
    Posts:
    117
    Hi guys! The image below is from Trine 2:

    [image: Trine 2 screenshot]
    Pretend an entire game takes place in this room. The only dynamically moving things would be the characters and their shadows; every other object moves only in a very limited, repeating manner (for example, lights flickering, chains jangling, etc.). Therefore, if the entire room could be pre-rendered as a short looping video, floors and on-screen objects included, the graphics could be much, much better than if the whole scene were rendered in real time, right? My questions are:

    1. Is it possible to have a camera that is set up in Unity with exactly the same angle, FoV and position as the camera when the scene was pre-rendered in, say, 3DS Max using VRay? Then could extremely low poly 3D stand-ins for the pre-rendered floor, obstacles, items, etc. be placed exactly where they should be in the scene for collision detection with the dynamically moving things but remain invisible to this camera?

    2. Could the characters and dynamically moving things then be rendered normally in real time perfectly on top of the pre-rendered 2D video of the scene, thus maintaining size perspective in case of an angled room and movement in multiple dimensions (for example if the room were a tunnel that moves into the screen)? In a way, it's kind of like when cartoons such as Family Guy are drawn for certain sketches on top of actual video footage and real life is mixed with cartoon characters. Is this common practice in games?

    3. How difficult would this be to implement in Unity?

    4. Since the entire background would basically consist of a video, how much graphics processing power could this technique theoretically free up for rendering the moving characters, compared to rendering the whole scene normally in real time and using every available resource to come as close as possible to the VRay pre-render in visual quality and photorealism?

    Thanks a lot!
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    1. Yep, assuming you render your background with a basic pinhole camera (i.e. no lens distortion modelling).
    2. Yep, or at least it was. It was super common for much of the late '90s and early '00s, the first several Resident Evil games for example, and it's still common for some mobile games. It's also still used today for special effects like explosions, water splashes, or landslides in AAA game cutscenes; Tomb Raider, Uncharted, etc. use this frequently. Some recent isometric RPGs do this too, like Pillars of Eternity.
    3. Not very. Place a camera-facing plane in the background with a movie texture on it, done.
    4. So, it saves a ton of rendering performance, but also creates lots of problems.
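
    For point 3, here's one way to sketch the camera-facing background quad; this is an illustration, not a drop-in solution — the class name is made up, and you'd want an unlit material on the quad so scene lighting doesn't affect the video:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: a background quad, parented to the camera, that plays the
// pre-rendered loop. The quad sits near the far plane and is scaled
// to exactly fill the view frustum at that distance.
public class PrerenderedBackground : MonoBehaviour
{
    public Camera mainCamera;       // must match the offline render's FoV/position
    public VideoClip backgroundClip;

    void Start()
    {
        var quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quad.name = "BackgroundQuad";
        Destroy(quad.GetComponent<Collider>()); // visual only, no collision

        // Parent to the camera, pushed out near (but inside) the far plane.
        float dist = mainCamera.farClipPlane * 0.95f;
        quad.transform.SetParent(mainCamera.transform, false);
        quad.transform.localPosition = new Vector3(0f, 0f, dist);

        // Scale so the quad fills the frustum at that distance.
        float height = 2f * dist * Mathf.Tan(mainCamera.fieldOfView * 0.5f * Mathf.Deg2Rad);
        quad.transform.localScale = new Vector3(height * mainCamera.aspect, height, 1f);

        // Loop the pre-rendered video onto the quad's material.
        var player = quad.AddComponent<VideoPlayer>();
        player.clip = backgroundClip;
        player.isLooping = true;
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = quad.GetComponent<Renderer>();
        player.targetMaterialProperty = "_MainTex";
    }
}
```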

    The biggest problem is shadows and lighting in general. Both for the real time objects receiving lights and shadows from the scene, and the prerendered backgrounds receiving shadows from the realtime objects.

    You can place real-time lights in the scene to mimic direct lighting, and have invisible low-poly shadow casters where necessary, but that doesn't solve the problem of Unity having very different light attenuation from any other rendering software out there. You can tweak the brightness and ranges to get something close, but it'll always be a little off. There's also the problem of ambient lighting and reflections. For reflections you'd have to render out custom cubemap reflection probes from your software and place them in the scene. For ambient lighting you'd either have to use custom shaders that use the reflection probes for ambient lighting, or bake your own light probes manually from script.
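
    Feeding an offline-rendered cubemap into a reflection probe might look something like this (a sketch; the probe size and the assumption that the cubemap has been imported as a Cubemap asset are illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: assign a cubemap exported from the offline renderer (e.g. VRay)
// to a ReflectionProbe set to Custom mode, so real-time characters pick
// up reflections that match the pre-rendered background.
public class CustomReflection : MonoBehaviour
{
    public Cubemap offlineCubemap; // assumed: exported and imported as a Cubemap

    void Start()
    {
        var probe = gameObject.AddComponent<ReflectionProbe>();
        probe.mode = ReflectionProbeMode.Custom;
        probe.customBakedTexture = offlineCubemap;
        probe.size = new Vector3(20f, 10f, 20f); // sized to cover the room
    }
}
```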

    For shadowing the background you'd need a decent approximation of the ground plane to cast the shadows on, along with the same real-time lights you need for the real-time objects. That, or you need to render out your own custom shadows for the characters. For them to blend real-time shadows properly into the pre-rendered backgrounds you'd need to render out your backgrounds with the albedo, ambient lighting, and direct lighting separated, or render out masks for any lights you want to receive real-time shadows. Essentially you'd need to do what Unity does for deferred and baked lighting.
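
    The invisible shadow casters mentioned above are simple to set up — Unity has a built-in mode for rendering only a mesh's shadow. A minimal sketch, attached to each low-poly proxy that duplicates geometry already baked into the background:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: make a low-poly stand-in mesh invisible to the camera while
// still casting real-time shadows from the scene's real-time lights.
public class InvisibleShadowCaster : MonoBehaviour
{
    void Start()
    {
        // The mesh itself is never drawn, but its shadow still is.
        GetComponent<MeshRenderer>().shadowCastingMode =
            ShadowCastingMode.ShadowsOnly;
    }
}
```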

    And none of this solves any problems you have if you intend to have anything in the foreground or otherwise in front of the realtime objects.


    Or, if what you want really is something cartoony, where the character lighting doesn't have to match the background, then none of this is a problem. Maybe have a fixed ambient color you override per camera, and use blob shadows.
     
    ZeHgS and richardkettlewell like this.
  3. ZeHgS

    ZeHgS

    Joined:
    Jun 20, 2015
    Posts:
    117
    Thank you so very much for your excellent and thorough reply! I'm very glad this is possible, it would be perfect for my game. Thanks a lot!
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    A random tip: when you build your simplified collision geometry, do it in the real rendered scene, and export it along with the cameras used to render the scene. Unity will automatically create cameras in the Unity scene hierarchy with matching settings (as best it can). You can even animate your cameras if you don't want your background cameras to be totally static, and export that animation to drive the real-time cameras, though you may need to do some additional script-side work to keep the video and the animation in sync.
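
    One approach to that sync work is to drive the camera animation directly from the video's clock, so drift can't accumulate between the two. A sketch, assuming a legacy Animation component for manual sampling and that the clip and video were authored at the same length; the clip name is made up:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: each frame, sample the imported camera animation at the
// background video's current time instead of letting it play freely.
public class CameraVideoSync : MonoBehaviour
{
    public VideoPlayer backgroundVideo;
    public Animation cameraAnimation;      // legacy Animation, for manual Sample()
    public string clipName = "CameraMove"; // assumed clip name

    void LateUpdate()
    {
        if (!backgroundVideo.isPlaying) return;
        AnimationState state = cameraAnimation[clipName];
        // Wrap the video time into the clip's length for looping videos.
        state.time = (float)(backgroundVideo.time % state.length);
        state.enabled = true;
        state.weight = 1f;
        cameraAnimation.Sample(); // pose the camera at exactly that time
        state.enabled = false;
    }
}
```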
     
    ZeHgS likes this.