
Calculate the rendering time for every gameobject programmatically

Discussion in 'General Graphics' started by mKhandekar, Apr 15, 2019.

  1. mKhandekar

    Joined:
    Sep 20, 2018
    Posts:
    5
    I would like to calculate the time required to render every gameobject in a scene. Is there a way to do so?
    Or do I have to calculate the time required to render the entire scene?
    What would be the methods which would mark the exact beginning and end of this process?
     
  2. bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,348
    Getting timings for how long things take on the CPU and GPU is possible. Unity's profiler does a good job of giving you that information on the CPU, and for the GPU there are external profiling tools like Microsoft's PIX, Nvidia's Nsight, Intel's GPA, or one of the many AMD tools (I can never keep track of which is the current one you're supposed to use). But isolating the cost of a single "gameobject" is more complicated.
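
    For the CPU side, a CustomSampler plus a Recorder lets you read a script's timing back programmatically rather than just eyeballing the profiler window. A minimal sketch (the class and sample names here are made up, and this only measures code you explicitly wrap, not the actual rendering work):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Profiling;

    // Wraps this object's per-frame work in a custom profiler sample so it
    // shows up as its own entry in the profiler, and reads the measured
    // CPU time back via a Recorder.
    public class PerObjectCpuTiming : MonoBehaviour
    {
        CustomSampler sampler;
        Recorder recorder;

        void Start()
        {
            sampler = CustomSampler.Create("MyObject.Update");
            recorder = sampler.GetRecorder();
            recorder.enabled = true;
        }

        void Update()
        {
            sampler.Begin();
            // ... this object's actual per-frame work goes here ...
            sampler.End();

            // The Recorder reports the time accumulated during the previous frame.
            Debug.Log("CPU ms last frame: " + recorder.elapsedNanoseconds / 1e6f);
        }
    }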

    Rendering a single object is not a single event that can be measured alone. In Unity's default forward rendering path with some basic post processing enabled and multiple lights, a "single" opaque object may account for >10 separate draw events issued from the CPU to the GPU, each with a CPU-side and a GPU-side cost. So that's 20 individual timings you'd need to capture, right? Not really, as that still doesn't account for things like how long it took to update the object's transforms, run any scripts on that object, or cull it and gather its lighting data on the CPU.

    On the GPU, even a basic shader has at least two stages, the vertex and fragment shaders, which run separately, so you'd need to time those as well. Each runs hundreds or thousands of times depending on the number of vertices and the number of pixels covered, but GPUs also have a ton of parallelism, so it's not as simple as "number of vertices * vertex shader invocation time + number of pixels covered * fragment shader invocation time" for each draw.

    Plus there are a lot of costs that are harder to confine to a single object. Unity's directional shadows, for example, are resolved in a deferred way where the individual objects of the scene aren't known, only a buffer of the visible scene depth. The same goes for any post process effects, which similarly run on texture data generated by the scene's output and other hidden passes, and thus have no knowledge of individual objects.

    You could just try toggling an object on and off and comparing the whole frame time, but that might end up giving you a negative time, as it's possible that adding an object to a scene actually makes things render faster. For example, an opaque object with a simple shader that covers a large portion of the screen may occlude more expensive shaders, making everything render faster on the GPU. Some post processing effects are also highly variable in cost depending on things like how fast an object is moving (motion blur), how far away it is (depth of field, ambient occlusion), or how much contrast there is in its texture and the current lighting (many AA techniques).
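
    If you want to try the toggle-and-compare approach anyway, a rough sketch looks like this (the "target" field and sample count are placeholders; the result conflates CPU and GPU cost with everything else in the frame, and can legitimately come out negative for the reasons above):

    Code (CSharp):
    using System.Collections;
    using UnityEngine;

    // Averages frame time over N frames with the target active, then N
    // frames with it inactive, and reports the difference.
    public class ToggleCostEstimate : MonoBehaviour
    {
        public GameObject target;
        public int samplesPerState = 300;

        IEnumerator Start()
        {
            float withTarget = 0f, withoutTarget = 0f;

            target.SetActive(true);
            for (int i = 0; i < samplesPerState; i++)
            {
                yield return null;
                withTarget += Time.unscaledDeltaTime;
            }

            target.SetActive(false);
            for (int i = 0; i < samplesPerState; i++)
            {
                yield return null;
                withoutTarget += Time.unscaledDeltaTime;
            }

            // Average frame time difference, converted to milliseconds.
            float deltaMs = (withTarget - withoutTarget) / samplesPerState * 1000f;
            Debug.Log("Approx. frame cost of " + target.name + ": " + deltaMs + " ms");
        }
    }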



    The TLDR is it's a very complicated question, the answer to which depends on what exactly you mean by "rendering time".


    One thing: when running in VR, some platforms keep track of the actual GPU time spent, and Unity has a way of requesting that data:
    https://docs.unity3d.com/ScriptReference/XR.XRStats.TryGetGPUTimeLastFrame.html

    That will give you the actual time the GPU spent rendering the last frame, from the first command after the previous present to the next present (i.e. sending the image off to the HMD). Unfortunately you can only use that while doing XR, as it's something each individual XR SDK handles rather than something Unity itself does. Something like PIX will probably give you the best data about the overall frame time, as well as individual events, but again, narrowing down a single gameobject's impact will be next to impossible using that data alone. Plus, using PIX or any other GPU analyzer adds some amount of overhead, making everything a little slower.
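
    Querying that API looks something like this (a minimal sketch; the class name is made up, and TryGetGPUTimeLastFrame simply returns false if the active XR SDK doesn't supply the stat):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.XR;

    // Polls the XR plugin's reported GPU frame time each frame and logs
    // it when the running XR SDK provides the value.
    public class XRGpuTime : MonoBehaviour
    {
        void Update()
        {
            if (XRStats.TryGetGPUTimeLastFrame(out float gpuTime))
                Debug.Log("GPU time last frame: " + gpuTime);
        }
    }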
     