C# Method call: how big is the overhead?

Discussion in 'Scripting' started by gian-reto-alig, Jan 24, 2014.

  1. gian-reto-alig

    gian-reto-alig

    Joined:
    Apr 30, 2013
    Posts:
    756
    At the moment I am starting to rewrite some hacked-together prototype JS scripts into nicer-looking and faster-running C# scripts for further development.

    As the two requirements are to make the scripts both easier to read and maintain and faster at the same time, I wonder about the overhead of method calls in C#/Unity.

    As far as I know there is always a small overhead to method calls in all languages. Personally I have only dealt with object-oriented business coding before, where you normally don't waste a single thought on an overhead that small.

    But I imagine that in the world of game programming (and realtime computing as a whole), small overheads like that might add up quickly. So my questions to the community:


    - How big is the overhead of calling a function/method in Unity as opposed to inline code (the same code run in the same block)?
    - Is it advisable to go fully OO on objects that will become performance bottlenecks later on, or should this code specifically be as inline and monolithic as possible to speed it up?
    - Is something like this good or bad practice: receive continuous inputs like "Left" or "Right" in one class, then call a method on a different class to steer the moving object (at 60 or 30 FPS)? Or would it be a better idea to receive the inputs in the class where the steering is done in the first place?


    Thanks for any input!

    Gian-Reto
     
  2. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,544
    - I doubt you could even quantify it under tens or hundreds of thousands of calls. If you were making a huge game for an old platform like the PSP, I would worry about that. However, as it is now, you'll bottleneck in a thousand other places before noticing method call overhead.

    - For now, you should focus on readability and maintainability.

    - Depends on your design. Some people have a kind of Input Manager that pre-chews the input data, in-game and in menus. Frankly, it's not really important, as long as it's easy to maintain.
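    A minimal sketch of that input-forwarding idea, kept Unity-free so it stands alone (all class and method names here are mine, not an established pattern):

```csharp
using System;

// Hypothetical sketch: one class receives "Left"/"Right" input once per
// frame and forwards it to a separate steering class via an ordinary
// method call. At 30-60 calls per second the call overhead is negligible.
public class Steering
{
    public float Heading { get; private set; } // in degrees

    public void Turn(float direction, float degreesPerSecond, float deltaTime)
    {
        Heading += direction * degreesPerSecond * deltaTime;
    }
}

public class InputForwarder
{
    private readonly Steering steering;

    public InputForwarder(Steering steering)
    {
        this.steering = steering;
    }

    // Called once per frame with the current axis value
    // (-1 = left, +1 = right, 0 = no input).
    public void OnAxis(float direction, float deltaTime)
    {
        if (direction != 0f)
            steering.Turn(direction, 90f, deltaTime);
    }
}
```

    In Unity you would typically read the axis in an `Update()` and pass it along the same way; the separation costs you one method call per frame, which is unmeasurable.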
     
  3. TheShane

    TheShane

    Joined:
    May 26, 2013
    Posts:
    136
    It's usually a waste to prematurely optimize things. If you want to know the exact overhead of a call, you could try writing a test to time it.
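    A minimal sketch of such a timing test, assuming plain .NET (the class and method names are mine, and a JIT may inline the small method anyway, so treat the numbers as an estimate, not a verdict):

```csharp
using System;
using System.Diagnostics;

// Rough micro-benchmark of the call-overhead question: the same work done
// inline versus through a method, timed with Stopwatch.
public static class CallOverheadTest
{
    const int Iterations = 10000000;

    public static long AddOne(long x)
    {
        return x + 1;
    }

    public static void Run()
    {
        long sum = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
            sum = sum + 1;                 // inline version
        sw.Stop();
        Console.WriteLine("inline: {0} ms (sum {1})", sw.ElapsedMilliseconds, sum);

        sum = 0;
        sw.Reset();
        sw.Start();
        for (int i = 0; i < Iterations; i++)
            sum = AddOne(sum);             // method-call version
        sw.Stop();
        Console.WriteLine("method: {0} ms (sum {1})", sw.ElapsedMilliseconds, sum);
    }
}
```

    The `sum` accumulator is there to keep the compiler from optimizing the loops away entirely.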

    For things you are doing once a frame (like the input), the difference is negligible. I wrote a particle system once where I was going through thousands of particles every frame, and there you can shave off some time by inlining functions and doing small optimizations. Even there it was probably ultimately limited by the fill rate (this was a project I was building for an iPad), but I don't remember the details offhand. Fragment shader code is where you really want to shave off as much as you can, because it is executed a million times or more per frame.

    I don't know if I have a point, but I suppose it's best to keep things as high-level and object oriented as you can until it becomes a real problem. Don't waste time fighting invisible dragons when there are going to be more than enough real ones later on in the project.
     
  4. gian-reto-alig

    gian-reto-alig

    Joined:
    Apr 30, 2013
    Posts:
    756
    Guys, you have been tremendously helpful. I'll stick to my OO knowledge and try to produce clean code, just the way future me will like it ;)

    True about the shader code. I was probably picking up good advice from people talking about shader coding.


    Thanks a lot!

    Cheers

    Gian-Reto
     
  5. jackmott

    jackmott

    Joined:
    Jan 5, 2014
    Posts:
    167
    The overhead is small. A good rule of thumb is not to bother inlining functions until the profiler indicates you should. Or, if you have method calls inside loops that run thousands of times, you might think about doing it there. Even then, check the profiler to see if it's actually a problem before making code harder to read and maintain.
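    To make the trade-off concrete, here is a hypothetical example (names and numbers are mine): a helper kept for readability, and the same loop with the helper manually inlined. You'd only write the second version once the profiler shows the first one is actually hot.

```csharp
using System;

public static class HotLoop
{
    // Readable helper: squared length of a 2D vector.
    public static float LengthSq(float x, float y)
    {
        return x * x + y * y;
    }

    // Readable version: one method call per element.
    public static float SumWithCalls(float[] xs, float[] ys)
    {
        float total = 0f;
        for (int i = 0; i < xs.Length; i++)
            total += LengthSq(xs[i], ys[i]);
        return total;
    }

    // Manually inlined version: same arithmetic, no call.
    public static float SumInlined(float[] xs, float[] ys)
    {
        float total = 0f;
        for (int i = 0; i < xs.Length; i++)
            total += xs[i] * xs[i] + ys[i] * ys[i];
        return total;
    }
}
```

    Both return the same result; the only question is whether the per-call cost ever shows up in the profiler, and for most loops it won't.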
     
  6. Jasper-Flick

    Jasper-Flick

    Joined:
    Jan 17, 2011
    Posts:
    843
    Method call overhead and cache utilization become issues when you're working with loops that run very many times: typically when doing low-level stuff per vertex, per particle, or per pixel, or when working with thousands of objects. In practically all other cases the overhead is unnoticeable.
     
  7. LightStriker

    LightStriker

    Joined:
    Aug 3, 2013
    Posts:
    2,544
    Future you will thank you later.
     
  8. Dustin-Horne

    Dustin-Horne

    Joined:
    Apr 4, 2013
    Posts:
    4,562
    Even with this example, one thing to think about is "does my function get called recursively?". Each function call allocates a frame on the stack. If you have a loop that calls the same function, you're unlikely to run into an issue, because each call's frame is popped from the stack when the call returns. However, if a function calls itself recursively, each call still allocates a new frame, but the difference is that no call can be popped until the calls it makes internally have returned as well.

    So to sum it up, I'll use the following not-very-useful examples. :)

    Code (csharp):

        public void OuterFunction()
        {
            for(var i = 0; i < 10000; i++)
            {
                InnerFunction(i);
            }
        }

        public void InnerFunction(int index)
        {
            //Do some stuff related to the index

            return; //not necessary, just symbolic to show end of function for this demo
        }
    Code (csharp):

        public void OuterFunction()
        {
            var i = 0;
            InnerFunction(i);
        }

        public void InnerFunction(int index)
        {
            //Do some stuff related to the index

            if(index < 10000)
            {
                InnerFunction(++index);
            }

            return; //not necessary, just symbolic to show end of function for this demo
        }
    Now you see, in the first example, OuterFunction runs a loop and calls InnerFunction, passing it an index. In this case it runs 10,000 times, and every call of InnerFunction goes out of scope and can be popped off the stack when it returns.

    In the second example, InnerFunction checks the value of the index and, if it's lower than the threshold (10,000), calls itself again with the incremented value of "index". The difference is, OuterFunction first calls InnerFunction with a value of "0", and InnerFunction is allocated on the stack. Then InnerFunction calls itself with a value of "1", which then calls itself with a value of "2", and so on. No instance of InnerFunction can be popped from the stack until all 10,000 have executed. Furthermore, depending on how you're using variables inside those functions, they may not be able to go out of scope either, and that's on top of the 10,000 simultaneous copies of the "index" variable you have floating around. If that happened to be a larger struct, you'd be eating up extra memory in a hurry.

    So this thread wasn't about recursive functions, but I wanted to point out the difference, as it's worth keeping in mind when you're coding and thinking about function call performance. For the most part you don't have to worry about recursion... but if you start getting into deep recursive calls, or large value-type variables that you pass around (or even deep recursion with reference types, which duplicates 32-bit pointers across calls), you might think about how you're structuring it.
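    If deep recursion ever does become a concern, a linear recursion like the example above can usually be rewritten as a plain loop, so only one stack frame is ever live. A sketch (the summing body is a stand-in for "do some stuff"; for non-linear recursion such as tree traversal, an explicit `Stack<T>` plays the same role):

```csharp
using System;

public static class RecursionToLoop
{
    // Recursive version: one stack frame per index, all live at once
    // until the deepest call returns.
    public static long SumRecursive(int index, int limit)
    {
        long result = index;                         // stand-in work
        if (index < limit)
            result += SumRecursive(index + 1, limit); // one frame per call
        return result;
    }

    // Iterative version: same work, a single stack frame.
    public static long SumIterative(int index, int limit)
    {
        long result = 0;
        for (int i = index; i <= limit; i++)
            result += i;
        return result;
    }
}
```

    Both compute the same sum; the iterative one just never grows the stack, no matter how large the limit is.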
     