
Profiling background threads

Discussion in 'Scripting' started by bekatt, Mar 10, 2018.

  1. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    The HierarchyFrameDataView.GetItemMergedSamplesColumnDataAsFloats API has been added to 2019.1.0a11. You can use it to get GC allocation size for a specific sample (and ResolveItemMergedSampleCallstack for a sample callstack in the editor).
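
    For illustration, a rough editor-side sketch of pulling per-sample GC allocation out of a captured frame with this API (assuming the 2019.1 HierarchyFrameDataView/ProfilerDriver signatures; the frame/thread indices, menu path and class name are just placeholders):
    Code (CSharp):
    using System.Collections.Generic;
    using UnityEditor;
    using UnityEditor.Profiling;
    using UnityEditorInternal;
    using UnityEngine;

    public static class GcAllocDumpSketch
    {
        // Walks the top-level samples of one captured frame and logs the
        // GC allocation bytes of each merged sample.
        [MenuItem("Tools/Dump GC Allocs For Last Frame")]
        static void Dump()
        {
            int frame = ProfilerDriver.lastFrameIndex; // any captured frame works
            int threadIndex = 0;                       // 0 = main thread in this sketch
            var view = ProfilerDriver.GetHierarchyFrameDataView(
                frame, threadIndex,
                HierarchyFrameDataView.ViewModes.MergeSamplesWithTheSameName,
                HierarchyFrameDataView.columnGcMemory, false);

            var children = new List<int>();
            view.GetItemChildren(view.GetRootItemID(), children);

            var allocs = new List<float>();
            foreach (var id in children)
            {
                allocs.Clear();
                view.GetItemMergedSamplesColumnDataAsFloats(
                    id, HierarchyFrameDataView.columnGcMemory, allocs);
                float total = 0f;
                foreach (var a in allocs) total += a;
                Debug.Log(view.GetItemName(id) + ": " + total + " B of GC alloc");
            }
        }
    }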

    Glad to hear that, many thanks! That was a team effort - passed this to the team.
     
    Alloc likes this.
  2. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Hi @alexeyzakharov ,

    I've been running into a bunch of "Non matching Profiler.EndSample (BeginSample and EndSample count must match): Profiler.Default" messages lately when I'm not careful about matching them. Would it be possible to get stack traces in those messages? It's a real pain finding the place where these occur without getting any hint at all :)

    Regards,
    Chris
     
  3. Arthur-LVGameDev

    Arthur-LVGameDev

    Joined:
    Mar 14, 2016
    Posts:
    57
    @Alloc -- We used to run into similar issues all the time and it was extremely annoying. Over time we ended up writing a few little static helper methods, mainly so that we could turn extra profiling on/off on-the-fly (and filter by category, etc).

    One of the other benefits though was that we could flip a boolean that, when on, would Push/Pop from a Stack<string> each time our ProfilerWrapper.Start()/Stop() was called, and then you can just log out that Stack<string> any time you get a mismatch to see what's left (and save the most-recently-popped if you have too many 'end' statements), etc. Kind of dumb, but it works and was <30 min of work, well worth doing. :)
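
    Roughly like the following (a from-memory sketch, not our exact code -- ProfilerWrapper, TrackScopes and the log format are placeholder names):
    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.Profiling;

    public static class ProfilerWrapper
    {
        // Flip this on only while hunting down begin/end mismatches.
        public static bool TrackScopes;
        static readonly Stack<string> s_OpenScopes = new Stack<string>();

        public static void Start(string name)
        {
            if (TrackScopes) s_OpenScopes.Push(name);
            Profiler.BeginSample(name);
        }

        public static void Stop()
        {
            Profiler.EndSample();
            if (!TrackScopes) return;
            if (s_OpenScopes.Count > 0) s_OpenScopes.Pop();
            else Debug.LogWarning("ProfilerWrapper.Stop() called without a matching Start()");
        }

        // Log this when you hit a "Non matching Profiler.EndSample" warning
        // to see which scopes are still open.
        public static void DumpOpenScopes()
        {
            Debug.Log("Open profiler scopes: " + string.Join(" > ", s_OpenScopes.ToArray()));
        }
    }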
     
  4. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Hi Arthur,

    of course doing that works and isn't even that complicated, but I wouldn't want to bloat profiling that much. That's a bunch of additional method calls (probably even virtual methods on the collection) and allocations ... exactly the opposite of what you want when profiling your code ;)
    So I'm hoping for an "official" solution here, as it should only impact performance when such an issue occurs, which would mean the data can't be taken seriously anyway.

    Regards,
    Chris
     
  5. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    Hi @Alloc and @Arthur-LVGameDev,

    Thanks for raising the usability issue!
    We can solve it with different approaches:
    1. Provide better information in the warning - we can add the surrounding markers to the ones that have a mismatch, to pinpoint the unbalanced begin/end more easily.
    2. Use the new profiling API - Unity.Profiling.ProfilerMarker. It is ~2x faster than Profiler.BeginSample/EndSample, lets you see the marker name in the warning, works with the Profiling.Recorder API to get data in a development player, and has an Auto method which can auto-scope the code.
    3. Add callstacks to markers. This has significant performance overhead and is usable only for diagnostic purposes, and in the Editor only.
    4. Add an option to DeepProfiler to instrument only selected assemblies. This might eliminate the need for manual code markup in some use cases.

    I guess the real problem is that the API you use for adding markers doesn't prevent copy-paste and scope mismatch errors. We think that the ProfilerMarker API can help with that.
    Would that work for you? Also, would an API like this
    Code (CSharp):
    using (Profiler.Auto("MyAwesomeUpdate"))
    {
       ...
    }
    help with manual markup?
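
    For reference, roughly how the existing Profiling.Recorder route mentioned in option 2 looks in a development player (a minimal sketch; the marker name and component name are just examples):
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Profiling;

    public class UpdateTimingReporter : MonoBehaviour
    {
        Recorder m_Recorder;

        void OnEnable()
        {
            // Recorder collects timing data for a named marker, also in development players.
            m_Recorder = Recorder.Get("MyAwesomeUpdate"); // example marker name
            m_Recorder.enabled = true;
        }

        void LateUpdate()
        {
            if (m_Recorder.isValid)
                Debug.Log("MyAwesomeUpdate: " + m_Recorder.elapsedNanoseconds + " ns over "
                          + m_Recorder.sampleBlockCount + " samples last frame");
        }
    }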
     
  6. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Hi @alexeyzakharov ,

    thanks for the reply :)

    Yeah, I think in a lot of cases this could help, as long as it actually shows the markers' labels ;)
    So, like, if you hit an END without a matching BEGIN, show the last two or so BEGINs' labels. Or even the whole stack? As this only happens when there's an issue with the profiling data anyway, I think adding overhead for the warning would not hurt anyone, and anything that's available for the dev to pinpoint why it happened would be good.

    That's a theoretical question, right? I could not find any "ProfilerMarker" class in the Profiling namespace documentation, nor an "Auto" method/property (assuming that is what the example refers to) in the Profiler class documentation.
    Currently I can only see Begin/EndSample and CustomSampler. CustomSampler might be faster, but it's annoying to work with as you have to pre-create the instances in a field in the class instead of just using them when needed. This clutters the code even more with profiling instrumentation :(
    If it worked like in your example, without any pre-instantiation of anything, that would be nice; combined with the "using" block, even better (even more so as this would also automatically make sure a begin/end pair doesn't span multiple frames in a coroutine).

    I think Editor-only would not be the issue; performance overhead could be, though. If you could turn it off and on, so that in a normal profiling session it would not hurt performance and you only turn it on when you hit such issues, it would be great.
    But, would it really hurt performance in the first place? I thought the creation of the warning was independent of the actual Begin/EndSample code, i.e. it is only executed when a mismatch happens. So couldn't just the warning-creation code generate the callstack when needed, so it would only hurt performance in those cases? Performance again doesn't matter if your profiling samplers are screwed up anyway ;)

    Might be useful in general in the future, but currently I try to avoid DeepProfiling. It's just such a performance hog itself that it mostly doesn't make sense to use it, especially with the AllocationCallstacks available to us now :)

    Yeah, we always wanted a disposable profiler marker :)
    As long as it doesn't cause allocations itself that would be damn nice.


    Regards,
    Chris
     
    alexeyzakharov likes this.
  7. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    I'm improving the message.

    ProfilerMarker has been added to 2018.3 as a replacement for CustomSampler to support profiling in Burst code.

    Agreed, CustomSampler and ProfilerMarker have the same problem - I'll add a non-allocating Profiler.Auto(string) to simplify the usage.

    We avoid doing any checks when emitting events for performance reasons. Instrumentation is very simple - it is basically a function call with two "if"s and a timestamp + marker id copy to the memory buffer. The warning itself is generated when we analyse the raw data on the Editor side, so we can't retroactively collect callstacks. Adding extra logic there might have an impact on the normal use case. Keeping the runtime part simple is the only way we can have 30M events per second. I need to run perf tests first to see if extra logic would have any impact.
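
    For anyone following along, a minimal sketch of how the 2018.3 ProfilerMarker usage looks (the marker and class names are just examples):
    Code (CSharp):
    using Unity.Profiling;

    public class EnemySystem
    {
        // One static readonly marker per scope; Begin/End are cheap and the marker
        // also works from Burst-compiled code.
        static readonly ProfilerMarker s_UpdateMarker = new ProfilerMarker("EnemySystem.Update");

        public void Update()
        {
            using (s_UpdateMarker.Auto()) // Auto() returns a struct scope, so no GC allocation
            {
                // ... work to be measured ...
            }
        }
    }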
     
  8. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Thanks @alexeyzakharov :)

    Ah, didn't expect it to be in the Unity namespace instead of UnityEngine like the other stuff :)

    Ok, that makes sense. I thought the message was basically generated as part of hitting Profiler.EndSample. Yeah, this should not affect the performance of profiling by default; if there's no other solution, maybe there could be a switch to turn this on to debug these issues?



    Also, any news on collecting allocation callstacks in the player? Currently we can either profile in the editor, with all the issues that has (especially getting a lot of useless data, as in the editor Unity code creates garbage that would not happen in the player, like GetComponent), or we can profile a player but get no detailed memory data :(

    Just had it again these days: thinking "damn, 4 MB garbage, what's that now?" only to find out that this garbage came from such Unity code and was nothing to worry about (and that was only possible by checking the callstacks; there's no marking in the profiler data that this is the case).

    Regards,
    Chris
     
  9. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Oh, and could the frame limit possibly be removed? Or changed manually? It looks like players can actually record more than 300 frames in the binary profiler files, but as I still have to load them into Unity's profiler window to iterate over and export the data to my format, this still limits me to the 300-frame max supported by Unity's profiler window.
     
    alexeyzakharov likes this.
  10. Arthur-LVGameDev

    Arthur-LVGameDev

    Joined:
    Mar 14, 2016
    Posts:
    57
    Just wanted to jump in here -- this thread has proven to be extremely helpful, and I've probably read through it >10 times over the past couple of months. That said, I'd still like to voice my desires & briefly explain my use-case:

    We really could use the ability to have the aggregate allocations in the hierarchy. We can see them in the "Timeline" view, and with the code posted here it sounds like we could write code to aggregate them, but I hate to spend too much time on tooling if this is right around the corner regardless.

    Right now we basically have to run our app with "everything main-thread" in order to get any aggregated allocation data; we do this because we want to figure out where the best bang-for-buck is in terms of spending our time to improve our app's performance/reduce overall allocations.

    We have a ton of "ThreadedJobs" (a small abstract class that we use for our thread work) that run, and it's where most of our allocation comes from -- many of these are quite small, but hundreds or even thousands of them will run. When viewed in the 'Timeline' view of the Profiler we just see a bunch of really small "dots" -- we can zoom in, and see that they're allocating, but it really only gives us somewhat of a boolean "this job allocates - yes/no" as far as helpfulness goes.

    What we really want to be able to do is see that "Frame 5 had XXX kilobytes of allocation" and then be able to see where the majority of that allocation was -- in many cases, it was spread across 100+ "ThreadedJobs", but often they were all of a single type! Right now the Timeline would just show us a bunch of 'small dots/dashes' -- but it would not convey the fact that perhaps 90% of the allocation came from e.g. CheckAbstractGraph jobs, which is really the key thing we're wanting to know.

    Basically we want to be able to have a named (or integer, or whatever) "start/stop" that we can call, from a thread -- multiple times / able to switch 'buckets' as a threadrunner in the pool picks up work -- and then to see a sum of allocation that occurred within each bucket during a given period of time.

    It sounds like the above would allow us to do that via the (private?) API mentioned in this thread -- but is this something that Unity is actively working on and, if so, any idea if it's on the brink or are we thinking late this year potentially? I'm just trying to avoid us spending a week+ writing it only to have it obsoleted via a built-in soon thereafter -- and obviously I'd prefer we spent our time on gameplay code anyways, but that's just how it goes sometimes. :)

    Thanks in advance for any guidance, and a big general thank you to all in this thread as well!
     
    alexeyzakharov likes this.
  11. Arthur-LVGameDev

    Arthur-LVGameDev

    Joined:
    Mar 14, 2016
    Posts:
    57
    They say a picture is worth 1k words, so figured I'd provide both. ;)

    Some Context -- Can probably skip this
    We have a "ThreadRunner" class which runs on a distinct thread; at game startup we instantiate N of these, each on their own thread, and we tell each of them 1 or more BlockingCollections to be watching. The runner grabs jobs that enter the 'queue', runs them, and drops them back into a 'finished' queue -- back on the main thread, at the end of LateUpdate, we have a ThreadManager which grabs those 'jobs' out of the finished queue & runs a final method on them to handle any final processing/sync-with-main-thread code. Most of these jobs are pretty short-lived, finishing within a frame or two, though occasionally some are much longer & could take as long as a few seconds.

    My issue is that we're seeing a bunch of allocation occurring that we are struggling to track down -- presumably it's occurring in one of these 'jobs' that is being run off the main thread, but tracking that down has proven very difficult thus far.

    Example Screenshot of what I'm seeing:
    https://imgur.com/a/U7Zj1fg

    Hierarchy view shows roughly just under ~200KB of allocation for the frame. The 'Memory' graph, however, shows 0.9MB -- so I'm 'missing', and need to track down, nearly 700KB of allocation.

    I've followed this thread closely, and I understand there's a way to grab some of this data from Unity internals (presumably via reflection or similar)? Is that the way to go here? What am I missing?

    Example of 'Timeline View'
    So I added some "Begin/EndThreadProfiling" calls (at thread startup [app startup] and thread shutdown [app exit]), and I instantiate a CustomSampler and, just before a task begins & right after it ends, call sampler.Begin()/End(). Now I let the game run for a bit & pull up the Hierarchy. The 2nd and 3rd screenshots on the Imgur link above are what I see (I also ensured that 'allocation callstacks' is on).
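
    The instrumentation looks roughly like this (a simplified sketch, not our real ThreadRunner code -- the class, queue and sampler names are placeholders):
    Code (CSharp):
    using System.Threading;
    using UnityEngine.Profiling;

    public class ThreadRunnerSketch
    {
        // Created on the main thread at startup, before the worker thread runs.
        readonly CustomSampler m_JobSampler = CustomSampler.Create("ThreadedJob");

        public void Run()
        {
            // Register this thread with the profiler so its samples show up
            // under a named group in the Timeline view.
            Profiler.BeginThreadProfiling("Job Threads", Thread.CurrentThread.Name ?? "Worker");
            try
            {
                System.Action job;
                while (TryGetNextJob(out job))
                {
                    m_JobSampler.Begin(); // one sample per job execution
                    job();
                    m_JobSampler.End();
                }
            }
            finally
            {
                Profiler.EndThreadProfiling();
            }
        }

        // Placeholder for pulling work from a BlockingCollection-style queue.
        bool TryGetNextJob(out System.Action job) { job = null; return false; }
    }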

    Re: the 3rd screenshot
    Zoomed in, it turns out that it isn't a "solid pink bar" -- it's actually a *whole bunch* of little tiny pink bars. Some (but not all) can be hovered, and then I get a call stack, which is great & somewhat helpful. Unfortunately there's absolutely no indication of how much alloc/RAM this pink bar actually represents, and worse, there's no aggregation.

    So I'm basically stuck looking for "the largest pink bar" and trying to visually sort them in my head, and solve all of them starting from the largest bars first. That's even worse than it sounds, because the call I'm hovering here might only be invoked once a frame -- meanwhile all the other pink bars may ALL be "MyBadMethod1()", which is single-handedly dumping the other 700KB of RAM on this frame.

    But I've got no way to actually know that to be the case (or not the case), because there's no way for me to get aggregate data here, even just a snapshot, at least not as far as I can tell...

    Really Simple Potential Solution...?

    If we could just get a dropdown to select which thread we're seeing on the Profiler's Hierarchy UI, that would be absolutely amazing. That is really all we need -- the goal for us is simply to be able to see the 'entire picture' via the Hierarchy view, even if we can only see 1/6th of that picture at a time; so long as we can choose which 1/6th of the picture we're seeing and switch between them, that'd completely solve our use-case. Right now there's basically just data that's completely missing, and the only way to see it is like reviewing log files one-by-one without any ability to GROUP BY...

    It sounds like the data is there for this already -- so... damn, I just really cannot emphasize enough how much time this would save us going forward (and that's to say nothing of how much time we've spent working around it thus far). I'd really strongly encourage exposing the data sooner rather than later, even if the "UX" is perhaps not ideal in its first incarnation. It would save us real time/dollars on day #1 and immediately improve our end-user product.
     
    alexeyzakharov likes this.
  12. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Yeah, that would be a big step forward as far as Unity's internal profiling viewer goes. Actually I'd like to see it showing all threads at once in the hierarchy, just as a new level (or two) at the top of the hierarchy. That's how I do it in our tool: the first level is the thread group, the second one is the thread, and under these we have the hierarchy that Unity's viewer shows as the top level.

    Unity's viewer is lacking a lot though, so I'm happy that we have the APIs to gather the data and do our own frontends, as that's a part that can be done by us without "distracting" the profiler team from improving the actual backend stuff that we obviously can't work on :)
     
    alexeyzakharov likes this.
  13. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    Thank you both @Alloc and @Arthur-LVGameDev so much for the feedback!
    I'll start answering :)

    I've added more information to the error message as a first step - hopefully this can provide better context.
    Code (CSharp):
    Missing Profiler.EndSample (BeginSample and EndSample count must match): Test Marker B
    Previous 5 samples:
        Test Marker B
        Test Marker A
        GC.Alloc
        HostView.OnLostFocus()
        GC.Alloc
    In the scope:
        Test Marker B
        Test Marker A
        EditorApplication.Internal_CallUpdateFunctions()
        SceneTracker
        ApplicationTickTimer
    Next I'll add a non-allocating Auto API, which might reduce or eliminate the errors. If those simple steps don't solve the issue, we can add an option to enable stack collection for the markers which are failing.

    No ETA on callstacks in players, unfortunately. It is already on our backlog though, and we need to land some prerequisites first.

    I understand the frustration. Perhaps the "Collapse EditorOnly Samples" option in the Hierarchy View might help you. GetComponent should be filtered out as an EditorOnly sample when "Collapse EditorOnly Samples" is toggled. That landed in 2018.3, and the same option is available for the crawler API. Editor-only code which is executed as part of runtime code during Playmode is marked and filtered out. Its GC data is excluded from the hierarchy (unless you profile the Editor).
    Currently we mark GetComponentNullErrorWrapper, Loading.CheckConsistency and Prefabs.MergePrefabs as such, so you can see that these are Editor-only. Let us know if you see anything else that should be filtered out.

    This is the #1 requested feature atm. Yes, we are going to fix it, though we are still discussing how exactly.
    I guess for you the key point here is to be able to iterate through a whole data set. I think for that kind of offline analysis a "reader" API is preferred - where you can just go through the file and extract all frames. I'll put that on our backlog.
     
  14. Arthur-LVGameDev

    Arthur-LVGameDev

    Joined:
    Mar 14, 2016
    Posts:
    57
    @alexeyzakharov -- Any thoughts re: dropdown on the Hierarchy view to select which thread is being viewed? Or perhaps a 'mask dropdown' to multi-select which thread(s) have their data shown in the Hierarchy? Or, as @Alloc suggested, having Hierarchy view always display data for all threads..? That'd work quite well for us, too.

    Our main issue right now is that the only way (by default/without writing editor code) to see total aggregate allocation is via the 'memory graph' -- and we just have to assume that when the graph shows more allocation than the Hierarchy totals show, all of the 'missing allocations' are coming from threaded work. At which point, without any aggregate data/hierarchy view for the threaded work, we've basically exhausted the options to actually profile it "as-is" & must resort to old-school debugging / trial-and-error tactics (i.e. putting thread work on the main thread to profile it & get aggregate #s, etc), or writing our own 'one-off' editor utilities using the APIs you've mentioned here...
     
  15. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Hi @alexeyzakharov ,

    nothing to thank me for. I'm making more work for you ;)
    It's really appreciated that you guys take feedback into account this much; I'd love to see this many replies on other topics too.


    Not sure how that would have to be read, but I suppose it becomes clearer when looking at one's own code ;)
    Either way, it sounds like it will help ... though:

    this should actually definitely fix any of these issues. No more creating fields in classes for custom samplers, no manual begin/end scoping (especially when there are multiple return paths out of the block you want to sample). This will definitely be a big plus!
    (I assume the C# compiler will completely remove the using-block when the call inside is compiled out when not in PROFILER_ENABLED mode?)


    That's probably among my top wishes for now, thanks for keeping that one on your list :)


    Still working with 2018.2, so I wasn't aware this was an option yet. Sounds good, will test it out once we've switched to Unity 2019.1 (and I've updated my export code for the new API :) ). Not aware of any other things that do this; I think the main one is really just the GetComponent one. Typically I've been looking for the "MissingComponentString" entry in the allocation's stacktraces to know it's irrelevant.
    This is the kind of trace I'm mostly facing:
    Code (csharp):
    0x00007FFC9BDA9578 (mono) [object.c:4515] mono_string_new_size
    0x00007FFC9BDAA9F9 (mono) [object.c:4474] mono_string_new_utf16
    0x00007FFC9BDAAB31 (mono) [object.c:4566] mono_string_new
    0x0000000140BE0387 (Unity) MonoStringFormat
    0x0000000140BDFFB3 (Unity) MissingComponentString
    0x0000000140BEB210 (Unity) ScriptingGetComponentsOfTypeFromGameObjectFastPath
    0x00000001418D91B8 (Unity) Component_CUSTOM_GetComponentFastPath
    0x000000003D924DE3 (Mono JIT Code) (wrapper managed-to-native) UnityEngine.Component:GetComponentFastPath (System.Type,intptr)
    0x000000003D924C2C (Mono JIT Code) [Component.bindings.cs:42] UnityEngine.Component:GetComponent<object> ()
    Is the ViewModes enum a flags enum? I.e. can we combine MergeSamples and HideEditorOnlySamples?

    In the editor or just if we profile the editor code?


    (Re: more than 300 frames)
    Yeah, that sounds good. And maybe it's what I was wondering today too: Would it be possible to basically have the HierarchyFrameDataView class and whatever it depends on be in a separate .NET DLL that could be loaded by a custom tool to directly load Unity's profiling files (from players or when saving from the editor) without having all the other UnityEngine/UnityEditor stuff loaded in? That would be freaking amazing :)


    Regards,
    Chris
     
  16. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    Profiler markers can serve this purpose as well IMHO. I guess having something like a bottom-up hierarchy view would help find those.

    The API is no longer internal/undocumented - it is exposed and documented in 2019.1 - https://docs.unity3d.com/2019.1/Documentation/ScriptReference/Profiling.HierarchyFrameDataView.html

    I think right now we have the profiler backend in almost good shape. (Though there are still a couple of APIs planned to become available later this year - an async task API to mark up arbitrary time regions across frames/threads, an execution flow API for task scheduling/execution analysis, and a bookmark API for gameplay event markup.)
    And what you are saying is very much aligned with our future plans - make the profiler more intelligent and provide answers to simple questions. This formulates the problem really well - it's hard to answer that kind of question right now. We are working on something similar for CPU data, but that won't answer your simple question "Which of my scopes or functions allocate the most GC?".

    And I really like your request. If that can help you, I'll just add it as a temporary solution (though it can be released no sooner than 2019.3).
     
  17. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    And to recap regarding GC allocation analysis - are these the questions you ask when fishing for GC.Alloc markers?

    1. What are the top N of my markers which contribute to the most allocations count for frame X?
    2. Or frames range [X, Y]?
    3. And which markers allocate most bytes?
    4. If callstacks are available, which of my functions allocate the most?
    5. Is this allocation made by Unity? Can I filter it out?
    6. Is this allocation caused by boxing?
    7. Can I record only allocations with callstacks without other markers?
    8. Can I click the callstack and go to the code?
     
  18. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    Yes, that is the plan we have as well. It has some UX and performance problems though and thus is being discussed.

    Thanks for understanding :) As mentioned above, we are still working on some backend APIs to complete the feature set for analysers.
     
  19. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    I guess that's what Unity is about, and what I and others share - making a tool which allows you to spend more time on creative work :)
    So if there is something that can help/automate your work, let us know and we'll try to provide a solution without creating an additional problem.

    By default "using" statement is impossible to remove in release as it generates function calls. The question is how far we can get with aggressive inlining. I'll make sure it has as little overhead as possible in release. I think we can do that with what is on compiler tech roadmap.
     
  20. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    I was wondering whether it would be compiled out by default if the using statement is empty in the first place. Assuming your "ProfilerMarker.Auto" used the "Conditional(PROFILER_ENABLED)" attribute, a non-profiling build would basically end up like this:
    Code (CSharp):
    using (/* nothing here as the compiler would not emit the call to ProfilerMarker.Auto */) {
        // some stuff
    }
    Never tried it, but I'd *hope* that the compiler would just omit the whole using block in that case :)
     
  21. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Also, the most important question in my previous post was the last part:
    would that be possible? I suppose it's currently backed by native code in the editor, so it might not be easy, but it would be damn helpful :)
     
  22. alexeyzakharov

    alexeyzakharov

    Unity Technologies

    Joined:
    Jul 2, 2014
    Posts:
    171
    Yes, this one is now wrapped with the "GetComponentNullErrorWrapper" marker, which is Editor-only and can be filtered out in the Editor.

    Yes, it is [Flags].
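
    So combining them should work along these lines (a small sketch, assuming the 2019.1 names):
    Code (CSharp):
    using UnityEditor.Profiling;
    using UnityEditorInternal;

    static class ViewModeSketch
    {
        // Because ViewModes is a [Flags] enum, the values can be OR'ed together.
        static HierarchyFrameDataView GetMergedRuntimeOnlyView(int frame, int threadIndex)
        {
            return ProfilerDriver.GetHierarchyFrameDataView(
                frame, threadIndex,
                HierarchyFrameDataView.ViewModes.MergeSamplesWithTheSameName
                | HierarchyFrameDataView.ViewModes.HideEditorOnlySamples,
                HierarchyFrameDataView.columnTotalTime, false);
        }
    }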

    I meant that when you look at a capture recorded in the Editor's Playmode, you can filter out samples which are Editor-only. Editor-only code is code which is executed as part of the game during Playmode but doesn't exist in Player builds. That's typically diagnostic, verification, etc. code, such as the null wrapper.

    At some point in the future, yes. For now some data is coupled to other systems in the Editor, but generally we are moving towards having the profiler as a package.

    No, "using" statement is tricky. The best default result you can get is extra 2 calls to empty methods. What I want is to have those calls optimized out. (Note, that with IL2CPP/Burst it is optimized out, but with mono jit not really and requires some work to do).
     
    Alloc likes this.
  23. Alloc

    Alloc

    Joined:
    Jun 5, 2013
    Posts:
    212
    Oh yeah, the Conditional attribute doesn't work for non-void methods :(
    Profiler instrumentation in code not affecting the performance of non-development players would be quite important. I suppose especially the extra method calls would be more expensive than they should be for performance-critical code.
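
    For reference, a minimal sketch of that limitation (hypothetical names, reusing the PROFILER_ENABLED symbol from the earlier example):
    Code (CSharp):
    using System.Diagnostics;

    static class ConditionalLimitationSketch
    {
        // Fine: [Conditional] methods must return void, so calls to this one
        // (including argument evaluation) are stripped when the symbol is undefined.
        [Conditional("PROFILER_ENABLED")]
        public static void BeginSample(string name) { /* ... */ }

        // Not allowed: a disposable scope for a "using" block has to return a value,
        // and the compiler rejects [Conditional] on non-void methods.
        // [Conditional("PROFILER_ENABLED")]
        // public static System.IDisposable AutoScope(string name) { ... }
    }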