Burst compiler in editor script?

Discussion in 'Scripting' started by TOES, Sep 21, 2020.

  1. TOES

    Joined:
    Jun 23, 2017
    Posts:
    134
    I have some very heavy scripts that I run in the editor. Is it possible to increase the performance of these scripts using the Burst compiler? All the samples I have seen are related to regular play mode.
     
  2. Yoreki

    Joined:
    Apr 10, 2019
    Posts:
    2,606
    Not sure about ECS, but Jobs and Burst should be usable. Did you try? I'm pretty sure Unity will give you an error if that's for some reason not allowed in editor scripts. Jobs are basically just multithreading though, and Burst is a custom compiler that makes that faster by exploiting the job constraints. So as long as you can define your heavy workloads in a way the job system accepts, you should be fine. I couldn't find anything online though, except some requests for supporting pure ECS / DOTS editor scripts. So again: try it :)
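    Something like this minimal sketch should work (assuming the Burst, Collections and Jobs packages are installed; the class name and menu path here are just placeholders): a Burst-compiled IJobParallelFor scheduled from an editor menu item, outside play mode.

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;
    using UnityEditor;
    using UnityEngine;

    public static class HeavyEditorJobRunner
    {
        [BurstCompile]
        struct SquareJob : IJobParallelFor
        {
            public NativeArray<float> Values;

            public void Execute(int index)
            {
                // Stand-in for the real heavy per-element work.
                Values[index] *= Values[index];
            }
        }

        [MenuItem("Tools/Run Burst Job In Editor")]
        static void Run()
        {
            var data = new NativeArray<float>(1000000, Allocator.TempJob);
            for (int i = 0; i < data.Length; i++) data[i] = i;

            // Schedule and complete synchronously; this works outside play mode too.
            new SquareJob { Values = data }.Schedule(data.Length, 64).Complete();

            Debug.Log("Last result: " + data[data.Length - 1]);
            data.Dispose();
        }
    }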
     
    Last edited by a moderator: Sep 21, 2020
  3. Yury-Habets

    Unity Technologies

    Joined:
    Nov 18, 2013
    Posts:
    1,167
    @TOES are your heavy scripts written in HPC# (the High Performance C# subset that Burst compiles), and is the heavy part actual calculation over large arrays of data? Or is it parsing XML/JSON or deserializing data? You could potentially do some hacking to get this working, but before diving in, make sure your scripts (a) are compatible with Burst and (b) will actually benefit from Burst.
     
  4. TOES

    Joined:
    Jun 23, 2017
    Posts:
    134
    The editor scripts are working on very large arrays of data, and processing everything takes on the order of five minutes.

    I haven't heard of HPC# before, but it looks interesting; I will look into it.

    One of the parts that is very slow is sorting large arrays. Would Array.Sort be compatible with, or benefit from, the Burst compiler at all? I could also implement my own sort if necessary.
     
  5. Yoreki

    Joined:
    Apr 10, 2019
    Posts:
    2,606
    The Burst compiler is not some magic thing that makes code go faster. It was primarily designed to be used with DOTS, or more specifically to work efficiently with Jobs. It achieves its speedups through optimizations that are only possible because of the constraints this puts on the developer, which lets the compiler, for example, ignore some of the otherwise common concurrency problems. And of course there is the data-oriented programming itself.
    So first and foremost, you'd have to use jobs to get any noticeable difference. For that you will have to use native containers, so no managed arrays and thus no Array.Sort. As far as I know there is a Sort method for native arrays though, so that's not a huge difference.
    The problem is that Array.Sort is already pretty fast. If it really takes up a long time for you, you'd only see a noticeable difference by multithreading the sorting algorithm itself, which is also the main advantage of using jobs and where Burst would make the biggest difference. So you'd have to look into writing a multithreaded sorting algorithm (pretty sure merge sort can be implemented that way) and implement it using jobs.
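    Just wrapping the built-in native-array Sort in a job would look roughly like this (the job name is made up; it assumes the Sort() extension from Unity.Collections, which is single-threaded but gets Burst-compiled), though as said, that alone won't buy you much over Array.Sort:

    Code (CSharp):
    using Unity.Burst;
    using Unity.Collections;
    using Unity.Jobs;

    [BurstCompile]
    struct SortFloatsJob : IJob
    {
        public NativeArray<float> Values;

        public void Execute()
        {
            // Unity.Collections provides Sort() for NativeArray<T> where T : IComparable<T>.
            Values.Sort();
        }
    }

    // Usage, e.g. from an editor script:
    // var data = new NativeArray<float>(managedArray, Allocator.TempJob);
    // new SortFloatsJob { Values = data }.Schedule().Complete();
    // data.CopyTo(managedArray);
    // data.Dispose();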

    However, that could be quite a bit of effort, especially if you have no prior DOTS experience. It's super annoying to get into and work with, as it is still changing and the documentation, or the lack thereof, is not exactly helpful in a lot of cases.

    Maybe a different approach could be considered? What kind and amount of data are you loading, what calculations are you doing on it, and for what? Maybe you can save the results and only recalculate once the file you are loading has changed? Maybe you don't have to load all the data? Maybe, if the calculations are highly parallelizable anyway, you could consider a compute shader for them? Is it even the calculations that are slow, or is it so much data that loading it is slow in the first place?
    More information generally helps with finding a more fitting solution for a given problem.
     
  6. TOES

    Joined:
    Jun 23, 2017
    Posts:
    134
    Thanks for your thoughts on the matter. The main bottleneck now is the sort method, which is where I hope to gain some performance. I use HPCsharp, which implements a fast multithreaded sort. I also sort different arrays in different threads, using the C# Parallel method, when the array is large enough for it to make sense. So multiple arrays are sorted at the same time, and multiple threads also work on each array. This has provided a nice performance boost.
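    For context, the "sort different arrays in different threads" part looks roughly like this (a sketch with made-up names, using plain System.Threading.Tasks.Parallel rather than HPCsharp's own parallel sort):

    Code (CSharp):
    using System;
    using System.Threading.Tasks;

    static class BatchSorter
    {
        // Sorts each array on its own worker thread; each individual
        // Array.Sort call remains single-threaded.
        public static void SortAll(float[][] buffers)
        {
            Parallel.ForEach(buffers, buffer => Array.Sort(buffer));
        }
    }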

    So, I have pretty much maxed out on parallel processing, and everything is already heavily optimized. However, there are some massive computations that have to be done, each one takes many minutes, and they have to be batch processed. Total processing time is almost an hour for a typical project, so even a 10-20% performance increase would be a big deal. Without going into details about the project, the algorithms and data structures are all very optimized, and we are using the best method to get the results we need. It used to take weeks to process the same data, so I am quite confident about the general approach.

    I was hoping to squeeze out some additional performance by using a native-code compiler, which would let me access arrays faster without bounds checks. I know I can do this using unsafe pointers, but since I am using libraries for fast sorting, I would have to rewrite everything.
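    For reference, the unsafe-pointer approach I mean looks roughly like this (a sketch with a made-up method; it requires "Allow 'unsafe' code" to be enabled in Player Settings):

    Code (CSharp):
    static unsafe float SumUnsafe(float[] values)
    {
        float total = 0f;
        // Pinning the array and reading through a raw pointer skips the
        // per-element bounds check of the managed indexer.
        fixed (float* p = values)
        {
            for (int i = 0; i < values.Length; i++)
                total += p[i];
        }
        return total;
    }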