Hello, Barracuda generates GC allocations on Execute(); is there any way to get rid of this? With the worker on the GPU the garbage is about 4.5 KB per call, and if I switch to the Burst backend it is even worse, more than 100 KB. By the way, which backend will be better on mobile phones, GPU or Burst? I'm using the 2.4.0-preview version.

Code (CSharp):

private void TakeScreenShot()
{
    var channelCount = 1;
    inputX = new Tensor(rendTexture_forNeuro, channelCount);
    var outputY = ExecuteInParts(_engine, inputX);
    prediction.SetPrediction(outputY);
    Receive();
}

// Runs the worker a few layers at a time, flushing the schedule
// every syncEveryNthLayer layers so work is submitted in chunks.
Tensor ExecuteInParts(IWorker worker, Tensor I, int syncEveryNthLayer = 5)
{
    var executor = worker.StartManualSchedule(I);
    var it = 0;
    bool hasMoreWork;
    do
    {
        hasMoreWork = executor.MoveNext();
        if (++it % syncEveryNthLayer == 0)
            worker.FlushSchedule();
    } while (hasMoreWork);

    return worker.CopyOutput();
}
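This won't remove Barracuda's internal per-Execute() allocations, but one caller-side source of garbage in a snippet like the above is Tensors that are never disposed: both the input tensor and the tensor returned by CopyOutput() own native buffers. A minimal sketch of the same method with explicit disposal (the field and object names are taken from the snippet above and assumed to behave as shown):

Code (CSharp):

private void TakeScreenShot()
{
    // Release the previous frame's input before allocating a new one;
    // Tensors wrap native/GPU memory and must be disposed explicitly.
    inputX?.Dispose();
    inputX = new Tensor(rendTexture_forNeuro, 1);

    // CopyOutput() hands ownership of the output tensor to the caller,
    // so dispose it once the prediction has been read from it.
    using (var outputY = ExecuteInParts(_engine, inputX))
    {
        prediction.SetPrediction(outputY);
    }
    Receive();
}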
Hello XSquare2,

Thanks for your feedback! Indeed, it seems we have a GC regression in the latest release. I'm currently investigating and will post the findings/solutions/patches here. Thanks again!

Florent
Did this ever get fixed? I went from 40 KB per step in v2.0.0 and v2.1.0 to 60 KB per step using v2.2.0, v2.3.1, and v3.0.0 on an Android (Quest) device with Burst as the backend.