Hello, is there any reason why Barracuda inference would work perfectly on Android devices with the Vulkan graphics API, but not with OpenGL ES3? I have been on this for a while and I just can't get it to work with OpenGL ES3. I need this specific graphics API because I am using ARFoundation. Has anyone gotten it working with GLES3 so far? The documentation says it is supported. I am quite new to ML + Barracuda, so if there are any details I could provide, don't hesitate to ask.
Hi @rob11 Unfortunately, OpenGLES driver quality in terms of compute shaders is very variable (especially on older devices). Considering the vast array of devices and the little control we have at this level, we have preferred to support only Vulkan on Android, where compute shaders are in a much better state. That does not mean Barracuda won't work for your use case with OpenGL on your device. In fact, it will probably run just fine! However, we don't officially support OpenGL, as deploying to a vast range of devices would be risky for the driver-quality reason above. PS: here is the doc saying we support Metal/Vulkan (not OpenGLES): https://docs.unity3d.com/Packages/com.unity.barracuda@2.4/manual/SupportedPlatforms.html Hope this explains/helps! Florent
Thank you @fguinier On the documentation you linked, it says that OpenGLES is not supported for GPU inference, which is why I tried to switch to CPU inference, with no luck. Is there a specific way to switch to CPU inference besides choosing the appropriate IWorker? As I understand it, CPU inference would not be using compute shaders.
https://docs.unity3d.com/Packages/com.unity.barracuda@2.4/manual/Worker.html Here is the related documentation. So no: changing the worker type is the only thing you should need to do.
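For reference, here is a minimal sketch of switching to a CPU worker via `WorkerFactory.CreateWorker` with `WorkerFactory.Type.CSharpBurst` (the CPU backend), so no compute shaders are involved at all. The class and field names are placeholders; wire up your own model asset:

```csharp
using Unity.Barracuda;
using UnityEngine;

public class CpuInferenceExample : MonoBehaviour
{
    // Assign your imported .onnx model asset in the Inspector.
    public NNModel modelAsset;

    private Model runtimeModel;
    private IWorker worker;

    void Start()
    {
        runtimeModel = ModelLoader.Load(modelAsset);

        // CSharpBurst runs inference on the CPU via the Burst compiler,
        // so no compute shaders (and no GLES3 driver issues) are involved.
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.CSharpBurst, runtimeModel);
    }

    public Tensor Run(Tensor input)
    {
        worker.Execute(input);
        return worker.PeekOutput();
    }

    void OnDestroy()
    {
        // Workers hold native resources and must be disposed explicitly.
        worker?.Dispose();
    }
}
```

If you were previously passing `WorkerFactory.Type.ComputePrecompiled` (or `Compute`), that is what selects the GPU compute-shader path; swapping the type enum is the whole change.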