Hi all, I'm new to Unity and plan to use it for a 360-video application I want to build for the upcoming GearVR. For those unfamiliar, Oculus has released a mobile SDK for this device and has included a native 360 video player as an example in the SDK docs.

I would like to use Unity for the more complicated interactions/environments/etc. while no video is playing, but once the user has chosen a video, switch to native code to get the best performance when decoding and rendering the video onto a sphere. Note the hardware in this case will be a Galaxy Note 4 (inside the GearVR).

http://docs.unity3d.com/Manual/Plugins.html
http://docs.unity3d.com/Manual/NativePluginInterface.html

Unity supports native plugins, so it looks like I can compile native code built for Android (NDK), maybe take advantage of the hardware, and avoid Unity's overhead when it comes to rendering the video. Am I on track here?

In general, I'm concerned about performance constraints, or any other limitations, when decoding video and mapping it to textures. From what I've experienced so far, MovieTexture has not been impressive. Any advice, guidance, or anything else would be greatly appreciated! Thanks in advance.
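To make the question concrete, here's a rough sketch of the native-plugin boundary I have in mind (all names are hypothetical, and the bodies are stubs): Unity would call these functions via `[DllImport]` from C#, while the real implementation would decode with Android's MediaCodec and hand Unity a GL texture it can wrap with `Texture2D.CreateExternalTexture`, updating frames on the render thread via `GL.IssuePluginEvent`.

```cpp
// Hypothetical sketch of an NDK plugin boundary for 360 video playback.
// Real code would drive MediaCodec + a SurfaceTexture/OES texture;
// these stubs just illustrate the shape of the API I'm imagining.

extern "C" {

static bool g_playing = false;
static int g_textureId = 0;

// Called from C# when the user picks a video; returns a GL texture id
// that the Unity side could wrap with Texture2D.CreateExternalTexture.
int StartVideo(const char* path) {
    (void)path;        // real code: open the file, configure MediaCodec
    g_playing = true;
    g_textureId = 1;   // real code: generate a GL_TEXTURE_EXTERNAL_OES texture
    return g_textureId;
}

// Called once per frame on Unity's render thread (GL.IssuePluginEvent),
// so texture updates happen on the thread that owns the GL context.
void UpdateVideoFrame() {
    // real code: latch the latest decoded frame into the OES texture
}

void StopVideo() {
    g_playing = false; // real code: release the decoder and texture
}

bool IsPlaying() {
    return g_playing;
}

}  // extern "C"
```

Does this division of labor (Unity owns the scene, the plugin owns decode and the video texture) match how people usually structure this?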