NatCorder - Video Recording API

Discussion in 'Assets and Asset Store' started by Lanre, Nov 18, 2017.

  1. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    I am working on a mobile video editor app that records short videos and then stitches them together in a user-specified order. Is there any way to combine two MP4 videos with NatCorder? Thank you so much for any help.
     
  2. monda

    monda

    Joined:
    May 14, 2015
    Posts:
    35
    Just finished testing on all devices and received reports from the team... it works great now!
    Thanks for your help!
     
    Lanre likes this.
  3. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    NatCorder only handles video encoding. You would have to get frames from the videos, then commit them in whatever order the user chooses. That aspect is out of scope for NatCorder. There is an open-source video decoding API in the NatSuite Framework, but it is pre-release. See a usage example with NatCorder.
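    A rough sketch of the encoding half, assuming the clips have already been decoded into Color32[] frame buffers (the decoding is the part NatCorder doesn't do), and assuming the NatSuite.Recorders namespaces; the `ClipStitcher` wrapper is illustrative:

    Code (CSharp):

        using System.Collections.Generic;
        using System.Threading.Tasks;
        using UnityEngine;
        using NatSuite.Recorders;
        using NatSuite.Recorders.Clocks;

        public static class ClipStitcher {

            // `clips` holds each clip's decoded frames, already in the user's chosen order.
            public static async Task<string> Stitch (List<Color32[][]> clips, int width, int height) {
                var recorder = new MP4Recorder(width, height, 30);
                var clock = new FixedIntervalClock(30); // synthesizes evenly spaced timestamps
                foreach (var clip in clips)
                    foreach (var frame in clip)
                        recorder.CommitFrame(frame, clock.timestamp);
                // Finish writing and return the path to the stitched video
                return await recorder.FinishWriting();
            }
        }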
     
    Mr-Mechanical likes this.
  4. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    Could I collect Texture2D arrays instead of MP4 clips and commit the frames in an offline approach? Would this be performant enough? Thank you for the help.
     
  5. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    The example code on the page I linked does exactly what you describe. Just don't fill up memory.
     
    Mr-Mechanical likes this.
  6. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    If my videos are only up to one minute long, would you suggest saving the Texture2D data to disk (as a serialized object) or just keeping it in memory?
     
  7. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    Save it to a video, perhaps using NatCorder. Then decode it with VideoPlayer or NatReader and commit it to a final composition using NatCorder. The MP4 container has already been designed for efficient compression to file; no need to reinvent the wheel with Texture2D I/O.
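    One way the decode-and-recommit pipeline could look with Unity's built-in VideoPlayer (NatReader would replace the VideoPlayer half). This is a simplified sketch: the `ClipComposer` name is illustrative, it plays each clip back in real time, and it assumes all clips match the recorder's dimensions; the MP4Recorder/FixedIntervalClock calls mirror the code posted later in this thread:

    Code (CSharp):

        using System.Collections;
        using UnityEngine;
        using UnityEngine.Video;
        using NatSuite.Recorders;
        using NatSuite.Recorders.Clocks;

        public class ClipComposer : MonoBehaviour {

            // Decode each saved clip with a VideoPlayer and re-commit its frames with NatCorder.
            public IEnumerator Compose (string[] clipPaths, int width, int height) {
                var recorder = new MP4Recorder(width, height, 30);
                var clock = new FixedIntervalClock(30);
                var readbackRT = new RenderTexture(width, height, 0);
                var readbackTex = new Texture2D(width, height, TextureFormat.RGBA32, false);
                var player = gameObject.AddComponent<VideoPlayer>();
                player.renderMode = VideoRenderMode.APIOnly; // decoded frames land in `player.texture`
                player.sendFrameReady = true;
                player.frameReady += (source, frameIdx) => {
                    // Copy the decoded frame to CPU memory and commit it to the composition
                    Graphics.Blit(source.texture, readbackRT);
                    var previous = RenderTexture.active;
                    RenderTexture.active = readbackRT;
                    readbackTex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
                    RenderTexture.active = previous;
                    recorder.CommitFrame(readbackTex.GetPixels32(), clock.timestamp);
                };
                foreach (var path in clipPaths) {
                    player.url = path;
                    player.Play();
                    yield return new WaitUntil(() => player.isPlaying);   // wait for playback to start
                    yield return new WaitUntil(() => !player.isPlaying);  // wait for the clip to finish
                }
                recorder.FinishWriting().ContinueWith(task => Debug.Log($"Composed video: {task.Result}"));
            }
        }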
     
    Mr-Mechanical likes this.
  8. ATurati

    ATurati

    Joined:
    Aug 19, 2020
    Posts:
    8
    I am thinking of recording some gameplay for the stores. Can it be recorded inside the Editor?
     
  9. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    If you want to record cinematic gameplay, I recommend using Unity's editor-only Recorder package. But if you want pure gameplay in Play mode, the same as it would be in the app on a target device, then NatCorder works in the macOS and Windows editors the same way it does on mobile platforms; no code changes necessary.
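    A minimal sketch of that Play-mode path, assuming the NatSuite.Recorders namespaces; the `GameplayRecorder` wrapper is illustrative, while CameraInput is the NatCorder camera input mentioned later in this thread, committing the camera's frames each frame in the Editor and on device alike:

    Code (CSharp):

        using UnityEngine;
        using NatSuite.Recorders;
        using NatSuite.Recorders.Clocks;
        using NatSuite.Recorders.Inputs;

        public class GameplayRecorder : MonoBehaviour {

            private MP4Recorder recorder;
            private CameraInput cameraInput;

            public void StartRecording () {
                recorder = new MP4Recorder(1280, 720, 30);
                cameraInput = new CameraInput(recorder, new RealtimeClock(), Camera.main);
            }

            public async void StopRecording () {
                cameraInput.Dispose();                     // stop committing camera frames
                var path = await recorder.FinishWriting(); // finalize the MP4
                Debug.Log($"Saved gameplay recording to {path}");
            }
        }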
     
  10. mahna3411

    mahna3411

    Joined:
    Dec 11, 2018
    Posts:
    39
    recorder = new HEVCRecorder(mScreenWidth, mScreenHeight, 30);

    Error log:
    2020/09/23 18:09:36.271 14314 15161 Error ACodec Unable to instantiate a encoder for type 'video/hevc' with err 0xfffffffe.
     
  11. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    On Android, the HEVCRecorder requires Android N or newer. What device and OS version is this error happening on?
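    For reference, a hedged way to guard against this at runtime: the `RecorderFactory` wrapper is illustrative and assumes NatCorder's IMediaRecorder interface, while `Build.VERSION.SDK_INT` is 24 or higher on Android N and newer:

    Code (CSharp):

        using UnityEngine;
        using NatSuite.Recorders;

        public static class RecorderFactory {

            // Fall back to H.264 when the device can't encode HEVC (API level < 24).
            public static IMediaRecorder Create (int width, int height, int frameRate) {
                #if UNITY_ANDROID && !UNITY_EDITOR
                using (var version = new AndroidJavaClass("android.os.Build$VERSION"))
                    if (version.GetStatic<int>("SDK_INT") < 24)
                        return new MP4Recorder(width, height, frameRate);
                #endif
                return new HEVCRecorder(width, height, frameRate);
            }
        }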
     
  12. mahna3411

    mahna3411

    Joined:
    Dec 11, 2018
    Posts:
    39
  13. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    That page is out of date. You should be looking at the Encoder column, which doesn't have a dot indicating support. HEVC encoding support was added in API level 24. Also, I don't know what version of Android you are on right now, but the next NatCorder update will require Android API level 23 as a minimum.
     
  14. mahna3411

    mahna3411

    Joined:
    Dec 11, 2018
    Posts:
    39
    I did this test on Android 6, API level 23.
    When is the release date of this update?
     
  15. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    No release date yet. But yeah, you can't use the HEVC recorder unless you are running API level 24 or newer.
     
  16. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    Would storing the series of clips as MP4, then decoding and re-encoding them into one clip, add extra compression?
     
  17. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    Whatever quality loss you face should be negligible. You can also increase the bitrate on the recorder; see the docs for more info.
     
    Mr-Mechanical likes this.
  18. ATurati

    ATurati

    Joined:
    Aug 19, 2020
    Posts:
    8
    Thanks :D
     
    Lanre likes this.
  19. larskroll

    larskroll

    Joined:
    Dec 17, 2013
    Posts:
    52
    Hi

    I'm using NatSuite to record a scene that includes speech bubbles.
    The speech bubble is a 2D canvas rendered in world space (the canvas render mode is set to World Space).
    Speech bubbles are placed in a "speechbubble" layer. Each speech bubble is a couple of sprites and a TMPro text label.
    When I set the main camera to render the speech bubbles, they also come out correctly in the video output with NatSuite. So far, so good.

    However, I want the speech bubbles rendered on top of everything else. I therefore add a camera to the main camera's stack: this camera clears the depth buffer and renders everything on top of the main camera. This looks fine in game, but the speech bubbles themselves are not rendered to video.
    The really weird thing is, if I put other stuff in the layer rendered by the overlay cam, it comes out fine in the video render.

    TL;DR: Using URP and a camera with another camera in its overlay stack, 2D canvas elements are not rendered by the overlay camera, even when those elements are in world space. Yet the elements are rendered just fine when they are in the main camera.
     
  20. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    Are you sure that your speech bubbles are being rendered by the overlay camera? Since your speech bubbles are in a specific layer, go to the main camera and ensure that it cannot see that layer. That's the only reason I can think of without having more info. Can you post screenshots of the inspectors for the main camera, overlay camera, and canvas?
     
  21. larskroll

    larskroll

    Joined:
    Dec 17, 2013
    Posts:
    52
    Yes, I'm sure. And I agree it's weird.

    Here are the settings:
    Main cam:
    upload_2020-9-24_16-45-24.png

    Main cam culling

    upload_2020-9-24_16-46-36.png

    Overlay cam:

    upload_2020-9-24_16-47-19.png
    (Overlay cam is a child of main cam of course.)
    Overlay cam culling
    upload_2020-9-24_16-48-22.png

    Speechbubble canvas :
    upload_2020-9-24_16-49-59.png


    Now, if I run this, my speech bubbles are rendered on top of everything, even though they are world-space objects, exactly like I want. If I render to video, they are not included.

    If I include the speechbubble layer in the main cam's culling mask and exclude it from the overlay cam's, the bubbles ARE rendered to video, but of course not on top of everything. If I include something else, e.g. actors that are just 3D objects in world space, those are rendered on top of everything AND included in the video render. So my guess is that something goes wrong in the specific case of content drawn by a canvas renderer in world-space mode, and only when it is rendered by an overlay camera in a camera stack.
     
  22. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    For a sanity check, can you set your overlay camera to render to a RenderTexture (create one in your project files, then assign it in the inspector)? Then display that render texture on a UI panel where you can see it. Do the speech bubbles show up there?
     
  23. larskroll

    larskroll

    Joined:
    Dec 17, 2013
    Posts:
    52
    I mean... sure, but an easier sanity check I've already done is to enable and disable the overlay cam at runtime. With the overlay cam active, the speech bubbles are rendered just fine; with it inactive, no speech bubbles are rendered (in the game view). This leads me to conclude that the speech bubbles must be rendered by the overlay cam, and *only* by the overlay cam.

    Are you unable to reproduce the issue?

    Try creating a scene with some 3D objects in the default layer, some objects in a layer layerX, and a canvas rendering in world space with some 2D content in layerX. Create a (URP) camera with a layer mask excluding layerX. Add a camera to its camera stack which excludes everything except layerX. Run the game: all objects should be shown. Export to video using NatSuite: everything except the canvas content should export (including the 3D objects in layerX).

    That, at least, is what I'm seeing. I'm on Unity 2019.4.0f1 LTS. I will try to update to the latest 2019 LTS and report back.
     
  24. larskroll

    larskroll

    Joined:
    Dec 17, 2013
    Posts:
    52
    I have updated to 2019.4.11f1. The problem remains.

    I have solved my own problem by alternate means, so the issue is less pronounced for me now. But it would still be nice to get it fixed. Thanks for your quick and helpful replies!
     
  25. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    This check isn't helpful because you've already confirmed that the speech bubbles are being rendered by the overlay camera. My sanity check is to see that it is being rendered to texture, which is how CameraInput is able to record game cameras. You should render your overlay camera to a RenderTexture that you display, so you can see whether the bubbles are present or not. I wouldn't expect them to be visible, and if that's the case then you'll have to forward this to Unity as a bug.
     
    larskroll likes this.
  26. larskroll

    larskroll

    Joined:
    Dec 17, 2013
    Posts:
    52

    OK, that makes sense. I'll try and report back to you. But I'm guessing this is Unity crapping out.
     
  27. larskroll

    larskroll

    Joined:
    Dec 17, 2013
    Posts:
    52

    Uhm... overlay cameras do not have the same render-to-texture property that the base cam has. So, no, I can't do the check you're asking for. I don't know how I would anyway.

    upload_2020-9-25_15-6-35.png
     
  28. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    Ah damn. Okay I recommend sending a repro to Unity. In the meantime it sounds like you found a workaround.
     
    larskroll likes this.
  29. CNGameDev01

    CNGameDev01

    Joined:
    Feb 1, 2016
    Posts:
    6
    Hi,

    We have an issue when trying to build to device from Xcode. Xcode 11.7, Unity 2019.4, NatCorder 1.7.3. The error states that it can't find a symbol for arm64.
    I have attached a screenshot of the error; please get back to us as soon as possible.

    Thanks
     

    Attached Files:

  30. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    These symbols are from an outdated version of NatCorder. Delete NatCorder in your project (specifically, any "NatSuite" and "NatCorder" folders). Then reimport it from the Asset Store and do a clean iOS build (don't Append).
     
  31. ghasedak3411

    ghasedak3411

    Joined:
    Aug 25, 2015
    Posts:
    23
    Yes. When is the release date of this update?

     
  32. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    There isn't a set release date yet. I'm still working on improvements and fixes. Is there a specific change or bug fix you are waiting for?
     
  33. ghasedak3411

    ghasedak3411

    Joined:
    Aug 25, 2015
    Posts:
    23
    Yes, this is your answer:

    That page is not updated. You should be looking at the Encoder column, which doesn't have a dot indicating support. HEVC encoding support was added in API level 24. Also, I don't know what version of Android you are on right now but the next NatCorder update will require Android API level 23 as a minimum.
     
  34. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    I'm not sure where the misunderstanding is, so let me clarify:
    • The HEVCRecorder requires API level 24 on Android.
    • NatCorder, even though it currently requires API level 21, can only successfully create the HEVCRecorder on devices running API level 24 or newer.
    • In the next NatCorder update, we are bumping up the minimum required version of Android from API level 21 to API level 23.
    • Even with this increase of the minimum required version, NatCorder will still require API level 24 or newer to be able to successfully create a HEVC recorder.
     
  35. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    I am really enjoying NatCorder; it's an amazing plugin. Everything works great.

    I am curious: how do you detect the maximum recording frame rate for a specific phone? Also, is there a way to detect what frame rate an MP4 file was recorded at?

    Thank you so much for your help.
     
  36. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    The frame rate that you specify to the recorder is merely a hint; it has no bearing on the actual frame rate of the video. The actual frame rate depends on the spacing between timestamps on consecutive frames. And I'm not sure your second question has anything to do with NatCorder. On macOS, I use `mediainfo`.
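    To illustrate, a sketch under the assumption that timestamps are in nanoseconds (which is the spacing FixedIntervalClock generates); the `TimestampExample` wrapper is illustrative. The `30` passed to the recorder is only a hint, while the timestamp spacing is what actually makes this a 30FPS video:

    Code (CSharp):

        using System.Collections.Generic;
        using System.Threading.Tasks;
        using UnityEngine;
        using NatSuite.Recorders;

        public static class TimestampExample {

            // The real frame rate comes from the spacing of these timestamps,
            // not from the frame rate hint passed to the recorder.
            public static async Task<string> WriteAt30FPS (List<Color32[]> frames, int width, int height) {
                var recorder = new MP4Recorder(width, height, 30);
                const long frameDuration = 1_000_000_000L / 30; // ~33.3 ms per frame, in nanoseconds
                for (var i = 0; i < frames.Count; i++)
                    recorder.CommitFrame(frames[i], i * frameDuration);
                return await recorder.FinishWriting();
            }
        }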
     
    Mr-Mechanical likes this.
  37. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    Thank you for your response.

    In my use case, I use offline recording, but the synthesized timestamps don't match the actual frame rate of the recorded video. Is there a way to fix this?
     
  38. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    Can you share your recording code along with one such video? What clock are you using?
     
    Mr-Mechanical likes this.
  39. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    Thank you for your help. Here is the code doing the offline recording/processing. The Unity forum doesn't support uploading video, so I will edit and share a YouTube link later today. I am using the FixedIntervalClock for creating MP4s. The video is about 2x the speed it should be.

    Code (CSharp):

        public async void Record (float time, float recordingSpeed = 1)
        {
            VideoContainer.Clip clip = new VideoContainer.Clip();
            clip.frames = new List<Color32[]>();
            VideoContainer.Instance.clips.Add(clip);
            numRecordingFrames = (int)(time * 60f);
            recording = true;
            for (int i = 0; i < numRecordingFrames; i++)
            {
                recordingTime += 1 / 60f;
                VideoContainer.Instance.clips[VideoContainer.Instance.clips.Count - 1].frames.Add(activeCameraTexture.GetPixels32());
                await Task.Delay(16);
            }
            recording = false;
        }
    Code (CSharp):

        public class VideoContainer : Singleton<VideoContainer>
        {
            // (Optional) Prevent non-singleton constructor use.
            protected VideoContainer() { clips = new List<Clip>(); }

            // Then add whatever code to the class you need as you normally would.
            public struct Clip
            {
                public List<Color32[]> frames;
            }
            public List<Clip> clips;
            public string video;
        }
    Code (CSharp):

        public async void ProcessVideo()
        {
            EndRecording();
            var clock = new FixedIntervalClock(60);
            var recorder = new MP4Recorder(activeCameraTexture.width, activeCameraTexture.height, 60);
            processing = true;
            VideoContainer.Instance.video = await Task.Run(() =>
            {
                for (int i = 0; i < VideoContainer.Instance.clips.Count; i++)
                {
                    for (int f = 0; f < VideoContainer.Instance.clips[i].frames.Count; f++)
                    {
                        recorder.CommitFrame(VideoContainer.Instance.clips[i].frames[f], clock.timestamp);
                    }
                }
                return recorder.FinishWriting();
            });
            processing = false;
        }
     
    Last edited: Sep 27, 2020
  40. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    The issue is likely a result of capturing frames from a WebCamTexture. The webcam texture might not be running at 60FPS, and `await Task.Delay(16)` will likely skip two frames instead of one if your app is running at 60FPS. I recommend using a coroutine and WaitForEndOfFrame, as this is guaranteed to run in step with Unity's update loop, whereas Task.Delay is not.

    Run mediainfo on your video; it will likely report a constant frame rate of 60FPS, reflecting the clock you used to record, so it isn't NatCorder at fault here. Also, YouTube transcodes uploads, so I don't recommend uploading there.
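    A minimal sketch of the coroutine approach, replacing the Record method above and assuming the same 60FPS target; the `FrameCapture` wrapper is illustrative. WaitForEndOfFrame fires exactly once per rendered frame, so the capture can't drift the way Task.Delay(16) does:

    Code (CSharp):

        using System.Collections;
        using System.Collections.Generic;
        using UnityEngine;

        public class FrameCapture : MonoBehaviour {

            public WebCamTexture activeCameraTexture;
            public List<Color32[]> frames = new List<Color32[]>();

            // Start with: StartCoroutine(Record(time));
            public IEnumerator Record (float time) {
                var frameCount = (int)(time * 60f);
                for (var i = 0; i < frameCount; i++) {
                    yield return new WaitForEndOfFrame(); // runs once per rendered frame
                    frames.Add(activeCameraTexture.GetPixels32());
                }
            }
        }

    If the webcam hasn't produced a new image on a given frame, GetPixels32 simply returns the previous one, which keeps the frame count in step with the fixed-interval clock used when encoding.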
     
    Mr-Mechanical likes this.
  41. RSMAPPLI

    RSMAPPLI

    Joined:
    Jul 12, 2020
    Posts:
    6
    When opening the project in Xcode I have this error:

    ARC Semantic Issue Group
    /Users/Rafael/Documents/Projetos/NSF_XcodeProject/Classes/Unity/MetalHelper.mm:300:45: No known instance method for selector 'presentDrawable:afterMinimumDuration:'
     
  42. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    This has nothing to do with NatCorder. It is a Unity class, so you'll want to file a bug report with Unity.
     
  43. Juanola_

    Juanola_

    Joined:
    Sep 29, 2015
    Posts:
    38
    Hi! I had a question before buying.

    So, I have to build a "Record your Reaction" feature: basically, get the front camera feed while the user uses the app, record it, and then share it. The camera feed should also be shown on screen while recording.

    Can I use NatCorder for this requirement?

    If I can, what should I do: record the part of the screen that has the RawImage showing the camera feed, or pass the WebCamTexture directly to NatCorder?

    Thanks!
     
  44. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    You can use NatCorder to record textures or the screen; how you choose to do it depends on your app's design. I should note that there is no concept of passing a WebCamTexture directly to NatCorder. I recommend going through the docs to get a good sense of how recording works.
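    For what it's worth, a hedged sketch of the "record the camera texture itself" route: commit the WebCamTexture's pixels each time it updates, with a RealtimeClock providing timestamps. The `ReactionRecorder` name, ten-second duration, and default camera choice are all illustrative:

    Code (CSharp):

        using System.Collections;
        using UnityEngine;
        using NatSuite.Recorders;
        using NatSuite.Recorders.Clocks;

        public class ReactionRecorder : MonoBehaviour {

            IEnumerator Start () {
                var webCamTexture = new WebCamTexture();
                webCamTexture.Play(); // also assign this texture to a RawImage to show it on screen
                yield return new WaitUntil(() => webCamTexture.width > 16); // wait for the first real frame
                var recorder = new MP4Recorder(webCamTexture.width, webCamTexture.height, 30);
                var clock = new RealtimeClock();
                for (var t = 0f; t < 10f; t += Time.deltaTime) { // record ~10 seconds
                    yield return new WaitForEndOfFrame();
                    if (webCamTexture.didUpdateThisFrame)
                        recorder.CommitFrame(webCamTexture.GetPixels32(), clock.timestamp);
                }
                recorder.FinishWriting().ContinueWith(task => Debug.Log($"Recorded reaction to {task.Result}"));
            }
        }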
     
  45. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    What device and OS version does this happen on? Can you reimport NatShare from the Asset Store and try again?
     
  46. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    Hello, I get two errors when building. Thank you for the help.

    Code (CSharp):

        Assets/NatSuite/Examples/WebCam/WebCam.cs(43,13): error CS0103: The name 'Handheld' does not exist in the current context
        Assets/NatSuite/Examples/ReplayCam/ReplayCam.cs(67,13): error CS0103: The name 'Handheld' does not exist in the current context
     
  47. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    What platform are you building for? You might have to comment out those lines in the respective example scripts.
     
    Mr-Mechanical likes this.
  48. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    I'm building for macOS. May I just delete the examples folder to resolve this?
     
  49. Lanre

    Lanre

    Joined:
    Dec 26, 2013
    Posts:
    3,973
    Just comment out those lines. No need to throw away the entire examples folder.
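    Alternatively, you could wrap those lines in a platform define so the examples compile on desktop without re-editing after each reimport; a sketch, assuming the flagged lines are Handheld playback calls (the exact call is whatever the example script uses):

    Code (CSharp):

        // In WebCam.cs / ReplayCam.cs, around the line the compiler flags:
        #if UNITY_IOS || UNITY_ANDROID
        Handheld.PlayFullScreenMovie($"file://{path}");
        #endif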
     
    Mr-Mechanical likes this.
  50. Mr-Mechanical

    Mr-Mechanical

    Joined:
    May 31, 2015
    Posts:
    507
    I've finally come to my senses and decided to use MP4 clips instead of storing uncompressed video in memory (it takes too much memory). What is the best way to add NatReader to a NatCorder project? Should I just download the code and import it into the existing project, or does it require special project settings?

    Thank you so much for the help.