
[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Could you tell me about the environment you tested in?
    OpenCVForUnity version :

    UnityCloudBuild basic info
    Unity version :
    Builder Operating System and Version :
    Xcode version :
     
  2. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8
    OpenCVForUnity version: 2.5.6
    Unity version : Auto Detect 2021.3.231
    Builder Operating System and Version : MacOS Monterey
    Xcode version : 14.2.0
     
  3. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8
    *Unity version : Auto Detect 2021.3.31
     
  4. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Unfortunately, in WebGL the VideoCapture class does not support loading a video stream from a URL address.
    VideoPlayerWithOpenCVForUnityExample is an example of how to convert between a VideoPlayer texture and a Mat. By combining this code with FaceMaskExample, it may be possible to modify the example to use the VideoPlayer class instead of the VideoCapture class.
    https://github.com/EnoxSoftware/VideoPlayerWithOpenCVForUnityExample/
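    For reference, a minimal sketch of that VideoPlayer-to-Mat conversion (assuming the VideoPlayer renders into a RenderTexture; the class and field names here are placeholders, not part of the example):
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Video;
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.UnityUtils;

    public class VideoPlayerToMatSketch : MonoBehaviour
    {
        public VideoPlayer videoPlayer;   // assumed to render into a RenderTexture (targetTexture)
        Texture2D frameTexture;
        Mat frameMat;

        void Update()
        {
            if (!videoPlayer.isPlaying || videoPlayer.targetTexture == null) return;

            RenderTexture rt = videoPlayer.targetTexture;
            if (frameTexture == null)
            {
                frameTexture = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
                frameMat = new Mat(rt.height, rt.width, CvType.CV_8UC4);
            }

            // Copy the current video frame from the RenderTexture into the Texture2D.
            RenderTexture prev = RenderTexture.active;
            RenderTexture.active = rt;
            frameTexture.ReadPixels(new UnityEngine.Rect(0, 0, rt.width, rt.height), 0, 0);
            frameTexture.Apply();
            RenderTexture.active = prev;

            // Convert to an OpenCV Mat; from here the face mask pipeline can process frameMat.
            Utils.texture2DToMat(frameTexture, frameMat);
        }
    }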
     
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    The build was successful without problems in my environment.
    2023-10-24_00h36_06.png
    Rebuilding with Clean Build may make the build succeed.
     
  6. CaptainPyFace

    CaptainPyFace

    Joined:
    Apr 12, 2013
    Posts:
    9
    Thank you for your reply. Something does seem to go wrong when playing the video, however. Streaming the URL file in WebGL works now, but the playback is too slow. Changing the playback speed also does not behave as you'd expect: setting it to 2.5 should make the WebGL URL video play at the correct speed, but the build seems to round this value up to 3. Setting it to 2.1 also plays the video at speed 3, so the input float value appears to be rounded up to an integer. Is there another way to get more granular control over the playback speed?
     
    Last edited: Oct 24, 2023
  7. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8
    Thanks for your availability, but the clean build option is no longer available on Unity Cloud. The only way we have found is to activate the toggle shown in the attached image, but despite this the problem remains the same:

    INFO: ❌ ld: framework not found opencv2


    Is it possible that something needs to be changed at the Unity Editor level before sending the project to Cloud Build? For example, at the framework level in the inspector, or in the project settings?
     

    Attached Files:

  8. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8


    I found how to do the clean build, but the error remains.
     
  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    From the "Build" button, you can select "Clean Build". If you set up OpenCVForUnity in a new project, does this problem still occur?
    2023-10-24_23h15_24.png
     
  10. bumchikiboss

    bumchikiboss

    Joined:
    Mar 17, 2020
    Posts:
    2
    Hello,
    I want to implement Vuforia's ARCamera with TextRecognitionCRNNWebCamExample.
    The example uses the default main camera and the WebCamTextureToMatHelper script.
    It throws a timeout error when using the AR camera. What do I need to do in order to use the AR camera with text recognition?
    Sorry, I'm quite a novice at this. I would really appreciate it if anyone could explain the steps briefly.
     
  11. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    WebCamTextureToMatHelper cannot be used, because Vuforia's ARCamera does not use WebCamTexture.
    This repository is an example of how to convert from Vuforia's ARCamera to OpenCV's Mat.
    https://github.com/EnoxSoftware/VuforiaWithOpenCVForUnityExample/
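    As a rough illustration of the idea (the repository shows the actual Vuforia call; pixels, width and height below stand in for an RGBA8888 buffer obtained from Vuforia's camera image):
    Code (CSharp):
    // Copy a raw RGBA pixel buffer into an OpenCV Mat.
    Mat camMat = new Mat(height, width, CvType.CV_8UC4);
    camMat.put(0, 0, pixels);                                   // pixels: byte[] of length width * height * 4
    Imgproc.cvtColor(camMat, camMat, Imgproc.COLOR_RGBA2RGB);   // optional: drop the alpha channel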
     
  12. jinC_H

    jinC_H

    Joined:
    Aug 29, 2017
    Posts:
    12
    Can we change the language used by the OpenCV for Unity text recognition, for example to Japanese?
     
    Last edited: Oct 28, 2023
  13. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    The dnn model used in "TextRecognitionCRNNExample" does not support the Japanese language.
    To support Japanese, this model needs to be retrained anew for Japanese. The following pages may be helpful.
    https://docs.opencv.org/4.x/d4/d43/tutorial_dnn_text_spotting.html
     
  14. jinC_H

    jinC_H

    Joined:
    Aug 29, 2017
    Posts:
    12
  15. Pichardoestrada

    Pichardoestrada

    Joined:
    Aug 11, 2019
    Posts:
    1
    Hello, I am using Unity free assets and the trial version of OpenCV for Unity. The game works perfectly in the Editor, without errors, but when I export it the game does not work: the webcam does not react. Everything works fine in the Editor; I configure lowerUpper and other things to detect colors and everything is great, it just doesn't work when exporting. The webcam starts correctly but everything is dark, and the sliders to modify the colors do not respond; when you restart you see the changes in the sliders and values, but the webcam is still dark. Is this due to using the trial version, or is something missing?


     
  16. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    The free trial version only supports running in the Unity Editor.
    https://enoxsoftware.com/opencvforunity/get_asset/
     
  17. AllanSamurai

    AllanSamurai

    Joined:
    Jan 29, 2015
    Posts:
    6
    Sorry for asking... Can we have a Black Friday discount please?
    Thanks! Thanks very much!!
     
    Last edited: Nov 15, 2023
  18. CaptainPyFace

    CaptainPyFace

    Joined:
    Apr 12, 2013
    Posts:
    9
    Is it possible to improve dlib performance by scaling down the resolution of the Mat used for face detection without having to scale down the video resolution of the output? Are there other ways to speed up dlib performance (e.g. skipping frames) without sacrificing accuracy? Thank you.
     
    Last edited: Nov 18, 2023
  19. Cuicui_Studios

    Cuicui_Studios

    Joined:
    May 3, 2014
    Posts:
    72
    Hi!
    We are following the PoseEstimationMediaPipeExample to try to get the landmarks of the user in front of the camera (on Android devices). In the example, it seems like the MediaPipePoseEstimator gets all the user's joints and positions during the infer function and returns them in the results object:
    // results[0] = [bbox_coords, landmarks_coords, landmarks_coords_world, conf]

    Is there a clean way of getting the landmarks after the infer function? We want the user to interact with some objects (not just paint things on an image). We've tried getting them from the results[0] object after the infer function, but the Mat object does not have a function to get the landmarks_coords_world array of coordinates.
    Can anyone point us towards any useful docs or anything? We are using C#.
    Thanks in advance.
    [Edit: found the raw data]

    // # [0: 4]: person bounding box found in image of format [x1, y1, x2, y2] (top-left and bottom-right points)
    // # [4: 199]: screen landmarks with format [x1, y1, z1, v1, p1, x2, y2 ... x39, y39, z39, v39, p39], z value is relative to HIP
    // # [199: 316]: world landmarks with format [x1, y1, z1, x2, y2 ... x39, y39, z39], 3D metric x, y, z coordinate
    // # [316]: confidence
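    For anyone else hitting this, a minimal sketch of pulling the world landmarks out of that flat result (assuming the per-person result Mat is CV_32F and has been copied into a float[] named data, e.g. with Mat.get; the index layout is the one listed above):
    Code (CSharp):
    const int numLandmarks = 39;
    Vector3[] worldLandmarks = new Vector3[numLandmarks];
    for (int i = 0; i < numLandmarks; i++)
    {
        int o = 199 + i * 3;   // world landmarks start at index 199, three floats each
        worldLandmarks[i] = new Vector3(data[o], data[o + 1], data[o + 2]);
    }
    float confidence = data[316];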

    Guess we got to deal with this...
    Thanks anyway guys!
     
    Last edited: Dec 4, 2023
  20. diegojmoreira

    diegojmoreira

    Joined:
    Dec 14, 2021
    Posts:
    2
    Hi!

    I am trying to create a tracker using the Unity camera and the tracker algorithms from OpenCV. I'm having a performance problem when using ReadPixels every frame. I'm trying to use the KCF tracker. Code below:


    Code (CSharp):
    UnityEngine.Rect rect = new UnityEngine.Rect(0, 0, mWidth, mHeight);
    screenShot = new Texture2D(mWidth, mHeight, TextureFormat.RGB24, false);

    RenderTexture.active = trackingCamera.activeTexture;
    screenShot.ReadPixels(rect, 0, 0);
    RenderTexture.active = null;

    Mat cameraMat = new Mat(mHeight, mWidth, CvType.CV_8UC3);
    Utils.texture2DToMat(screenShot, cameraMat); // obtain frame from the Unity camera

    Destroy(screenShot);

    // if (keepStartImage)
    //     cameraMat = startTrackingImage;

    bool trackingActive = trackerEngine.update(cameraMat, targetBounding);
    Also, the KCF tracker doesn't seem to be working: the trackingActive variable is always true, but when I draw the bounding box I can see that it is not surrounding the target.
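    (A rough sketch of one common alternative to blocking on ReadPixels every frame, Unity's AsyncGPUReadback, using the names from the code above; whether the asset's suggested fix below works this way is only an assumption.)
    Code (CSharp):
    // Requires: using UnityEngine.Rendering;
    AsyncGPUReadback.Request(trackingCamera.activeTexture, 0, TextureFormat.RGBA32, request =>
    {
        if (request.hasError) return;                              // skip frames that failed to read back

        Mat cameraMat = new Mat(mHeight, mWidth, CvType.CV_8UC4);
        cameraMat.put(0, 0, request.GetData<byte>().ToArray());    // copy the pixels into the Mat
        Imgproc.cvtColor(cameraMat, cameraMat, Imgproc.COLOR_RGBA2RGB);
        // Note: GPU readbacks can arrive vertically flipped on some platforms
        // (Core.flip(cameraMat, cameraMat, 0) if the tracking looks upside down).

        bool trackingActive = trackerEngine.update(cameraMat, targetBounding);
    });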
     
  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Could you try the attachment?

    TrackingRecderTexture.gif
     

    Attached Files:

  22. diegojmoreira

    diegojmoreira

    Joined:
    Dec 14, 2021
    Posts:
    2
    I tried it, and the performance improved greatly. I've added this solution to my code, but now when my player starts moving while tracking the enemy, I start to receive blank images from the GPU and the track is lost. movement_bug (online-video-cutter.com).gif
     
  23. Cuicui_Studios

    Cuicui_Studios

    Joined:
    May 3, 2014
    Posts:
    72
    Hi,
    We've been running some tests using the PoseEstimationMediaPipeExample and managed to get the landmarks for different people in front of the camera (on android devices). Pretty cool btw.
    We've run into one problem: the array of persons switches indexes randomly, i.e. on one frame persons[0] is associated with one person and persons[1] with another, but on the following frame the landmarks switch from persons[0] to persons[1] (and I suspect it also happens with more persons in the array of data).
    Is there a way of getting a "consistent" array of persons between frames? Or do we need to track that manually (such as storing specific landmarks and associating them with a specific person)?
    Thanks.
     
  24. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    The task of tracking and assigning IDs to the results of object detection is commonly referred to as "Multiple Object Tracking".
    "BoT-SORT" and "ByteTrack" are recent representative algorithms; C++ implementations can be found, but not C# ones.
    https://github.com/viplix3/BoTSORT-cpp
    https://github.com/Vertical-Beach/ByteTrack-cpp
    Some of our examples include code for multiple object tracking with very primitive algorithms.
    https://github.com/EnoxSoftware/FaceMaskExample/tree/master/Assets/FaceMaskExample/RectangleTracker
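    As a rough illustration of the primitive approach, here is a greedy IoU (bounding-box overlap) matcher that carries IDs over from the previous frame; all names are placeholders, and OpenCVForUnity's Rect type is assumed:
    Code (CSharp):
    // Returns how much two boxes overlap, as intersection-over-union in [0, 1].
    static float IoU(Rect a, Rect b)
    {
        int x1 = System.Math.Max(a.x, b.x);
        int y1 = System.Math.Max(a.y, b.y);
        int x2 = System.Math.Min(a.x + a.width, b.x + b.width);
        int y2 = System.Math.Min(a.y + a.height, b.y + b.height);
        int inter = System.Math.Max(0, x2 - x1) * System.Math.Max(0, y2 - y1);
        int union = a.width * a.height + b.width * b.height - inter;
        return union > 0 ? (float)inter / union : 0f;
    }

    // For each detection in the current frame, reuse the ID of the best-overlapping
    // previous-frame box, or hand out a new ID when nothing overlaps enough.
    static int[] AssignIds(Rect[] current, Rect[] prevBoxes, int[] prevIds, ref int nextId)
    {
        int[] ids = new int[current.Length];
        for (int i = 0; i < current.Length; i++)
        {
            float best = 0.3f;   // minimum IoU required to keep an ID
            int bestId = -1;
            for (int j = 0; j < prevBoxes.Length; j++)
            {
                float iou = IoU(current[i], prevBoxes[j]);
                if (iou > best) { best = iou; bestId = prevIds[j]; }
            }
            ids[i] = bestId >= 0 ? bestId : nextId++;
        }
        return ids;
    }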
     
  25. AcuityCraig

    AcuityCraig

    Joined:
    Jun 24, 2019
    Posts:
    65
    Hello @EnoxSoftware,

    I'm currently working on a project using ArUco markers based on your MarkerBasedARExample project. The tracking in that project does not seem to be as reliable as it is in the ArUcoWebCamExample that comes with the base asset. So I have a few questions:

    Is there a way to adjust the expected marker size in the MarkerBasedARExample? I printed the markers at 1.5 inches square and they don't get detected, but they will if I print them at 2 inches. ArUcoWebCamExample does not seem to have an issue detecting them.

    In the ArucoWebCamExample scene, is there a way to return the ID of the marker that was found? I cannot find a method for it but I know it draws the ID to the screen if you have the option selected to do so.

    Thanks!
     
  26. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    MarkerBasedARExample is a rewrite of https://github.com/MasteringOpenCV/code/tree/master/Chapter2_iPhoneAR using "OpenCV for Unity". The algorithm is described in detail in "Mastering OpenCV with Practical Computer Vision Projects". This example is code for an algorithm that detects a two-dimensional (marker) code.
    For greater accuracy and speed, it is recommended to use the ArUco module.
    https://github.com/EnoxSoftware/Ope...ty/Examples/ContribModules/aruco/ArUcoExample

    You can get each id using the following code:
    for (int i = 0; i < ids.total(); i++) {
        int id = (int)ids.get(i, 0)[0];
        Debug.Log("id " + id);
    }
     
  27. bihi10

    bihi10

    Joined:
    Jul 5, 2018
    Posts:
    6
    Hi @EnoxSoftware

    I'm currently working on a project using the hand tracking tool and am trying to get the Vector2 or Vector3 position of every hand, but I don't know where to start and couldn't find the line of code that might help.

    Thanks for your work and updates on such a great tool!
     
  28. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Hi,

    Our asset includes an example "HandPoseEstimationMediaPipeExample" for hand detection and hand pose estimation.
    https://github.com/EnoxSoftware/Ope...Example/HandPoseEstimationMediaPipeExample.cs
    Here is the code for the part of the process that gets 2D and 3D points from the inference results and draws them.
    https://github.com/EnoxSoftware/Ope...ipeExample/MediaPipeHandPoseEstimator.cs#L367

    This example does not include the code for drawing the three-dimensional points, so please also refer to the original ported code.
    https://github.com/opencv/opencv_zoo/tree/master/models/handpose_estimation_mediapipe
    https://github.com/opencv/opencv_zo...ndpose_estimation_mediapipe/demo.py#L134-L156
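    As a rough illustration of the access pattern used there (the actual result layout is defined in MediaPipeHandPoseEstimator.cs; landmarksMat below is a placeholder for a CV_32F Mat with one landmark per row):
    Code (CSharp):
    float[] buf = new float[3];
    for (int i = 0; i < landmarksMat.rows(); i++)
    {
        landmarksMat.get(i, 0, buf);                      // read x, y, z for landmark i
        Vector3 p = new Vector3(buf[0], buf[1], buf[2]);
        // use p, e.g. map it into a Unity world position
    }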
     
    Last edited: Dec 22, 2023
  29. saravanan-P

    saravanan-P

    Joined:
    Jun 12, 2015
    Posts:
    1
    I want to detect objects from images that I upload (custom objects), using HoloLens 2. How can I achieve this?
    So basically, I need to detect the object from the live feed of the HoloLens 2.
     
    Last edited: Dec 31, 2023
  30. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    The OpenCV Dnn module is not supported by the UWP platform.
    https://enoxsoftware.com/opencvforunity/documentation/support-modules/
    Therefore, object detection with the dnn module is not supported on HoloLens 2. Object detection with the CascadeClassifier class in the objdetect module is available on HoloLens 2.
    https://github.com/EnoxSoftware/HoloLensWithOpenCVForUnityExample
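    A minimal sketch of CascadeClassifier-based detection (rgbaMat stands in for the camera frame, and the cascade file path is an assumption about where the asset's StreamingAssets copy lives):
    Code (CSharp):
    // Load a Haar cascade and detect objects in a grayscale copy of the frame.
    CascadeClassifier cascade = new CascadeClassifier(
        Utils.getFilePath("OpenCVForUnity/objdetect/haarcascade_frontalface_alt.xml"));

    Mat gray = new Mat();
    Imgproc.cvtColor(rgbaMat, gray, Imgproc.COLOR_RGBA2GRAY);
    Imgproc.equalizeHist(gray, gray);

    MatOfRect detections = new MatOfRect();
    cascade.detectMultiScale(gray, detections, 1.1, 2, 0, new Size(60, 60), new Size());

    foreach (Rect r in detections.toArray())
        Imgproc.rectangle(rgbaMat, r.tl(), r.br(), new Scalar(255, 0, 0, 255), 2);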
     
  31. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    856
  32. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
  33. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    856
    Thanks @EnoxSoftware. Are the OpenCV-with-CUDA DLLs already compiled for the current version of OpenCvForUnity and available for download somewhere? I'm kind of using OpenCvForUnity exactly to avoid that kind of stuff =)
     
  34. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    It is difficult to distribute OpenCV Cuda-compatible builds because they need to be built with flags that match the "Compute Capability" of the PC's GPU hardware.
     
    sk7299580 and cecarlsen like this.
  35. AcuityCraig

    AcuityCraig

    Joined:
    Jun 24, 2019
    Posts:
    65
    Hello @EnoxSoftware,

    My project is using the ArUco marker detection, but I am finding that the FPS drops to 9-10 fps even in the example scenes. In the profiler for the example scene, it looks like it's the update loop of the ArUcoWebCamExample script.

    upload_2024-1-11_16-13-4.png
    Do you have any insight as to why this is or how it could be optimized?

    Thanks
     
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Could you tell me about the environment you tested in?
    OpenCVForUnity version :
    Unity version :
    Platform :

    ArUco Marker Type :
     
  37. AcuityCraig

    AcuityCraig

    Joined:
    Jun 24, 2019
    Posts:
    65
    OpenCVForUnity: 2.5.7
    Unity: 2023.2.2
    Platform: Windows (NOT UWP)

    ArUco marker library: 4x4_250 - Using the detection from the Examples/ContribModules/aruco/ArUcoExample
     
  38. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Basically, the processing time increases in proportion to the size of the WebCamTexture, and it can be optimized by using the ImageOptimizationHelper component to perform marker detection on a downscaled Mat.
    You can import and try ArUcoWebCamDownScaleExample.unitypackage.
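    The underlying idea, independent of the helper component, is roughly this (rgbaMat and the detection call are placeholders):
    Code (CSharp):
    // Detect on a half-size copy, then map the resulting rectangles back to full resolution.
    double downscaleRatio = 0.5;
    Mat smallMat = new Mat();
    Imgproc.resize(rgbaMat, smallMat, new Size(), downscaleRatio, downscaleRatio, Imgproc.INTER_LINEAR);

    // ... run marker detection on smallMat ...

    // For each detected Rect r on smallMat, scale it back up:
    Rect fullRes = new Rect(
        (int)(r.x / downscaleRatio), (int)(r.y / downscaleRatio),
        (int)(r.width / downscaleRatio), (int)(r.height / downscaleRatio));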
     

    Attached Files:

  39. elvis-satta

    elvis-satta

    Joined:
    Nov 13, 2013
    Posts:
    19
    Hi @EnoxSoftware

    I would like to be able to combine ARFoundationWithOpenCVForUnityExample and YOLOv5WithOpenCVForUnityExample.
    I was able to create a custom ONNX model to use with YOLOv5WithOpenCVForUnityExample, and I can easily place AR anchors with AR Foundation. It would be perfect for my use case to be able to place an anchor at the world-space point where the object is detected with YOLOv5. Surely the position of the anchor will never be very precise, but I would like to try to work on such an example.

    Will it also be possible to use Needle Tools AR Simulation in this environment?

    Could you please help me ?
     
    Last edited: Jan 18, 2024
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554

    We have not tried it but we think it is possible to combine ARFoundationWithOpenCVForUnityExample and YOLOv5WithOpenCVForUnityExample.
    However, since object detection only returns a position in screen coordinates, it would be necessary to use camera depth images or PnP pose estimation to derive the world coordinates.
    As for Needle Tools AR Simulation, we are not familiar with it, so we can't say anything about it.
     
  41. elvis-satta

    elvis-satta

    Joined:
    Nov 13, 2013
    Posts:
    19
    @EnoxSoftware Thank you, I was able to achieve what I was asking: I added a new function in YOLOv5ObjectDetector to calculate the centers of the bounding boxes, used those centers as screen-point Vector2s, and then used a raycast to instantiate anchors.

    In the end this is the easiest way, also because it is the one that just works!
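    For anyone trying the same thing, a rough sketch of that approach (raycastManager, anchorPrefab and the bounding-box values are placeholders; note that OpenCV image coordinates have their origin at the top-left while Unity screen coordinates start at the bottom-left, so the y value may need flipping):
    Code (CSharp):
    // Requires: using UnityEngine.XR.ARFoundation; using UnityEngine.XR.ARSubsystems;
    // Take the screen-space center of a detected box and raycast with AR Foundation.
    Vector2 center = new Vector2(bboxX + bboxWidth * 0.5f, bboxY + bboxHeight * 0.5f);

    var hits = new List<ARRaycastHit>();
    if (raycastManager.Raycast(center, hits, TrackableType.PlaneWithinPolygon))
    {
        Pose pose = hits[0].pose;                                 // closest hit first
        Instantiate(anchorPrefab, pose.position, pose.rotation);  // or attach an ARAnchor here
    }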
     
  42. apprisify_jan

    apprisify_jan

    Joined:
    Jan 22, 2024
    Posts:
    3
    I just downloaded the free trial version and everything is going great, except for the DNN object detection examples. They all work (YOLOv4, YOLOX, NanoDetPlus), but only for 2-3 seconds; then the whole Unity Editor crashes. I can't test in builds because this is only the trial version for now.

    Additionally, I got this error after importing the package and had to comment the offending line out to get everything to run at all (unsafe code is allowed).
    Code (CSharp):
    1. Assets\OpenCVForUnity\Examples\MainModules\videoio\VideoWriterAsyncExample\VideoWriterAsyncExample.cs(339,44): error CS1503: Argument 1: cannot convert from 'Unity.Collections.NativeArray<byte>' to 'System.IntPtr'
    Could that be the reason?

    I looked at the crash log and copied you the most relevant part (I think). @EnoxSoftware What could be the reason for those crashes?

    I tested it in 2 versions and it happens both in 2022.3.2f1 and 2021.3.22f1.

    Filename: Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionYOLOXExample/ObjectDetectionYOLOXExample.cs Line: 254)

    Screen.width 821 Screen.height 760 Screen.orientation Portrait
    UnityEngine.StackTraceUtility:ExtractStackTrace ()
    UnityEngine.DebugLogHandler:LogFormat (UnityEngine.LogType,UnityEngine.Object,string,object[])
    UnityEngine.Logger:Log (UnityEngine.LogType,object)
    UnityEngine.Debug:Log (object)
    OpenCVForUnityExample.ObjectDetectionYOLOXExample:OnWebCamTextureToMatHelperInitialized () (at Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionYOLOXExample/ObjectDetectionYOLOXExample.cs:264)
    UnityEngine.Events.InvokableCall:Invoke ()
    UnityEngine.Events.UnityEvent:Invoke ()
    OpenCVForUnity.UnityUtils.Helper.WebCamTextureToMatHelper/<_Initialize>d__68:MoveNext () (at Assets/OpenCVForUnity/org/opencv/unity/helper/WebCamTextureToMatHelper.cs:740)
    UnityEngine.SetupCoroutine:InvokeMoveNext (System.Collections.IEnumerator,intptr)
    UnityEngine.MonoBehaviour:StartCoroutine (System.Collections.IEnumerator)
    OpenCVForUnity.UnityUtils.Helper.WebCamTextureToMatHelper:Initialize () (at Assets/OpenCVForUnity/org/opencv/unity/helper/WebCamTextureToMatHelper.cs:445)
    OpenCVForUnityExample.ObjectDetectionYOLOXExample:Run () (at Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionYOLOXExample/ObjectDetectionYOLOXExample.cs:191)
    OpenCVForUnityExample.ObjectDetectionYOLOXExample:Start () (at Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionYOLOXExample/ObjectDetectionYOLOXExample.cs:123)

    (Filename: Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionYOLOXExample/ObjectDetectionYOLOXExample.cs Line: 264)


    =================================================================
    Native Crash Reporting
    =================================================================
    Got a UNKNOWN while executing native code. This usually indicates
    a fatal error in the mono runtime or one of the native libraries
    used by your application.
    =================================================================

    =================================================================
    Managed Stacktrace:
    =================================================================
    at <unknown> <0xffffffff>
    at OpenCVForUnity.DnnModule.Net:dnn_Net_forward_14 <0x00097>
    at OpenCVForUnity.DnnModule.Net:forward <0x000c2>
    at OpenCVForUnityExample.DnnModel.YOLOXObjectDetector:infer <0x0014a>
    at OpenCVForUnityExample.ObjectDetectionYOLOXExample:Update <0x002e6>
    at System.Object:runtime_invoke_void__this__ <0x00087>
    =================================================================
    Received signal SIGSEGV
    Obtained 36 stack frames
    0x00007ffcc434eefc (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffd77e91716 (ntdll) RtlCaptureContext2
    0x00007ffcc35e859b (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc348fa8a (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc3968c71 (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc3872250 (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc37fba0f (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc37fd07c (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc37fb3a6 (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc371ace2 (opencvforunity) tracking_legacy_1TrackerTLD_create_10
    0x00007ffcc32b3492 (opencvforunity) dnn_Net_forward_14
    0x00000158576cdf98 (Mono JIT Code) (wrapper managed-to-native) OpenCVForUnity.DnnModule.Net:dnn_Net_forward_14 (intptr,intptr,intptr)
    0x00000158576ccfb3 (Mono JIT Code) OpenCVForUnity.DnnModule.Net:forward (System.Collections.Generic.List`1<OpenCVForUnity.CoreModule.Mat>,System.Collections.Generic.List`1<string>)
    0x00000158576cac5b (Mono JIT Code) [YOLOXObjectDetector.cs:150] OpenCVForUnityExample.DnnModel.YOLOXObjectDetector:infer (OpenCVForUnity.CoreModule.Mat)
    0x00000158576ca6b7 (Mono JIT Code) [ObjectDetectionYOLOXExample.cs:338] OpenCVForUnityExample.ObjectDetectionYOLOXExample:Update ()
    0x00000158588bf4a8 (Mono JIT Code) (wrapper runtime-invoke) object:runtime_invoke_void__this__ (object,intptr,intptr,intptr)
    0x00007ffce1e3feb4 (mono-2.0-bdwgc) [mini-runtime.c:3445] mono_jit_runtime_invoke
    0x00007ffce1d7e764 (mono-2.0-bdwgc) [object.c:3066] do_runtime_invoke
    0x00007ffce1d7e8fc (mono-2.0-bdwgc) [object.c:3113] mono_runtime_invoke
    0x00007ff7a4bc1e94 (Unity) scripting_method_invoke
    0x00007ff7a4ba1694 (Unity) ScriptingInvocation::Invoke
    0x00007ff7a4b8a794 (Unity) MonoBehaviour::CallMethodIfAvailable
    0x00007ff7a4b8a882 (Unity) MonoBehaviour::CallUpdateMethod
    0x00007ff7a4674c3b (Unity) BaseBehaviourManager::CommonUpdate<BehaviourManager>
    0x00007ff7a467c00a (Unity) BehaviourManager::Update
    0x00007ff7a488f59d (Unity) `InitPlayerLoopCallbacks'::`2'::UpdateScriptRunBehaviourUpdateRegistrator::Forward
    0x00007ff7a4875bda (Unity) ExecutePlayerLoop
    0x00007ff7a4875d66 (Unity) ExecutePlayerLoop
    0x00007ff7a487be49 (Unity) PlayerLoop
    0x00007ff7a57edab9 (Unity) PlayerLoopController::UpdateScene
    0x00007ff7a57ebc5b (Unity) Application::TickTimer
    0x00007ff7a5c40fba (Unity) MainMessageLoop
    0x00007ff7a5c4588b (Unity) WinMain
    0x00007ff7a702315e (Unity) __scrt_common_main_seh
    0x00007ffd771d7344 (KERNEL32) BaseThreadInitThunk
    0x00007ffd77e426b1 (ntdll) RtlUserThreadStart
     
    Last edited: Jan 22, 2024
  43. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Thank you very much for reporting.
    Code (CSharp):
    1. Assets\OpenCVForUnity\Examples\MainModules\videoio\VideoWriterAsyncExample\VideoWriterAsyncExample.cs(339,44): error CS1503: Argument 1: cannot convert from 'Unity.Collections.NativeArray<byte>' to 'System.IntPtr'
    The bug has been identified and fixed. A revised version has been uploaded.

    As for the Editor crashing issue, I could not reproduce it in my test environment.
    OpenCVForUnity version : 2.5.8
    Unity version : 2020.3.48f1, 2022.3.2f1
    Editor Platform : Windows 10

    Could you tell me about your testing environment?
     
  44. apprisify_jan

    apprisify_jan

    Joined:
    Jan 22, 2024
    Posts:
    3
    It's a current-spec machine: Intel 13900K, RTX 4080, Windows 10. What do you want to know? Do you need more logs or similar?
     
  45. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    Could you send me the project folder where the problem occurs for investigation?
    store@enoxsoftware.com
     
  46. apprisify_jan

    apprisify_jan

    Joined:
    Jan 22, 2024
    Posts:
    3
    I did, via WeTransfer. Furthermore, I tried to find a more expressive part of the log.

    Code (CSharp):
    1. Screen.width 1469 Screen.height 756 Screen.orientation Portrait
    2. UnityEngine.StackTraceUtility:ExtractStackTrace ()
    3. UnityEngine.DebugLogHandler:LogFormat (UnityEngine.LogType,UnityEngine.Object,string,object[])
    4. UnityEngine.Logger:Log (UnityEngine.LogType,object)
    5. UnityEngine.Debug:Log (object)
    6. OpenCVForUnityExample.ObjectDetectionNanoDetPlusExample:OnWebCamTextureToMatHelperInitialized () (at Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionNanoDetPlusExample/ObjectDetectionNanoDetPlusExample.cs:265)
    7. UnityEngine.Events.InvokableCall:Invoke ()
    8. UnityEngine.Events.UnityEvent:Invoke ()
    9. OpenCVForUnity.UnityUtils.Helper.WebCamTextureToMatHelper/<_Initialize>d__68:MoveNext () (at Assets/OpenCVForUnity/org/opencv/unity/helper/WebCamTextureToMatHelper.cs:742)
    10. UnityEngine.SetupCoroutine:InvokeMoveNext (System.Collections.IEnumerator,intptr)
    11. UnityEngine.MonoBehaviour:StartCoroutine (System.Collections.IEnumerator)
    12. OpenCVForUnity.UnityUtils.Helper.WebCamTextureToMatHelper:Initialize () (at Assets/OpenCVForUnity/org/opencv/unity/helper/WebCamTextureToMatHelper.cs:447)
    13. OpenCVForUnityExample.ObjectDetectionNanoDetPlusExample:Run () (at Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionNanoDetPlusExample/ObjectDetectionNanoDetPlusExample.cs:192)
    14. OpenCVForUnityExample.ObjectDetectionNanoDetPlusExample:Start () (at Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionNanoDetPlusExample/ObjectDetectionNanoDetPlusExample.cs:124)
    15.  
    16. (Filename: Assets/OpenCVForUnity/Examples/MainModules/dnn/ObjectDetectionNanoDetPlusExample/ObjectDetectionNanoDetPlusExample.cs Line: 265)
    17.  
    18. TrimDiskCacheJob: Current cache size 0mb
    19. Scanning for USB devices : 2.109ms
    20. Android Extension - Scanning For ADB Devices 124 ms
    21. Android Extension - Scanning For ADB Devices 118 ms
    22. Attempted to call .Dispose on an already disposed CancellationTokenSource
    23. Android Extension - Scanning For ADB Devices 118 ms
    24. Android Extension - Scanning For ADB Devices 121 ms
    25. Crash!!!
    Does this help narrow it down? It crashes in all of the object detection examples, but not in the basic DNN ones like WebcamToMat etc., though seemingly at different points.
     
    Last edited: Jan 24, 2024
  47. svenneve

    svenneve

    Joined:
    May 14, 2013
    Posts:
    79
    We've bought this plugin with the express purpose of camera calibration (Unity 2021.3.31f1, OpenCV for Unity 2.5.8, Win 11)

    But lo and behold, NONE of the ChArUco examples work; it seems like the aruco code was completely commented out.
    Examples posted by Enox don't seem to work either (for example, https://forum.unity.com/threads/released-opencv-for-unity.277080/page-63#post-9587674 throws errors every frame:
    NullReferenceException: Object reference not set to an instance of an object
    OpenCVForUnityExample.ArUcoWebCamDownScaleExample.Update () (at Assets/OpenCVForUnity/Examples/MainModules/objdetect/ArUcoExample/ArUcoWebCamDownScaleExample.cs:538)
    )

    I probably would have had more success wrapping OpenCvSharp myself with the features I needed, rather than using this buggy, barely documented mess.
     
    Last edited: Feb 14, 2024
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,554
    As mentioned in the comments here, there seems to be an unresolved bug in aruco's ChArUco board-related methods, including calibrateCameraCharuco.
    opencv/opencv#23493 (comment)
    The OpenCV aruco module has had some breaking changes over 2022 and 2023, but it still does not seem to have been validated with sufficient test code or sample code.
    https://github.com/EnoxSoftware/OpenCVForUnity/issues/164
    https://github.com/opencv/opencv/issues/23493

    This bug does not occur with older versions of OpenCVForUnity, which wrap an older version of OpenCV. If you need an older version of OpenCVForUnity, please contact us via the form with your invoice number.
    https://enoxsoftware.com/opencvforunity/contact/technical-inquiry/
     
  49. svenneve

    svenneve

    Joined:
    May 14, 2013
    Posts:
    79
    Yeah, I read about the breaking changes on the OpenCV Git. It seems no one has taken charge of or responsibility for that part of OpenCV, which is a shame, as aruco boards are really fast and reliable in production for us.

    The strange thing is that the ChArUco board scanning does work (the ChArUco webcam example shows this); it seems the ChArUco camera calibration example provided in OpenCVForUnity simply doesn't do anything (strangely enough, the checkerboard option does seem to work, somewhat, although it is very unreliable).

    I would like to investigate a bit more, but right now this is outside my budget for this project. I might sort it by going the microservice route and generating the camera matrix and diff outside Unity (seeing as we got it working in Python already).