[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    Could you tell me about the environment you tested in?
    OpenCVForUnity version :

    UnityCloudBuild Basic info
    Unity version :
    Builder Operating System and Version :
    Xcode version :
     
  2. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8
    OpenCVForUnity version: 2.5.6
    Unity version : Auto Detect 2021.3.231
    Builder Operating System and Version : MacOS Monterey
    Xcode version : 14.2.0
     
  3. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8
    *Unity version : Auto Detect 2021.3.31
     
  4. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    Unfortunately, in WebGL the VideoCapture class does not support loading a video stream from a URL.
    VideoPlayerWithOpenCVForUnityExample is an example of how to convert between a VideoPlayer texture and a Mat. By combining this code with FaceMaskExample, it may be possible to use the VideoPlayer class instead of the VideoCapture class.
    https://github.com/EnoxSoftware/VideoPlayerWithOpenCVForUnityExample/
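The core of that example is copying the VideoPlayer's current frame into an OpenCV Mat each frame. A minimal sketch of the idea, assuming the asset's Utils.textureToTexture2D / Utils.texture2DToMat helpers (the component name and structure here are illustrative, not the repository's actual code):

```csharp
using UnityEngine;
using UnityEngine.Video;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.UnityUtils;

public class VideoPlayerToMat : MonoBehaviour
{
    public VideoPlayer videoPlayer; // a VideoPlayer that is already playing

    Texture2D frameTexture;
    Mat frameMat;

    void Update()
    {
        if (videoPlayer == null || videoPlayer.texture == null) return;

        int w = videoPlayer.texture.width;
        int h = videoPlayer.texture.height;

        // Lazily (re)allocate once the video size is known.
        if (frameTexture == null || frameTexture.width != w || frameTexture.height != h)
        {
            frameTexture = new Texture2D(w, h, TextureFormat.RGBA32, false);
            frameMat = new Mat(h, w, CvType.CV_8UC4);
        }

        // Copy the VideoPlayer's texture into a readable Texture2D...
        Utils.textureToTexture2D(videoPlayer.texture, frameTexture);
        // ...and then into an OpenCV Mat for processing.
        Utils.texture2DToMat(frameTexture, frameMat);

        // frameMat can now be fed to face detection etc.
    }
}
```

The resulting frameMat can then stand in for the VideoCapture frame in a pipeline such as FaceMaskExample.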
     
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    The build succeeded without problems in my environment.
    2023-10-24_00h36_06.png
    Try rebuilding with Clean Build; the build may then succeed.
     
  6. CaptainPyFace

    CaptainPyFace

    Joined:
    Apr 12, 2013
    Posts:
    9
    Thank you for your reply. However, something does seem to go wrong when playing the video. Streaming the URL file in WebGL works now, but the playback speed is too slow, and changing the playback speed does not work as you'd expect. Setting the playback speed to 2.5 should make the WebGL URL video play at the correct speed, but the build seems to round this value off to 3. Setting the playback speed to 2.1 also plays the video at playback speed 3, so the input float value is rounded up to an integer. Is there another way to change the playback speed that gives more granular control?
     
    Last edited: Oct 24, 2023
  7. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8
    Thanks for your help, but the Clean Build option is no longer available on Unity Cloud. The only way we have found is to activate the toggle shown in the attached image, but despite this the problem remains the same:

    INFO: ❌ ld: framework not found opencv2


    Is it possible that something needs to be changed at the Unity Editor level before sending the project to Cloud Build? For example, at the framework level in the Inspector, or in the Project Settings?
     

    Attached Files:

  8. AppliedEngineering

    AppliedEngineering

    Joined:
    Feb 1, 2016
    Posts:
    8


    Found how to do the clean build, but the error remains.
     
  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    From the "Build" button, you can select "Clean Build". If you set up OpenCVForUnity in a new project, does this problem still occur?
    2023-10-24_23h15_24.png
     
  10. bumchikiboss

    bumchikiboss

    Joined:
    Mar 17, 2020
    Posts:
    1
    Hello,
    I want to implement Vuforia's ARCamera with TextRecognitionCRNNWebCamExample.
    The example uses the default main camera and the WebCamTextureToMatHelper script.
    It throws a timeout error when using the AR camera. What do I need to do to use the AR Camera with text recognition?
    Sorry, I'm quite a novice at this. I would really appreciate it if anyone could explain the steps briefly.
     
  11. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    WebCamTextureToMatHelper cannot be used, because Vuforia's ARCamera does not use WebCamTexture.
    This repository is an example of how to convert from Vuforia's ARCamera to OpenCV's Mat.
    https://github.com/EnoxSoftware/VuforiaWithOpenCVForUnityExample/
     
  12. jinC_H

    jinC_H

    Joined:
    Aug 29, 2017
    Posts:
    12
    Can we change the language of the OpenCV for Unity text recognition, for example to Japanese?
     
    Last edited: Oct 28, 2023
  13. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    The dnn model used in "TextRecognitionCRNNExample" does not support the Japanese language.
    To support Japanese, this model would need to be retrained on Japanese text. The following page may be helpful.
    https://docs.opencv.org/4.x/d4/d43/tutorial_dnn_text_spotting.html
     
  14. jinC_H

    jinC_H

    Joined:
    Aug 29, 2017
    Posts:
    12
  15. Pichardoestrada

    Pichardoestrada

    Joined:
    Aug 11, 2019
    Posts:
    1
    Hello, I am using Unity free assets and the trial version of OpenCV for Unity. The game works perfectly in Edit mode, without errors, but when I export the game it does not work: the webcam does not react. In Edit mode everything works fine; I configure lowerUpper and other values to detect colors, and everything is great. It just doesn't work after exporting. The webcam starts correctly but everything is dark, and the sliders to modify the colors do not respond; after a restart you can see the changes in the sliders and values, but the webcam is still dark. Is this due to using the trial version, or is something missing?
     
  16. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    The free trial version only supports running in the Unity Editor.
    https://enoxsoftware.com/opencvforunity/get_asset/
     
  17. AllanSamurai

    AllanSamurai

    Joined:
    Jan 29, 2015
    Posts:
    6
    Sorry for asking... Can we have a Black Friday discount please?
    Thanks! Thanks very much!!
     
    Last edited: Nov 15, 2023
  18. CaptainPyFace

    CaptainPyFace

    Joined:
    Apr 12, 2013
    Posts:
    9
    Is it possible to improve dlib performance by scaling down the resolution of the Mat used for face detection, without having to scale down the video resolution of the output? Are there other ways to speed up dlib (e.g. skipping frames) without sacrificing accuracy? Thank you.
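A common approach along these lines is to run detection on a downscaled copy of the frame and scale the resulting rectangles back up, leaving the full-resolution output untouched. A sketch under that assumption (not code from the asset; the `detect` delegate stands in for whatever dlib wrapper call you use):

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;

public static class DownscaledDetection
{
    // Detect on a scaled-down copy of 'frame', then map the rects back to
    // full-frame coordinates. 'detect' is a placeholder for your face
    // detector (dlib wrapper, CascadeClassifier, etc.).
    public static Rect[] DetectDownscaled(Mat frame, System.Func<Mat, Rect[]> detect, double scale = 0.5)
    {
        using (Mat small = new Mat())
        {
            Imgproc.resize(frame, small, new Size(), scale, scale, Imgproc.INTER_LINEAR);

            Rect[] rects = detect(small);

            // Scale detections back up to the original resolution;
            // the full-resolution frame itself is never modified.
            for (int i = 0; i < rects.Length; i++)
            {
                rects[i].x = (int)(rects[i].x / scale);
                rects[i].y = (int)(rects[i].y / scale);
                rects[i].width = (int)(rects[i].width / scale);
                rects[i].height = (int)(rects[i].height / scale);
            }
            return rects;
        }
    }
}
```

Frame skipping works similarly: run the (expensive) detector only every Nth frame and reuse or track the last rectangles in between.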
     
    Last edited: Nov 18, 2023
  19. Cuicui_Studios

    Cuicui_Studios

    Joined:
    May 3, 2014
    Posts:
    71
    Hi!
    We are following the PoseEstimationMediaPipeExample to try to get the landmarks of the user in front of the camera (on Android devices). In the example, the MediaPipePoseEstimator seems to get all the user's joints and positions during the infer function and return them in the results object:
    // results[0] = [bbox_coords, landmarks_coords, landmarks_coords_world, conf]

    Is there a clean way of getting the landmarks after the infer function? We want the user to interact with some objects (not just paint things on an image). We've tried getting them from the results[0] object after the infer function, but the Mat object does not have a function to get the landmarks_coords_world array of coordinates.
    Can anyone point us towards any useful docs or anything? We are using C#.
    Thanks in advance.
    [Edit: found the raw data]

    // # [0: 4]: person bounding box found in image of format [x1, y1, x2, y2] (top-left and bottom-right points)
    // # [4: 199]: screen landmarks with format [x1, y1, z1, v1, p1, x2, y2 ... x39, y39, z39, v39, p39], z value is relative to HIP
    // # [199: 316]: world landmarks with format [x1, y1, z1, x2, y2 ... x39, y39, z39], 3D metric x, y, z coordinate
    // # [316]: confidence

    Guess we got to deal with this...
    Thanks anyway guys!
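Given that layout, pulling the world landmarks out of the flat result can be sketched as follows (a hedged sketch: it assumes each person's result is a single-column CV_32F Mat of 317 values, read with the Mat.get float[] overload):

```csharp
using UnityEngine;
using OpenCVForUnity.CoreModule;

public static class PoseResultParser
{
    const int NumLandmarks = 39;
    const int WorldOffset = 199; // start of world landmarks in the flat result

    // Extract the 39 metric world landmarks from one person's flat result,
    // laid out as: [0:4] bbox, [4:199] screen landmarks (39 x 5),
    // [199:316] world landmarks (39 x 3), [316] confidence.
    public static Vector3[] GetWorldLandmarks(Mat person)
    {
        float[] data = new float[(int)person.total()];
        person.get(0, 0, data); // copy the whole result into a float[]

        var landmarks = new Vector3[NumLandmarks];
        for (int i = 0; i < NumLandmarks; i++)
        {
            int o = WorldOffset + i * 3;
            landmarks[i] = new Vector3(data[o], data[o + 1], data[o + 2]);
        }
        return landmarks;
    }
}
```

The returned Vector3 array can then drive interaction with scene objects instead of only being painted onto the image.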
     
    Last edited: Dec 4, 2023
  20. diegojmoreira

    diegojmoreira

    Joined:
    Dec 14, 2021
    Posts:
    2
    Hi!

    I am trying to create a tracker using the Unity camera and the tracker algorithms from OpenCV. I'm having a performance problem when using ReadPixels every frame. I'm trying to use the KCF tracker. Code below:


    Code (CSharp):
    UnityEngine.Rect rect = new UnityEngine.Rect(0, 0, mWidth, mHeight);
    screenShot = new Texture2D(mWidth, mHeight, TextureFormat.RGB24, false);

    RenderTexture.active = trackingCamera.activeTexture;
    screenShot.ReadPixels(rect, 0, 0);
    RenderTexture.active = null;

    Mat cameraMat = new Mat(mHeight, mWidth, CvType.CV_8UC3);
    Utils.texture2DToMat(screenShot, cameraMat); // obtain frame from the Unity camera

    Destroy(screenShot);

    // if (keepStartImage)
    //     cameraMat = startTrackingImage;

    bool trackingActive = trackerEngine.update(cameraMat, targetBounding);
    Also, the KCF tracker doesn't seem to be working: the trackingActive variable is always true, but when I draw the bounding box I can see that it is not surrounding the target.
     
  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    Could you try the attachment?

    TrackingRecderTexture.gif
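The attachment itself is not reproduced here. A common way to avoid the per-frame ReadPixels stall is Unity's AsyncGPUReadback; a sketch of that approach (the OpenCV side, MatUtils.copyToMat, is an assumption based on the asset's utility API, and the component structure is illustrative):

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.UtilsModule;

public class AsyncCameraToMat : MonoBehaviour
{
    public Camera trackingCamera; // camera rendering into a RenderTexture
    Mat cameraMat;

    void Update()
    {
        RenderTexture rt = trackingCamera.targetTexture;
        if (rt == null) return;

        if (cameraMat == null)
            cameraMat = new Mat(rt.height, rt.width, CvType.CV_8UC4);

        // Request the GPU copy; the callback fires a few frames later,
        // without blocking the main thread the way ReadPixels does.
        AsyncGPUReadback.Request(rt, 0, TextureFormat.RGBA32, OnReadback);
    }

    void OnReadback(AsyncGPUReadbackRequest request)
    {
        if (request.hasError) return; // e.g. the RenderTexture was resized

        // Copy the raw pixel data into the OpenCV Mat.
        // Note: readback data may be vertically flipped relative to
        // texture conventions, so a Core.flip may be needed afterwards.
        MatUtils.copyToMat(request.GetData<byte>().ToArray(), cameraMat);

        // cameraMat is now ready for trackerEngine.update(...) etc.
    }
}
```

The trade-off is latency: the Mat you process is a few frames behind the camera, which is usually acceptable for tracking.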
     

    Attached Files:

  22. diegojmoreira

    diegojmoreira

    Joined:
    Dec 14, 2021
    Posts:
    2
    I tried it, and the performance increased greatly. I've added this solution to my code, but now when my player starts moving while tracking the enemy, I start to receive blank images from the GPU and the track is lost. movement_bug (online-video-cutter.com).gif
     
  23. Cuicui_Studios

    Cuicui_Studios

    Joined:
    May 3, 2014
    Posts:
    71
    Hi,
    We've been running some tests using the PoseEstimationMediaPipeExample and managed to get the landmarks for different people in front of the camera (on android devices). Pretty cool btw.
    We've run into one problem: the array of persons switches indexes randomly, i.e. on one frame persons[0] is associated with one person and persons[1] with another, but on the following frame the landmarks switch from persons[0] to persons[1] (and I suspect it also happens with more persons in the array of data).
    Is there a way of getting a "consistent" array of persons between frames? Or do we need to track that manually (e.g. by storing specific landmarks and associating them with a specific person)?
    Thanks.
     
  24. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    The task of tracking and assigning IDs to the results of object detection is commonly referred to as "Multiple Object Tracking".
    Recent representative algorithms include "BoT-SORT" and "ByteTrack". C++ implementations can be found, but not C# ones.
    https://github.com/viplix3/BoTSORT-cpp
    https://github.com/Vertical-Beach/ByteTrack-cpp
    Some of our examples include code for multiple object tracking with very primitive algorithms.
    https://github.com/EnoxSoftware/FaceMaskExample/tree/master/Assets/FaceMaskExample/RectangleTracker
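As a flavor of what such a primitive tracker does, ID assignment can be as simple as greedy IoU matching between the previous frame's tracked boxes and the current detections (a sketch in plain C#; the RectangleTracker linked above is more elaborate):

```csharp
using System;
using System.Collections.Generic;

public struct Box { public float x, y, w, h; public int id; }

public static class SimpleIouTracker
{
    // Intersection over union of two axis-aligned boxes.
    static float Iou(Box a, Box b)
    {
        float x1 = Math.Max(a.x, b.x), y1 = Math.Max(a.y, b.y);
        float x2 = Math.Min(a.x + a.w, b.x + b.w), y2 = Math.Min(a.y + a.h, b.y + b.h);
        float inter = Math.Max(0f, x2 - x1) * Math.Max(0f, y2 - y1);
        return inter / (a.w * a.h + b.w * b.h - inter);
    }

    // Give each detection the id of the best-overlapping previous box
    // (IoU above the threshold), otherwise a fresh id.
    public static void AssignIds(List<Box> previous, List<Box> detections, ref int nextId, float iouThreshold = 0.3f)
    {
        var taken = new HashSet<int>();
        for (int i = 0; i < detections.Count; i++)
        {
            int best = -1; float bestIou = iouThreshold;
            for (int j = 0; j < previous.Count; j++)
            {
                if (taken.Contains(j)) continue;
                float iou = Iou(detections[i], previous[j]);
                if (iou > bestIou) { bestIou = iou; best = j; }
            }
            var d = detections[i];
            d.id = best >= 0 ? previous[best].id : nextId++;
            detections[i] = d;
            if (best >= 0) taken.Add(best);
        }
    }
}
```

This keeps IDs stable as long as people move less than one box-width per frame; for crossings and occlusions you would need the motion models that BoT-SORT and ByteTrack add.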
     
  25. AcuityCraig

    AcuityCraig

    Joined:
    Jun 24, 2019
    Posts:
    56
    Hello @EnoxSoftware,

    I'm currently working on a project using ArUco markers, based on your MarkerBasedARExample project. The tracking in that project does not seem to be as reliable as in the ArUcoWebCamExample that comes with the base asset. So I have a few questions:

    Is there a way to adjust the expected marker size in MarkerBasedARExample? I printed the markers at 1.5 inches square and they don't get detected, but they will if I print them at 2 inches. ArUcoWebCamExample does not seem to have an issue detecting them.

    In the ArUcoWebCamExample scene, is there a way to return the ID of the marker that was found? I cannot find a method for it, but I know it draws the ID to the screen if you have that option selected.

    Thanks!
     
  26. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    The MarkerBasedARExample code is a rewrite of https://github.com/MasteringOpenCV/code/tree/master/Chapter2_iPhoneAR using "OpenCV for Unity". The algorithm is described in detail in "Mastering OpenCV with Practical Computer Vision Projects". This example implements its own algorithm for detecting a two-dimensional code.
    For greater accuracy and speed, it is recommended to use the ArUco module.
    https://github.com/EnoxSoftware/Ope...ty/Examples/ContribModules/aruco/ArUcoExample

    You can get each id using the following code.

    Code (CSharp):
    for (int i = 0; i < ids.total(); i++)
    {
        int id = (int)ids.get(i, 0)[0];
        Debug.Log("id " + id);
    }
     
  27. bihi10

    bihi10

    Joined:
    Jul 5, 2018
    Posts:
    6
    Hi @EnoxSoftware

    I'm currently working on a project using the hand tracking tool and am trying to get the Vector2 or Vector3 position of every hand, but I don't know where to start and couldn't find the line of code that might help.

    Thanks for your work and updates on such a great tool!
     
  28. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,543
    Hi,

    Our asset includes an example "HandPoseEstimationMediaPipeExample" for hand detection and hand pose estimation.
    https://github.com/EnoxSoftware/Ope...Example/HandPoseEstimationMediaPipeExample.cs
    Here is the code for the part of the process that gets 2D and 3D points from the inference results and draws them.
    https://github.com/EnoxSoftware/Ope...ipeExample/MediaPipeHandPoseEstimator.cs#L367

    This example does not include the code for drawing the three-dimensional points, so please also refer to the original Python code that was ported.
    https://github.com/opencv/opencv_zoo/tree/master/models/handpose_estimation_mediapipe
    https://github.com/opencv/opencv_zo...ndpose_estimation_mediapipe/demo.py#L134-L156
     
    Last edited: Dec 22, 2023 at 7:06 AM