
[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. madmarcs

    madmarcs

    Joined:
    Sep 22, 2020
    Posts:
    1
    Hello everyone!

    Does anyone know how I can add faces from image files and then do the recognition from the webcam?
     
  2. SavedByZero

    SavedByZero

    Joined:
    May 23, 2013
    Posts:
    124
Hi there -- all the examples of object detection I'm finding are about faces. However, we need something that can find other arbitrary objects in the environment from an AR camera feed. Is it possible to use the OpenCV training functions to do our own object detection training with this plugin? For example, opencv_traincascade to build the XML file from a set of images, then custom_cascade.DetectMultiScale()? Are these, or something like them, exposed by OpenCVForUnity? (The API showed me some methods with "train" in the name, but I didn't see those.)
     
    Last edited: Jun 2, 2021
  3. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
  4. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    OpenCVForUnity itself does not have the ability to train cascade files.
    Of course, it is possible to use OpenCVForUnity for object detection with cascade files trained using opencv_traincascade.
    https://github.com/opencv/opencv/bl...c3/doc/tutorials/others/traincascade.markdown
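For reference, a minimal sketch of using a cascade trained offline with opencv_traincascade; the file name "custom_cascade.xml" and the rgbaMat frame variable are assumptions for illustration, not part of the asset:

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.ObjdetectModule;
using OpenCVForUnity.UnityUtils;

// Load a cascade trained offline with opencv_traincascade.
// "custom_cascade.xml" (e.g. in StreamingAssets) is a hypothetical file name.
CascadeClassifier cascade = new CascadeClassifier(Utils.getFilePath("custom_cascade.xml"));

// rgbaMat is assumed to be the current camera frame as a CV_8UC4 Mat.
Mat gray = new Mat();
Imgproc.cvtColor(rgbaMat, gray, Imgproc.COLOR_RGBA2GRAY);
Imgproc.equalizeHist(gray, gray);

// Run the detector; tune scaleFactor/minNeighbors/minSize for your object.
MatOfRect objects = new MatOfRect();
cascade.detectMultiScale(gray, objects, 1.1, 3, 0, new Size(30, 30), new Size());

foreach (OpenCVForUnity.CoreModule.Rect r in objects.toArray())
    Imgproc.rectangle(rgbaMat, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 2);
```

detectMultiScale tends to work best on an equalized grayscale image; the scaleFactor and minNeighbors values above are just common starting points.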
     
  5. Philkrom

    Philkrom

    Joined:
    Dec 26, 2015
    Posts:
    90
Hello,
Your asset seems very interesting, and I have a question: how complicated would it be to achieve this? Maybe someone has already made something like that?
Best regards, Phil
     
  6. SavedByZero

    SavedByZero

    Joined:
    May 23, 2013
    Posts:
    124
  7. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    YoloObjectDetectionWebCamTextureExample is included in OpenCVForUnity. This is an example of detecting a car from a WebCam feed.
    opencvforunity2.4.0_feature.png
     
  8. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    That’s correct.
     
  9. BigToe

    BigToe

    Joined:
    Nov 1, 2010
    Posts:
    208
    I am trying to convert a RenderTexture directly to a Mat using a Ptr or NativeArray, but haven't had any luck. I was hoping you might be able to point me in the right direction.

    The pasted code is a sample of some of my attempts.

The second bit is for a Texture2D, which I was using to try to simplify the problem. It works, but I'm not sure how to get a similar native array for render textures.

Code (CSharp):
private RenderTexture _myRenderTexture;
private Texture2D _myTexture2D;

public void PtrToMat()
{
    var targetMat = new Mat(1920, 1080, CvType.CV_8UC4);

    //This doesn't work and RenderTextures don't seem to support GetRawTextureData
    var intPtr = _myRenderTexture.GetNativeTexturePtr();
    MatUtils.copyToMat(intPtr, targetMat);

    //This works for a Texture2D
    var nativeArray = _myTexture2D.GetRawTextureData<Color32>();
    MatUtils.copyToMat(nativeArray, targetMat);
}
    Thanks in advance.

Also, I just tried the NativeArrayUnsafeUtility and probably don't know what I'm doing... which seems safe.
Code (CSharp):
var ptr = _myRenderTexture.GetNativeTexturePtr();

NativeArray<Color32> rtNative = NativeArrayUnsafeUtility.ConvertExistingDataToNativeArray<Color32>((void*)ptr, 1920 * 1080 * 4, Allocator.Persistent);

MatUtils.copyToMat(rtNative, targetMat);
    The rtNative array never seems to get populated.
     
    Last edited: Jun 10, 2021
  10. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567

The pointers obtained from the GetNativeTexturePtr() method cannot be used with MatUtils.copyToMat(intPtr, mat), since they do not point to the start of the pixel buffer in memory.
We looked into the latest information on this subject some time ago, but unfortunately we have not found a more efficient way to get pixel data from a RenderTexture other than "ReadPixels" or "AsyncGPUReadback".
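A minimal sketch of the ReadPixels route mentioned above, reusing the _myRenderTexture field and 1920×1080 size from the question (the RGBA32 format is an assumption):

```csharp
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.UnityUtils;
using UnityEngine;

// Copy a RenderTexture to a Mat by reading it back through a Texture2D.
// ReadPixels stalls the GPU; for better throughput consider AsyncGPUReadback.
Texture2D readback = new Texture2D(_myRenderTexture.width, _myRenderTexture.height,
    TextureFormat.RGBA32, false);

RenderTexture prev = RenderTexture.active;
RenderTexture.active = _myRenderTexture;
readback.ReadPixels(new UnityEngine.Rect(0, 0, _myRenderTexture.width, _myRenderTexture.height), 0, 0);
readback.Apply();
RenderTexture.active = prev;

// Mat is rows x cols, i.e. height x width.
Mat targetMat = new Mat(_myRenderTexture.height, _myRenderTexture.width, CvType.CV_8UC4);
Utils.texture2DToMat(readback, targetMat);
```

This is a synchronous sketch; in a per-frame loop the AsyncGPUReadback path avoids the pipeline stall at the cost of a few frames of latency.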
     
  11. ScottMeyers

    ScottMeyers

    Joined:
    Aug 10, 2017
    Posts:
    5
    Need Help !!!

The Unity Editor crashes repeatedly because of opencvforunity.dll.

    upload_2021-6-11_23-29-15.png
     

    Attached Files:

  12. BigToe

    BigToe

    Joined:
    Nov 1, 2010
    Posts:
    208
    Thank you!
     
  13. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Thank you very much for reporting.

    Could you tell me the environment you tested?
    OpenCV for Unity version :
    Unity version :
    Windows OS version :
     
  14. ScottMeyers

    ScottMeyers

    Joined:
    Aug 10, 2017
    Posts:
    5
    Unity version: 2020.3.11f1 and 2021.1.9f1
    OpenCV for Unity version: v2.4.4
    Windows OS version: Windows 10 20H2 19042.985
     
  15. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    In my environment, OpenCVForUnity is running on multiple PCs without any problems.
    Unity version: 2020.3.11f1
    OpenCV for Unity version: v2.4.4
    Windows OS version: Windows 10 20H2 19042.1052

    Also make sure that you have enough free disk space.
     
  16. rstokke

    rstokke

    Joined:
    Apr 25, 2021
    Posts:
    1
    Can this asset be used with remote cameras streaming over either UDP or TCP, or would this require some 3rd party program to decode the stream and re-broadcast it as a faked webcam device?
     
  17. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
To play a streaming file, ffmpeg.dll is required.
    1)Download "OpenCV for Windows Version 4.5.2"(https://sourceforge.net/projects/opencvlibrary/).
    2)Copy "opencv_videoio_ffmpeg452_64.dll" to the root directory of the Project.
    ffmpeg_copy_ffmpeg.png
    3)Edit User Variables.
    https://stackoverflow.com/questions...reply-when-cv2-videocapture-rtsp-onvif-camera
    ffmepg_edit_user_variables.png

I succeeded in playing this stream.
Code (CSharp):
//capture.open(Utils.getFilePath(VIDEO_FILENAME));
capture.open("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov", Videoio.CAP_FFMPEG);
    ffmpeg_play_ffmpeg.png
     
  18. rvvnnn

    rvvnnn

    Joined:
    Jul 14, 2021
    Posts:
    6
    Hey @EnoxSoftware ,
I recently purchased your asset. I'm really liking it so far. I am using the ResNet face detection example and trying to integrate a customizable sci-fi holo interface (link: https://assetstore.unity.com/packages/2d/textures-materials/customizable-scifi-holo-interface-69794) instead of the green bounding boxes. (Please see the attached pictures.)

According to a video I was watching, I was to modify the code of "DnnObjectDetectionWebCamTextureExample.cs" by bringing in this GUI object. Here is the GitHub link for the modified code as provided by the video (https://bit.ly/2TksAlw). However, I got some errors after changing the code:
Assets/OpenCVForUnity/Examples/MainModules/dnn/LibFaceDetectionV3Example/LibFaceDetectionV3WebCamTextureExample.cs(71,33): error CS0115: 'LibFaceDetectionV3WebCamTextureExample.postprocess(Mat, List<Mat>, Net, int)': no suitable method found to override.

    youtube video link:
    .

    Thanks so much in advance.

Edit: I am using Unity 2020.3.13f1 and OpenCVForUnity 2.4.2.
     

    Attached Files:

    Last edited: Jul 17, 2021
  19. superaldo666

    superaldo666

    Joined:
    Aug 30, 2013
    Posts:
    17
    Hello EnoxSoftware,

I'm playing with the YoloObjectDetectionWebCamTextureExample and it works nicely, but I would like to detect only "Person", and only in a specific (rectangular) area of the video input. First, to show only the "Person" green rectangles, I tried this in postprocess():

Code (CSharp):
if (classIdsList[idx].ToString() == "0") {
    //Debug.Log("personas: " + idx.ToString());
    Rect2d box = boxesList[idx];
    drawPred(classIdsList[idx], confidencesList[idx], box.x, box.y,
        box.x + box.width, box.y + box.height, frame);
}
It works, but I'm sure it is not the correct way. Next, I would like to show a message when no "Person" is detected. I tried this:

Code (CSharp):
OpenCVForUnity.CoreModule.Rect2d[] rects = boxesList.ToArray();
for (int i = 0; i < rects.Length; i++)
{
    Debug.Log(rects);
    if (rects.Length <= 0)
    {
        Debug.Log("No person detected");
    }
}
but no luck. Can you help me? How can I trigger a flag when no person is detected?
Also, if it is not too much to ask: how can I limit the recognition area so it doesn't "scan" the whole frame, just a certain area of the background video?

Thanks for the awesome work,
     
  20. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    The video seems to be based on the example code included in an older version of OpenCVForUnity.
    I have created a modified version of the latest example code as per the instructions in the video and attached it so you can try it out.
     

    Attached Files:

  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Try replacing the code near line 606 of the DocumentDetectionWebCamTextureExample.cs with this code.
Code (CSharp):
bool personDetected = false;
for (int idx = 0; idx < boxesList.Count; ++idx)
{
    if (classIdsList[idx] == 0)
    {
        //Debug.Log("personas: " + idx.ToString());
        Rect2d box = boxesList[idx];
        drawPred(classIdsList[idx], confidencesList[idx], box.x, box.y,
            box.x + box.width, box.y + box.height, frame);
        personDetected = true;
    }
}
if (!personDetected)
{
    Debug.Log("No person detected");
}
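For the other part of the question (limiting the area that is scanned), one common approach, sketched here with placeholder ROI coordinates, is to run detection on a submat of the frame and then offset the resulting boxes back into frame coordinates:

```csharp
// Detect only inside a region of interest by cropping the frame first.
// The (100, 100, 400, 300) rectangle is a placeholder; clamp it to the
// actual frame size in real code.
OpenCVForUnity.CoreModule.Rect roi = new OpenCVForUnity.CoreModule.Rect(100, 100, 400, 300);
Mat roiMat = new Mat(frame, roi); // a view into frame, no pixel copy

// ... run preprocessing/inference on roiMat instead of frame ...

// Boxes returned for roiMat are relative to the ROI, so offset them back
// before drawing on the full frame:
Rect2d box = boxesList[idx];
drawPred(classIdsList[idx], confidencesList[idx],
    box.x + roi.x, box.y + roi.y,
    box.x + roi.x + box.width, box.y + roi.y + box.height, frame);
```

This is only a sketch against the example's variable names (frame, boxesList, drawPred); the smaller input also tends to speed up inference.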
     
    superaldo666 likes this.
  22. rvvnnn

    rvvnnn

    Joined:
    Jul 14, 2021
    Posts:
    6
    Hi @EnoxSoftware,
Thanks so much for the reply. I tried it, however it doesn't seem to be doing much. It doesn't seem to detect any face; it just shows like this (with the cube encircled). I'm very new to C#, that's why it's really hard for me.

The main idea was to use the game object as a marker for face detection (the job of the green bounding boxes). So I want to replace the green bounding boxes with the game object as soon as a face is detected. Hope you understand where I am going.

    Thanks for all the help
     

    Attached Files:

    Last edited: Jul 18, 2021
  23. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
According to the message in the screenshot, it seems that the model file YuFaceDetectNet.onnx does not exist in the project.
In order to run the LibFaceDetectionV3Example, you will need to download the model file from this link and place it in the "Assets/StreamingAssets/dnn/" folder.
https://github.com/ShiqiYu/libfacedetection.train/raw/master/tasks/task1/onnx/YuFaceDetectNet.onnx
     
  24. rvvnnn

    rvvnnn

    Joined:
    Jul 14, 2021
    Posts:
    6
I downloaded it and moved it to the "Assets/StreamingAssets/dnn/" folder, but it results in Unity crashing.

    Thanks once again.
     

    Attached Files:

  25. InformaticaIQx

    InformaticaIQx

    Joined:
    Jan 7, 2020
    Posts:
    6
    Hi @EnoxSoftware,
I'm trying to run Aruco.detectMarkers() with AprilTag's corner refinement, but it throws an "Integer division by zero" exception in UWP on HoloLens 2. I am on version 2.4.4 of OpenCV for Unity and OpenCV 4.5.2.

As I've been reading https://github.com/opencv/opencv_contrib/issues/2643, it may be that an older version of OpenCV works, and I was wondering if you could send me an older version that uses an OpenCV version lower than 4.3.0, so I can test it. Otherwise, could you tell me how to run Aruco.detectMarkers() with AprilTag's corner refinement on HoloLens 2?
     
  26. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Thank you very much for reporting.
    Could you send me an email using the contact form below?
    https://enoxsoftware.com/opencvforunity/contact/technical-inquiry/
     
  27. rvvnnn

    rvvnnn

    Joined:
    Jul 14, 2021
    Posts:
    6
    @EnoxSoftware pls help
     
  28. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    I just tested it in a similar environment to yours and it seems to be working fine. (I am using unity 2020.3.13f1 and OpenCVforUnity 2.4.4)
    Could you please let me know if the other examples work after completing the setup according to the "ReadMe.pdf" and "StreamingAssets/dnn/setup_dnn_module.pdf" included with OpenCVForUnity?
     

    Attached Files:

  29. rvvnnn

    rvvnnn

    Joined:
    Jul 14, 2021
    Posts:
    6
    Hey @EnoxSoftware,
I tested each and every example in "StreamingAssets/dnn/setup_dnn_module.pdf". Everything worked except for three:
• MaskRCNNExample (error message in the picture)
• LibFaceDetectionV3Example (Unity just crashes)
• LibFaceDetectionV2Example (links don't work)
I have attached pictures, please check.
     

    Attached Files:

  30. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
Due to a mistake on my part, I gave a wrong answer to that earlier.
It seems that the content of the distribution site for the LibFaceDetectionV2Example and LibFaceDetectionV3Example model files has recently changed.
The correct URLs are here.

    LibFaceDetectionV2Example:
    https://github.com/ShiqiYu/libfaced...dels/caffe/yufacedetectnet-open-v2.caffemodel
    https://github.com/ShiqiYu/libfaced...models/caffe/yufacedetectnet-open-v2.prototxt

    LibFaceDetectionV3Example:
    https://github.com/ShiqiYu/libfaced...c801dd6/tasks/task1/onnx/YuFaceDetectNet.onnx

    As for MaskRCNNExample, I checked it again, but there was no problem.

    I apologize for the extra trouble this has caused.
     
    ehsan_wwe1 likes this.
  31. rvvnnn

    rvvnnn

    Joined:
    Jul 14, 2021
    Posts:
    6
    @EnoxSoftware,

It finally works thanks to your help. Thanks so much for this. It seems that link was what caused the trouble. Don't worry about the extra steps; they helped me try out the other examples.

    Thanks so much once again.
     

    Attached Files:

    EnoxSoftware likes this.
  32. TropicalCyborg

    TropicalCyborg

    Joined:
    Mar 19, 2014
    Posts:
    28
Hey, @EnoxSoftware, I am trying to feed the human segmentation Texture2D from ARFoundation's Occlusion Manager into FloodFill, with no success. The issue, I believe, is in the conversion from Texture2D to Mat. The code I am using is the following:
Code (CSharp):
void ProcessWithOpenCV(Texture2D texture)
{
    Texture2D p_texture = texture;

    // Mat mat = new Mat (p_texture.width, p_texture.height, CvType.CV_8UC3);
    Mat mat = new Mat();

    Utils.fastTexture2DToMat(p_texture, mat, true);

    Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGB2GRAY);

    ProcessImageFloodFill(mat);

    Utils.fastMatToTexture2D(mat, p_texture, true, 1, true, true, true);

    m_Texture = p_texture;

    ocv_rawImage.texture = m_Texture;
}

void ProcessImageFloodFill(Mat mat)
{
    // Clone the matrix into a new one for processing
    Mat im_Threshold = mat.clone();
    // Apply a Gaussian blur, then a threshold
    Imgproc.GaussianBlur(im_Threshold, im_Threshold, new Size(5, 5), 0);
    Imgproc.threshold(im_Threshold, im_Threshold, 220, 255, Imgproc.THRESH_OTSU);
    // Clone the matrix into a new one for processing
    Mat im_threshold_inv = im_Threshold.clone();
    // Invert the threshold and copy it into the original matrix
    Core.bitwise_not(im_Threshold, im_threshold_inv);

    Mat im_floodFill = im_threshold_inv.clone();
    Imgproc.GaussianBlur(im_floodFill, im_floodFill, new Size(5, 5), 0);
    Imgproc.floodFill(im_floodFill, im_floodFill, new Point(0, 0), new Scalar(0));
    im_floodFill.copyTo(mat);
    Imgproc.cvtColor(mat, mat, Imgproc.COLOR_GRAY2RGB);
}
The commented-out line creating a Mat was crashing the app. Any ideas what I am doing wrong?
     
  33. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567

    I found a code sample that converts ARFoundation's XRCameraImage to OpenCV's Mat here.
    https://gitlab.com/-/snippets/2041871
    I haven't tried it in detail, but it may be helpful.
     
    TropicalCyborg likes this.
  34. TropicalCyborg

    TropicalCyborg

    Joined:
    Mar 19, 2014
    Posts:
    28
I'll give it a try. My problem is not with the camera image but with the human segmentation, but it may give me some clues. Thanks!
     
  35. israel_cruz80

    israel_cruz80

    Joined:
    Aug 9, 2018
    Posts:
    2
Hi, I was wondering if there is a tutorial or document explaining how to use custom descriptors in OpenCV for Unity. I was reading the documentation, and you have this load method:
Code (CSharp):
string svmFile = Utils.getFilePath("svm.xml");
Debug.Log(svmFile);
des.load(svmFile);

//also tested, but does not work (Unity freezes)
string svmFile = Utils.getFilePath("svm.xml");
byte[] bytes = System.IO.File.ReadAllBytes(svmFile);
MatOfByte mat = new MatOfByte(bytes);
des.setSVMDetector(MatOfByte.fromNativeAddr(mat.nativeObj));
// or this one
des.setSVMDetector(mat);
but for me it does not work. I have already tested other types of SVM files and none worked. Can someone help me? I want to use the HOG detector with the camera too, but that was easy; I only need a way to use custom descriptors in OpenCV for Unity.
     
    Last edited: Jul 28, 2021
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    I added a short piece of code to the example "Assets\OpenCVForUnity\Examples\MainModules\ml\SVMExample\SVMExample.cs" to test saving and loading the SVM training data file.
    It appears to be working properly.

Code (CSharp):
using UnityEngine;
using UnityEngine.SceneManagement;
using System.Collections;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.MlModule;
using OpenCVForUnity.ImgprocModule;
using OpenCVForUnity.UnityUtils;

namespace OpenCVForUnityExample
{
    /// <summary>
    /// SVM Example
    /// An example of finding a separating straight line using Support Vector Machines (SVM).
    /// Referring to http://docs.opencv.org/3.1.0/d1/d73/tutorial_introduction_to_svm.html#gsc.tab=0.
    /// </summary>
    public class SVMExample : MonoBehaviour
    {
        // Use this for initialization
        void Start ()
        {
            // Data for visual representation
            int width = 512, height = 512;
            Mat image = Mat.zeros (height, width, CvType.CV_8UC4);

            // Set up training data
            int[] labels = { 1, -1, -1, -1 };
            float[] trainingData = { 501, 10, 255, 10, 501, 255, 10, 501 };
            Mat trainingDataMat = new Mat (4, 2, CvType.CV_32FC1);
            trainingDataMat.put (0, 0, trainingData);
            Mat labelsMat = new Mat (4, 1, CvType.CV_32SC1);
            labelsMat.put (0, 0, labels);

            // Train the SVM
            SVM svm = SVM.create ();
            svm.setType (SVM.C_SVC);
            svm.setKernel (SVM.LINEAR);
            svm.setTermCriteria (new TermCriteria (TermCriteria.MAX_ITER, 100, 1e-6));
            svm.train (trainingDataMat, Ml.ROW_SAMPLE, labelsMat);

            // Show the decision regions given by the SVM
            byte[] green = { 0, 255, 0, 255 };
            byte[] blue = { 0, 0, 255, 255 };
            for (int i = 0; i < image.rows (); ++i)
                for (int j = 0; j < image.cols (); ++j) {
                    Mat sampleMat = new Mat (1, 2, CvType.CV_32FC1);
                    sampleMat.put (0, 0, j, i);

                    float response = svm.predict (sampleMat);
                    if (response == 1)
                        image.put (i, j, green);
                    else if (response == -1)
                        image.put (i, j, blue);
                }

            // Show the training data
            int thickness = -1;
            int lineType = 8;

            Imgproc.circle (image, new Point (501, 10), 5, new Scalar (0, 0, 0, 255), thickness, lineType, 0);
            Imgproc.circle (image, new Point (255, 10), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);
            Imgproc.circle (image, new Point (501, 255), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);
            Imgproc.circle (image, new Point (10, 501), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);

            // Show support vectors
            thickness = 2;
            lineType = 8;
            Mat sv = svm.getUncompressedSupportVectors ();
            //Debug.Log ("sv.ToString() " + sv.ToString ());
            //Debug.Log ("sv.dump() " + sv.dump ());
            for (int i = 0; i < sv.rows (); ++i) {
                Imgproc.circle (image, new Point ((int)sv.get (i, 0) [0], (int)sv.get (i, 1) [0]), 6, new Scalar (128, 128, 128, 255), thickness, lineType, 0);
            }

            Texture2D texture = new Texture2D (image.width (), image.height (), TextureFormat.RGBA32, false);
            Utils.matToTexture2D (image, texture);
            gameObject.GetComponent<Renderer> ().material.mainTexture = texture;

            //////////////////////////////////////
            // Save and Load SVM trained data file Test

            Utils.setDebugMode(true);

            string SAVE_TRAINED_DATA_PATH = Application.persistentDataPath + "/svm_trained_data.xml";

            svm.save(SAVE_TRAINED_DATA_PATH);
            //Debug.Log("file path: " + SAVE_TRAINED_DATA_PATH);

            SVM test_svm = SVM.load(SAVE_TRAINED_DATA_PATH);
            //SVM test_svm = SVM.load(Utils.getFilePath("_svm_test/svm_trained_data.xml")); // When loading from the StreamingAssets folder.

            Debug.Log("isTrained: " + test_svm.isTrained());

            if (test_svm.isTrained())
            {
                Mat predicted = new Mat();
                test_svm.predict(trainingDataMat, predicted);

                Debug.Log("test labels: " + labelsMat.dump());
                Debug.Log("predicted: " + predicted.dump());
            }

            Utils.setDebugMode(false);

            //
            // test labels: [1; -1; -1; -1]
            // predicted: [1; -1; -1; -1]
            //

            //////////////////////////////////////
        }

        // Update is called once per frame
        void Update ()
        {

        }

        /// <summary>
        /// Raises the back button click event.
        /// </summary>
        public void OnBackButtonClick ()
        {
            SceneManager.LoadScene ("OpenCVForUnityExample");
        }
    }
}
     
  37. laymelek

    laymelek

    Joined:
    Apr 21, 2017
    Posts:
    16
  38. israel_cruz80

    israel_cruz80

    Joined:
    Aug 9, 2018
    Posts:
    2
OK, maybe it's because I'm training in Dlib using Python, or OpenCV in Python. I'm not training in Unity OpenCV because I don't really understand how I can train using images. I can see you are using just an array of values as training data, but I need an array of images; could you tell me how I can accomplish that?
     
    Last edited: Aug 2, 2021
  39. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
I think OpenCV for Unity is compatible with SVM training data from Python's OpenCV.

    Here's how to flatten an image into an array of values.
Code (CSharp):
Mat img = Imgcodecs.imread(Utils.getFilePath("svm/training_image01.png"), Imgcodecs.IMREAD_GRAYSCALE);
Imgproc.resize(img, img, new Size(64, 64));
Mat flatImg = img.reshape(1, 1); // flatten to a single row
flatImg.convertTo(flatImg, CvType.CV_32FC1);
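Building on the snippet above, a sketch of stacking several flattened images into one training matrix with Mat.push_back; the file names, labels, and 64×64 size are hypothetical:

```csharp
// Stack flattened images into one CV_32FC1 training matrix (one row per sample)
// and a matching CV_32SC1 label column, as expected by svm.train(..., Ml.ROW_SAMPLE, ...).
Mat trainingDataMat = new Mat(0, 64 * 64, CvType.CV_32FC1);
Mat labelsMat = new Mat(0, 1, CvType.CV_32SC1);

string[] files = { "svm/pos_01.png", "svm/neg_01.png" }; // hypothetical paths
int[] labels = { 1, -1 };

for (int i = 0; i < files.Length; i++)
{
    Mat img = Imgcodecs.imread(Utils.getFilePath(files[i]), Imgcodecs.IMREAD_GRAYSCALE);
    Imgproc.resize(img, img, new Size(64, 64));

    Mat row = img.reshape(1, 1); // flatten to a single row
    row.convertTo(row, CvType.CV_32FC1);
    trainingDataMat.push_back(row);

    Mat label = new Mat(1, 1, CvType.CV_32SC1);
    label.put(0, 0, labels[i]);
    labelsMat.push_back(label);
}
```

Every image must be resized to the same dimensions before flattening, so each row has the same number of columns; at predict time the query image needs the same resize/flatten/convert treatment.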
     
    Last edited: Aug 3, 2021
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
  41. GameLoverPassion

    GameLoverPassion

    Joined:
    Jul 27, 2021
    Posts:
    9
Hi, for my client I need to create a WebGL version with face recognition.
I downloaded and imported the package into an empty project (version 2019.1.6f1)
and built the example scene:

FaceDetectionWebCamTextureExample

But I got this error on Play.

Any way to solve this?
     
  42. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Thank you very much for reporting.
    Could you tell me the environment you tested?
    OpenCV for Unity version :
    Unity version :
    browser :
     
  43. GameLoverPassion

    GameLoverPassion

    Joined:
    Jul 27, 2021
    Posts:
    9

Yes, sure:

Unity version: 2019.1.6
OpenCV version: 2.4.2
Browser: Google Chrome/Firefox

Please help me
     
  44. NiklasMoller

    NiklasMoller

    Joined:
    Dec 12, 2018
    Posts:
    6
    Hi @EnoxSoftware

When running the WebCamTextureMarkerBasedARExample, my AR object flickers even when the marker is kept still. See the attached Animation2.gif for a screencast.

I wrote several variable values to a CSV file when debugging. I found that the forward and upwards Vector3s produced by ARUtils.ExtractRotationFromMatrix() seem to change rapidly, and I assume this is what causes my object to make sudden rotations. Highlighted in red is what I am referring to.

upload_2021-8-6_9-19-36.png

Do you know what is causing this?

The problem only seems to occur when holding the marker directly facing the camera. When the marker is tilted a bit, the problem disappears.

A noise reduction algorithm did not work since the shifts are too big. Any tips on how to mitigate/prevent this are much appreciated, since this is for a client.
     

    Attached Files:

  45. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    It worked fine in my environment. I would like to send you the project I have set up, could you send me an email using the form below?
    https://enoxsoftware.com/opencvforunity/contact/technical-inquiry/
     
  46. Jove25

    Jove25

    Joined:
    Mar 8, 2019
    Posts:
    12
Calling the calcOpticalFlowPyrLK function in OpenCV for Unity on WebGL is roughly 20-100 times slower than calling the same function in the Android version or via the official opencv.js. Performance is similarly poor with other functions such as estimateAffinePartial2D.

calcOpticalFlowPyrLK execution time (tested on the same Android device):
OpenCV for Unity WebGL ~ 270 ms
OpenCV for Unity Android ~ 10 ms
opencv.js ~ 12 ms (can be run from Unity WebGL too)

In a real project, the time can increase to 700 ms or more (when the camera is running and other functions are called). At the same time, performance in the Android version or in opencv.js does not drop.

How can I improve the performance of these functions, or are they impossible to use in real time, so opencv.js should be used instead?

Code (CSharp):
// Init
Mat localPrevMat, localNextMat;
MatOfPoint2f mOP2fPrevTrackPts = new MatOfPoint2f();
MatOfPoint2f mOP2fNextTrackPts = new MatOfPoint2f();
MatOfByte status = new MatOfByte();
MatOfFloat err = new MatOfFloat();
private readonly Size winSize = new Size(22, 7);
private readonly TermCriteria criteria = new TermCriteria(TermCriteria.EPS + TermCriteria.COUNT, 10, 0.03);

localPrevMat = new Mat(160, 160, CvType.CV_8UC4);
localNextMat = new Mat(160, 160, CvType.CV_8UC4);

// Call in Update
Point[] localPrevPoints = new Point[500];
Point[] localNextPoints = new Point[500];
for (int i = 0; i < localPrevPoints.Length; i++)
{
    localPrevPoints[i] = new Point(UnityEngine.Random.Range(0, 159), UnityEngine.Random.Range(0, 159));
    localNextPoints[i] = new Point(localPrevPoints[i].x, localPrevPoints[i].y);
}

mOP2fPrevTrackPts.fromArray(localPrevPoints);
mOP2fNextTrackPts.fromArray(localNextPoints);

System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
sw.Restart();
Video.calcOpticalFlowPyrLK(localPrevMat, localNextMat, mOP2fPrevTrackPts, mOP2fNextTrackPts, status, err, winSize, 5, criteria, Video.OPTFLOW_USE_INITIAL_FLOW, 1e-4);
sw.Stop();
Debug.Log(("calcOpticalFlowPyrLK: ", sw.ElapsedMilliseconds));
     
  47. Ikaro88

    Ikaro88

    Joined:
    Jun 6, 2016
    Posts:
    300
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    MarkerBasedARExample Code is a rewrite of https://github.com/MasteringOpenCV/code/tree/master/Chapter2_iPhoneAR/Example_MarkerBasedAR using “OpenCV for Unity”. The algorithm is described in detail in "Mastering OpenCV with Practical Computer Vision Projects". http://www.packtpub.com/cool-projects-with-opencv/book

    This example is a tutorial code, so it is not very accurate. You may be able to get better accuracy by using the ArUco class instead.
    https://github.com/EnoxSoftware/Ope...uco/ArUcoExample/ArUcoWebCamTextureExample.cs
     
    NiklasMoller likes this.
  49. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Thank you very much for reporting.
    Could you send me an email using the contact form below?
    https://enoxsoftware.com/opencvforunity/contact/technical-inquiry/
     
  50. Jove25

    Jove25

    Joined:
    Mar 8, 2019
    Posts:
    12