
[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    OpenCV for Unity
    Released Version 2.0.2


    Version changes
    2.0.2
    [Common]Fixed CS0618 warnings: `UnityEngine.Application.LoadLevel(string)' is obsolete: `Use SceneManager.LoadScene'.
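    For reference, the Unity API change behind that warning looks like this (a minimal sketch; the scene name is hypothetical):
    Code (CSharp):
    using UnityEngine.SceneManagement;

    // Obsolete since Unity 5.3:
    // Application.LoadLevel ("SampleScene");

    // Replacement:
    SceneManager.LoadScene ("SampleScene");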
     
  2. gringofxs

    gringofxs

    Joined:
    Oct 14, 2012
    Posts:
    240
    Can I use SLAM with OpenCV for Unity?
     
  3. aesparza

    aesparza

    Joined:
    Apr 4, 2016
    Posts:
    29
    I found the solution to my problem, here it is!

    To create a mask, set the ROI region to 255 (white) while the rest stays 0:

    // All-zero (black) single-channel mask the same size as the input image.
    Mat mask = new Mat (input_mat.height (), input_mat.width (), CvType.CV_8UC1, new Scalar (0));
    float w = input_mat.width ();
    float h = input_mat.height ();
    // ROI bounds expressed as fractions of an 800x600 reference resolution.
    float xMin = 300f / 800, xMax = 500f / 800, yMin = 40f / 600, yMax = 560f / 600;
    OpenCVForUnity.Rect rect = new OpenCVForUnity.Rect (Mathf.RoundToInt (xMin * w), Mathf.RoundToInt (yMin * h),
        Mathf.RoundToInt ((xMax - xMin) * w), Mathf.RoundToInt ((yMax - yMin) * h));

    // Set the ROI sub-matrix to 255 (white).
    mask.submat (rect).setTo (new Scalar (255));
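    As a follow-up, a minimal sketch of how the mask can then be applied (assuming input_mat is an RGBA Mat; Mat.copyTo with a mask copies only the pixels where the mask is non-zero):
    Code (CSharp):
    Mat masked = new Mat (input_mat.size (), input_mat.type (), new Scalar (0, 0, 0, 255));
    input_mat.copyTo (masked, mask); // pixels outside the ROI stay black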
     
  4. FarazKhalid

    FarazKhalid

    Joined:
    Jan 13, 2014
    Posts:
    4
    Hello,
    I have updated to the new OpenCV release and I am having an issue with the CamShift sample scene: after selecting the 4 points, it does not track. Can you please check that demo again?
    Thanks.
     
  5. aespinosa

    aespinosa

    Joined:
    Feb 1, 2016
    Posts:
    33
    Could you please tell me how to implement the SVM and Boost classifiers? I tried the code below, but it does not work for me: the result for SVM is an empty array and the result for Boost is all ones.

    Mat samples, etiquetas, results1, results2;
    results1 = new Mat ();
    results2 = new Mat ();

    OpenCVForUnity.SVM svm_cla = OpenCVForUnity.SVM.create ();
    OpenCVForUnity.Boost b = OpenCVForUnity.Boost.create ();

    samples = new Mat (10,2,CvType.CV_32FC1);
    samples.put (0,0,5,2,0,2,1,1,0,4,0,5,3,5,4,6,3,3,3,4,4,2);

    etiquetas = new Mat (10,1,CvType.CV_32FC1);
    etiquetas.put (0,0,2,1,1,1,1,1,1,2,1,2);

    Mat test_feat = new Mat (6,2,CvType.CV_32FC1);
    test_feat.put (0,0,2,4,5,0,5,2,0,0,4,3,0,3);

    //boost classifier
    b.setBoostType (OpenCVForUnity.Boost.PREDICT_MAX_VOTE);
    b.train (samples, Ml.ROW_SAMPLE, etiquetas);
    b.predict (test_feat, results1, 1);

    //SVM classifier
    svm_cla.train (samples, Ml.ROW_SAMPLE, etiquetas);
    svm_cla.predict (test_feat, results2, 1);
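    (Two things I would double-check here, stated as assumptions based on the OpenCV 3.x ML documentation rather than as a verified fix: classification responses are normally passed as an integer CV_32SC1 Mat, and a predict flags value of 1 corresponds to StatModel.RAW_OUTPUT, which returns raw decision values instead of class labels. A minimal sketch of those two changes:)
    Code (CSharp):
    // Integer labels for classification (assumption: CV_32SC1 instead of CV_32FC1).
    etiquetas = new Mat (10, 1, CvType.CV_32SC1);
    etiquetas.put (0, 0, 2, 1, 1, 1, 1, 1, 1, 2, 1, 2);

    // flags = 0 asks for predicted class labels; flags = 1 would request raw output.
    b.predict (test_feat, results1, 0);
    svm_cla.predict (test_feat, results2, 0);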
     
  6. Khaled-Abd-El-Nasser

    Khaled-Abd-El-Nasser

    Joined:
    Feb 18, 2014
    Posts:
    1
     
  7. aespinosa

    aespinosa

    Joined:
    Feb 1, 2016
    Posts:
    33
    I don't want to classify faces; I want an SVM or Boost classifier that I can train with my own data.
     
  8. rohrnico

    rohrnico

    Joined:
    May 3, 2016
    Posts:
    1
    Hello,
    Nice work with this plugin!

    I have a question: has anyone solved the problem of using custom images as markers in an augmented reality app, like Vuforia does?
    In other words, marker-based AR?

    Thanks!
     
  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Unfortunately, I have not yet tested SLAM.
     
  10. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    OpenCV for Unity v2.0.2?
    Which platform does it occur on?
     
  11. BOswalt

    BOswalt

    Joined:
    May 4, 2016
    Posts:
    1
    Has anyone tested the plugin on the Microsoft HoloLens? I'm assuming that since it supports UWP 10, it should also run on that device?
     
  12. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    In order to display the error details, please enclose your code between Utils.setDebugMode(true) and Utils.setDebugMode(false).
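    For example (a minimal sketch; the call in the middle is just a placeholder for your own code):
    Code (CSharp):
    Utils.setDebugMode (true);

    // ... the OpenCV calls that throw the error go here ...

    Utils.setDebugMode (false);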

    Also, please refer to the following sample:
    http://docs.opencv.org/3.1.0/d1/d73/tutorial_introduction_to_svm.html#gsc.tab=0
    Code (CSharp):
    // Data for visual representation
    int width = 512, height = 512;
    Mat image = Mat.zeros (height, width, CvType.CV_8UC4);

    // Set up training data
    int[] labels = { 1, -1, -1, -1 };
    float[] trainingData = { 501, 10, 255, 10, 501, 255, 10, 501 };
    Mat trainingDataMat = new Mat (4, 2, CvType.CV_32FC1);
    trainingDataMat.put (0, 0, trainingData);
    Mat labelsMat = new Mat (4, 1, CvType.CV_32SC1);
    labelsMat.put (0, 0, labels);

    // Train the SVM
    SVM svm = SVM.create ();
    svm.setType (SVM.C_SVC);
    svm.setKernel (SVM.LINEAR);
    svm.setTermCriteria (new TermCriteria (TermCriteria.MAX_ITER, 100, 1e-6));
    svm.train (trainingDataMat, Ml.ROW_SAMPLE, labelsMat);

    // Show the decision regions given by the SVM
    byte[] green = { 0, 255, 0, 255 };
    byte[] blue = { 0, 0, 255, 255 };
    for (int i = 0; i < image.rows (); ++i)
        for (int j = 0; j < image.cols (); ++j) {
            Mat sampleMat = new Mat (1, 2, CvType.CV_32FC1);
            sampleMat.put (0, 0, j, i);

            float response = svm.predict (sampleMat);
            if (response == 1)
                image.put (i, j, green);
            else if (response == -1)
                image.put (i, j, blue);
        }

    // Show the training data
    int thickness = -1;
    int lineType = 8;
    Imgproc.circle (image, new Point (501, 10), 5, new Scalar (0, 0, 0, 255), thickness, lineType, 0);
    Imgproc.circle (image, new Point (255, 10), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);
    Imgproc.circle (image, new Point (501, 255), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);
    Imgproc.circle (image, new Point (10, 501), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);

    // Show support vectors
    thickness = 2;
    lineType = 8;
    Mat sv = svm.getUncompressedSupportVectors ();
    //Debug.Log ("sv.ToString() " + sv.ToString ());
    //Debug.Log ("sv.dump() " + sv.dump ());
    for (int i = 0; i < sv.rows (); ++i) {
        Imgproc.circle (image, new Point ((int)sv.get (i, 0) [0], (int)sv.get (i, 1) [0]), 6, new Scalar (128, 128, 128, 255), thickness, lineType, 0);
    }

    Texture2D texture = new Texture2D (image.width (), image.height (), TextureFormat.RGBA32, false);
    Utils.matToTexture2D (image, texture);
    gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    svm.PNG
     
  13. aespinosa

    aespinosa

    Joined:
    Feb 1, 2016
    Posts:
    33
    Thank you so much! I modified the example to use it in my own project, and it works perfectly!
     
  14. FarazKhalid

    FarazKhalid

    Joined:
    Jan 13, 2014
    Posts:
    4
    Yes, v2.0.2.
    It occurs on the Windows platform.
     
  15. aespinosa

    aespinosa

    Joined:
    Feb 1, 2016
    Posts:
    33
    Hi! I am trying to classify with more than 2 features in the SVM classifier and the result is very strange. Could you tell me why? Is it possible to use more than 2 features?
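    (For reference, and as an assumption on my part rather than advice from the asset author: with Ml.ROW_SAMPLE the feature count is simply the number of columns of the samples Mat, so more than 2 features should work. A minimal sketch with three hypothetical features per sample:)
    Code (CSharp):
    // 4 samples x 3 features: one row per sample, one column per feature.
    Mat samples = new Mat (4, 3, CvType.CV_32FC1);
    samples.put (0, 0,
        1, 2, 0.5,
        3, 1, 0.2,
        5, 4, 0.9,
        6, 5, 0.7);

    // One integer class label per sample.
    Mat labels = new Mat (4, 1, CvType.CV_32SC1);
    labels.put (0, 0, 1, 1, -1, -1);

    SVM svm = SVM.create ();
    svm.setType (SVM.C_SVC);
    svm.setKernel (SVM.LINEAR);
    svm.train (samples, Ml.ROW_SAMPLE, labels);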
     
  16. aesparza

    aesparza

    Joined:
    Apr 4, 2016
    Posts:
    29
    Hi!
    Is any OpenCV OCR class implemented in your plugin? I haven't seen one in the documentation.
    Thanks for your support!
     
  17. Zaicheg

    Zaicheg

    Joined:
    Sep 30, 2009
    Posts:
    46
    Can I see this example only if I have purchased a copy of OpenCV for Unity?
     
  18. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
  19. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
  20. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    OpenCV for Unity
    Released Version 2.0.3


    Version changes
    2.0.3
    [Common]Added SVMSample.
    [Common]Fixed VideoCaptureSample and WebCamTextureAsyncDetectFaceSample.
    [UWP]Added OpenCVForUnityUWP_Beta2.zip.
     
  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    I succeeded in running the "OpenCV for Unity" samples on the HoloLens Emulator using OpenCVForUnityUWP_Beta2.unitypackage. However, I have not tested it on the actual device because I do not have a HoloLens.
    hololens_emu.PNG
    hololens_settings.PNG
     
  22. aesparza

    aesparza

    Joined:
    Apr 4, 2016
    Posts:
    29
    Hi, could you please tell me if the plugin contains the OBJECT CATEGORIZATION classes from OpenCV?
    I mean those functions related to BOW.

    Thanks!
     
  23. tbbucs4755

    tbbucs4755

    Joined:
    Oct 30, 2012
    Posts:
    6
  24. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    BOWImgDescriptorExtractor is not implemented yet, but I plan to implement it in the future.
     
  25. kerolpaiva

    kerolpaiva

    Joined:
    Mar 21, 2016
    Posts:
    1
    Hi,
    I have bought the asset, but when I try to download it, it stops at 11% and doesn't continue. Does anyone else have this problem?
     
  26. caleb_bughunter

    caleb_bughunter

    Joined:
    May 18, 2016
    Posts:
    3
    I'm attempting to calibrate a camera using the Calib3d.calibrateCamera(...) function call, but I'm receiving an exception from within the function. Digging into the source, the problem is internal to the call, although it may be triggered by a problem with my input arguments.

    Specifically, I get the following exception:

    "CvException: CvType.CV_32SC2 != m.type() || m.cols() != 1"

    This exception is being triggered from within the Mat_to_vector_Mat(...) function, which is called by calibrateCamera() to convert the rVec and tVec outputs of the Java call to the appropriate format for return. Digging a little deeper, it is clear that the object that is causing the exception is NOT provided by me - it is created from within calibrateCamera(), and then modified by the Java OpenCV call. Apparently that Java call isn't doing what it is supposed to do, or it isn't being given something that it can work with.

    Anyway, if I dive into the source and comment out those calls to Mat_to_vector_Mat, then the exception goes away. I don't need access to the rVec and tVec data anyway. BUT the calibration function call isn't doing anything: the re-projection error returned is exactly 0 (impossible), and the camera matrix and distortion coefficients that I give it are returned unmodified. So something is broken, and I have no idea what.

    Has anyone else got camera calibration to work with this?
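    (For anyone comparing setups, this is roughly how the call is shaped in the Java-style API; a sketch based on the OpenCV Java signature, not on a verified working project. Each objectPoints entry should be a MatOfPoint3f and each imagePoints entry a MatOfPoint2f, one per calibration view, which is also the layout the rvec/tvec conversion expects.)
    Code (CSharp):
    List<Mat> objectPoints = new List<Mat> (); // one MatOfPoint3f (CV_32FC3) per view
    List<Mat> imagePoints = new List<Mat> ();  // one MatOfPoint2f (CV_32FC2) per view
    // ... fill both lists with detected pattern corners ...

    Size imageSize = new Size (640, 480);      // hypothetical camera resolution
    Mat cameraMatrix = new Mat (3, 3, CvType.CV_64FC1);
    Mat distCoeffs = new Mat (1, 5, CvType.CV_64FC1);
    List<Mat> rvecs = new List<Mat> ();
    List<Mat> tvecs = new List<Mat> ();

    double rms = Calib3d.calibrateCamera (objectPoints, imagePoints, imageSize,
        cameraMatrix, distCoeffs, rvecs, tvecs);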
     
  27. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Please refer to this post.
    http://forum.unity3d.com/threads/released-opencv-for-unity.277080/page-8#post-2348856
     
  28. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
  29. aesparza

    aesparza

    Joined:
    Apr 4, 2016
    Posts:
    29
    Hi,
    I am using KNearest class for KNN classification. The point is that I would like to know if there is an option to change the distance function that the algorithm uses to Hamming distance instead.

    Plus, I would like to know how I can use the setAlgorithmType() method, since it is not explained in the documentation.

    Thanks for your support!
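    (For what it's worth, and as an assumption based on the OpenCV 3.x ML documentation rather than something I have tested in this plugin: setAlgorithmType() switches between the brute-force and KD-tree implementations; the samples/labels Mats below are hypothetical.)
    Code (CSharp):
    KNearest knn = KNearest.create ();
    knn.setAlgorithmType (KNearest.BRUTE_FORCE); // or KNearest.KDTREE
    knn.setDefaultK (3);
    knn.train (samples, Ml.ROW_SAMPLE, labels);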
     
  30. aesparza

    aesparza

    Joined:
    Apr 4, 2016
    Posts:
    29
    Hi again,
    I am having some problems with the SVM classifier. I want to classify "histograms" of Bag of Words ORB feature descriptors. I define the SVM as follows:

    svm = SVM.create ();
    svm.setType (SVM.C_SVC);
    svm.setKernel (SVM.POLY);
    //svm.setTermCriteria (new TermCriteria (TermCriteria.MAX_ITER, 100, 1e-6));
    svm.setTermCriteria (new TermCriteria (TermCriteria.EPS, 100, 1e-6));


    The problem is that when I do the classification I don't get any class label. The outputs are as if I were doing regression, whereas I am trying to do classification.

    Thanks!
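    (Two assumptions on my part worth checking: for C_SVC classification the response Mat should be integer CV_32SC1 so that predict() returns class labels, and the POLY kernel needs a positive degree set before training. A minimal sketch, with bowHistograms and labelsInt as hypothetical Mats:)
    Code (CSharp):
    svm = SVM.create ();
    svm.setType (SVM.C_SVC);
    svm.setKernel (SVM.POLY);
    svm.setDegree (2); // the POLY kernel needs a positive degree (assumption)
    svm.setTermCriteria (new TermCriteria (TermCriteria.MAX_ITER + TermCriteria.EPS, 100, 1e-6));

    // labelsInt: one integer class id per BoW histogram row (CV_32SC1).
    svm.train (bowHistograms, Ml.ROW_SAMPLE, labelsInt);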
     
  31. caleb_bughunter

    caleb_bughunter

    Joined:
    May 18, 2016
    Posts:
    3
    Hi All,

    I need to convert a byte[] array containing a JPG to an OpenCV data structure so that I can operate on it. Currently, I'm using Unity's Texture2D.LoadImage() function to read the data, and then I'm using the Utils.texture2DToMat() function call to get the OpenCV Mat data type.

    This roundabout method works, but it is slow, and Texture2D.LoadImage() can only be called from within the main thread. I need a way to go directly from a byte[] array to the OpenCV data type.

    Normally, I'd use OpenCV's imdecode() function, but the expected input datatype in OpenCVForUnity is a Mat object. How do I convert a C# byte array to an OpenCV Mat object?
     
  32. PranavBuradkar

    PranavBuradkar

    Joined:
    Oct 15, 2015
    Posts:
    4
    Hi All,

    I am using FeatureDetector.ORB to detect the homography between two images.

    I am trying to separate the good matched keypoints in order to detect the exact position of the object. For this I need to get the list of all matching keypoints and find the good matches within a min/max distance range.

    So I tried to convert the MatOfDMatch to a list of DMatch
    by using the function MatOfDMatch.toList().
    Now, whenever I try to get the distance of each item in the DMatch list, it always returns 0.

    Following is the code snippet; I am not able to figure out what I am doing wrong.

    Code (CSharp):
    DescriptorMatcher matcher = DescriptorMatcher.create (DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);
    MatOfDMatch matches = new MatOfDMatch ();

    matcher.match (descriptorsSrc, descriptorsScene, matches);
    List<DMatch> matchesList = matches.toList ();

    //-- Quick calculation of max and min distances between keypoints
    double max_dist = 0;
    double min_dist = 100;
    for (int i = 0; i < descriptorsSrc.rows(); i++)
    {
        double dist = (double)matchesList [i].distance;

        if (dist < min_dist)    min_dist = dist;
        if (dist > max_dist)    max_dist = dist;
        Debug.Log ("Distance::" + matchesList [i].distance);
    }

    Ref:
    http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html
    https://rkdasari.com/2013/11/09/homography-between-images-using-opencv-for-android/

    I would really appreciate any help with this.

    Feature2DSample



    Code (CSharp):
    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;
    using System.Collections.Specialized;

    using OpenCVForUnity;

    namespace OpenCVForUnitySample
    {
        public class Feature2DSample : MonoBehaviour
        {
            void Start ()
            {
                Texture2D imgTemplate = Resources.Load ("lena") as Texture2D;
                Texture2D imgTexture = Resources.Load ("lena") as Texture2D;

                Mat matSrc = new Mat (imgTemplate.height, imgTemplate.width, CvType.CV_8UC3);
                Utils.texture2DToMat (imgTemplate, matSrc);
                Debug.Log ("img1Mat dst ToString " + matSrc.ToString ());

                Mat matScene = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC3);
                Utils.texture2DToMat (imgTexture, matScene);
                Debug.Log ("img2Mat dst ToString " + matScene.ToString ());

                FeatureDetector detector = FeatureDetector.create (FeatureDetector.ORB);
                DescriptorExtractor extractor = DescriptorExtractor.create (DescriptorExtractor.ORB);

                MatOfKeyPoint keypointsSrc = new MatOfKeyPoint ();
                Mat descriptorsSrc = new Mat ();

                detector.detect (matSrc, keypointsSrc);
                extractor.compute (matSrc, keypointsSrc, descriptorsSrc);

                MatOfKeyPoint keypointsScene = new MatOfKeyPoint ();
                Mat descriptorsScene = new Mat ();

                detector.detect (matScene, keypointsScene);
                extractor.compute (matScene, keypointsScene, descriptorsScene);

                DescriptorMatcher matcher = DescriptorMatcher.create (DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);
                MatOfDMatch matches = new MatOfDMatch ();

                matcher.match (descriptorsSrc, descriptorsScene, matches);

                //NEW CODE
                List<DMatch> matchesList = matches.toList ();

                //-- Quick calculation of max and min distances between keypoints
                double max_dist = 0;
                double min_dist = 100;
                for (int i = 0; i < descriptorsSrc.rows(); i++)
                {
                    double dist = (double)matchesList [i].distance;

                    if (dist < min_dist)
                        min_dist = dist;
                    if (dist > max_dist)
                        max_dist = dist;

                    Debug.Log ("Distance::" + matchesList [i].distance);
                              // +" queryIdx::"+matchesList [i].queryIdx+
                              // " imgIdx::"+matchesList [i].imgIdx+
                              // " trainIdx::"+matchesList [i].trainIdx);
                }

                List<DMatch> good_matches = new List<DMatch> ();
                for (int i = 0; i < descriptorsSrc.rows(); i++) {
                    if (matchesList [i].distance < 3 * min_dist) {
                        good_matches.Add (matchesList [i]);
                    }
                }
                MatOfDMatch gm = new MatOfDMatch ();
                gm.fromList (good_matches);

                List<Point> objList = new List<Point> ();
                List<Point> sceneList = new List<Point> ();

                List<KeyPoint> keypoints_objectList = keypointsSrc.toList ();
                List<KeyPoint> keypoints_sceneList = keypointsScene.toList ();

                for (int i = 0; i < good_matches.Count; i++)
                {
                    objList.Add (keypoints_objectList [good_matches [i].queryIdx].pt);
                    sceneList.Add (keypoints_sceneList [good_matches [i].trainIdx].pt);
                }

                MatOfPoint2f obj = new MatOfPoint2f ();
                MatOfPoint2f scene = new MatOfPoint2f ();

                obj.fromList (objList);
                scene.fromList (sceneList);

                Mat H = Calib3d.findHomography (obj, scene);
                Mat warpimg = matSrc.clone ();
                //NEW CODE

                Mat resultImg = new Mat ();

                Features2d.drawMatches (matSrc, keypointsSrc, matScene, keypointsScene, matches, resultImg);

                Texture2D texture = new Texture2D (resultImg.cols (), resultImg.rows (), TextureFormat.RGBA32, false);

                Utils.matToTexture2D (resultImg, texture);
                gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
            }
        }
    }
     
    Last edited: May 20, 2016
  33. caleb_bughunter

    caleb_bughunter

    Joined:
    May 18, 2016
    Posts:
    3
    A follow-up to this problem - I found a way to do the conversion without using Unity's Texture2D.LoadImage(.), but it is even slower. Here's what I did:

    Code (csharp):
    byte[] jpegData = [An image retrieved over a TCP socket]
    Mat byteMat = new MatOfByte (jpegData);
    Mat decodedImage = Imgcodecs.imdecode (byteMat, Imgcodecs.CV_LOAD_IMAGE_COLOR);
    Mat flippedImage = decodedImage.clone ();
    Imgproc.cvtColor (decodedImage, flippedImage, 4);
    The last instruction converts the image from a BGR image to an RGB image. Absent that conversion, the image ends up swapping the R and B channels in the displayed image in Unity.

    I'm not sure how to streamline the above code, but it is much slower than just using Unity's texture2D class. Here's what I'm doing now, which is faster, but still not great:

    Code (csharp):
    byte[] _imageByteArray = [JPEG image retrieved over a TCP socket]

    // ....

    // Create a placeholder texture.  It will be resized when loading the actual image.
    Texture2D processingTexture = new Texture2D (2, 2);
    processingTexture.LoadImage (_imageByteArray);

    // Create an OpenCV Mat that is the same size as the image.
    Mat cvImageData = new Mat (...);
    Utils.texture2DToMat (processingTexture, cvImageData);

    // Now we can use OpenCV to operate on the image.
    SomeProcessingFunction (cvImageData);

    // And with the processing finished, we can convert back to a Unity texture.
    Utils.matToTexture2D (cvImageData, someUnityTexture);
    Again, this is a roundabout method, and it requires run-time instantiation of objects on the heap, which isn't great. But I'm not sure of a better way to do it.

    On another note, considering the cost of this plugin, the author's support is disappointingly poor.
     
  34. PranavBuradkar

    PranavBuradkar

    Joined:
    Oct 15, 2015
    Posts:
    4
    Hi, can anyone please test this for me by replacing this code in Feature2DSample.cs (in the OpenCV samples)? I want to know whether this function really works.

    Code (CSharp):
    MatOfDMatch.toList();
    I think the toList() function has not been implemented correctly. Whenever I try to get the distance from the obtained list, it returns "0".

    I have tested it in C++ and it works perfectly.
    Can someone please acknowledge this issue?

    Thanks.


    Code (CSharp):
    DescriptorMatcher matcher = DescriptorMatcher.create (DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);
    MatOfDMatch matches = new MatOfDMatch ();
    matcher.match (descriptorsSrc, descriptorsScene, matches);
    List<DMatch> matchesList = matches.toList ();

    //-- Quick calculation of max and min distances between keypoints
    double max_dist = 0;
    double min_dist = 100;
    for (int i = 0; i < descriptorsSrc.rows(); i++)
    {
        double dist = (double)matchesList [i].distance;
        if (dist < min_dist)    min_dist = dist;
        if (dist > max_dist)    max_dist = dist;
        Debug.Log ("Distance::" + matchesList [i].distance);
    }




     
    Last edited: May 21, 2016
  35. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Sorry for my late reply.
    Since this package is a clone of OpenCV Java, you are able to use the same API as OpenCV Java 3.1.0. There does not seem to be a method to convert directly from a byte[] (JPEG format) to a Mat.

    Imgcodecs.imdecode() seems to be faster than LoadImage() in my environment (Windows 8.1, Unity 5.3.4f1, OpenCVForUnity 2.0.3).
    Code (CSharp):
    if (!File.Exists (Application.dataPath + "/OpenCVForUnity/Samples/Resources/lena.jpg"))
    {
        Debug.LogError ("Editor Invalid Level Name: Does not exist" + Application.dataPath + "/OpenCVForUnity/Samples/Resources/lena.jpg");
        return;
    }
    byte[] jpegData = File.ReadAllBytes (Application.dataPath + "/OpenCVForUnity/Samples/Resources/lena.jpg");

    Profiler.BeginSample ("LoadImage");
    Texture2D processingTexture = new Texture2D (2, 2);
    processingTexture.LoadImage (jpegData);
    Mat mat = new Mat (processingTexture.height, processingTexture.width, CvType.CV_8UC4);
    Utils.texture2DToMat (processingTexture, mat);
    Profiler.EndSample ();

    Profiler.BeginSample ("imdecode");
    Mat byteMat = new Mat (1, jpegData.Length, CvType.CV_8UC1);
    Utils.copyToMat<byte> (jpegData, byteMat);
    Mat decodedImage = Imgcodecs.imdecode (byteMat, Imgcodecs.CV_LOAD_IMAGE_COLOR);
    Imgproc.cvtColor (decodedImage, decodedImage, Imgproc.COLOR_BGR2RGB);
    Profiler.EndSample ();
    profiler.PNG profiler_deep.PNG
     
    Last edited: May 21, 2016
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Because the two images are the same, it seems the distance always returns 0.

    Please add the following code:
    Code (CSharp):
    Texture2D imgTemplate = Resources.Load ("lena") as Texture2D;
    Texture2D imgTexture = Resources.Load ("lena") as Texture2D;

    Mat matSrc = new Mat (imgTemplate.height, imgTemplate.width, CvType.CV_8UC3);
    Utils.texture2DToMat (imgTemplate, matSrc);
    Debug.Log ("img1Mat dst ToString " + matSrc.ToString ());

    Mat matScene = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC3);
    Utils.texture2DToMat (imgTexture, matScene);
    Debug.Log ("img2Mat dst ToString " + matScene.ToString ());

    // Rotate the scene image by a random angle so that the two images differ.
    float angle = UnityEngine.Random.Range (0, 360), scale = 1.0f;

    Point center = new Point (matScene.cols () * 0.5f, matScene.rows () * 0.5f);

    Mat affine_matrix = Imgproc.getRotationMatrix2D (center, angle, scale);

    Imgproc.warpAffine (matSrc, matScene, affine_matrix, matScene.size ());
     
  37. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Unfortunately, I am not familiar with SVM.
    However, I can advise on how to convert sample source written in Java to "OpenCV for Unity".
    Also, please refer to the official OpenCV documentation (http://docs.opencv.org/3.1.0/index.html) for details of each method's arguments.
     
  38. aesparza

    aesparza

    Joined:
    Apr 4, 2016
    Posts:
    29
    Hi, I would like to know how I can save the model I've trained with the KNearest classifier.
    In OpenCV I would do that with the StatModel class methods: save(), and then load() the previously trained model at any time.
    I don't know how to do this with the plugin in Unity because I can't find those methods available.

    Thanks!
     
  39. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    "OpenCV for Unity"v2.0.3 is a clone of OpenCV3.1 Java bindings.
    Algorithm.load() method is not implemented in the Java bindings of OpenCV3.1.
    Also,
    Algorithm.save() method seems to have a bug in the Java bindings of OpenCV3.1.
    https://github.com/Itseez/opencv/issues/5894
     
  40. BuiltForTheKill

    BuiltForTheKill

    Joined:
    Mar 2, 2013
    Posts:
    1
    Hello Enox,

    Thank you for the asset; I bought it 2 days ago. I am building a QR code scanner, and for the moment I am just extracting the image to decode.

    I am implementing the same method as this blog article:
    http://dsynflo.blogspot.fr/2014/10/opencv-qr-code-detection-and-extraction.html

    I am stuck at detecting contours; this is my code so far:

    // These two vectors are needed for the output of findContours.
    Mat webCamTextureMat = webCamTextureToMatHelper.GetMat ();
    grayMat = new Mat (webCamTextureMat.rows (), webCamTextureMat.cols (), CvType.CV_8UC1);
    thresholdMat = new Mat ();

    Imgproc.threshold (grayMat, thresholdMat, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);

    List<MatOfPoint> contours = new List<MatOfPoint> ();
    Mat hierarchy = new Mat ();

    Imgproc.findContours (thresholdMat, contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);

    for (int i = 0; i < contours.Count; i++)
    {
        Scalar color = new Scalar (Random.Range (0, 255), Random.Range (0, 255), Random.Range (0, 255));
        Imgproc.drawContours (grayMat, contours, i, color);
    }

    Best regards,
     
  41. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    Please add this line:
    Imgproc.cvtColor(webCamTextureMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
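    (In context, a sketch of where that line goes, based only on the snippet quoted above:)
    Code (CSharp):
    Mat webCamTextureMat = webCamTextureToMatHelper.GetMat ();
    Mat grayMat = new Mat ();
    Mat thresholdMat = new Mat ();

    // Convert the RGBA camera frame to grayscale before thresholding.
    Imgproc.cvtColor (webCamTextureMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
    Imgproc.threshold (grayMat, thresholdMat, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);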
     
  42. ocontreras309

    ocontreras309

    Joined:
    Jan 26, 2016
    Posts:
    5
    Hello:

    I am considering purchasing this asset; however, I have a question. Will I be able to call my JNI C++ functions that interact with Android OpenCV by using this plugin? Thank you.
     
  43. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    It is probably not possible to call other JNI C++ functions directly from this plugin.
     
  44. duzc2

    duzc2

    Joined:
    Nov 20, 2014
    Posts:
    1
    I have paid for the plugin, but every time I try to download it, it fails.
    Unity tells me ERROR.
     
  45. Greg-Bassett

    Greg-Bassett

    Joined:
    Jul 28, 2009
    Posts:
    628
    Hi, what's the maximum distance from the camera at which face recognition works? For instance, could someone be as far as 10 metres away?

    Also, can it be used to actually recognise different people? For example, store 3 or 4 people's faces as images and then recognise which person's face it has detected when you point it at one of those people in real time.

    Thanks in advance!
     
  46. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
    The minimum size of the face to be detected is determined by the minSize parameter of the detectMultiScale() method.
    https://github.com/EnoxSoftware/Ope...ples/DetectFaceSample/DetectFaceSample.cs#L43
    http://enoxsoftware.github.io/OpenC...sifier.html#a240aa13047368e54387d51ca6ba4527f
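    (For illustration, a minimal sketch of passing minSize; the cascade/grayMat variables and the 0.2 factor are assumptions, not values taken from the linked sample:)
    Code (CSharp):
    MatOfRect faces = new MatOfRect ();
    // A smaller minSize lets the detector pick up faces that are farther away, at some cost in speed.
    cascade.detectMultiScale (grayMat, faces, 1.1, 2, 0,
        new Size (grayMat.cols () * 0.2, grayMat.rows () * 0.2), new Size ());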
     
  47. Greg-Bassett

    Greg-Bassett

    Joined:
    Jul 28, 2009
    Posts:
    628
    Thanks for the reply. What about identifying faces? That is, given 2 or more faces, can it tell whether a face is already in its database?
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,567
  49. iPhoneCoder

    iPhoneCoder

    Joined:
    Sep 16, 2014
    Posts:
    5
    Hi, this asset looks like a great way to boost OpenCV for Unity performance... but I'm not seeing any documentation on how to use the two together. Yes, there's a link to a description of a NatCam.PreviewMatrix object, but I don't see how it's meant to be used, especially with OpenCV. Could you or the NatCam dev spell this out more explicitly? I've found and uncommented the #define that enables OpenCV use, and set the preview type to readable (it looks like that will keep the NatCam buffer in RAM)... but what else do I need to do?

    Specifically, I'm trying to use this with the FaceTrackerAR sample scene, and it's so tightly interwoven with your webcamTexture code, I'm finding it very difficult to get it working at all. Is there any chance you'd be willing to update that project to include NatCam support directly? I'd be a very happy customer, and I think you could do it 10x faster than I could + benefit other users. Thanks for considering.
     
    Last edited: May 31, 2016
  50. iPhoneCoder

    iPhoneCoder

    Joined:
    Sep 16, 2014
    Posts:
    5
    Speaking of the FaceRecognizer sample, I have a question on that. For the life of me, I cannot get the webCamTexture (camera image) on an iPhone to be full screen as expected. On my iPhone 6s running iOS 9.2 it's something like 512x626. Is this expected behavior? It runs full screen if I build to macOS and run it at 1280x720, for example. The way the size for the webCamTexture is calculated in this sample seems rather complex, and it could take me many hours to figure out how to change it. Can you investigate please? I really appreciate your help.

    Edit to add snippet of hopefully relevant logging data:
    Screen.width 750 Screen.height 1334 Screen.orientation Portrait
    imageSize 512x626
    apertureWidth 0
    apertureHeight 0