
[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
The current version of OpenCVForUnity is based on this repository (opencv, opencv-contrib).
This issue will be fixed in a future OpenCVForUnity update.
     
  2. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Are you using the FaceTracker Example?
    If more accurate tracking is required, I recommend using Dlib FaceLandmark Detector.
     
  3. Ichini_24

    Joined:
    Mar 24, 2017
    Posts:
    1
    Hello. I've got the following problem:
    I'm trying to load a trained SVM, but I'm getting a crash.
    I trained my SVM using C++ and OpenCV 3.2.0. After that I saved the results to a .yml file. Then I copied this .yml file to the root of the StreamingAssets folder. On start I search for this file and then load the trained SVM into my SVM object. I tried to combine parts of the code of "SimpleBlobSample", where I learned how to include a .yml file in my project, and "SVMSample", where I saw how to use SVM with OpenCVForUnity. Could you please tell me what I am doing wrong?
    P.S. Below you can see bug report and parts of my code:

    Code (CSharp):
    void Start()
    {
        webCamTextureToMatHelper = gameObject.GetComponent<WebCamTextureToMatHelper>();
        webCamTextureToMatHelper.Init();
        _yml = Utils.getFilePath("gestureclassifire.yml");
        svm = OpenCVForUnity.SVM.load(_yml); //trying to load data of my SVM
    }

    /*================================*/
    float predicted = svm.predict(testMat);   //trying to predict my "testMat"
     

    Attached Files:

    Last edited: Jun 21, 2017
  4. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    In my environment, yml files trained using OpenCVForUnity work fine.

    1. save "svm.yml".
    Code (CSharp):
    using UnityEngine;
    using System.Collections;

    #if UNITY_5_3 || UNITY_5_3_OR_NEWER
    using UnityEngine.SceneManagement;
    #endif
    using OpenCVForUnity;

    namespace OpenCVForUnityExample
    {
        /// <summary>
        /// SVM example. (Example of finding a separating straight line using Support Vector Machines (SVM))
        /// referring to http://docs.opencv.org/3.1.0/d1/d73/tutorial_introduction_to_svm.html#gsc.tab=0
        /// </summary>
        public class SVMExample : MonoBehaviour
        {
            // Use this for initialization
            void Start ()
            {
                // Data for visual representation
                int width = 512, height = 512;
                Mat image = Mat.zeros (height, width, CvType.CV_8UC4);

                // Set up training data
                int[] labels = {1, -1, -1, -1};
                float[] trainingData = { 501, 10, 255, 10, 501, 255, 10, 501 };
                Mat trainingDataMat = new Mat (4, 2, CvType.CV_32FC1);
                trainingDataMat.put (0, 0, trainingData);
                Mat labelsMat = new Mat (4, 1, CvType.CV_32SC1);
                labelsMat.put (0, 0, labels);

                // Train the SVM
                SVM svm = SVM.create ();
                svm.setType (SVM.C_SVC);
                svm.setKernel (SVM.LINEAR);
                svm.setTermCriteria (new TermCriteria (TermCriteria.MAX_ITER, 100, 1e-6));
                svm.train (trainingDataMat, Ml.ROW_SAMPLE, labelsMat);

                // Save the trained model
                svm.save ("C:\\Users\\xxxxxx\\Desktop\\svm.yml");
            }

            // Update is called once per frame
            void Update ()
            {
            }

            /// <summary>
            /// Raises the back button click event.
            /// </summary>
            public void OnBackButtonClick ()
            {
                #if UNITY_5_3 || UNITY_5_3_OR_NEWER
                SceneManager.LoadScene ("OpenCVForUnityExample");
                #else
                Application.LoadLevel ("OpenCVForUnityExample");
                #endif
            }
        }
    }

    2. copy "svm.yml" file to the root of the StreamingAssets folder.

    3. load "svm.yml".
    Code (CSharp):
    using UnityEngine;
    using System.Collections;

    #if UNITY_5_3 || UNITY_5_3_OR_NEWER
    using UnityEngine.SceneManagement;
    #endif
    using OpenCVForUnity;

    namespace OpenCVForUnityExample
    {
        /// <summary>
        /// SVM example. (Example of finding a separating straight line using Support Vector Machines (SVM))
        /// referring to http://docs.opencv.org/3.1.0/d1/d73/tutorial_introduction_to_svm.html#gsc.tab=0
        /// </summary>
        public class SVMExample : MonoBehaviour
        {
            // Use this for initialization
            void Start ()
            {
                // Data for visual representation
                int width = 512, height = 512;
                Mat image = Mat.zeros (height, width, CvType.CV_8UC4);

                // Set up training data
                int[] labels = {1, -1, -1, -1};
                float[] trainingData = { 501, 10, 255, 10, 501, 255, 10, 501 };
                Mat trainingDataMat = new Mat (4, 2, CvType.CV_32FC1);
                trainingDataMat.put (0, 0, trainingData);
                Mat labelsMat = new Mat (4, 1, CvType.CV_32SC1);
                labelsMat.put (0, 0, labels);

                // Train the SVM
//                SVM svm = SVM.create ();
//                svm.setType (SVM.C_SVC);
//                svm.setKernel (SVM.LINEAR);
//                svm.setTermCriteria (new TermCriteria (TermCriteria.MAX_ITER, 100, 1e-6));
//                svm.train (trainingDataMat, Ml.ROW_SAMPLE, labelsMat);

                // Load the trained model
                SVM svm = SVM.load (Utils.getFilePath ("svm.yml"));

                // Show the decision regions given by the SVM
                byte[] green = {0,255,0,255};
                byte[] blue = {0,0,255,255};
                for (int i = 0; i < image.rows(); ++i)
                    for (int j = 0; j < image.cols(); ++j) {
                        Mat sampleMat = new Mat (1, 2, CvType.CV_32FC1);
                        sampleMat.put (0, 0, j, i);

                        float response = svm.predict (sampleMat);
                        if (response == 1)
                            image.put (i, j, green);
                        else if (response == -1)
                            image.put (i, j, blue);
                    }

                // Show the training data
                int thickness = -1;
                int lineType = 8;

                Imgproc.circle (image, new Point (501, 10), 5, new Scalar (0, 0, 0, 255), thickness, lineType, 0);
                Imgproc.circle (image, new Point (255, 10), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);
                Imgproc.circle (image, new Point (501, 255), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);
                Imgproc.circle (image, new Point (10, 501), 5, new Scalar (255, 255, 255, 255), thickness, lineType, 0);

                // Show support vectors
                thickness = 2;
                lineType = 8;
                Mat sv = svm.getUncompressedSupportVectors ();
//                Debug.Log ("sv.ToString() " + sv.ToString ());
//                Debug.Log ("sv.dump() " + sv.dump ());
                for (int i = 0; i < sv.rows(); ++i) {
                    Imgproc.circle (image, new Point ((int)sv.get (i, 0) [0], (int)sv.get (i, 1) [0]), 6, new Scalar (128, 128, 128, 255), thickness, lineType, 0);
                }

                Texture2D texture = new Texture2D (image.width (), image.height (), TextureFormat.RGBA32, false);
                Utils.matToTexture2D (image, texture);
                gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
            }

            // Update is called once per frame
            void Update ()
            {
            }

            /// <summary>
            /// Raises the back button click event.
            /// </summary>
            public void OnBackButtonClick ()
            {
                #if UNITY_5_3 || UNITY_5_3_OR_NEWER
                SceneManager.LoadScene ("OpenCVForUnityExample");
                #else
                Application.LoadLevel ("OpenCVForUnityExample");
                #endif
            }
        }
    }
     
  5. Westerby

    Joined:
    Jun 20, 2017
    Posts:
    8
    Hello,

    I have problems using projectPoints function. I pass arguments and declare output variables as follows:

    Code (CSharp):
    MatOfPoint2f output_points = new MatOfPoint2f();
    Mat jacobian = new Mat();

    Calib3d.projectPoints(
                new MatOfPoint3f(new Point3(0, 0, 1000)),
                rvec,
                tvec,
                cameraMatrix,
                distCoeff,
                output_points,
                jacobian,
                0
            );
    But the returned values for jacobian and output_points are wrong. I compared the results against exactly the same arguments passed through the function in Python. First, my jacobian matrix holds different values on almost every run. Second, my output_points matrix is empty.

    Maybe something is wrong with how I declare parameters in the Mat constructors? I tried different options, but with no result.
     
  6. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Please try this code.

    http://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/
    Code (CSharp):
    //if true, the error log of the native-side OpenCV will be displayed on the Unity Editor Console.
    Utils.setDebugMode (true);

    // Read input image
    Mat im = Imgcodecs.imread (Utils.getFilePath ("headPose.jpg"));

    Imgproc.cvtColor (im, im, Imgproc.COLOR_BGR2RGB);

    // 2D image points. If you change the image, you need to change the vector
    List<Point> imagePointsList = new List<Point> ();
    imagePointsList.Add (new Point (359, 391));    // Nose tip
    imagePointsList.Add (new Point (399, 561));    // Chin
    imagePointsList.Add (new Point (337, 297));    // Left eye left corner
    imagePointsList.Add (new Point (513, 301));    // Right eye right corner
    imagePointsList.Add (new Point (345, 465));    // Left mouth corner
    imagePointsList.Add (new Point (453, 469));    // Right mouth corner

    MatOfPoint2f image_points = new MatOfPoint2f ();
    image_points.fromList (imagePointsList);

    // 3D model points.
    List<Point3> objectPointsList = new List<Point3> ();
    objectPointsList.Add (new Point3 (0.0f, 0.0f, 0.0f));               // Nose tip
    objectPointsList.Add (new Point3 (0.0f, -330.0f, -65.0f));          // Chin
    objectPointsList.Add (new Point3 (-225.0f, 170.0f, -135.0f));       // Left eye left corner
    objectPointsList.Add (new Point3 (225.0f, 170.0f, -135.0f));        // Right eye right corner
    objectPointsList.Add (new Point3 (-150.0f, -150.0f, -125.0f));      // Left mouth corner
    objectPointsList.Add (new Point3 (150.0f, -150.0f, -125.0f));       // Right mouth corner

    MatOfPoint3f model_points = new MatOfPoint3f ();
    model_points.fromList (objectPointsList);

    // Camera internals
    double focal_length = im.cols (); // Approximate focal length.
    Point center = new Point (im.cols () / 2, im.rows () / 2);
    Mat camera_matrix = Mat.eye (3, 3, CvType.CV_32F);
    camera_matrix.put (0, 0, focal_length, 0, center.x, 0, focal_length, center.y, 0, 0, 1);

    MatOfDouble dist_coeffs = new MatOfDouble (0, 0, 0, 0);

    Debug.Log ("Camera Matrix " + camera_matrix.dump ());

    // Output rotation and translation
    Mat rotation_vector = new Mat (); // Rotation in axis-angle form
    Mat translation_vector = new Mat ();

    // Solve for pose
    Calib3d.solvePnP (model_points, image_points, camera_matrix, dist_coeffs, rotation_vector, translation_vector);

    // Project a 3D point (0, 0, 1000.0) onto the image plane.
    // We use this to draw a line sticking out of the nose
    List<Point3> nosePointsList = new List<Point3> ();
    nosePointsList.Add (new Point3 (0, 0, 1000.0));

    MatOfPoint3f nose_end_point3D = new MatOfPoint3f ();
    nose_end_point3D.fromList (nosePointsList);

    MatOfPoint2f nose_end_point2D = new MatOfPoint2f ();

    Mat jacobian = new Mat ();

    Calib3d.projectPoints (nose_end_point3D, rotation_vector, translation_vector, camera_matrix, dist_coeffs, nose_end_point2D, jacobian, 0);

    for (int i = 0; i < image_points.toList().Count; i++) {
        Imgproc.circle (im, image_points.toList () [i], 3, new Scalar (0, 0, 255), -1);
    }

    Imgproc.line (im, image_points.toList () [0], nose_end_point2D.toList () [0], new Scalar (255, 0, 0), 2);

    Debug.Log ("Rotation Vector " + rotation_vector.dump ());
    Debug.Log ("Translation Vector " + translation_vector.dump ());

    Debug.Log ("jacobian " + jacobian.dump ());

//    Debug.Log (nose_end_point2D[0]);

    Texture2D texture = new Texture2D (im.cols (), im.rows (), TextureFormat.RGBA32, false);

    Utils.matToTexture2D (im, texture);

    gameObject.GetComponent<Renderer> ().material.mainTexture = texture;

    Utils.setDebugMode (false);
    http://www.learnopencv.com/wp-content/uploads/2016/09/headPose.jpg


    dump jacobian
    Code (CSharp):
    jacobian[-16.58539577501191, 382.3244557370233, 10.19336478405512, -0.4088564513410258, 0, -0.1748942637484027, -0.4277644713071286, 0, 1, 0, -94.14831460415888, -17.26788481622075, -21.26571595335831, 659.2516784409178;
    149.5411872827439, 3.204576318653627, -294.9955309277132, 0, -0.4088564513410258, 0.008469054178876336, 0, 0.0207140040253696, 0, 1, 4.559024179200949, 0.8361775219429131, 221.1235833611476, -21.26571595335831]
    projectPoints.PNG
     
    Westerby likes this.
  7. sticklezz

    Joined:
    Oct 27, 2015
    Posts:
    33
    Hey, I noticed the greenscreen options have changed from 5.3 to 5.6 - there is no longer filtering. Was there a reason for this?
    greenscreen.jpg
     
  8. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    To simplify the Example, I deleted some options.
     
  9. Westerby

    Joined:
    Jun 20, 2017
    Posts:
    8
    @EnoxSoftware - thanks for the reply! It turns out that I had cameraMatrix initialized wrongly - it should be as in your example. I wouldn't have figured it out for the world :)
     
  10. ToolkitSpA

    Joined:
    Jun 7, 2017
    Posts:
    3
    I'm using DlibFaceLandmarkDetector and I'm having that problem too... I realized that adding a Y Offset value doesn't really solve the problem, because it works only at some distances; if you are too near or too far, the AR appears like the photo that I sent you in the link... (Sorry for the late answer.)

    Also, is there a way to merge the Optimization Example with the WebCamTextureAR example? I've been trying, but it's strange: it seemed to work for a couple of minutes, and now the AR objects are far away in the plane (my camera doesn't even render them). If there's a better way to optimize performance in the WebCamAR example, I'd really appreciate it...

    One last thing: yesterday I twice hit a problem with the plugin, an "Out of frustum" error in Unity. It didn't allow me to see the AR objects, and I had to copy the scene to an older version of my project (fortunately I had saved the previous version). It happened twice without a reason...
     
    Last edited: Jun 27, 2017
  11. Mr-KoSiarz

    Joined:
    Apr 22, 2017
    Posts:
    4
    Hi,
    We are using the OpenCV for Unity asset and it works really well, but we have some performance issues. Can you help me with:
    1. How to load a YML file into FaceRecognizer using a byte array instead of a file?
    2. How to initialize (create) a Mat object from PhotoCaptureFrame.CopyRawImageDataIntoBuffer instead of:
    Mat _cameraImage = ...
    photoCaptureFrame.UploadImageDataToTexture(_texture);
    fastTexture2DToMat(_texture, _cameraImage);
     
  12. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Have you tried this solution already?
    https://forum.unity3d.com/threads/released-dlib-facelandmark-detector.408958/page-2#post-2815448

    "Screen position out of view frustum"
    Does this error occur only using DlibFaceLandmarkDetector v1.1.2?
     
  13. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I don't think there is a method to load a YML file into FaceRecognizer using a byte array.

    Code (CSharp):
    List<byte> imageBufferList = new List<byte>();
    // Copy the raw IMFMediaBuffer data into our empty byte list.
    photoCaptureFrame.CopyRawImageDataIntoBuffer(imageBufferList);

    OpenCVForUnity.Utils.copyToMat<byte> (imageBufferList, bgraMat);
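
    Since FaceRecognizer apparently loads models only from a file path, one workaround for byte-array input is to write the bytes to a temporary file first and then load from that path. This is a minimal sketch, not code from the asset; the helper name and the final load/read call on the recognizer are assumptions (the exact method name varies between OpenCVForUnity versions).

    Code (CSharp):
    // Sketch of a byte-array workaround, assuming the recognizer can only
    // load from a file path. WriteModelToTempFile is a hypothetical helper;
    // pass its return value to the recognizer's load/read method, whose
    // exact name depends on your OpenCVForUnity version.
    using System.IO;
    using UnityEngine;

    public static class FaceRecognizerBytesLoader
    {
        public static string WriteModelToTempFile (byte[] modelBytes)
        {
            // Write the in-memory YML data to a file under persistentDataPath,
            // which is writable on all platforms (including HoloLens/UWP).
            string path = Path.Combine (Application.persistentDataPath, "recognizer_tmp.yml");
            File.WriteAllBytes (path, modelBytes);
            return path;
        }
    }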
     
  14. mj-3dqr

    Joined:
    Mar 30, 2017
    Posts:
    14
    Hi,
    we are using OpenCV for Unity and developing on Microsoft HoloLens. Everything works fine with the existing examples.

    But for some reason we need to manipulate the exposure of the webcam. That's why we are trying to use our own capturing:

    Code (CSharp):
    //capture is an OpenCVForUnity.VideoCapture
    cameraFeed = new Mat();
    //capture.set(Videoio.CAP_PROP_EXPOSURE, -1);
    capture.grab();
    capture.retrieve(cameraFeed);
    In the Unity Editor this works fine, but on HoloLens we get a "System.AccessViolationException" from capture.retrieve(). To be exact, from:
    Code (CSharp):
    bool retVal = videoio_VideoCapture_retrieve_11 (nativeObj, image.nativeObj); //in VideoCapture.cs
    Is it possible to manipulate the exposure and still use the WebCamTextureToMatHelper?

    Is it possible to locate and correct the System.AccessViolationException?
     
  15. ToolkitSpA

    Joined:
    Jun 7, 2017
    Posts:
    3
    Now it works (I had been using the Y Offset in the wrong place).

    I used Dlib 1.1.1. (For some reason, when I tried to upgrade the plugin, the DlibFaceLandmarkDetectorWithOpenCVExample came with multiple folders containing files of really small size, with no scene or script inside.)

    Regards
     
  16. ManuSanchez

    Joined:
    May 18, 2017
    Posts:
    1
    Good morning,
    I'm using WebCamTextureFaceMaskExample with Dlib, but I have a problem positioning the Quad (and camera) at different webcam resolutions. I would like to know how I can get the center of the landmarks to position my Quad without worrying about the webcam aspect ratio. I want to put the face in the correct position.

    The point is that I can make the landmark points look right on a MacBook Pro at 1280x720 (webcam resolution), but when I install the app on a Note 3 (640x480) the landmarks appear shifted to the left. I need the maximum and minimum points in the world where the quad should be positioned, or the center where I should position the Quad depending on the webcam resolution.

    I have this problem positioning the Quad (and camera) when I use a portrait-only device and rotate the Quad to fit the points with the camera (webCamTexture.videoRotationAngle). With landscape I have no problem; I position the Quad at the world position (texture.width/2, -texture.height/2) without issue.

    Regards
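
    The resolution-independent placement asked about above can be sketched roughly like this. This is a minimal illustration (not from the asset), assuming the Quad displays the full camera texture; the class and method names are hypothetical.

    Code (CSharp):
    // Sketch: scale the Quad to the texture's pixel dimensions and center it,
    // so landmark coordinates map the same way at 1280x720 and 640x480.
    using UnityEngine;

    public class QuadFitter : MonoBehaviour
    {
        public void FitQuadToTexture (Transform quad, int texWidth, int texHeight)
        {
            // One world unit per pixel: the quad spans the whole texture,
            // so its center coincides with the texture center at any resolution.
            quad.localScale = new Vector3 (texWidth, texHeight, 1);
            quad.localPosition = Vector3.zero;
        }

        // Convert an OpenCV pixel coordinate (origin top-left, y down) into
        // the quad's local space (origin center, y up).
        public static Vector3 PixelToLocal (float px, float py, int texWidth, int texHeight)
        {
            return new Vector3 (px - texWidth / 2f, texHeight / 2f - py, 0);
        }
    }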
     
    Last edited: Jun 29, 2017
  17. Gustavo-Quiroz

    Joined:
    Jul 26, 2013
    Posts:
    38
    Hello Enox,

    I'm required to process images from Unity's Video Player. I created Mat objects using Texture and RenderTexture from Unity's Video Player, but when I convert the Mat to a Texture2D, the resulting Texture2D is totally corrupted and nothing more than noise is seen.

    I tried to print the textures on RawImages before creating the Mat object, and everything seems to be OK.

    I know OpenCV for Unity already has a video player built in, but I really need to work with Unity's Video Player.

    I don't really know if this issue is from OpenCV for Unity or from Unity's Video Player (I have noticed that VideoPlayer.texture behaves similarly to a RenderTexture because it continuously updates the texture).

    Could you help me find a way to integrate Unity's Video Player with OpenCV for Unity?
     
  18. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Is there a screenshot of this state?
     
  19. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Have you tried this example already?
    https://github.com/EnoxSoftware/VideoPlayerWithOpenCVForUnityExample
     
    Gustavo-Quiroz likes this.
  20. Gustavo-Quiroz

    Joined:
    Jul 26, 2013
    Posts:
    38
  21. shogo0808

    Joined:
    Jul 5, 2017
    Posts:
    1
    Hi, I am a student and I use OpenCV for Unity for my research.
    I am running the CamShift sample; I want to acquire the coordinates of the points of the tracked rect and attach an effect to them.

    GameObject effect;
    effect.SetActive (true);
    effect.transform.position = ?

    How can I convert coordinates from OpenCV's format into coordinates that can be used in Unity?

    I think RotatedRect r is important:
    RotatedRect r = Video.CamShift (backProj, roiRect, termination);
    r.points (points);

    Shogo
     
  22. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Could you try this code?
    Unfortunately, I do not know of a simple way to convert from an OpenCV point to a Unity point.

    Code (CSharp):
    using UnityEngine;
    using System.Collections;

    #if UNITY_5_3 || UNITY_5_3_OR_NEWER
    using UnityEngine.SceneManagement;
    #endif
    using OpenCVForUnity;

    namespace OpenCVForUnitySample
    {
        /// <summary>
        /// DetectFace sample.
        /// </summary>
        public class DetectFace2DTo3DExample : MonoBehaviour
        {
            public GameObject point3D;

            // Use this for initialization
            void Start ()
            {
                Texture2D imgTexture = Resources.Load ("lena") as Texture2D;

                Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC4);

                Utils.texture2DToMat (imgTexture, imgMat);
                Debug.Log ("imgMat dst ToString " + imgMat.ToString ());

                //CascadeClassifier cascade = new CascadeClassifier (Utils.getFilePath ("lbpcascade_frontalface.xml"));
                CascadeClassifier cascade = new CascadeClassifier (Utils.getFilePath ("haarcascade_frontalface_alt.xml"));
//                if (cascade.empty ()) {
//                    Debug.LogError ("cascade file is not loaded. Please copy from “OpenCVForUnity/StreamingAssets/” to “Assets/StreamingAssets/” folder. ");
//                }

                Mat grayMat = new Mat ();
                Imgproc.cvtColor (imgMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
                Imgproc.equalizeHist (grayMat, grayMat);

                MatOfRect faces = new MatOfRect ();

                if (cascade != null)
                    cascade.detectMultiScale (grayMat, faces, 1.1, 2, 2,
                                               new Size (20, 20), new Size ());

                OpenCVForUnity.Rect[] rects = faces.toArray ();
                for (int i = 0; i < rects.Length; i++) {
                    Debug.Log ("detect faces " + rects [i]);

                    Imgproc.rectangle (imgMat, new Point (rects [i].x, rects [i].y), new Point (rects [i].x + rects [i].width, rects [i].y + rects [i].height), new Scalar (255, 0, 0, 255), 2);
                }

                Texture2D texture = new Texture2D (imgMat.cols (), imgMat.rows (), TextureFormat.RGBA32, false);

                Utils.matToTexture2D (imgMat, texture);

                gameObject.GetComponent<Renderer> ().material.mainTexture = texture;

                Matrix4x4 point3DMatrix4x4 = Get2DTo3DMatrix4x4 (new Point (rects [0].x + rects [0].width / 2, rects [0].y + rects [0].height / 2), gameObject);

                Vector3 point3DVec = new Vector3 (0, 0, 0);
                point3DVec = point3DMatrix4x4.MultiplyPoint3x4 (point3DVec);

                point3D.transform.position = point3DVec;
                point3D.transform.eulerAngles = gameObject.transform.eulerAngles;
            }

            // Update is called once per frame
            void Update ()
            {
            }

            public void OnBackButton ()
            {
                #if UNITY_5_3 || UNITY_5_3_OR_NEWER
                SceneManager.LoadScene ("OpenCVForUnitySample");
                #else
                Application.LoadLevel ("OpenCVForUnitySample");
                #endif
            }

            /// <summary>
            /// Gets the 2D-to-3D Matrix4x4.
            /// </summary>
            /// <returns>The 2D-to-3D Matrix4x4.</returns>
            /// <param name="point2D">2D point.</param>
            /// <param name="quad">Quad.</param>
            private Matrix4x4 Get2DTo3DMatrix4x4 (Point point2D, GameObject quad)
            {
                float textureWidth = GetComponent<Renderer> ().material.mainTexture.width;
                float textureHeight = GetComponent<Renderer> ().material.mainTexture.height;

                Matrix4x4 transCenterM =
                    Matrix4x4.TRS (new Vector3 (((float)point2D.x) - textureWidth / 2, (textureHeight - (float)point2D.y) - textureHeight / 2, 0), Quaternion.identity, new Vector3 (1, 1, 1));

                Vector3 translation = new Vector3 (quad.transform.localPosition.x, quad.transform.localPosition.y, quad.transform.localPosition.z);

                Quaternion rotation =
                    Quaternion.Euler (quad.transform.localEulerAngles.x, quad.transform.localEulerAngles.y, quad.transform.localEulerAngles.z);

                Vector3 scale = new Vector3 (quad.transform.localScale.x / textureWidth, quad.transform.localScale.y / textureHeight, 1);

                Matrix4x4 trans2Dto3DM =
                    Matrix4x4.TRS (translation, rotation, scale);

                Matrix4x4 resultM = trans2Dto3DM * transCenterM;

                return resultM;
            }
        }
    }
    DetectFace2DTo3DExample.PNG
     
  23. Gunhi

    Joined:
    Apr 18, 2012
    Posts:
    300
    How can I detect a single letter (in an image)? Can we use shape detection? Please help!
     
  24. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Have you already tried MatchShapesExample?
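
    For letter detection, the usual matchShapes approach compares a candidate contour against a template contour of the letter. The sketch below is illustrative, not taken from MatchShapesExample; it assumes contours were extracted with Imgproc.findContours from binarized images, and the match-method constant name varies between OpenCV versions (CV_CONTOURS_MATCH_I1 vs. CONTOURS_MATCH_I1).

    Code (CSharp):
    // Hedged sketch: find the candidate contour most similar to a template
    // letter contour using Hu-moment shape matching.
    using System.Collections.Generic;
    using OpenCVForUnity;

    public static class LetterMatcher
    {
        // Returns the best (lowest) matchShapes score of "candidates" vs "template".
        // A smaller score means the shapes are more similar; threshold it to
        // decide whether the letter is present.
        public static double BestMatch (MatOfPoint template, List<MatOfPoint> candidates)
        {
            double best = double.MaxValue;
            foreach (MatOfPoint c in candidates) {
                double score = Imgproc.matchShapes (template, c, Imgproc.CV_CONTOURS_MATCH_I1, 0);
                if (score < best)
                    best = score;
            }
            return best;
        }
    }

    Note that matchShapes is scale- and rotation-invariant but ignores internal holes, so visually distinct letters (e.g. "O" vs "0") may score similarly; template matching or OCR may work better in those cases.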
     
  25. Ash4DVW

    Joined:
    Jul 6, 2017
    Posts:
    4
    Bought OpenCV for Unity on the Asset Store today. I've been trying to get it to download since this morning, and it's been stuck at 19% for more than 2 hours now. Is this a problem on your end, or is it just Unity's servers?
     
  26. Gunhi

    Joined:
    Apr 18, 2012
    Posts:
    300
    Do you have any tutorial or documentation for MatchShapesExample?
     
  27. Gentatsu

    Joined:
    Oct 21, 2016
    Posts:
    6
    Hey, I was wondering if you had plans to include the fisheye methods in the Calib3D module?

    Thanks!
     
  28. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    It is a problem with Unity's servers. If the problem persists, please contact Unity.
     
  29. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  30. EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The fisheye methods in the Calib3d module have already been implemented:
    https://github.com/opencv/opencv/blob/master/modules/calib3d/include/opencv2/calib3d.hpp
    http://enoxsoftware.github.io/OpenC...alib3d.html#a9318759c0b9f2833a7168f5c7ddf6685

    static void projectPoints (MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha, Mat jacobian)
    static void projectPoints (MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat rvec, Mat tvec, Mat K, Mat D)
    static void distortPoints (Mat undistorted, Mat distorted, Mat K, Mat D, double alpha)
    static void distortPoints (Mat undistorted, Mat distorted, Mat K, Mat D)
    static void undistortPoints (Mat distorted, Mat undistorted, Mat K, Mat D, Mat R, Mat P)
    static void undistortPoints (Mat distorted, Mat undistorted, Mat K, Mat D)
    static void initUndistortRectifyMap (Mat K, Mat D, Mat R, Mat P, Size size, int m1type, Mat map1, Mat map2)
    static void undistortImage (Mat distorted, Mat undistorted, Mat K, Mat D, Mat Knew, Size new_size)
    static void undistortImage (Mat distorted, Mat undistorted, Mat K, Mat D)
    static void estimateNewCameraMatrixForUndistortRectify (Mat K, Mat D, Size image_size, Mat R, Mat P, double balance, Size new_size, double fov_scale)
    static void estimateNewCameraMatrixForUndistortRectify (Mat K, Mat D, Size image_size, Mat R, Mat P)
    static double calibrate (List< Mat > objectPoints, List< Mat > imagePoints, Size image_size, Mat K, Mat D, List< Mat > rvecs, List< Mat > tvecs, int flags, TermCriteria criteria)
    static double calibrate (List< Mat > objectPoints, List< Mat > imagePoints, Size image_size, Mat K, Mat D, List< Mat > rvecs, List< Mat > tvecs, int flags)
    static double calibrate (List< Mat > objectPoints, List< Mat > imagePoints, Size image_size, Mat K, Mat D, List< Mat > rvecs, List< Mat > tvecs)
    static void stereoRectify (Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, Size newImageSize, double balance, double fov_scale)
    static void stereoRectify (Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags)
    static double stereoCalibrate (List< Mat > objectPoints, List< Mat > imagePoints1, List< Mat > imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T, int flags, TermCriteria criteria)
    static double stereoCalibrate (List< Mat > objectPoints, List< Mat > imagePoints1, List< Mat > imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T, int flags)
    static double stereoCalibrate (List< Mat > objectPoints, List< Mat > imagePoints1, List< Mat > imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T)
     
  31. Badim

    Badim

    Joined:
    May 17, 2017
    Posts:
    1
    I've been trying to understand the HoloLens With OpenCV for Unity Example, but I've been unable to get the video feed from the HoloLens instead of the webcam. Is there a simple way to do it, or is the example built exclusively for the webcam?
     
  32. thomas545455

    thomas545455

    Joined:
    May 23, 2017
    Posts:
    1
    Hello,

    I am adapting my Unity application to iOS using Xcode.
    The compilation works fine, but when I try to save an image with the OpenCV imwrite function I get the following error:
    ProductName[1074:177529] bool imgcodecs_Imgcodecs_imwrite_11(char *, cv::Mat *) [Line 268] imgcodecs_Imgcodecs_imwrite_11() : /Users/satoo/opencv/ios/opencv/modules/imgcodecs/src/loadsave.cpp:531: error: (-2) could not find a writer for the specified extension in function imwrite_
    I tried reimporting all the OpenCV libraries and I don't think I have forgotten any of them, so I suspect it's a bug in the iOS libraries (I have never had errors with imwrite on my PC or on Android). Do you know how to fix this problem?

     
  33. ddsinteractive

    ddsinteractive

    Joined:
    May 1, 2013
    Posts:
    28
    Will this asset work for top-down blob detection with a webcam? We are developing a floor-projection interaction and just need the people blobs to become inputs via TUIO/TouchScript.

    Thank you in advance!
    Cheers,
    Monica
     
  34. henry10210

    henry10210

    Joined:
    Jan 14, 2017
    Posts:
    2
    I upgraded to the latest version (2.1.9), and I am having a problem building for iOS. I don't know when the problem was introduced, but it seems there are two copies of opencv2.framework in the plugin, as you can see from the build error log:

    Plugin 'opencv2.framework' is used from several locations:
    Assets/OpenCVForUnity/Extra/exclude_contrib/iOS/opencv2.framework would be copied to <PluginPath>/opencv2.framework
    Assets/OpenCVForUnity/Plugins/iOS/opencv2.framework would be copied to <PluginPath>/opencv2.framework
    Please fix plugin settings and try again.
     
  35. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    It is possible to acquire the HoloLens camera feed using HoloLensCameraStream.
    https://github.com/VulcanTechnologies/HoloLensCameraStream

    Please try an example combining HoloLensCameraStream and OpenCVForUnity.
    https://github.com/VulcanTechnologies/HoloLensCameraStream/issues/6
    https://github.com/VulcanTechnologi...447/HoloLensVideoCaptureDetectFaceExample.zip
     
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Which image format do you want to output?
    Unfortunately, the PNG format is not supported on the iOS platform.
    The JPEG format is supported on all platforms:
    imwrite ("xxxxxx.jpg", mat);
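    A minimal sketch of saving a Mat as a JPEG from a script (the class name and the use of Application.persistentDataPath are my assumptions; any writable path works). Note that imwrite expects BGR(A) channel order, so convert with Imgproc.cvtColor first if exact colors matter:

    ```csharp
    using UnityEngine;
    using OpenCVForUnity;

    public class SaveJpegSketch : MonoBehaviour
    {
        void Start ()
        {
            // Dummy 100x100 4-channel Mat filled with a solid color.
            Mat mat = new Mat (100, 100, CvType.CV_8UC4, new Scalar (0, 0, 255, 255));

            // imwrite infers the encoder from the file extension; use .jpg on iOS.
            string path = Application.persistentDataPath + "/output.jpg";
            bool ok = Imgcodecs.imwrite (path, mat);
            Debug.Log ("imwrite succeeded: " + ok);
        }
    }
    ```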
     
  37. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I do not have such an example, but I think it is probably possible.
    Since this asset is a clone of OpenCV Java, you are able to use the same API as OpenCV Java.
    If there is an implementation example using "OpenCV Java", it can most likely also be implemented using "OpenCV for Unity".
    https://github.com/EnoxSoftware/Ope...ets/OpenCVForUnity/Examples/SimpleBlobExample
    https://github.com/EnoxSoftware/Ope...stimationExample/HandPoseEstimationExample.cs
     
  38. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Is the import setting correct? Please set the plugin import settings again:
    Select the menu item [Tools/OpenCV for Unity/Set Plugin Import Settings].
     
  39. yumianhuli1

    yumianhuli1

    Joined:
    Mar 14, 2015
    Posts:
    92
    Hi! I saw in your introduction that this asset provides methods to convert between Unity's Texture2D and OpenCV's Mat.
    So can I get each element from a Mat, like Mat[i][j], in Unity?
     
    Last edited: Jul 22, 2017
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Since this asset is a clone of OpenCV Java, you are able to use the same API as OpenCV Java.
    http://answers.opencv.org/question/5/how-to-get-and-modify-the-pixel-of-mat-in-java/
    Code (CSharp):
    1. byte[] b = new byte[4];
    2. Mat mat = new Mat (100, 100, CvType.CV_8UC4);
    3. mat.get (i, j, b);  // reads the 4 channel bytes of the pixel at (i, j)
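    Writing a pixel back works the same way with Mat.put (a sketch; the indices are arbitrary):

    ```csharp
    // Read, modify, and write back the pixel at row 10, column 20.
    byte[] b = new byte[4];
    Mat mat = new Mat (100, 100, CvType.CV_8UC4);
    mat.get (10, 20, b);   // b now holds the pixel's 4 channel bytes
    b[0] = 255;            // modify the first channel
    mat.put (10, 20, b);   // store the pixel back into the Mat
    ```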
     
  41. Abirami-Govindarajan

    Abirami-Govindarajan

    Joined:
    Jul 6, 2017
    Posts:
    2
    I'm using the OpenCV for Unity package. From an image I have selected a region of interest, and I'm trying to overlay another image on top of that ROI using the ROI coordinates. Is there a way to achieve this? I have tried many ways, but none gave a proper result; my image didn't fit into the ROI. Kindly help.
     
  42. devpilgrim

    devpilgrim

    Joined:
    Jan 26, 2014
    Posts:
    3
    Hello. We bought your plugin just yesterday. I cannot figure out what happened to all of the "cv"-prefixed functions, for example cvMatchTemplate, cvMinMaxLoc, cvNormalize, and others.
    There is no description or documentation on your site.
    Where can I find them?


    Edit: I worked it out on my own, but only partially.
    All my favorite functions are in the "Core" class.
    But why did you change the names and remove the "cv" prefix?
    Where are the functions "cvLoadImage", "cvReleaseImage", "cvCloneImage", and others?
     
    Last edited: Jul 26, 2017
  43. grobm

    grobm

    Joined:
    Aug 15, 2005
    Posts:
    217
    Just started a new project with OpenCV in Unity again. I am looking for a good example of using the Unity UI Canvas inside the object of my Markerless Example, i.e. a working, proper translation of taps on the screen into rays onto the UI Canvas. I have tackled it in a logical way, but I am not getting any results. I am currently working in Unity 5.6.x and would love some advice on a direction to get it working. Or should I start looking at basic raycasting from touch points on the device into the 3D space (old school)?
     
  44. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Please try this code:
    Code (CSharp):
    1.             Texture2D imgTexture = Resources.Load ("lena") as Texture2D;
    2.             Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC4);
    3.             Utils.texture2DToMat (imgTexture, imgMat);
    4.
    5.             Texture2D imgTexture2 = Resources.Load ("chessboard") as Texture2D;
    6.             Mat imgMat2 = new Mat (imgTexture2.height, imgTexture2.width, CvType.CV_8UC4);
    7.             Utils.texture2DToMat (imgTexture2, imgMat2);
    8.
    9.             // Copy the top 200-pixel band of the first image into the
    10.             // second image at y = 200 (both ROIs must be the same size).
    11.             Mat roi1 = new Mat (imgMat, new OpenCVForUnity.Rect (0, 0, imgMat.width (), 200));
    12.             Mat roi2 = new Mat (imgMat2, new OpenCVForUnity.Rect (0, 200, imgMat.width (), 200));
    13.             roi1.copyTo (roi2);
    14.
    15.             Texture2D texture = new Texture2D (imgMat2.cols (), imgMat2.rows (), TextureFormat.RGBA32, false);
    16.             Utils.matToTexture2D (imgMat2, texture);
    17.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    roitest.PNG
     
  45. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Since this asset is a clone of the OpenCV Java wrapper, you are able to use the same API as OpenCV Java.
    http://enoxsoftware.github.io/OpenC...1_core.html#a5d62b913357eb54850358f95ad28f356
     
    Last edited: Jul 28, 2017
  46. devpilgrim

    devpilgrim

    Joined:
    Jan 26, 2014
    Posts:
    3
    Where can I find the functions "cvLoadImage", "cvReleaseImage", and "cvCloneImage"?
     
  47. Abirami-Govindarajan

    Abirami-Govindarajan

    Joined:
    Jul 6, 2017
    Posts:
    2
    Thank you, this is the one I was searching for. With the same code I'm also trying to overlay a transparent PNG image, but when overlaying, the transparency is not there. I don't understand the problem. How can I do this? Kindly help.
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The IplImage class (and the C API built on it) is not implemented in the OpenCV Java wrapper.
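    For reference, the old C-API calls map roughly onto the Java-style API like this (a sketch; the file path is a placeholder):

    ```csharp
    using OpenCVForUnity;

    // cvLoadImage   -> Imgcodecs.imread
    Mat img = Imgcodecs.imread ("path/to/image.jpg");

    // cvCloneImage  -> Mat.clone
    Mat copy = img.clone ();

    // cvReleaseImage -> Mat.release (the finalizer eventually frees the
    // native memory too, but releasing explicitly is more predictable)
    copy.release ();
    img.release ();
    ```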
     
  49. neshius108

    neshius108

    Joined:
    Nov 19, 2015
    Posts:
    110
    Heya,

    Great stuff, but I'm having a small problem that I seem unable to overcome.
    I would like to use `warpAffine`, but something is wrong; it doesn't seem to work like it does in C++. A previous post here showed a solution to a similar problem, but it just won't work for me.

    The current algorithm is fairly simple, but I get "CvException: Native object address is NULL":

    Code (CSharp):
    1. List<Point> psSrc = new List<Point>();
    2. List<Point> psDest = new List<Point>();
    3.
    4. for (int i = 0; i < landmarkPoints.Count; i++) {
    5.     foreach (Vector2 p in landmarkPoints[i]) {
    6.         psSrc.Add(new Point (p.x, p.y));
    7.         psDest.Add(new Point (p.x + 100, p.y));
    8.     }
    9. }
    10.
    11. MatOfPoint2f src = new MatOfPoint2f (psSrc.ToArray());
    12. MatOfPoint2f dst = new MatOfPoint2f (psDest.ToArray());
    13.
    14. // It fails here
    15. Mat M = Imgproc.getAffineTransform (src, dst);
    16. Imgproc.warpAffine (rgbaMat, rgbaMat, M, rgbaMat.size());
    Any thoughts? The code is so simple but it's still quite hard to see where the problem lies.
     
  50. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    You can display the native-side error message by enclosing the code between Utils.setDebugMode (true) and Utils.setDebugMode (false):
    imgproc::getAffineTransform_10() : C:\\opencv\modules\imgproc\src\imgwarp.cpp:6428: error: (-215) src.checkVector(2, CV_32F) == 3 && dst.checkVector(2, CV_32F) == 3 in function cv::getAffineTransform

    It seems that src and dst each need to contain exactly 3 points.
    http://geekn-nerd.blogspot.jp/2012/02/1-opencv-for-android.html
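    Following that constraint, a minimal sketch that feeds exactly three point pairs to getAffineTransform (psSrc, psDest, and rgbaMat are assumed to come from the code in the question):

    ```csharp
    // getAffineTransform requires exactly 3 source and 3 destination points.
    MatOfPoint2f src = new MatOfPoint2f (psSrc[0], psSrc[1], psSrc[2]);
    MatOfPoint2f dst = new MatOfPoint2f (psDest[0], psDest[1], psDest[2]);

    Utils.setDebugMode (true);    // surface native-side errors in the console
    Mat M = Imgproc.getAffineTransform (src, dst);
    Imgproc.warpAffine (rgbaMat, rgbaMat, M, rgbaMat.size ());
    Utils.setDebugMode (false);
    ```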