
[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    OpenCV for Unity
    Released Version 2.1.3


    Version changes
    2.1.3
    [UWP]Added OpenCVForUnityUWP_Beta3.zip
     
    BeyondWang likes this.
  2. pestantium

    pestantium

    Joined:
    Jun 30, 2011
    Posts:
    49
    Last edited: Feb 2, 2017
  3. ezsomething

    ezsomething

    Joined:
    Feb 1, 2017
    Posts:
    2
    Thank you for the fast reply. I see a large monolithic file 'opencv2' (about 190 MB) in the opencv2 framework. Do you think you could split that file up by library/module so that we could selectively delete those as well? Thanks!
     
  4. LAFI

    LAFI

    Joined:
    Sep 5, 2014
    Posts:
    47
    Hello, I'm working on an app for iOS devices. The problem is that I want the quad to fit every screen size, but it changes every time. I'm using the HandPoseEstimation sample; how can I fix it so it fits all screen sizes?
    Thank you
     
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  6. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Since 1.5b3 has not yet been released, I have not tried it yet.
     
  7. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Is there a screenshot of that state?
     
  8. LAFI

    LAFI

    Joined:
    Sep 5, 2014
    Posts:
    47
    Yes, here it is. I want the quad to fit the whole screen so that the blue areas do not appear (they show up when I test on the mobile device); the quad has to fill the entire screen, not just part of it.
     

    Attached Files:

  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    It is possible to adjust how the "Quad" is displayed by changing this part. The commented-out lines are the sample's original fit-to-screen logic; the active lines swap the two cases so that the image covers the whole screen and the excess is cropped instead of leaving empty borders.
    Code (CSharp):
    1.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    2.          
    3.             gameObject.transform.localScale = new Vector3 (webCamTextureMat.cols (), webCamTextureMat.rows (), 1);
    4.          
    5.             Debug.Log ("Screen.width " + Screen.width + " Screen.height " + Screen.height + " Screen.orientation " + Screen.orientation);
    6.  
    7.             float width = webCamTextureMat.width();
    8.             float height = webCamTextureMat.height();
    9.          
    10.             float widthScale = (float)Screen.width / width;
    11.             float heightScale = (float)Screen.height / height;
    12.             if (widthScale < heightScale) {
    13. //                Camera.main.orthographicSize = (width * (float)Screen.height / (float)Screen.width) / 2;
    14.                 Camera.main.orthographicSize = height / 2;
    15.             } else {
    16. //                Camera.main.orthographicSize = height / 2;
    17.                 Camera.main.orthographicSize = (width * (float)Screen.height / (float)Screen.width) / 2;
    18.             }
     
  10. eco_bach

    eco_bach

    Joined:
    Jul 8, 2013
    Posts:
    1,601
    Hi
    Is there a way to use OpenCV to accurately track the 4 corners of a large 10ft x 10ft x 10ft box in a webcam feed and then map those to the corners of a virtual box in Unity?
    I am creating an AR experience with a large plexiglass box, but I anticipate tracking problems from reflections when using traditional image-marker-based tracking.
     
  11. trevorchico

    trevorchico

    Joined:
    Feb 1, 2016
    Posts:
    7
    Hi,

    I have an AR headset experience where you can swipe your hand in front of the screen to move to the next camera filter.

    I'm currently doing that with code from the hand pose estimation sample.

    Do you recommend a less expensive way to do that?

    Cheers,

    Trevor
     
  12. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I do not know if it is possible.
    But if you can detect the four corners, I think you could estimate the pose of the cube.
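    For reference, one common way to get a pose from four known corners is Calib3d.solvePnP. This is only a rough sketch, not code from the asset's examples; detectedCorners is a placeholder for your own corner detection, and the camera intrinsics below are dummy values you would replace with calibrated ones.
    Code (CSharp):
        // Rough sketch (hypothetical): estimate the pose of one 10 ft (3.048 m) box face
        // from its 4 detected corners. detectedCorners is a Point[] in image pixels.
        MatOfPoint3f objectPoints = new MatOfPoint3f (
            new Point3 (0, 0, 0),
            new Point3 (3.048, 0, 0),
            new Point3 (3.048, 3.048, 0),
            new Point3 (0, 3.048, 0));
        MatOfPoint2f imagePoints = new MatOfPoint2f (detectedCorners);

        // Approximate pinhole camera matrix; fx, fy, cx, cy should come from calibration.
        double fx = 600, fy = 600, cx = 320, cy = 240;   // placeholder intrinsics
        Mat camMatrix = new Mat (3, 3, CvType.CV_64FC1);
        camMatrix.put (0, 0, fx, 0, cx, 0, fy, cy, 0, 0, 1.0);
        MatOfDouble distCoeffs = new MatOfDouble (0, 0, 0, 0);

        Mat rvec = new Mat ();
        Mat tvec = new Mat ();
        Calib3d.solvePnP (objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec);
        // rvec/tvec give the rotation and translation of that box face relative to the camera.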
     
  13. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    There seems to be such an implementation example.
     
  14. sticklezz

    sticklezz

    Joined:
    Oct 27, 2015
    Posts:
    33
    I can't get the background subtraction example to work. How do you initialize it to capture the background first? (Or does it simply never stop initializing?)
     
    Last edited: Feb 12, 2017
  15. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The background image seems to be updated every frame. BackgroundSubtractorMOG2Sample is almost the same as the source code on this page:
    http://docs.opencv.org/3.2.0/d1/dc5/tutorial_background_subtraction.html
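    For reference, the core of that tutorial boils down to roughly the following calls (a minimal sketch, assuming rgbaMat comes from WebCamTextureToMatHelper as in the examples). Passing a learning rate of 0 stops the background model from updating, which may help if it seems to "never stop initializing":
    Code (CSharp):
        // One-time setup (e.g. in OnWebCamTextureToMatHelperInited).
        BackgroundSubtractorMOG2 backgroundSubtractorMOG2 = Video.createBackgroundSubtractorMOG2 ();
        Mat rgbMat = new Mat ();
        Mat fgMaskMat = new Mat ();

        // Per frame (e.g. in Update).
        Imgproc.cvtColor (rgbaMat, rgbMat, Imgproc.COLOR_RGBA2RGB);
        backgroundSubtractorMOG2.apply (rgbMat, fgMaskMat);        // default learning rate: keeps updating the background
        // backgroundSubtractorMOG2.apply (rgbMat, fgMaskMat, 0);  // learning rate 0: freeze the learned background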
     
  16. sticklezz

    sticklezz

    Joined:
    Oct 27, 2015
    Posts:
    33
    I don't have results that look similar to that page, though. I'm not sure how it would remove the background if it is always initializing; everything would always be the background.

    Here is an OpenCV video and Python source code where they take an 'empty' picture and then draw only what is new relative to that image (a person). I'm not technical, so I have no idea how to add this in.



    python code
    https://gist.github.com/drscotthawley/2d6bbffce9dda5f3057b4879c3bd4422
     
  17. kan_chan

    kan_chan

    Joined:
    Sep 17, 2016
    Posts:
    1
    Hi, just wondering how I could achieve multi-image tracking (two or more image targets) with the Markerless Example, and whether I could save the tracking data (Mat) to an XML file or individual files, just like Vuforia. Thank you!
     
  18. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    This code is the Python code rewritten in C#.
    Code (CSharp):
    1.  
    2. using UnityEngine;
    3. using System.Collections;
    4. using UnityEngine.UI;
    5.  
    6. #if UNITY_5_3 || UNITY_5_3_OR_NEWER
    7. using UnityEngine.SceneManagement;
    8. #endif
    9. using OpenCVForUnity;
    10.  
    11. namespace OpenCVForUnitySample
    12. {
    13.     /// <summary>
    14.     /// WebCamTexture to mat sample.
    15.     /// </summary>
    16.     [RequireComponent(typeof(WebCamTextureToMatHelper))]
    17.     public class GreenScreenSample : MonoBehaviour
    18.     {
    19.         /// <summary>
    20.         /// The texture.
    21.         /// </summary>
    22.         Texture2D texture;
    23.  
    24.         /// <summary>
    25.         /// The web cam texture to mat helper.
    26.         /// </summary>
    27.         WebCamTextureToMatHelper webCamTextureToMatHelper;
    28.  
    29.         Mat bgMat;
    30.         Mat fgMaskMat;
    31.         Mat bgMaskMat;
    32.         Mat greenMat;
    33.  
    34.         [Range(0 , 255)]
    35.         public float thresh = 50.0f;
    36.  
    37.         public bool use_denoise;
    38.         public float denoise_h = 10.0f;
    39.  
    40.         public bool use_time_avg;
    41.         Mat avg1Mat;
    42.  
    43.         // Use this for initialization
    44.         void Start ()
    45.         {
    46.  
    47.             webCamTextureToMatHelper = gameObject.GetComponent<WebCamTextureToMatHelper> ();
    48.             webCamTextureToMatHelper.Init ();
    49.  
    50.         }
    51.  
    52.         /// <summary>
    53.         /// Raises the web cam texture to mat helper inited event.
    54.         /// </summary>
    55.         public void OnWebCamTextureToMatHelperInited ()
    56.         {
    57.             Debug.Log ("OnWebCamTextureToMatHelperInited");
    58.  
    59.             Mat webCamTextureMat = webCamTextureToMatHelper.GetMat ();
    60.  
    61.             texture = new Texture2D (webCamTextureMat.cols (), webCamTextureMat.rows (), TextureFormat.RGBA32, false);
    62.  
    63.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    64.  
    65.             gameObject.transform.localScale = new Vector3 (webCamTextureMat.cols (), webCamTextureMat.rows (), 1);
    66.             Debug.Log ("Screen.width " + Screen.width + " Screen.height " + Screen.height + " Screen.orientation " + Screen.orientation);
    67.  
    68.                                    
    69.             float width = webCamTextureMat.width();
    70.             float height = webCamTextureMat.height();
    71.                                    
    72.             float widthScale = (float)Screen.width / width;
    73.             float heightScale = (float)Screen.height / height;
    74.             if (widthScale < heightScale) {
    75.                 Camera.main.orthographicSize = (width * (float)Screen.height / (float)Screen.width) / 2;
    76.             } else {
    77.                 Camera.main.orthographicSize = height / 2;
    78.             }
    79.  
    80.  
    81.             bgMat = new Mat(webCamTextureMat.rows (), webCamTextureMat.cols (), CvType.CV_8UC4);
    82.             fgMaskMat = new Mat(webCamTextureMat.rows (), webCamTextureMat.cols (), CvType.CV_8UC1);
    83.             bgMaskMat = new Mat(webCamTextureMat.rows (), webCamTextureMat.cols (), CvType.CV_8UC1);
    84.             greenMat = new Mat(webCamTextureMat.rows (), webCamTextureMat.cols (), CvType.CV_8UC4, new Scalar(0,255,0,255));
    85.  
    86.             avg1Mat = new Mat(webCamTextureMat.rows (), webCamTextureMat.cols (), CvType.CV_32FC4);
    87.         }
    88.  
    89.         /// <summary>
    90.         /// Raises the web cam texture to mat helper disposed event.
    91.         /// </summary>
    92.         public void OnWebCamTextureToMatHelperDisposed ()
    93.         {
    94.             Debug.Log ("OnWebCamTextureToMatHelperDisposed");
    95.  
    96.             if(bgMat != null){
    97.                 bgMat.Dispose();
    98.                 bgMat = null;
    99.             }
    100.             if(fgMaskMat != null){
    101.                 fgMaskMat.Dispose();
    102.                 fgMaskMat = null;
    103.             }
    104.             if(bgMaskMat != null){
    105.                 bgMaskMat.Dispose();
    106.                 bgMaskMat = null;
    107.             }
    108.             if(greenMat != null){
    109.                 greenMat.Dispose();
    110.                 greenMat = null;
    111.             }
    112.             if(avg1Mat != null){
    113.                 avg1Mat.Dispose();
    114.                 avg1Mat = null;
    115.             }
    116.         }
    117.  
    118.         /// <summary>
    119.         /// Raises the web cam texture to mat helper error occurred event.
    120.         /// </summary>
    121.         /// <param name="errorCode">Error code.</param>
    122.         public void OnWebCamTextureToMatHelperErrorOccurred(WebCamTextureToMatHelper.ErrorCode errorCode){
    123.             Debug.Log ("OnWebCamTextureToMatHelperErrorOccurred " + errorCode);
    124.         }
    125.  
    126.         // Update is called once per frame
    127.         void Update ()
    128.         {
    129.             if (webCamTextureToMatHelper.IsPlaying () && webCamTextureToMatHelper.DidUpdateThisFrame ()) {
    130.  
    131.                 Mat rgbaMat = webCamTextureToMatHelper.GetMat ();
    132.  
    133.                 if (Input.GetKeyUp(KeyCode.Space) || Input.touchCount > 0)
    134.                 {
    135.                     rgbaMat.copyTo(bgMat);
    136.                 }
    137.  
    138.  
    139.                 if(use_time_avg){
    140.                     Imgproc.accumulateWeighted(rgbaMat, avg1Mat, 0.09);
    141.                     Core.convertScaleAbs(avg1Mat, rgbaMat);
    142.                 }
    143.  
    144.  
    145.                 find_fgmask(rgbaMat, bgMat, thresh, use_denoise, denoise_h);
    146.                 Core.bitwise_not(fgMaskMat, bgMaskMat);
    147.  
    148.                 greenMat.copyTo(rgbaMat, bgMaskMat);
    149.  
    150.  
    151.                 Imgproc.putText (rgbaMat, "SPACE KEY: Reset background img", new Point (5, rgbaMat.rows () - 10), Core.FONT_HERSHEY_SIMPLEX, 1.0, new Scalar (255, 255, 255, 255), 2, Imgproc.LINE_AA, false);
    152.  
    153.                 Utils.matToTexture2D (rgbaMat, texture, webCamTextureToMatHelper.GetBufferColors());
    154.             }
    155.  
    156.  
    157.         }
    158.  
    159.         private void find_fgmask(Mat fgMat, Mat bgMat, float thresh=13.0f, bool use_denoise=false, float h=10.0f){
    160.             Mat diff1 = new Mat();
    161.             Core.absdiff( fgMat, bgMat, diff1);
    162.             Mat diff2 = new Mat();
    163.             Core.absdiff( bgMat, fgMat, diff2);
    164.             Mat diff = diff1 + diff2;
    165.  
    166.             Imgproc.threshold(diff, diff, thresh, 0, Imgproc.THRESH_TOZERO);
    167.  
    168.             Imgproc.cvtColor(diff, fgMaskMat, Imgproc.COLOR_RGBA2GRAY);
    169.  
    170.             Imgproc.threshold(fgMaskMat, fgMaskMat, 10, 0, Imgproc.THRESH_TOZERO);
    171.  
    172.             if(use_denoise){
    173.                 int sws = (int)(Mathf.Ceil(21*h/10) / 2 * 2 + 1);
    174.  
    175.                 Photo.fastNlMeansDenoising(fgMaskMat, fgMaskMat, h, 5, sws);
    176.             }
    177.  
    178.             Imgproc.threshold(fgMaskMat, fgMaskMat, 0, 255, Imgproc.THRESH_BINARY);
    179.  
    180.             diff1.Dispose();
    181.             diff2.Dispose();
    182.             diff.Dispose();
    183.  
    184.         }
    185.    
    186.         /// <summary>
    187.         /// Raises the disable event.
    188.         /// </summary>
    189.         void OnDisable ()
    190.         {
    191.             webCamTextureToMatHelper.Dispose ();
    192.  
    193.         }
    194.  
    195.         /// <summary>
    196.         /// Raises the back button event.
    197.         /// </summary>
    198.         public void OnBackButton ()
    199.         {
    200.             #if UNITY_5_3 || UNITY_5_3_OR_NEWER
    201.             SceneManager.LoadScene ("OpenCVForUnitySample");
    202.             #else
    203.             Application.LoadLevel ("OpenCVForUnitySample");
    204.             #endif
    205.         }
    206.  
    207.         /// <summary>
    208.         /// Raises the play button event.
    209.         /// </summary>
    210.         public void OnPlayButton ()
    211.         {
    212.             webCamTextureToMatHelper.Play ();
    213.         }
    214.  
    215.         /// <summary>
    216.         /// Raises the pause button event.
    217.         /// </summary>
    218.         public void OnPauseButton ()
    219.         {
    220.             webCamTextureToMatHelper.Pause ();
    221.         }
    222.  
    223.         /// <summary>
    224.         /// Raises the stop button event.
    225.         /// </summary>
    226.         public void OnStopButton ()
    227.         {
    228.             webCamTextureToMatHelper.Stop ();
    229.         }
    230.  
    231.         /// <summary>
    232.         /// Raises the change camera button event.
    233.         /// </summary>
    234.         public void OnChangeCameraButton ()
    235.         {
    236.             webCamTextureToMatHelper.Init (null, webCamTextureToMatHelper.requestWidth, webCamTextureToMatHelper.requestHeight, !webCamTextureToMatHelper.requestIsFrontFacing);
    237.         }
    238.  
    239.  
    240.     }
    241. }
    242.  
    GreenScreen.PNG
     
    twobob likes this.
  19. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I think it is probably possible, but I do not have an implementation example.
    Since this asset is a clone of OpenCV Java, you can use the same API as OpenCV Java.
    If there is an implementation example using "OpenCV Java", I think it can also be implemented using "OpenCV for Unity".
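    For illustration only (this is not from the asset's examples, and "pattern.jpg" is a placeholder image name), an OpenCV Java feature-detection snippet ports to OpenCV for Unity almost one-to-one:
    Code (CSharp):
        // OpenCV Java:
        //   ORB orb = ORB.create();
        //   orb.detect(img, keypoints);
        // OpenCV for Unity (C#) uses the same classes and calls:
        Mat img = Imgcodecs.imread (Utils.getFilePath ("pattern.jpg"));
        MatOfKeyPoint keypoints = new MatOfKeyPoint ();
        ORB orb = ORB.create ();
        orb.detect (img, keypoints);
        Debug.Log ("keypoints: " + keypoints.total ());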
     
  20. sticklezz

    sticklezz

    Joined:
    Oct 27, 2015
    Posts:
    33
    thank you so much!!!!!
     
  21. Gustavo-Quiroz

    Gustavo-Quiroz

    Joined:
    Jul 26, 2013
    Posts:
    38
    Hello Enox,

    Is it possible to develop an app that replaces the colour of walls, like this?


    I see this last sample is pretty similar.
    Can you at least guide me on how to achieve this?

    Thanks
     
  22. jasper1993

    jasper1993

    Joined:
    Feb 21, 2017
    Posts:
    2
    When I use HOGDescriptor's method "compute(Mat img, MatOfFloat descriptors, Size winStride, Size padding, MatOfPoint locations)", the "descriptors" parameter comes back all zeros; the "compute" method seems to go wrong. Can you help me?



    SVM svm = SVM.create();
    svm = SVM.load(Utils.getFilePath("SVM_DATA.xml"));

    Texture2D screenShot = new Texture2D((int)rect.width, (int)rect.height, TextureFormat.RGB24, false);
    screenShot.ReadPixels(rect, 0, 0);
    screenShot.Apply();

    Mat imgMat = new Mat(screenShot.height, screenShot.width, CvType.CV_8UC4);
    Utils.texture2DToMat(screenShot, imgMat);
    Mat imgMat2 = new Mat(64, 64, CvType.CV_8UC4);
    Imgproc.resize(imgMat, imgMat2, imgMat2.size());

    MatOfFloat descriptors = new MatOfFloat();
    MatOfPoint locations = new MatOfPoint();
    HOGDescriptor hog = new HOGDescriptor(new Size(64, 64), new Size(16, 16), new Size(8, 8), new Size(8, 8), 9);
    hog.compute(imgMat2, descriptors, new Size(0, 0), new Size(0, 0), locations);

    float ret = svm.predict(descriptors);
    Debug.Log("answer: " + ret);
     
    Last edited: Feb 21, 2017
  23. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    It seems that the parameters are not correct.
    The following code worked fine.
    Code (CSharp):
    1.             Mat forHOGim = new Mat();
    2.             Size sz = new Size(64,128);
    3.             Imgproc.resize( myImage, myImage, sz );
    4.             Imgproc.cvtColor(myImage,forHOGim,Imgproc.COLOR_RGB2GRAY);
    5.             //forHOGim = myImage.clone();
    6.             MatOfFloat descriptors = new MatOfFloat(); //an empty vector of descriptors
    7.             Size winStride = new Size(64/2,128/2); //50% overlap in the sliding window
    8.             Size padding = new Size(0,0); //no padding around the image
    9.             MatOfPoint locations = new MatOfPoint(); ////an empty vector of locations, so perform full search
    10.             //HOGDescriptor hog = new HOGDescriptor();
    11.             HOGDescriptor hog = new HOGDescriptor(sz,new Size(16,16),new Size(8,8),new Size(8,8),9);
    12.             Debug.Log ("Constructed");
    13.             hog.compute(forHOGim , descriptors, new Size(16,16), padding, locations);
    14.             Debug.Log ("Computed");
    15.             Debug.Log (hog.getDescriptorSize()+" "+descriptors.size());
    16.             Debug.Log (descriptors.get(12,0)[0]);
    17.             double dd=0.0;
    18.             for (int i=0;i<3780;i++){
    19.                 if (descriptors.get(i,0)[0]!=dd) Debug.Log ("NOT ZERO");
    20.             }
    21.  
    22.  
    23.             Texture2D texture = new Texture2D (myImage.cols (), myImage.rows (), TextureFormat.RGBA32, false);
    24.  
    25.             Utils.matToTexture2D (myImage, texture);
    26.  
    27.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
     
    jasper1993 likes this.
  24. jasper1993

    jasper1993

    Joined:
    Feb 21, 2017
    Posts:
    2

    Thank you! I find that if I don't use Imgproc.cvtColor to convert the image to gray, the HOG compute method doesn't work, right?
     
  25. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    When enclosing the code with Utils.setDebugMode(true) and Utils.setDebugMode(false), the following error is displayed.
    Code (CSharp):
    1. objdetect::compute_10() : ..\..\..\modules\objdetect\src\hog.cpp:241: error: (-215) img.type() == CV_8U || img.type() == CV_8UC3 in function cv::HOGDescriptor::computeGradient
    Mat seems to need to be CV_8U or CV_8UC3.
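    So converting the RGBA Mat to grayscale (or RGB) before calling compute should satisfy that check; a minimal sketch, reusing the variable names from the earlier post:
    Code (CSharp):
        // hog.compute() asserts CV_8U (1 channel) or CV_8UC3 (3 channels), so convert the CV_8UC4 Mat first.
        Mat hogInputMat = new Mat ();
        Imgproc.cvtColor (imgMat2, hogInputMat, Imgproc.COLOR_RGBA2GRAY);
        hog.compute (hogInputMat, descriptors, new Size (0, 0), new Size (0, 0), locations);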
     
    Last edited: Feb 24, 2017
  26. charyyc

    charyyc

    Joined:
    Jun 1, 2016
    Posts:
    10
    Hello!
    How can I merge two images (one with half alpha) into one in Unity? I tried to use OpenCV's split and merge methods to do it, and the following C++ code works. However, in Unity I can't find the corresponding methods.

    int cvAdd4cMat_q(cv::Mat &dst, cv::Mat &scr, double scale);

    int main()
    {
        char str[16];
        Mat img1 = imread("bk.jpg"), img2 = imread("img.png", -1);
        Mat img1_t1(img1, cvRect(0, 0, img2.cols, img2.rows));
        cvAdd4cMat_q(img1_t1, img2, 1.0);
        imshow("final", img1);
        waitKey(0);
        return 0;
    }

    int cvAdd4cMat_q(cv::Mat &dst, cv::Mat &scr, double scale)
    {
        if (dst.channels() != 3 || scr.channels() != 4)
        {
            return true;
        }
        if (scale < 0.01)
            return false;
        std::vector<cv::Mat> scr_channels;
        std::vector<cv::Mat> dstt_channels;
        split(scr, scr_channels);
        split(dst, dstt_channels);
        CV_Assert(scr_channels.size() == 4 && dstt_channels.size() == 3);

        if (scale < 1)
        {
            scr_channels[3] *= scale;
            scale = 1;
        }
        for (int i = 0; i < 3; i++)
        {
            dstt_channels[i] = dstt_channels[i].mul(255.0 / scale - scr_channels[3], scale / 255.0);
            dstt_channels[i] += scr_channels[i].mul(scr_channels[3], scale / 255.0);
        }
        merge(dstt_channels, dst);
        return true;
    }

    Thank you so much!
     

    Attached Files:

  27. phantan

    phantan

    Joined:
    Nov 21, 2016
    Posts:
    1

    Attached Files:

  28. ctswearableglass

    ctswearableglass

    Joined:
    Feb 10, 2017
    Posts:
    2
    I am using OpenCV for Unity UWP for my HoloLens app. I used my own car-detecting cascade classifier and replaced the frontal-face XML in the face detection overlay sample. The result is that the app became very slow, with very low fps (without debugging), and the overlaid red rectangle is way off, inaccurate, and not resizing (maybe because of the XML I use). Are there any examples of how to replace the face recognition XML with our own real-world object detection XMLs? Is there any specific way to train the cascade so that OpenCV for HoloLens detects it smoothly? I tried a trucks XML too. I have uploaded the cards.xml cascade file along with this post. I am looking for a way to detect any random object of my choice by just feeding in the appropriate trained cascade XML for it. Please help me out here. Thanks
     

    Attached Files:

  29. NeedNap

    NeedNap

    Joined:
    Nov 1, 2012
    Posts:
    22
    I need to detect multiple markerless images (about 6) plus face tracking in an "efficient" way (performance speed-up).
    I tried to write the code myself using the sample Unity scene I found on the Asset Store, but the performance is very poor.
    Do you have any suggestions about it?
     
  30. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Gustavo-Quiroz likes this.
  31. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Code (CSharp):
    1.         void Start ()
    2.         {
    3.  
    4.             Mat img1 = Imgcodecs.imread (Utils.getFilePath("monalisa.png")), img2 = Imgcodecs.imread (Utils.getFilePath("bgtest.png"), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
    5.  
    6.             Imgproc.cvtColor(img1, img1, Imgproc.COLOR_BGR2RGB);
    7.             Imgproc.cvtColor(img2, img2, Imgproc.COLOR_BGRA2RGBA);
    8.  
    9.             Mat img1_t1 = new Mat (img1, new OpenCVForUnity.Rect (0, 0, img2.cols (), img2.rows ()));
    10.  
    11.             cvAdd4cMat_q (img1_t1, img2, 1.0);
    12.  
    13.  
    14.             Texture2D texture = new Texture2D (img1.cols (), img1.rows (), TextureFormat.RGBA32, false);
    15.            
    16.             Utils.matToTexture2D (img1, texture);
    17.            
    18.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    19.         }
    20.  
    21.         private bool cvAdd4cMat_q (Mat dst, Mat scr, double scale)
    22.         {
    23.             if (dst.channels () != 3 || scr.channels () != 4) {
    24.                 return true;
    25.             }
    26.             if (scale < 0.01)
    27.                 return false;
    28.             List<Mat> scr_channels = new List<Mat> ();
    29.             List<Mat> dstt_channels = new List<Mat> ();
    30.             Core.split (scr, scr_channels);
    31.             Core.split (dst, dstt_channels);
    32. //            CV_Assert(scr_channels.size() == 4 && dstt_channels.size() == 3);
    33.            
    34.             if (scale < 1) {
    35.                 scr_channels [3] *= scale;
    36.                 scale = 1;
    37.             }
    38.             for (int i = 0; i < 3; i++) {
    39.                 dstt_channels[i] = dstt_channels[i].mul ( new Mat(scr_channels[3].size(), CvType.CV_8UC1, new Scalar(255.0 / scale)) - scr_channels [3], scale / 255.0);
    40.                
    41.                 dstt_channels[i] += scr_channels[i].mul (scr_channels [3], scale / 255.0);
    42.             }
    43.             Core.merge (dstt_channels, dst);
    44.             return true;
    45.         }
    alpha.PNG
     
  32. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The FaceTracker Example code is a rewrite of https://github.com/MasteringOpenCV/code/tree/master/Chapter6_NonRigidFaceTracking using "OpenCV for Unity". Unfortunately, it is tutorial code, so it is not high performance. You may need to customize the code (use multithreading, or request a smaller WebCamTexture size).
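    For the second point, the Init overload used elsewhere in this thread accepts a requested size, so something like the following (320x240 is just an example value) may help:
    Code (CSharp):
        // Request a smaller capture size before processing starts (sketch; pick a size that suits your target device).
        webCamTextureToMatHelper.Init (null, 320, 240, webCamTextureToMatHelper.requestIsFrontFacing);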
     
  33. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I have not tried other cascade files yet.
    Does DetectSample work fine in the Unity Editor using your cascade file?
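    As a quick sanity check in the Editor, something along these lines could confirm the cascade itself loads and fires (a sketch, not asset code; it assumes cards.xml is placed where Utils.getFilePath can find it, e.g. under StreamingAssets, and that rgbaMat is the current camera frame):
    Code (CSharp):
        CascadeClassifier cascade = new CascadeClassifier (Utils.getFilePath ("cards.xml"));
        if (cascade.empty ()) {
            Debug.LogError ("Cascade failed to load.");
        } else {
            Mat grayMat = new Mat ();
            Imgproc.cvtColor (rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
            Imgproc.equalizeHist (grayMat, grayMat);

            MatOfRect objects = new MatOfRect ();
            cascade.detectMultiScale (grayMat, objects, 1.1, 3, 0, new Size (30, 30), new Size ());
            Debug.Log ("detections: " + objects.toArray ().Length);
        }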
     
  34. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Processing marker detection in another thread may improve performance. Currently all processing is done in a single thread.
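    A rough illustration of what that could look like (not code from the asset; DetectMarkers and DrawResult are hypothetical stand-ins for your own detection and drawing code, and the fields mirror the WebCamTextureToMatHelper samples):
    Code (CSharp):
        bool isDetecting = false;
        object resultLock = new object ();
        List<MatOfPoint> latestResult;              // whatever your detector produces

        void Update ()
        {
            if (webCamTextureToMatHelper.IsPlaying () && webCamTextureToMatHelper.DidUpdateThisFrame ()) {
                Mat rgbaMat = webCamTextureToMatHelper.GetMat ();

                if (!isDetecting) {
                    isDetecting = true;
                    Mat copy = rgbaMat.clone ();    // snapshot the frame for the worker thread
                    System.Threading.ThreadPool.QueueUserWorkItem (_ => {
                        List<MatOfPoint> result = DetectMarkers (copy);   // expensive OpenCV work, off the main thread
                        lock (resultLock) { latestResult = result; }
                        copy.Dispose ();
                        isDetecting = false;
                    });
                }

                // Unity API calls stay on the main thread; draw the most recent result.
                lock (resultLock) {
                    if (latestResult != null) DrawResult (rgbaMat, latestResult);
                }
                Utils.matToTexture2D (rgbaMat, texture);
            }
        }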
     
    NeedNap likes this.
  35. ctswearableglass

    ctswearableglass

    Joined:
    Feb 10, 2017
    Posts:
    2
    Thanks for the reply. I have not tried it in the Unity Editor. Are there any specific attributes for cascade training for a HoloLens-based app, like changing specific bounding rectangles, etc.?
     
  36. charyyc

    charyyc

    Joined:
    Jun 1, 2016
    Posts:
    10
    Hi!
    When I use this code, it does not compile; the error says the Mat object cannot use the + (-) operator. My OpenCV for Unity version is 2.0.5.
    Thank you!
     

    Attached Files:

  37. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  38. broadfire0016

    broadfire0016

    Joined:
    Apr 4, 2014
    Posts:
    1
    Good Day!

    I've tried to build an app using this plugin, and it crashes every time I deny access to the camera. My question is: how can I create an error handler to prevent the app from crashing? Thanks.
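    One possible direction (a sketch, not an official fix from the asset) is to request camera permission yourself and only initialize the helper once it has been granted, so a denial can be handled instead of crashing:
    Code (CSharp):
        // Inside a MonoBehaviour, replacing the usual Start() of the examples.
        IEnumerator Start ()
        {
            yield return Application.RequestUserAuthorization (UserAuthorization.WebCam);

            if (Application.HasUserAuthorization (UserAuthorization.WebCam)) {
                webCamTextureToMatHelper = gameObject.GetComponent<WebCamTextureToMatHelper> ();
                webCamTextureToMatHelper.Init ();
            } else {
                Debug.LogWarning ("Camera access was denied; skipping OpenCV initialization.");
            }
        }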
     
  39. blqck

    blqck

    Joined:
    Dec 2, 2012
    Posts:
    23
    Hi Guys,
    Can someone help me convert this C++ code to C#?


    #include "stdafx.h"
    #include "cv.h"
    #include "highgui.h"

    int _tmain(int argc, _TCHAR* argv[])
    {
    // load the input image
    IplImage* img = cvLoadImage("test.jpg");

    // define the seed point
    CvPoint seedPoint = cvPoint(200,200);

    // flood fill with red
    cvFloodFill(img, seedPoint, CV_RGB(255,0,0), CV_RGB(8,90,60), CV_RGB(10,100,70),NULL,4,NULL);

    // draw a blue circle at the seed point
    cvCircle(img, seedPoint, 3, CV_RGB(0,0,255), 3, 8);

    // show the output
    cvNamedWindow("Output", CV_WINDOW_AUTOSIZE);
    cvShowImage("Output", img);

    // wait for user
    cvWaitKey(0);

    // save image
    cvSaveImage("output.jpg",img);

    // garbage collection
    cvReleaseImage(&img);
    cvDestroyWindow("Output");
    return 0;
    }


    This is the URL of the code: http://www.andrew-seaford.co.uk/flood-fill-opencv/
     
  40. blqck

    blqck

    Joined:
    Dec 2, 2012
    Posts:
    23
    I'm currently working on a project using this asset.

    I want to detect the edges of any surface and fill the inside with a red color, for example:

    I'm currently doing this:
    Converting the real image to a gray image (video)
    Detecting edges using Canny edge detection
    Applying contour detection, then drawing the contours

    But when I try to apply the flood-fill algorithm,

    nothing changes, as if I'm not calling the floodFill function at all.

    Any help, mates?

    Code (CSharp):
    1. Imgproc.cvtColor (rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
    2. Mat thresholdMat = new Mat();
    3. Imgproc.threshold(grayMat, thresholdMat, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
    4.  
    5. Mat hierarchy = new Mat();
    6. List<MatOfPoint> contours = new List<MatOfPoint>();
    7. Imgproc.findContours(thresholdMat, contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
    8. for (int i = 0; i < contours.Count; i++)
    9.            {
    10. Scalar color = new Scalar(Random.Range(0, 255),  Random.Range(0, 255), Random.Range(0, 255));
    11. Imgproc.drawContours(grayMat, contours, i, color);
    12.            }
    13. Imgproc.floodFill(rgbaMat, grayMat, p, color);
    14. Utils.matToTexture2D (grayMat, texture, colors);
     
  41. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Does the crash occur on every platform?
     
  42. charyyc

    charyyc

    Joined:
    Jun 1, 2016
    Posts:
    10
  43. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    If you enclose the code between Utils.setDebugMode(true) and Utils.setDebugMode(false), the error is displayed in the console.
    Code (CSharp):
    1.             Utils.setDebugMode(true);
    2.  
    3.             Texture2D imgTexture = Resources.Load ("detect_blob") as Texture2D;
    4.  
    5.             Mat rgbaMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC4);
    6.  
    7.             Utils.texture2DToMat (imgTexture, rgbaMat);
    8.             Debug.Log ("imgMat.ToString() " + rgbaMat.ToString ());
    9.  
    10.             Mat rgbMat = new Mat();
    11.             Imgproc.cvtColor (rgbaMat, rgbMat, Imgproc.COLOR_RGBA2RGB);
    12.  
    13.             Mat grayMat = new Mat();
    14.             // define the seed point
    15.             Point p = new Point(200,200);
    16.  
    17.             Imgproc.cvtColor (rgbMat, grayMat, Imgproc.COLOR_RGB2GRAY);
    18.             Mat thresholdMat = new Mat();
    19.             Imgproc.threshold(grayMat, thresholdMat, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
    20.            
    21.             Mat hierarchy = new Mat();
    22.             List<MatOfPoint> contours = new List<MatOfPoint>();
    23.             Imgproc.findContours(thresholdMat, contours, hierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
    24.  
    25.             Scalar color = null;
    26.             for (int i = 0; i < contours.Count; i++)
    27.             {
    28.                 color = new Scalar(Random.Range(0, 255),  Random.Range(0, 255), Random.Range(0, 255));
    29.                 Imgproc.drawContours(rgbMat, contours, i, color);
    30.             }
    31.  
    32.             color = new Scalar(255,  0, 0);
    33.             Imgproc.floodFill(rgbMat, new Mat(), p, color);
    34.  
    35.             color = new Scalar(255,  255, 0);
    36.             // draw a blue circle at the seed point
    37.             Imgproc.circle(rgbMat, p, 3, color, 3, 8, 0);
    38.  
    39.  
    40.             Texture2D texture = new Texture2D (rgbaMat.cols (), rgbaMat.rows (), TextureFormat.RGBA32, false);
    41.  
    42.             Utils.matToTexture2D (rgbMat, texture);
    43.  
    44.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    45.  
    46.  
    47.             Utils.setDebugMode(false);
     
  44. blqck

    blqck

    Joined:
    Dec 2, 2012
    Posts:
    23

    Thanks EnoxSoftware,

    I have tested the code, and everything is clear, but there's something I didn't understand.
    Check the pictures of the test:
    This shows the working sample:
    http://imgur.com/a/x5Wv3
    This shows the non-working sample:

    http://imgur.com/a/RKDcX
    I don't know why it only fills under certain conditions, like this:
    http://imgur.com/a/VTBPP
    Is it something related to luminance?

    I really want to fill any surface (edge detection, etc.), and I can't find the problem in the code!
     
    Last edited: Mar 6, 2017
  45. skuby

    skuby

    Joined:
    Oct 27, 2014
    Posts:
    2
    EnoxSoftware,

    I want to know when a face is not being detected, but I don't see a result:

    OpenCVForUnity.Rect[] rects = faces.toArray ();
    for (int i = 0; i < rects.Length; i++)
    {
        Debug.Log (rects);

        if (rects.Length <= 0)
        {
            Debug.Log (" Face NOT Detected");
        }
    }
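    (A likely reason no log ever appears is that the length check sits inside the loop, which never runs when the array is empty; a minimal sketch with the check moved outside:)
    Code (CSharp):
        OpenCVForUnity.Rect[] rects = faces.toArray ();
        if (rects.Length == 0) {
            Debug.Log ("Face NOT Detected");
        } else {
            for (int i = 0; i < rects.Length; i++) {
                Debug.Log (rects [i]);
            }
        }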
     
  46. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Probably the result is influenced by shadows and lighting.
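    One knob worth experimenting with (a sketch, untested on your images) is the floodFill overload that takes loDiff/upDiff tolerances, so that pixels within a per-channel brightness difference of their neighbours are still treated as the same surface; p and rgbMat are the variables from the earlier example:
    Code (CSharp):
        // Flood fill with an explicit tolerance; tune the loDiff/upDiff values for your lighting.
        Mat mask = new Mat ();      // empty mask, as in the earlier example
        Imgproc.floodFill (rgbMat, mask, p, new Scalar (255, 0, 0),
            new OpenCVForUnity.Rect (), new Scalar (20, 20, 20), new Scalar (20, 20, 20), 4);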
     
  47. Felix_M

    Felix_M

    Joined:
    Feb 28, 2016
    Posts:
    11
    I actually had that happen just using the Unity camera, when loading scenes/turning off the camera. Both crashed, so I bought NatCam.

    There was a post on the issue tracker saying it should be fixed in the beta; I downloaded it and it was still broken. Not sure if it's the same for you, but it may be worth a shot, if you haven't already, to try it in a new project.
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Could you tell me the environment you tested in?
    In my environment this code works fine:
    Unity 5.0.0
    Windows 8.1
    OpenCV for Unity 2.1.4
     
  49. blqck

    blqck

    Joined:
    Dec 2, 2012
    Posts:
    23
    And how do I avoid this problem?

    Also, another question: I'm using Canny edge detection, and with that I cannot detect wall edges, only objects, etc.
    Is there an algorithm that can help me detect wall edges?
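    One possible direction (an untested sketch, not from the asset's examples) is to look for long straight lines instead of contours: wall edges are usually long lines, so Imgproc.HoughLinesP on the Canny output can pick them up. grayMat and rgbaMat are assumed to be the Mats from your earlier snippet:
    Code (CSharp):
        Mat edges = new Mat ();
        Imgproc.Canny (grayMat, edges, 50, 150);

        Mat lines = new Mat ();
        // rho = 1 px, theta = 1 degree; threshold, minLineLength and maxLineGap need tuning for your scene.
        Imgproc.HoughLinesP (edges, lines, 1, Mathf.PI / 180, 80, 100, 10);

        for (int i = 0; i < lines.rows (); i++) {
            double[] l = lines.get (i, 0);
            Imgproc.line (rgbaMat, new Point (l [0], l [1]), new Point (l [2], l [3]), new Scalar (255, 0, 0, 255), 2);
        }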
     
  50. Akeru

    Akeru

    Joined:
    May 2, 2014
    Posts:
    13
    Hi!

    I recently purchased both OpenCV and Dlib and tested their examples, and I noticed one thing: the FaceSwapper and FaceMask examples run very smoothly, but the FaceTracker example (the red-head one, using OpenCV only, without Dlib I think) runs a bit slowly and is not as precise as the Mask examples.

    I want to do something like this. Is this result possible to achieve using your OpenCV and Dlib assets?

    Thank you in advance.