[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
As a result of investigation, on the Google Pixel the low-light condition seems to change depending on the value of the requestedFPS parameter used to initialize WebCamTexture.

    • requestedFPS=30: the rear camera (fps 30) is fine; the front camera (fps 30) is very dark.
    • requestedFPS=15: the rear camera (fps 15) is fine; the front camera (fps 15) is fine.
    • requestedFPS=0 or null: the rear camera (fps 25) is fine; the front camera (fps 25) is moderately dark.
    • requestedFPS=1: the rear camera (fps 30) is fine; the front camera (fps 15) is fine.
    This problem seems to occur only on the Google Pixel series.
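    Based on the observations above, a minimal workaround sketch for affected devices could be to request a low frame rate when opening the front camera. This assumes the standard Unity WebCamTexture constructor; the class name and the 640x480 resolution are illustrative, not from the asset:

    ```csharp
    using UnityEngine;

    public class PixelFrontCameraWorkaround : MonoBehaviour
    {
        void Start ()
        {
            foreach (WebCamDevice device in WebCamTexture.devices) {
                if (device.isFrontFacing) {
                    // requestedFPS = 15 avoided the dark image on the Google Pixel
                    // front camera in the tests above; other values (30, 0) did not.
                    WebCamTexture webCamTexture = new WebCamTexture (device.name, 640, 480, 15);
                    webCamTexture.Play ();
                    GetComponent<Renderer> ().material.mainTexture = webCamTexture;
                    break;
                }
            }
        }
    }
    ```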
     
  2. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
  3. sticklezz

    sticklezz

    Joined:
    Oct 27, 2015
    Posts:
    33
    >>This problem seems to occur only in the Google Pixel series.<<

    Ah- so this is a specific bug w/ Google Pixel hardware?

    thank you so much for looking into this
     
  4. nandonandito

    nandonandito

    Joined:
    Nov 24, 2016
    Posts:
    43
    Hi, I bought this plugin on the Asset Store, but why is my camera orange when I play it in the editor? Thank you.
     
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    I do not know whether it is a Google Pixel bug or a Unity bug. I have tested on several devices, but this bug has not occurred on any device other than the Google Pixel.
     
  6. evolutionco

    evolutionco

    Joined:
    Apr 2, 2018
    Posts:
    4
    Is there any video tutorial series, especially one that uses Unity?
     
  7. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Could you upload a screenshot?
     
  8. HeavyArmoredMan

    HeavyArmoredMan

    Joined:
    Apr 9, 2017
    Posts:
    8
    Hi EnoxSoftware,

    If I only use the Aruco contrib module, any idea how to reduce the build size by excluding all other modules except the Aruco?

    Thanks
     
  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Unfortunately, there is no way to remove unnecessary modules.
     
  10. VHMOliveira

    VHMOliveira

    Joined:
    May 19, 2017
    Posts:
    3
    Hello everyone, I am new to OpenCV and would like to know how I can get the positions of the points of the face during real-time FaceRecognition, and which training model would give the best precision while maintaining speed.
     
  11. nandonandito

    nandonandito

    Joined:
    Nov 24, 2016
    Posts:
    43

    Why doesn't the camera work on my computer? I'm using a Microsoft LifeCam Studio USB webcam, and it doesn't work, as shown in this screenshot.
     
  12. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
  13. Lai-Wei-Han

    Lai-Wei-Han

    Joined:
    Nov 12, 2017
    Posts:
    5
    Hello @EnoxSoftware ,

    Thank you for your last answer.

    I'm a beginner in both OpenCV and TensorFlow. I'm trying to use a .pb file trained with the Faster-RCNN-Inception-V2-COCO model from TensorFlow's model zoo, together with a .pbtxt file I created myself, in TensorFlowWebCamTextureExample. But Unity crashes when I press Play. The model can be tested in TensorFlow without problems. Can you tell me which model I should use, how to create the imagenet_comp_graph_label_strings.txt file, and whether my .pbtxt file is correct?

    I have seen some Reply in here (https://forum.unity.com/search/9105053/?q=tensorflow&t=post&o=date&c[thread]=277080).

    The pbtxt file looks like:

    item {
      id: 1
      name: 'nine'
    }

    item {
      id: 2
      name: 'ten'
    }



    Work with
    OS : windows 10
    Unity version : 2017.2.0f3
    OpenCV for Unity version : 2.2.8
    Tensorflow : 1.7
     
  14. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    The Faster-RCNN model by TensorFlow seems not to be implemented in the OpenCV Dnn module yet.
    https://github.com/opencv/opencv/issues/10393
    https://github.com/opencv/opencv/pull/11255
     
  15. Totoro205

    Totoro205

    Joined:
    Dec 12, 2017
    Posts:
    18
    I was wondering if you found a solution to your problem. I'm using this asset to take input from a webcam, run a Canny edge detector, and then transform the result into a race track/maze (by adding colliders). I would appreciate your help.
     
  16. Lai-Wei-Han

    Lai-Wei-Han

    Joined:
    Nov 12, 2017
    Posts:
    5
    I tried this reply (https://forum.unity.com/threads/released-opencv-for-unity.277080/page-32#post-3427386) yesterday, but it still crashed Unity.
    Can I use the ssd_mobilenet_v1_coco model from TensorFlow's model zoo to train my model? Or something else?

    THK !

    Work with
    OS : windows 10
    Unity version : 2017.2.0f3
    OpenCV for Unity version : 2.2.8
     
  17. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Which model did you try? This model worked without problems for me.
    https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
    MobileNet-SSD TensorFlow >= 1.4 weights config

    Code (CSharp):
    #if !UNITY_WSA_10_0

    using UnityEngine;
    using System.Collections;
    using System.Collections.Generic;

    #if UNITY_5_3 || UNITY_5_3_OR_NEWER
    using UnityEngine.SceneManagement;
    #endif
    using OpenCVForUnity;

    namespace OpenCVForUnityExample
    {
        /// <summary>
        /// MobileNet SSD Example
        /// This example uses Single-Shot Detector (https://arxiv.org/abs/1512.02325) to detect objects.
        /// Referring to https://github.com/opencv/opencv/blob/master/samples/dnn/mobilenet_ssd_python.py.
        /// </summary>
        public class MobileNetSSDExample : MonoBehaviour
        {
            const float inWidth = 300;
            const float inHeight = 300;
            //float inScaleFactor = 0.007843f;
            float inScaleFactor = 1.0f / 127.5f;
            float meanVal = 127.5f;

            //string[] classNames = {"background",
            //    "aeroplane", "bicycle", "bird", "boat",
            //    "bottle", "bus", "car", "cat", "chair",
            //    "cow", "diningtable", "dog", "horse",
            //    "motorbike", "person", "pottedplant",
            //    "sheep", "sofa", "train", "tvmonitor"
            //};

            string dnn004545_jpg_filepath;
            string MobileNetSSD_deploy_caffemodel_filepath;
            string MobileNetSSD_deploy_prototxt_filepath;

    #if UNITY_WEBGL && !UNITY_EDITOR
            Stack<IEnumerator> coroutines = new Stack<IEnumerator> ();
    #endif

            // Use this for initialization
            void Start ()
            {
    #if UNITY_WEBGL && !UNITY_EDITOR
                var getFilePath_Coroutine = GetFilePath ();
                coroutines.Push (getFilePath_Coroutine);
                StartCoroutine (getFilePath_Coroutine);
    #else
                dnn004545_jpg_filepath = Utils.getFilePath ("dnn/004545.jpg");
                //MobileNetSSD_deploy_caffemodel_filepath = Utils.getFilePath ("dnn/MobileNetSSD_deploy.caffemodel");
                //MobileNetSSD_deploy_prototxt_filepath = Utils.getFilePath ("dnn/MobileNetSSD_deploy.prototxt");
                MobileNetSSD_deploy_caffemodel_filepath = Utils.getFilePath ("dnn/frozen_inference_graph.pb");
                MobileNetSSD_deploy_prototxt_filepath = Utils.getFilePath ("dnn/ssd_mobilenet_v1_coco_2017_11_17.pbtxt");
                Run ();
    #endif
            }

    #if UNITY_WEBGL && !UNITY_EDITOR
            private IEnumerator GetFilePath ()
            {
                var getFilePathAsync_0_Coroutine = Utils.getFilePathAsync ("dnn/004545.jpg", (result) => {
                    dnn004545_jpg_filepath = result;
                });
                coroutines.Push (getFilePathAsync_0_Coroutine);
                yield return StartCoroutine (getFilePathAsync_0_Coroutine);

                var getFilePathAsync_1_Coroutine = Utils.getFilePathAsync ("dnn/MobileNetSSD_deploy.caffemodel", (result) => {
                    MobileNetSSD_deploy_caffemodel_filepath = result;
                });
                coroutines.Push (getFilePathAsync_1_Coroutine);
                yield return StartCoroutine (getFilePathAsync_1_Coroutine);

                var getFilePathAsync_2_Coroutine = Utils.getFilePathAsync ("dnn/MobileNetSSD_deploy.prototxt", (result) => {
                    MobileNetSSD_deploy_prototxt_filepath = result;
                });
                coroutines.Push (getFilePathAsync_2_Coroutine);
                yield return StartCoroutine (getFilePathAsync_2_Coroutine);

                coroutines.Clear ();

                Run ();
            }
    #endif

            void Run ()
            {
                // If true, the error log of the native-side OpenCV will be displayed on the Unity Editor Console.
                Utils.setDebugMode (true);

                Mat img = Imgcodecs.imread (dnn004545_jpg_filepath);
    #if !UNITY_WSA_10_0
                if (img.empty ()) {
                    Debug.LogError ("dnn/004545.jpg is not loaded. The image file can be downloaded here: \"https://github.com/chuanqi305/MobileNet-SSD/blob/master/images/004545.jpg\". Please copy it to the \"Assets/StreamingAssets/dnn/\" folder.");
                    img = new Mat (375, 500, CvType.CV_8UC3, new Scalar (0, 0, 0));
                }
    #endif

                // Adjust Quad.transform.localScale.
                gameObject.transform.localScale = new Vector3 (img.width (), img.height (), 1);
                Debug.Log ("Screen.width " + Screen.width + " Screen.height " + Screen.height + " Screen.orientation " + Screen.orientation);

                float imageWidth = img.width ();
                float imageHeight = img.height ();

                float widthScale = (float)Screen.width / imageWidth;
                float heightScale = (float)Screen.height / imageHeight;
                if (widthScale < heightScale) {
                    Camera.main.orthographicSize = (imageWidth * (float)Screen.height / (float)Screen.width) / 2;
                } else {
                    Camera.main.orthographicSize = imageHeight / 2;
                }

                Net net = null;

                if (string.IsNullOrEmpty (MobileNetSSD_deploy_caffemodel_filepath) || string.IsNullOrEmpty (MobileNetSSD_deploy_prototxt_filepath)) {
                    Debug.LogError ("model file is not loaded. The model and prototxt file can be downloaded here: \"https://github.com/chuanqi305/MobileNet-SSD\". Please copy them to the \"Assets/StreamingAssets/dnn/\" folder.");
                } else {
                    //net = Dnn.readNetFromCaffe (MobileNetSSD_deploy_prototxt_filepath, MobileNetSSD_deploy_caffemodel_filepath);
                    net = Dnn.readNetFromTensorflow (MobileNetSSD_deploy_caffemodel_filepath, MobileNetSSD_deploy_prototxt_filepath);
                }

                if (net == null) {
                    Imgproc.putText (img, "model file is not loaded.", new Point (5, img.rows () - 30), Core.FONT_HERSHEY_SIMPLEX, 0.7, new Scalar (255, 255, 255), 2, Imgproc.LINE_AA, false);
                    Imgproc.putText (img, "Please read console message.", new Point (5, img.rows () - 10), Core.FONT_HERSHEY_SIMPLEX, 0.7, new Scalar (255, 255, 255), 2, Imgproc.LINE_AA, false);
                } else {
                    Mat blob = Dnn.blobFromImage (img, inScaleFactor, new Size (inWidth, inHeight), new Scalar (meanVal, meanVal, meanVal), true, false);

                    net.setInput (blob);

                    TickMeter tm = new TickMeter ();
                    tm.start ();

                    Mat prob = net.forward ();
                    prob = prob.reshape (1, (int)prob.total () / 7);

                    tm.stop ();
                    Debug.Log ("Inference time, ms: " + tm.getTimeMilli ());

                    float[] data = new float[7];

                    float confidenceThreshold = 0.3f;
                    for (int i = 0; i < prob.rows (); i++) {

                        prob.get (i, 0, data);

                        float confidence = data [2];

                        if (confidence > confidenceThreshold) {
                            int class_id = (int)(data [1]);

                            float left = data [3] * img.cols ();
                            float top = data [4] * img.rows ();
                            float right = data [5] * img.cols ();
                            float bottom = data [6] * img.rows ();

                            Debug.Log ("class_id: " + class_id);
                            Debug.Log ("Confidence: " + confidence);

                            Debug.Log (" " + left + " " + top + " " + right + " " + bottom);

                            Imgproc.rectangle (img, new Point (left, top), new Point (right, bottom),
                                new Scalar (0, 255, 0), 2);
                            //string label = classNames [class_id] + ": " + confidence;
                            //int[] baseLine = new int[1];
                            //Size labelSize = Imgproc.getTextSize (label, Core.FONT_HERSHEY_SIMPLEX, 0.5, 1, baseLine);
                            //
                            //top = Mathf.Max (top, (float)labelSize.height);
                            //
                            //Imgproc.rectangle (img, new Point (left, top),
                            //    new Point (left + labelSize.width, top + labelSize.height + baseLine [0]),
                            //    new Scalar (255, 255, 255), Core.FILLED);
                            //Imgproc.putText (img, label, new Point (left, top + labelSize.height),
                            //    Core.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar (0, 0, 0));
                        }
                    }

                    prob.Dispose ();
                }

                Imgproc.cvtColor (img, img, Imgproc.COLOR_BGR2RGB);

                Texture2D texture = new Texture2D (img.cols (), img.rows (), TextureFormat.RGBA32, false);

                Utils.matToTexture2D (img, texture);

                gameObject.GetComponent<Renderer> ().material.mainTexture = texture;

                Utils.setDebugMode (false);
            }

            // Update is called once per frame
            void Update ()
            {

            }

            /// <summary>
            /// Raises the back button click event.
            /// </summary>
            public void OnBackButtonClick ()
            {
    #if UNITY_5_3 || UNITY_5_3_OR_NEWER
                SceneManager.LoadScene ("OpenCVForUnityExample");
    #else
                Application.LoadLevel ("OpenCVForUnityExample");
    #endif
            }
        }
    }
    #endif
     
  18. Totoro205

    Totoro205

    Joined:
    Dec 12, 2017
    Posts:
    18
    Hello, is there a method to convert from List<MatOfPoint> to Vector2? I'm using the findContours method and then trying to use SetPath of a PolygonCollider, so I need to convert the contours list first.
    I appreciate your help.
     
  19. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Unfortunately, there is no convenient method to convert from List<MatOfPoint> to Vector2.
    Code (CSharp):
    List<MatOfPoint> contours = new List<MatOfPoint> ();
    // ...fill "contours", e.g. with Imgproc.findContours ()...
    for (int i = 0; i < contours.Count; i++) {
        List<Point> points = contours [i].toList ();
        for (int p = 0; p < points.Count; p++) {
            Vector2 vec2 = new Vector2 ((float)points [p].x, (float)points [p].y);
        }
    }
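    Going one step further, a hedged sketch of feeding such converted points into PolygonCollider2D.SetPath, one collider path per contour. The class name, the polygonCollider field, and the ApplyContours entry point are hypothetical, not part of the asset:

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using OpenCVForUnity;

    public class ContourColliderExample : MonoBehaviour
    {
        // Assign a PolygonCollider2D in the Inspector (hypothetical setup).
        public PolygonCollider2D polygonCollider;

        public void ApplyContours (List<MatOfPoint> contours)
        {
            polygonCollider.pathCount = contours.Count;
            for (int i = 0; i < contours.Count; i++) {
                List<Point> points = contours [i].toList ();
                Vector2[] path = new Vector2[points.Count];
                for (int p = 0; p < points.Count; p++) {
                    // OpenCV's y axis points down; flip it if your world space needs y up.
                    path [p] = new Vector2 ((float)points [p].x, (float)points [p].y);
                }
                polygonCollider.SetPath (i, path);
            }
        }
    }
    ```

    Note that collider points are in the collider's local space, so you will likely also need to scale the pixel coordinates to match your quad's transform.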
     
    Totoro205 likes this.
  20. Lai-Wei-Han

    Lai-Wei-Han

    Joined:
    Nov 12, 2017
    Posts:
    5
    Thanks a lot! Unity no longer crashes, and this model worked without problems:
    https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API

    But my own trained model from the TensorFlow detection model zoo, based on ssd_mobilenet_v1_coco_2017_11_17 and the tutorial, has two errors.

    Code (CSharp):
    Mat prob = net.forward ();
    prob = prob.reshape (1, (int)prob.total () / 7);
    CvException: Native object address is NULL

    Can you help me ?

    THK !

    Work with
    OS : windows 10
    Unity version : 2017.2.0f3
    OpenCV for Unity version : 2.2.8
     

    Attached Files:

    • 擷取.PNG
      擷取.PNG
      File size:
      24.2 KB
      Views:
      947
    • 2.PNG
      2.PNG
      File size:
      49.9 KB
      Views:
      1,141
    • 3.PNG
      3.PNG
      File size:
      28.5 KB
      Views:
      1,005
    Last edited: Apr 12, 2018
  21. Lai-Wei-Han

    Lai-Wei-Han

    Joined:
    Nov 12, 2017
    Posts:
    5
  22. wmarsman

    wmarsman

    Joined:
    Oct 27, 2017
    Posts:
    1
    I know this was your answer back in 2014, but is there any interest in revisiting this? Better support for dnn models has been added to both OpenCV and OpenCVForUnity, and GPU support would be great to speed things up as well; even reduced neural nets execute slowly in Unity. As a quick workaround, I push most of the processing out to other threads and then continue processing when done, but this obviously has its own issues.
     
    Lai-Wei-Han likes this.
  23. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Currently, there are no plans to add GPU support.
    Also, strangely, the processing time of the dnn module seems to be longer when using the GPU than when using the CPU.
    https://github.com/opencv/opencv/wiki/DNN-Efficiency
     
  24. Totoro205

    Totoro205

    Joined:
    Dec 12, 2017
    Posts:
    18
    Hello @EnoxSoftware , I would like to know if it's possible to get the extreme points/corners of a contour after applying the approxPolyDP method to it. (I need those points to define the array for the PolygonCollider2D in Unity.)

    Here's a sample of the code I'm using
    Code (CSharp):
    foreach (var c in contours)
    {
        MatOfPoint2f c2 = new MatOfPoint2f (c.toArray ());
        double aDist = Imgproc.arcLength (c2, true) * 0.01f;
        MatOfPoint2f aCurve = new MatOfPoint2f ();
        MatOfPoint aC = new MatOfPoint ();
        Imgproc.approxPolyDP (c2, aCurve, aDist, true);
        aCurve.convertTo (aC, CvType.CV_32S);
        polyContours.Add (aC);
        Imgproc.drawContours (inverted, polyContours, -1, color, 3);
    }
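    One point worth noting about the question: the output of approxPolyDP is already the list of corner points of the simplified contour. A hedged sketch of extracting them (reusing aCurve from the snippet; polyCollider is a hypothetical PolygonCollider2D reference, not from the asset):

    ```csharp
    // aCurve is the MatOfPoint2f filled by Imgproc.approxPolyDP above;
    // its entries are exactly the corners of the simplified polygon.
    Point[] corners = aCurve.toArray ();
    Vector2[] path = new Vector2[corners.Length];
    for (int p = 0; p < corners.Length; p++) {
        path [p] = new Vector2 ((float)corners [p].x, (float)corners [p].y);
    }
    // polyCollider.SetPath (0, path); // hypothetical PolygonCollider2D usage
    ```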
     
    Last edited: Apr 20, 2018
  25. PSpiroz

    PSpiroz

    Joined:
    Feb 15, 2015
    Posts:
    21
    I have achieved something that has limitations and not-so-good accuracy at the edges, but at least for my project it is quite acceptable.
    First of all, I purchased this package: Feature Detection and Texture2D processing.
    It is not the best solution, but it is more than decent ($11) considering the support. CrimsonRatStudio (the creators) are more than helpful and friendly. I believe the help they gave me (answers about the package/methods/classes as well as good practices, always arriving the same day, sometimes even within the hour you send the e-mail!) is worth more than the package itself (at least in my case... maybe they already hate me ;) ). I suggest you contact them, describe what you want, and they will tell you if and how you can use their package.
    Anyway, I will tell you about my experience and the problems with this package, which may apply to other packages too, depending on the pipeline they follow for blob detection:
    This package does image processing and blob recognition, returning blobs in a well-organized manner. The accuracy and speed are not everything one could hope for, but they (CrimsonRatStudio) are willing to speed it up and will help you with that too. They also provide algorithms to reduce size, filter results, and get the edge points.
    There are two crucial problems in this kind of project:
    1) Video input: Mine is a project that uses shadows on a white wall, so things are easier and clearer. For you, real-world video input has many colors, shadows, etc., and resolution matters a lot: this is a pixel-by-pixel comparison process, so the more pixels, the more processing. If you try this asset, I advise you to first experiment with the image processing using various filters, so that the video frame you feed to the blob detector has as little noise as possible.
    2) Collider handling: This is what might discourage you the most, and you need to put in a lot of effort to reach a good solution.
    Image processing does not happen on the video but on individual frames, so you have to check for blobs every frame. Every frame gives you a list of blobs (each blob has a list of the points it consists of, as Vector2 XY coordinates; these can also be only the edge points). So every frame you have a new set of blobs. I don't think this package has a way to keep tracking blobs and tell you that frame 1's blob no. 1 is the same as frame 2's blob no. 5, which changed position (the camera moved) and now has 3 more points. That is a problem you have to deal with. In my case I did not try to create a new object from the points and give it a collider; instead I created an EdgeCollider2D and set its points every frame. The problem is that whether you create and destroy colliders every frame or keep colliders and change their points, physics will sometimes break the chain of frame processing: one moment your "unreal 3D object", e.g. a car, is above the collider, and in the very next update the collider points appear at new coordinates and the car ends up trapped under the collider, inside a building for example, because one point of the collider was at (5,5) and in the next frame is at (8,8) without stepping through (5.5, 5.5), (6,6), ..., (7,7), ... where your car is. You have to overcome this with a solution that fits your problem: maybe change the script execution order, maybe find a way to pseudo-track blobs and transform the colliders instead of just setting points. I don't know...
    That's the story.
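    The per-frame collider update described above can be sketched roughly like this. It assumes the blob has already been reduced to an ordered array of edge points; the class name, the UpdateCollider entry point, and the source of blobPoints are hypothetical:

    ```csharp
    using UnityEngine;

    public class BlobEdgeCollider : MonoBehaviour
    {
        EdgeCollider2D edgeCollider;

        void Awake ()
        {
            edgeCollider = gameObject.AddComponent<EdgeCollider2D> ();
        }

        // Call once per processed frame with the blob's edge points (hypothetical source).
        public void UpdateCollider (Vector2[] blobPoints)
        {
            // Re-pointing one collider avoids creating/destroying colliders every frame,
            // but, as noted above, a large jump between frames can still let physics
            // trap a rigidbody on the wrong side of the collider.
            edgeCollider.points = blobPoints;
        }
    }
    ```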
     
  26. DanFlanaganCodes

    DanFlanaganCodes

    Joined:
    Jun 21, 2014
    Posts:
    17
    Any chance that SLAM will be implemented in the future?
     
  27. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    I think it is probably possible.
    Please look for an OpenCV Java example.
     
  28. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Currently there is no plan to implement the sfm module.
     
  29. czhao7654651793

    czhao7654651793

    Joined:
    Apr 25, 2018
    Posts:
    1
    Hi,

    Is there a way to detect more Aruco markers, not as board? The example can only detect one marker at a time.
     
  30. itgviet

    itgviet

    Joined:
    Dec 1, 2016
    Posts:
    1
    Hi,
    I'm building an application for the HoloLens device.
    I want to detect objects with YOLO via OpenCV, but the dnn module is not supported on UWP.
    Is there any other way to recognize objects on UWP?
    Thank you
     
  31. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    ArUcoWebCamTextureExample can detect multiple markers. However, in this example only one AR object can be displayed.
    ARUcoWebCamTexture.png
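    For reference, a hedged sketch of detecting every marker in a frame with the Aruco contrib module, using the Java-style API as exposed by the asset (exact names may differ slightly between asset versions; the class name and dictionary choice are illustrative):

    ```csharp
    using System.Collections.Generic;
    using UnityEngine;
    using OpenCVForUnity;

    public static class ArucoMultiDetect
    {
        public static void DetectAll (Mat rgbMat)
        {
            Dictionary dictionary = Aruco.getPredefinedDictionary (Aruco.DICT_6X6_250);
            List<Mat> corners = new List<Mat> ();
            Mat ids = new Mat ();

            // "ids" receives one entry per detected marker, so several markers
            // in the same frame are all returned at once.
            Aruco.detectMarkers (rgbMat, dictionary, corners, ids);

            if (corners.Count > 0) {
                Aruco.drawDetectedMarkers (rgbMat, corners, ids, new Scalar (0, 255, 0));
                for (int i = 0; i < ids.total (); i++) {
                    Debug.Log ("detected marker id: " + (int)ids.get (i, 0) [0]);
                }
            }
        }
    }
    ```

    Displaying one AR object per marker would then mean iterating the corners list and estimating a pose per marker, rather than using only the first entry.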
     
  32. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    I have not tried it yet, but HoloLens RS4 Preview seems to be able to use WindowsML.
     
  33. Zeeshan-Aslam

    Zeeshan-Aslam

    Joined:
    Oct 22, 2015
    Posts:
    6
    Hi,

    I am detecting pedestrians using OpenCV and want to place an Image where OpenCV detects a pedestrian.
    The MatOfRect class returns an array of OpenCV.Rect, but OpenCV.Rect values are different from Unity Canvas Rect values. I tried converting using Unity APIs (WorldToScreenPoint, TransformPoint, etc.) but all in vain.

    I have been searching for a week but could not find anything.
    How can I place a default Unity Image at the detected location?
     
  34. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    I think that this example will be helpful.
    https://www.dropbox.com/s/uu57w1vyv9ap7jq/2DTo3DExample.unitypackage?dl=0
    2DTo3D.PNG
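    The core difficulty in this kind of conversion is that OpenCV's origin is the top-left of the image with y growing downward, while Unity's screen origin is the bottom-left with y growing upward. A minimal hedged sketch of the axis flip, assuming the Mat and the screen have the same resolution (the class and method names are illustrative, not from the example package):

    ```csharp
    using UnityEngine;

    public static class RectConversion
    {
        // Convert the center of an OpenCV rect (top-left origin, y down) to a
        // Unity screen-space position (bottom-left origin, y up).
        public static Vector2 CvRectCenterToScreen (OpenCVForUnity.Rect r, int imageHeight)
        {
            float cx = r.x + r.width / 2f;
            float cy = r.y + r.height / 2f;
            return new Vector2 (cx, imageHeight - cy); // flip the y axis
        }
    }
    ```

    If the image and screen resolutions differ, the coordinates also need to be scaled by Screen.width / imageWidth and Screen.height / imageHeight before use.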
     
  35. Zeeshan-Aslam

    Zeeshan-Aslam

    Joined:
    Oct 22, 2015
    Posts:
    6
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Have you already tried YoloObjectDetectionWebCamTextureExample?
    YoloObjectDetectionWebCamTextureExample can detect "person".
     
    ina likes this.
  37. Zeeshan-Aslam

    Zeeshan-Aslam

    Joined:
    Oct 22, 2015
    Posts:
    6
    There is currently no such example as YoloObjectDetectionWebCamTextureExample in my copy.
    There is a class, HOGDescriptor, which I am using to detect moving objects, but it is not precise; often it does not detect an object when it is too close.
     
  38. Sejadis

    Sejadis

    Joined:
    Jun 24, 2015
    Posts:
    20
    I want to adjust the perspective transform of my image.
    After some calculations I'm calling

    Imgproc.warpPerspective(image, result, Imgproc.getPerspectiveTransform(cornerMat, sceneMat), result.size());


    where image is the original image, result is a black image of the same size, cornerMat holds the original points, and sceneMat contains the corrected points.

    getPerspectiveTransform (or specifically imgproc_Imgproc_getPerspectiveTransform_10(...)) returns a null pointer.
    Both Mats are non-null and have the same size; what could be the problem? Do the points/Mats need to be in some specific format?

    EDIT: OK, I think I got it; apparently my format really was bad. A working format is:
    srcRectMat.put (0, 0, tl.x, tl.y, tr.x, tr.y, br.x, br.y, bl.x, bl.y);
    dstRectMat.put (0, 0, 0.0, 0.0, quad.transform.localScale.x, 0.0, quad.transform.localScale.x, quad.transform.localScale.y, 0.0, quad.transform.localScale.y);
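    That fix works because getPerspectiveTransform expects exactly four 2-channel float points per Mat, in matching order. A hedged end-to-end sketch of the same idea using MatOfPoint2f, which packs the points in the required format automatically (the class and method names are illustrative):

    ```csharp
    using OpenCVForUnity;

    public static class WarpExample
    {
        // tl/tr/br/bl: the four source corners; outW/outH: output size.
        public static Mat Warp (Mat image, Point tl, Point tr, Point br, Point bl,
                                double outW, double outH)
        {
            // Source and destination corners must be listed in the same order.
            MatOfPoint2f src = new MatOfPoint2f (tl, tr, br, bl);
            MatOfPoint2f dst = new MatOfPoint2f (
                new Point (0, 0), new Point (outW, 0),
                new Point (outW, outH), new Point (0, outH));

            Mat m = Imgproc.getPerspectiveTransform (src, dst);
            Mat result = new Mat ();
            Imgproc.warpPerspective (image, result, m, new Size (outW, outH));
            return result;
        }
    }
    ```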
     
    Last edited: May 2, 2018
    cghci-kl likes this.
  39. FlashyGoblin

    FlashyGoblin

    Joined:
    Apr 1, 2017
    Posts:
    23
    I'm looking for a quick way to track the brightest spot of a webcam image, like a flashlight pointing at the camera. Does anyone have a working example of this?

    Thanks so much!
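    One common OpenCV recipe for this: convert to grayscale, blur, then use Core.minMaxLoc, whose maxLoc is the brightest pixel. A hedged sketch on top of an RGBA frame Mat (e.g. one converted from a WebCamTexture); the class name and the 41x41 blur kernel are illustrative choices:

    ```csharp
    using OpenCVForUnity;

    public static class BrightSpotTracker
    {
        // Returns the location of the brightest spot in an RGBA frame Mat.
        public static Point FindBrightestSpot (Mat rgbaMat)
        {
            Mat gray = new Mat ();
            Imgproc.cvtColor (rgbaMat, gray, Imgproc.COLOR_RGBA2GRAY);
            // Blurring suppresses isolated hot pixels so a flashlight blob wins.
            Imgproc.GaussianBlur (gray, gray, new Size (41, 41), 0);
            Core.MinMaxLocResult result = Core.minMaxLoc (gray);
            gray.Dispose ();
            return result.maxLoc; // pixel coordinates, top-left origin
        }
    }
    ```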
     
  40. tatavarthitarun

    tatavarthitarun

    Joined:
    Dec 26, 2017
    Posts:
    1
    Hey, I'm also trying to achieve the same thing. Did you get it working? Can you guide me on how to do it?
     
  41. Sayugo

    Sayugo

    Joined:
    Aug 23, 2016
    Posts:
    6
    Hi, I just want to know the difference between the standard webcamtexture to mat approach and postrender vuforia. Which one is better? Thanks in advance.
     
  42. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
    Is this included with any release? I notice the version updates indicate this, but I can't find the file in the asset.
     
  43. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    What does "postrender vuforia" mean?
     
  44. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    YoloObjectDetectionWebCamTextureExample is included with OpenCVForUnity version 2.2.4 or later.
     
  45. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
  46. Sayugo

    Sayugo

    Joined:
    Aug 23, 2016
    Posts:
    6
    You have an example project that combines OpenCV with Vuforia, right? I just want to know why you did that. Do you get better results for some features when combining OpenCV with Vuforia, compared with a plain WebCamTexture?
     
  47. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,566
    Thank you very much for reporting.
    Download the files from the following links and rename them to "tiny-yolo.cfg" and "tiny-yolo.weights".
    https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov2-tiny.cfg
    https://pjreddie.com/media/files/yolov2-tiny.weights
     
  48. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
  49. ferretnt

    ferretnt

    Joined:
    Apr 10, 2012
    Posts:
    412
    It did seem to work for me with a clean download of the latest OpenCVForUnity. The only thing to be careful of is that you need to move the whole StreamingAssets folder to the root of your project, or else it won't find those files.

    (I'd include a screenshot, but I'm still not actually sure how to attach a screenshot in the forums...)
     
  50. OSagioma

    OSagioma

    Joined:
    May 12, 2015
    Posts:
    13
    The WebGL build of the markerless AR example doesn't work. I get the following error:

    Code (JavaScript):
    An error occurred running the Unity content on this page. See your browser JavaScript console for more info. The error was:
    uncaught exception: abort(-1) at jsStackTrace (public.asm.framework.unityweb:2:27619)
    stackTrace (public.asm.framework.unityweb:2:27790)
    abort (public.asm.framework.unityweb:4:64755)
    __ZNSt3__112basic_stringIcNS_11char_traitsIcEENS_9allocatorIcEEE6__initEPKcj (public.asm.framework.unityweb:2:305855)
    jHa (public.asm.code.unityweb:24:1)
    func (public.asm.framework.unityweb:2:35140)
    callRuntimeCallbacks (public.asm.framework.unityweb:2:30414)
    ensureInitRuntime (public.asm.framework.unityweb:2:30914)
    doRun (public.asm.framework.unityweb:4:63693)
    run (public.asm.framework.unityweb:4:64009)
    runCaller (public.asm.framework.unityweb:4:62637)
    removeRunDependency (public.asm.framework.unityweb:2:34585)
    UnityLoader["0ce12ea2c1ed777eb05ccf5b361357b2"]/</unityFileSystemInit</<@blob:null/9ca58685-ffb4-4d73-a6ab-b2525db6c0fe:2:357
    doCallback (public.asm.framework.unityweb:2:145714)
    done (public.asm.framework.unityweb:2:145852)
    reconcile (public.asm.framework.unityweb:2:128601)
    UnityLoader["0ce12ea2c1ed777eb05ccf5b361357b2"]/syncfs/</<@blob:null/9ca58685-ffb4-4d73-a6ab-b2525db6c0fe:2:124595 (public.asm.framework.unityweb:2:126426)

    If this abort() is unexpected, build with -s ASSERTIONS=1 which can give more information.
    20. If this abort() is unexpected, build with -s ASSERTIONS=1 which can give more information.