[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    9
I edited your Feature2DExample (see code https://pastebin.com/HBTSPDfb) and found that if I change the DescriptorMatcher type (line 58) from BRUTEFORCE_HAMMINGLUT (everything works fine) to FLANNBASED, then I get the Mat error from above.
     
  2. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    The ORB algorithm does not seem to support FLANNBASED.
    https://docs.opencv.org/master/dc/dc3/tutorial_py_matcher.html
For BF matcher, first we have to create the BFMatcher object using cv.BFMatcher(). It takes two optional params. First one is normType. It specifies the distance measurement to be used. By default, it is cv.NORM_L2. It is good for SIFT, SURF etc. (cv.NORM_L1 is also there). For binary string based descriptors like ORB, BRIEF, BRISK etc., cv.NORM_HAMMING should be used, which uses Hamming distance as measurement. If ORB is using WTA_K == 3 or 4, cv.NORM_HAMMING2 should be used.

    For FLANN based matcher, we need to pass two dictionaries which specifies the algorithm to be used, its related parameters etc. First one is IndexParams. For various algorithms, the information to be passed is explained in FLANN docs. As a summary, for algorithms like SIFT, SURF etc.
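    For what it's worth, a common workaround (a minimal sketch, not from the asset's examples; descriptors1/descriptors2 are assumed to be ORB output Mats) is to convert the binary descriptors to CV_32F so FLANN's default KD-tree index can consume them, although BRUTEFORCE_HAMMING usually remains the better fit for ORB:

    Code (CSharp):
    // Hypothetical workaround: FLANN's default index expects float descriptors,
    // so convert ORB's CV_8U descriptors to CV_32F before matching.
    Mat descriptors1_32f = new Mat();
    Mat descriptors2_32f = new Mat();
    descriptors1.convertTo(descriptors1_32f, CvType.CV_32F);
    descriptors2.convertTo(descriptors2_32f, CvType.CV_32F);

    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(descriptors1_32f, descriptors2_32f, matches);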
     
  3. dorukeker

    dorukeker

    Joined:
    Dec 6, 2016
    Posts:
    31
    Hello All,

    First of all, thank you for this asset and the prompt replies on this forum.

    TL;DR
    We have a webcam in our setup. We want to place a physical image/marker in front of that webcam and calibrate the Unity camera with the same values, so the CG graphics align correctly with the ground the webcam is seeing. How can we apply the camera calibration from OpenCV to a Unity camera?

    More Details
    I understand what all the examples are doing, but cannot find an explanation showing how to combine them for our purpose. I looked into the "Camera Calibration" example and the "Markerless AR" examples.

    So far I have come up with the following flow. Can someone tell me if this is the way to go, or are there easier ways?

    Here is my understanding:
    The camera has three main properties we want to match to the real-life camera:
    1) FoV
    2) Rotation
    3) Position (in our case distance from the marker)

    FoV
    The camera calibration sample spits out an XML with the OpenCV version of the camera matrix (3x3). My understanding is we can use a formula like this (https://blog.noctua-software.com/opencv-opengl-projection-matrix.html) to convert it to a 4x4 OpenGL projection matrix, then either feed this to the Unity camera or use it to calculate the FoV (which in Unity's case is the vertical FoV).
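    For the FoV step specifically, here is a minimal sketch, assuming camMatrix and imageHeight hold the values loaded from the calibration XML; Unity expects the vertical FoV in degrees:

    Code (CSharp):
    // fy sits at row 1, column 1 of the 3x3 camera matrix; the standard pinhole
    // relation then gives the vertical field of view directly.
    double fy = camMatrix.get(1, 1)[0];
    float fovY = 2f * Mathf.Atan((float)(imageHeight / (2.0 * fy))) * Mathf.Rad2Deg;
    Camera.main.fieldOfView = fovY;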

    Rotation
    We can spin off from the MarkerlessAR example: place the marker on the flat ground, and when the AR object is placed on the marker we can get its rotation and back-calculate the camera's rotation.

    Position
    After the first two are measured and fixed, we can gradually move the camera along its own -z direction until the real-life object is the same size as the game object in screen space.
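    For rotation and position together, note that solvePnP already returns the full marker pose in camera space, which can then be inverted to place the camera relative to the marker. A rough sketch, where objectPoints, imagePoints, camMatrix and distCoeffs are assumptions for your detected marker data and calibration:

    Code (CSharp):
    // solvePnP gives the marker pose in camera coordinates
    // (rvec = rotation vector, tvec = translation).
    Mat rvec = new Mat();
    Mat tvec = new Mat();
    Calib3d.solvePnP(objectPoints, imagePoints, camMatrix, distCoeffs, rvec, tvec);

    // Rodrigues expands the rotation vector into a 3x3 matrix; inverting the
    // resulting [R|t] transform yields the camera pose in marker coordinates.
    // Remember to flip handedness (OpenCV is right-handed, Unity left-handed)
    // before applying it to the Unity camera.
    Mat rotMat = new Mat();
    Calib3d.Rodrigues(rvec, rotMat);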

    These are the steps I have come up with so far. Are they the correct way to go, or is there a simpler way?

    Any help is much appreciated.
    Cheers,
    Doruk

    PS In addition to this forum I went through the following links
    http://xdpixel.com/decoding-a-projection-matrix/
    https://blog.noctua-software.com/opencv-opengl-projection-matrix.html
    https://answers.opencv.org/question...-principal-point-to-be-inside-the-image-size/
    https://forum.unity.com/threads/how...ion-to-set-physical-camera-parameters.704120/
     
  4. Labecki

    Labecki

    Joined:
    Apr 14, 2015
    Posts:
    16
    Has anyone else had trouble with the Aruco Calibration Example?

    I have been using a 5x7 ChArUco board to create calibration files, but when I use them the AR cubes end up very far away from the markers. I have tried many times, but the results have always been very poor.

    I noticed that the "dummy CameraParameters" in the WebCamTexture example work quite well with the default 640x480 width and height, but not so well with other dimensions. I wonder if there might be other (wider) resolutions that will also work?
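    For reference, the dummy parameters in the examples appear to follow the usual pinhole defaults, which scale to any resolution. A sketch (the 1280x720 size is hypothetical):

    Code (CSharp):
    // Fabricated intrinsics: focal length roughly equal to the image width,
    // principal point at the image center.
    int width = 1280, height = 720; // hypothetical capture size
    Mat camMatrix = new Mat(3, 3, CvType.CV_64FC1);
    camMatrix.put(0, 0,
        width, 0, width / 2.0d,
        0, width, height / 2.0d,
        0, 0, 1);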
     
    Last edited: Feb 14, 2020
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    In my experience as well, calibration using a ChArUco board is often less accurate than using a chessboard.
    I don't know if this is an OpenCV issue.
     
    Labecki likes this.
  6. kotsopoulos

    kotsopoulos

    Joined:
    Nov 6, 2014
    Posts:
    20
  7. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    559
    Hi @EnoxSoftware

    Feature request. My StreamingAssets folder is getting quite messy. Would it be possible to move all OpenCV related streaming assets into a designated sub folder? It would also make it easier to upgrade version.

    Thanks for all the work you put into this.
     
    EnoxSoftware likes this.
  8. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Your understanding is correct.
    But I also don't fully understand the conversion from OpenCV camera parameters to Unity camera parameters. So there may be a simpler way.
     
  9. kotsopoulos

    kotsopoulos

    Joined:
    Nov 6, 2014
    Posts:
    20
    Hi @EnoxSoftware, I have posted in this forum 3 times and you have not answered yet, while responding to others.
    Where should I go to get help with your asset?
     
  10. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Sorry for my late reply.
    Could you send me the code you tested?
    https://enoxsoftware.com/opencvforunity/contact/technical-inquiry/
     
  11. kotsopoulos

    kotsopoulos

    Joined:
    Nov 6, 2014
    Posts:
    20
    Hi @EnoxSoftware, I have already sent it via the technical inquiry form,
    but I will send it again, never mind.
     
  12. kotsopoulos

    kotsopoulos

    Joined:
    Nov 6, 2014
    Posts:
    20
  13. kotsopoulos

    kotsopoulos

    Joined:
    Nov 6, 2014
    Posts:
    20
    Hi @EnoxSoftware
    I trained a custom MobileNet-SSD object detection model, and because the fps in my scene was dropping I tried something like the AsynchronousFaceDetection scene in your asset.
    Am I doing something wrong?
    Code (CSharp):
    1. #if !(PLATFORM_LUMIN && !UNITY_EDITOR)
    2.  
    3. #if !UNITY_WSA_10_0
    4.  
    5. using System;
    6. using System.Threading;
    7. using System.Threading.Tasks;
    8. using System.Collections;
    9. using System.Collections.Generic;
    10. using System.Linq;
    11. using UnityEngine;
    12. using UnityEngine.UI;
    13. using UnityEngine.SceneManagement;
    14. using OpenCVForUnity.CoreModule;
    15. using OpenCVForUnity.DnnModule;
    16. using OpenCVForUnity.ImgprocModule;
    17. using OpenCVForUnity.UnityUtils;
    18. using OpenCVForUnity.UnityUtils.Helper;
    19.  
    20. namespace OpenCVForUnityExample
    21. {
    22.     /// <summary>
    23.     /// Dnn ObjectDetection Example
    24.     /// Referring to https://github.com/opencv/opencv/blob/master/samples/dnn/object_detection.cpp.
    25.     /// </summary>
    26.     [RequireComponent(typeof(WebCamTextureToMatHelper))]
    27.     public class AsynchronousDetection : MonoBehaviour
    28.     {
    29.  
    30.         [TooltipAttribute("Path to a binary file of model contains trained weights. It could be a file with extensions .caffemodel (Caffe), .pb (TensorFlow), .t7 or .net (Torch), .weights (Darknet).")]
    31.         public string model;
    32.  
    33.         [TooltipAttribute("Path to a text file of model contains network configuration. It could be a file with extensions .prototxt (Caffe), .pbtxt (TensorFlow), .cfg (Darknet).")]
    34.         public string config;
    35.  
    36.         [TooltipAttribute("Optional path to a text file with names of classes to label detected objects.")]
    37.         public string classes;
    38.  
    39.         [TooltipAttribute("Optional list of classes to label detected objects.")]
    40.         public List<string> classesList;
    41.  
    42.         [TooltipAttribute("Confidence threshold.")]
    43.         public float confThreshold;
    44.  
    45.         [TooltipAttribute("Preprocess input image by multiplying on a scale factor.")]
    46.         public float scale;
    47.  
    48.         [TooltipAttribute("Preprocess input image by subtracting mean values. Mean values should be in BGR order and delimited by spaces.")]
    49.         public Scalar mean;
    50.  
    51.         [TooltipAttribute("Indicate that model works with RGB input images instead BGR ones.")]
    52.         public bool swapRB;
    53.  
    54.         [TooltipAttribute("Preprocess input image by resizing to a specific width.")]
    55.         public int inpWidth;
    56.  
    57.         [TooltipAttribute("Preprocess input image by resizing to a specific height.")]
    58.         public int inpHeight;
    59.  
    60.         [SerializeField] private Text Display;
    61.  
    62.         Texture2D texture;
    63.        
    64.         WebCamTextureToMatHelper webCamTextureToMatHelper;
    65.        
    66.         Mat bgrMat;
    67.         Mat rgbaMat;
    68.  
    69.         Net net;
    70.  
    71.         FpsMonitor fpsMonitor;
    72.  
    73.         List<string> classNames;
    74.         List<string> outBlobNames;
    75.         List<string> outBlobTypes;
    76.  
    77.         string classes_filepath;
    78.         string config_filepath;
    79.         string model_filepath;
    80.  
    81.         private IEnumerator renderThreadCoroutine;
    82.  
    83.         Mat blob;
    84.         Mat bgrMat4Thread;
    85.         Net net4Thread;
    86.         List<string> outBlobNames4Thread;
    87.         List<string> outBlobTypes4Thread;
    88.         List<Mat> outs4Thread;// = new List<Mat>();
    89.         System.Object sync = new System.Object();
    90.  
    91.         bool m_shouldFreezeFrame = false;
    92.  
    93.         bool shouldFreezeFrame
    94.         {
    95.             get
    96.             {
    97.                 lock (sync)
    98.                     return m_shouldFreezeFrame;
    99.             }
    100.             set
    101.             {
    102.                 lock (sync)
    103.                     m_shouldFreezeFrame = value;
    104.             }
    105.         }
    106.  
    107.         bool _isThreadRunning = false;
    108.  
    109.         bool isThreadRunning
    110.         {
    111.             get
    112.             {
    113.                 lock (sync)
    114.                     return _isThreadRunning;
    115.             }
    116.             set
    117.             {
    118.                 lock (sync)
    119.                     _isThreadRunning = value;
    120.             }
    121.         }
    122.  
    123.         bool _shouldStopThread = false;
    124.  
    125.         bool shouldStopThread
    126.         {
    127.             get
    128.             {
    129.                 lock (sync)
    130.                     return _shouldStopThread;
    131.             }
    132.             set
    133.             {
    134.                 lock (sync)
    135.                     _shouldStopThread = value;
    136.             }
    137.         }
    138.  
    139.         bool _shouldDetectInMultiThread = false;
    140.  
    141.         bool shouldDetectInMultiThread
    142.         {
    143.             get
    144.             {
    145.                 lock (sync)
    146.                     return _shouldDetectInMultiThread;
    147.             }
    148.             set
    149.             {
    150.                 lock (sync)
    151.                     _shouldDetectInMultiThread = value;
    152.             }
    153.         }
    154.  
    155.         bool _didUpdateTheDetectionResult = false;
    156.  
    157.         bool didUpdateTheDetectionResult
    158.         {
    159.             get
    160.             {
    161.                 lock (sync)
    162.                     return _didUpdateTheDetectionResult;
    163.             }
    164.             set
    165.             {
    166.                 lock (sync)
    167.                     _didUpdateTheDetectionResult = value;
    168.             }
    169.         }
    170.  
    171. #if UNITY_WEBGL && !UNITY_EDITOR
    172.         IEnumerator getFilePath_Coroutine;
    173. #endif
    174.  
    175.         // Use this for initialization
    176.         void Start()
    177.         {
    178.             renderThreadCoroutine = CallAtEndOfFrames();
    179.  
    180.             fpsMonitor = GetComponent<FpsMonitor>();
    181.  
    182.             webCamTextureToMatHelper = gameObject.GetComponent<WebCamTextureToMatHelper>();
    183.  
    184. #if UNITY_WEBGL && !UNITY_EDITOR
    185.             getFilePath_Coroutine = GetFilePath();
    186.             StartCoroutine(getFilePath_Coroutine);
    187. #else
    188.             if (!string.IsNullOrEmpty(classes)) classes_filepath = Utils.getFilePath("dnn/" + classes);
    189.             if (!string.IsNullOrEmpty(config)) config_filepath = Utils.getFilePath("dnn/" + config);
    190.             if (!string.IsNullOrEmpty(model)) model_filepath = Utils.getFilePath("dnn/" + model);
    191.             Run();
    192. #endif
    193.         }
    194.  
    195. #if UNITY_WEBGL && !UNITY_EDITOR
    196.         private IEnumerator GetFilePath()
    197.         {
    198.             if (!string.IsNullOrEmpty(classes))
    199.             {
    200.                 var getFilePathAsync_0_Coroutine = Utils.getFilePathAsync("dnn/" + classes, (result) =>
    201.                 {
    202.                     classes_filepath = result;
    203.                 });
    204.                 yield return getFilePathAsync_0_Coroutine;
    205.             }
    206.  
    207.             if (!string.IsNullOrEmpty(config))
    208.             {
    209.                 var getFilePathAsync_1_Coroutine = Utils.getFilePathAsync("dnn/" + config, (result) =>
    210.                 {
    211.                     config_filepath = result;
    212.                 });
    213.                 yield return getFilePathAsync_1_Coroutine;
    214.             }
    215.  
    216.             if (!string.IsNullOrEmpty(model))
    217.             {
    218.                 var getFilePathAsync_2_Coroutine = Utils.getFilePathAsync("dnn/" + model, (result) =>
    219.                 {
    220.                     model_filepath = result;
    221.                 });
    222.                 yield return getFilePathAsync_2_Coroutine;
    223.             }
    224.  
    225.             getFilePath_Coroutine = null;
    226.  
    227.             Run();
    228.         }
    229. #endif
    230.  
    231.         // Use this for initialization
    232.         void Run()
    233.         {
    234.             //if true, The error log of the Native side OpenCV will be displayed on the Unity Editor Console.
    235.             Utils.setDebugMode(true);
    236.  
    237.             if (!string.IsNullOrEmpty(classes))
    238.             {
    239.                 classNames = readClassNames(classes_filepath);
    240.                 if (classNames == null)
    241.                 {
    242.                     Debug.LogError(classes_filepath + " is not loaded. Please see \"StreamingAssets/dnn/setup_dnn_module.pdf\". ");
    243.                 }
    244.             }
    245.             else if (classesList.Count > 0)
    246.             {
    247.                 classNames = classesList;
    248.             }
    249.  
    250.             if (string.IsNullOrEmpty(config_filepath) || string.IsNullOrEmpty(model_filepath))
    251.             {
    252.                 Debug.LogError(config_filepath + " or " + model_filepath + " is not loaded. Please see \"StreamingAssets/dnn/setup_dnn_module.pdf\". ");
    253.             }
    254.             else
    255.             {
    256.                 net4Thread = Dnn.readNet(model_filepath, config_filepath);
    257.  
    258.                 outBlobNames4Thread = getOutputsNames(net4Thread);
    259.  
    260.                 outBlobTypes4Thread = getOutputsTypes(net4Thread);
    261.             }
    262.  
    263.  
    264. #if UNITY_ANDROID && !UNITY_EDITOR
    265.             // Avoids the front camera low light issue that occurs in only some Android devices (e.g. Google Pixel, Pixel2).
    266.             webCamTextureToMatHelper.avoidAndroidFrontCameraLowLightIssue = true;
    267. #endif
    268.             webCamTextureToMatHelper.Initialize();
    269.         }
    270.  
    271.  
    272.         public void OnWebCamTextureToMatHelperInitialized()
    273.         {
    274.             Debug.Log("OnWebCamTextureToMatHelperInitialized");
    275.  
    276.             Mat webCamTextureMat = webCamTextureToMatHelper.GetMat();
    277.  
    278.  
    279.             texture = new Texture2D(webCamTextureMat.cols(), webCamTextureMat.rows(), TextureFormat.RGBA32, false);
    280.  
    281.             gameObject.GetComponent<Renderer>().material.mainTexture = texture;
    282.  
    283.             gameObject.transform.localScale = new Vector3(webCamTextureMat.cols(), webCamTextureMat.rows(), 1);
    284.             Debug.Log("Screen.width " + Screen.width + " Screen.height " + Screen.height + " Screen.orientation " + Screen.orientation);
    285.  
    286.             if (fpsMonitor != null)
    287.             {
    288.                 fpsMonitor.Add("width", webCamTextureMat.width().ToString());
    289.                 fpsMonitor.Add("height", webCamTextureMat.height().ToString());
    290.                 fpsMonitor.Add("orientation", Screen.orientation.ToString());
    291.             }
    292.  
    293.  
    294.             float width = webCamTextureMat.width();
    295.             float height = webCamTextureMat.height();
    296.  
    297.             float widthScale = (float)Screen.width / width;
    298.             float heightScale = (float)Screen.height / height;
    299.             if (widthScale < heightScale)
    300.             {
    301.                 Camera.main.orthographicSize = (width * (float)Screen.height / (float)Screen.width) / 2;
    302.             }
    303.             else
    304.             {
    305.                 Camera.main.orthographicSize = height / 2;
    306.             }
    307.  
    308.  
    309.             bgrMat = new Mat(webCamTextureMat.rows(), webCamTextureMat.cols(), CvType.CV_8UC3);
    310.             //StartCoroutine(renderThreadCoroutine);
    311.             InitThread();
    312.         }
    313.  
    314.         private IEnumerator CallAtEndOfFrames()
    315.         {
    316.             while (true)
    317.             {
    318.                 // Wait until all frame rendering is done
    319.                 yield return new WaitForEndOfFrame();
    320.  
    321.                 if (rgbaMat != null)
    322.                 {
    323.                     Utils.matToTextureInRenderThread(rgbaMat, texture);
    324.                 }
    325.             }
    326.         }
    327.  
    328.         public void OnWebCamTextureToMatHelperDisposed()
    329.         {
    330.             Debug.Log("OnWebCamTextureToMatHelperDisposed");
    331.  
    332.             if (bgrMat != null)
    333.                 bgrMat.Dispose();
    334.  
    335.             if (net != null)
    336.                 net.Dispose();
    337.  
    338.             if (texture != null)
    339.             {
    340.                 Texture2D.Destroy(texture);
    341.                 texture = null;
    342.             }
    343.         }
    344.  
    345.         public void OnWebCamTextureToMatHelperErrorOccurred(WebCamTextureToMatHelper.ErrorCode errorCode)
    346.         {
    347.             Debug.Log("OnWebCamTextureToMatHelperErrorOccurred " + errorCode);
    348.         }
    349.  
    350.         public void OnChangeCameraButtonClick()
    351.         {
    352.             webCamTextureToMatHelper.requestedIsFrontFacing = !webCamTextureToMatHelper.IsFrontFacing();
    353.         }
    354.  
    355.         // Update is called once per frame
    356.         void Update()
    357.         {
    358.             if (webCamTextureToMatHelper.IsPlaying() && webCamTextureToMatHelper.DidUpdateThisFrame())
    359.             {
    360.                 rgbaMat = webCamTextureToMatHelper.GetMat();
    361.  
    362.                 if (net4Thread == null)
    363.                 {
    364.                     Imgproc.putText(rgbaMat, "model file is not loaded.", new Point(5, rgbaMat.rows() - 30), Imgproc.FONT_HERSHEY_SIMPLEX, 0.7, new Scalar(255, 255, 255, 255), 2, Imgproc.LINE_AA, false);
    365.                     Imgproc.putText(rgbaMat, "Please read console message.", new Point(5, rgbaMat.rows() - 10), Imgproc.FONT_HERSHEY_SIMPLEX, 0.7, new Scalar(255, 255, 255, 255), 2, Imgproc.LINE_AA, false);
    366.                 }
    367.                 else
    368.                 {
    369.                     Imgproc.cvtColor(rgbaMat, bgrMat, Imgproc.COLOR_RGBA2BGR);
    370.  
    371.                     if (!shouldDetectInMultiThread)
    372.                     {
    373.                         bgrMat.copyTo(bgrMat4Thread);
    374.  
    375.                         shouldDetectInMultiThread = true;
    376.                     }
    377.  
    378.                     if (didUpdateTheDetectionResult)
    379.                     {
    380.                         didUpdateTheDetectionResult = false;
    381.  
    382.                         Mat boxes4Thread = outs4Thread[0];
    383.                         float[] data4Thread = new float[boxes4Thread.size(3)];
    384.                         boxes4Thread = boxes4Thread.reshape(1, (int)boxes4Thread.total() / boxes4Thread.size(3));
    385.  
    386.                         ProcessMat(rgbaMat, boxes4Thread, net4Thread, data4Thread);
    387.  
    388.                         if (shouldFreezeFrame)
    389.                             Thread.Sleep(200);
    390.  
    391.                         /*for (int i = 0; i < outs4Thread.Count; i++)
    392.                         {
    393.                             outs4Thread[i].Dispose();
    394.                         }*/
    395.                         //blob.Dispose();
    396.                     }
    397.                 }
    398.  
    399.                 Utils.fastMatToTexture2D(rgbaMat, texture);
    400.             }
    401.         }
    402.  
    403.         private void InitThread()
    404.         {
    405.             StopThread();
    406.  
    407.             bgrMat4Thread = new Mat();
    408.  
    409.             if (net4Thread.empty())
    410.             {
    411.                 Debug.LogError("net4Thread file is not loaded. Please copy from “OpenCVForUnity/StreamingAssets/” to “Assets/StreamingAssets/” folder. ");
    412.             }
    413.  
    414.             shouldDetectInMultiThread = false;
    415.  
    416. #if !UNITY_WEBGL
    417.             StartThread(ThreadWorker);
    418. #else
    419.             StartCoroutine ("ThreadWorker");
    420. #endif
    421.         }
    422.  
    423.         private void StartThread(Action action)
    424.         {
    425.             shouldStopThread = false;
    426.  
    427. #if UNITY_METRO && NETFX_CORE
    428.             System.Threading.Tasks.Task.Run(() => action());
    429. #elif UNITY_METRO
    430.             action.BeginInvoke(ar => action.EndInvoke(ar), null);
    431. #else
    432.             new Thread(() =>
    433.             {
    434.                 action();
    435.             }).Start();
    436.             //System.Threading.Tasks.Task.Run(() => action());
    437.             //ThreadPool.QueueUserWorkItem(_ => action());
    438. #endif
    439.             Debug.Log("Thread Start");
    440.         }
    441.  
    442.         private void StopThread()
    443.         {
    444.             if (!isThreadRunning)
    445.                 return;
    446.  
    447.             shouldStopThread = true;
    448.  
    449.             while (isThreadRunning)
    450.             {
    451.                 //Wait threading stop
    452.             }
    453.             Debug.Log("Thread Stop");
    454.         }
    455.  
    456. #if !UNITY_WEBGL
    457.         private void ThreadWorker()
    458.         {
    459.             isThreadRunning = true;
    460.  
    461.             while (!shouldStopThread)
    462.             {
    463.                 if (!shouldDetectInMultiThread)
    464.                     continue;
    465.  
    466.                 Detect();
    467.  
    468.                 shouldDetectInMultiThread = false;
    469.                 didUpdateTheDetectionResult = true;
    470.             }
    471.  
    472.             isThreadRunning = false;
    473.         }
    474.  
    475. #else
    476.         private IEnumerator ThreadWorker ()
    477.         {
    478.             while (true) {
    479.                 while (!shouldDetectInMultiThread) {
    480.                     yield return null;
    481.                 }
    482.  
    483.                 Detect ();
    484.  
    485.                 shouldDetectInMultiThread = false;
    486.                 didUpdateTheDetectionResult = true;
    487.             }
    488.         }
    489. #endif
    490.  
    491.         void Detect()
    492.         {
    493.             Size inpSize = new Size(inpWidth > 0 ? inpWidth : bgrMat4Thread.cols(), inpHeight > 0 ? inpHeight : bgrMat4Thread.rows());
    494.             blob = Dnn.blobFromImage(bgrMat4Thread, scale, inpSize, new Scalar(0, 0, 0, 0), swapRB, false);
    495.  
    496.             net4Thread.setInput(blob);
    497.  
    498.             outs4Thread = new List<Mat>();
    499.             net4Thread.forward(outs4Thread, outBlobNames4Thread);
    500.         }
    501.  
    502.         void ProcessMat(Mat frame, Mat boxes, Net net, float[] data)
    503.         {
    504.             int? class_id = null;
    505.             List<int> classIdsList = new List<int>();
    506.             List<float> confidencesList = new List<float>();
    507.             List<OpenCVForUnity.CoreModule.Rect> boxesList = new List<OpenCVForUnity.CoreModule.Rect>();
    508.             for (int i = 0; i < boxes.rows(); i++)
    509.             {
    510.  
    511.                 boxes.get(i, 0, data);
    512.  
    513.                 float score = data[2];
    514.  
    515.                 if (score > confThreshold)
    516.                 {
    517.                     class_id = (int)(data[1]);
    518.  
    519.                     if (class_id == null || class_id == 4)
    520.                         shouldFreezeFrame = false;
    521.                     else
    522.                         shouldFreezeFrame = true;
    523.  
    524.                     if (class_id == 4)
    525.                         continue;
    526.  
    527.                     float left = (float)(data[3] * frame.cols());
    528.                     float top = (float)(data[4] * frame.rows());
    529.                     float right = (float)(data[5] * frame.cols());
    530.                     float bottom = (float)(data[6] * frame.rows());
    531.  
    532.                     left = (int)Mathf.Max(0, Mathf.Min(left, frame.cols() - 1));
    533.                     top = (int)Mathf.Max(0, Mathf.Min(top, frame.rows() - 1));
    534.                     right = (int)Mathf.Max(0, Mathf.Min(right, frame.cols() - 1));
    535.                     bottom = (int)Mathf.Max(0, Mathf.Min(bottom, frame.rows() - 1));
    536.                     int width = (int)right - (int)left + 1;
    537.                     int height = (int)bottom - (int)top + 1;
    538.  
    539.                     classIdsList.Add((int)(class_id) - 0);
    540.                     confidencesList.Add((float)score);
    541.                     boxesList.Add(new OpenCVForUnity.CoreModule.Rect((int)left, (int)top, (int)width, (int)height));
    542.  
    543.                     //draw boxes
    544.  
    545.                     Display.text = String.Format(classNames[(int)class_id] + " ( " + score * 100 +" ) ");
    546.                 }
    547.             }
    548.  
    549.             MatOfRect m_boxes = new MatOfRect();
    550.             m_boxes.fromList(boxesList);
    551.  
    552.             MatOfFloat confidences = new MatOfFloat();
    553.             confidences.fromList(confidencesList);
    554.  
    555.  
    556.             MatOfInt indices = new MatOfInt();
    557.             Dnn.NMSBoxes(m_boxes, confidences, confThreshold, 0.5f, indices);
    558.  
    559.             for (int i = 0; i < indices.total(); ++i)
    560.             {
    561.                 int idx = (int)indices.get(i, 0)[0];
    562.                 OpenCVForUnity.CoreModule.Rect box = boxesList[idx];
    563.                 drawPred(classIdsList[idx], confidencesList[idx], box.x, box.y,
    564.                     box.x + box.width, box.y + box.height, frame);
    565.             }
    566.             //boxes.Dispose();
    567.         }
    568.  
    569.         void OnDestroy()
    570.         {
    571.             webCamTextureToMatHelper.Dispose();
    572.  
    573.             if (net != null)
    574.                 net.Dispose();
    575.            
    576.  
    577.             Utils.setDebugMode(true);
    578.  
    579. #if UNITY_WEBGL && !UNITY_EDITOR
    580.             if (getFilePath_Coroutine != null)
    581.             {
    582.                 StopCoroutine(getFilePath_Coroutine);
    583.                 ((IDisposable)getFilePath_Coroutine).Dispose();
    584.             }
    585. #endif
    586.         }
    587.        
    588.         private List<string> readClassNames(string filename)
    589.         {
    590.             List<string> classNames = new List<string>();
    591.  
    592.             System.IO.StreamReader cReader = null;
    593.             try
    594.             {
    595.                 cReader = new System.IO.StreamReader(filename, System.Text.Encoding.Default);
    596.  
    597.                 while (cReader.Peek() >= 0)
    598.                 {
    599.                     string name = cReader.ReadLine();
    600.                     classNames.Add(name);
    601.                 }
    602.             }
    603.             catch (System.Exception ex)
    604.             {
    605.                 Debug.LogError(ex.Message);
    606.                 return null;
    607.             }
    608.             finally
    609.             {
    610.                 if (cReader != null)
    611.                     cReader.Close();
    612.             }
    613.  
    614.             return classNames;
    615.         }
    616.  
    617.         private void drawPred(int? classId, float conf, int left, int top, int right, int bottom, Mat frame)
    618.         {
    619.             Imgproc.rectangle(frame, new Point(left, top), new Point(right, bottom), new Scalar(0, 255, 0, 255), 2);
    620.  
    621.             string label = conf.ToString();
    622.             if (classNames != null && classNames.Count != 0)
    623.             {
    624.                 if (classId < (int)classNames.Count)
    625.                 {
    626.                     label = classNames[(int)classId] + ": " + label;
    627.                 }
    628.             }
    629.  
    630.             int[] baseLine = new int[1];
    631.             Size labelSize = Imgproc.getTextSize(label, Imgproc.FONT_HERSHEY_SIMPLEX, 0.5, 1, baseLine);
    632.  
    633.             top = Mathf.Max(top, (int)labelSize.height);
    634.             Imgproc.rectangle(frame, new Point(left, top - labelSize.height),
    635.                 new Point(left + labelSize.width, top + baseLine[0]), Scalar.all(255), Core.FILLED);
    636.             Imgproc.putText(frame, label, new Point(left, top), Imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 0, 0, 255));
    637.         }
    638.  
    639.         private List<string> getOutputsNames(Net net)
    640.         {
    641.             List<string> names = new List<string>();
    642.  
    643.  
    644.             MatOfInt outLayers = net.getUnconnectedOutLayers();
    645.             for (int i = 0; i < outLayers.total(); ++i)
    646.             {
    647.                 names.Add(net.getLayer(new DictValue((int)outLayers.get(i, 0)[0])).get_name());
    648.             }
    649.             outLayers.Dispose();
    650.  
    651.             return names;
    652.         }
    653.  
    654.         private List<string> getOutputsTypes(Net net)
    655.         {
    656.             List<string> types = new List<string>();
    657.  
    658.  
    659.             MatOfInt outLayers = net.getUnconnectedOutLayers();
    660.             for (int i = 0; i < outLayers.total(); ++i)
    661.             {
    662.                 types.Add(net.getLayer(new DictValue((int)outLayers.get(i, 0)[0])).get_type());
    663.             }
    664.             outLayers.Dispose();
    665.  
    666.             return types;
    667.         }
    668.     }
    669. }
    670. #endif
    671.  
    672. #endif
    My model: https://github.com/ctheodorak/DemoTflite/tree/master/MobilenetV1_frozen
    Sample photos: https://github.com/ctheodorak/DemoTflite/tree/master/Demo_Model
    Please confirm.
     
  14. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Could you try the attached file ( VRParkExample.zip )?

    Import VRParkExample.unitypackage into your project.
    vrpark_demo_setting.PNG
    Copy the model file to the "StreamingAssets / dnn" folder.
    vrpark_demo_stremingassets.PNG
    vrpark_demo.PNG
     

    Attached Files:

  15. kotsopoulos

    kotsopoulos

    Joined:
    Nov 6, 2014
    Posts:
    20
    As for the asynchronous detection,
    am I doing something wrong?
     
  16. laurentAA

    laurentAA

    Joined:
    Mar 7, 2019
    Posts:
    1
    Hello @EnoxSoftware,
    We want to use face detection in our game. We added a script based on the FaceDetectionWebCamTextureExample sample scene, but our build size increased a lot, from 80 MB to 250 MB (iOS).

    These files seem to be the cause:
    Assets/OpenCVForUnity/Plugins/iOS/libopencvforunity.a
    Assets/OpenCVForUnity/Plugins/iOS/opencv2.framework/opencv

    Is there a solution for that?
    Thanks a lot for your precious help.
     
  17. spelafort

    spelafort

    Joined:
    May 8, 2017
    Posts:
    29
    hi @EnoxSoftware ,

    Just starting out with OpenCV, so apologies if this is a daft question. I want to use OpenCV for resizing/cropping. Basically I have a set of 512x512 textures and I'd like them to inherit the aspect ratio of the plane they're placed upon, so they would be cropped from the center but not stretched. I have a script in Unity that does this, but I find it's quite slow and was hoping the OpenCV solution would be faster.

    Here's my script:

    Code (CSharp):
    1.  
    2.     Texture2D CropImg()
    3.     {
    4.         double fx = meshGO.GetComponent<Renderer>().bounds.size.x/10;
    5.         double fy = meshGO.GetComponent<Renderer>().bounds.size.y/10;
    6.         Debug.Log(fx);
    7.         Debug.Log(fy);
    8.  
    9.         int sourceWidth = imgTexture2D.width;
    10.         int sourceHeight = imgTexture2D.height;
    11.         float sourceAspect = (float)sourceWidth / sourceHeight;
    12.         float targetAspect = (float)(fx / fy);
    13.  
    14.         Mat originalMat = new Mat(sourceWidth, sourceHeight, CvType.CV_8UC4);
    15.         Utils.texture2DToMat(imgTexture2D, originalMat);
    16.         Mat cropMat;
    17.  
    18.         if ( sourceAspect > 1)
    19.         {
    20.             var cropRect = new OpenCVForUnity.CoreModule.Rect(0, 0, sourceWidth, (int)(sourceHeight * (1 / targetAspect)));
    21.             cropMat = new Mat(originalMat, cropRect);
    22.         }else
    23.         {
    24.             var cropRect = new OpenCVForUnity.CoreModule.Rect(0, 0, (int)(sourceWidth*targetAspect), sourceHeight);
    25.             cropMat = new Mat(originalMat, cropRect);
    26.         }
    27.  
    28.         Texture2D textureFinal = new Texture2D(cropMat.rows(), cropMat.cols(), TextureFormat.RGBA32, false);
    29.         Utils.matToTexture2D(cropMat, textureFinal);
    30.         Debug.Log("NEW SIZE IS " + textureFinal.width + " AND " + textureFinal.height);
    31.         return textureFinal;
    32.     }
    33.  
    I thought this would be fairly basic, but I'm getting the following error:
    Code (CSharp):
    1. ArgumentException: The Texture2D object must have the same size.
    Clearly it's the conversion back to tex2d that's the problem, but I can't seem to figure out why. Any help or more general advice appreciated!
     
    Last edited: Feb 21, 2020
  18. guidosalimbeni

    guidosalimbeni

    Joined:
    Nov 10, 2017
    Posts:
    12
    Hi,
    I have a Mat of type CV_8UC3, height = 64 and width = 64, with values between 0 and 255 as usual for images. I would like to produce a new Mat of the same shape with the same 3 channels (RGB) but with the values scaled between -1 and +1. I then need to convert the resulting scaled Mat into a new Texture2D. I have tried different methods but I am getting strange results, and I really hope you can help with an example.

    this is my code so far:

    Code (CSharp):
    1. private Texture2D ToTexture2DAndResize(RenderTexture rTex, int W, int H)
    2.     {
    3.         Texture2D tex = new Texture2D(rTex.width, rTex.height, TextureFormat.RGB24, false); //RGBA32
    4.         RenderTexture.active = rTex;
    5.         tex.ReadPixels(new UnityEngine.Rect(0, 0, rTex.width, rTex.height), 0, 0);
    6.         tex.Apply();
    7.  
    8.         Mat imgMat = new Mat(tex.height, tex.width, CvType.CV_8UC3);
    9.         Utils.texture2DToMat(tex, imgMat);
    10.  
    11.         Size scaleSize = new Size(W, H);
    12.         Imgproc.resize(imgMat, imgMat, scaleSize, 0, 0, interpolation: Imgproc.INTER_AREA);
    13.         Debug.Log(imgMat.dump());
    14.  
    15.         Mat imgMatNormalised = new Mat(H, W, CvType.CV_64FC3);
    16.  
    17.         Core.normalize(imgMat, imgMatNormalised, -1, +1, Core.NORM_L1);
    18.  
    19.  
    20.         //Debug.Log(imgMatNormalised.dump()); // might want to use the mat and convert into a tensor?
    21.  
    22.         Texture2D resizedImg = new Texture2D(W, H, TextureFormat.RGB24, false);
    23.         Utils.matToTexture2D(imgMatNormalised, resizedImg);
    24.  
    25.  
    26.  
    27.         return resizedImg;
    28.  
    29.     }
    I also tried to subtract 127.5 and divide by 127.5, but without success... I believe it should be a fairly simple operation but I am stuck. Probably I am doing something wrong with the type format?
     
  19. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    9
    As Imgproc.Canny is slow on complex images, I searched for alternatives to detect edges. I found StructuredEdgeDetection, but I guess this is not yet implemented?
     
  20. Labecki

    Labecki

    Joined:
    Apr 14, 2015
    Posts:
    16
    (Sorry for the slow reply, I was out with the flu all week).
    I thought ChArUco boards were supposed to be more accurate than chessboards.
    I will try using a chessboard and see how that works.
     
    EnoxSoftware likes this.
  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    If you change the arguments to the Core.normalize method as follows, the Mat will be converted correctly.
    Code (CSharp):
    1. Core.normalize(imgMat, imgMatNormalised, -1, +1, Core.NORM_MINMAX, CvType.CV_64F);
    In Utils.matToTexture2D(), the input Mat object has to be of type 'CV_8UC4' (RGBA), 'CV_8UC3' (RGB) or 'CV_8UC1' (GRAY), and the Texture2D object must have the TextureFormat 'RGBA32', 'ARGB32', 'RGB24' or 'Alpha8'.
    However, in the case of CvType.CV_32F, it is possible to convert to Texture2D as in the next post.
    https://forum.unity.com/threads/released-opencv-for-unity.277080/page-34#post-3525849
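    If the goal is the (x - 127.5) / 127.5 scaling mentioned above, convertTo with a scale factor and offset is a simpler route than normalize. A sketch (note the float result still cannot be passed to Utils.matToTexture2D directly):

    Code (CSharp):
    // dst = src * (1 / 127.5) - 1 maps 0..255 to -1..+1 per channel.
    Mat scaledMat = new Mat();
    imgMat.convertTo(scaledMat, CvType.CV_32FC3, 1.0 / 127.5, -1.0);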
     
  22. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
  23. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197

    I created a new example based on your code.
    Could you try the attached example package? (It depends on the Example folder of OpenCVForUnity)

    Code (CSharp):
    1.  
    2.         Texture2D CropImg()
    3.         {
    4.             double fx = meshGO.GetComponent<Renderer>().bounds.size.x / 10;
    5.             double fy = meshGO.GetComponent<Renderer>().bounds.size.y / 10;
    6.             Debug.Log(fx);
    7.             Debug.Log(fy);
    8.  
    9.             int sourceWidth = imgTexture2D.width;
    10.             int sourceHeight = imgTexture2D.height;
    11.             float sourceAspect = (float)sourceWidth / sourceHeight;
    12.             float targetAspect = (float)(fx / fy);
    13.  
    14.             // In the OpenCV Mat, rows mean height and columns mean width.
    15.             Mat originalMat = new Mat(sourceHeight, sourceWidth, CvType.CV_8UC4);
    16.             Utils.texture2DToMat(imgTexture2D, originalMat);
    17.             Mat cropMat;
    18.  
    19.             // Crop from the center.
    20.             if (sourceAspect > targetAspect)
    21.             {
    22.                 int w = (int)(sourceWidth * targetAspect);
    23.                 int h = sourceHeight;
    24.                 int x = (sourceWidth - w) / 2;
    25.                 int y = (sourceHeight - h) / 2;
    26.  
    27.                 var cropRect = new OpenCVForUnity.CoreModule.Rect(x, y, w, h);
    28.                 cropMat = new Mat(originalMat, cropRect);
    29.             }
    30.             else
    31.             {
    32.                 int w = sourceWidth;
    33.                 int h = (int)(sourceHeight * (1 / targetAspect));
    34.                 int x = (sourceWidth - w) / 2;
    35.                 int y = (sourceHeight - h) / 2;
    36.  
    37.                 var cropRect = new OpenCVForUnity.CoreModule.Rect(x, y, w, h);
    38.                 cropMat = new Mat(originalMat, cropRect);
    39.             }
    40.  
    41.             Texture2D textureFinal = new Texture2D(cropMat.cols(), cropMat.rows(), TextureFormat.RGBA32, false);
    42.             Utils.matToTexture2D(cropMat, textureFinal);
    43.             Debug.Log("NEW SIZE IS " + textureFinal.width + " AND " + textureFinal.height);
    44.             return textureFinal;
    45.         }
    46.  
     

    Attached Files:

  24. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    9
  25. Labecki

    Labecki

    Joined:
    Apr 14, 2015
    Posts:
    16
    I tried calibrating with a chessboard a few times and, while the results were somewhat better than they were with the ChArUco board, both the position and rotation of the AR objects turn out to be insufficiently accurate.

    I noticed that, in the ArUco Calibration Example script, there is no way to specify the square length when calibrating with a chessboard (such a parameter is used for the ChArUco board, and that value is private for some reason). A video I saw on chessboard calibration with vanilla OpenCV claimed that the square length is important.
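    For reference, in plain OpenCV the square length enters the calibration through the object points. A minimal sketch of how they are typically generated, where squareSize, rows and cols are assumptions:

    Code (CSharp):
    // Inner-corner grid scaled by the physical square length; without this scale,
    // the calibrated translations come out in "squares" instead of real-world
    // units (rotation is unaffected).
    float squareSize = 0.025f; // hypothetical: 25 mm squares
    List<Point3> cornerList = new List<Point3>();
    for (int y = 0; y < rows; y++)
        for (int x = 0; x < cols; x++)
            cornerList.Add(new Point3(x * squareSize, y * squareSize, 0));
    MatOfPoint3f objectPoints = new MatOfPoint3f();
    objectPoints.fromList(cornerList);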

    Do you have any further suggestions?
     
    Last edited: Feb 25, 2020
  26. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    https://stackoverflow.com/questions...for-opencvs-structured-edge-detector/33318560
    Code (CSharp):
    1. StructuredEdgeDetection sed = Ximgproc.createStructuredEdgeDetection("path_to_model.yml.gz");
    2. sed.detectEdges(src, dst);
     
  27. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    9
    Is native C++ code really much faster than C#?

    I want to detect custom images, which are placed inside a black rectangle, so I looked at the aruco implementation (https://github.com/opencv/opencv_contrib/blob/master/modules/aruco/src/aruco.cpp) and rewrote some parts in C# (https://pastebin.com/Hp5k7xWH). In the FindContours function I can switch between Canny, threshold and adaptiveThreshold to detect edges. The aruco implementation uses adaptiveThreshold. If I use Canny, the blinking problem is much more present than when using thresholds.

    That is why I wanted to switch in the first place. But now I have a massive fps drop when using adaptiveThreshold, in comparison to your Aruco.detectMarkers(rgbMat, dictionary, corners, ids), which I guess just calls the native C++ function detectMarkers, which uses adaptiveThreshold. The fps with my code is around 85 and can drop to around 50 on a complex image, whereas the fps with the Aruco.detectMarkers call is constantly above 90.
     
    Last edited: Feb 25, 2020
  28. spelafort

    spelafort

    Joined:
    May 8, 2017
    Posts:
    29
    Thank you, this worked!

    I'm now trying to work with YOLO object detection.

    The DNN object detection script says that it can take other models/configs/classes. I've been playing around trying to get it to accept other things, but it seems to crash every time. Any idea where I can look for different models that would work with this? The example model from GitHub seems to have a pretty limited range.

    EDIT: I'm also looking to get color descriptions for the detected objects, something like DenseCap. I'm getting a bit lost in the API, but I'm thinking I will try: make a submat object for each detected object Rect region, average out the colors of all the pixels, and measure their distance from a set of colors. Please let me know if you have any advice for this one :)
     
    Last edited: Mar 1, 2020
  29. zedaidai

    zedaidai

    Joined:
    Oct 14, 2014
    Posts:
    2
    Hello, I'm trying to build for the Android platform with OpenCV for Unity 2.3.8 and Unity 2019.3.3f1 on macOS.
    The console throws this error:
    Could not compile build file '/Users/zedaidai/Documents/DEV/git/mila/opencvTest/Temp/gradleOut/launcher/build.gradle'.


    Any hint to make this work?
     
  30. zedaidai

    zedaidai

    Joined:
    Oct 14, 2014
    Posts:
    2
    Finally built successfully by removing the StreamingAssets folder, which I don't need.
     
    EnoxSoftware likes this.
  31. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Thank you very much for reporting.

    The toArray() method that converts Mat to an array may be the cause of the drop in fps.
    Code (CSharp):
    1. cnt2fArray[i] = new MatOfPoint2f(cntArray[i].toArray());
    I’ll do more research.
     
  32. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Unfortunately, I do not yet fully understand the model requirements for the OpenCV dnn module. The following wiki has detailed information on the OpenCV dnn module.
    https://github.com/opencv/opencv/wiki/Deep-Learning-in-OpenCV

    You can use the mean() method to get the average of a Mat's pixels.
    https://enoxsoftware.github.io/Open...1_core.html#a5f0ad71fe7fe4203a49e0cd1c112ec88
    https://answers.opencv.org/question...or-of-image-inside-the-circlehough-transform/
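    For the color-description idea, a minimal sketch of the submat-plus-mean approach (rect is a detected box; the reference color is hypothetical):

    Code (CSharp):
    // Average color inside the detection box.
    Scalar avg = Core.mean(rgbaMat.submat(rect));

    // Squared distance to one reference color; repeat for each color in your
    // set and pick the smallest.
    Scalar red = new Scalar(255, 0, 0, 255); // hypothetical reference color
    double dr = avg.val[0] - red.val[0];
    double dg = avg.val[1] - red.val[1];
    double db = avg.val[2] - red.val[2];
    double distToRed = dr * dr + dg * dg + db * db;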
     
  33. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Hi, Labecki

    I created a sample project to improve calibration performance using the findChessboardCornersSB method and the calibrateCameraRO method of the Calib3D class.
    https://github.com/EnoxSoftware/OpenCVCameraCalibrationTest
    This sample project has slightly improved the re-projection error value of calibration using chessboard.
     
    cecarlsen likes this.
  34. Labecki

    Labecki

    Joined:
    Apr 14, 2015
    Posts:
    16
    Thank you so much. I will see if it yields better results.
     
  35. ghasedak3411

    ghasedak3411

    Joined:
    Aug 25, 2015
    Posts:
    5
    Hi, I use OpenCV for Unity 2.3.8,
    and my mobile is a Xiaomi Note 5.

    This picture was taken with my mobile camera (both pictures were taken at the same time):
    photo_2020-03-15_14-05-09.jpg


    And this picture with OpenCV
    photo_2020-03-15_14-05-19.jpg
    With slight differences, the same problem occurs on other phones.


    I set the frame rate to 15 and avoidAndroidFrontCameraLowLightIssue = true, and that just makes the light a little better; it is still different from the Android camera.



    Please help me, thank you
     
    Last edited: Mar 15, 2020
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197

    The problem seems to be due to a bug in Unity's WebCamTexture API.
    https://forum.unity.com/threads/android-webcamtexture-in-low-light-only-some-models.520656/
    https://forum.unity.com/threads/released-opencv-for-unity.277080/page-33#post-3445178
    https://answers.unity.com/questions/1425938/android-native-camera-low-lightlow-iso.html

    I actually tried the simple code (without OpenCV) using WebCamTexture but found that some models (google pixel) had the same problem.

    Code (CSharp):
    1. using System.Collections;
    2. using System.Collections.Generic;
    3. using UnityEngine;
    4.  
    5. public class WebcamTextureLowLightIssueOnAndroid : MonoBehaviour
    6. {
    7.  
    8.     WebCamTexture webCamTexture;
    9.  
    10.     // Use this for initialization
    11.     void Start()
    12.     {
    13.         var devices = WebCamTexture.devices;
    14.         for (int cameraIndex = 0; cameraIndex < devices.Length; cameraIndex++)
    15.         {
    16.             // get front camera.
    17.             if (WebCamTexture.devices[cameraIndex].isFrontFacing == true)
    18.             {
    19.                 var webCamDevice = devices[cameraIndex];
    20.                 webCamTexture = new WebCamTexture(webCamDevice.name, 640, 480, 30);
    21.                 break;
    22.             }
    23.         }
    24.  
    25.         webCamTexture.Play();
    26.  
    27.         gameObject.GetComponent<Renderer>().material.mainTexture = webCamTexture;
    28.     }
    29.  
    30.     void OnDisable()
    31.     {
    32.         if (webCamTexture != null)
    33.         {
    34.             webCamTexture.Stop();
    35.             Destroy(webCamTexture);
    36.             webCamTexture = null;
    37.         }
    38.     }
    39. }
    I reported the bug a few months ago using the Unity Editor's Bug Reporter and exchanged several messages with Unity developers, but unfortunately the report seems to have been closed without the problem being reproduced in Unity's verification environment.
    To my knowledge, this problem remains unfixed.

    To fix the problem, many people need to file a bug report.
    Could you report bugs to Unity as well?
     
  37. shi_no_gekai

    shi_no_gekai

    Joined:
    Mar 3, 2018
    Posts:
    3
    Hi, please help!

    I wanted to merge the OpenPose and HandPoseEstimation scripts: instead of using an image file, I wanted to apply the OpenPose dnn model to every frame coming from the camera, but I'm getting the error "CvException: Native object address is NULL", as shown in the attached screenshot.
     

    Attached Files:

  38. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Code (CSharp):
    1. 'Number of input channels should be multiple of 3 but got 4 in function'
    You need to convert the input Mat from 4 channels to 3 channels.
    Code (CSharp):
    1. Imgproc.cvtColor(rgbaMat, bgrMat, Imgproc.COLOR_RGBA2BGR);
    https://github.com/EnoxSoftware/Ope...nnObjectDetectionWebCamTextureExample.cs#L289
     
    shi_no_gekai likes this.
  39. shi_no_gekai

    shi_no_gekai

    Joined:
    Mar 3, 2018
    Posts:
    3
    Thanks a lot! It worked. I have another question, please.
    Code (CSharp):
    1.                 Mat output = net.forward();
    2.  
    I need to apply the dnn model to detect the hand joints on every frame; however, the editor crashes when that line of code is executed, since a lot of calculation is done every frame. Is there any way to reduce it?
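    One common mitigation, sketched here under assumptions (the throttle constant and input size are arbitrary), is to run the forward pass only every Nth frame and/or shrink the input blob, since the cost scales with input area:

    Code (CSharp):
    int frameCount = 0;
    const int detectEveryNthFrame = 5; // hypothetical throttle

    void Update()
    {
        if (++frameCount % detectEveryNthFrame != 0)
            return; // keep showing the previous result on skipped frames

        // A smaller input blob also cuts the per-forward cost considerably.
        Mat blob = Dnn.blobFromImage(bgrMat, 1.0 / 255.0, new Size(256, 256),
                                     new Scalar(0, 0, 0), false, false);
        net.setInput(blob);
        Mat output = net.forward();
    }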
     
  40. jetspeed

    jetspeed

    Joined:
    Feb 7, 2013
    Posts:
    2
    Hi, EnoxSoftware

    I always use this asset. Thank you.
    Now I am in trouble.
    “YoloObjectDetectionExample” does not work.
    When run in the editor, it freezes.
    Please help me.

    My development environment is below.

    macOS Catalina Version 10.15.3
    Unity 2018.4.19.f1
    OpenCV for Unity 2.3.8
     

    Attached Files:

    Last edited: Mar 24, 2020
  41. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Thank you very much for reporting.
    Are yolov3-tiny.cfg and coco.names saved as plain text?
     
  42. jetspeed

    jetspeed

    Joined:
    Feb 7, 2013
    Posts:
    2
    Hi, EnoxSoftware

    No, I used ".cfg" and ".names".
    Now I have changed them to ".txt" and it works.

    Thank you for your help.
     
  43. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    559
    Hi @EnoxSoftware

    I am trying the Calib3d.findChessboardCornersSB method and noticed that the Calib3d.CALIB_CB_LARGER and Calib3d.CALIB_CB_MARKER flags are missing. Any chance they will make it into the next update?
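    In the meantime, a possible workaround is to pass the raw flag bits as integers. The values below are taken from OpenCV's calib3d headers, so treat them as an assumption and verify against your OpenCV version:

    Code (CSharp):
    // Hypothetical stop-gap: OpenCV defines CALIB_CB_LARGER = 64 and
    // CALIB_CB_MARKER = 128 for findChessboardCornersSB.
    const int CALIB_CB_LARGER = 64;
    const int CALIB_CB_MARKER = 128;

    bool found = Calib3d.findChessboardCornersSB(
        grayMat, new Size(9, 6), corners,
        Calib3d.CALIB_CB_EXHAUSTIVE | Calib3d.CALIB_CB_ACCURACY
        | CALIB_CB_LARGER | CALIB_CB_MARKER);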
     
  44. zapatadesign

    zapatadesign

    Joined:
    Nov 10, 2016
    Posts:
    1
    @EnoxSoftware,
    Sorry if this has been asked/answered.
    I'm using an Intel D435 depth camera over a table, and I'm trying to detect hand movement to simulate interaction. I'm trying to determine if OpenCV will help me achieve simulated touch.
    My desired levels of interaction are Swipe Left/Right, Swipe Up/Down and Hover.

    Thoughts?
     
  45. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    cecarlsen likes this.
  46. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    Unfortunately, I don't have any examples of hand movement detection.
    An example of color-based hand detection is bundled with OpenCVForUnity.
    https://github.com/EnoxSoftware/Ope...stimationExample/HandPoseEstimationExample.cs
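    That said, here is a rough centroid-tracking sketch for swipes, assuming the depth image can be pulled into a Mat (the names and thresholds are hypothetical, and it needs using System.Linq):

    Code (CSharp):
    // Isolate the hand as the region nearer than the tabletop, then track the
    // largest blob's horizontal center across frames; a large frame-to-frame
    // delta reads as a swipe.
    Mat handMask = new Mat();
    Imgproc.threshold(depthMat, handMask, nearThreshold, 255, Imgproc.THRESH_BINARY);

    List<MatOfPoint> contours = new List<MatOfPoint>();
    Imgproc.findContours(handMask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    // Pick the largest contour (the hand candidate) and take its bounding-box center.
    MatOfPoint largest = contours.OrderByDescending(c => Imgproc.contourArea(c)).First();
    OpenCVForUnity.CoreModule.Rect bb = Imgproc.boundingRect(largest);
    double cx = bb.x + bb.width / 2.0;
    // Compare cx against the previous frame's value to classify left/right swipes.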
     
  47. shi_no_gekai

    shi_no_gekai

    Joined:
    Mar 3, 2018
    Posts:
    3
    Hello @EnoxSoftware, I'm trying to draw a rectangle using
    Code (CSharp):
    1. public void draw_rectangle(){
    2.             Imgproc.rectangle (frame_pot, new Point (cornerx, cornery), new Point (cornerx + sqlen, cornery + sqlen), new Scalar(0, 255, 0), 2);
    3.         }
    but I get this error:
    EntryPointNotFoundException: imgproc_Imgproc_rectangle_12
    OpenCVForUnity.ImgprocModule.Imgproc.rectangle (OpenCVForUnity.CoreModule.Mat img, OpenCVForUnity.CoreModule.Point pt1, OpenCVForUnity.CoreModule.Point pt2, OpenCVForUnity.CoreModule.Scalar color, System.Int32 thickness) (at Assets/OpenCVForUnity/org/opencv/imgproc/Imgproc.cs:9632)

    Also, when I use drawContours I get EntryPointNotFoundException: imgproc_Imgproc_drawContours_14.
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,197
    The import settings of the native library may not be set correctly.
    MenuItem.png
     