[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. vice39

    vice39

    Joined:
    Nov 11, 2016
    Posts:
    108
Not in the examples; it's my code that crashes. I know it's not a memory leak, because I checked for that. It will work for hours on desktop, but on Android devices it crashes after a random amount of time, sometimes 1 minute, sometimes 5, but it always crashes eventually.
     
  2. kingbaggot

    kingbaggot

    Joined:
    Jun 6, 2013
    Posts:
    51
I am using OpenCV to detect where users touch a projected tabletop interactive (the camera is above the circular table). However, the blob detection only finds the center of the hand/elbow blob (the red dot in the pic) rather than where the hand is. Users will be standing all around the edge of the table.

Can you think of any way I could get the blob detection to pick only the hand area?

OK, thanks - and thanks for replying to my previous post!

dog72.jpg
     
  3. DirkDenzer

    DirkDenzer

    Joined:
    May 8, 2019
    Posts:
    11
    Hi, thanks for replying.
    I did use the contact form last Friday as you suggested, but no response so far.
I would really like to test this lib, as I'm stuck with the one we're using now (it doesn't support using TrainedData; does this lib support it?).

    Thanks
     
  4. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Could you send your code to the contact form? https://enoxsoftware.com/opencvforunity/contact/other-inquiry/
I can probably give advice regarding it.
     
  5. SamVickery

    SamVickery

    Joined:
    Jul 19, 2013
    Posts:
    3
I'm trying to calculate the transformation matrix between two sets of points. Basically, I have a set of points in Unity world space (2D) and a set of points in depth camera space (again 2D). I can see Calib3d.estimateAffine2D, but I'm not sure how to pass the Vector2s into a Mat, or how to get a Matrix4x4 out of the result Mat afterwards.
     
  6. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
You might be able to detect the tip of the hand in the following way (a rough sketch follows the links below):
1. Detect the convex hull of the blob's contour.
2. Find the hull point nearest to the center point of the table.
    dog72_new.jpg

    https://medium.com/@soffritti.pierfrancesco/handy-hands-detection-with-opencv-ac6e9fb3cec1
    https://medium.com/@muehler.v/simpl...tion-using-opencv-and-javascript-eb3d6ced28a0
    https://www.lzane.com/fingers-detection-using-opencv-and-python/
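Here is a rough sketch of that idea (my own example, not one of the included scenes). It assumes you have already thresholded the camera image into a binary Mat where the hand/arm blob is white, and that you know the table center in the same pixel coordinates; the class and method names are just placeholders:
Code (CSharp):
using System.Collections.Generic;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;

public static class HandTipFinder
{
    // binaryMat : CV_8UC1 image in which the hand/arm blob is white.
    // tableCenter : the table center in the same pixel coordinates (known from your setup).
    public static Point FindHandTip (Mat binaryMat, Point tableCenter)
    {
        // Find the external contours of the blobs.
        List<MatOfPoint> contours = new List<MatOfPoint> ();
        Mat hierarchy = new Mat ();
        Imgproc.findContours (binaryMat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        if (contours.Count == 0)
            return null;

        // Take the largest contour as the hand/arm blob.
        MatOfPoint largest = contours [0];
        double maxArea = 0;
        foreach (MatOfPoint c in contours) {
            double area = Imgproc.contourArea (c);
            if (area > maxArea) {
                maxArea = area;
                largest = c;
            }
        }

        // Compute the convex hull (as indices into the contour points).
        MatOfInt hullIndices = new MatOfInt ();
        Imgproc.convexHull (largest, hullIndices);

        // Pick the hull point closest to the table center:
        // the hand reaches furthest toward the middle of the table.
        Point[] pts = largest.toArray ();
        int[] idxs = hullIndices.toArray ();
        Point best = pts [idxs [0]];
        double bestDist = double.MaxValue;
        foreach (int idx in idxs) {
            double dx = pts [idx].x - tableCenter.x;
            double dy = pts [idx].y - tableCenter.y;
            double d = dx * dx + dy * dy;
            if (d < bestDist) {
                bestDist = d;
                best = pts [idx];
            }
        }
        return best;
    }
}
Instead of the SimpleBlobDetector keypoints, this works directly on contours, which is what convexHull expects.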
     
  7. kingbaggot

    kingbaggot

    Joined:
    Jun 6, 2013
    Posts:
    51
Thanks for the response - I'd like to try finding that convex point - but the blob detection class just gives me a list of keypoints, which have position, size, angle, octave, response and classId. I can't find the part that allows me to detect its convex hull.

I found this, but it's not C#: https://www.learnopencv.com/convex-hull-using-opencv-in-python-and-c/
     
    Last edited: Aug 7, 2019
  8. DirkDenzer

    DirkDenzer

    Joined:
    May 8, 2019
    Posts:
    11
At the risk of being annoying: I still can't download the free trial, and I never got a response to my request via the form.
I would really like to test this, as it looks very promising. However, my boss will not approve the almost $100 purchase without me being able to confirm it can provide what we need.

Couldn't you just send me a download link to the trial via PM? For example, on Google Drive or something.
    Thank you!

     
  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
How to use the Calib3d.estimateAffine2D method and get a Matrix4x4 from the result:
    Code (CSharp):
// create key points.
Mat from = new MatOfPoint2f (new Point (0, 0), new Point (0, 512), new Point (512, 512));
Mat to = new MatOfPoint2f (new Point (256, -106.03867), new Point (-106.03867, 256), new Point (256, 618.03864));
Debug.Log (from.dump ());
Debug.Log (to.dump ());

// estimate affine2D.
Mat affineMat = Calib3d.estimateAffine2D (from, to);
Debug.Log (affineMat.dump ());

// OpenCV Mat to Unity Matrix4x4
// |a,b,tx| => |a,b,0,tx|
// |c,d,ty| => |c,d,0,ty|
//             |0,0,1, 0|
//             |0,0,0, 1|
Matrix4x4 transformM = Matrix4x4.identity;
double[] affineM_Arr = new double[affineMat.total ()];
affineMat.get (0, 0, affineM_Arr);
transformM.m00 = (float)affineM_Arr [0];
transformM.m01 = (float)affineM_Arr [1];
transformM.m03 = (float)affineM_Arr [2];
transformM.m10 = (float)affineM_Arr [3];
transformM.m11 = (float)affineM_Arr [4];
transformM.m13 = (float)affineM_Arr [5];
Debug.Log (transformM);
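If your point sets are Unity Vector2 arrays, a small conversion helper like this (a hypothetical snippet, not part of the asset) can build the input Mats:
Code (CSharp):
using UnityEngine;
using OpenCVForUnity.CoreModule;

public static class PointConversion
{
    // Converts Unity Vector2 points into a MatOfPoint2f accepted by Calib3d.estimateAffine2D.
    public static MatOfPoint2f ToMatOfPoint2f (Vector2[] points)
    {
        Point[] cvPoints = new Point[points.Length];
        for (int i = 0; i < points.Length; i++)
            cvPoints [i] = new Point (points [i].x, points [i].y);
        return new MatOfPoint2f (cvPoints);
    }
}

// Usage (assuming worldPoints and depthPoints are matching Vector2 arrays):
// Mat affineMat = Calib3d.estimateAffine2D (PointConversion.ToMatOfPoint2f (worldPoints), PointConversion.ToMatOfPoint2f (depthPoints));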
     
    Last edited: Aug 10, 2019
  10. Qpp1125

    Qpp1125

    Joined:
    Mar 7, 2018
    Posts:
    1
@EnoxSoftware, have you thought about optimizing the MarkerLessARExample project?

I downloaded the MarkerLessARExample app from Google Play.
The project has only one recognition pattern, but it already causes a noticeable delay on my phone.

MarkerLessARExample
How can I track more than one pattern, like in the MarkerBased AR example?

Can you provide a solution?

Device : Redmi Note 4X
Unity version : Unity 2018.4.4f1 (64-bit)
OpenCVforUnity version : 2.3.6

Example 1 - (serious delay!)
    Code (CSharp):
void Update()
{
    if (webCamTextureToMatHelper.IsPlaying() && webCamTextureToMatHelper.DidUpdateThisFrame())
    {
        Mat rgbaMat = webCamTextureToMatHelper.GetMat();

        Imgproc.cvtColor(rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
        for (int i = 0; i < TextureArray.Length; i++)
        {
            //Cause of inefficiency
            patternFound[i] = patternDetector[i].findPattern(grayMat, patternTrackingInfo[i], patternDetector);
            //---------------------------

            if (patternFound[i])
            {
                patternTrackingInfo[i].computePose(pattern[i], camMatrix, distCoeffs);

                //Marker to Camera Coordinate System Convert Matrix
                transformationM = patternTrackingInfo[i].pose3d;
                //Debug.Log ("transformationM " + transformationM.ToString ());

                if (shouldMoveARCamera)
                {
                    ARM = ARGameObject.transform.localToWorldMatrix * invertZM * transformationM.inverse * invertYM;
                    //Debug.Log ("ARM " + ARM.ToString ());

                    ARUtils.SetTransformFromMatrix(ARCamera.transform, ref ARM);
                }
                else
                {
                    ARM = ARCamera.transform.localToWorldMatrix * invertYM * transformationM * invertZM;
                    //Debug.Log ("ARM " + ARM.ToString ());

                    ARUtils.SetTransformFromMatrix(ARGameObject.transform, ref ARM);
                }

                ARGameObject.GetComponent<DelayableSetActive>().SetActive(true);
            }
            else
            {
                ARGameObject.GetComponent<DelayableSetActive>().SetActive(false, 0.5f);
            }
        }

        Utils.fastMatToTexture2D(rgbaMat, texture);
    }
}
     
  11. zyonneo

    zyonneo

    Joined:
    Apr 13, 2018
    Posts:
    386
Planning to buy this for doing real-time text detection on iOS and Android devices. I don't think there is an example project for it. Is there any support for clearing up issues and doubts? If I buy the plugin and it turns out this can't be done, then I will be in trouble. I was planning to read something that is engraved on the surface of a metal piece, just as shown in the picture. Can this be achieved?
     
    Last edited: Aug 15, 2019
  12. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
The MarkerLessARExample code is a rewrite of https://github.com/MasteringOpenCV/code/tree/master/Chapter3_MarkerlessAR using “OpenCV for Unity”.
Since this example is tutorial code, I recommend using Vuforia or a similar solution for more advanced functionality.
I think this page is helpful for hints on improving performance, and a simple throttling sketch follows below.
    https://github.com/MasteringOpenCV/code/issues/52
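As a simple mitigation (my own suggestion, not from that issue), you could also run the expensive findPattern call only every few frames and reuse the previously found pose in between. A rough sketch against the Update() code you posted (the field names follow your code; detectionInterval is a placeholder value to tune):
Code (CSharp):
int frameCount = 0;
const int detectionInterval = 3; // hypothetical value; tune for your device

void Update ()
{
    if (webCamTextureToMatHelper.IsPlaying () && webCamTextureToMatHelper.DidUpdateThisFrame ())
    {
        Mat rgbaMat = webCamTextureToMatHelper.GetMat ();
        Imgproc.cvtColor (rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);

        // Only pay the detection cost on every detectionInterval-th frame.
        bool runDetection = (frameCount++ % detectionInterval == 0);

        for (int i = 0; i < TextureArray.Length; i++)
        {
            if (runDetection)
                patternFound [i] = patternDetector [i].findPattern (grayMat, patternTrackingInfo [i], patternDetector);

            // ... the existing pose / ARM update code stays the same ...
        }

        Utils.fastMatToTexture2D (rgbaMat, texture);
    }
}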
     
  13. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Have you already tested TextDetectionExample and TextRecognitionExample using OpenCVForUnity_TrialVersion?
    https://enoxsoftware.com/opencvforunity/get_asset/
    In my experience, it is necessary to have a high contrast between the background and the letters.
     
  14. zyonneo

    zyonneo

    Joined:
    Apr 13, 2018
    Posts:
    386
No, I have not tested it using the trial version. Does it support real-time text recognition? The trial only supports the Windows and Mac Unity Editor, so if I get it working with the trial version, can the same be done on mobile with the paid version?
     
    Last edited: Aug 20, 2019
  15. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
OpenCVForUnity does not include a real-time text recognition example using WebCamTexture. However, an example of text recognition from an image is included. https://github.com/EnoxSoftware/Ope...es/text/TextExample/TextRecognitionExample.cs
Also, the trial version only supports the Windows and Mac Unity Editor.
     
  16. zyonneo

    zyonneo

    Joined:
    Apr 13, 2018
    Posts:
    386
I don't want to do it with a preset image. I want to do it in real time on a handheld device, not in the Editor. I'm looking to build an app which detects text in real time (iOS and Android). Is that possible?
     
  17. zyonneo

    zyonneo

    Joined:
    Apr 13, 2018
    Posts:
    386
I have tried the trial version and added the above image to the StreamingAssets folder. The image got loaded onto the cube, but it is not recognising any text. :(
     
  18. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
@EnoxSoftware Hi, I'm trying to detect an object in the real world and black out everything on the camera stream apart from the detected object. In other words, I'm trying to black out the background. Any guidelines on how to do it?
     
  19. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The surface of the metal piece seems unsuitable for text recognition.
    In my experience, it is necessary to have a high contrast between the background and the letters.
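If you still want to experiment with it, a preprocessing pass along these lines sometimes helps before running the text examples (a rough sketch of my own, assuming an RGBA input Mat; the parameter values are placeholders you would need to tune):
Code (CSharp):
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.ImgprocModule;

public static class TextPreprocess
{
    // Boosts local contrast and binarizes the image before text detection/recognition.
    public static Mat Binarize (Mat rgbaMat)
    {
        Mat gray = new Mat ();
        Imgproc.cvtColor (rgbaMat, gray, Imgproc.COLOR_RGBA2GRAY);

        // CLAHE equalizes contrast locally, which can help with unevenly lit metal surfaces.
        CLAHE clahe = Imgproc.createCLAHE (2.0, new Size (8, 8));
        clahe.apply (gray, gray);

        // Adaptive threshold tries to separate the engraved characters from the background.
        Mat bin = new Mat ();
        Imgproc.adaptiveThreshold (gray, bin, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 31, 10);

        gray.Dispose ();
        return bin;
    }
}
Even with this kind of preprocessing, engraved text with very little contrast may still not be recognized reliably.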
     
  20. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    802
    Hey, we're using versions of MarkerlessAR and WebcamTextureToMat in a project that is supposed to run on Android tablets for extended periods of time.

    After some hours of running (between 10 minutes and 6 hours so far), the app crashes with access violations inside libopencvforunity:
    Code (CSharp):
2019.08.22 14:12:22.651 9584 9653 Fatal libc Invalid address 0xa6973000 passed to free: value not allocated
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime FATAL EXCEPTION: UnityMain
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime Process: com.prefrontalcortex.markerartest, PID: 9584
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime java.lang.Error: signal 6 (SIGABRT), code -6 (?), fault addr --------
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime Build fingerprint: 'samsung/gts4lwifixx/gts4lwifi:9/PPR1.180610.011/T830XXU3BSF3:user/release-keys'
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime Revision: '7'
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime pid: 9584, tid: 9653, name: UnityMain  >>> com.prefrontalcortex.markerartest <<<
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime r0 00000000  r1 000025b5  r2 00000006  r3 00000008
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime r4 00002570  r5 000025b5  r6 abefe2a4  r7 0000010c
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime r8 c2a7ce08  r9 e938e3bc  sl 0000006d  fp a63ff624
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime ip e938e3bc  sp abefe290  lr e92f8bb5  pc e92ee22a  cpsr abefdfa0
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime at libc.abort(abort:57)
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime at libc.ifree(ifree:876)
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime at libc.je_free(je_free:76)
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime at libopencvforunity.002193d9(Native Method)
2019.08.22 14:12:23.292 9584 9604 Error AndroidRuntime at libopencvforunity.005bc989(Native Method)
    Code (CSharp):
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime FATAL EXCEPTION: UnityMain
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime Process: com.prefrontalcortex.markerartest, PID: 11233
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime java.lang.Error: signal 11 (SIGSEGV), code 2 (SEGV_ACCERR), fault addr d150d9ea
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime Build fingerprint: 'samsung/gts4lwifixx/gts4lwifi:9/PPR1.180610.011/T830XXU3BSF3:user/release-keys'
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime Revision: '7'
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime pid: 11233, tid: 11465, name: UnityMain  >>> com.prefrontalcortex.markerartest <<<
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime r0 5150d9ea  r1 bb36401f  r2 bb364100  r3 9bef01ea
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime r4 00000220  r5 b8bc11ea  r6 80000000  r7 00000000
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime r8 bb365080  r9 be5f4be0  sl a87fdda0  fp 00000381
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime ip acd4bda0  sp a87fda10  lr e826ea0c  pc ac3e92b4  cpsr da630d10
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime at libopencvforunity.004472b4(Native Method)
2019.08.22 14:42:39.887 11233 11255 Error AndroidRuntime at libm.sin(sin:912)
We heavily optimized the matrix handling in OpenCVForUnity (it's allocating roughly 100x more garbage than it should), but while we were able to extend the time until the app crashes, it still crashes after some hours.

    Any idea?
     
  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Please change the postprocess method of DnnObjectDetectionExample as follows:
    Code (CSharp):
/// <summary>
/// Postprocess the specified frame, outs and net.
/// </summary>
/// <param name="frame">Frame.</param>
/// <param name="outs">Outs.</param>
/// <param name="net">Net.</param>
private void postprocess (Mat frame, List<Mat> outs, Net net)
{
    string outLayerType = outBlobTypes [0];

    List<int> classIdsList = new List<int> ();
    List<float> confidencesList = new List<float> ();
    List<OpenCVForUnity.CoreModule.Rect> boxesList = new List<OpenCVForUnity.CoreModule.Rect> ();
    if (net.getLayer (new DictValue (0)).outputNameToIndex ("im_info") != -1) {  // Faster-RCNN or R-FCN
        // Network produces output blob with a shape 1x1xNx7 where N is a number of
        // detections and an every detection is a vector of values
        // [batchId, classId, confidence, left, top, right, bottom]

        if (outs.Count == 1) {

            outs [0] = outs [0].reshape (1, (int)outs [0].total () / 7);

            //Debug.Log ("outs[i].ToString() " + outs [0].ToString ());

            float[] data = new float[7];

            for (int i = 0; i < outs [0].rows (); i++) {

                outs [0].get (i, 0, data);

                float confidence = data [2];

                if (confidence > confThreshold) {
                    int class_id = (int)(data [1]);

                    int left = (int)(data [3] * frame.cols ());
                    int top = (int)(data [4] * frame.rows ());
                    int right = (int)(data [5] * frame.cols ());
                    int bottom = (int)(data [6] * frame.rows ());
                    int width = right - left + 1;
                    int height = bottom - top + 1;

                    classIdsList.Add ((int)(class_id) - 0);
                    confidencesList.Add ((float)confidence);
                    boxesList.Add (new OpenCVForUnity.CoreModule.Rect (left, top, width, height));
                }
            }
        }
    } else if (outLayerType == "DetectionOutput") {
        // Network produces output blob with a shape 1x1xNx7 where N is a number of
        // detections and an every detection is a vector of values
        // [batchId, classId, confidence, left, top, right, bottom]

        if (outs.Count == 1) {

            outs [0] = outs [0].reshape (1, (int)outs [0].total () / 7);

            //Debug.Log ("outs[i].ToString() " + outs [0].ToString ());

            float[] data = new float[7];

            for (int i = 0; i < outs [0].rows (); i++) {

                outs [0].get (i, 0, data);

                float confidence = data [2];

                if (confidence > confThreshold) {
                    int class_id = (int)(data [1]);

                    int left = (int)(data [3] * frame.cols ());
                    int top = (int)(data [4] * frame.rows ());
                    int right = (int)(data [5] * frame.cols ());
                    int bottom = (int)(data [6] * frame.rows ());
                    int width = right - left + 1;
                    int height = bottom - top + 1;

                    classIdsList.Add ((int)(class_id) - 0);
                    confidencesList.Add ((float)confidence);
                    boxesList.Add (new OpenCVForUnity.CoreModule.Rect (left, top, width, height));
                }
            }
        }
    } else if (outLayerType == "Region") {
        for (int i = 0; i < outs.Count; ++i) {
            // Network produces output blob with a shape NxC where N is a number of
            // detected objects and C is a number of classes + 4 where the first 4
            // numbers are [center_x, center_y, width, height]

            //Debug.Log ("outs[i].ToString() "+outs[i].ToString());

            float[] positionData = new float[5];
            float[] confidenceData = new float[outs [i].cols () - 5];

            for (int p = 0; p < outs [i].rows (); p++) {

                outs [i].get (p, 0, positionData);

                outs [i].get (p, 5, confidenceData);

                int maxIdx = confidenceData.Select ((val, idx) => new { V = val, I = idx }).Aggregate ((max, working) => (max.V > working.V) ? max : working).I;
                float confidence = confidenceData [maxIdx];

                if (confidence > confThreshold) {

                    int centerX = (int)(positionData [0] * frame.cols ());
                    int centerY = (int)(positionData [1] * frame.rows ());
                    int width = (int)(positionData [2] * frame.cols ());
                    int height = (int)(positionData [3] * frame.rows ());
                    int left = centerX - width / 2;
                    int top = centerY - height / 2;

                    classIdsList.Add (maxIdx);
                    confidencesList.Add ((float)confidence);
                    boxesList.Add (new OpenCVForUnity.CoreModule.Rect (left, top, width, height));

                }
            }
        }
    } else {
        Debug.Log ("Unknown output layer type: " + outLayerType);
    }

    MatOfRect boxes = new MatOfRect ();
    boxes.fromList (boxesList);

    MatOfFloat confidences = new MatOfFloat ();
    confidences.fromList (confidencesList);

    MatOfInt indices = new MatOfInt ();
    Dnn.NMSBoxes (boxes, confidences, confThreshold, nmsThreshold, indices);

    //Debug.Log ("indices.dump () "+indices.dump ());
    //Debug.Log ("indices.ToString () "+indices.ToString());

    ///////////////////////////////////////////
    /// Mask other than detected objects.
    Mat cloneMat = frame.clone();
    Mat maskMat = new Mat(frame.rows(), frame.cols(), CvType.CV_8UC1, new Scalar(0));
    for (int i = 0; i < indices.total(); ++i)
    {
        int idx = (int)indices.get(i, 0)[0];
        OpenCVForUnity.CoreModule.Rect box = boxesList[idx];

        Imgproc.rectangle(maskMat, box, new Scalar(255), -1);
    }

    frame.setTo(new Scalar(0, 0, 0, 255));
    cloneMat.copyTo(frame, maskMat);

    cloneMat.Dispose();
    maskMat.Dispose();
    ///////////////////////////////////////////

    for (int i = 0; i < indices.total(); ++i)
    {
        int idx = (int)indices.get(i, 0)[0];
        OpenCVForUnity.CoreModule.Rect box = boxesList[idx];
        drawPred(classIdsList[idx], confidencesList[idx], box.x, box.y,
            box.x + box.width, box.y + box.height, frame);
    }

    indices.Dispose ();
    boxes.Dispose ();
    confidences.Dispose ();
}
    dnn_mask.PNG
     
    johnymetalheadx likes this.
  22. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Thank you very much for reporting.
    Could you tell me about your test environment?
    OpenCVForUnity version :
    Unity version :
    Android device name :
     
  23. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
    Thanks!

    I'm receiving these errors while I run the YoloObjectDetectionExample Scene

    1:
    Empty path name is not legal.
    UnityEngine.Debug:LogError(Object)
    OpenCVForUnityExample.DnnObjectDetectionExample:readClassNames(String) (at Assets/OpenCVForUnity/Examples/MainModules/dnn/DnnObjectDetectionExample.cs:363)
    OpenCVForUnityExample.DnnObjectDetectionExample:Run() (at Assets/OpenCVForUnity/Examples/MainModules/dnn/DnnObjectDetectionExample.cs:201)
    OpenCVForUnityExample.DnnObjectDetectionExample:Start() (at Assets/OpenCVForUnity/Examples/MainModules/dnn/DnnObjectDetectionExample.cs:152)

    2:
    is not loaded. Please see "StreamingAssets/dnn/setup_dnn_module.pdf".
    UnityEngine.Debug:LogError(Object)
    OpenCVForUnityExample.DnnObjectDetectionExample:Run() (at Assets/OpenCVForUnity/Examples/MainModules/dnn/DnnObjectDetectionExample.cs:203)
    OpenCVForUnityExample.DnnObjectDetectionExample:Start() (at Assets/OpenCVForUnity/Examples/MainModules/dnn/DnnObjectDetectionExample.cs:152)

    3:
    or /Users/Mac/OpenCV Test/Assets/StreamingAssets/dnn/yolov3-tiny.weights is not loaded. Please see "StreamingAssets/dnn/setup_dnn_module.pdf".
    UnityEngine.Debug:LogError(Object)
    OpenCVForUnityExample.DnnObjectDetectionExample:Run() (at Assets/OpenCVForUnity/Examples/MainModules/dnn/DnnObjectDetectionExample.cs:235)
    OpenCVForUnityExample.DnnObjectDetectionExample:Start() (at Assets/OpenCVForUnity/Examples/MainModules/dnn/DnnObjectDetectionExample.cs:152)


    Although I have followed all the instructions in "setup_dnn_module"

    Except:
    https://raw.githubusercontent.com/chuanqi305/MobileNet-SSD/master/MobileNetSSD_deploy.prototxt

    Because this file does not exist!
     
  24. fherbst

    fherbst

    Joined:
    Jun 24, 2012
    Posts:
    802
    @EnoxSoftware
    OpenCVForUnity version : 2.3.6 (latest)
    Unity version : 2019.1.14f1
    Android device name : Samsung Galaxy Tab S4
     
  25. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Are the following 4 files located in the StreamingAssets folder?
    yolov3_setup.PNG
     
  26. boshu

    boshu

    Joined:
    Oct 5, 2017
    Posts:
    2
  27. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
    Yes, see the attached pic.
     

    Attached Files:

  28. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
So basically I had to add ".txt" at the end of the 'Config' and 'Classes' fields of the DnnObjectDetectionExample script.

    It worked!
     

    Attached Files:

  29. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
I want to detect my own objects, such as medical equipment, in the webcam stream. Any tips on how to do that?
     
  30. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
I think the following post will be helpful.
The error was caused by the wrong input .pbtxt file passed into the function readNetFromTensorflow, because the .pbtxt has to be generated by tf_text_graph_ssd.py as described here:
    https://stackoverflow.com/questions...t-work-with-opencv-after-retraining-mobilenet
    https://stackoverflow.com/questions...n-failed-in-getmemoryshapes?noredirect=1&lq=1
    https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
     
  31. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
I tried to reproduce the crash using a Google Pixel, but I haven't been able to reproduce it yet.
Could you provide additional information about the situation where the crash occurred?
1. IL2CPP or Mono?
2. arm64-v8a, armeabi-v7a or x86?

Which scene did the crash occur during?
I tested with the following steps:
1. Run WebCamTextureMarkerLessARExample.
2. Capture the marker pattern and save it.
3. Detect the marker and leave it in this state for several hours.
     
  32. look001

    look001

    Joined:
    Mar 23, 2017
    Posts:
    111
Hi Enox,
If I want to use your asset for commercial apps, which licenses do I have to check? I heard that OpenCV has some third-party modules, like SURF, that are not allowed for commercial use. Do you have a list of the licenses of the third-party modules in the asset? What do I need to do, regarding licensing and credits, to use this asset in a commercial app?
    Thank you!
     
  33. boshu

    boshu

    Joined:
    Oct 5, 2017
    Posts:
    2
    Thank you for your reply!!

So I have to generate the .pbtxt file first:
python tf_text_graph_ssd.py  --input /path/to/model.pb --config /path/to/example.config --output /path/to/graph.pbtxt

And change the code from
Net = Dnn.readNetFromTensorflow (model_filepath);

to
Net = Dnn.readNetFromTensorflow (model_filepath, pbtxt_filepath);

Is that right?

    configuration file
    There are a lot of config files here. I don't know which one I should choose.
     
  34. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
OpenCVForUnity libraries are built with the build option OPENCV_ENABLE_NONFREE=OFF, so the SIFT and SURF algorithms are not included in OpenCV for Unity.
https://github.com/opencv/opencv/blob/834c99255320fb565259d3e9177edcb590d4a6be/CMakeLists.txt#L1096

Since OpenCV is licensed under the 3-Clause BSD License, it is necessary to display the 3-Clause BSD License notice when publishing an application that uses OpenCVForUnity.
     
    look001 likes this.
  35. wightwhale

    wightwhale

    Joined:
    Jul 28, 2011
    Posts:
    397
How would I get the following information from OpenCV? Is it pretty easy to get most of it?

    https://research.nvidia.com/sites/default/files/pubs/2018-06_Falling-Things/readme_0.txt

    - 4x4 Euclidean transformation (`fixed_model_transform`). This transformation is applied to the original publicly-available YCB object in order to center and align it (translation values are in centimeters) with the coordinate system (see the discussion above on the NDDS tool). Note that this is actually the transpose of the matrix.
    - dimensions of the 3D bounding cuboid along the XYZ axes (`cuboid_dimensions`)

    - XYZ position and orientation of the camera in the world coordinate frame (`camera_data`)
    - for each object,
    - visibility, defined as the percentage of the object that is not occluded (`visibility`). (0 means fully occluded whereas 1 means fully visible)
    - XYZ position (in centimeters) and orientation (`location` and `quaternion_xyzw`)
    - 4x4 transformation (redundant, can be computed from previous) (`pose_transform_permuted`)
    - 3D position of the centroid of the bounding cuboid (in centimeters) (`cuboid_centroid`)
    - 2D projection of the previous onto the image (in pixels) (`projected_cuboid_centroid`)
    - 2D bounding box of the object in the image (in pixels) (`bounding_box`)
    - 3D coordinates of the vertices of the 3D bounding cuboid (in centimeters) (`cuboid`)
- 2D coordinates of the projection of the above (in pixels) (`projected_cuboid`)

    I'm trying to replicate the FAT data set for DOPE
     
  36. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
Unfortunately, I'm not familiar with TensorFlow, so I don't know which config file to choose.
     
  37. jasonmcguirk

    jasonmcguirk

    Joined:
    Apr 20, 2013
    Posts:
    10
    Heya! I'm trying to make some performance improvements in my app that's using OpenCV

    I noticed you've obsoleted and commented out some of the Utils code that was previously using GetNativeTexturePtr() in favor of GetRawTextureData.

Unfortunately, GetRawTextureData results in a full C# GC alloc of the entire texture, which causes some unnecessary CPU churn. It'd be fantastic if there were a path that avoided this.

I've tried calling into Utils.copyToMat with GetNativeTexturePtr(), but it seems to crash approximately 10% of the time here:

    Receiving unhandled NULL exception
    Obtained 36 stack frames.
    #0 0x007fff65c2bd09 in _platform_memmove$VARIANT$Haswell
    #1 0x0000015dfefc90 in (wrapper managed-to-native) OpenCVForUnity.UnityUtils.Utils:OpenCVForUnity_ByteArrayToMatData (intptr,intptr) {0x1332e44c0} + 0xc0 (0x15dfefbd0 0x15dfefd43) [0x15748ac80 - Unity Child Domain]

At first I thought maybe it was a race condition with the render thread / Unity allocating some of the underlying machinery. I've tried disabling MT rendering and delaying the call until the next frame or OnPostRender with no luck, and without the source I can't really figure out where it might be crashing.

    Is there any way to avoid the alloc that GetRawTextureData generates and use GetNativeTexturePtr?

    Cheers!
     
  38. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
    Hi @EnoxSoftware

    Is there a way to zoom in the webcam stream to the maximum scale by default?
    Without the need to implement pinch gesture.
     
  39. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
I don't have an example using the FAT dataset and OpenCVForUnity.
Also, since this asset is a clone of OpenCV Java, you are able to use the same API as OpenCV Java: http://enoxsoftware.github.io/OpenCVForUnity/3.0.0/doc/html/index.html
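For example, the `projected_cuboid` / `projected_cuboid_centroid` fields could in principle be computed with Calib3d.projectPoints, given the object pose (rvec/tvec) and the camera intrinsics. A rough sketch (my assumption of how you might do it, not an official example):
Code (CSharp):
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.Calib3dModule;

public static class CuboidProjection
{
    // Projects 3D cuboid vertices (in coordinates matching rvec/tvec) into image pixels.
    public static Point[] Project (Point3[] cuboidVertices, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs)
    {
        using (MatOfPoint3f objectPoints = new MatOfPoint3f (cuboidVertices))
        using (MatOfPoint2f imagePoints = new MatOfPoint2f ()) {
            Calib3d.projectPoints (objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints);
            return imagePoints.toArray ();
        }
    }
}
The 2D bounding box could then be taken as the min/max of the returned points.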
     
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
The Utils.copyToMat method simply uses memcpy to copy memory.
    Code (CSharp):
extern "C" UNITY_INTERFACE_EXPORT void UNITY_INTERFACE_API OpenCVForUnity_ByteArrayToMatData(uchar* byteArray,
    cv::Mat* mat) {
    static const char method_name[] = "OpenCVForUnity_ByteArrayToMatData()";
    try {
        LOGD("%s", method_name);
        if (mat->isContinuous()) {
            memcpy(mat->data, byteArray, mat->total() * mat->elemSize());
        }
        else {
            size_t rowBytes = mat->cols * mat->elemSize();
            for (int i = 0; i < mat->rows; ++i) {
                memcpy(mat->ptr(i, 0), byteArray, rowBytes);
                byteArray += rowBytes;
            }
        }
    }
    catch (const std::exception &e) {
        LOGE("%s : %s", method_name, e.what());
    }
    catch (...) {
        LOGE("%s : %s", method_name, "unknown exception");
    }
}
    The Utils.copyToMat method does not support a native (underlying graphics API) pointer.
    https://docs.unity3d.com/ScriptReference/Texture.GetNativeTexturePtr.html
    Retrieve a native (underlying graphics API) pointer to the texture resource.

I haven't tried it yet, but the NativeArray<T> GetRawTextureData() overload might be useful for efficient pixel copying.
    https://docs.unity3d.com/ScriptReference/Texture2D.GetRawTextureData.html
    GetRawTextureData does not allocate memory; the returned NativeArray directly points to the texture system memory data buffer.
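If you want to try that route, something along these lines might avoid the managed allocation (a rough, untested sketch of my own; it assumes 'Allow unsafe code' is enabled, uses the IntPtr overload of Utils.copyToMat, and expects the texture format and size to match the destination Mat, e.g. RGBA32 into a CV_8UC4 Mat of the same width and height):
Code (CSharp):
using System;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using UnityEngine;
using OpenCVForUnity.CoreModule;
using OpenCVForUnity.UnityUtils;

public static class RawTextureCopy
{
    // Copies a Texture2D's raw pixel buffer into an existing Mat without allocating a managed byte[].
    public static unsafe void CopyTextureToMat (Texture2D texture, Mat rgbaMat)
    {
        // The NativeArray points directly at the texture's system-memory buffer; no GC alloc.
        NativeArray<byte> raw = texture.GetRawTextureData<byte> ();
        IntPtr ptr = (IntPtr)NativeArrayUnsafeUtility.GetUnsafeReadOnlyPtr (raw);

        // Forwards to the native memcpy shown above.
        Utils.copyToMat (ptr, rgbaMat);

        // Unity textures are stored bottom-up, so flip if you need the same
        // orientation as Utils.texture2DToMat.
        Core.flip (rgbaMat, rgbaMat, 0);
    }
}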
     
  41. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  42. SamVickery

    SamVickery

    Joined:
    Jul 19, 2013
    Posts:
    3
    Hi

Is there a way to disable the plugin for certain platforms? We use the plugin in the desktop version of the software but not on iOS or Android.

    Thanks
     
  43. sravanthiN

    sravanthiN

    Joined:
    Jun 28, 2019
    Posts:
    2
    Hi @EnoxSoftware ,

Today we bought OpenCVForUnity from the Unity Asset Store. I am following the YouTube video below, which was published by EnoxSoftware, and I am getting an error message while running the samples.



    Unity Version : Unity 2019.1.10f1 personal

Attached a screenshot. Please help us. ErrorMessageOpencvforUnity.PNG
     
  44. tabulatouch

    tabulatouch

    Joined:
    Mar 12, 2015
    Posts:
    23
Hello Enox,
I have your superb asset and have used it in various AR projects.
Now I am trying to do the following:
- have a pattern printed and hung on a wall
- detect the pattern and get the Unity camera transform so it matches the real-world position

Of course, I have to feed in the pattern size in real-world units.
Do you have any advice on estimating the camera transform so it matches the real world? It seems so similar to an AR marker example, but somewhat different.

    Thank you!
     
  45. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    If you do not use the iOS and Android platforms, delete the iOS and Android folders in the plug-in folder.
    ImportingPackage.png
     
  46. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Thank you very much for reporting.
    Could you tell me about your test environment?
    OpenCVForUnity version :
    Unity version :
    Editor Platform :
     
  47. johnymetalheadx

    johnymetalheadx

    Joined:
    Feb 20, 2016
    Posts:
    12
    Hi @EnoxSoftware
I want to use the OpenCV asset's functionality with Unity's AR Foundation. It has a different way of getting the camera stream. Do you know a workaround for this?
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Unfortunately, I don't have an example of combining OpenCVForUnity and Unity's AR Foundation camera streams.
     
  49. Deleted User

    Deleted User

    Guest

    @EnoxSoftware
Hello. Planning to buy this asset. Though, as I'm not really familiar with this plugin or OpenCV, I have a question.
Is it possible to train a model in OpenCV (I assume in pure .NET or maybe Python) and then import the trained model into Unity using your plugin? Basically, what environment do I need (and which is best) for training a new network, and how do I add the new network into the project?
Correct me on these steps if I'm wrong. Thank you.
     
    Last edited by a moderator: Sep 26, 2019
  50. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
Is there a way to make the masking more accurate?