[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. Packedbox

    Packedbox

    Joined:
    Jun 20, 2013
    Posts:
    20
    Hi, I have a crash on ARMv7 devices on iOS (typically an iPad 4) when using warpPerspective.
    It works fine on ARM64.

    Thread 1: EXC_BAD_ACCESS (code=EXC_ARM_DA_ALIGN, address=0x16c732e9)

    0xca87f42 <+866>: vld1.16 {d20[2]}, [r4:16]
    0xca87f46 <+870>: vld1.16 {d22[2]}, [r5:16]

    r4 is : unsigned int 0x16c732e9


    Code (CSharp):
    1. #0    0x0ca87f42 in cv::RemapVec_8u::operator()(cv::Mat const&, void*, short const*, unsigned short const*, void const*, int) const ()
    2. #1    0x0ca68f04 in void cv::remapBilinear<cv::FixedPtCast<int, unsigned char, 15>, cv::RemapVec_8u, short>(cv::Mat const&, cv::Mat&, cv::Mat const&, cv::Mat const&, void const*, int, cv::Scalar_<double> const&) ()
    3. #2    0x0ca8479a in cv::RemapInvoker::operator()(cv::Range const&) const ()
    4. #3    0x0c671f6a in cv::parallel_for_(cv::Range const&, cv::ParallelLoopBody const&, double) ()
    5. #4    0x0ca65db8 in cv::remap(cv::_InputArray const&, cv::_OutputArray const&, cv::_InputArray const&, cv::_InputArray const&, int, int, cv::Scalar_<double> const&) ()
    6. #5    0x0ca87440 in cv::WarpPerspectiveInvoker::operator()(cv::Range const&) const ()
    7. #6    0x0c67204a in cv::parallel_for_(cv::Range const&, cv::ParallelLoopBody const&, double) ()
    8. #7    0x0ca7ab90 in cv::hal::warpPerspective(int, unsigned char const*, unsigned long, int, int, unsigned char*, unsigned long, int, int, double const*, int, int, double const*) ()
    9. #8    0x0ca7b318 in cv::warpPerspective(cv::_InputArray const&, cv::_OutputArray const&, cv::_InputArray const&, cv::Size_<int>, int, int, cv::Scalar_<double> const&) ()
    10. #9    0x031cc1a2 in ::imgproc_Imgproc_warpPerspective_13(cv::Mat *, cv::Mat *, cv::Mat *, double, double) at /Users/satoo/opencv/ios/opencvforunity/opencvforunity/imgproc.inl.hpp:7928
    11. #10    0x00660ce0 in ::Imgproc_imgproc_Imgproc_warpPerspective_13_m03BA780C12E99C824D9C008D71EDE58E795F6643(intptr_t, intptr_t, intptr_t, double, double, const RuntimeMethod *) at /Users/nico/Dev/Wakatoon_U2019/wakatoon-app/Builds/3.26.825/Release/AppStore/iOS/Wakatoon/Classes/Native/Assembly-CSharp7.cpp:40617
    12. #11    0x00660c8c in ::Imgproc_warpPerspective_mABACE99922CB32615EA54FD314D56FC4386C631C(Mat_t5700E97FC23BEBB18DB979934E55BA217DF56ABA *, Mat_t5700E97FC23BEBB18DB979934E55BA217DF56ABA *, Mat_t5700E97FC23BEBB18DB979934E55BA217DF56ABA *, Size_tC77933A87BEB21A122862E88DA94FF80955E7E09 *, const RuntimeMethod *) at /Users/nico/Dev/Wakatoon_U2019/wakatoon-app/Builds/3.26.825/Release/AppStore/iOS/Wakatoon/Classes/Native/Assembly-CSharp7.cpp:37311
    13. #12    0x0057ac24 in ::SquareDetector_warpIdentificationImage_mDA9EFDE863CBE0CB88FEE8306B016425916C4F51(SquareDetector_t48FEB31804CCB38E8896AC5498411AE30F46395B *, wvImage_t217F8842A96CF74A6E223CC14474EE7EEB811C3B *, Square_t4DE7FAD409BD5E08C897EF73591C6BF191B805F0 *, const RuntimeMethod *) at /Users/nico/Dev/Wakatoon_U2019/wakatoon-app/Builds/3.26.825/Release/AppStore/iOS/Wakatoon/Classes/Native/Assembly-CSharp23.cpp:12233
    14.  
     
  2. cel

    cel

    Joined:
    Feb 15, 2011
    Posts:
    46
    Hi enox,

    First, a big thank you for making Playmaker actions for code-challenged people like me.
    Second, I've been trying to detect a marker with the Playmaker actions but have no idea where or how to start. Any chance you could make an example or tutorial on how to detect a marker with a camera using Playmaker?

    Thanks for your time
     
  3. Aidan-Wolf

    Aidan-Wolf

    Joined:
    Jan 6, 2014
    Posts:
    59
  4. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Thank you very much for reporting.
    Could you send your code to the contact form? https://enoxsoftware.com/opencvforunity/contact/other-inquiry/
     
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Is the marker you want to detect an ArUco module marker?
     
  6. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  7. cel

    cel

    Joined:
    Feb 15, 2011
    Posts:
    46
    Yes
     
  8. rammahajan009

    rammahajan009

    Joined:
    Jun 11, 2018
    Posts:
    1
    @EnoxSoftware I am trying the OpenPose example, but it crashes the Unity Editor itself. I also tried an Android build, but it's the same issue: it crashes when I press Open Pose Example under dnn. Please help me out.

    Unity version: 2018.4.0f1
     
  9. chrisk

    chrisk

    Joined:
    Jan 23, 2009
    Posts:
    704
    Hi,
    I'm curious about the facial capture feature and I'd love to try it for my game.
    However, many of your samples require both OpenCV and Dlib, and I think it makes sense to combine the two products into one, perhaps at a significant discount. Even with a 50% discount, buying both of them seems pretty expensive.
    Isn't it just a wrapper around the open-source OpenCV? Or is there something I'm missing?
    Thanks.
     
  10. viscopic_leon

    viscopic_leon

    Joined:
    Jan 8, 2019
    Posts:
    2
    Hi,

    I'm wondering if there is some way to have a Dnn Net running in a separate thread or Job.
    Right now it freezes my main thread whenever it runs, which is not desirable in my application.

    Has anyone tried or achieved running net.forward in a separate thread or Job?
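
    For context, this is roughly the pattern I have in mind (an untested sketch: it assumes the Net and the cloned input Mat are only touched by one thread at a time, the blob size and mean are placeholders for whatever model is loaded, and there is no locking around the shared fields):

    Code (CSharp):
    using System.Threading.Tasks;
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.DnnModule;

    // ...

    bool inferenceRunning = false;
    Mat latestResult;

    void RunInferenceAsync (Net net, Mat frame)
    {
        if (inferenceRunning) return;   // skip frames while the net is still busy
        inferenceRunning = true;

        Mat input = frame.clone ();     // the worker thread owns its own copy of the frame

        Task.Run (() => {
            // blob size / mean values are placeholders for the actual model.
            Mat blob = Dnn.blobFromImage (input, 1.0, new Size (300, 300), new Scalar (0, 0, 0), false, false);
            net.setInput (blob);
            Mat result = net.forward ();

            latestResult = result;      // read this from Update () on the main thread
            blob.Dispose ();
            input.Dispose ();
            inferenceRunning = false;
        });
    }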

    Thanks!
     
  11. JonBanana

    JonBanana

    Joined:
    Feb 5, 2014
    Posts:
    85
  12. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Unfortunately, since the SIFT and SURF algorithms have patent issues, they are not included in OpenCV for Unity.
    The native library included in OpenCVForUnity is built with the OPENCV_ENABLE_NONFREE flag disabled. To use the SIFT and SURF algorithms, rebuild the OpenCV library with OPENCV_ENABLE_NONFREE enabled. For more details, see the section on “How to use OpenCV Dynamic Link Library with customized build settings” in ReadMe.pdf.
     
  13. vkajudiya

    vkajudiya

    Joined:
    Mar 24, 2014
    Posts:
    9
    Is it possible to detect text or characters, like letters, digits 0-9, and shapes, using OpenCV with the device camera? I found a sample that detects text from an image, but is it possible to do it with real-time camera output?
     
  14. JonBanana

    JonBanana

    Joined:
    Feb 5, 2014
    Posts:
    85
    Thanks for your answer, I'll try it.
     
  15. vkajudiya

    vkajudiya

    Joined:
    Mar 24, 2014
    Posts:
    9
    TextDetectionExample and TextRecognitionExample work on words, not on single characters like a-z or 1-9.
    Is there any way to recognize individual characters instead of whole words?
     

    Attached Files:

    • 00.jpg
  16. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I tested an example that uses deep learning for character detection. I think the accuracy of character detection has improved a little.
    textbox.prototxt https://raw.githubusercontent.com/opencv/opencv_contrib/master/modules/text/samples/textbox.prototxt
    TextBoxes_icdar13.caffemodel https://www.dropbox.com/s/g8pjzv2de9gty8g/TextBoxes_icdar13.caffemodel?dl=0
    DnnTextRecognitionExample.PNG
    Code (CSharp):
    1. #if !UNITY_WSA_10_0
    2.  
    3. using UnityEngine;
    4. using UnityEngine.SceneManagement;
    5. using System;
    6. using System.Collections;
    7. using System.Collections.Generic;
    8. using System.Xml;
    9. using OpenCVForUnity.CoreModule;
    10. using OpenCVForUnity.ImgcodecsModule;
    11. using OpenCVForUnity.TextModule;
    12. using OpenCVForUnity.ImgprocModule;
    13. using OpenCVForUnity.UnityUtils;
    14.  
    15. namespace OpenCVForUnityExample
    16. {
    17.     /// <summary>
    18.     /// Text Detection Example
    19.     /// A demo script of the Extremal Region Filter algorithm described in:Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012.
    20.     /// Referring to https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.py.
    21.     /// </summary>
    22.     public class DnnTextRecognitionExample : MonoBehaviour
    23.     {
    24.  
    25.         /// <summary>
    26.         /// IMAGE_FILENAME
    27.         /// </summary>
    28.         protected static readonly string IMAGE_FILENAME = "text/test_text.jpg";
    29.         //protected static readonly string IMAGE_FILENAME = "text/00.jpg";
    30.         //protected static readonly string IMAGE_FILENAME = "text/scenetext01.jpg";
    31.         //protected static readonly string IMAGE_FILENAME = "text/scenetext04.jpg";
    32.  
    33.         /// <summary>
    34.         /// The image filepath.
    35.         /// </summary>
    36.         string image_filepath;
    37.  
    38.         /// <summary>
    39.         /// MODEL_ARCH_FILENAME https://raw.githubusercontent.com/opencv/opencv_contrib/master/modules/text/samples/textbox.prototxt
    40.         /// </summary>
    41.         protected static readonly string MODEL_ARCH_FILENAME = "text/textbox.prototxt";
    42.  
    43.         /// <summary>
    44.         /// model_arch_filepath
    45.         /// </summary>
    46.         string model_arch_filepath;
    47.  
    48.         /// <summary>
    49.         /// MODEL_WEIGHTS_FILENAME https://www.dropbox.com/s/g8pjzv2de9gty8g/TextBoxes_icdar13.caffemodel?dl=0
    50.         /// </summary>
    51.         protected static readonly string MODEL_WEIGHTS_FILENAME = "text/TextBoxes_icdar13.caffemodel";
    52.  
    53.         /// <summary>
    54.         /// model_weights_filepath
    55.         /// </summary>
    56.         string model_weights_filepath;
    57.  
    58.         /// <summary>
    59.         /// CLASSIFIER_NM_2_FILENAME
    60.         /// </summary>
    61.         protected static readonly string OCRHMM_TRANSITIONS_TABLE_FILENAME = "text/OCRHMM_transitions_table.xml";
    62.  
    63.         /// <summary>
    64.         /// The OCRHMM transitions table filepath.
    65.         /// </summary>
    66.         string OCRHMM_transitions_table_filepath;
    67.  
    68.         /// <summary>
    69.         /// CLASSIFIER_NM_2_FILENAME
    70.         /// </summary>
    71.         protected static readonly string OCRHMM_KNN_MODEL_FILENAME = "text/OCRHMM_knn_model_data.xml";
    72.  
    73.         /// <summary>
    74.         /// The OCRHMM knn model data filepath.
    75.         /// </summary>
    76.         string OCRHMM_knn_model_data_filepath;
    77.  
    78.         #if UNITY_WEBGL && !UNITY_EDITOR
    79.         IEnumerator getFilePath_Coroutine;
    80.         #endif
    81.  
    82.  
    83.         // Use this for initialization
    84.         void Start ()
    85.         {
    86.             #if UNITY_WEBGL && !UNITY_EDITOR
    87.             getFilePath_Coroutine = GetFilePath ();
    88.             StartCoroutine (getFilePath_Coroutine);
    89.             #else
    90.             image_filepath = Utils.getFilePath (IMAGE_FILENAME);
    91.             model_arch_filepath = Utils.getFilePath (MODEL_ARCH_FILENAME);
    92.             model_weights_filepath = Utils.getFilePath (MODEL_WEIGHTS_FILENAME);
    93.             OCRHMM_transitions_table_filepath = Utils.getFilePath (OCRHMM_TRANSITIONS_TABLE_FILENAME);
    94.             #if UNITY_ANDROID && !UNITY_EDITOR
    95.             OCRHMM_knn_model_data_filepath = Utils.getFilePath (OCRHMM_KNN_MODEL_FILENAME);
    96.             #else
    97.             OCRHMM_knn_model_data_filepath = Utils.getFilePath (OCRHMM_KNN_MODEL_FILENAME + ".gz");
    98.             #endif
    99.             Run ();
    100.             #endif
    101.         }
    102.  
    103.         #if UNITY_WEBGL && !UNITY_EDITOR
    104.         private IEnumerator GetFilePath ()
    105.         {
    106.             var getFilePathAsync_0_Coroutine = Utils.getFilePathAsync (IMAGE_FILENAME, (result) => {
    107.                 image_filepath = result;
    108.             });
    109.             yield return getFilePathAsync_0_Coroutine;
    110.  
    111.             var getFilePathAsync_1_Coroutine = Utils.getFilePathAsync (MODEL_ARCH_FILENAME, (result) => {
    112.                 model_arch_filepath = result;
    113.             });
    114.             yield return getFilePathAsync_1_Coroutine;
    115.  
    116.             var getFilePathAsync_2_Coroutine = Utils.getFilePathAsync (MODEL_WEIGHTS_FILENAME, (result) => {
    117.                 model_weights_filepath = result;
    118.             });
    119.             yield return getFilePathAsync_2_Coroutine;
    120.  
    121.             var getFilePathAsync_3_Coroutine = Utils.getFilePathAsync (OCRHMM_TRANSITIONS_TABLE_FILENAME, (result) => {
    122.                 OCRHMM_transitions_table_filepath = result;
    123.             });
    124.             yield return getFilePathAsync_3_Coroutine;
    125.  
    126.             var getFilePathAsync_4_Coroutine = Utils.getFilePathAsync (OCRHMM_KNN_MODEL_FILENAME+".gz", (result) => {
    127.                 OCRHMM_knn_model_data_filepath = result;
    128.             });
    129.             yield return getFilePathAsync_4_Coroutine;
    130.  
    131.             getFilePath_Coroutine = null;
    132.  
    133.             Run ();
    134.         }
    135.         #endif
    136.  
    137.         private void Run ()
    138.         {
    139.             //if true, The error log of the Native side OpenCV will be displayed on the Unity Editor Console.
    140.             Utils.setDebugMode (true);
    141.  
    142.  
    143.             Mat frame = Imgcodecs.imread (image_filepath);
    144.             if (frame.empty ()) {
    145.                 Debug.LogError ("text/scenetext01.jpg is not loaded. Please copy from “OpenCVForUnity/StreamingAssets/text/” to “Assets/StreamingAssets/” folder. ");
    146.             }
    147.  
    148.             Mat binaryMat = new Mat();
    149.             Mat maskMat = new Mat();
    150.  
    151.  
    152.  
    153.             //Text Detection Dnn
    154.             Imgproc.cvtColor (frame, frame, Imgproc.COLOR_BGR2RGB);
    155.             Imgproc.cvtColor (frame, binaryMat, Imgproc.COLOR_RGB2GRAY);
    156.             Imgproc.threshold (binaryMat, binaryMat, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
    157.             Core.absdiff (binaryMat, new Scalar (255), maskMat);
    158.  
    159.  
    160.  
    161.             TextDetectorCNN textSpotter = TextDetectorCNN.create(model_arch_filepath, model_weights_filepath);
    162.             MatOfRect bbox = new MatOfRect();
    163.             MatOfFloat confidence = new MatOfFloat();
    164.             textSpotter.detect(frame, bbox, confidence);
    165.  
    166.  
    167.             float thres = 0.50f;
    168.  
    169.             List<OpenCVForUnity.CoreModule.Rect> rects = bbox.toList();
    170.             List<float> confidences = confidence.toList();
    171.  
    172.             bbox.Dispose();
    173.             confidence.Dispose();
    174.  
    175.  
    176.             //Text Recognition (OCR)
    177.  
    178.             List<Mat> detections = new List<Mat> ();
    179.  
    180.             //Extend rects
    181.             for (int i = 0; i < (int)rects.Count; i++)
    182.             {
    183.  
    184.                 rects[i].inflate(6, 5);
    185.                 rects[i] = rects[i].intersect(new OpenCVForUnity.CoreModule.Rect(0, 0, frame.cols(), frame.rows()));
    186.  
    187.                 //Debug.Log(i + " " + rects[i].ToString());
    188.             }
    189.  
    190.             //Extract words
    191.             for (int i = 0; i < (int)rects.Count; i++)
    192.             {
    193.                 //Debug.Log(i + " " + rects[i].ToString());
    194.                 //Debug.Log(i + " " + confidenceList[i].ToString());
    195.  
    196.                 if (confidences[i] > thres)
    197.                 {
    198.                     Mat group_img = new Mat();
    199.                     maskMat.submat(rects[i]).copyTo(group_img);
    200.                     int border = 15;
    201.                     Core.copyMakeBorder(group_img, group_img, border, border, border, border, Core.BORDER_CONSTANT, new Scalar(0));
    202.                     detections.Add(group_img);
    203.                 }
    204.                 else
    205.                 {
    206.                     rects.RemoveAt(i);
    207.                     confidences.RemoveAt(i);
    208.                     i--;
    209.                 }
    210.  
    211.             }
    212.  
    213.             Debug.Log("detections.Count " + detections.Count);
    214.  
    215.  
    216.             Mat transition_p = new Mat(62, 62, CvType.CV_64FC1);
    217.             //            string filename = "OCRHMM_transitions_table.xml";
    218.             //            FileStorage fs(filename, FileStorage::READ);
    219.             //            fs["transition_probabilities"] >> transition_p;
    220.             //            fs.release();
    221.  
    222.             //Load TransitionProbabilitiesData.
    223.             transition_p.put(0, 0, GetTransitionProbabilitiesData(OCRHMM_transitions_table_filepath));
    224.  
    225.             Mat emission_p = Mat.eye(62, 62, CvType.CV_64FC1);
    226.             string voc = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    227.             OCRHMMDecoder decoder = OCRHMMDecoder.create(
    228.                                         OCRHMM_knn_model_data_filepath,
    229.                                         voc, transition_p, emission_p);
    230.  
    231.             //#Visualization
    232.             for (int i = 0; i < rects.Count; i++)
    233.             {
    234.  
    235.                 Imgproc.rectangle(frame, new Point(rects[i].x, rects[i].y), new Point(rects[i].x + rects[i].width, rects[i].y + rects[i].height), new Scalar(255, 0, 0), 2);
    236.                 Imgproc.rectangle(frame, new Point(rects[i].x, rects[i].y), new Point(rects[i].x + rects[i].width, rects[i].y + rects[i].height), new Scalar(255, 255, 255), 1);
    237.  
    238.  
    239.                 string output = decoder.run(detections[i], 0);
    240.                 if (!string.IsNullOrEmpty(output))
    241.                 {
    242.                     Debug.Log(i + " output " + output);
    243.                     Imgproc.putText(frame, output, new Point(rects[i].x, rects[i].y), Imgproc.FONT_HERSHEY_SIMPLEX, 0.5, new Scalar(0, 0, 255), 1, Imgproc.LINE_AA, false);
    244.                 }
    245.             }
    246.  
    247.  
    248.             Texture2D texture = new Texture2D(frame.cols(), frame.rows(), TextureFormat.RGBA32, false);
    249.  
    250.             Utils.matToTexture2D(frame, texture);
    251.  
    252.  
    253.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    254.  
    255.  
    256.             for (int i = 0; i < detections.Count; i++)
    257.             {
    258.                 detections[i].Dispose();
    259.             }
    260.             binaryMat.Dispose();
    261.             maskMat.Dispose();
    262.  
    263.  
    264.             Utils.setDebugMode (false);
    265.         }
    266.    
    267.         // Update is called once per frame
    268.         void Update ()
    269.         {
    270.  
    271.         }
    272.  
    273.         /// <summary>
    274.         /// Gets the transition probabilities data.
    275.         /// </summary>
    276.         /// <returns>The transition probabilities data.</returns>
    277.         /// <param name="filePath">File path.</param>
    278.         double[] GetTransitionProbabilitiesData (string filePath)
    279.         {
    280.             XmlDocument xmlDoc = new XmlDocument ();
    281.             xmlDoc.Load (filePath);
    282.  
    283.  
    284.             XmlNode dataNode = xmlDoc.GetElementsByTagName ("data").Item (0);
    285. //            Debug.Log ("dataNode.InnerText " + dataNode.InnerText);
    286.             string[] dataString = dataNode.InnerText.Split (new string[] {
    287.                 " ",
    288.                 "\r\n", "\n"
    289.             }, StringSplitOptions.RemoveEmptyEntries);
    290. //            Debug.Log ("dataString.Length " + dataString.Length);
    291.  
    292.             double[] data = new double[dataString.Length];
    293.             for (int i = 0; i < data.Length; i++) {
    294.                 try {
    295.                     data [i] = Convert.ToDouble (dataString [i]);
    296.                 } catch (FormatException) {
    297.                     Debug.Log ("Unable to convert '{" + dataString [i] + "}' to a Double.");
    298.                 } catch (OverflowException) {
    299.                     Debug.Log ("'{" + dataString [i] + "}' is outside the range of a Double.");
    300.                 }
    301.             }      
    302.  
    303.             return data;
    304.         }
    305.  
    306.         /// <summary>
    307.         /// Raises the destroy event.
    308.         /// </summary>
    309.         void OnDestroy ()
    310.         {
    311.             #if UNITY_WEBGL && !UNITY_EDITOR
    312.             if (getFilePath_Coroutine != null) {
    313.                 StopCoroutine (getFilePath_Coroutine);
    314.                 ((IDisposable)getFilePath_Coroutine).Dispose ();
    315.             }
    316.             #endif
    317.         }
    318.  
    319.         /// <summary>
    320.         /// Raises the back button click event.
    321.         /// </summary>
    322.         public void OnBackButtonClick ()
    323.         {
    324.             SceneManager.LoadScene ("OpenCVForUnityExample");
    325.         }
    326.     }
    327. }
    328. #endif
     

    Attached Files:

    vkajudiya likes this.
  17. viscopic_leon

    viscopic_leon

    Joined:
    Jan 8, 2019
    Posts:
    2
    I followed the steps in the ReadMe and replaced the opencvforunity DLLs with the DLLs from the Extra/dll_version/Windows/ folder and added [...]\opencv\build\install\x64\vc16\bin to my path. However, now I'm getting the following errors:

    "Plugins: Failed to load 'Assets/OpenCVForUnity/Extra/dll_version/Windows/x86_64/opencvforunity.dll' because one or more of its dependencies could not be loaded."
    "DllNotFoundException: opencvforunity"

    Did I miss something when building OpenCV? I made sure to include the opencv_contrib modules.

    Since I'm building from the master branch, all my DLLs are named opencv_xxx420.dll. Are you expecting opencv_xxx412.dll files? Or are you just linking opencv_world412.dll? Then I would have to build that too.
     
    Last edited: Dec 9, 2019
  18. JonBanana

    JonBanana

    Joined:
    Feb 5, 2014
    Posts:
    85

    Hi,
    I managed to write Python code for feature matching with AKAZE instead of SIFT. I'm trying to convert it to C#, but I don't know how to convert these lines:

    FLANN_INDEX_LSH = 6

    index_params = dict(algorithm = FLANN_INDEX_LSH,
                        table_number = 6,
                        key_size = 12,
                        multi_probe_level = 1)

    search_params = dict(checks = 60)

    flann = cv2.FlannBasedMatcher(index_params, search_params)
    matches = flann.knnMatch(des1, des2, k=2)

    Can you help me? Thanks a lot.
     
  19. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    It is necessary to build the OpenCV version used in OpenCVForUnity 2.3.7.
    opencv 4.1.0 https://github.com/opencv/opencv/tree/64168fc20aa8a914cb5529f90ffac309854563b1
    opencv_contrib 4.1.0 https://github.com/opencv/opencv_contrib/tree/2c32791a9c500343568a21ea34bf2daeac2adae7
     
  20. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    You can read parameters using the read () method.
    Code (CSharp):
    1.         var flann = FlannBasedMatcher.create();
    2.         flann.read(Utils.getFilePath("conf.yml"));
    Assets/StreamingAssets/conf.yml
    Code (CSharp):
    1. %YAML:1.0
    2. ---
    3. format: 3
    4. indexParams:
    5.    -
    6.       name: algorithm
    7.       type: 9
    8.       value: 6
    9.    -
    10.       name: table_number
    11.       type: 4
    12.       value: 6
    13.    -
    14.       name: key_size
    15.       type: 4
    16.       value: 12
    17.    -
    18.       name: multi_probe_level
    19.       type: 4
    20.       value: 1
    21. searchParams:
    22.    -
    23.       name: checks
    24.       type: 4
    25.       value: 50
    26.    -
    27.       name: eps
    28.       type: 5
    29.       value: 0.
    30.    -
    31.       name: sorted
    32.       type: 8
    33.       value: 1
    34.  
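    After reading the configuration, the knnMatch call itself converts almost one-to-one from your Python code. A minimal sketch (des1 and des2 stand for your AKAZE descriptor Mats, and the 0.75 ratio-test threshold is just an example value):

    Code (CSharp):
    using System.Collections.Generic;
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.Features2dModule;
    using OpenCVForUnity.UnityUtils;

    // ...

    var flann = FlannBasedMatcher.create ();
    flann.read (Utils.getFilePath ("conf.yml"));

    // Equivalent of: matches = flann.knnMatch(des1, des2, k=2)
    List<MatOfDMatch> knnMatches = new List<MatOfDMatch> ();
    flann.knnMatch (des1, des2, knnMatches, 2);

    // Lowe's ratio test to keep only the good matches.
    List<DMatch> goodMatches = new List<DMatch> ();
    foreach (MatOfDMatch m in knnMatches)
    {
        DMatch[] pair = m.toArray ();
        if (pair.Length >= 2 && pair[0].distance < 0.75f * pair[1].distance)
            goodMatches.Add (pair[0]);
    }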
     

    Attached Files:

  21. JonBanana

    JonBanana

    Joined:
    Feb 5, 2014
    Posts:
    85

    Thanks a lot for your answer !
     
  22. cel

    cel

    Joined:
    Feb 15, 2011
    Posts:
    46
    Just a friendly bump... yes, I'm using ArUco.
     
  23. fengkan

    fengkan

    Joined:
    Jul 10, 2018
    Posts:
    82
    Is there any way to detect facial expressions easily now?
     
  24. vkajudiya

    vkajudiya

    Joined:
    Mar 24, 2014
    Posts:
    9

    This example works fine, but I want to detect only single characters, like a to z and 1 to 9; single-character detection is not working.

    -> I tried the MarkerLess AR example and built patterns for all the numbers like this:

    patternvk = new Pattern[8];
    patternTrackingInfovk = new PatternTrackingInfo[8];
    patternDetectorvk = new PatternDetector[8];
    for (int i = 0; i < texturepattern.Length - 1; i++)
    {
        patternvk[i] = new Pattern();
        patternTrackingInfovk[i] = new PatternTrackingInfo();
        patternDetectorvk[i] = new PatternDetector(null, null, null, true);

        Mat patternMat = new Mat(texturepattern[i].height, texturepattern[i].width, CvType.CV_8UC4);
        Utils.texture2DToMat(texturepattern[i], patternMat);

        patternDetectorvk[i].buildPatternFromImage(patternMat, patternvk[i]);
        patternDetectorvk[i].train(patternvk[i]);
    }


    --> and compare the patterns against the webcam texture in Update() like this:

    for (int i = 0; i < texturepattern.Length - 1; i++)
    {
        bool patternFound = patternDetectorvk[i].findPattern(grayMat, patternTrackingInfovk[i]);
        if (patternFound)
        {
            patternFound = false;
            txtdetection.text = "" + texturepattern[i].name;
        }
    }


    The code above is able to detect the numbers in real time, but it takes around 5 to 10 seconds and the FPS is very low.

    Is there any other way to detect only such numbers? I just want to detect which number was scanned; I don't want to show the webcam texture or place any 3D model on the detected marker. The marker is a 1.5x1.5 inch square.

    Please suggest a solution for this kind of detection.
     

    Attached Files:

    Last edited: Dec 16, 2019
  25. LR-Developer

    LR-Developer

    Joined:
    May 5, 2017
    Posts:
    109
    Hello,

    I bought the OpenCV asset and imported it into a new project.
    Then I downloaded and added the MarkerBasedARExample.
    I want to create an app that runs on both HoloLens and Android devices.
    I have read that OpenCV for Unity is Windows UWP compatible.

    The marker-based webcam sample runs very well on my Android device, but how do I get it working on my HoloLens?

    I switched to UWP, set "Virtual Reality Supported" to true, and added "Windows Mixed Reality".
    I build for UWP and use Visual Studio 2019 to deploy, but when the scene starts I get an exception in App.cpp:

    upload_2019-12-16_9-7-0.png

    Any help please? How do I get markers working on both Android and HoloLens?

    Thanks a lot :)
     
  26. LR-Developer

    LR-Developer

    Joined:
    May 5, 2017
    Posts:
    109
    PS: I am using Unity 2019.3.0f1. I tried switching to .NET Standard 2.0 and .NET 4.x, I added permissions for the webcam and microphone, and I deactivated the Quad renderer with the webcam picture. I have no idea what's wrong; I still get the same error as above...

    PPS: I downloaded the HoloLens With OpenCVForUnity Example. It requires a three-year-old HoloToolkit package? Is there a newer version that uses the Mixed Reality Toolkit somewhere?

    How do I get the marker-based sample working on my HoloLens?

    Thanks a lot :)
     
    Last edited: Dec 16, 2019
  27. YujiOkaniwa

    YujiOkaniwa

    Joined:
    Dec 16, 2019
    Posts:
    1
    Hi,
    I am creating a video capture app for UWP.
    I want to create a VideoWriter that uses the H.264 codec.
    However, only MJPEG can be exported.
    Is there a way to do this?
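
    For reference, this is roughly the writer setup that does work for me with MJPEG (a minimal sketch; the file path, FPS, and frame size are placeholders):

    Code (CSharp):
    using OpenCVForUnity.CoreModule;
    using OpenCVForUnity.VideoioModule;
    using UnityEngine;

    // ...

    // MJPEG opens fine; switching the fourcc to an H.264 code is where it fails for me.
    VideoWriter writer = new VideoWriter ();
    writer.open ("capture.avi", VideoWriter.fourcc ('M', 'J', 'P', 'G'), 30, new Size (1280, 720));
    if (!writer.isOpened ())
        Debug.LogError ("VideoWriter could not be opened.");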
     
  28. vkajudiya

    vkajudiya

    Joined:
    Mar 24, 2014
    Posts:
    9
    I used an array of PatternDetector with 9 objects and tried to compare them against the real-time webcam texture in Update(), but the response is very slow.

    void Update ()
    {
        if (webCamTextureToMatHelper.IsPlaying () && webCamTextureToMatHelper.DidUpdateThisFrame ())
        {
            Mat rgbaMat = webCamTextureToMatHelper.GetMat ();
            Imgproc.cvtColor (rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
            //bool patternFound = patternDetector.findPattern (grayMat, patternTrackingInfo);

            for (int i = 0; i < texturepattern.Length - 1; i++)
            {
                bool patternFound = patternDetectorvk[i].findPattern(grayMat, patternTrackingInfovk[i]);
                if (patternFound)
                {
                    patternFound = false;
                    txtdetection.text = "" + texturepattern[i].name;
                }
            }
        }
    }

    Is there any way to use multiple custom markers? I just want to detect which marker is being shown. I tried Vuforia, ARCore, and ARKit, but none of them helped, because the marker image is a simple number and the size is only 1.5x1.5 inches; 6 and 9 conflict because both markers look almost the same. We use an underscore on both markers, but it is still confusing.

    I want to detect the markers below.
     

    Attached Files:

  29. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  30. fengkan

    fengkan

    Joined:
    Jul 10, 2018
    Posts:
    82
  31. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Thank you very much for reporting.
    Can you tell me the environment you tested?
    Unity version :
    OpenCVforUnity version :
    MarkerBasedARExample version :
    Visual Studio version :

    Also,
    HoloLensWithOpenCVforUnityExample does not currently support MRTKv2.
     
  32. LR-Developer

    LR-Developer

    Joined:
    May 5, 2017
    Posts:
    109
    Unity 2019.3.0f1
    OpenCV: latest from store (2.3.7)
    MarkerBased: latest from store (1.2.2)
    Visual Studio 2019 also latest 16.4.1 with all updates available at this point

    Other HoloLens Unity apps work great. Currently I would just like to get the marker-based sample working in a project where I can switch platforms between UWP for HoloLens and Android.

    If this is a problem, I could also use 2019.2 or Visual Studio 2017...

    Thanks a lot for helping :)
     
  33. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Unfortunately, OpenCVForUnity does not support the H.264 codec for VideoWriter on the UWP platform.
     
  34. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    11
    Hi, I get an error when I try to calibrate the camera with a ChArUco board. I used the example scene ArUcoCameraCalibrationExample for this, but when I click the capture button a CvException is thrown.

    CvException: CvType.CV_32SC2 != m.type() || m.cols()!=1
    Mat [ 0*0*CV_8UC1, isCont=False, isSubmat=False, nativeObj=0x2672589964496, dataAddr=0x0 ]
    OpenCVForUnity.UtilsModule.Converters.Mat_to_vector_Mat (OpenCVForUnity.CoreModule.Mat m, System.Collections.Generic.List`1[T] mats) (at Assets/OpenCVForUnity/org/opencv/utils/Converters.cs:336)
    OpenCVForUnity.ArucoModule.Aruco.calibrateCameraCharuco (System.Collections.Generic.List`1[T] charucoCorners, System.Collections.Generic.List`1[T] charucoIds, OpenCVForUnity.ArucoModule.CharucoBoard board, OpenCVForUnity.CoreModule.Size imageSize, OpenCVForUnity.CoreModule.Mat cameraMatrix, OpenCVForUnity.CoreModule.Mat distCoeffs, System.Collections.Generic.List`1[T] rvecs, System.Collections.Generic.List`1[T] tvecs, System.Int32 flags) (at Assets/OpenCVForUnity/org/opencv_contrib/aruco/Aruco.cs:483)
    OpenCVForUnityExample.ArUcoCameraCalibrationExample.CalibrateCameraCharuco (System.Collections.Generic.List`1[T] allCorners, System.Collections.Generic.List`1[T] allIds, OpenCVForUnity.ArucoModule.CharucoBoard board, OpenCVForUnity.CoreModule.Size imageSize, OpenCVForUnity.CoreModule.Mat cameraMatrix, OpenCVForUnity.CoreModule.Mat distCoeffs, System.Collections.Generic.List`1[T] rvecs, System.Collections.Generic.List`1[T] tvecs, System.Int32 calibrationFlags, System.Int32 minMarkers) (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ArUcoCameraCalibrationExample.cs:698)
    OpenCVForUnityExample.ArUcoCameraCalibrationExample.CaptureFrame (OpenCVForUnity.CoreModule.Mat frameMat) (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ArUcoCameraCalibrationExample.cs:552)
    OpenCVForUnityExample.ArUcoCameraCalibrationExample.Update () (at Assets/OpenCVForUnity/Examples/ContribModules/aruco/ArUcoExample/ArUcoCameraCalibrationExample.cs:280)
     
  35. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The MarkerLessARExample code is a rewrite of https://github.com/MasteringOpenCV/code/tree/master/Chapter3_MarkerlessAR using “OpenCV for Unity”.

    In your code, the findPattern () method is called multiple times to detect multiple markers. However, the getGray () and extractFeatures () methods do not need to run again for every pattern in a frame, so you can reduce the amount of computation by computing them once at the beginning of the frame and reusing the result.
    https://github.com/EnoxSoftware/Mar...ple/MarkerLessAR/PatternDetector.cs#L215-L219
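
    As a rough illustration of the idea (only a sketch: it assumes a hypothetical findPattern overload on PatternDetector that accepts precomputed keypoints and descriptors, which the shipped example does not have, and detector/extractor stand for whatever feature detector and descriptor extractor PatternDetector uses internally; the field names are reused from your code above):

    Code (CSharp):
    // Compute the gray image and its features once per frame...
    Imgproc.cvtColor (rgbaMat, grayMat, Imgproc.COLOR_RGBA2GRAY);
    MatOfKeyPoint frameKeypoints = new MatOfKeyPoint ();
    Mat frameDescriptors = new Mat ();
    detector.detect (grayMat, frameKeypoints);
    extractor.compute (grayMat, frameKeypoints, frameDescriptors);

    // ...then reuse them for every pattern instead of recomputing them
    // inside each findPattern () call (hypothetical overload).
    for (int i = 0; i < patternDetectorvk.Length; i++)
    {
        bool patternFound = patternDetectorvk[i].findPattern (frameKeypoints, frameDescriptors, patternTrackingInfovk[i]);
        if (patternFound)
        {
            txtdetection.text = "" + texturepattern[i].name;
        }
    }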
     
    Last edited: Dec 18, 2019
  36. pekarnik1

    pekarnik1

    Joined:
    Sep 11, 2019
    Posts:
    5
    Hello. Do you have an asynchronous face detection webcam example with Dlib?
     
  37. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Thank you very much for reporting.
    Can you tell me the environment you tested?
    Unity version :
    OpenCVforUnity version :
    Build Platform :
     
    Last edited: Dec 19, 2019
  38. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    11
    Unity Version: 2019.1.3f1
    OpenCVforUnity Version: 2.3.7
    Build Platform: Windows x86_64, but I only used the Editor and did not build
     
  39. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Unfortunately, I don't have an example that combines asynchronous webcam face detection and Dlib.
     
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The test project worked without problems in the following environment.
    If "WebCam" is not set in Capabilities setting, delete the build project folder that was output once, and then build again.

    Windows 10 Pro Version 1903
    Visual Studio 2019
    Windows Mixed Reality Simulator
    Unity 2019.3.0f3
    OpenCVForUnity2.3.7
    MarkerBasedARExample1.2.2

    UWP_MarkerBasedARExample_Setting.png
    UWP_MarkerBasedARExample_Simulator.png
     
  41. ynuteminnof

    ynuteminnof

    Joined:
    Apr 1, 2018
    Posts:
    7
    Hello, I wanted to use SURF, but found out it is not included, so I am trying to use FREAK. I tried to adapt the Java SURF example from the OpenCV docs, but I am probably doing something wrong:

    I realize I am trying to match two identical images; it's only for testing, since I thought it shouldn't matter for keypoints and descriptors. The input image looks right, but the keypoints and descriptors are empty. :(

    I would appreciate any help...

    Unity version : 2019.3.0b12
    OpenCVforUnity version : 2.3.7
    Build Platform : only trying it in editor (Linux)
     
  42. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    When I ran ArUcoCameraCalibrationExample with the following procedure, it worked without any problem.

    1. Create ChArUcoBoard Marker. ( OpenCVForUnity/Examples/Resources/ar_markers/ChArUcoBoard-mx5-my7-d10-os1000-bb1.pdf )
    charucoboard_create.PNG

    2. Adjust Inspector Settings.
    charucoboard_inspector.PNG

    3. Play ArUcoCameraCalibrationExample.
    charucoboard.PNG
     
  43. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    The FREAK algorithm seems to support only feature description, so you have to obtain the keypoints from a separate feature detector (ORB in the example below).


    Code (CSharp):
    1.         var orb = ORB.create();
    2.         var freak = FREAK.create();
    3.  
    4.         MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
    5.         Mat descriptors1 = new Mat();
    6.         Mat mask = new Mat();
    7.  
    8.         orb.detect(img1Mat, keypoints1, mask);
    9.         freak.compute(img1Mat, keypoints1, descriptors1);
    10.  
    11.         MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
    12.         Mat descriptors2 = new Mat();
    13.  
    14.         orb.detect(img2Mat, keypoints2, mask);
    15.         freak.compute(img2Mat, keypoints2, descriptors2);
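    To match the resulting descriptors, note that FREAK produces binary descriptors, so a Hamming-distance matcher is the usual choice. A minimal continuation of the snippet above (BFMatcher with crossCheck enabled is just one reasonable option):

    Code (CSharp):
    // FREAK descriptors are binary, so match them with Hamming distance.
    BFMatcher matcher = BFMatcher.create (Core.NORM_HAMMING, true);
    MatOfDMatch matches = new MatOfDMatch ();
    matcher.match (descriptors1, descriptors2, matches);

    // Visualize the matches for a quick sanity check.
    Mat resultImg = new Mat ();
    Features2d.drawMatches (img1Mat, keypoints1, img2Mat, keypoints2, matches, resultImg);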
     
  44. fengkan

    fengkan

    Joined:
    Jul 10, 2018
    Posts:
    82
    Hi, I have tried integrating the sample above into CV VTuber; it works perfectly, thank you!

    But the detection accuracy is not very good. Is it possible to replace the model without rewriting the code completely? I don't know much about deep learning, so I am wondering whether you can give me some guidance. Thank you.
     
  45. darrenbellenger

    darrenbellenger

    Joined:
    Feb 16, 2014
    Posts:
    6
    I'm using the code from the WebCamTextureExample and wondered if there was any way to easily perform facial alignment? I have purchased both Dlib and OpenCV for Unity.
     
  46. fengkan

    fengkan

    Joined:
    Jul 10, 2018
    Posts:
    82
    By the way, I have integrated the code from the article into the CV VTuber Example:

    https://github.com/fengkan/OpenCVDnnEmotionFerPlusExample
     
    EnoxSoftware likes this.
  47. creat327

    creat327

    Joined:
    Mar 19, 2009
    Posts:
    1,756
    Hi,
    can this be used without Unity too? I'm also writing some other .NET code outside of Unity, and it would be very helpful if this package worked in external apps without Unity.
     
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Unfortunately, I'm not too familiar with deep learning model training.
     
  49. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  50. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Of course, it is possible to use OpenCVForUnity for projects that do not use Unity.