
[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
    Cool, using F11 on any native code call should be sufficient to get to the C++ part (given that you have the pdb debug symbols available). Then just open the Image Watch window (View->Other Windows->Image Watch) and you should see local Mat objects as images there. There are docs here and also here.
     
    EnoxSoftware likes this.
  2. Kluarc

    Kluarc

    Joined:
    May 1, 2018
    Posts:
    5
    Hey Enox,

    I am working on the WebCamTextureMarkerLessARExample scene. I capture a picture of an object using the capture scene, which is saved to my PersistentDataPath. I would like to remove the capture scene and have a single image scanned by the WebCamTextureMarkerLessARExample. After implementing this, the asset does not recognise the object when it is moved to different locations. There have been issues with the lighting as well. I have attached a few screenshots of the problem. A workaround would be highly appreciated.
     

    Attached Files:

    Last edited: May 18, 2020
  3. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    This example seems to be susceptible to slight environmental changes. The phone you're trying to detect has a color and shape on which it is particularly hard to detect feature points.

    MarkerLessARExample Code is a rewrite of https://github.com/MasteringOpenCV/code/tree/master/Chapter3_MarkerlessAR using “OpenCV for Unity”. The algorithm is described in detail in "Mastering OpenCV with Practical Computer Vision Projects".
    https://github.com/MasteringOpenCV/code/issues/52
    http://www.packtpub.com/cool-projects-with-opencv/book
    Since this example is tutorial code, I recommend using Vuforia or a similar SDK for more advanced functionality.
     
  4. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
    Hi. Is it possible to use TensorFlow Lite models with OpenCV for Unity? The model file extension is .tflite
     
  5. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Unfortunately, .tflite files don't seem to be compatible with OpenCV's dnn module.
    https://answers.opencv.org/question/213204/tensorflow-lite-graph-with-opencv-dnn/
    In addition, in order to read a .pb file with OpenCV, it seems that the graph needs to be converted for OpenCV's dnn module (see the text-graph pages below).
    https://stackoverflow.com/questions/60625128/importing-automl-edge-train-model-in-opencv
    https://github.com/opencv/opencv/wiki/TensorFlow-text-graphs
    https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API
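
    For reference, a minimal sketch of loading a converted TensorFlow model (a frozen .pb plus the generated .pbtxt text graph) with the dnn module. It assumes the DnnModule mirrors OpenCV Java's Dnn.readNetFromTensorflow(model, config); the file names are placeholders.
    Code (CSharp):
        using UnityEngine;
        using OpenCVForUnity.DnnModule;

        public class TfPbLoadSketch : MonoBehaviour
        {
            void Start()
            {
                // Placeholder paths; the examples usually read these from StreamingAssets.
                string model = Application.streamingAssetsPath + "/dnn/frozen_inference_graph.pb";
                string config = Application.streamingAssetsPath + "/dnn/frozen_inference_graph.pbtxt";

                // The .pbtxt is the text graph produced by the tf_text_graph_* scripts
                // described in the wiki pages above; without it, many TensorFlow graphs fail to import.
                Net net = Dnn.readNetFromTensorflow(model, config);
                if (net.empty())
                    Debug.LogError("Model could not be loaded; check that both files exist and match each other.");
            }
        }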
     
  6. zyonneo

    zyonneo

    Joined:
    Apr 13, 2018
    Posts:
    386
    For jewelry AR try-on, can we use this plugin? Face tracking including the ears (for earrings), and hand tracking for rings (to test on the ring finger)?
     
  7. glreese

    glreese

    Joined:
    Mar 25, 2017
    Posts:
    13
    Just bought OpenCV for Unity, so I'm new to OpenCV in Unity. I built a simple cartoonify function and tried to deploy to iOS. The app crashes on startup on the iPhone.

    Unity version: 2019.2.17f1. Xcode version: 11.4.

    I wrote a simple process to turn the image into a cartoon when you tap the camera button, but it never gets to run that code; it crashes while the app is loading. Does OpenCV for Unity work with these Unity/Xcode versions?

    Any help appreciated. Thanks
     
  8. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Is the "Camera Usage Description" already set?
    camera_ios.png
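
    If it isn't set yet, it can be entered under Player Settings > Other Settings > Camera Usage Description, or from an editor script. A minimal sketch using the standard UnityEditor API (the menu path and message text are just placeholders):
    Code (CSharp):
        #if UNITY_EDITOR
        using UnityEditor;

        public static class CameraUsageSetup
        {
            // Writes NSCameraUsageDescription into the generated Info.plist;
            // without it, iOS terminates the app on the first camera access.
            [MenuItem("Tools/Set iOS Camera Usage Description")]
            static void SetDescription()
            {
                PlayerSettings.iOS.cameraUsageDescription = "The camera is used for the OpenCV webcam examples.";
            }
        }
        #endif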
     
  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    zyonneo likes this.
  10. glreese

    glreese

    Joined:
    Mar 25, 2017
    Posts:
    13
    Yes, that is set. I added a screenshot of the crash screen in Xcode (not sure whether that is helpful). Screen Shot 2020-05-21 at 7.57.27 AM.png Screen Shot 2020-05-21 at 7.57.47 AM.png
     
  11. glreese

    glreese

    Joined:
    Mar 25, 2017
    Posts:
    13
    Hello. This issue is SOLVED. I needed to run Set Plugin Import Settings. OpenCV newbie here. Thank you!
     
    EnoxSoftware likes this.
  12. crogers

    crogers

    Joined:
    Jul 14, 2012
    Posts:
    8
    I get an access violation when I try to use the opencvforunity.dll on Windows 10 with Unity 2019.3.13 when trying MaskRCNN; MobileNet SSD would also crash:

    Unity Editor by Unity Technologies [version: Unity 2019.3.13f1_d4ddf0d95db9]

    opencvforunity.dll caused an Access Violation (0xc0000005)
    in module opencvforunity.dll at 0033:9becadf9.

    Error occurred at 2020-05-23_195832.
    C:\Program Files\Unity\2019.3.13f1\Editor\Unity.exe, run by chris.

    41% physical memory in use.
    65355 MB physical memory [38111 MB free].
    1192 MB process peak paging file [963 MB used].
    813 MB process peak working set [612 MB used].
    System Commit Total/Limit/Peak: 39312MB/75083MB/41130MB
    System Physical Total/Available: 65355MB/38111MB
    System Process Count: 347
    System Thread Count: 5279
    System Handle Count: 171665
    Disk space data for 'C:\Users\chris\AppData\Local\Temp\Unity\Editor\Crashes\Crash_2020-05-24_025828081\': 27008286720 bytes free of 495080960000 total.

    Read from location 0000000000000000 caused an access violation.

    Context:
    RDI: 0x0000002a6f6ed9d8 RSI: 0x0000000000000000 RAX: 0x0000002a6f6ed810
    RBX: 0x00000204a22d9c20 RCX: 0x0000002a6f6ed810 RDX: 0x00007ff99cba9098
    RIP: 0x00007ff99becadf9 RBP: 0x0000002a6f6ed8a0 SegCs: 0x00007ff900000033
    EFlags: 0x0000000000010246 RSP: 0x0000002a6f6ed7a0 SegSs: 0x00007ff90000002b
    R8: 0x0000002a6f6ed9d8 R9: 0x00007ff99a650000 R10: 0x00007ff99d3499f5
    R11: 0x0000002a6f6ed9d8 R12: 0x000000000000005a R13: 0x0000000000000000
    R14: 0x00000000ffffffff R15: 0x0000002a6f6ed990


    Bytes at CS:EIP:
    48 8b 1e 33 d2 48 8b cb e8 ba 98 ff ff 48 8b d0

    Mono DLL loaded successfully at 'C:\Program Files\Unity\2019.3.13f1\Editor\Data\MonoBleedingEdge\EmbedRuntime\mono-2.0-bdwgc.dll'.


    Stack Trace of Crashed Thread 22900:
    0x00007FF99BECADF9 (opencvforunity) tracking_TrackerCSRT_setInitialMask_10
    0x00007FF99BAD895B (opencvforunity) dnn_Net_setInput_13
    0x00000204BE3324B1 (Assembly-CSharp) OpenCVForUnity.DnnModule.Net.dnn_Net_setInput_13()
    0x00000204BE332373 (Assembly-CSharp) OpenCVForUnity.DnnModule.Net.setInput()
    0x00000204BE2F3283 (Assembly-CSharp) OpenCVForUnityExample.MaskRCNNExample.Run()
    0x00000204BE2EF88B (Assembly-CSharp) OpenCVForUnityExample.MaskRCNNExample.Start()
    0x00000205083C2AB8 (mscorlib) System.Object.runtime_invoke_void__this__()
    0x00007FF9A976CBA0 (mono-2.0-bdwgc) mono_get_runtime_build_info
    0x00007FF9A96F2112 (mono-2.0-bdwgc) mono_perfcounters_init
    0x00007FF9A96FB10F (mono-2.0-bdwgc) mono_runtime_invoke
    ERROR: SymGetSymFromAddr64, GetLastError: 'Attempt to access invalid address.' (Address: 00007FF6C0FD326E)
    0x00007FF6C0FD326E (Unity) (function-name not available)
     
  13. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Are the following files already located in the "Assets/StreamingAssets" folder?
    MaskRcnnExampleFile.PNG
    MaskRcnnExample.PNG
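
    As a quick sanity check, something like this (plain Unity/.NET APIs only; the file names below are placeholders for the files shown in the screenshots) logs anything missing from StreamingAssets at startup:
    Code (CSharp):
        using System.IO;
        using UnityEngine;

        public class StreamingAssetsCheck : MonoBehaviour
        {
            // Placeholder names: substitute the exact model/config files the example expects.
            string[] requiredFiles = { "dnn/your_model_file.pb", "dnn/your_config_file.pbtxt" };

            void Start()
            {
                // Note: this direct file check works in the Editor and in desktop builds;
                // on Android, StreamingAssets lives inside the APK and File.Exists does not apply.
                foreach (var f in requiredFiles)
                {
                    string path = Path.Combine(Application.streamingAssetsPath, f);
                    if (!File.Exists(path))
                        Debug.LogError("Missing file for the dnn example: " + path);
                }
            }
        }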
     
  14. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
  15. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Since this package is a clone of OpenCV Java, you are able to use the same API as OpenCV Java 4.3.0.
    For now, "OpenCV for Unity" does not support the OpenCL module.
    https://github.com/opencv/opencv/issues/8560
     
  16. keremsl123

    keremsl123

    Joined:
    Apr 24, 2016
    Posts:
    2
    Hi, I want to recognize sign language letters, for example "a" in sign language. How can I do it?
     
  17. hungvt_unity

    hungvt_unity

    Joined:
    Apr 28, 2020
    Posts:
    6
    Hi, I'm using Unity 2019.3.9f1. After importing OpenCVForUnity 2.3.9, I got these errors:
    Assets/OpenCVForUnity/Examples/Advanced/ComicFilterExample/ComicFilter.cs(50,32): error CS1729: 'Size' does not contain a constructor that takes 2 arguments.
    and many errors about Size in other files. Please help! I've tried replacing Size with OpenCVForUnity.CoreModule.Size, but that didn't work.
    ================
    Update: never mind, I have a class named Size in my own code; renaming that class solved the problem.
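
    For reference, a sketch of what causes this: a type declared in the global namespace shadows the using-imported OpenCVForUnity.CoreModule.Size inside the example scripts, so renaming it, or moving it into your own namespace as below (MyGame is a hypothetical name), resolves the errors.
    Code (CSharp):
        // Before (conflicts): a Size class in the global namespace hides
        // OpenCVForUnity.CoreModule.Size in every file that uses plain "Size".
        //
        // public class Size { public float width; public float height; }

        // After (no conflict): your own type lives in your own namespace and is
        // referenced as MyGame.Size, so the asset's example scripts are untouched.
        namespace MyGame
        {
            public class Size
            {
                public float width;
                public float height;
            }
        }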
     
    Last edited: May 27, 2020
    EnoxSoftware likes this.
  18. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
    Oh, sorry to hear that. But the DNN module is supported, right? Is it also lacking GPU acceleration?
     
  19. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
  20. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    The native library included in OpenCVForUnity by default is built with the OPENCV_DNN_CUDA flag disabled. To use the CUDA backend, rebuild the OpenCV library with OPENCV_DNN_CUDA enabled.
    https://jamesbowley.co.uk/buildcompile-opencv-v3-3-on-windows-with-cuda-8-0-and-intel-mkltbb/
    https://qiita.com/utibenkei/items/3d9ce5c30ef666e14f44
    For more details, see the section on “How to use OpenCV Dynamic Link Library with customized build settings” in ReadMe.pdf.
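
    With a rebuilt library, selecting the CUDA backend looks roughly like this (a sketch only; it assumes the DnnModule exposes the same backend/target constants as OpenCV Java 4.x, and the model paths are placeholders):
    Code (CSharp):
        using OpenCVForUnity.DnnModule;

        public static class CudaDnnSketch
        {
            // Requires a native opencvforunity library rebuilt with OPENCV_DNN_CUDA;
            // with the default library these calls typically fall back to the CPU backend.
            public static Net LoadWithCuda(string protoPath, string modelPath)
            {
                Net net = Dnn.readNetFromCaffe(protoPath, modelPath);
                net.setPreferableBackend(Dnn.DNN_BACKEND_CUDA);
                net.setPreferableTarget(Dnn.DNN_TARGET_CUDA);
                return net;
            }
        }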
     
    leavittx likes this.
  21. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    It is possible. The following example shows how to convert a VideoPlayer feed to an OpenCV Mat.
    https://github.com/EnoxSoftware/VideoPlayerWithOpenCVForUnityExample
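
    The repository above has the full example; a rough sketch of one common approach (assuming the Utils class lives in OpenCVForUnity.UnityUtils, as in recent versions) is to blit the VideoPlayer's texture into a readable Texture2D each frame and hand it to Utils.texture2DToMat:
    Code (CSharp):
        using UnityEngine;
        using UnityEngine.Video;
        using OpenCVForUnity.CoreModule;
        using OpenCVForUnity.UnityUtils;

        public class VideoPlayerToMatSketch : MonoBehaviour
        {
            public VideoPlayer videoPlayer;   // assumed to already be playing
            Texture2D readableTex;
            Mat rgbaMat;

            void Update()
            {
                Texture src = videoPlayer.texture;
                if (src == null) return;

                if (readableTex == null)
                {
                    readableTex = new Texture2D(src.width, src.height, TextureFormat.RGBA32, false);
                    rgbaMat = new Mat(src.height, src.width, CvType.CV_8UC4);
                }

                // Copy the GPU-side video texture into a CPU-readable Texture2D.
                RenderTexture tmp = RenderTexture.GetTemporary(src.width, src.height, 0, RenderTextureFormat.ARGB32);
                Graphics.Blit(src, tmp);
                RenderTexture prev = RenderTexture.active;
                RenderTexture.active = tmp;
                readableTex.ReadPixels(new Rect(0, 0, src.width, src.height), 0, 0);
                readableTex.Apply();
                RenderTexture.active = prev;
                RenderTexture.ReleaseTemporary(tmp);

                Utils.texture2DToMat(readableTex, rgbaMat);
                // ... process rgbaMat with OpenCV here ...
            }
        }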
     
  22. marcaurelio74

    marcaurelio74

    Joined:
    Sep 2, 2012
    Posts:
    60
    Hello,
    in your MarkerBasedARExample, how can I generate the marker image based on the grid shown in the MarkerSettings GameObject?


    Thanks
     
  23. glreese

    glreese

    Joined:
    Mar 25, 2017
    Posts:
    13
    Hi, for the FaceMarkExample the textures are drawn on the face. How would I access the transforms of those locations? I see the rectangle points, but it doesn't look like those translate to a Vector3 position? And webCamTextureToMatHelper.transform seems to be all zeros?

    I wanted to place an object (prefab) relative to the face marking rectangle -
    Imgproc.rectangle(rgbaMat, new Point(rects.x, rects.y), new Point(rects.x + rects.width, rects.y + rects.height), new Scalar(255, 0, 0, 255), 2);

    Thank you,
     
  24. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Unfortunately, the MarkerBasedARExample does not have a function to generate a marker image from the MarkerSettings.
     
    marcaurelio74 likes this.
  25. glreese

    glreese

    Joined:
    Mar 25, 2017
    Posts:
    13
    Is the FaceTrackerARExample project included with the Unity OpenCV examples? I am not seeing it in there....
     
  26. zyonneo

    zyonneo

    Joined:
    Apr 13, 2018
    Posts:
    386
    Is body tracking available for iOS and Android apps using the OpenCV plugin?
     
  27. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
  28. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    OpenPoseExample is included with OpenCVForUnity. Of course, it's multi-platform compatible.
    opencv3.4.1_features.png
     
  29. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
    What is the suggested way of uploading a CvType.CV_32FC2 Mat into a Texture?
    I need the texture to be in TextureFormat.RGFloat or TextureFormat.RGBAFloat format, and it also needs to be fast (unsafe code).
     
  30. marcaurelio74

    marcaurelio74

    Joined:
    Sep 2, 2012
    Posts:
    60
    Thanks for your answer.

    Apart from the function, do you know the rule for generating the image from the MarkerSettings?
    I mean in pseudocode.
    Thanks

    P.S.
    Is the marker image an ArUco marker?
     
    Last edited: Jun 11, 2020
  31. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
    Also, uploading CV_8UC1 to an Alpha8 Texture does not seem to be working (I have unsafe code enabled).
    I tried both the fastMatToTexture2D and matToTexture2D methods.
     
  32. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Code (CSharp):
    1.             Texture2D uvmap = new Texture2D (512, 512, TextureFormat.RGFloat, false, true) {
    2.                 wrapMode = TextureWrapMode.Clamp,
    3.                 filterMode = FilterMode.Point,
    4.             };
    5.             Mat mat = new Mat (uvmap.width, uvmap.height, CvType.CV_32FC2);
    6.             Utils.fastTexture2DToMat (uvmap, mat);
    7.  
    Code (CSharp):
    1.             Texture2D uvmap = new Texture2D (512, 512, TextureFormat.RGBAFloat, false, true) {
    2.                 wrapMode = TextureWrapMode.Clamp,
    3.                 filterMode = FilterMode.Point,
    4.             };
    5.             Mat mat = new Mat (uvmap.width, uvmap.height, CvType.CV_32FC4);
    6.             Utils.fastTexture2DToMat (uvmap, mat);
    7.  
     
  33. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    There is no pseudo-code, but the simple rules are as follows:

    https://github.com/EnoxSoftware/Mar...ets/MarkerBasedARExample/Resources/marker.png
    Surround the marker with black square blocks, as shown in the image above. There is no limit to the size of the image.
    MarkerSettings checkbox
    Checked - square black blocks
    Unchecked - square white blocks

    MarkerBasedARExample markers are unrelated to ArUco markers.
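
    Not a function from the asset, but an illustrative sketch of the rules above: build a marker texture from a grid of checkbox states, adding a one-cell black border (the orientation of the rows relative to the MarkerSettings grid is an assumption and may need flipping).
    Code (CSharp):
        using UnityEngine;

        public static class MarkerImageSketch
        {
            public static Texture2D Generate(bool[,] grid, int cellPixels = 40)
            {
                int cells = grid.GetLength(0);            // e.g. 5 for a 5x5 marker
                int size = (cells + 2) * cellPixels;      // +2 for the black border
                var tex = new Texture2D(size, size, TextureFormat.RGB24, false);

                for (int y = 0; y < size; y++)
                {
                    for (int x = 0; x < size; x++)
                    {
                        int cx = x / cellPixels - 1;
                        int cy = y / cellPixels - 1;
                        bool border = cx < 0 || cy < 0 || cx >= cells || cy >= cells;
                        bool black = border || grid[cy, cx];   // checked -> black block, unchecked -> white
                        tex.SetPixel(x, y, black ? Color.black : Color.white);
                    }
                }
                tex.Apply();
                return tex;
            }
        }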
     
    marcaurelio74 likes this.
  34. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    The following code works fine (unsafe code enabled):
    Code (CSharp):
    1.             Utils.setDebugMode (true);
    2.  
    3.  
    4.             Texture2D imgTexture = Resources.Load ("lena") as Texture2D;
    5.  
    6.             Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC1);
    7.  
    8.             Utils.texture2DToMat (imgTexture, imgMat);
    9.             Debug.Log ("imgMat.ToString() " + imgMat.ToString ());
    10.  
    11.             Texture2D texture = new Texture2D (imgMat.cols (), imgMat.rows (), TextureFormat.Alpha8, false);
    12.  
    13.             //Utils.matToTexture2D (imgMat, texture);
    14.             Utils.fastMatToTexture2D(imgMat, texture);
    15.  
    16.             gameObject.GetComponent<Renderer> ().material.mainTexture = texture;
    17.  
    18.  
    19.             Utils.setDebugMode (false);
     
  35. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162

    I'm using the code you've provided and with
    Code (CSharp):
    1. Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC4);
    2. ...
    3. Texture2D texture = new Texture2D (imgMat.cols (), imgMat.rows (), TextureFormat.RGBA32, false);

    upload_2020-6-13_19-7-50.png

    But with
    Code (CSharp):
    1. Mat imgMat = new Mat (imgTexture.height, imgTexture.width, CvType.CV_8UC1);
    2. ...
    3. Texture2D texture = new Texture2D (imgMat.cols (), imgMat.rows (), TextureFormat.Alpha8, false);
    (no other changes), it's all black:
    upload_2020-6-13_19-10-9.png

    Here is the generated debug output:
    imgMat.ToString() Mat [ 512*512*CV_8UC1, isCont=True, isSubmat=False, nativeObj=0x2545546382944, dataAddr=0x2545759596608 ]
     
  36. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
    That's for Texture->Mat conversion I guess, and I'd like to do the Mat->Texture conversion with the same types: CV_32FC2 -> RGFloat. I tried fastMatToTexture2D and got the same completely black result as above.
     
  37. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Could you change the background color?
    alpha8.PNG
     
  38. leavittx

    leavittx

    Joined:
    Dec 27, 2013
    Posts:
    162
    OK, it seems to be working now (thanks for the background color tip).
    The main problem when sampling the uploaded 1-channel texture in a shader was using the wrong color component:
    float val = tex2D(_Alpha8Tex, i.uv).r;
    will always be 0, while
    float val = tex2D(_Alpha8Tex, i.uv).a;
    gives the actual texture value. The RGFloat textures also seem to be working now. Thank you!
     
    EnoxSoftware likes this.
  39. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    11
    I am not sure if it is a bug or if I did not look closely enough at the ArUcoWebCamTextureExample, but if the resolution is not 640x480 but 640x360 or 1280x720, the estimated position of the ArUco marker is off to some degree. For instance, at a real distance of 15 cm and a marker size of 4 cm (original), the estimated position is around 20 cm for either of the last two mentioned resolutions, but roughly 16.5 cm for 640x480. Could the aspect ratio of the resolution (4:3 vs 16:9) have something to do with it?
     
  40. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    Our test PC had a camera resolution of 640x360 and the results were fine, and we ran the example on multiple devices and did not see any difference in the results depending on the aspect ratio.

    Can you tell me the environment you tested?
    Unity version :
    OpenCVforUnity version :
    Build Platform :
     

    Attached Files:

  41. fm64

    fm64

    Joined:
    Oct 30, 2018
    Posts:
    3
    Hi, I am a beginner with OpenCV and I'm trying to achieve body tracking and attach it to a 3D humanoid model for Unity Android. To do this, I am combining this OpenPose example https://github.com/faem/OpenPose-Unity using the COCO dataset and this face AR example https://assetstore.unity.com/packages/templates/tutorials/facetracker-example-35284

    So far I am able to detect the full body and put Unity-chan on it, but the rotation and scale are a little off. Just like the FaceTracker example, I applied camera calibration, solvePnP() and Rodrigues(). I don't know much about 3D modeling, but the FaceTracker example uses one 3D model where you set the landmarks and they all move as a block. However, a 3D avatar needs to move differently for each joint, so I am setting all the object3DPoints from Unity-chan's T-pose for now. I am aware that doing this will always output the same pose, but I don't know how to assign the detected 2D points to each of the joints (I am using 13 joints from the dataset, removing eyes and ears since my priority is the body).

    EDIT: I just realized that solvePnP() might not be usable like that this time, since it needs at least 6 points, so even if we try to segment the joints it may not do the trick.


    Here is some relevant code (combining this script from the OpenPose example https://github.com/faem/OpenPose-Unity/blob/master/Assets/OpenPoseTF/OpenPoseWebcam.cs and adding the camera calibration and scene setting from FaceTracker):

    Code (CSharp):
    1.  
    2.     Texture2D camTexture;
    3.     public Camera ARCamera;
    4.     Mat camMatrix;
    5.     MatOfDouble distCoeffs;
    6.     Matrix4x4 invertYM;
    7.     Matrix4x4 transformationM = new Matrix4x4();
    8.     Matrix4x4 invertZM;
    9.     Matrix4x4 ARM;
    10.     public GameObject unityChan3Dmodel ;
    11.  
    12.     MatOfPoint3f object3DPoints;
    13.     MatOfPoint2f image2DPoints;
    14.     Mat rvec;
    15.     Mat tvec;
    16.     Mat rotM;
    17.  
    18.     Mat oldRvec;
    19.     Mat oldTvec;
    20.  
    21.  
    22. void Start(){
    23.         image2DPoints = new MatOfPoint2f();
    24.         rvec = new Mat();
    25.         tvec = new Mat();
    26.         rotM = new Mat(3, 3, CvType.CV_64FC1);
    27.         SetObjectWorldPoints();
    28.  
    29.        //setting camera
    30. }
    31.  
    32.  
    33. void Run(){
    34.         //... after getting the 2Dpoints from the model
    35.  
    36.         AddVisiblePoints(bodyPoints2D); //bodyPoints2D are the detected points from the COCO model (List<Point>()))
    37.  
    38.         if (image2DPoints.toList().Count >= 6) //mininum detection points
    39.         {
    40.             if (!Calib3d.solvePnP(object3DPoints, image2DPoints, camMatrix, distCoeffs, rvec, tvec)) {
    41.                 unityChan3Dmodel.SetActive(false);
    42.                 return;
    43.             }
    44.  
    45.             bool isRefresh = false;
    46.  
    47.             if (tvec.dims() != 0 && tvec.get(2, 0)[0] > 0 && tvec.get(2, 0)[0] < 1200 * ((float)rgbaMat.cols() / (float)webCamTextureToMatHelper.requestedWidth))
    48.             {
    49.                 isRefresh = true;
    50.  
    51.                 if (oldRvec == null)
    52.                 {
    53.                     oldRvec = new Mat();
    54.                     rvec.copyTo(oldRvec);
    55.                 }
    56.                 if (oldTvec == null)
    57.                 {
    58.                     oldTvec = new Mat();
    59.                     tvec.copyTo(oldTvec);
    60.                 }
    61.  
    62.  
    63.                 //filter Rvec Noise.
    64.                 using (Mat absDiffRvec = new Mat())
    65.                 {
    66.                     Core.absdiff(rvec, oldRvec, absDiffRvec);
    67.  
    68.                     //Debug.Log ("absDiffRvec " + absDiffRvec.dump());
    69.  
    70.                     using (Mat cmpRvec = new Mat())
    71.                     {
    72.                         Core.compare(absDiffRvec, new Scalar(rvecNoiseFilterRange), cmpRvec, Core.CMP_GT);
    73.  
    74.                         if (Core.countNonZero(cmpRvec) > 0) isRefresh = false;
    75.                     }
    76.                 }
    77.  
    78.                 //filter Tvec Noise.
    79.                 using (Mat absDiffTvec = new Mat())
    80.                 {
    81.                     Core.absdiff(tvec, oldTvec, absDiffTvec);
    82.  
    83.                     using (Mat cmpTvec = new Mat())
    84.                     {
    85.                         Core.compare(absDiffTvec, new Scalar(tvecNoiseFilterRange), cmpTvec, Core.CMP_GT);
    86.  
    87.                         if (Core.countNonZero(cmpTvec) > 0) isRefresh = false;
    88.                     }
    89.                 }
    90.             }
    91.  
    92.             if (isRefresh)
    93.             {
    94.                 rvec.copyTo(oldRvec);
    95.                 tvec.copyTo(oldTvec);
    96.  
    97.                 //rotations
    98.                 Calib3d.Rodrigues(rvec, rotM);
    99.  
    100.                 transformationM.SetRow(0, new Vector4((float)rotM.get(0, 0)[0], (float)rotM.get(0, 1)[0], (float)rotM.get(0, 2)[0], (float)tvec.get(0, 0)[0]));
    101.                 transformationM.SetRow(1, new Vector4((float)rotM.get(1, 0)[0], (float)rotM.get(1, 1)[0], (float)rotM.get(1, 2)[0], (float)tvec.get(1, 0)[0]));
    102.                 transformationM.SetRow(2, new Vector4((float)rotM.get(2, 0)[0], (float)rotM.get(2, 1)[0], (float)rotM.get(2, 2)[0], (float)tvec.get(2, 0)[0]));
    103.                 transformationM.SetRow(3, new Vector4(0, 0, 0, 1));
    104.  
    105.                 // right-handed coordinates system (OpenCV) to left-handed one (Unity)
    106.                 ARM = invertYM * transformationM;
    107.  
    108.                 // Apply Z-axis inverted matrix.
    109.                 ARM = ARM * invertZM;
    110.                     ARM = ARCamera.transform.localToWorldMatrix * ARM;
    111.  
    112.                     if (unityChan3Dmodel != null)
    113.                     {
    114.                    //I want to apply something similar to THIS to my gameobject joints:
    115.                     ARUtils.SetTransformFromMatrix(unityChan3Dmodel.transform, ref ARM);
    116.                     unityChan3Dmodel.SetActive(true);
    117.                 }
    118.             }
    119.         }
    120.  
    121.         Imgproc.cvtColor(rgbaMat, rgbaMat, Imgproc.COLOR_BGR2RGB);
    122.         Texture2D texture = new Texture2D(rgbaMat.cols(), rgbaMat.rows(), TextureFormat.RGBA32, false);
    123.         Utils.matToTexture2D(rgbaMat, texture);
    124.         gameObject.GetComponent<Renderer>().material.mainTexture = texture;
    125. }
    126.  
    127.     void SetObjectWorldPoints() {
    128.         Transform[] keyPoints  = unityChan3Dmodel.GetJoints();
    129.         Point3[] points3D = new Point3[keyPoints.Length];
    130.  
    131.         for (int i = 0; i < keyPoints.Length; i++)
    132.         {
    133.             points3D[i] = new Point3(keyPoints[i].position.x, keyPoints[i].position.y, keyPoints[i].position.z);
    134.         }
    135.         object3DPoints = new MatOfPoint3f(points3D);
    136.     }
    137.  
    138.     void AddVisiblePoints(List<Point> bodyPoints)
    139.     {
    140.         //Dictionary<string, int> FILTERED_PARTS = new Dictionary<string, int>();
    141.         List<Point> filteredPoints = new List<Point>();
    142.  
    143.         //adding specific points
    144.         if (bodyPoints[0] != null) { filteredPoints.Add(bodyPoints[0]); } //Nose
    145.         if (bodyPoints[1] != null) { filteredPoints.Add(bodyPoints[1]); } //Neck
    146.         if (bodyPoints[2] != null) { filteredPoints.Add(bodyPoints[2]); } //RShoulder
    147.         if (bodyPoints[5] != null) { filteredPoints.Add(bodyPoints[5]); } //LShoulder
    148.         if (bodyPoints[3] != null) { filteredPoints.Add(bodyPoints[3]); } //RElbow
    149.         if (bodyPoints[6] != null) { filteredPoints.Add(bodyPoints[6]); } //LElbow
    150.         if (bodyPoints[4] != null) { filteredPoints.Add(bodyPoints[4]); } //RWrist
    151.         if (bodyPoints[7] != null) { filteredPoints.Add(bodyPoints[7]); } //LWrist
    152.         if (bodyPoints[8] != null) { filteredPoints.Add(bodyPoints[8]); } //RHip
    153.         if (bodyPoints[11] != null) { filteredPoints.Add(bodyPoints[11]); } //LHip
    154.         if (bodyPoints[9] != null) { filteredPoints.Add(bodyPoints[9]); } //RKnee
    155.         if (bodyPoints[12] != null) { filteredPoints.Add(bodyPoints[12]); } //LKnee
    156.         if (bodyPoints[10] != null) { filteredPoints.Add(bodyPoints[10]); } //RAnkle
    157.         if (bodyPoints[13] != null) { filteredPoints.Add(bodyPoints[13]); } //LAnkle
    158.         image2DPoints.fromArray(filteredPoints.ToArray());
    159.     }
    this is what we got so far:
    unitychan3Dtest.png

    So in short, I would like to know how to get the detected 2D body points and convert them to transforms so I can individually assign my points to the GameObject joints. I've seen a bunch of examples with static 3D objects such as boxes or faces, but I have not found any example that could be applied to a body. I've been learning OpenCV for only 3 weeks, so if there is any concept I got wrong or any point that needs further explanation, please let me know.

    Thank you very much.
     
    Last edited: Jun 23, 2020
  42. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    This example doesn't use OpenPose, but perhaps this example will help.
    https://github.com/digital-standard/ThreeDPoseUnityForiOS
     
  43. fm64

    fm64

    Joined:
    Oct 30, 2018
    Posts:
    3
    Hi, thank you for your reply. I've checked all 3 projects from that repository and they work very differently, without OpenCV. (My bad, I just realized this project does not use ARKit.) That project in particular uses ARKit, which I cannot use since I am developing for Android, and ARCore has only a handful of compatible devices for some reason (plus, body tracking is not implemented yet). The best result I have got so far is by using solvePnP() with all the joint points from the 3D model as object points. However, a human body does not have a static shape, so if we can find a way to "segment" some body parts, or apply solvePnP() to only one joint (point), maybe it would move.

    I've searched everywhere with no luck. Instead of using solvePnP(), is there maybe a different way to convert the 2D image point from the body model to a Vector3 (and possibly a rotation too) for just one point?

    EDIT: after further examination of the ThreeDPoseUnity project, it looks like the heatmaps carry the 3D point information in that model. It is different from OpenPose, but I think OpenPose has this information as well. I checked the OpenPose repository but couldn't find a code example for getting the heatmap outputs; I wonder if anyone has tried that?
     
    Last edited: Jun 24, 2020
  44. link1375

    link1375

    Joined:
    Nov 9, 2017
    Posts:
    11
    Unity version : 2019.3.15f1
    OpenCVforUnity version : I think 2.3.7. At least the ReadMe pdf says so
    Build Platform : Windows (Editor)

    Can you set the marker and the camera to a fixed position? Set the z-value of the ARCamera position to 0.0 (just to make the offset more obvious if it occurs) and then check the position of the ARGameObject for both resolutions, 640x480 and 640x360. For me it differs.
     
  45. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    I actually did the same kind of test as you did.
    It doesn't look like there's a big discrepancy in the position of the ARGameObject in my environment.
    I'm not an expert, but the results of this test may be influenced by changes in the camera parameters at different camera resolutions (the intrinsics depend on the resolution), or by differences in the camera environment.
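
    One possible explanation (an assumption on my part, not something taken from the example code): when no per-resolution calibration file is supplied, AR samples often approximate the pinhole intrinsics from the frame size, so the estimated distance scales with whatever focal length that approximation yields. A sketch of that kind of fallback matrix, for comparison against a properly calibrated one (Calib3d.calibrateCamera):
    Code (CSharp):
        using System;
        using OpenCVForUnity.CoreModule;

        public static class ApproxIntrinsicsSketch
        {
            // fx, fy and the principal point all change with the frame size, so poses
            // estimated at 640x480 and 640x360 are only comparable if each resolution
            // is calibrated (or the approximation happens to match the real camera).
            public static Mat Build(double width, double height)
            {
                double f = Math.Max(width, height);   // rough focal length in pixels
                Mat camMatrix = new Mat(3, 3, CvType.CV_64FC1);
                camMatrix.put(0, 0,
                    f, 0, width / 2.0,
                    0, f, height / 2.0,
                    0, 0, 1);
                return camMatrix;
            }
        }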
     

    Attached Files:

  46. kevin-masson

    kevin-masson

    Joined:
    Sep 24, 2018
    Posts:
    71
    Hi, is there a performance difference between the trial package and the release package?

    For example, the WebCamTextureToMatHelper is faster in your WebGL demo than in my editor with the trial version, using a 1920x1080 30 fps camera.

    Also, when requesting the maximum resolution (9999x9999), the fps drop is insane. But the camera is actually delivering 2200x1???, which is not so far from 1920x1080. Is it possible that a 9999x9999 Mat is actually being allocated?
     
  47. robot-az

    robot-az

    Joined:
    Jun 10, 2014
    Posts:
    1
    Hello, I have a problem with performance in the "LibFaceDetectionV3Example".
    When I used the demo app from CH Play (Google Play) I got 30 fps, but I only get 10 fps when building from the plugin.
    I tried to follow your instructions but still failed to get a better result. Would you mind assisting me?

    Thank you very much.
    Unity : 2019.3.15f1

    2020-07-06 13.37.08.jpg
     
    Last edited: Jul 6, 2020
  48. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,536
    The trial version has the words "OpenCV for Unity" drawn in red at random locations, so performance may be slightly slower than the release version.

    If you request a resolution of 9999x9999, you get a frame with the highest resolution supported by the device you run it on. For example, if you request 9999x9999 and the device returns a 1920x1080 frame, a 1920x1080 Mat will be allocated.
     
  49. mahna3411

    mahna3411

    Joined:
    Dec 11, 2018
    Posts:
    39
    Hello,

    lightweight_pose_estimation_201912.onnx
    This is the lightweight model for COCO.
    Is it available for MPI?
    Thanks.
     
  50. StavEnwize

    StavEnwize

    Joined:
    Jul 14, 2020
    Posts:
    1
    Hey, I'm looking into the trial version. Whenever I try building an APK or EXE (Unity 2019.2.21f), it gives me 590 errors.

    One of them:
    Assets\OpenCVForUnity\Examples\Advanced\AlphaBlendingExample\AlphaBlendingExample.cs(11,22): error CS0234: The type or namespace name 'CoreModule' does not exist in the namespace 'OpenCVForUnity' (are you missing an assembly reference?)

    Thanks!