
[RELEASED] OpenCV for Unity

Discussion in 'Assets and Asset Store' started by EnoxSoftware, Oct 30, 2014.

  1. zNAYAz

    zNAYAz

    Joined:
    May 4, 2019
    Posts:
    6
Hello, I'm trying to use the HOGDescriptor sample to detect people in a video from a URL, but I can't replace the original video source with the URL source. How can I solve that? I would also like to use a URL video source instead of the webcam source to do the face detection. How can I solve that?
    Many thanks.
     
  2. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    This post will be helpful. If you are using OpenCVForUnity 2.3.4, use opencv_ffmpeg400_64.dll instead of opencv_ffmpeg342_64.dll.
    https://forum.unity.com/threads/released-opencv-for-unity.277080/page-37#post-3750769
     
  3. zNAYAz

    zNAYAz

    Joined:
    May 4, 2019
    Posts:
    6
    Thank you for your kind help. I'm using version 2.3.3, and in the HOGDescriptor sample I first copied opencv_ffmpeg342_64.dll and rewrote the code:
    Code (CSharp):
    void Start ()
    {
        capture = new VideoCapture ();

    #if UNITY_WEBGL && !UNITY_EDITOR
        getFilePath_Coroutine = Utils.getFilePathAsync ("768x576_mjpeg.mjpeg", (result) => {
            getFilePath_Coroutine = null;

            capture.open (result);
            Init ();
        });
        StartCoroutine (getFilePath_Coroutine);
    #else
        capture.open ("rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov");
        Init ();
    #endif
    }
    I also tried the URL in the original post, but the error
    capture.isOpened() is true. Please copy from “OpenCVForUnity/StreamingAssets/” to “Assets/StreamingAssets/” folder.
    UnityEngine.Debug:LogError(Object)
    occurred.
    Later I tried opencv_ffmpeg400_64.dll again, but it made no difference.
     
  4. sandman3

    sandman3

    Joined:
    Feb 21, 2017
    Posts:
    12
    Hey EnoxSoftware, when using the latest version of your library (2.3.4) in the latest version of Unity (2019.1.2f1), I get an error when building for WebGL, even with an empty project containing just this library. I did use the Set Import Settings button and double-checked that 5.6 and 2018.2 are not being used.

    Here's the error:
     
  5. Bigg_P

    Bigg_P

    Joined:
    Jul 16, 2014
    Posts:
    10
    Hey..
    I am getting a 'code signing "opencv2.framework" failed' error when I validate for the iOS App Store in Xcode. I have no problem building the app from Unity, and the app runs perfectly when I build and run it on an iPad through Xcode. Can you guide me on what I am doing wrong here?
     
  6. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    Hi there. I just bought your asset and have been trying my hand at some projection mapping, based on Andrew Macquarrie's sample. His work uses an older OpenCV C# wrapper and only runs on 32-bit Windows. I think I've ported it over to your OpenCVForUnity for use on my Mac, but it doesn't work quite right, although the distortion/rotation/intrinsic matrices OpenCVForUnity outputs are oddly close.

    His code makes great use of CalibrateCamera2, which I've converted over to Calib3d.calibrateCamera. Are there any differences in the implementation of Calib3d.calibrateCamera in 4.0, using the Java API, that I should be aware of? Something that would cause it to solve differently?

    Thanks!
     
  7. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    1) Import OpenCVForUnity 2.3.3.
    2) Download "OpenCV for Windows Version 4.0.0" (http://opencv.org/downloads.html).
    3) Copy "opencv\build\x64\vc15\bin\opencv_ffmpeg400_64.dll" to the project folder.

    I succeeded in playing these files.
    capture.open ("http://archive.org/download/SampleMpeg4_201307/sample_mpeg4.mp4");
    and
    capture.open("rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov");
    ffmpeg.PNG
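    For reference, a minimal read loop for such a stream might look like this. This is only a sketch (assuming the 2.3.x single-namespace API); the class name and display handling are illustrative, and the URL is the one from this post:

    ```csharp
    // Sketch: open a network video stream with OpenCVForUnity's VideoCapture
    // and display each frame on a Texture2D. Assumes the opencv_ffmpeg DLL is
    // present and that this MonoBehaviour has a Renderer to show the texture on.
    using UnityEngine;
    using OpenCVForUnity;

    public class UrlVideoSketch : MonoBehaviour
    {
        VideoCapture capture;
        Mat frame;
        Texture2D texture;

        void Start ()
        {
            capture = new VideoCapture ();
            capture.open ("http://archive.org/download/SampleMpeg4_201307/sample_mpeg4.mp4");
            if (!capture.isOpened ())
                Debug.LogError ("Could not open the stream (is the ffmpeg DLL in the project folder?)");
            frame = new Mat ();
        }

        void Update ()
        {
            // Grab one frame per Unity frame; real code would decouple
            // the decode rate from the render rate.
            if (capture != null && capture.isOpened () && capture.read (frame)) {
                Imgproc.cvtColor (frame, frame, Imgproc.COLOR_BGR2RGB);
                if (texture == null)
                    texture = new Texture2D (frame.cols (), frame.rows (), TextureFormat.RGB24, false);
                Utils.matToTexture2D (frame, texture);
                GetComponent<Renderer> ().material.mainTexture = texture;
            }
        }

        void OnDestroy ()
        {
            if (capture != null) capture.release ();
            if (frame != null) frame.Dispose ();
        }
    }
    ```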
     
  8. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Thank you very much for reporting.
    Currently, it seems that a build error occurs in Unity 2019.1.2f1. This build error is planned to be fixed in the next version of OpenCVForUnity.
    For now, could you use Unity 2019.1.1f1?
     
  9. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Could you tell me about your build environment?
    Unity version :
    OpenCVforUnity version :
    Xcode version :
     
  10. Tyndareus

    Tyndareus

    Joined:
    Aug 31, 2018
    Posts:
    37
    I am currently working on implementing a large system which uses OpenCV.

    I am stuck on calibrating the camera. I need to do it the way CCV does it: overlay a grid of points that the user touches, then calibrate the camera from those touches. I am struggling to understand what plugs into where, since everything is called Mat; if they are all reference types they change constantly, and without any commenting (even the javadoc documentation is just a one-line description of each function) it's complex to use.

    I have the 'after' part done, which is detecting touch points; I assume those are the imagePoints that the calibrateCamera method requires?
    From that, I believe the objectPoints will be the grid that I make? But I need to construct all this in Unity, so I have to convert from Unity space to OpenCV space. I saw solvePnP, but I think that's only for the reverse: a touch point on an image to the 3D space.
    The cameraMatrix, I assume, will be the Mat from WebCamTextureHelper, since I need to work with a live feed.

    I have attached an image of the setup: an IR camera looking at a laser field 'masking' a wall. To get correct touch coordinates on that wall, I need to calibrate the camera so that the wall (the area between the white) is not warped/distorted and is flat.

    Any help with this would be great. I have tried one of the previous posts involving solvePnP and Rodrigues with the points as a placeholder solution, just to see if there was any change, but nothing really changed. I also poked around ArUco, but from reading further into what it does, I believe it's not the solution that would work.

    Edit: I have looked further and found that OpenCV requires a chessboard pattern to calibrate the camera, which probably means manual calibration by touching points on the wall isn't possible.
     

    Attached Files:

    Last edited: May 15, 2019
  11. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    In an odd coincidence, the projection mapping code I was asking about above works this way. You click on an object, and a hit is detected via the collider to determine where in 3-space it happened. You then move your mouse in projected space to show where in the image that point was projected. That's all fed into CalibrateCamera, and the intrinsic/rotation/distortion matrices fall out.

    As I understood it (and I'm a noob at this stuff, so I could be wrong), the checkerboard is just an easy pattern to detect; the CalibrateCamera function doesn't need that exact point layout.
     
  12. Tyndareus

    Tyndareus

    Joined:
    Aug 31, 2018
    Posts:
    37
    Ah, that actually seems like something I am looking for. Right now I am in the process of building it bare-bones, at least as far as I can.
    I created a box to match the projected area, which I think becomes the image points, and then I'm going to do the grid in order to get touch locations, which hopefully add up to the required object points. I just need to find out which method actually converts an OpenCV Point to a Unity point. I will look into that GitHub project to see further.
     
  13. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Hi ScottHerz,

    I have rewritten the repository you indicated to use OpenCVForUnity.
    https://github.com/EnoxSoftware/UnityProjectionMapping
    It seems to be working properly.
     
  14. Tyndareus

    Tyndareus

    Joined:
    Aug 31, 2018
    Posts:
    37
    I had a look at using this but was unable to produce anything from it; there was no particular feedback from the process of using it, so, if anything, I have learned a bit more about OpenCV...

    I'm still going to do my brute-force method of setting the 'desired screen', then recording OpenCV points while touching calibration points on a projector wall, and adding those to the list of points used for calibrating the camera.
    I'm not sure what to do about converting a Point to Unity coordinates, though. There's some code in there where the mouse position is converted to a Point, but I don't know if the reverse would give me the desired result.
     
  15. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    Great! I'll give it a shot today! I have something of a unit test from values I found to work with the other library on the PC. I'll run them through!
     
  16. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    It's a little confusing, and I had to tweak the original code a little to make sense of it. It's designed for projection mapping. The idea is that you build a real world scene of objects and then recreate it in Unity. You then calibrate the Unity camera/projector and are able to shoot virtual content that nicely aligns with the real surfaces.

    After building your real and virtual scenes in Unity, mirror your desktop to the projector and run the scene:

    1) Click on a corner of your virtual scene (this will drop a way-too-big sphere where you clicked; I found where the code creates these spheres and gave them a local scale of 0.1).
    2) Next move your cursor over the REAL object corner. Click.

    Do this a total of 7 times.

    When it's done, his code will take the world-space points (where it was in Unity) and image-space points (where it ended up being projected in 2D) and pass them into CalibrateCamera. The output of that is massaged and set on the Unity camera.
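    Roughly, the calibration call at the end looks like this. This is my sketch, not his exact code; the helper name, the single-view setup, and the intrinsic-guess values are assumptions:

    ```csharp
    // Sketch: feed manually collected 3D/2D correspondences into
    // Calib3d.calibrateCamera. objectPointsList holds the clicked world-space
    // corners, imagePointsList the matching projected 2D positions; both must
    // be in the same order.
    using System.Collections.Generic;
    using OpenCVForUnity;

    public static class CalibrationSketch
    {
        public static void Calibrate (List<Point3> objectPointsList, List<Point> imagePointsList, int width, int height)
        {
            MatOfPoint3f objectPoints = new MatOfPoint3f ();
            objectPoints.fromList (objectPointsList);
            MatOfPoint2f imagePoints = new MatOfPoint2f ();
            imagePoints.fromList (imagePointsList);

            // calibrateCamera takes lists of views; here there is a single view.
            List<Mat> objectPointsMats = new List<Mat> { objectPoints };
            List<Mat> imagePointsMats = new List<Mat> { imagePoints };

            // Initial guess for the intrinsics (required with CALIB_USE_INTRINSIC_GUESS).
            Mat cameraMatrix = Mat.eye (3, 3, CvType.CV_64FC1);
            cameraMatrix.put (0, 0, width);        // fx guess
            cameraMatrix.put (1, 1, width);        // fy guess
            cameraMatrix.put (0, 2, width / 2.0);  // cx
            cameraMatrix.put (1, 2, height / 2.0); // cy

            Mat distCoeffs = new Mat ();
            List<Mat> rvecs = new List<Mat> ();
            List<Mat> tvecs = new List<Mat> ();

            Calib3d.calibrateCamera (objectPointsMats, imagePointsMats,
                new Size (width, height), cameraMatrix, distCoeffs, rvecs, tvecs,
                Calib3d.CALIB_USE_INTRINSIC_GUESS | Calib3d.CALIB_ZERO_TANGENT_DIST);

            // cameraMatrix, distCoeffs, rvecs[0] and tvecs[0] can then be
            // mapped onto the Unity camera, as the repository does.
        }
    }
    ```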
     
  17. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    It works! Can't wait to see what you did differently. Thank you so much! Five Stars!
     
  18. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    How did you come up with the flags 6377?

    That looks like the major difference between what I had and your code.

    6377 seems to consist of:
    Calib3d.CALIB_ZERO_TANGENT_DIST |
    Calib3d.CALIB_USE_INTRINSIC_GUESS |
    // Not Fix K1
    Calib3d.CALIB_FIX_K2 |
    Calib3d.CALIB_FIX_K3 |
    Calib3d.CALIB_FIX_K4 |
    Calib3d.CALIB_FIX_K5
    // Not Fix K6
    And then some mystery 0x800 / 2048 constant I can't find in Calib3d.cs.

    Thanks again for your help in understanding this!
     
  19. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    Are the constants in calib3d.cs somehow wrong / different / changed? If I look at the constants in OpenCV2, I see something that makes more sense:

    CV_CALIB_USE_INTRINSIC_GUESS 1
    CV_CALIB_ZERO_TANGENT_DIST 8
    CV_CALIB_FIX_K1 32
    CV_CALIB_FIX_K2 64
    CV_CALIB_FIX_K3 128
    CV_CALIB_FIX_K4 2048
    CV_CALIB_FIX_K5 4096

    Which would add up to 6377.
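    In code form (using the classic C++ values above, not the Calib3d.cs constants):

    ```csharp
    // Sketch: verify that the classic OpenCV calibration flag values sum to 6377.
    const int CV_CALIB_USE_INTRINSIC_GUESS = 1;
    const int CV_CALIB_ZERO_TANGENT_DIST = 8;
    const int CV_CALIB_FIX_K1 = 32;
    const int CV_CALIB_FIX_K2 = 64;
    const int CV_CALIB_FIX_K3 = 128;
    const int CV_CALIB_FIX_K4 = 2048;
    const int CV_CALIB_FIX_K5 = 4096;

    int flags = CV_CALIB_USE_INTRINSIC_GUESS | CV_CALIB_ZERO_TANGENT_DIST
        | CV_CALIB_FIX_K1 | CV_CALIB_FIX_K2 | CV_CALIB_FIX_K3
        | CV_CALIB_FIX_K4 | CV_CALIB_FIX_K5;
    // flags == 6377
    ```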
     
  20. zNAYAz

    zNAYAz

    Joined:
    May 4, 2019
    Posts:
    6
    Thank you, I succeeded in playing the video.
    I have another question: is it possible to substitute the webcam with VideoCapture in the FaceDetectionWebCamTextureExample?
     
  21. Tyndareus

    Tyndareus

    Joined:
    Aug 31, 2018
    Posts:
    37
    Ah, I see; it might still work in a sense. I'll have to see if the idea works well enough, since we only want to map a wall that a projector + IR camera looks at, and that's only to create accurate blob tracking in a conversion space to Unity.

    Although that part of my question seems to get skipped: OpenCV point to Unity space.
    The camera resolution is 640x480, so the points output from 0 to 640, where the target might need to be 1600 for a button, or (50, 2, 30) for an object in the scene. Is there a method for doing that? I'm not entirely sure what to research if there isn't.
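    If no built-in method exists, something like the following is what I imagine. This is just a sketch; the class name, method name, and target dimensions are placeholders:

    ```csharp
    // Sketch: map a point in camera-pixel space (e.g. 0..640 x 0..480) into
    // another rectangular target space, such as a 1600-unit-wide UI or a
    // region of a Unity scene.
    using UnityEngine;

    public static class PointMappingSketch
    {
        // Normalize the camera point to 0..1, then scale into the target rectangle.
        public static Vector2 CameraToTarget (Vector2 cameraPoint,
            float camWidth, float camHeight, Rect target)
        {
            float nx = cameraPoint.x / camWidth;
            // OpenCV's y axis points down, Unity's screen y points up, so flip it.
            float ny = 1f - (cameraPoint.y / camHeight);
            return new Vector2 (target.x + nx * target.width,
                                target.y + ny * target.height);
        }
    }
    ```

    For 3D positions, the resulting 2D point could then be fed into Camera.ScreenPointToRay or treated as local coordinates on a known plane.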
     
  22. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Strangely, the official OpenCV Java wrapper configuration file seems to be wrong.
    https://github.com/opencv/opencv/blob/master/modules/calib3d/misc/java/gen_dict.json#L8-L16
    This bug has been fixed in OpenCVForUnity version 2.3.5.
     
    Last edited: May 21, 2019
  23. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
  24. violetforest

    violetforest

    Joined:
    May 26, 2015
    Posts:
    10
    I'm trying to convert the HandPoseEstimation example into AR. So when a contour is detected, an AR object will follow the contour. My transformation values don't seem accurate. This is what my debug looks like for the transformation values:

    transformationM
    1.00000 0.00000 0.00000 0.00000
    0.00000 1.00000 0.00000 0.00000
    0.00000 0.00000 1.00000 0.00000
    0.00000 0.00000 0.00000 1.00000


    I am using the bounding rect corners of the contour as my 2D points for solvePnP, and the 3D points from the MarkerBasedAR example:

    Imgproc.rectangle (rgbaMat, boundRect.tl (), boundRect.br (), CONTOUR_COLOR_WHITE, 2, 8, 0);

    rectPoints = new MatOfPoint2f(new Point (boundRect.x, boundRect.y),//l eye
    new Point(boundRect.x + boundRect.width, boundRect.y),
    new Point(boundRect.x, boundRect.y + boundRect.height),
    new Point(boundRect.x + boundRect.width, boundRect.y + boundRect.height)
    );

    //estimate pose
    Mat Rvec = new Mat();
    Mat Tvec = new Mat();
    Mat raux = new Mat();
    Mat taux = new Mat();

    List<Point3> m_markerCorners3dList = new List<Point3>();

    m_markerCorners3dList.Add(new Point3(-0.5f, -0.5f, 0));
    m_markerCorners3dList.Add(new Point3(+0.5f, -0.5f, 0));
    m_markerCorners3dList.Add(new Point3(+0.5f, +0.5f, 0));
    m_markerCorners3dList.Add(new Point3(-0.5f, +0.5f, 0));

    m_markerCorners3d.fromList(m_markerCorners3dList);

    Calib3d.solvePnP(m_markerCorners3d, rectPoints, camMatrix, distCoeff, raux, taux);

    raux.convertTo(Rvec, CvType.CV_32F);
    taux.convertTo(Tvec, CvType.CV_32F);

    rotMat = new Mat(3, 3, CvType.CV_64FC1);
    Calib3d.Rodrigues(Rvec, rotMat);

    transformationM.SetRow(0, new Vector4((float)rotMat.get(0, 0)[0], (float)rotMat.get(0, 1)[0], (float)rotMat.get(0, 2)[0], (float)Rvec.get(0, 0)[0]));
    transformationM.SetRow(1, new Vector4((float)rotMat.get(1, 0)[0], (float)rotMat.get(1, 1)[0], (float)rotMat.get(1, 2)[0], (float)Rvec.get(1, 0)[0]));
    transformationM.SetRow(2, new Vector4((float)rotMat.get(2, 0)[0], (float)rotMat.get(2, 1)[0], (float)rotMat.get(2, 2)[0], (float)Rvec.get(2, 0)[0]));
    transformationM.SetRow(3, new Vector4(0, 0, 0, 1));
    Debug.Log ("transformationM " + transformationM.ToString ());

    Rvec.Dispose();
    Tvec.Dispose();
    raux.Dispose();
    taux.Dispose();
    rotMat.Dispose();


    Does anyone know why the transformation isn't calculating correctly?
     
  25. roadley

    roadley

    Joined:
    Mar 1, 2018
    Posts:
    5
    Has anyone experienced crashes with the latest version of Xcode & iOS?

    Build the Aruco Marker example onto the following device to replicate:

    Device - iPad Air 2
    iOS Version - 12.3
    Xcode Version - 10.2.1

    ... pointing it at 6 targets for about 20 seconds makes Xcode spit out an error and it forces the app to close.

    LOG AFTER CRASHING:
    (lldb)
    opencv2`cv::pointSetBoundingRect:
    [Line 110] 0x105724418 <+432>: ld2.2d { v2, v3 }, [x8]
    [ERROR] Thread 5: EXC_BAD_ACCESS (code=1, address=0x112994000)


    Any ideas?
     
  26. ScottHerz

    ScottHerz

    Joined:
    Mar 18, 2014
    Posts:
    24
    Works great now. Thanks!
     
  27. Mark-Moukarzel

    Mark-Moukarzel

    Joined:
    Nov 16, 2015
    Posts:
    1
    Hello, I just purchased the OpenCV for Unity plugin. I got it because I need to detect the eye and replace the pupil with a texture (a colored contact lens to be more specific, like some of the filters on Instagram). How can this be achieved? And is it possible to detect if the eye is closed?
     
  28. roadley

    roadley

    Joined:
    Mar 1, 2018
    Posts:
    5
    UPDATE:

    So after a lot of debugging and trying to figure out where the issue was, the solution that so far seems to work for me has been to revert to the previous OpenCVForUnity version (2.3.4).
     
  29. zNAYAz

    zNAYAz

    Joined:
    May 4, 2019
    Posts:
    6
    I used VideoCapture to get video from my IP camera, but the FPS is quite low and the video plays intermittently. Is there any way to solve this?
     
  30. zNAYAz

    zNAYAz

    Joined:
    May 4, 2019
    Posts:
    6
    Once again, I want to know if it is possible to substitute the webcam with VideoCapture in the FaceDetectionWebCamTextureExample.
    Thank you.
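    What I have in mind is something like this. This is just a sketch of the idea, not working example code; the class name and stream URL are placeholders, and the cascade file is the one the examples ship with:

    ```csharp
    // Sketch: run Haar/LBP cascade face detection on frames from a VideoCapture
    // instead of a WebCamTexture. Assumes the cascade XML has been copied to
    // StreamingAssets, as the face detection examples require.
    using UnityEngine;
    using OpenCVForUnity;

    public class VideoCaptureFaceDetectSketch : MonoBehaviour
    {
        VideoCapture capture;
        CascadeClassifier cascade;
        Mat frame = new Mat ();
        Mat gray = new Mat ();

        void Start ()
        {
            // Open a video file or stream instead of a WebCamTexture.
            capture = new VideoCapture ();
            capture.open ("rtsp://184.72.239.149/vod/mp4:BigBuckBunny_115k.mov");

            // Load the cascade the face detection examples use.
            cascade = new CascadeClassifier (Utils.getFilePath ("lbpcascade_frontalface.xml"));
        }

        void Update ()
        {
            if (!capture.isOpened () || !capture.read (frame))
                return;

            Imgproc.cvtColor (frame, gray, Imgproc.COLOR_BGR2GRAY);
            Imgproc.equalizeHist (gray, gray);

            MatOfRect faces = new MatOfRect ();
            cascade.detectMultiScale (gray, faces, 1.1, 2, 2,
                new Size (gray.rows () * 0.2, gray.rows () * 0.2), new Size ());

            foreach (OpenCVForUnity.Rect rect in faces.toArray ())
                Imgproc.rectangle (frame, rect.tl (), rect.br (), new Scalar (255, 0, 0), 2);
            // frame could then be converted with Utils.matToTexture2D for display.
        }
    }
    ```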
     
  31. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Thank you very much for reporting.
    Is the example where this crash occurs the ArUcoWebCamTextureExample?
    Could you tell me the settings to reproduce this crash?
    MarkerType :
    Marker ID :
    Unity version :
     
  32. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I think that the cause of the problem is that the order of the points in rectPoints differs from the order of the points in m_markerCorners3dList. The last two rectPoints need to be swapped so that both lists run A, B, C, D:

    rectPoints = new MatOfPoint2f (
        new Point (boundRect.x, boundRect.y), // top-left ( A )
        new Point (boundRect.x + boundRect.width, boundRect.y), // top-right ( B )
        new Point (boundRect.x + boundRect.width, boundRect.y + boundRect.height), // bottom-right ( C )
        new Point (boundRect.x, boundRect.y + boundRect.height) // bottom-left ( D )
    );

    m_markerCorners3dList.Add (new Point3 (-0.5f, -0.5f, 0)); // top-left ( A )
    m_markerCorners3dList.Add (new Point3 (+0.5f, -0.5f, 0)); // top-right ( B )
    m_markerCorners3dList.Add (new Point3 (+0.5f, +0.5f, 0)); // bottom-right ( C )
    m_markerCorners3dList.Add (new Point3 (-0.5f, +0.5f, 0)); // bottom-left ( D )
     
  33. lbaptista95

    lbaptista95

    Joined:
    Jan 4, 2019
    Posts:
    4
    Hi,

    I'm trying to use OpenCV for Unity with my ZED Mini camera to detect AR markers. The ZED has two cameras, so I can't use WebCamTexture, because it will give me one texture with both camera images. I'm trying to get the image from the left camera and use it as a Texture2D instead of using a WebCamTexture. Using the function zedManager.zedCamera.RetrieveImage(mat, VIEW.LEFT); I get a ZEDMat, which has a field named matPtr that I think I can pass to Utils.copyToMat(mat.MatPtr, frameMat);. Both the ZEDMat mat and the OpenCV Mat frameMat have the same width and height, and the type is CV_8UC4, but when I call this function Unity crashes.


    The crash log is attached.
     

    Attached Files:

  34. ina

    ina

    Joined:
    Nov 15, 2010
    Posts:
    1,085
  35. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Unfortunately, I have never used the ZED Mini camera. However, according to the following posts, you need to use the GetPtr() method instead of MatPtr.
    https://github.com/stereolabs/zed-unity/issues/25
    https://github.com/stereolabs/zed-unity/issues/15

    Code (CSharp):
    Utils.copyToMat (mat.GetPtr (ZEDMat.MEM.MEM_CPU), frameMat);
     
  36. VRxMedical

    VRxMedical

    Joined:
    Mar 8, 2018
    Posts:
    3
    Hi, I am using the hand position library in my project. Since the last few updates I have had a problem: on the first start of the app, after granting permission to use the device camera, it is not able to start the camera. Instead of the camera feed I get an hourglass screen. Can you please help me with this?
     
  37. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Could you tell me about your build environment?
    Unity version :
    OpenCVforUnity version :
    Build Platform :
     
  38. lleo52

    lleo52

    Joined:
    Apr 8, 2013
    Posts:
    14
    I want to use OpenNI2 with this C# OpenCV.
    How can I do it?
     
  39. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  40. trakloz

    trakloz

    Joined:
    Jul 23, 2017
    Posts:
    14
    Thank you for the update to 2019.1. Have you attempted real-time yet, or could you share a reference to study?
     
  41. violetforest

    violetforest

    Joined:
    May 26, 2015
    Posts:
    10
    Hi, I am attempting to use OpenCVForUnity with ARFoundation 1.0.

    I am using the cameraImage example and trying to find contours with the camera script they provide.

    I am unsure if I am going in the right direction. I'm trying to find contours with the rgbaMat from the Texture2D they are creating, but I don't believe it's working. Am I on the right track?

    Code (CSharp):
    unsafe void OnCameraFrameReceived (ARCameraFrameEventArgs eventArgs)
    {
        // Attempt to get the latest camera image. If this method succeeds,
        // it acquires a native resource that must be disposed (see below).
        CameraImage image;
        if (!ARSubsystemManager.cameraSubsystem.TryGetLatestImage (out image))
            return;

        // Display some information about the camera image
        m_ImageInfo.text = string.Format (
            "Image info:\n\twidth: {0}\n\theight: {1}\n\tplaneCount: {2}\n\ttimestamp: {3}\n\tformat: {4}",
            image.width, image.height, image.planeCount, image.timestamp, image.format);

        // Once we have a valid CameraImage, we can access the individual image "planes"
        // (the separate channels in the image). CameraImage.GetPlane provides
        // low-overhead access to this data. This could then be passed to a
        // computer vision algorithm. Here, we will convert the camera image
        // to an RGBA texture and draw it on the screen.

        // Choose an RGBA format.
        // See CameraImage.FormatSupported for a complete list of supported formats.
        var format = TextureFormat.RGBA32;

        if (m_Texture == null || m_Texture.width != image.width || m_Texture.height != image.height)
            m_Texture = new Texture2D (image.width, image.height, format, false);

        // Convert the image to format, flipping the image across the Y axis.
        // We can also get a sub rectangle, but we'll get the full image here.
        var conversionParams = new CameraImageConversionParams (image, format, CameraImageTransformation.MirrorY);

        // Texture2D allows us to write directly to the raw texture data.
        // This allows us to do the conversion in-place without making any copies.
        var rawTextureData = m_Texture.GetRawTextureData<byte> ();

        try
        {
            image.Convert (conversionParams, new IntPtr (rawTextureData.GetUnsafePtr ()), rawTextureData.Length);
        }
        finally
        {
            // We must dispose of the CameraImage after we're finished
            // with it to avoid leaking native resources.
            image.Dispose ();
        }

        // Apply the updated texture data to our texture
        m_Texture.Apply ();

        Mat rgbaMat = new Mat (m_Texture.height, m_Texture.width, CvType.CV_8UC4);
        Utils.matToTexture2D (rgbaMat, m_Texture);

        // Set the RawImage's texture so we can visualize it.
        m_RawImage.texture = m_Texture;
        FindContours (rgbaMat);
    }

    void FindContours (Mat rgbaMat)
    {
        List<MatOfPoint> contours = new List<MatOfPoint> ();
        Mat srcHierarchy = new Mat ();

        Imgproc.findContours (rgbaMat, contours, srcHierarchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_NONE);
    }
     
  42. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  43. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    I think that it is necessary to convert the Texture2D to a Mat first.
    Code (CSharp):
    Mat rgbaMat = new Mat (m_Texture.height, m_Texture.width, CvType.CV_8UC4);

    // Convert m_Texture to rgbaMat
    Utils.texture2DToMat (m_Texture, rgbaMat);

    // Image Processing
    FindContours (rgbaMat);

    // Convert rgbaMat to m_Texture
    Utils.matToTexture2D (rgbaMat, m_Texture);

    // Set the RawImage's texture so we can visualize it.
    m_RawImage.texture = m_Texture;
     
  44. MobackAlok

    MobackAlok

    Joined:
    Jul 6, 2017
    Posts:
    4
    Hi EnoxSoftware, how can I use Keras model .h5 files in OpenCV for Unity, like TensorFlow and YOLO?
     
  45. manu_unity388

    manu_unity388

    Joined:
    Jun 22, 2019
    Posts:
    1
  46. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
  47. dman8723

    dman8723

    Joined:
    Dec 18, 2017
    Posts:
    12
    Hi EnoxSoftware, I am trying to stitch the video streams from two cameras.
    I have made a simple stitching function, but it is not good enough, since there is a crack line between the two videos.
    I found the following code that may be useful in my case; do you know how to replace uchar* and Mat.ptr, since those are not supported in C#?
    Code (CSharp):
    void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst)
    {
        int start = MIN(corners.left_top.x, corners.left_bottom.x);
        double processWidth = img1.cols - start;
        int rows = dst.rows;
        int cols = img1.cols;
        double alpha = 1;
        for (int i = 0; i < rows; i++)
        {
            uchar* p = img1.ptr<uchar>(i);
            uchar* t = trans.ptr<uchar>(i);
            uchar* d = dst.ptr<uchar>(i);
            for (int j = start; j < cols; j++)
            {
                if (t[j * 3] == 0 && t[j * 3 + 1] == 0 && t[j * 3 + 2] == 0)
                {
                    alpha = 1;
                }
                else
                {
                    alpha = (processWidth - (j - start)) / processWidth;
                }

                d[j * 3] = p[j * 3] * alpha + t[j * 3] * (1 - alpha);
                d[j * 3 + 1] = p[j * 3 + 1] * alpha + t[j * 3 + 1] * (1 - alpha);
                d[j * 3 + 2] = p[j * 3 + 2] * alpha + t[j * 3 + 2] * (1 - alpha);
            }
        }
    }
     
  48. matthias0911

    matthias0911

    Joined:
    Jun 6, 2019
    Posts:
    1
    Hi, is there any example of background subtraction with a dynamic background? Hoping for your reply, thanks!
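    For context, the kind of approach I mean is an adaptive subtractor such as MOG2, which continuously updates its background model. A minimal sketch of that idea (the class name and parameter values here are only illustrative guesses):

    ```csharp
    // Sketch: adaptive background subtraction with MOG2, which keeps updating
    // its model so gradual background changes are absorbed. Assumes frames
    // arrive as rgbaMat from a webcam helper.
    using OpenCVForUnity;

    public class DynamicBackgroundSketch
    {
        // history = frames used for the model, varThreshold = sensitivity,
        // detectShadows = mark shadows as gray instead of foreground.
        BackgroundSubtractorMOG2 subtractor =
            Video.createBackgroundSubtractorMOG2 (500, 16, true);
        Mat fgMask = new Mat ();

        public Mat Apply (Mat rgbaMat)
        {
            // A learning rate of -1 lets OpenCV choose automatically; a small
            // positive value (e.g. 0.005) makes the model adapt faster to a
            // changing background.
            subtractor.apply (rgbaMat, fgMask, -1);
            // Clean up noise in the foreground mask.
            Imgproc.medianBlur (fgMask, fgMask, 5);
            return fgMask;
        }
    }
    ```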
     
  49. igloospirit_unity

    igloospirit_unity

    Joined:
    Jun 28, 2019
    Posts:
    1
    Hi EnoxSoftware and everybody else !

    I have no idea if this is the right place to ask this (if it's not, tell me and I'll delete my post :D )

    I am having trouble changing the device's camera from the front camera to the back camera.
    I have a Unity application that I build for WebGL, the end goal being to use it in Google Chrome.
    My problem is that in the Unity editor my code to switch the device's camera works:

    I have a button (to switch cameras) and a Quad (to display my webcam's feed) in my scene, and two webcams connected to my computer; when I press the button, the webcam being used changes. It works.

    But when I build the game for WebGL and run it through an HTTPS server in Google Chrome, the front camera is used and no amount of code can switch from the front camera to the back camera.
    (I actually have to go into Chrome's settings and allow the back camera, but then only the back camera is accessible... so back to square one.)

    I have tried many different ways to switch cameras; here is a sample of one of the simplest solutions I tried:

    Code (CSharp):
    public void SwitchCamera ()
    {
        if (webCamTexture.isPlaying)
        {
            webCamTexture.Stop ();
        }

        for (int i = 0; i < WebCamTexture.devices.Length; i++)
        {
            if (WebCamTexture.devices[i].isFrontFacing && myRequestedFrontFacing)
            {
                webCamTexture.deviceName = WebCamTexture.devices[i].name;
                //webCamTexture = new WebCamTexture(WebCamTexture.devices[i].name);
                break;
            }
            else if (!WebCamTexture.devices[i].isFrontFacing && !myRequestedFrontFacing)
            {
                webCamTexture.deviceName = WebCamTexture.devices[i].name;
                //webCamTexture = new WebCamTexture(WebCamTexture.devices[i].name);
                break;
            }
        }
        webCamTexture.Play ();
    }
    I know it's hard to handle webcams in Chrome (mandatory HTTPS server and so on...), but I know it's possible. Maybe changing the current camera is not the right way to do it in Chrome; maybe I should change what the "default" camera is, but I have no idea how to do so...

    Any help is welcome and appreciated! In any case, have a great day everybody!
     
  50. EnoxSoftware

    EnoxSoftware

    Joined:
    Oct 29, 2014
    Posts:
    1,564
    Hi, dman8723.
    It is possible to write unsafe code such as pointers in C#, but I would not recommend it.
    The following example scene may be useful in your case.
    https://github.com/EnoxSoftware/Ope...rUnity/Examples/Advanced/AlphaBlendingExample
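    If you do want a managed-code equivalent of the pointer loop, Mat.get/Mat.put can copy each row into a byte[] instead. A sketch of the snippet above rewritten that way (assuming CV_8UC3 Mats, with the seam start column passed in as a parameter rather than read from the original's corners struct):

    ```csharp
    // Sketch: the per-pixel seam blend from the C++ snippet, using Mat.get and
    // Mat.put to move each row into a managed byte[] instead of uchar* pointers.
    // Assumes img1, trans and dst are CV_8UC3 and 'start' is the seam start column.
    using OpenCVForUnity;

    public static class SeamBlendSketch
    {
        public static void OptimizeSeam (Mat img1, Mat trans, Mat dst, int start)
        {
            double processWidth = img1.cols () - start;
            int rows = dst.rows ();
            int cols = img1.cols ();

            byte[] p = new byte[cols * 3];
            byte[] t = new byte[cols * 3];
            byte[] d = new byte[cols * 3];

            for (int i = 0; i < rows; i++) {
                img1.get (i, 0, p);
                trans.get (i, 0, t);
                dst.get (i, 0, d);

                for (int j = start; j < cols; j++) {
                    // If the warped image is black here, take img1 entirely.
                    double alpha = (t[j * 3] == 0 && t[j * 3 + 1] == 0 && t[j * 3 + 2] == 0)
                        ? 1.0
                        : (processWidth - (j - start)) / processWidth;

                    for (int c = 0; c < 3; c++)
                        d[j * 3 + c] = (byte)(p[j * 3 + c] * alpha + t[j * 3 + c] * (1 - alpha));
                }
                dst.put (i, 0, d);
            }
        }
    }
    ```

    The row-buffer approach avoids unsafe code while still doing only one native round-trip per row; for a simple crossfade, Core.addWeighted with a precomputed weight mask would also work.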