
Question Portrait mode

Discussion in 'Barracuda' started by michalekmarcin, Mar 3, 2021.

  1. michalekmarcin


    Joined:
    Mar 28, 2019
    Posts:
    16
Hi,

I'm working on a project with image recognition. I'm using Unity 2020.2.4 and Barracuda 1.3.0.

I have a problem with portrait mode in the application. In landscape mode everything works: the full-screen preview image displays correctly, and the frames are in the right places with the right sizes. When I change the device orientation to portrait, my neural network stops working and the preview has the wrong orientation (black bars on the top and bottom of the screen, and the image is distorted).

I use WebCamTexture to get the image from the iPhone's back camera and display the preview on a RawImage. I pass the same texture to a tensor. I think I should rotate the texture, but I don't know how to do that.

    Code (CSharp):
private void InitiateCameraDevice()
{
    var webCamDevices = WebCamTexture.devices;

    if (webCamDevices.Length == 0)
    {
        Debug.Log("No camera detected");
        return;
    }

    foreach (var device in webCamDevices)
    {
        if (device.isFrontFacing == false)
        {
            _webCamTexture = new WebCamTexture(device.name, Screen.width, Screen.height);
            break;
        }
    }

    if (_webCamTexture == null)
    {
        Debug.Log("Unable to find back camera");
        return;
    }

    _webCamTexture.Play();
    _previewUI.texture = _webCamTexture;
}

private void Update()
{
    if (Source == VideoSourceForDetector.Camera && _webCamTexture == null)
    {
        return;
    }

    var texture = (Source == VideoSourceForDetector.Camera) ? ProcessCameraImage() : ProcessVideoPlayerOutput();

    // WebCamTexture reports a 16x16 placeholder until the first real frame arrives
    if (texture.width <= 16)
    {
        return;
    }

    // pass texture to detector
    _detector.ProcessImage(texture, _scoreThreshold, _overlapThreshold);

    var i = 0;
    foreach (var box in _detector.DetectedObjects)
    {
        if (i == _markers.Length)
        {
            break;
        }

        _markers[i++].SetAttributes(box, _webCamTexture.videoRotationAngle, ShowFrames);
    }

    for (; i < _markers.Length; i++)
    {
        _markers[i].Hide();
    }
}

private Texture ProcessCameraImage()
{
    var ratio = (float)Screen.width / (float)Screen.height;
    _aspectRatioFitter.aspectRatio = ratio;
    _markersAspectRatioFitter.aspectRatio = ratio;

    var scaleY = _webCamTexture.videoVerticallyMirrored ? -1f : 1f;
    _previewUI.rectTransform.localScale = new Vector3(1, scaleY, 1);

    return _webCamTexture;
}

private Texture ProcessVideoPlayerOutput()
{
    return _videoPlayeRenderTexture;
}
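One common approach to the rotation question in the post above (a sketch only, not tested against this project; `UpdatePreviewOrientation` is a hypothetical helper, and `_previewUI` / `_aspectRatioFitter` are assumed to be the same fields as in the code) is to rotate the preview RawImage by the angle WebCamTexture reports, and swap the aspect ratio when the video is rotated 90 or 270 degrees:

Code (CSharp):
// Sketch: orient the RawImage preview from WebCamTexture's reported rotation.
private void UpdatePreviewOrientation()
{
    // videoRotationAngle is the clockwise angle (0/90/180/270) the image
    // must be rotated by to appear upright on the current screen.
    int angle = _webCamTexture.videoRotationAngle;
    float scaleY = _webCamTexture.videoVerticallyMirrored ? -1f : 1f;

    // UI rotation is counter-clockwise, hence the negated angle.
    _previewUI.rectTransform.localEulerAngles = new Vector3(0, 0, -angle);
    _previewUI.rectTransform.localScale = new Vector3(1, scaleY, 1);

    // At 90/270 degrees the displayed width and height swap, so the
    // AspectRatioFitter needs the inverted ratio of the camera image.
    bool rotated = angle == 90 || angle == 270;
    float ratio = (float)_webCamTexture.width / _webCamTexture.height;
    _aspectRatioFitter.aspectRatio = rotated ? 1f / ratio : ratio;
}

Calling this from Update() (the angle can change when the device rotates) would fix the preview; the texture itself is unchanged, so anything fed to the network still receives the unrotated sensor image.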
     
  2. michalekmarcin


    Joined:
    Mar 28, 2019
    Posts:
    16
    I also tried using ARFoundation's XRCpuImage. Now the preview image is fine, but image recognition still doesn't work.

    Code (CSharp):
private unsafe void OnFrameReceivedFromArCamera(ARCameraFrameEventArgs obj)
{
    XRCpuImage image;
    if (_cameraManager.TryAcquireLatestCpuImage(out image) == false)
    {
        return;
    }

    var format = TextureFormat.RGBA32;

    if (_texture == null || _texture.width != image.width || _texture.height != image.height)
    {
        _texture = new Texture2D(image.width, image.height, format, false);
    }

    var conversionParams = new XRCpuImage.ConversionParams(image, format, XRCpuImage.Transformation.None);
    int size = image.GetConvertedDataSize(conversionParams);
    var buffer = new NativeArray<byte>(size, Allocator.Temp);

    try
    {
        image.Convert(conversionParams, new IntPtr(buffer.GetUnsafePtr()), buffer.Length);

        _texture.LoadRawTextureData(buffer);
        _texture.Apply();
        _previewImage.texture = _texture;
        _detector.ProcessImage(_texture, _scoreThreshold, _overlapThreshold);
        UpdateMarkers();
    }
    catch (Exception e)
    {
        // don't swallow conversion failures silently
        Debug.LogException(e);
    }
    finally
    {
        // dispose on every path, including exceptions
        image.Dispose();
        buffer.Dispose();
    }
}
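A hedged guess at why recognition still fails here: XRCpuImage.Transformation covers mirroring only, not rotation, so in portrait the converted texture is still in sensor (landscape) orientation when it reaches the detector. One way to check would be to rotate the texture on the CPU before calling ProcessImage. The sketch below uses a hypothetical helper (`Rotate90CW` is not part of any Unity API), and the per-frame allocations are for illustration only, not production use:

Code (CSharp):
// Sketch: rotate a readable Texture2D 90 degrees clockwise on the CPU.
private static Texture2D Rotate90CW(Texture2D src)
{
    var srcPixels = src.GetPixels32();
    int w = src.width, h = src.height;
    var dstPixels = new Color32[w * h];

    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            // source (x, y) maps to (h - 1 - y, x) in the rotated image,
            // whose width is h and height is w
            dstPixels[x * h + (h - 1 - y)] = srcPixels[y * w + x];
        }
    }

    var dst = new Texture2D(h, w, src.format, false);
    dst.SetPixels32(dstPixels);
    dst.Apply();
    return dst;
}

If this makes detection work in portrait, a Graphics.Blit with a rotation material would be the faster way to do the same thing on the GPU.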
     
  3. alexandreribard_unity


    Unity Technologies

    Joined:
    Sep 18, 2019
    Posts:
    53
    @michalekmarcin could you provide your model?
    My guess is that it has something to do with the webcam image having different dimensions in landscape vs. portrait.
    Is that the case?
    If so, remember to recreate the input Tensor when the image dimensions change (and to dispose of the previous one).
    Also check whether your model is dimension-independent, so that it works with both input resolutions.

    Let me know
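The recreate-and-dispose pattern mentioned above could look roughly like this (a sketch, assuming `_worker` is an IWorker created from the model and `_inputTensor` is a cached field; `RunInference` is a hypothetical method name):

Code (CSharp):
// Sketch: rebuild the Barracuda input tensor each frame. A Tensor built
// from a Texture copies the pixels at construction time, so the old one
// must be disposed and a new one created whenever the image changes --
// including when its dimensions flip between landscape and portrait.
private void RunInference(Texture texture)
{
    _inputTensor?.Dispose();
    _inputTensor = new Tensor(texture, channels: 3);

    _worker.Execute(_inputTensor);
    var output = _worker.PeekOutput();
    // ... decode detections from `output` here ...
}

Disposing is the important part: Barracuda tensors hold native/GPU memory, so reassigning the field without Dispose() leaks a buffer every frame.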