
Omit some pixels to achieve a biological eye receptor camera

Discussion in 'General Graphics' started by Sab_Rango, Feb 23, 2021.

  1. Sab_Rango

    Sab_Rango

    Joined:
    Aug 30, 2019
    Posts:
    121
    Hey!
    I am trying to create a biological eye camera for machine learning!

    Specifically, the camera should render in the traditional way, but some of the pixel or compute shader invocations should be omitted during rasterization.

    Animal eye receptors are not distributed evenly across the field of view the way the pixels of a physical camera are.
    eye pixel structure.png
    In the center of the animal eye the receptor density is very high, and it falls off non-linearly with distance from the center.

    My first idea for achieving this is to omit some threads during the shader calculation, like the image below.
    dont compute.png
    The dots in the texture mark the places where the shader calculations should be omitted.

    My primary reason for doing this is to use this camera for real-time machine-learning visualization.
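
    If it helps to make the idea concrete, here is a rough, untested sketch of how such a dot mask could be generated procedurally in a Unity script. The class and parameter names (EyeReceptorMask, falloff) are my own placeholders, and the Gaussian falloff is just one plausible density curve, not a model of a real retina:

```
using UnityEngine;

// Sketch: generates a "receptor" mask texture whose kept-pixel density
// falls off with distance from the image center, mimicking a fovea.
// Convention: white texel = compute this pixel, black texel = omit it.
public static class EyeReceptorMask
{
    public static Texture2D Generate(int size, float falloff = 4f, int seed = 0)
    {
        var tex = new Texture2D(size, size, TextureFormat.R8, false);
        var rng = new System.Random(seed);

        for (int y = 0; y < size; y++)
        {
            for (int x = 0; x < size; x++)
            {
                // Offset from the image center, normalized to [-0.5, 0.5].
                float dx = (x + 0.5f) / size - 0.5f;
                float dy = (y + 0.5f) / size - 0.5f;
                float r2 = dx * dx + dy * dy;

                // Probability of keeping a pixel: 1 at the center,
                // decaying non-linearly toward the edges.
                float keep = Mathf.Exp(-falloff * r2 / 0.25f);
                bool white = rng.NextDouble() < keep;
                tex.SetPixel(x, y, white ? Color.white : Color.black);
            }
        }

        tex.Apply();
        return tex;
    }
}
```

    The stochastic sampling keeps the dot pattern irregular, so the omitted pixels don't form a visible grid in the reconstructed image.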


    I know that this method may not fit the existing pipelines (Built-in, URP, HDRP).

    But anyway, is there any way to make this biological camera?
    I am even ready to create my own SRP to achieve these realistic eye pixels!
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Something like this is already done for VR rendering. Some PS4 PSVR games do it, and the SteamVR API has it as a built-in feature. Large parts of the screen are occluded and then reconstructed as a post process.

    The "trick" to doing this is that the first thing the camera renders is an alpha-tested, all-black mask at the camera's near plane. Render black anywhere you don't want anything to be rendered later.

    Whether or not this actually saves any performance depends on how you render your dots. It also won't directly affect any compute or post-process shaders, since they just see black pixels in the image they're handed; unless you rewrite those yourself, they won't know to skip those pixels. Ideally you'd apply any machine learning to the image before the post processing gets ahold of it anyway. That's how VR handles it, at least: the reconstruction filter is applied before the image is handed to post processing.
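
    To make the trick concrete, here is a rough sketch of such a mask shader for the built-in pipeline (untested; the shader name and the _MaskTex property are placeholders). It writes black plus near-plane depth wherever the mask texture is black, so later opaque draws fail the depth test there and get rejected by early-z:

```
Shader "Hidden/EyeReceptorMask"
{
    Properties
    {
        // Black texel = occlude (skip) this pixel, white texel = render it.
        _MaskTex ("Receptor Mask", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            // Drawn first (e.g. via a CommandBuffer before the opaque pass)
            // so the depth it writes occludes all later geometry there.
            ZWrite On
            ZTest Always
            Cull Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MaskTex;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float2 uv  : TEXCOORD0;
            };

            // Single fullscreen triangle, placed at the near clip plane.
            v2f vert (uint id : SV_VertexID)
            {
                v2f o;
                float2 uv = float2((id << 1) & 2, id & 2);
                o.uv = uv;
                o.pos = float4(uv * 2.0 - 1.0, UNITY_NEAR_CLIP_VALUE, 1.0);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // White in the mask = keep the scene pixel, so discard the
                // mask fragment there and let the scene render normally.
                fixed m = tex2D(_MaskTex, i.uv).r;
                clip(0.5 - m);
                return fixed4(0, 0, 0, 1); // black, depth written at near plane
            }
            ENDCG
        }
    }
}
```

    Drawing it as a procedural fullscreen triangle (e.g. CommandBuffer.DrawProcedural with 3 vertices at CameraEvent.BeforeForwardOpaque) avoids needing a quad mesh. Whether the early-z rejection actually pays off depends on the GPU and on how fine-grained the dot pattern is; very small scattered dots defeat the coarse depth culling on some hardware.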
     
  3. Sab_Rango

    Sab_Rango

    Joined:
    Aug 30, 2019
    Posts:
    121
    Thx!
    I have tested this method in HDRP with the default template, and it works!
    Standard rendering gives about 94 FPS.
    A camera covered with the white mask canvas gives 122 FPS.

    Amazing, there is a way to achieve this without creating a new pipeline! :)