
Most efficient way to access pixel color at predetermined viewport coordinate?

Discussion in 'Scripting' started by luniac, Nov 4, 2021.

  1. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    As the title suggests, all I want is to find the pixel color at a predetermined coordinate in a particular camera's viewport.

    I've done hours of research and keep seeing stuff involving creating new textures, using ReadPixels, all this kind of complex stuff.

    All I need is to access the color information of whatever pixel in a particular camera that's rendering only the layer whose color data I'm interested in.

    I'm thinking maybe a render texture is the way to go because it's updated in real time and I can set it on a camera that renders only the layer I need, but I'm not sure how to simply access the color of a particular pixel in the render texture.

    I've also read something about accessing pixel info directly on the GPU for efficiency, but this is too complex and confusing for me at the moment.

    Any help very appreciated!
     
  2. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,411
  3. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    With a script, and ReadPixels is too expensive performance-wise; I need to access colors every frame.

    And I don't have a Texture2D, I'm looking at whatever a camera is rendering, so maybe a render texture would work, but I'm not sure if it's the best way.

    I don't wanna read all the pixels of a texture, I just need access to a few pixels every frame, and read only.
     
  4. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,411
  5. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    I'm not sure how your first link can help, it doesn't seem related to render textures, and the second one has latency; I need zero latency for my case.

    Is there really no way to simply access the pixel color in a camera viewport without going through all this ReadPixels and texture stuff!?
     
  6. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,411
    Screen data is on the GPU; you need to move it back to the CPU to access it.

    The first example uses a small object with a custom shader to draw the depth texture into it,
    then grabs that image and uses ReadPixels on a 1x1 texture to get the data onto the CPU:
    https://github.com/staggartcreation...aster/GraphicsRaycast/GraphicsRaycast.cs#L100
    It doesn't cause performance issues, on desktops at least..

    What is your use case for it, or target platform?
     
  7. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    I see, I don't know shaders so I was confused, but I did read about making the needed texture smaller.

    I'm making a mobile game, so performance is a must.

    My use case is a 2D platformer, but all collision detection is based on pixel colors.

    I only really need the information from the pixels around my character, say a controllable ball in this case.

    Maybe I can have a camera that's zoomed in on the player or something and generate a smaller render texture to access the color pixels from?
     
  8. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,411
    OK, so you need pixel-perfect collisions. Is the view static?
    Then you could take a screenshot just once and use that as a collision "map" (a regular array where you can check whether an area is a collider or not).
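    A minimal sketch of that collision-map idea, assuming the level is available as a readable Texture2D (import setting Read/Write Enabled); the class and field names here are just illustrative:

    Code (CSharp):
    using UnityEngine;

    // Rough sketch of the "collision map" idea: read the level texture once
    // and keep a plain bool array around for cheap per-frame lookups.
    // Assumes "levelTexture" is marked Read/Write Enabled in its import settings.
    public class CollisionMap : MonoBehaviour
    {
        public Texture2D levelTexture;   // the hand-drawn level image
        bool[,] solid;                   // true where the pixel counts as a surface

        void Awake()
        {
            Color32[] pixels = levelTexture.GetPixels32();
            solid = new bool[levelTexture.width, levelTexture.height];
            for (int y = 0; y < levelTexture.height; y++)
                for (int x = 0; x < levelTexture.width; x++)
                {
                    Color32 c = pixels[y * levelTexture.width + x];
                    // treat anything close to black as a surface
                    solid[x, y] = c.r < 32 && c.g < 32 && c.b < 32;
                }
        }

        // Query with texture-space pixel coordinates.
        public bool IsSolid(int x, int y)
        {
            if (x < 0 || y < 0 || x >= solid.GetLength(0) || y >= solid.GetLength(1)) return false;
            return solid[x, y];
        }
    }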

    Yeah, that zoomed view sounds like a good idea too, much faster to read a small area,
    but with ReadPixels you can already define the area you grab (so it can take the ball position, or even do multiple single-pixel calls around the ball).

    Or google for: unity pixel perfect collision, there are some other options.
     
  9. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    I don't quite need it to be pixel perfect, but I'll look into it, thanks.

    I thought pixel-perfect collisions still relied on colliders and/or the physics engine on some level.

    I'm not sure pixel perfect would work for my artwork or gameplay feel, but we'll see.

    Also, doesn't the CPU still have to send the data to the GPU first to render it? Is there no way to grab color data about the frame before the GPU renders it to screen?
     
  10. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    Actually, pixel-perfect collision might not quite work for me if it's image based, because I will hand-draw the entire level digitally and white pixels are supposed to be empty space, so my ball player is an image inside a huge level image lol
     
  11. mgear

    mgear

    Joined:
    Aug 3, 2010
    Posts:
    9,411
    So is there some issue if you use regular 2D colliders for the map?
     
  12. You can experiment with this or with this, but I wouldn't think you'll be successful with this pixel-color-based collision idea. Especially not on mobiles; it's too much overhead to read back every single frame. On top of that, you will have timing issues: either you use the previous frame for this frame's calculations, or you do everything at the end of the frame, or, in the case of OnRenderImage, right after rendering the current frame. Either way, you will be way after the Update and FixedUpdate loops. Usually, in the case of platformers, a one-frame offset doesn't feel too good. I would rather find a way to somehow tag the level with proper collision data.

    Why don't you know what the color of a certain pixel is anyway? Do you load user-created levels?
     
    Bunny83 likes this.
  13. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    I don't wanna spend the time to make them, I just wanna draw my level using colors like black, white, and red, and have the game's pixel-color logic make it interactable lol
     
  14. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    I'll check out your links, but yeah, it's an experiment, I wanna see if it can work.

    No, it's not user created, but I'm hand drawing the levels, so I just wanna import a big PNG file and put my character sprite inside it. So I need pixel color detection to determine what's a floor (which will be black), a danger element (which will be red), etc. etc.

    I wanna just draw the level, import it, and have it be instantly playable based on my pixel-color code.

    I understand timing may be an issue; I will try sampling various pixels around and inside the player character to determine the logic of what will happen.

    If I can get the pixel color logic to work, then I can even have particle effects do cool things and interact with my character; it could be pretty amazing.
     
  15. There is a general rule of thumb in software and especially in game development (there are some rare exceptions):
    - if something is easy to use while you're authoring, it's usually super inefficient to use at runtime (slow)
    - if something is performant at runtime, it's usually more complicated to set up during authoring

    Hashtag dev-life...

    I would like to propose another method for you: you author your pixels and then import the texture into Unity. Then in an AssetPostprocessor you read that texture in the editor and generate your colliders automatically.
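    A rough sketch of what that import-time approach could look like, assuming a "Levels/" folder convention and leaving the actual collider baking as a comment (everything here is illustrative, not a drop-in solution):

    Code (CSharp):
    using UnityEngine;
    using UnityEditor;
    using System.Collections.Generic;

    // Sketch of the AssetPostprocessor idea (editor-only, lives in an "Editor" folder):
    // when a level texture is (re)imported, scan it and collect rectangles that cover
    // solid (black-ish) pixel runs. Turning those rects into BoxCollider2Ds on a prefab
    // is left out; the point is that the pixel reading happens once, at import time.
    public class LevelImportPostprocessor : AssetPostprocessor
    {
        void OnPostprocessTexture(Texture2D texture)
        {
            if (!assetPath.Contains("Levels/")) return;   // example folder convention

            var solidRuns = new List<RectInt>();
            Color32[] pixels = texture.GetPixels32();

            for (int y = 0; y < texture.height; y++)
            {
                int runStart = -1;
                for (int x = 0; x <= texture.width; x++)
                {
                    bool solid = x < texture.width && pixels[y * texture.width + x].r < 32;
                    if (solid && runStart < 0) runStart = x;      // run begins
                    if (!solid && runStart >= 0)                  // run ends
                    {
                        solidRuns.Add(new RectInt(runStart, y, x - runStart, 1));
                        runStart = -1;
                    }
                }
            }

            Debug.Log($"{assetPath}: found {solidRuns.Count} solid pixel runs");
            // From here you could merge runs into bigger rects and bake them into
            // a prefab with BoxCollider2D components, or into a ScriptableObject.
        }
    }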
     
    Bunny83 likes this.
  16. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    The screen capture won't work because I might wanna do post-processing, but I need the original colors for the logic.

    Graphics.Blit looks interesting; I'm not sure exactly what it does, but if it helps access color info that would be helpful.
     
  17. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    Maybe I'll do that if all else fails lol
     
  18. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
  19. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    To reiterate some points.

    Render textures only exist on the GPU. If you want to get a color value back to the CPU you need to transfer data back from the GPU to the CPU.

    This is usually slow.

    ReadPixels() transfers data in a render texture back from the GPU to a Texture2D on the CPU. And the slow part isn't always the data itself: getting data from the GPU requires waiting for the GPU to finish whatever it's already doing before it can start transferring the data. When you do ReadPixels() the CPU stalls waiting for the GPU to respond. Thus, it is usually slow. It can be made faster by reducing how much data is being transferred, like only transferring a small area of the screen. You do need to have a Texture2D to hold the data you read from the render texture so it is accessible on the CPU. But it will still be slow.
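    For illustration, a minimal sketch of reading back only a small window with ReadPixels(); the render texture name and the 8x8 window size are just assumptions:

    Code (CSharp):
    using UnityEngine;

    // Minimal sketch of reading a small region back with ReadPixels().
    // Assumes "sourceRT" is the render texture the collision camera renders into,
    // and that we only care about a small square around the ball, in RT pixel coords.
    public class SmallAreaReadback : MonoBehaviour
    {
        public RenderTexture sourceRT;
        const int size = 8;                       // 8x8 pixel window, for example
        Texture2D readback;                       // CPU-side destination

        void Awake()
        {
            readback = new Texture2D(size, size, TextureFormat.RGBA32, false);
        }

        public Color SampleAround(int centerX, int centerY)
        {
            RenderTexture previous = RenderTexture.active;
            RenderTexture.active = sourceRT;       // ReadPixels reads from the active RT
            // This stalls the CPU until the GPU has finished rendering sourceRT.
            readback.ReadPixels(new Rect(centerX - size / 2, centerY - size / 2, size, size), 0, 0);
            RenderTexture.active = previous;

            return readback.GetPixel(size / 2, size / 2);   // center pixel of the window
        }
    }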

    The alternative is to read the data using a compute shader and store it in a compute buffer that you then get back from the GPU. This is slower, because now you're adding the extra step of the compute shader, and it doesn't matter if the data you're transferring is texture data or a buffer ... it's still data.

    The async method is "fast" because it doesn't stall the CPU. It issues the request and then you check again later to see if it's done. It might be done later the same frame, or it might be done several frames in the future. There's no way to know which one it'll be.

    CopyTexture() just copies the data from one texture on the GPU to another texture on the GPU. If you copy a render texture to a Texture2D and then try to read the data in C#, it'll be blank because the data is still only on the GPU.

    Otherwise it's great if you're trying to copy data in a render texture to another texture explicitly, because it avoids copying data to and from the CPU.


    The short version is that getting data from the GPU back to the CPU is generally slow. If you're relying on that for something you want to be fast, then you're in for a bad time. Basically, getting data from the GPU to the CPU takes some amount of time that cannot be avoided. So the best way to deal with that is to not do it at all. You either need to be doing everything on the GPU, or everything on the CPU, unless you can deal with a frame or two (or more) of latency. There's no other answer.
     
    Menyus777 and Lurking-Ninja like this.
  20. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    I can't do it async, I need a guarantee that I get the data every frame of the game, because I'm doing pixel-perfect collision stuff.

    What if I do this:
    CopyTexture on the GPU side to create several 1x1 Texture2Ds on the GPU that represent the individual pixels I'm interested in sampling.
    Use ReadPixels on those 1x1 Texture2Ds to transfer them back to the CPU so I can actually access the color data in code.

    If, for example, I only need to sample 12 pixels per frame total, would it really cause a lot of latency to transfer twelve 1x1 Texture2Ds to the CPU every frame?

    Or maybe I can create a 13th Texture2D that is a 12x1 texture containing the pixels I need in a row, and then do ReadPixels on that one texture every frame. Would it be faster, or an insignificant difference?


    Or are you saying that even a single 1x1 Texture2D sent from the GPU to the CPU could cause a stutter of a few frames?

    Is there a way, then, to access the color information that should be represented on screen before it's sent to the GPU to actually render?
    I guess that would require some kind of crazy CPU logic that would analyze all the elements on screen and programmatically calculate what colors should be where, which is too crazy lol

    hmm....
     
  21. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    I wouldn't do 12 individual ReadPixels() calls. I would expect each call to ReadPixels() to have some fixed overhead. Let's say it takes 5 ms to read a 64x64 pixel area: 4 ms of that might be the fixed overhead of making the request and only 1 ms of it is actual data transfer. So calling ReadPixels() 12 times could end up being 48 ms even if the data transfer cost is zero. How much it actually is will depend on the device you're running on, so it could be much less than 4 ms, but it could also be more!

    I'd make a single 12x1 render texture and use CopyTexture() to copy the individual pixels you want over from the original render texture, and then call ReadPixels() on that. That's kind of the best option you have. Though using a Blit() with a custom shader or compute shader to do that copy is plausibly slightly faster, I would expect ReadPixels() to still be the major time sink.
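    A rough sketch of that gather-then-read-once idea (the field names, format choices, and the fixed sample count of 12 are assumptions for illustration):

    Code (CSharp):
    using UnityEngine;

    // Sketch of the approach described above: copy the 12 pixels of interest into a
    // tiny 12x1 render texture on the GPU, then do a single ReadPixels() to pull
    // them all back to the CPU at once.
    public class PixelGather : MonoBehaviour
    {
        public RenderTexture sourceRT;            // what the collision camera renders
        const int sampleCount = 12;

        RenderTexture gatherRT;
        Texture2D cpuPixels;

        void Awake()
        {
            gatherRT = new RenderTexture(sampleCount, 1, 0, RenderTextureFormat.ARGB32);
            gatherRT.Create();
            cpuPixels = new Texture2D(sampleCount, 1, TextureFormat.RGBA32, false);
        }

        // "samplePositions" are pixel coordinates in sourceRT.
        public Color[] ReadSamples(Vector2Int[] samplePositions)
        {
            // GPU -> GPU copies: one pixel per sample into the 12x1 target.
            for (int i = 0; i < sampleCount; i++)
            {
                Graphics.CopyTexture(sourceRT, 0, 0, samplePositions[i].x, samplePositions[i].y, 1, 1,
                                     gatherRT, 0, 0, i, 0);
            }

            // Single GPU -> CPU readback of all 12 pixels.
            RenderTexture previous = RenderTexture.active;
            RenderTexture.active = gatherRT;
            cpuPixels.ReadPixels(new Rect(0, 0, sampleCount, 1), 0, 0);
            RenderTexture.active = previous;

            return cpuPixels.GetPixels(0, 0, sampleCount, 1);
        }
    }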

    But yes, a single 1x1 readback could cost multiple milliseconds per frame on some hardware, at which point there is no way to do what you're trying to do on that hardware. But only you can answer the question of whether it's too slow for you or not by trying it. Try grabbing a single pixel and see how much that costs.

    Also realize some of the cost is in actually waiting for the GPU to finish rendering. Usually when you issue a rendering command in Unity, it goes off to the rendering thread and is handled asynchronously so it doesn't stall the main gameplay thread. By telling a camera to render and then calling ReadPixels() on the target render texture, you're stalling the CPU to wait for the rendering thread to finish, then for the GPU to actually render, then for the data to transfer.

    Another thing you might want to focus on is limiting how much you render. You mentioned you're rendering just the layers you care about, but are you rendering using the same projection and at the same resolution as the main camera, or are you only rendering the area you care about? Because doing the latter may help a lot too.
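    For example, a minimal sketch of a secondary camera that renders only a small area around the ball into a tiny render texture (the layer name "CollisionArt" and the sizes are just example values):

    Code (CSharp):
    using UnityEngine;

    // Sketch of keeping the readback cheap by only rendering the area you care about:
    // an orthographic camera that follows the ball, renders only the collision layer,
    // and outputs into a small render texture.
    public class CollisionCameraSetup : MonoBehaviour
    {
        public Transform ball;
        public RenderTexture smallRT;
        Camera collisionCam;

        void Awake()
        {
            smallRT = new RenderTexture(64, 64, 0, RenderTextureFormat.ARGB32);

            collisionCam = gameObject.AddComponent<Camera>();
            collisionCam.orthographic = true;
            collisionCam.orthographicSize = 1f;                       // world units around the ball
            collisionCam.cullingMask = LayerMask.GetMask("CollisionArt");
            collisionCam.targetTexture = smallRT;
            collisionCam.clearFlags = CameraClearFlags.SolidColor;
            collisionCam.backgroundColor = Color.white;               // white == empty space
        }

        void LateUpdate()
        {
            // keep the tiny camera centered on the ball
            transform.position = new Vector3(ball.position.x, ball.position.y, -10f);
        }
    }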

    That's what @mgear was getting at earlier. CPU-side rendering is totally a thing; Unity's occlusion culling is based on CPU-side rendering, though it's not something exposed to users, so you'd have to write it yourself. You can iterate over the sprites that are near the character via their bounds, do a raycast against the ones at the pixel position you want information about, get the UV at that position, and sample the sprite texture via C# to get the color. This can end up being much faster than the round trip to and from the GPU.
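    A rough sketch of that CPU-side sampling, using Physics2D.OverlapPoint as the point query standing in for the raycast, and assuming the level texture is Read/Write enabled, not rotated, and not packed in an atlas:

    Code (CSharp):
    using UnityEngine;

    // Sketch of the fully CPU-side route: find the 2D collider under a world point,
    // convert that point into the sprite's texture coordinates, and read the pixel
    // with GetPixel(). No GPU readback involved.
    public static class SpritePixelSampler
    {
        public static bool TrySample(Vector2 worldPoint, LayerMask levelMask, out Color color)
        {
            color = Color.white;
            Collider2D col = Physics2D.OverlapPoint(worldPoint, levelMask);
            if (!col) return false;

            var renderer = col.GetComponent<SpriteRenderer>();
            if (!renderer || !renderer.sprite) return false;
            Sprite sprite = renderer.sprite;

            // world -> sprite local space, then local units -> pixels relative to the pivot
            Vector2 local = renderer.transform.InverseTransformPoint(worldPoint);
            Vector2 pixel = local * sprite.pixelsPerUnit + sprite.pivot + sprite.rect.min;

            color = sprite.texture.GetPixel((int)pixel.x, (int)pixel.y);
            return true;
        }
    }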
     
    luniac likes this.
  22. luniac

    luniac

    Joined:
    Jan 12, 2011
    Posts:
    614
    Hmm, a lot to think about lol
    So I'd pre-make two render texture assets: one for the camera that's rendering the layers I care about, and one that's 12x1.
    During play I can reference them when doing CopyTexture and the GPU will do stuff on its end, and then ReadPixels on the 12x1 to access the pixel data on the CPU and get the color.

    I don't know much about shaders atm, so no Blit for me for now anyway lol

    The last thing you said about doing it on the CPU with raycasts is also something I found while googling, but I wasn't sure it'd work for my use case.

    I assume that any pixels I want the raycasts to detect will have to belong to an object with a 2D collider for the raycast to hit against, unless there's non-collider-based raycasting? Like a layer-based-only way that I'm not aware of?

    Now, I'm planning to digitally draw an entire level and import the PNG file into Unity, so white pixels are "empty space" and black pixels are "surface".

    I'd drop my player sprite right over the level PNG. I guess I could add a 2D collider for raycasting on the level PNG, and raycasting into the screen should get whatever pixel of the level PNG is there.

    If that works, then for sure this is the best way, avoiding the whole GPU round trip!