
How to dynamically fit texture (or text) to screenspace projection of object?

Discussion in 'General Graphics' started by sleepandpancakes, May 1, 2019.

  1. sleepandpancakes

    sleepandpancakes

    Joined:
    Oct 1, 2016
    Posts:
    9
    I want to map a texture onto an object so that it is fitted onto the entire screenspace projection of the object, i.e. for each vertical cross-section of the object as seen by the camera, the pixel values of the texture are linearly remapped so that the texture fits the height of that cross-section. I drew a picture to try to explain what I want. IMG_0017.PNG

    I can only think of how to fit the texture to the screenspace projection of the mesh bounds (by passing screenspace projected bounds to a shader which linearly remaps the UVs), which would result in some of the texture getting cut off (where the mesh does not reach the extent of the bounds). How can I fit it to the extents of the actual mesh?
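
    For reference, that bounds-based fallback is just a linear remap. Here is a minimal CPU-side sketch of it (the same arithmetic would live in the fragment shader); `bounds_min`/`bounds_max` are hypothetical values standing in for the screenspace-projected bounds passed in from script:

```python
def remap_uv_to_bounds(screen_pos, bounds_min, bounds_max):
    """Linearly remap a screenspace position into 0..1 UVs
    spanning the object's projected bounding rectangle, per axis."""
    u = (screen_pos[0] - bounds_min[0]) / (bounds_max[0] - bounds_min[0])
    v = (screen_pos[1] - bounds_min[1]) / (bounds_max[1] - bounds_min[1])
    return (u, v)

# A fragment at the center of the projected bounds samples the texture center:
remap_uv_to_bounds((0.5, 0.5), (0.25, 0.25), (0.75, 0.75))  # -> (0.5, 0.5)
```

    As the post says, this fits the bounds, not the silhouette, so the texture gets cut off wherever the mesh doesn't reach the edges of its bounding rectangle.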

    The reason I am doing this is that I want to fit TMPro text into the shape of a mesh as seen by the camera, and I thought it would be most doable if I captured the text through a RenderTexture. However, this might require that I keep creating and destroying RenderTextures at runtime, so if there's a way to do it without RenderTextures then that might be even better. From what I understand this is an unusual problem that might not be solvable with just shader and texture magic, so if there is a better place to post this question please let me know.
     
  2. sleepandpancakes

    sleepandpancakes

    Joined:
    Oct 1, 2016
    Posts:
    9
    Bump. Does anyone know of a general direction I could go, or of a better place to ask this question?
     
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Honestly, I'm not entirely sure what you're trying to do. Your example image doesn't make sense to me, since there's no plausible projection I can think of that would produce the result you show.

    I can think of how to do this:
    upload_2019-5-6_14-9-51.png

    Or this:
    upload_2019-5-6_14-10-2.png

    Or even this:
    upload_2019-5-6_14-10-11.png

    But this doesn't make any sense:
    upload_2019-5-6_14-10-24.png

    And if you want to deal with occlusion in any way, any even mildly efficient technique goes right out the window.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Btw, any option that's not the first or third example above will absolutely require render textures.
     
  5. sleepandpancakes

    sleepandpancakes

    Joined:
    Oct 1, 2016
    Posts:
    9
    What I was trying to communicate with my illustration:
    Let's say our texture coordinates T = (T.x, T.y) go from (0,0) to (1,1) and the projection of the object goes from xMin to xMax. Let's also say that, for a given value of x in screenspace, yMin(x) and yMax(x) are the min/max y values of the screenspace projection of the object at that x. Our texture's projected screenspace coordinates P would be
    P.x = lerp(xMin, xMax, T.x)
    P.y = lerp(yMin(P.x), yMax(P.x), T.y)

    That might still be confusing, I'm not sure how to explain it more succinctly.
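
    Maybe a small numeric sketch helps. The silhouette functions below are made up purely for illustration (a shape whose height shrinks to the right):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def project(tx, ty, x_min, x_max, y_min_at, y_max_at):
    """Map texture coords (tx, ty) in 0..1 to screenspace, fitting the
    texture column-by-column to the object's silhouette. y_min_at/y_max_at
    return the silhouette's vertical extent at a given screenspace x."""
    px = lerp(x_min, x_max, tx)
    py = lerp(y_min_at(px), y_max_at(px), ty)
    return (px, py)

# Hypothetical silhouette spanning x in 0..1, height tapering to the right:
project(0.5, 1.0, 0.0, 1.0,
        y_min_at=lambda x: 0.0,
        y_max_at=lambda x: 1.0 - 0.5 * x)
# The texture's top edge (ty = 1) lands on the silhouette's top at that
# column: (0.5, 0.75)
```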

    I am curious about the second example you posted. I am not sure what you mean by that projection and what it would look like dynamically. Can you explain it more and how you would approach it?

    Thank you
     
  6. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    The second example would essentially be just an object space projection rather than a screen space one. Wouldn't require any fancy real time stuff, just use the mesh's object space xy coordinates scaled by a fixed amount, or use a Unity projector similarly scaled.
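
    Sketched out, that object-space planar projection is just this (the scale and offset values here are arbitrary placeholders you'd tune per object, not anything Unity-specific):

```python
def planar_uv(object_pos, scale=1.0, offset=(0.5, 0.5)):
    """Planar projection: derive UVs from the mesh's object-space
    x/y, ignoring z, scaled and recentered by fixed amounts."""
    x, y, _z = object_pos
    return (x * scale + offset[0], y * scale + offset[1])

# A vertex at the object-space origin samples the texture center,
# regardless of its depth:
planar_uv((0.0, 0.0, 3.0))  # -> (0.5, 0.5)
```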
     
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    And I do now understand what you're attempting. It would require multiple render textures, which is fine; the bad part is that the only way to do what you want requires iterating over every pixel of the drawn mesh. You'd probably have to render your object out to a render texture as a solid color, then run a shader over that to calculate the coverage for each column and row (probably limited to the renderer's screen space bounds so you're not processing the whole image), and use that data to construct the appropriate UVs. It's doable, but it's not going to be fast.
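
    The per-column coverage pass described above might look like this, sketched on the CPU over a tiny binary mask (in practice this would be a shader pass over the solid-color render texture):

```python
def column_extents(mask):
    """For each column of a binary coverage mask (rows of 0/1),
    find the min and max covered row, or None if the column is empty."""
    height = len(mask)
    width = len(mask[0])
    extents = []
    for x in range(width):
        rows = [y for y in range(height) if mask[y][x]]
        extents.append((min(rows), max(rows)) if rows else None)
    return extents

# 3x4 mask of a shape that is taller on the left:
mask = [
    [1, 0, 0, 0],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
]
column_extents(mask)  # -> [(0, 2), (1, 2), (1, 1), None]
```

    The resulting per-column extents are exactly the yMin(x)/yMax(x) values from the earlier posts, ready to feed the UV remap.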
     
  8. sleepandpancakes

    sleepandpancakes

    Joined:
    Oct 1, 2016
    Posts:
    9
    Ok, so the second example you showed wouldn't change based on the camera's perspective? The texture would stay fixed?
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,352
    Correct. It'd be the same as a basic planar projection mapped UV. Though you could do it so that rotating the object changes the alignment, it wouldn't be guaranteed to match the object's bounds without rescaling the projection used. And it's still a linear projection, so it won't map to the visible pixels, just the min/max bounds of the overall mesh in 3D space.