I am looking into a way to calculate the mipmap level used in my fragment program. There are two objectives to this:

1. To enable different shader implementations depending on the mipmap level used.
2. To create a shader which displays the different mipmaps used in a color-coded fashion.

So far the most promising pieces of information I have found regarding this issue are:
http://stackoverflow.com/questions/...-mipmap-level-in-glsl-fragment-shader-texture
http://hugi.scene.org/online/coding/hugi 14 - comipmap.htm

The code would be something like this:

Code (Cg):
float mip_map_level(in float2 texture_coordinate) // in texel units
{
    float2 dx_vtc = ddx(texture_coordinate);
    float2 dy_vtc = ddy(texture_coordinate);
    float delta_max_sqr = max(dot(dx_vtc, dx_vtc), dot(dy_vtc, dy_vtc));
    return 0.5 * log2(delta_max_sqr);
}

The problem is that this code requires the texture coordinates to be un-normalized, i.e. in texel units. So I have a few questions regarding the suggested code:

1. How would I un-normalize the UV coordinates to texel coordinates? Could this be done using the ddx and ddy values from the normalized UV texture coordinates alone?
2. Why do I need the results of ddx and ddy to be two-dimensional variables? (If all I need is the derivative/change along the U or V coordinate, wouldn't a regular float suffice?)
3. What kind of values (codomain) do I get from ddx and ddy when applying these functions to the normalized UV texture coordinates? Would these be something like 0, 1, 1/2, 1/4, 1/8, ..., 1/4096?

Regarding question 1: the only way I see this being done is actually supplying the texture's texel size into the shader as a uniform.

As you said, supply the texture size as a uniform, e.g. 2048, then multiply with the normalized UV so 1 yields 2048, 0.5 yields 1024, 0 yields 0, etc. Could it be done from the normalized derivatives alone? Probably not: calculating the screen-space gradient and mip level necessarily depends on knowing the mip-0 texture size. And as for question 2, you can see a dot product needs to be performed with the vector results from ddx and ddy, right?
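To make the un-normalization and the formula concrete, here is a small CPU-side sketch of the same arithmetic in Python (the texture size and derivative values are made-up examples, and per-pixel UV deltas stand in for the hardware ddx/ddy):

```python
import math

def mip_map_level(duv_dx, duv_dy, tex_size):
    # Un-normalize: scale the per-pixel UV derivatives by the
    # mip-0 texture size to get derivatives in texel units.
    dx = (duv_dx[0] * tex_size, duv_dx[1] * tex_size)
    dy = (duv_dy[0] * tex_size, duv_dy[1] * tex_size)
    # Same as the shader: 0.5 * log2 of the larger squared length.
    delta_max_sqr = max(dx[0]**2 + dx[1]**2, dy[0]**2 + dy[1]**2)
    return 0.5 * math.log2(delta_max_sqr)

# A 2048-texel texture shown 1:1 on screen: UV changes by 1/2048 per
# screen pixel, so one screen pixel covers one texel -> level 0.
print(mip_map_level((1/2048, 0.0), (0.0, 1/2048), 2048))  # → 0.0

# Minified 4x: one screen pixel covers four texels -> level 2.
print(mip_map_level((4/2048, 0.0), (0.0, 4/2048), 2048))  # → 2.0
```

Note that without tex_size the function has no way to express "texels per screen pixel", which is why the normalized derivatives alone aren't enough.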

Unity automatically populates float4 TextureName_TexelSize with (1/width, 1/height, width, height) if you declare it.

While I totally agree with that, I am still wondering what kind of values (codomain) I get from ddx and ddy when applying these functions to the normalized UV texture coordinates.

This I don't understand: we have two functions, ddx and ddy, with ddx supplying us with the value of change along the U axis and ddy supplying us with the value of change along the V axis, hence I would assume that dx_vtc.y and dy_vtc.x are always zero.

I have never seen that before... is TextureName_TexelSize a uniform that Unity supplies automatically? Do I need to declare it? In any case, that's a good one to know... Is there an updated list of all the uniforms Unity does supply?

It is undocumented. You only need to declare it, and Unity will populate it for you. There is no up-to-date list of all of Unity's supplied uniforms. That would be nice, along with thorough documentation of the lighting pipeline and how each uniform is used. A man can dream.

Well, I finally found what I think is one of the clearest explanations of what ddx and ddy represent with regard to textures: http://stackoverflow.com/questions/...dimension-variables-when-quering-a-2d-texture

If you're writing HLSL, you can directly use https://msdn.microsoft.com/en-us/library/windows/desktop/bb944001(v=vs.85).aspx

Because of the mathematics: u and v here are un-normalized, and both are functions of screen position, u = u(x,y) and v = v(x,y). Stepping one pixel in screen x, (x,y) maps to (u1,v1) and (x+1,y) maps to (u2,v2), so ddx gives (u2-u1, v2-v1) = (deltau, deltav), where deltau and deltav are the partial derivatives of u(x,y) and v(x,y) with respect to x. So one screen pixel in the x direction covers sqrt(deltau * deltau + deltav * deltav) texels. Set k = sqrt(deltau * deltau + deltav * deltav): when k = 1, 1 screen pixel = 1 texel, log2(k) = 0, which is level 0; when k = 2, 1 screen pixel = 2 texels, log2(k) = 1, and so on. Now set q = log2(k); then 2^q = k, so 2^q * 2^q = k*k, i.e. 2^(2q) = k*k, so log2(k*k) = 2q, giving q = 0.5 * log2(k*k) = 0.5 * log2(deltau * deltau + deltav * deltav). The y direction is the same, and we choose the larger level of the x and y directions, so... that's all.
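That algebra can be sanity-checked numerically; a quick Python sketch with a few example gradient values:

```python
import math

# For a one-pixel step in screen x covering (deltau, deltav) texels,
# the derivation says the mip level is
#   q = log2(k)  where  k = sqrt(deltau^2 + deltav^2)
# and equivalently  q = 0.5 * log2(deltau^2 + deltav^2).
for deltau, deltav in [(1.0, 0.0), (2.0, 0.0), (3.0, 4.0)]:
    k = math.sqrt(deltau**2 + deltav**2)
    q_direct = math.log2(k)
    q_shader = 0.5 * math.log2(deltau**2 + deltav**2)
    assert abs(q_direct - q_shader) < 1e-12
    print(f"k={k:g} -> level {q_shader:.4f}")
```

k=1 gives level 0 (one texel per pixel), k=2 gives level 1, and the (3, 4) case gives k=5, level log2(5) ≈ 2.3219, matching the 0.5 * log2(dot(...)) trick in the shader code.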

I, unfortunately, had a stroke reading that, but I would very much like to understand how to get the mip level and color-code it through a fragment shader in Unity...

Code (csharp):
float mipLevel = mip_map_level(i.uv * _MainTex_TexelSize.zw);

That's calling the function from the first post, with the UVs multiplied by the texture's resolution. That's it.

One key thing: for _MainTex_TexelSize to work you have to define it in the shader code (just put float4 _MainTex_TexelSize; somewhere outside of a function, but between the CGPROGRAM and ENDCG), and you must still be using _MainTex. Specifically, the texture color needs to be used in the output color, otherwise it'll be optimized out of the shader, and then the texel size won't be set. Alternatively, you could just have a material property with which you manually specify the texture resolution.

For future Google searches, this is the method to calculate the mipmap level. HLSL code:

Code (CSharp):
float mip_map_level(in float2 texture_coordinate) // texture_coordinate = uv_MainTex * _MainTex_TexelSize.zw
{
    float2 dx_vtc = ddx(texture_coordinate);
    float2 dy_vtc = ddy(texture_coordinate);
    float md = max(dot(dx_vtc, dx_vtc), dot(dy_vtc, dy_vtc));
    return 0.5 * log2(md);
}

The code might be very slightly faster, but it can result in slower overall rendering compared to the reference implementation due to always using a higher mip level, and thus putting more pressure on the memory bandwidth.

Though it should be noted that literally no common GPU on the planet actually uses the reference implementation exactly as it's written. AMD and Nvidia have used various approximations and optimizations over the years, the exact specifications of which are a secret. Also interesting: some Mali GPUs use something like your suggested method! But if you're trying to match the exact behavior of a specific GPU's hardware mip mapping calculations, you basically won't be able to without access to confidential documents or a lot of experimentation to reverse engineer what is being done. And you'll need to do that for every generation of GPU, because I've found they change every so often.

Also, on at least recent AMD GPUs, if you're using this to feed a tex2Dlod() or similar sample function, that's significantly slower than using tex2Dgrad() and passing in the derivatives, even when using 16x anisotropic filtering. Similarly, 16x anisotropic filtering on the last 5 years of Nvidia GPUs is nearly free, and using tex2D() with 16x anisotropic filtering is faster than tex2Dlod() regardless of the mip level implementation! And tex2Dgrad() is always slower than that, though not by a ton. It can be very non-obvious what is "faster" when it comes to texture sampling on GPUs.

Yeah, I actually made that function to have sharper textures on slanted surfaces while being able to use tex2Dlod, because I really NEED to be able to specify the mip map level manually since I'm doing realtime projection mapping and it generates ugly seams when the UVs change radically from one pixel to the next. The faster calculation was just a bonus.

You're right! In the end it's just better overall to find the difference between the current mip map level and your target mip map level using the vanilla method above, and then use that difference as a bias with tex2Dbias. I'll go ahead and delete that second method to avoid any confusion for anyone in the future.
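The bias arithmetic behind that approach is just a subtraction; a minimal Python sketch of the idea (the level values are hypothetical, and the actual sampling would happen in the shader via tex2Dbias):

```python
def mip_bias(hardware_level, target_level):
    # tex2Dbias adds the bias to the level the hardware would pick
    # on its own, so the bias needed to land on the target level is
    # simply the difference between the two.
    return target_level - hardware_level

# E.g. the vanilla calculation says the hardware would pick level 3,
# but we want to force sampling at level 1:
print(mip_bias(3.0, 1.0))  # → -2.0
```

This sidesteps the problem of exactly replicating the hardware's level: whatever approximation the GPU uses internally, the relative offset still lands near the intended level.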

It's still a useful option for some use cases, it's just not "faster". The bias technique is a decent one too, though again, it's impossible to perfectly replicate the mip level any particular GPU is going to produce from the derivatives, so this too is an approximation. I generally try to always use tex2Dgrad unless the performance implications are especially bad (as they can be on some mobile devices).

I first learned this technique from ATI's POM shader. I have a question for you: the texture's FilterMode does not seem to work as expected in any mode other than Point. Do you know how to get the expected behavior with Bilinear and Trilinear?

This is, in essence, a mathematical problem. Don't worry about it too much. This thread is a little old, and you don't need to care about it now. Ha ha ha