I am trying to make a smooth mesh look flat by flattening the normals of each polygon with this shader, but I'm getting errors. Can someone help out with the GLSL? (I would use surface shading, but I don't think the flat attribute exists there.) Code (CSharp):
Shader "Flattener" {
    SubShader {
        Pass {
            GLSLPROGRAM
            #extension GL_EXT_gpu_shader4 : require
            flat varying vec4 color;
            #ifdef VERTEX
            void main() {
                color = gl_Color;
                gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
            }
            #endif
            #ifdef FRAGMENT
            void main() {
                gl_FragColor = color; // set the output fragment color
            }
            #endif
            ENDGLSL
        }
    }
}
What do the errors you are getting say? If there is more than one, it is usually the top one that matters.
GLSL compilation failed:
ERROR: 0:7: '' : extension 'GL_EXT_gpu_shader4' is not supported
ERROR: 0:8: 'vec4' : syntax error syntax error
That extension not being supported seems very clear to me; I don't know what you need help with. (Did you try removing that extension line? I'm not sure what needs it, then again I really don't know much about GLSL.)
I'm pretty sure vec4 is not valid in Unity. Change it to half4 or float4. As for the other error, it looks like GL_EXT_gpu_shader4 simply isn't supported, either by Unity or your computer. Also, can you describe what you mean by "flat shaded"? Try selecting your model, and in the import settings change "Normals" from "Import" to "Calculate", then move the slider to 0. Is this what you want?
I believe flat was added in OpenGL 3.0, maybe 3.1, but Unity might be trying to compile to 2.0. You might try adding #version 130 or #version 140 to force OpenGL 3.0 or 3.1 respectively. I assume those work in Unity with GLSLPROGRAM blocks; I don't know for sure as I rarely do direct GLSL or HLSL programming within Unity. Alternatively you could try writing it as a normal Unity shader within a CGPROGRAM and use the nointerpolation qualifier along with #pragma target 4.0
THANKS bgolus! Just adding nointerpolation in front of the color v2f does the magic! Most of my shaders are surface shaders, though, and there nointerpolation gives a syntax error. Do you know what the surface shader equivalent is? Spoiler: magic flat shader Code (CSharp):
Shader "Diffuse With Shadows" {
    Properties {
        [NoScaleOffset] _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Pass {
            Tags {"LightMode"="ForwardBase"}
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            // compile shader into multiple variants, with and without shadows
            // (we don't care about any lightmaps yet, so skip these variants)
            #pragma multi_compile_fwdbase nolightmap nodirlightmap nodynlightmap novertexlight
            // shadow helper functions and macros
            #include "AutoLight.cginc"

            struct v2f {
                float2 uv : TEXCOORD0;
                SHADOW_COORDS(1) // put shadows data into TEXCOORD1
                nointerpolation fixed3 diff : COLOR0;
                nointerpolation fixed3 ambient : COLOR1;
                float4 pos : SV_POSITION;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord;
                half3 worldNormal = UnityObjectToWorldNormal(v.normal);
                half nl = max(0, dot(worldNormal, _WorldSpaceLightPos0.xyz));
                o.diff = nl * _LightColor0.rgb;
                o.ambient = ShadeSH9(half4(worldNormal,1));
                // compute shadows data
                TRANSFER_SHADOW(o)
                return o;
            }

            sampler2D _MainTex;

            fixed4 frag (v2f i) : SV_Target {
                fixed4 col = tex2D(_MainTex, i.uv);
                // compute shadow attenuation (1.0 = fully lit, 0.0 = fully shadowed)
                fixed shadow = SHADOW_ATTENUATION(i);
                // darken light's illumination with shadow, keep ambient intact
                fixed3 lighting = i.diff * shadow + i.ambient;
                col.rgb *= lighting;
                return col;
            }
            ENDCG
        }
        // shadow casting support
        UsePass "Legacy Shaders/VertexLit/SHADOWCASTER"
    }
}
I don't believe there's a way to do it with surface shaders. Instead you'll likely have to take the generated vert/frag shader and modify that.
Hello, can anyone explain what happens when you use the keyword "nointerpolation"? Is the entire triangle filled with the value calculated for the first vertex?
Unlit shaders are, well, unlit, sometimes erroneously referred to as "flat shading" or "flatly lit", but unlit shaders aren't lit or really "shaded" at all. The flat shading being discussed here, and specifically the GLSL "flat" and HLSL "nointerpolation", allows for faceted surface shading without having faceted model normals. What the OP is referring to is the "PolyWorld" style faceted surface shading, like this: The example shader at the start is just testing the GLSL "flat" attribute designation rather than an attempt at the final flat shaded result.
Does that mean it is possible to keep a low vertex count while having the flat style? A visually faceted cube (a custom one, not the Unity one) would still have 8 vertices instead of 24? I guess my real question behind it is more: how good is it performance-wise?
Yep, you can have every vertex welded, though there's a chance two polygons will end up with the same values if they share the same first vertex. Performance wise there should be zero performance difference vs the same smooth mesh with a normal shader, and plausibly though unlikely even a very (very, very) minor perf gain over traditional interpolated attributes. On very high poly objects it might even be noticeably faster than manually faceted surfaces. Unfortunately for the platforms that this would actually matter at these low poly counts, they don't support nointerpolation / flat (low end mobile) so it's kind of moot.
Thanks for the answer. It made me think though: if I'm correct, the smoothed triangles use an average of the vertex normals, weighted by distance, right? Isn't there a way to have a non-weighted average? It would take some calculations, but it would avoid the problem of multiple surfaces sharing the same first vertex. I looked for it a little but didn't find anything, so it might be a hardware restriction, and in any case I'm curious why.
By default, values passed from the vertex shader to the pixel shader use perspective-corrected barycentric interpolation, basically the average weighted by normalized distance as you said. This is done at the hardware level and there's no direct control over it, apart from whether the graphics API supports various modifiers like nointerpolation (what this thread is about), noperspective (if used on texture UVs it'll make them look warped like they did on the original PlayStation and early arcades), or centroid (which has to do with MSAA; it is not the center of the triangle). Those are the only options and there's no way to change them or extend them.

You can however work around them. Neither the vertex shader nor the fragment shader has access to the data you need for this, as the vertex shader only knows the data of that single vertex, and the fragment shader only knows the final interpolated value. Geometry shaders are able to get all of the data from the 3 vertices of a triangle at once, and you can modify the values that eventually get interpolated. In that way you could take all 3 vertex normals, average them, and then spit out a new triangle to replace the original one with unique vertices all using the same normal. Even better, you don't actually need normals on your mesh at all at that point; you can calculate the actual normal of the surface and use that.

However, on old hardware geometry shaders either aren't supported or are really slow, such that it's possible a pre-faceted mesh will be noticeably faster. On newer hardware it'll be a bit of a wash, but the pre-faceted mesh will likely still be faster. The nointerpolation approach should be faster than either of them.
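The geometry shader approach described above can be sketched like this. This is a minimal, hedged Unity-style Cg/HLSL fragment, not code from this thread: the struct names (v2g, g2f) and the worldPos interpolator are illustrative assumptions, and you'd still need the surrounding pass, vertex, and fragment stages.

```hlsl
// Sketch: a pass-through geometry shader that replaces per-vertex
// normals with the true face normal, so the triangle shades flat.
struct v2g {
    float4 pos : SV_POSITION;
    float3 worldPos : TEXCOORD0; // assumed to be filled in by the vertex shader
};

struct g2f {
    float4 pos : SV_POSITION;
    float3 normal : TEXCOORD0;
};

[maxvertexcount(3)]
void geom (triangle v2g i[3], inout TriangleStream<g2f> stream) {
    // face normal from the triangle's edges; no mesh normals needed
    float3 faceNormal = normalize(cross(
        i[1].worldPos - i[0].worldPos,
        i[2].worldPos - i[0].worldPos));

    [unroll]
    for (int v = 0; v < 3; v++) {
        g2f o;
        o.pos = i[v].pos;
        o.normal = faceNormal; // same value for all three verts
        stream.Append(o);
    }
}
```

Since all three output vertices carry the same normal, the interpolated value is constant across the triangle and no interpolation qualifier is required.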
I am sorry to jump on this old thread, but @bgolus could you let me know if I understand your explanation correctly? There are currently 3 ways to do faceted "flat shading":
1) split vertices (aka pre-faceted, through model import or duplicating vertices during procedural generation)
2) geometry shader (GLSL, requires "#pragma target 4.0")
2.1) through the interpolation qualifier "flat" (using one of the vertices)
2.2) using all 3 vertices and calculating the normal manually
3) interpolation modifier (HLSL, requires "#pragma target 4.0")
3.1) through nointerpolation (using one of the vertices)
If I understand correctly, option 1 is the only way for iOS (which uses OpenGL ES 3.0, thus doesn't support geometry shaders, and the "#pragma target 4.0" requirement blocks Metal). Am I right? Just trying to lay it out there, so others from a Google search don't have to repeat mine...
You're mostly accurate.
1 - works on every device and target.
2 - requires #pragma target 4.0 (DX11, OpenGL 3.2, OpenGL ES 3.1+AEP aka recent Android phones, and recent consoles).
2.1 - we'll get back to this one, but basically no.
2.2 - yep.
3.0 & 3.1 - should work with #pragma target 3.5 (DX11, OpenGL 3.2, OpenGL ES 3.0, Metal; basically the same as target 4.0 but without geometry shaders, so it includes Metal and ES 3.0).

The interpolation qualifiers flat and nointerpolation are the same thing; the difference is one is used by OpenGL and the other by DirectX. Since Unity's ShaderLab is Cg/HLSL based we use the nointerpolation qualifier. When Unity does its magic to translate the shader to the other languages it should be capable of converting the HLSL "nointerpolation" to "flat" for OpenGL, OpenGL ES, and Metal. However Unity's converters/translators aren't always complete or perfect and might be missing some features. In that case you could write the GLSL directly and use the "flat" qualifier yourself. For iOS Metal you might have to put in a bug report if it doesn't work.

Now let's go back to point "2.1" more directly. When data is passed from the vertex shader to the geometry shader there is no interpolation, so no interpolation qualifier is needed, and if one exists it'll just be ignored. When outputting a tri from the geometry shader you could use an interpolation qualifier, but in this case there's no point since you can just set the same normal for all three vertices. That normal could be the first vertex's normal, which would emulate nointerpolation, or the average of the three vertices, which is nice and cheap, or the actual surface's normal, which is a little more expensive to calculate but there are some perf savings from no longer needing to have normals on the model itself.
Now to answer your last question directly, though it's answered in the wall of text above, #pragma target 3.5 should allow for option 3 on iOS, but it might not just because of Unity not implementing it, but option 1 is guaranteed to work on everything.
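For anyone skimming, the nointerpolation option boils down to a one-qualifier change plus the target pragma. A hedged sketch of just the relevant fragment (the struct and field names are placeholders, not code from this thread):

```hlsl
// Sketch: flat shading via nointerpolation, needs shader model 3.5+
#pragma target 3.5

struct v2f {
    float4 pos : SV_POSITION;
    // no interpolation: every fragment of the triangle gets the
    // provoking vertex's value unchanged
    nointerpolation fixed3 diff : COLOR0;
};
```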
Just a quick follow up, as @bgolus suspected, "nointerpolation" is not compiling for Metal API (fallbacks to OpenGL), so I added a feedback. But I have no vote left, so if anyone is interested in this, please help to vote https://feedback.unity3d.com/suggestions/nointerpolation-keyword-doesnt-work-with-osx-metal-api
Hmmm... I've spent much time trying to do stuff related to this flat shading, and just stumbled upon this forum post =] My tree model went from 331 verts to 925 verts when I made it flat through import settings @_@ ... And it's a mobile game sooooo... Is there any way I could achieve flat shading on OpenGL ES 2 devices through a shader? I don't need any shadows or any lights. Is it impossible to achieve a faceted look on OpenGL ES 2.0 devices without bumping up the vert count? *me sobbing* I could have 3 trees instead of 1 if I could do the flat shading through a shader instead of making the model faceted
Nope! Well, maybe. It's possible that some GLES 2.0 devices that have support for GL_OES_standard_derivatives could use the derivatives technique to calculate the normal, but this isn't universally supported. Unity used to have a way to explicitly request OpenGL extensions in shaders, but I don't know if it works anymore, or if it's needed. You can try this shader and see if it runs at all: https://forum.unity.com/threads/flat-lighting-without-separate-smoothing-groups.280183/#post-3696988
Thanks a lot for your reply =] My test results are... Works perfectly on OpenGL ES 3 devices, but not on ES 2 ones ^^' I believe my device doesn't have support for GL_OES_standard_derivatives, and there must be other devices like mine too, and I would lose those devices :/ ... Pics attached =] The good one is from my ES 3 device, the bad one from my ES 2 ^
Also there is this article I read a while ago, and am reading again right now. Section 1.1 writes about Derivative Instructions, and 1.2 about Geometry Shaders: https://catlikecoding.com/unity/tutorials/advanced-rendering/flat-and-wireframe-shading/ (I'm sorry I'm making you read for me, you must have much work to do....)
I'm familiar with that article. As discussed above, the problem is OpenGL ES 2.0 doesn't support any of the features needed to do flat shading purely in a shader. The only universal solution is the mesh based one.
I wonder if you could bake out a custom normal map that flattens the normals. That may not be the most performant option, but it should work on most platforms and might be a win for high vertex count objects where the extra split verts would require more memory than the normal maps. The edges might not look perfect due to filtering. It would be free if you are using normal maps anyway.
Indeed, using normal maps to get something like flat shading is possible, but the edges between faces will always be slightly rounded unless you're using a very high resolution normal map, or are insetting the UVs on each face, at which point you're creating seams and thus unique vertices anyway. You'll also never quite get perfectly flat faces, they'll always have a little bit of lumpiness to them from the lack of accuracy in 8 bpc normal maps.
Just a quick update on this thread since I always seem to bump into it again. A lot of Android devices support GL 3.0 and above now: https://developer.android.com/about/dashboards/index.html#OpenGL So using the "nointerpolation" keyword seems to be kind of OK now.

I do some terrain chunk generation for mobile and used vertex splitting for flat shading. Each triangle uses 3 separate vertices in this approach. Per chunk I got: 64x64 quads = 8192 triangles = 24576 vertices. This is my main bottleneck. I also can't do LOD switching within these view distances.

Now, with nointerpolation I can use triangle strips. Strips, because I need at least 1 dedicated vertex per triangle to store the modified normal. This yields about 8k vertices per chunk, a threefold reduction. Spoiler: stripnormals Code (CSharp):
private void RecalculateStripNormals() {
    // provoking vertices that need their normal set:
    // ▼__▼__▼__▼__
    // | /| /| /| /|
    // |/_|/_|/_|/_|
    // ▲  ▲  ▲  ▲
    //
    // ▼__▼__▼__▼__
    // | /| /|\ | /|
    // |/_|/_|_\|/_|
    // ▲  ▲  ▲  ▲
    for (int vi = 0; vi < stripTriangles.Length; vi += 3) {
        stripNormals[stripTriangles[vi + 0]] = Normal(
            ref stripVertices[stripTriangles[vi + 0]],
            ref stripVertices[stripTriangles[vi + 1]],
            ref stripVertices[stripTriangles[vi + 2]]);
    }
    mesh.normals = stripNormals;
}
The next step would be the flat keyword in geometry shaders, supported in GL 3.2 upwards, but I have not considered it yet. https://www.khronos.org/registry/OpenGL/specs/es/3.2/GLSL_ES_Specification_3.20.pdf
Once you get to OpenGLES 3.0 you can use derivatives, which works on any arbitrary mesh with no preprocessing and is way cheaper than geometry shaders.
Exactly the kind of response I was hoping to get! So I can get away with a normal shared mesh AND save even more performance? Let's look this up... OK, looks like a longer tinker session for me. The ddx/ddy stuff is done in the fragment shader. I also need to combine this with my vertex-based radial fog distance, otherwise I get this rotating fog wall again... Thanks for the info Ben!
I came back from some testing. Once I use ddx & ddy (= derivatives) in the fragment shader, my performance still takes too much of a hit (60fps -> 40fps), even if I do nothing else there and just use the normal as color. So currently the vertex strip with nointerpolation seems to be the fastest way for mobile flat shading. But I am by no means a shader guy, so I am open to any suggestions in this matter. But with nointerpolation I've got another problem now. Somehow the provoking vertex seems to switch on Android compared to the Editor, as if some mesh optimization is done on the smartphone that changes the vertex order in triangles[].
Ouch. Mobile GPU performance is always a bit of a crapshoot, but that is way worse than I would have expected. There’s a bunch of build time mesh optimization settings that might be causing issues. Try looking for those and disabling as many as you can find.
I produce the meshes at runtime. Maybe this is a shader issue; can I somehow get the Editor to show shaders as they would look on an OpenGL ES 3 target? The Editor looks fine, but as soon as I put this thing on smartphone(s) the triangles use the wrong normals. Android must be performing something on the mesh, or is unable to use nointerpolation, because I see the unused zero-normals that I generate in the mesh strips. Same mesh in: Inspector with some legacy specular shader / with the nointerpolation shader (has correct lighting). This is my shader btw. Spoiler: Frankenstein's Shader Code (CSharp):
Shader "InfinityTerrain/VertexLitEDITOPTIMIZED" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" { }
    }
    SubShader {
        LOD 80
        Tags { "RenderType"="Opaque" }
        Pass {
            Tags { "LIGHTMODE"="Vertex" "RenderType"="Opaque" }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            //#pragma target 2.0
            #pragma target 3.0
            #include "UnityCG.cginc"
            #pragma multi_compile_fog
            #define USING_FOG (defined(FOG_LINEAR) || defined(FOG_EXP) || defined(FOG_EXP2))

            // ES2.0/WebGL/3DS can not do loops with non-constant-expression iteration counts :(
            #if defined(SHADER_API_GLES)
                #define LIGHT_LOOP_LIMIT 8
            #elif defined(SHADER_API_N3DS)
                #define LIGHT_LOOP_LIMIT 4
            #else
                #define LIGHT_LOOP_LIMIT unity_VertexLightParams.x
            #endif
            // Some ES3 drivers (e.g. older Adreno) have problems with the light loop
            #if defined(SHADER_API_GLES3) && !defined(SHADER_API_DESKTOP) && (defined(SPOT) || defined(POINT))
                #define LIGHT_LOOP_ATTRIBUTE UNITY_UNROLL
            #else
                #define LIGHT_LOOP_ATTRIBUTE
            #endif
            #define ENABLE_SPECULAR (!defined(SHADER_API_N3DS))

            // Compile specialized variants for when positional (point/spot) and spot lights are present
            #pragma multi_compile __ POINT SPOT

            // Compute illumination from one light, given attenuation
            //fixed3 computeLighting (int idx, fixed3 dirToLight, fixed3 eyeNormal, fixed3 viewDir, fixed4 diffuseColor, fixed shininess, fixed atten, inout fixed3 specColor) {
            fixed3 computeLighting(int idx, fixed3 dirToLight, fixed3 eyeNormal, fixed3 viewDir, fixed4 diffuseColor, fixed atten) {
                fixed NdotL = max(dot(eyeNormal, dirToLight), 0.0);
                // diffuse
                fixed3 color = NdotL * diffuseColor.rgb * unity_LightColor[idx].rgb;
                return color * atten;
            }

            // Compute attenuation & illumination from one light
            //fixed3 computeOneLight(int idx, float3 eyePosition, fixed3 eyeNormal, fixed3 viewDir, fixed4 diffuseColor, fixed shininess, inout fixed3 specColor) {
            fixed3 computeOneLight(int idx, float3 eyePosition, fixed3 eyeNormal, fixed3 viewDir, fixed4 diffuseColor) {
                float3 dirToLight = unity_LightPosition[idx].xyz;
                fixed att = 1.0;
                #if defined(POINT) || defined(SPOT)
                    dirToLight -= eyePosition * unity_LightPosition[idx].w;
                    // distance attenuation
                    float distSqr = dot(dirToLight, dirToLight);
                    att /= (1.0 + unity_LightAtten[idx].z * distSqr);
                    if (unity_LightPosition[idx].w != 0 && distSqr > unity_LightAtten[idx].w)
                        att = 0.0; // set to 0 if outside of range
                    distSqr = max(distSqr, 0.000001); // don't produce NaNs if some vertex position overlaps with the light
                    dirToLight *= rsqrt(distSqr);
                    #if defined(SPOT)
                        // spot angle attenuation
                        fixed rho = max(dot(dirToLight, unity_SpotDirection[idx].xyz), 0.0);
                        fixed spotAtt = (rho - unity_LightAtten[idx].x) * unity_LightAtten[idx].y;
                        att *= saturate(spotAtt);
                    #endif
                #endif
                att *= 0.5; // passed in light colors are 2x brighter than what used to be in FFP
                //return min (computeLighting (idx, dirToLight, eyeNormal, viewDir, diffuseColor, shininess, att, specColor), 1.0);
                return min(computeLighting(idx, dirToLight, eyeNormal, viewDir, diffuseColor, att), 1.0);
            }

            // uniforms
            int4 unity_VertexLightParams; // x: light count, y: zero, z: one (y/z needed by d3d9 vs loop instruction)
            float4 _MainTex_ST;

            // vertex shader input data
            struct appdata {
                float3 pos : POSITION;
                float3 normal : NORMAL;
                float3 uv0 : TEXCOORD0;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            // vertex-to-fragment interpolators
            struct v2f {
                nointerpolation fixed4 color : COLOR0;
                float2 uv0 : TEXCOORD0;
                #if USING_FOG
                    fixed fog : TEXCOORD1;
                #endif
                float4 pos : SV_POSITION;
                UNITY_VERTEX_OUTPUT_STEREO
            };

            // vertex shader
            v2f vert (appdata IN) {
                v2f o;
                UNITY_SETUP_INSTANCE_ID(IN);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
                fixed4 color = fixed4(0,0,0,1.1);
                float3 eyePos = UnityObjectToViewPos(IN.pos); //float3 eyePos = mul (UNITY_MATRIX_MV, float4(IN.pos,1)).xyz;
                fixed3 eyeNormal = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, IN.normal).xyz);
                fixed3 viewDir = 0.0;

                // lighting
                fixed3 lcolor = fixed4(0,0,0,1).rgb + fixed4(1,1,1,1).rgb * glstate_lightmodel_ambient.rgb;
                fixed3 specColor = 0.0;
                fixed shininess = 0 * 128.0;
                LIGHT_LOOP_ATTRIBUTE for (int il = 0; il < LIGHT_LOOP_LIMIT; ++il) {
                    //lcolor += computeOneLight(il, eyePos, eyeNormal, viewDir, fixed4(1,1,1,1), shininess, specColor);
                    lcolor += computeOneLight(il, eyePos, eyeNormal, viewDir, fixed4(1,1,1,1));
                }
                color.rgb = lcolor.rgb;
                color.a = fixed4(1,1,1,1).a;
                o.color = saturate(color);

                // compute texture coordinates
                o.uv0 = IN.uv0.xy * _MainTex_ST.xy + _MainTex_ST.zw;

                // fog
                #if USING_FOG
                    float fogCoord = length(eyePos.xyz); // radial fog distance
                    UNITY_CALC_FOG_FACTOR_RAW(fogCoord);
                    o.fog = saturate(unityFogFactor);
                #endif

                // transform position
                o.pos = UnityObjectToClipPos(IN.pos);
                return o;
            }

            // textures
            sampler2D _MainTex;

            // fragment shader
            fixed4 frag (v2f IN) : SV_Target {
                fixed4 col;
                fixed4 tex, tmp0, tmp1, tmp2;
                // SetTexture #0
                tex = tex2D(_MainTex, IN.uv0.xy);
                col.rgb = tex * IN.color;
                col *= 2;
                col.a = fixed4(0,0,0,0).a;
                // fog
                #if USING_FOG
                    col.rgb = lerp(unity_FogColor.rgb, col.rgb, IN.fog);
                #endif
                return col;
            }
            // texenvs
            //! TexEnv0: 02010103 01060004 [_MainTex]
            ENDCG
        }
    }
}
I’d be more interested in how you’re generating the mesh. The shader for no interpolation stuff is relatively uninteresting, and what vertex is the ”primary” is totally up to the GPU (and should be defined by the API) and not something you can do anything about in the shader. I’d assume you’re right that it’s something with how you’re generating the mesh and the platform differences there, but honestly I have no idea.
You were right! This was it. I thought I had already tested this by cycling through the triangle rotations, BUT I still wrote the normal into the first vertex of any given triangle every time. Now I added a button to cycle which vertex the normal gets written into, and: Android seems to use the last vertex as the provoking vertex.

But can we be sure that this is the same for all devices? The docs say you can choose the index, but where would I put that? "void glProvokingVertex(GLenum provokeMode);" https://www.khronos.org/opengl/wiki/Primitive#Provoking_vertex

Anyway, thanks for your continued help in these shader topics, Ben. To conclude what we found: Flat shading 2019:

For mobile (OpenGL ES 3.0 or greater):
Option 1: vertex splitting: if you use vertex shaders and manage to keep verts below 200k, you can hold 60fps on mid-end devices (e.g. my Samsung A3 2017).
Option 2: nointerpolation: fastest, because you use one third of the vertices compared to the above. But you need one dedicated vertex in each triangle that stores the normal for that triangle only, so it requires mesh tinkering. Spoiler: setting the normal if you have a mesh Code (CSharp):
public static int stripProvokingVertex = 2; // vertex 0, 1 or 2 of the triangle

public void RecalculateStripNormals() {
    // provoking vertices that need their normal set:
    // ▼__▼__▼__▼__
    // | /| /| /| /|
    // |/_|/_|/_|/_|
    // ▲  ▲  ▲  ▲
    //
    // ▼__▼__▼__▼__
    // | /| /|\ | /|
    // |/_|/_|_\|/_|
    // ▲  ▲  ▲  ▲
    for (int vi = 0; vi < stripTriangles.Length; vi += 3) {
        stripNormals[stripTriangles[vi + stripProvokingVertex]] = Normal(
            ref stripVertices[stripTriangles[vi + 0]],
            ref stripVertices[stripTriangles[vi + 1]],
            ref stripVertices[stripTriangles[vi + 2]]);
    }
    mesh.normals = stripNormals;
}

For desktop: use ddx/ddy; it works with any mesh, and you are most likely doing stuff in the fragment shader already. Spoiler: getting the normal in the fragment shader:
fixed3 posddx = ddx(IN.posWorld.xyz);
fixed3 posddy = ddy(IN.posWorld.xyz);
fixed3 derivedNormal = cross(normalize(posddx), normalize(posddy));
We use nointerpolation and set the desktop graphics API to GLES 3.2; that way vertex 2 is always the provoking vertex. Unfortunately, we switched to Metal for iOS, so now we modify our "Flat" preprocessor to use either DX style (vertex #1 is provoking) or GL style (vertex #2 is provoking) depending on the platform. It is a bit of a pain, but the performance savings are great. Flat shading is even cheaper than smooth shading in many cases because we get fewer split verts. iOS and Vulkan use DX style. You can actually change the provoking vertex in Vulkan, GLES, and Metal, but I was unable to get that to work in Unity. It may be an extension that is not always supported. A problem is that we would need 2 versions of meshes if we wanted to support Vulkan and GLES on Android. Fortunately, Vulkan appears to be too buggy to use so we don't have to worry about that yet. Android AAB's (or whatever they are called) should be the solution to that problem.
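As an illustration of that kind of platform-dependent handling, here is a hedged C# sketch. FlatShadingPreprocessor, ProvokingVertexIndex, and the per-API index choices are assumptions to verify on your own target devices, not a Unity API or the poster's actual code; SystemInfo.graphicsDeviceType is the only real Unity call used.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class FlatShadingPreprocessor
{
    // Hypothetical helper: which vertex of each triangle the current
    // graphics API treats as the provoking vertex. 0 = first vertex
    // (DX/Metal/Vulkan style), 2 = last vertex (GL style). The thread
    // above reports these conventions can differ per device, so verify.
    public static int ProvokingVertexIndex()
    {
        switch (SystemInfo.graphicsDeviceType)
        {
            case GraphicsDeviceType.OpenGLES3:
            case GraphicsDeviceType.OpenGLCore:
                return 2; // GL convention: last vertex provokes
            default:
                return 0; // D3D/Metal/Vulkan convention: first vertex
        }
    }

    // Write each triangle's face normal into its provoking vertex,
    // leaving the other two normals untouched (they're never read
    // when the shader uses nointerpolation).
    public static void WriteFlatNormals(int[] triangles, Vector3[] vertices, Vector3[] normals)
    {
        int provoking = ProvokingVertexIndex();
        for (int i = 0; i < triangles.Length; i += 3)
        {
            Vector3 faceNormal = Vector3.Cross(
                vertices[triangles[i + 1]] - vertices[triangles[i + 0]],
                vertices[triangles[i + 2]] - vertices[triangles[i + 0]]).normalized;
            normals[triangles[i + provoking]] = faceNormal;
        }
    }
}
```

With this shape you only need one mesh per build target, regenerating normals once at load time rather than shipping two versions of every mesh.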
I'm looking to do something similar on my project. How does your preprocessor work? Is it something you can share? I'm considering writing a Blender plugin, but it feels like reinventing the wheel. Blender 2.82 has a nice "Weighted Normal" modifier, but that can introduce undesirable artifacts. I have to modify thousands of models, so automation will be necessary. Ideally a preprocessor would look at Edge Splits and merge their vertices while preserving their Face normals. Preserving Quads during model import introduces UV issues, but it would be nice if a preprocessor assigned normals based on Quads...