
New SVG importer pixelation (vectorgraphics package)?

Discussion in '2D' started by chiefartificer, Jun 26, 2018.

  1. chiefartificer

    chiefartificer

    Joined:
    Jun 13, 2018
    Posts:
    5
    I am trying the new SVG importer (vectorgraphics); however, my images come out very pixelated. For example, I imported the following SVG:

    https://image.flaticon.com/icons/svg/145/145864.svg

    And this is what I get with a default import:



    I tried changing the tessellation to Advanced and different combinations of step distance and sampling with virtually no improvement. According to the advertisement I shouldn't be getting this much pixelation. I am using the same drag-and-drop import demonstrated at the following link:

    https://forum.unity.com/threads/vector-graphics-preview-package.529845/
     
    Last edited: Jun 28, 2018
  2. tertle

    tertle

    Joined:
    Jan 25, 2011
    Posts:
    3,761
    You have the scale set to 2x in the Game view, and unless something has changed, that scale just blows up the image for a very poor preview.

    Scale it in the actual game.
     
  3. chiefartificer

    chiefartificer

    Joined:
    Jun 13, 2018
    Posts:
    5
    You were right sir! After building it looks way better. It isn't perfect as a pure SVG but way better than the preview!
     
  4. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    The result looks very aliased. The SVG importer relies on MSAA for anti-aliasing. You should try to turn it on (in Edit > Project Settings > Quality).
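
    If you prefer to set it from script, this is the equivalent of that Quality setting (a minimal sketch, built-in render pipeline assumed):

    Code (CSharp):
    using UnityEngine;

    public class EnableMsaa : MonoBehaviour
    {
        void Awake()
        {
            // 0 = off, 2/4/8 = MSAA samples per pixel; mirrors the
            // Anti Aliasing dropdown in Edit > Project Settings > Quality.
            QualitySettings.antiAliasing = 4;
        }
    }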
     
  5. chiefartificer

    chiefartificer

    Joined:
    Jun 13, 2018
    Posts:
    5
    I appreciate the recommendation. This is the result after building for PC with 2x MSAA and Ultra quality. Any other suggestions, or is this the best result I should expect? By the way, in terms of game performance, should I use raster images, or does the vector graphics importer provide similar speed?

     
  6. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    You will get better results with more samples per pixel (4x or 8x), but this requires more GPU memory, so you will have to evaluate if this is acceptable for your project.

    In terms of performance, simple colored SVG sprites should perform very similarly to normal sprites. SVG sprites that contain textures and/or gradients are a bit more expensive to render. If you don't really need the "infinite" resolution of the SVG sprites, you may use normal sprites instead. Some users made simple tools to render SVG sprites into textures and rely on normal sprites afterward. We have a method to help with that: VectorUtils.RenderSpriteToTexture2D(). That said, it's always a good idea to use the profiler to see if SVG sprites are causing performance issues.
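
    Something along these lines should work (a rough sketch; the exact parameter list and shader name may differ depending on the package version you have installed):

    Code (CSharp):
    using UnityEngine;
    using Unity.VectorGraphics;

    public class BakeSvgToTexture : MonoBehaviour
    {
        public Sprite svgSprite;   // the imported SVG sprite
        public int width = 256;
        public int height = 256;

        void Start()
        {
            // Material used to rasterize the sprite; swap in whatever shader
            // your SVG sprites actually use if "Unlit/Vector" isn't present.
            var mat = new Material(Shader.Find("Unlit/Vector"));

            // Rasterize once, then treat the result like any regular texture.
            // The last argument is the anti-aliasing sample count.
            Texture2D tex = VectorUtils.RenderSpriteToTexture2D(
                svgSprite, width, height, mat, 8);

            GetComponent<Renderer>().material.mainTexture = tex;
        }
    }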

    I hope this will help! :)
     
  7. wbahnassi_unity

    wbahnassi_unity

    Unity Technologies

    Joined:
    Mar 12, 2018
    Posts:
    28
    From the looks of the image, there is no MSAA active at all (not even 2x). Double-check the quality settings as well as the camera settings to allow MSAA on the scene. It should look better than this.
     
  8. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    While I can see how this would work well, I really think Unity is missing a dynamic antialiasing method for vector graphics right now. Especially when you consider the previous state of the art was the likes of RageSpline etc., where you can set the thickness of an outline "edge" which uses a thin texture to produce an antialiasing effect. I recognize, though, that this means when the graphic or camera is scaled, the thickness of that outline is compromised, and it would either have to be regenerated dynamically all the time, or the shader would have to do some trickery based on a scale factor/camera distance etc. to generate it dynamically. Unless the game doesn't do any scaling, of course.

    It's also possible to compute and render curved antialiased edges in a shader, on a per-triangle basis, if enough info is passed to the shader. So maybe you need a dedicated curve-rendering shader?

    Switching on full-screen antialiasing with tons of extra samples is probably a big drag on performance without giving truly ideal 256-level antialiasing smoothness.

    Is something like this in the works?
     
  9. wbahnassi_unity

    wbahnassi_unity

    Unity Technologies

    Joined:
    Mar 12, 2018
    Posts:
    28
    Valid points indeed. AA without MSAA is something we'd like to tackle too. Of course, without the additional samples you can never get back pixels that fell outside of rasterization. Personally I wouldn't bet that a cheap software AA can beat HW 4xMSAA quality. Probably it will be a tradeoff of some sort where (as you mentioned) you need to pack more data in your vertices to fuel the SW AA shader, which also takes memory and performance. As more vertices are included in the scene, the cost will increase and probably hit a breaking point where HW MSAA would become the faster and tighter-in-memory path.
    Just wanted to shed some light on the fact that a potential SW AA solution might not be the magic bullet people hope for ☺
     
  10. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Yes, it would likely double the number of triangles, and there is the problem of scaling. But it does look good. Another approach is to actually compute curves in the shader across a triangle, with perfect antialiasing. That said, Microsoft filed a patent on that method, so I'm not sure you could use it. Doesn't MSAA quadruple the amount of pixel memory when you go to 2x, and then basically 16x more when you go to 4x? It's not really as perfect as it could be though, particularly on edges close to vertical/horizontal.

    What do you think about possibly using some kind of blur operation across the whole screen?
     
  11. wbahnassi_unity

    wbahnassi_unity

    Unity Technologies

    Joined:
    Mar 12, 2018
    Posts:
    28
    2xMSAA = 2 samples per pixel instead of 1 sample, 4xMSAA = 4 samples instead of 1 sample, so the multiplier is also the memory size multiplier. A 2xMSAA surface takes double the memory amount of a non-MSAA surface... etc.
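
    As a rough back-of-the-envelope for the colour buffer only (C# just for the arithmetic; this ignores depth/stencil and driver overhead):

    Code (CSharp):
    using UnityEngine;

    public static class MsaaMemoryEstimate
    {
        // Rough colour-buffer cost of an RGBA8 target with MSAA.
        public static float Megabytes(int width, int height, int samples)
        {
            const int bytesPerPixel = 4; // RGBA8
            return width * (long)height * samples * bytesPerPixel / (1024f * 1024f);
        }
    }
    // 1920x1080: no MSAA ~7.9 MB, 2x ~15.8 MB, 4x ~31.6 MB.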

    This can work for certain cases where the art is mainly flat (no texture details), otherwise a naïve blur will smudge the details along. Also, it can never restore pixels that escaped rasterization, so it might look ok on a static image, but if the camera starts panning for example, shimmering artifacts due to undersampling will still occur.

    Fighting undersampling in computer graphics is never an easy task. MSAA happens to be an OK approach for now because of the HW support, but other techniques might work well for certain cases too. It's a matter of choosing your battle: memory vs perf vs quality vs detail. The more high-level knowledge you have about the final scene, the better educated your decision will be. Personally, I'd love to play a bit with some SW AA for vector graphics, but I know already I won't be able to achieve something that pleases all needs. It will just be "one more option" that people can choose when trying to address undersampling in SVG.
     
  12. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Another option of course is to run it on a high-DPI Retina display like an iOS device or Retina Mac, so that you can barely see the pixels anyway.

    Would something like temporal antialiasing work as well?
     
  13. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    I suppose someone could painstakingly implement the outline antialiasing geometry themselves as part of the SVG image. *shudder*
     
  14. wbahnassi_unity

    wbahnassi_unity

    Unity Technologies

    Joined:
    Mar 12, 2018
    Posts:
    28
    It sure helps. This balances quality & detail over memory and performance, which certain targets can take (e.g. PC and PS4) but doesn't sound like a good option for limited phone devices.
     
  15. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    When the geometry is being rendered as triangles, is there some way in a shader perhaps to calculate the 'angle' of the slope of the side of the triangle, and then render that edge antialiased at a 1 pixel width no matter the scale? I guess you'd have to pass in all 3 sets of vertices to every vertex.

    Another thought based on that would be to store the normals of the angle of the "edges" in some output buffer, then run that through a shader that antialiased based on the edge angles.
     
  16. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    That's interesting. In a future where every device has a high-DPI display, maybe sub-pixel antialiasing will be a thing of the past!

    This is something @wbahnassi_unity and I were thinking about. This has a few downsides, as each vertex becomes heavier with the extra data, as you mentioned. Also, we may need extra space to compute the antialiased edges. We could add extra geometry around the triangles or enable conservative rasterization, both of which have downsides as well. But this is one of the approaches we are exploring. :)
     
  17. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    At least you're not using an "accumulation buffer" to jitter the position of the output over several frames to try to make it look antialiased ;-)

    I wouldn't mind seeing the extra geometry outlines around the edges of shapes, maybe as an option that can be switched on/off for those who want the quality. It doesn't add a lot of extra fill rate, mostly more mesh data. Maybe a vertex shader could adjust the thickness of the extra quads to match the zoom factor.

    Or... you could convert all straight-edged triangles into spline data and render the whole thing antialiased inside each triangle, so that even when you zoom in there are no straight edges seen. Somewhat more overdraw though, I guess.
     
  18. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Re: high DPI, yes, I think that's Apple's intent: to get rid of "the pixel" by making pixels so small you can't see them, so that you don't even need to antialias anything... though it's much the same as a full-screen multi-sample, just showing the hidden buffer on the display rather than blending groups of pixels together.
     
  19. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    OR hey... an option to convert the geometry to a texture made from a signed distance field, so that it always looks crisp and antialiased at almost any size :)
     
  20. samth

    samth

    Joined:
    Aug 9, 2015
    Posts:
    3
    On your camera make sure "Allow MSAA" is checked.
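
    Or from script, if you prefer (the camera flag only takes effect if MSAA is also enabled in the Quality settings):

    Code (CSharp):
    using UnityEngine;

    public class AllowCameraMsaa : MonoBehaviour
    {
        void Awake()
        {
            // Same as ticking "Allow MSAA" on the Camera component.
            GetComponent<Camera>().allowMSAA = true;
        }
    }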
     
  21. Harry_Jack

    Harry_Jack

    Joined:
    Jul 18, 2018
    Posts:
    8
    Can we read and write the SVG image for a drawing/colouring app? If yes, then how, and if no, then why?
     
  22. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    You can easily read an SVG file, but we don't provide any "export to svg" functionality. Is that what you were looking for? I think it would be feasible to write an exporter that takes a Scene object and outputs a text svg file, since the scene representation is roughly translatable to SVG. That said, you may lose some meta information that can't be stored in the scene object (XML IDs, comments, etc.).
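
    For the "read" part, something like this is possible at runtime (a sketch; the API names are from the com.unity.vectorgraphics preview package, so double-check them against the version you have installed):

    Code (CSharp):
    using System.IO;
    using System.Collections.Generic;
    using UnityEngine;
    using Unity.VectorGraphics;

    public class RuntimeSvgLoader : MonoBehaviour
    {
        [TextArea] public string svgText; // paste the raw SVG markup here

        void Start()
        {
            // Parse the SVG text into the package's Scene representation.
            var sceneInfo = SVGParser.ImportSVG(new StringReader(svgText));

            // Tessellate the scene and build a Sprite out of it.
            var options = new VectorUtils.TessellationOptions
            {
                StepDistance = 10.0f,
                MaxCordDeviation = 0.5f,
                MaxTanAngleDeviation = 0.1f,
                SamplingStepSize = 0.01f
            };
            List<VectorUtils.Geometry> geoms =
                VectorUtils.TessellateScene(sceneInfo.Scene, options);
            Sprite sprite = VectorUtils.BuildSprite(
                geoms, 100.0f, VectorUtils.Alignment.Center, Vector2.zero,
                128, true); // flip Y: SVG's Y axis points down

            GetComponent<SpriteRenderer>().sprite = sprite;
        }
    }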
     
  23. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    I was just thinking... if you render the SVG geometry to a large render texture, you could then run it through a shader to do some kind of custom super-sampling to produce a smoother output. The render texture would have to be quite large and be supported on the hardware though, and it's questionable trying to go beyond the 2048..4096 max texture size. It would be similar to MSAA I suppose, and maybe slower since it's running as a 'program' on the GPU rather than some dedicated MSAA hardware functionality.

    Any progress on any other options for svg antialiasing, and is unity's SVG support out of preview now?

    I seem to recall in OpenGL there used to be a function where you could switch on antialiasing of triangle edges. I guess if only the edges need antialiasing applied, maybe the edges could be output to some kind of temporary outline buffer (similar to if rendering in wireframe), and then very precise supersampling calculated just for those pixels. Or maybe in the shader it can detect "edge pixels" and do extra calculations there to smooth it. Sort of a localized MSAA that only applies to the actual edge pixels not to the entire buffer.
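
    Just to illustrate the render-texture idea, a crude sketch (assuming a hypothetical 2x scale and letting plain bilinear filtering do the downsample instead of a proper custom shader):

    Code (CSharp):
    using UnityEngine;

    // Renders the camera into an oversized RenderTexture and lets the final
    // draw downsample it (a crude form of supersampling). Recreate the
    // texture if the screen resolution changes.
    [RequireComponent(typeof(Camera))]
    public class SupersampleCamera : MonoBehaviour
    {
        public int scale = 2;
        RenderTexture rt;

        void OnEnable()
        {
            rt = new RenderTexture(Screen.width * scale, Screen.height * scale, 24);
            rt.filterMode = FilterMode.Bilinear;
            GetComponent<Camera>().targetTexture = rt;
        }

        void OnGUI()
        {
            // Draw the high-resolution result back to the screen; at 2x the
            // bilinear fetch averages a 2x2 block of source pixels.
            GUI.DrawTexture(new Rect(0, 0, Screen.width, Screen.height), rt);
        }

        void OnDisable()
        {
            GetComponent<Camera>().targetTexture = null;
            if (rt != null) rt.Release();
        }
    }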
     
  24. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    I was thinking about the antialiasing some more.

    Ideally the generation of the antialiased edge pixels should be done before anything is output to the screen. In all methods like MSAA, FXAA, SMAA, etc., it's working on "after the fact" data which has already lost significant information and can't handle gradual inclines very well. The quality is also not really there; it's not perfect. But perfect antialiasing can be generated in a shader IF the shader has enough information about the mathematical properties of the geometry. For example, you can make a shader which outputs a circle (within a quad of geometry for example) and perfectly antialiases it based on distance to a threshold etc.

    It seems to me though that the problem here is, in order to correctly output any kind of antialiased fragments with the correct coloring and the correct blending, ALL fragments have to be output alpha-blended with the background. The antialiased edge pixels have to have transparency, which forces ALL of the rendering of EVERY triangle into a transparency render queue. And then all those fragments have to perform an alpha-blend operation, so now basically the entire geometry is going to have to be rendered as transparent, which obviously incurs something of a performance hit. BUT possibly this performance hit is not "as bad" as various post-processing antialiasing efforts or multi-sample efforts, given that the quality can potentially be perfect. And maybe the tradeoff is worth it given that hardware is now much faster at blending than it used to be years ago.

    It could be possible that you modify the geometry of the triangulated svg shapes, so that the main interior of the shapes is rendered solidly with no blending and then a separate thin rim of triangles lives around the outer perimeter of the shape. This outer perimeter is where the semi-transparent pixels live and they are then rendered in a different shader. But the problem now arises that with a whole bunch of "layers" of structured svg shapes, forming an object or scene, there's going to have to be a correct rendering order, and all that has to be sorted, and likely alternating between transparent and opaque render queues on and off and on and off, which I presume is a ton of draw calls.

    One other issue is the way fragments are generated by the pipeline. If e.g a piece of geometry only covers 50% of a pixel, perhaps the pixel will be considered empty, whereas 51% maybe it is considered 'filled', generating a fragment. But the problem is that even the pixels which are only 1% filled need to generate a fragment in order to later be able to output that fragment with a low transparency level for antialiasing purposes. In OpenGL there used to be an option to switch on polygon smoothing (see GL_POLYGON_SMOOTH) which would trigger the hardware to calculate coverage of these edge pixels and include the almost-transparent pixels as fragments to be rendered. At some cost of course. But this would be needed in order to correctly calculate the antialiasing. I don't know if this functionality is still supported. What comes to mind then is possibly some kind of modification to the geometry, perhaps a custom geometry shader for svg, which slightly expands the triangles to cover like an extra half pixel so that the coverage of the edge pixels can be properly calculated in the fragment shader.

    I had this other idea too whereby, if we would be comfortable with a slight "shrinkage" in the size of triangles (up to 0.5 pixels), (and triangles could be expanded to compensate) we could feed texture coordinates in with all the vertices. There would be no texture, but the coordinates will then interpolate across the triangle. If they are set up correctly with e.g. Y coords moving away perpendicular to the silhouette edges of triangles, and sized correctly to maintain a consistent rate of change over the coordinate system, then a shader could use the texcoord, in combination with the screen-space rate of change (to support scaling), so that the outer 1 pixel border would produce a soft gradient ranging the transparency from 1 to 0. This would produce a perfect antialiasing effect at the exposed edge. If triangles are exposed on two edges, they will need to be split into 2 triangles, and similarly for a single orphaned triangle it needs to be split into 3. A further variation on that is, if necessary, to treat the texcoord like a signed distance field, so that we can always ensure a perfect 1-pixel-wide antialiasing around the border no matter the camera/scale of the geometry. But I'm thinking maybe that's not needed. Also interior triangles whose "tips" touch the outer edge need to have texcoords set up as well and may need to be split into two triangles.

    Another idea I had is, what if we expand the geometry triangles slightly by roughly 1 pixel (at a given res), and then in the fragment shader we pass in all 3 vertices to EACH vertex, and have it calculate a "virtual triangle" inside the shader. Each fragment will calculate whether it is inside or outside of this virtual procedural triangle and render accordingly, with perfect antialiasing. All triangles are alphablended with the background. The virtual triangle "lines up with" where the original geometric triangle was. It doesn't need any additional changes to the geometry or any extra triangles, but the shader is a bit more complex with extra vertex data needed (9 vertex coords instead of 3, per vertex), which triples the vertex data.

    There is also a method used in ragespline, where an approximately 1-pixel-wide triangle strip is added around the edge of a shape. The strip then uses either a texture or a procedural output, which is a fade of transparency from 1 to 0. This is alphablended with the background. Obviously this means the edge geometry uses a 4-vertex quad per original border triangle, so the vertex count goes up quite a bit. And the edge pixels then have to have a transparent render queue. The procedural fade could be combined with screen-space rate-of-change to always ensure it outputs a 1-pixel width antialiased edge regardless of the camera/geometry scale. This would be similar to treating it like a signed distance field at the edges. I also read that you can do much the same thing, based off a texture, if the texture has a 1-pixel alpha=0 transparent pixel around the edges, and bilinear filtering will then blend between the alpha=0 and alpha=1 based on coverage. But this requires the use of a texture.

    One other alternative. ... convert the triangles into a high resolution signed-distance-field texture where each triangle is mapped to some portion of the texture, as an atlas. This would produce good antialiasing at a reasonable zoom resolution, at the expense of a texture lookup to calculate how close the pixel is to the threshold, and uses bilinear filtering. The only downsides are 1) high-res texture needed to avoid 'smoothing' of corners and 2) texture lookup in the shader and 3) sharp corners/joins become rounded if zoomed in too far. Texture resolution could be adjusted as a tradeoff.

    Does any of this sound feasible? I think if Unity had a simple switch on the SVG asset for whether to render it aliased or antialiased, and an appropriate shader selected for the purpose, people could choose whether to take the performance hit or not. And the quality of the results would be close to perfect, far better than any kind of post-processing/screen-space effects. Essentially the original geometric/mathematical definition of the triangles has to ideally be available in the shader and computed in the shader, in order to correctly rasterize and quantize/fit the output pixels to the pixel grid, in order to generate perfect antialiasing on the fly. And in all cases the smoothed edge pixels HAVE to always be alpha-blended as transparency, otherwise this will not work.

    I think the method using a single texture coordinate at each vertex to set up a perpendicular fade at the geometry edges, possibly coupled with a slight expansion of triangle sizes to compensate for the "shrinking", would be 1) the least increase in vertex data, 2) widely supported, 3) easy and performant fragment shader ..... requiring in some cases some modification to some triangles (those with 2 or 3 exposed borders) which is probably not that many (convex tips, orphans). It'd be an extra 8-12 bytes per vertex. Rendering should be very fast, except for having to render all triangles as transparent alpha blended. I don't see any way around this to correctly draw perfectly antialiased overlapping shapes.

    The ideal hardware approach is the gl_polygon_smooth, but I don't know if it's widely supported on modern devices.
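
    To make the texcoord-fade idea concrete, this is the per-fragment math I have in mind, written as plain C# purely for illustration (in a real shader, 'edgeDistance' would be the interpolated coordinate and 'pixelWidth' would come from fwidth(edgeDistance)):

    Code (CSharp):
    using UnityEngine;

    public static class EdgeCoverage
    {
        // Fade alpha from 0 to 1 over roughly one screen pixel around the edge,
        // regardless of how much the geometry or camera is scaled.
        public static float Coverage(float edgeDistance, float pixelWidth)
        {
            return Mathf.Clamp01(edgeDistance / Mathf.Max(pixelWidth, 1e-6f));
        }
    }
    // e.g. Coverage(0.5f, 1f) == 0.5f : a fragment half a pixel inside the
    // shape gets ~50% alpha, which is what gives the smooth edge.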
     
    Last edited: Mar 13, 2020
  25. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Here's a pic of the gl_polygon_smooth result, coupled with alpha-blended triangles. The result is perfect antialiasing. This is how it was designed to be done many years ago, and is exactly correct and great quality no matter the size, zoom, rotation, shape etc. But it requires (activates) larger pixel/fragment generation and coverage calculations at each pixel. Does this still work on modern hardware, or could it be reproduced with a geometry shader and/or vertex shader? Perhaps the vertex shader can pass on a modified alpha channel to the fragment shader, after calculating how close to the 'edge outline' the fragment is. Might have to modify the vertex positions as well somehow, or customize/expand the geometry, and pass in extra vertex data to allow these calculations for each pixel.

    19.6.jpg
     
    Last edited: Mar 13, 2020
  26. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Example of using texture coordinates to smooth interior edges.... a rough mockup... some triangles have to be split in order to allow 'corner points' to be represented as two adjacent gradients. If that same triangle has other exposed edges it has to be split even more. The interior light-gray triangle there is a 'hole'. The width of the transparency fade would be ideally around 1 pixel at all times, adjusted dynamically based off the texcoord and the screen-space rate of change spatially. I suppose this means as you zoom in, the geometry will seem to expand slightly?

    I suppose rather than being just internal, since we're in the business of adjusting the geometry, we could alternatively add geometry OUTSIDE the existing shape (even to the extent of producing an overall rectangle containing the shape), and then adjust the texture coords so that 0.5 lies on the exact edges of the triangles rather than inside. This requires extra triangles but is more accurate.

    InnerEdgeAntialiase.png

    I can see this might present issues for very acute triangles that need a really long corner point 'within' another triangle and gets cut off or fights with a subtriangle.

    This seems to be a somewhat similar approach measuring the distance to the edge: http://iryoku.com/aacourse/downloads/10-Distance-to-edge-AA-(DEAA).pdf
     
    Last edited: Mar 13, 2020
  27. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Something like this may work... https://abandonedwig.info/blog/2013/02/24/edge-distance-anti-aliasing.html

    Basically expand the triangles by about half a pixel in screen space (derivatives etc.), to include all needed fragments, then calculate the distance from the fragment to the nearest edge and color the alpha channel appropriately. This would run mainly in a vertex shader, but it needs to be fed extra data describing the triangle.
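
    The core quantity is just the distance from the fragment to the nearest edge; in C# terms (only to show the math, not an actual shader) it's something like:

    Code (CSharp):
    using UnityEngine;

    public static class EdgeDistance
    {
        // Distance from point p to segment ab; the edge-distance AA idea
        // evaluates this per fragment, with the edge endpoints passed down
        // from the vertex data.
        public static float ToSegment(Vector2 p, Vector2 a, Vector2 b)
        {
            Vector2 ab = b - a;
            float denom = Mathf.Max(Vector2.Dot(ab, ab), 1e-12f); // degenerate edge guard
            float t = Mathf.Clamp01(Vector2.Dot(p - a, ab) / denom);
            return Vector2.Distance(p, a + t * ab);
        }
    }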
     
  28. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    Not only that, but I'm not aware of this feature on D3D and Vulkan. The modern way of getting the equivalent effect is to use a multisampled framebuffer.

    We have explored ideas like these. They give pretty good results. As you mentioned, you need to provide an additional 3 vertices (9 floats) per vertex so that they have the full triangle information. This means that you cannot have shared edges anymore, and you'll also have to duplicate some vertices when they are shared between multiple triangles. You also need additional per-vertex information to avoid smoothing interior edges.

    However, we borrowed from some of the techniques you mentioned to prototype some ideas. Instead of encoding a full triangle per vertex, we encode curve data in an efficient manner. This has two benefits:
    - We can compute the distance to the curve and evaluate sub-pixel coverage for antialiasing
    - We don't have to over-tessellate the curve into triangles.
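
    For reference, if someone wanted to experiment with the "full triangle per vertex" variant themselves, the extra data can be shipped through additional UV channels on a Unity Mesh. This is only a hypothetical sketch of the data plumbing, not the layout we use internally:

    Code (CSharp):
    using System.Collections.Generic;
    using UnityEngine;

    public static class TriangleDataPacker
    {
        public static void Pack(Mesh mesh)
        {
            Vector3[] verts = mesh.vertices;
            int[] tris = mesh.triangles;

            var uv1 = new List<Vector3>(new Vector3[verts.Length]);
            var uv2 = new List<Vector3>(new Vector3[verts.Length]);
            var uv3 = new List<Vector3>(new Vector3[verts.Length]);

            // NOTE: assumes vertices are NOT shared between triangles
            // (shared edges have to be split first, as discussed above).
            for (int i = 0; i < tris.Length; i += 3)
            {
                Vector3 a = verts[tris[i]], b = verts[tris[i + 1]], c = verts[tris[i + 2]];
                for (int k = 0; k < 3; k++)
                {
                    uv1[tris[i + k]] = a;
                    uv2[tris[i + k]] = b;
                    uv3[tris[i + k]] = c;
                }
            }

            // UV channels 1..3 show up as TEXCOORD1..3 in the shader.
            mesh.SetUVs(1, uv1);
            mesh.SetUVs(2, uv2);
            mesh.SetUVs(3, uv3);
        }
    }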
     
  29. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Interesting. I realize that if the mathematical representation of the curve/edge is provided to the shader, then it can output a perfectly antialiased border. Anything that is done "after" the shader outputs, such as post-processing or multisampling of the buffer etc., can NEVER produce the perfect results you'll get from outputting the correct antialiased coverage values from the source data.

    Microsoft had a way of calculating a 3-point bezier in a triangle using the texture coords, which was interesting. But they patented it. It sounds like you're finding a way to better represent the mathematical 'object' through minimal vertex data so that the shader can output beautifully accurate vector shapes. I look forward to seeing that little checkbox that says "antialiasing on/off". It sounds like if you can pass 'perfect curves' to the shader then you need far less geometry to enclose the fragments, maybe just a handful of triangles to output a really large infinite curve. The only overhead I then see is the required alpha blending.

    Any rough idea how long it may take before this feature becomes stable and available?
     
    Last edited: Mar 22, 2020
  30. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    I don't have an ETA at this time. The work to bring this feature to production is planned, but not officially tackled yet.
     
  31. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    It's been a while... revisiting this... has any progress been made on built-in high-quality antialiasing for SVG (i.e. not MSAA etc.)?
     
  32. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    Yes. We have implemented a new antialiasing technique to be used by UI Toolkit (which has similar vector graphics requirements to the SVG package). It was briefly mentioned in this "What's New" blog post (in the "Harness crisp textureless UI rendering capabilities" section):
    https://blog.unity.com/technology/whats-new-in-ui-toolkit

    We implemented an (almost) complete vector graphics system using this technique, which will be used by the SVG package later on.

    It's still not ready yet, but progress is being made. Stay tuned!
     
  33. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    If I may ask, how does the new antialiasing work?
     
  34. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    To make a long story short, we subdivide curved paths into multiple arcs that match the curve shape. We extrude some geometry around the arc, and a very simple per-fragment signed distance to the arc is used to compute the pixel coverage. In many cases, this allows a very rough tessellation while rendering infinitely smooth antialiased shapes, all of this with a rather trivial shader. The main cost is on the CPU, when trying to find arcs that match complex curves (such as Béziers). But we have fast paths for simpler shapes, such as circles, lines, etc.
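
    As a simplified illustration of the per-fragment test (using a full circle instead of an arc segment, and plain C# instead of shader code):

    Code (CSharp):
    using UnityEngine;

    public static class ArcCoverage
    {
        // Signed distance to the circle boundary, turned into coverage over a
        // ~1 pixel band. In a shader, 'pixelSize' would come from the
        // screen-space derivatives (fwidth) of the position.
        public static float Coverage(Vector2 p, Vector2 center, float radius, float pixelSize)
        {
            float signedDistance = radius - Vector2.Distance(p, center); // > 0 inside
            return Mathf.Clamp01(signedDistance / Mathf.Max(pixelSize, 1e-6f) + 0.5f);
        }
    }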
     
  35. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Neat, so not only do you get perfect antialiasing, you also become resolution-independent in terms of triangle count, i.e. zooming in real big still gives completely smooth curved edges. Sounds like a win-win. And I presume it also works fine with any kind of zooming in and out etc., always a 1-pixel-wide antialiasing?

    So this is not actually out yet?
     
  36. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    Yes, exactly.

    This is currently in use by the UI Toolkit rendering backend, and by the upcoming vector API that will be included in UI Toolkit (probably out in 2022.1 or 2022.2). When completed, the SVG package will be able to use the same vector API.
     
  37. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    Cool. BTW, in the Unity manual I could find practically zero references to importing SVG assets. I did find 'VectorImage' in the scripting area. There doesn't appear to be a page dedicated to it?
     
  38. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    You are correct, the documentation team is working on it as we speak! :)
     
  39. imaginaryhuman

    imaginaryhuman

    Joined:
    Mar 21, 2010
    Posts:
    5,834
    How does the new system lend itself to animation, like the morphing of vector objects from one shape to another?
     
  40. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    It depends on what kind of animation you are dealing with. If you are moving the vector shapes' control points, the output will be re-tessellated and everything should work just fine.

    However, if you were thinking about using the 2D animation system, this only works with meshes, so this won't work out of the box (although we would like to make it work with vector input eventually).
     
  41. ANTONBORODA

    ANTONBORODA

    Joined:
    Nov 16, 2017
    Posts:
    52
    Sorry for reviving this old thread, but I would like to know if the aliasing issue of the Vector Graphics package in conjunction with UI Toolkit is still being worked on?
    I have recently tried to migrate our project from sprites to Vector Images in UI Toolkit, but the results are sub-par by a huge margin and the aliasing issues are horrendous, for example:
    upload_2023-8-14_12-3-48.png

    And this is the same icon converted to Texture Sprite from the same SVG:
    upload_2023-8-14_12-5-36.png
     
  42. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    It's still planned! But we don't have an ETA yet...

    The vector graphics package in its current state relies on MSAA to be turned on for antialiasing. If you enable antialiasing on your camera, the output should be as good as the Textured Sprite output you've shown.
     
  43. ANTONBORODA

    ANTONBORODA

    Joined:
    Nov 16, 2017
    Posts:
    52
    Antialiasing does not affect anything with this sprite. I see changes in the scene, so AA is applied, but the UI (UI Toolkit) is completely unchanged, as is the vector image used in the UI.
     
  44. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    If you are using a Canvas, you'll need to set its mode to "Screen Space - Camera"; if it's in "Screen Space - Overlay", the antialiasing won't affect it.
     
  45. ANTONBORODA

    ANTONBORODA

    Joined:
    Nov 16, 2017
    Posts:
    52
    What? This is UI Toolkit, there's no canvas...
     
  46. mcoted3d

    mcoted3d

    Unity Technologies

    Joined:
    Feb 3, 2016
    Posts:
    1,003
    Oh! Since UI Toolkit always renders in overlay, the camera antialiasing won't have any effect on it. You can configure the PanelSettings to render into an MSAA-enabled RenderTexture instead, and render that texture over the scene (for example, on a full-screen quad). But in your situation it's probably easier to stick to textured sprites instead.
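
    A minimal sketch of that workaround (the full-screen quad and its material setup are up to you and not shown here):

    Code (CSharp):
    using UnityEngine;
    using UnityEngine.UIElements;

    public class UiToolkitMsaaTarget : MonoBehaviour
    {
        public PanelSettings panelSettings;     // the panel used by your UIDocument
        public Material fullScreenQuadMaterial; // material on a quad facing the camera

        RenderTexture rt;

        void OnEnable()
        {
            rt = new RenderTexture(Screen.width, Screen.height, 24)
            {
                antiAliasing = 4 // MSAA samples for the UI render target
            };
            // Redirect the UI Toolkit panel into the MSAA texture, then show
            // that texture somewhere in the scene (here via a quad material).
            panelSettings.targetTexture = rt;
            fullScreenQuadMaterial.mainTexture = rt;
        }

        void OnDisable()
        {
            panelSettings.targetTexture = null;
            if (rt != null) rt.Release();
        }
    }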