
Guilty Gear 3D models look like 2D sprites.

Discussion in 'General Discussion' started by miya, May 21, 2013.

  1. RSH1

    RSH1

    Joined:
    Jul 9, 2012
    Posts:
    256
    You can animate a guy's chest expanding/contracting in 2D...

    You've selectively picked one frame from that cutscene where "polygons" are visible; the rest of it looks like standard drawn anime. Even if the fighting gameplay is 3D rendered, I see no reason to believe the animated character cutscenes in the trailer are.
     
    Last edited: Oct 12, 2013
  2. MarigoldFleur

    MarigoldFleur

    Joined:
    May 12, 2012
    Posts:
    1,353
    You can do this at any point in both of the trailers. I'm pretty sure the people at Polycount who have gone over this and concluded "yeah, this is 3D with a special normal map and shadow map to help the shadows look right", and the people who made the game and said "yes, we are working in 3D", know better than you.
     
  3. Yoska

    Yoska

    Joined:
    Nov 14, 2012
    Posts:
    188
    When you watch it in 1080p, 3D artifacts are pretty easy to spot even without pausing. But man, it looks cool. I hope the developers themselves will at some point explain what they are using in full detail. This is not an Unreal 4 game, right?
     
  4. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    There is no mystery here. Doing anime in 3D is no different from doing what Pixar does: it is just boned, animated meshes. Bones, especially if you have enough of them, are capable of all sorts of anime-style effects, since you're able to scale them as well - something people often forget. In addition you can have morph targets - these aren't just for facial animation.

    The real key, which people are arguing over, isn't even engine-centric: it's the fact that you still need great artists.
     
  5. andiCR

    andiCR

    Joined:
    Sep 6, 2012
    Posts:
    27
  6. MAJORgoose

    MAJORgoose

    Joined:
    Dec 9, 2014
    Posts:
    2
    The art in this game is incredible; I too was originally wondering how they got 3D to look so good. Even cartoons on television don't look this good, and those are pre-rendered! I just found a great article (in English) talking about the process:
    http://www.sirlin.net/posts/guilty-gear-xrd
    and here's a japanese article with more pictures
    http://www.4gamer.net/games/216/G021678/20140703095/
    The short version:
    High-poly characters - needed for good, smooth shadows and surprisingly smooth silhouettes.
    The characters also swap between meshes for some of their special abilities, such as Millia's hair shown below.

    Shader - for outlines, flat colors and highlights
    Shadow Maps - This is what stands out to me as the most important: they control how shadows fall on the characters to get stylized anime shading. The shadows don't play by the rules of 3D, and that is hugely noticeable.

    Horrible polygon shadows before

    Great stylized anime shadows.


    I think that this is a terrific example of rendering. 3D needs to continue to expand its aesthetic style; it's a versatile tool and shouldn't be forced to look 3D.

    EDIT: Just found a great video explaining their art in great detail at GDC 2015
     
    Last edited: Jun 5, 2015
  7. Anton1274

    Anton1274

    Joined:
    Dec 7, 2013
    Posts:
    10
    I've extracted some 3D models, textures, and animations from the game using Gildor's umodel extractor. All the characters are 3D; the work is done with textures and lighting. The shadows aren't obtained in the standard way: they come from invisible 3D meshes that are used only for casting shadows and are never rendered.
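
    If anyone wants to try that invisible shadow-caster trick in Unity, here is a minimal sketch for the built-in pipeline (my own guess at an approach, not what Arc System Works ships). The shader has no colour pass at all and just reuses the shadow caster pass from the built-in VertexLit shader, so a mesh using it casts shadows without ever being drawn; the shader name is made up, and the UsePass path may differ between Unity versions (older ones use "VertexLit/SHADOWCASTER"). Setting the MeshRenderer's Cast Shadows option to "Shadows Only" in the Inspector gets you the same result without any custom shader.

    Shader "Hidden/ShadowProxyOnly"
    {
        SubShader
        {
            Tags { "RenderType" = "Opaque" }
            // No colour passes, so the mesh never appears on screen.
            // Borrow only the shadow caster pass from the built-in VertexLit shader.
            UsePass "Legacy Shaders/VertexLit/SHADOWCASTER"
        }
    }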
     
  8. BrandyStarbrite

    BrandyStarbrite

    Joined:
    Aug 4, 2013
    Posts:
    2,076
    Nice. :D
    You could do this cel-shaded cartoony look in MakeHuman. :D
     
  9. Not_Sure

    Not_Sure

    Joined:
    Dec 13, 2011
    Posts:
    3,546
    Wow! That is easily the best cel shader work I've ever seen.

    If Futurama could use something like that, maybe they could afford more seasons...
     
  10. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,035
    Reviving this as I am using some of the techniques from the video.
    Does anybody have any clue how they do the outline? Not the lines in the models (which I saw explained), but the one around the outside.
    I am asking because I noticed that regular outline shaders rely on extruding the faces along the normals, and if you have non-smoothed normals, the model will break apart when extruded.
    Anybody have any clue? Other than using a normal map to store a smoothed normal, which I would like to avoid.

    EDIT
    I just noticed there was actually a slide in the video presentation I didn't remember. The outlines are controlled by vertex colors. Need to do some testing now.
     
    Last edited: Dec 10, 2015
  11. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    "2D Sprites look just like 3D models"... now that would be interesting
     
  12. recursive

    recursive

    Joined:
    Jul 12, 2012
    Posts:
    669
    We're looking into a similar technique ourselves.

    A buddy of mine who is better at shaders and I are waiting for our day job's crunch time to wrap (and by wrap I mean ship) so we can take a few evenings to try and implement some of our ideas based on this.
     
  13. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Been done in the 90s. Pretty much any Infinity Engine game: they used pre-rendered 3D and converted it into sprites.

    Run an edge-detection filter over the depth texture. For the best results, combine it with an edge-detection filter run over a view-space normal map.

    Using a vertex shader is silly (although fast) and will produce artifacts on cutout objects with alpha transparency.

    I think there was a SIGGRAPH(?) paper on that about 7 years ago; try googling the subject.
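
    For anyone who wants to experiment with that depth-based edge detection, here is a rough sketch for Unity's built-in pipeline: an image-effect shader that compares each pixel's linear eye depth with its right and upper neighbours and draws an outline colour where the jump exceeds a threshold. You'd Blit with it from OnRenderImage, and the camera needs its depthTextureMode set to Depth; the shader name and property names are just placeholders, not anything from the game.

    Shader "Hidden/DepthEdgeOutlineSketch"
    {
        Properties
        {
            _MainTex ("Source", 2D) = "white" {}
            _OutlineColor ("Outline Color", Color) = (0, 0, 0, 1)
            _Threshold ("Depth Jump Threshold", Float) = 0.2
        }
        SubShader
        {
            Cull Off ZWrite Off ZTest Always
            Pass
            {
                CGPROGRAM
                #pragma vertex vert_img
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;
                sampler2D _CameraDepthTexture;        // filled in when the camera renders a depth texture
                float4 _CameraDepthTexture_TexelSize;
                fixed4 _OutlineColor;
                float _Threshold;

                fixed4 frag (v2f_img i) : SV_Target
                {
                    // compare this pixel's eye-space depth with its right and upper neighbours
                    float d  = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv));
                    float dr = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
                                   i.uv + float2(_CameraDepthTexture_TexelSize.x, 0)));
                    float du = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
                                   i.uv + float2(0, _CameraDepthTexture_TexelSize.y)));

                    // a large depth discontinuity marks a silhouette edge
                    float edge = step(_Threshold, abs(dr - d) + abs(du - d));
                    return lerp(tex2D(_MainTex, i.uv), _OutlineColor, edge);
                }
                ENDCG
            }
        }
    }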
     
  14. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Yeah, the 90s explored many cool techniques for game graphics, from digitized images to pre-rendered 3D to clay (Claymation, it was called) to vector balls and other things.

    I always thought digitized stuff was an easy way to do graphics:


    And here is a more recent Indie version:


    Probably the most well-known early 3D pre-rendered game


    Claymation (Clay Fighter was probably the first but I like these two Indie examples better):




    Vector balls: Check out Vector Man for the Genesis

    Lots of cool stuff back there.
     
  15. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    Actually, that was 3D models rasterized into 2D sprites, when ya think about it.
     
  16. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Not really, because you'd need a truckload of equipment and lots of space.
    If you build an awesome physical set, then to turn that into a sprite you'll need space for that set, a GOOD camera, lighting, and materials. Shooting video would require space, lighting, costumes, and actors.

    Also, if we're talking about 3D digitizing, that software is pricey (I think one of the good photogrammetry-based image reconstruction packages was about $4000 for the pro version that lets you fix measurements and object orientation).

    When you start building something, the amount of time, materials, and money spent eventually approaches the cost of making the same thing in software.

    If you're interested in the subject, there was a game called The Swapper. That one was handcrafted, literally: the guy built sets and models, animated them, and turned everything into sprites. Looks great. Also, the C&C series had a great tradition of using live-action video, and the same applied to Wing Commander and some other games back then.

    I prefer digital media, though, simply because they require less space and make less mess on your desk.

    Mortal Kombat was people turned into sprites.
    Your initial comment reminded me of this video:


    Just watch the 1st minute.
    AFAIK, it was done without CG and computers. Hand animated, every frame.
     
  17. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    @neginfinity you can do that with just a digital camera. Years ago I used to set a camera on a tripod in multi-frame shooting mode with a 10-second timer, or have a friend take the photos. The real trick is in using the right backdrop. I used a dark blue or black blanket spread across the wall behind me; that just makes it easier to clean up the images.

    Anyway they used to use all kinds of cool techniques. I remember some (probably not AAA) games using stop-motion photography. These days it seems like all we see is pixel art or 3D. lol
     
  18. Master-Frog

    Master-Frog

    Joined:
    Jun 22, 2015
    Posts:
    2,302
    They used to use cel sheets and layer them. You can move the sheets around to create all sorts of effects, not unlike Flash animation. It's not something you would worry about normally. I spent some time in an anime/manga community and learned a lot about old-school animation. Believe it or not, prior to CG there was film editing.
     
  19. cyberpunk

    cyberpunk

    Joined:
    Mar 20, 2013
    Posts:
    226
    They explain it in the GDC talk. It's using the old inverted-hull method. Basically you make a clone of the mesh, expand it somewhat, and then invert the normals (and, of course, set it to a black texture or vertex color). They said this mesh is one with standard averaged normals, not the main mesh with the tweaked normals. Hope that helps.
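
    For reference, here is a minimal sketch of the inverted-hull idea done as a second shader pass rather than a separate duplicated mesh (my own simplification, not Arc System Works' exact setup; the shader and property names are made up): the first pass just draws the main texture as a stand-in for the real toon pass, and the second pass inflates the vertices along their normals and renders only the back faces in the outline colour. As noted earlier in the thread, it wants smoothed/averaged normals, otherwise the hull tears apart at hard edges.

    Shader "Custom/InvertedHullOutlineSketch"
    {
        Properties
        {
            _MainTex ("Texture", 2D) = "white" {}
            _OutlineColor ("Outline Color", Color) = (0, 0, 0, 1)
            _OutlineWidth ("Outline Width (object units)", Float) = 0.01
        }
        SubShader
        {
            Tags { "RenderType" = "Opaque" }

            // Pass 1: plain textured surface (stand-in for the real toon pass)
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _MainTex;

                struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.uv = v.texcoord;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target { return tex2D(_MainTex, i.uv); }
                ENDCG
            }

            // Pass 2: inverted hull - inflate along normals, draw back faces only
            Pass
            {
                Cull Front
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                float _OutlineWidth;
                fixed4 _OutlineColor;

                float4 vert (appdata_base v) : SV_POSITION
                {
                    // push each vertex out along its (smoothed) normal
                    v.vertex.xyz += v.normal * _OutlineWidth;
                    return UnityObjectToClipPos(v.vertex);
                }

                fixed4 frag () : SV_Target { return _OutlineColor; }
                ENDCG
            }
        }
    }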
     
  20. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,035
    I watched the part of the video where they talk about the outline again, and I don't hear anything about it being another mesh. Did you read that on some website?
    Or was it mentioned later in the video?
    If I can't find a compromise I'll discard outlines altogether; getting a good outline on a complex mesh (too often the inverted mesh intersects badly) is really time consuming, and I have lots of characters to design.



    Aiming for mobile, performance is key here.
     
  21. bamncan

    bamncan

    Joined:
    Dec 15, 2013
    Posts:
    47
    Anyone able to explain a bit more about what they're doing with the UVs? Are they just using the UVs/textures to create fine lines and permanent shading? Is he doing that to the multiplied texture for easier use?

    Also, I'm a bit unsure about the shader itself. It takes a 50% threshold on the light vector, and then uses what to determine which color to show on the model?
     
  22. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,035
    They basically don't do anything out of the ordinary with the UVs. They just made sure that the black lines are perfectly horizontal so that they won't get distorted when scaled down because of camera distance.
    Then, if they need a thicker or thinner line, they just push the UVs of that polygon further into or out of the black area. Instead of drawing the texture according to the UVs, they do the opposite: model the UVs to obtain the line they want.

    The shader is basically a supercharged cel shader. They calculate the angle between the light and the normal of the polygon, as in any diffuse shader; then, if the angle is greater than the threshold value, they show the color from the shaded texture, otherwise they show the color from the lit texture (they have two textures, because they want to choose the shaded color arbitrarily rather than have it result from a multiplication).
    This threshold value is not the same for all polygons; it is stored in the vertex colors, so some polys have one threshold while others have a different one.

    They store a lot of info in the vertex colors, so I suppose they are using more than one set of vertex colors, which is not currently possible in Unity, I think.
    Plus, another thing I recently discovered is that Unity imports vertex colors as low-precision fixed-point values, and it's quite noticeable when you have fine detail, like baked ambient occlusion for example.
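
    Here is a minimal sketch of that two-texture, vertex-colour-threshold idea, written as a single forward-base pass for Unity's built-in pipeline. It's my own reduction of the technique, not the actual Xrd shader: using the red vertex-colour channel for the threshold, a hard step cut, and a half-Lambert light term are all assumptions on my part.

    Shader "Custom/TwoTexToonSketch"
    {
        Properties
        {
            _LitTex ("Lit Texture", 2D) = "white" {}
            _ShadeTex ("Shaded Texture", 2D) = "gray" {}
        }
        SubShader
        {
            Tags { "RenderType" = "Opaque" }
            Pass
            {
                Tags { "LightMode" = "ForwardBase" }
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler2D _LitTex;
                sampler2D _ShadeTex;

                struct v2f
                {
                    float4 pos       : SV_POSITION;
                    float2 uv        : TEXCOORD0;
                    float3 normal    : TEXCOORD1;
                    float  threshold : TEXCOORD2;
                };

                v2f vert (appdata_full v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.uv = v.texcoord.xy;
                    o.normal = UnityObjectToWorldNormal(v.normal);
                    // per-vertex shading bias, painted into the red vertex-colour channel
                    o.threshold = v.color.r;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // half-Lambert term against the main directional light, remapped to 0..1
                    float ndl = dot(normalize(i.normal), _WorldSpaceLightPos0.xyz) * 0.5 + 0.5;
                    // hard two-tone cut: at or above the per-vertex threshold use the lit
                    // texture, below it use the separately authored shaded texture
                    float lit = step(i.threshold, ndl);
                    return lerp(tex2D(_ShadeTex, i.uv), tex2D(_LitTex, i.uv), lit);
                }
                ENDCG
            }
        }
    }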
     
  23. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,151
    They only use one set of vertex colours, but they use the alpha channel, which is a hassle to work with.
     
  24. bamncan

    bamncan

    Joined:
    Dec 15, 2013
    Posts:
    47
    Thanks for the more thorough description. Though the second texture is used as a multiplier to create a subsurface-scattering effect, so they can say how dark an area should be when it isn't lit.

    I believe they also used this plugin so they could change the face vertex normal values to do what I think you misunderstood as multiple vertex normals.



    I wish I understood shader creation, or knew someone who did, as I would love to fool around with the process.
     
    BrandyStarbrite likes this.