How to blur with Z depth in a 2D game

Discussion in 'General Graphics' started by FeastSC2, Sep 8, 2017.

  1. FeastSC2

    FeastSC2

    Joined:
    Sep 30, 2016
    Posts:
    978
    I'm making a 2D game with a perspective camera.

    I want to reproduce the blurry background that Seasons After Fall has:


    I used the Depth of Field module from Unity's post-processing stack, but it didn't blur my sprites based on their Z positions; everything was either fully blurred or fully sharp.
    What can I do to achieve a similar effect?
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    Blur the sprites in Gimp / Photoshop. I'm pretty sure that's what they did.
     
    theANMATOR2b likes this.
  3. FeastSC2

    FeastSC2

    Joined:
    Sep 30, 2016
    Posts:
    978
    Is this the only way? Something based on the Z position would be much faster for my artist.
    I recently saw someone create a shader that blurs anything behind it; maybe that's one way?

    Here's the example:
     
  4. brownboot67

    brownboot67

    Joined:
    Jan 5, 2013
    Posts:
    375
    Would you prefer it be horrible for all your players' GPUs? // real talk
     
    theANMATOR2b likes this.
  5. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    If you insist on using DoF:

    Set the background art to use a cutout shader instead and DoF will work. The reason it doesn't work right now is that transparent materials don't write to the depth buffer by default, so Depth of Field has nothing to work with. You will lose partial transparency, though.

    Cutout (alpha test) does work though.
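
    A minimal sketch of that idea, assuming the built-in "Unlit/Transparent Cutout" shader is acceptable for your background sprites (the component name and cutoff value here are illustrative, not from this thread):

    Code (CSharp):
    using UnityEngine;

    // Swap a sprite to an alpha-tested (cutout) material so it writes into the
    // camera depth texture and Depth of Field can pick it up. Soft edges are lost.
    [RequireComponent(typeof(SpriteRenderer))]
    public class CutoutSprite : MonoBehaviour
    {
        [Range(0f, 1f)] public float alphaCutoff = 0.5f;

        void Start()
        {
            var sr = GetComponent<SpriteRenderer>();
            var mat = new Material(Shader.Find("Unlit/Transparent Cutout"));
            mat.SetFloat("_Cutoff", alphaCutoff); // pixels below this alpha are clipped
            sr.material = mat;
        }
    }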


    Personally? I'd go with blurring source art for that particular game. Seems like the easiest way?
     
    theANMATOR2b and FeastSC2 like this.
  6. FeastSC2

    FeastSC2

    Joined:
    Sep 30, 2016
    Posts:
    978
    Losing partial transparency is a no-no in my game's case. Thanks for pointing that out, though.

    I don't really understand why that would be so costly for the players' GPUs, since 3D games have blurred objects in the distance all the time. Is it because it's done on the camera rather than on the game objects?

    And in those 3D games, what do they do when certain props have transparency?
     
  7. richardkettlewell

    richardkettlewell

    Unity Technologies

    Joined:
    Sep 9, 2015
    Posts:
    2,285
    How about: pre-blur your textures in a script as a custom build step? Maybe generate a few blur sizes per sprite, if it's going to be dynamic in your game.

    Then just pick the right blurred texture based on z distance, during rendering.

    Then it's easy for your artist, easy to tweak blur strengths via script, and fast for whatever GPU you are targeting.

    Only downside is memory cost if you want to store many blur strengths per sprite.
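
    A rough sketch of the runtime half of that idea (the component name, the distance range, and the idea of storing the variants in an array are assumptions for illustration, not something from this thread):

    Code (CSharp):
    using UnityEngine;

    // Pre-blurred variants of the same sprite, sorted from sharp to most blurred,
    // are swapped in based on how far the sprite sits from the camera in Z.
    [RequireComponent(typeof(SpriteRenderer))]
    public class PreBlurredSprite : MonoBehaviour
    {
        public Sprite[] blurLevels;          // index 0 = sharp, last = most blurred
        public float maxBlurDistance = 30f;  // Z distance at which the last level is used

        SpriteRenderer sr;

        void Awake() { sr = GetComponent<SpriteRenderer>(); }

        void LateUpdate()
        {
            if (blurLevels == null || blurLevels.Length == 0) return;
            float z = Mathf.Abs(transform.position.z - Camera.main.transform.position.z);
            float t = Mathf.Clamp01(z / maxBlurDistance);
            int index = Mathf.RoundToInt(t * (blurLevels.Length - 1));
            sr.sprite = blurLevels[index];
        }
    }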
     
    Kalkatos and FeastSC2 like this.
  8. FeastSC2

    FeastSC2

    Joined:
    Sep 30, 2016
    Posts:
    978
    That sounds very interesting! Feels like it's more flexible than DoF, so it's probably even better! Great idea.
     
    richardkettlewell likes this.
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    So a quick explanation for why we're all basically saying "don't use DoF image effects". It certainly would seem like that would be the easiest solution, and indeed many games do this.

    The problem is that depth of field effects work by rendering the scene "in focus", then using the depth of each pixel (as stored in the camera depth texture) to blur that "in focus" image. This works on opaque, hard-edged geometry, but not on anything with alpha.

    The simple reason is that the depth texture can only hold one depth value per pixel, but multiple overlapping objects with partial transparency mean each pixel may actually have several depth values. You could use an approximation of the depth, but you'll always be dealing with unexpected hard edges. Real-time depth of field on semi-transparent objects is still kind of an unsolved problem.

    The easiest solution is to blur everything beforehand. Some people have implemented solutions where they blur each sprite in their shader in real time, but this is much slower than the depth of field image effect, as you'll be doing that calculation at multiple depths per pixel, and often for pixels that will never be seen.
     
    FeastSC2 likes this.
  10. FeastSC2

    FeastSC2

    Joined:
    Sep 30, 2016
    Posts:
    978
    I see, thanks for the explanation bgolus.

    Is it complicated to generate a new texture from inside Unity, using Unity as if it were Photoshop, based on an existing sprite (made in PS) plus a shader (made in Unity)?
    This might alleviate a lot of optimization issues, since quite often I won't need the shader in real time. It could be useful for simple things like blending a sprite's color with another texture or a color, that sort of thing (overlay, hard light...).

    I know that stuff could obviously be done in PS, but there are some advantages to seeing the effects in the scene.
     
  11. brownboot67

    brownboot67

    Joined:
    Jan 5, 2013
    Posts:
    375
    It is not complicated to make yourself a tool to blur a bunch of images. But it is tremendously more complicated than batch running an action in Photoshop (basically an ironclad version of the thing you'd be re-making).
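
    For what it's worth, a hedged sketch of such a tool: run a source texture through a material (a blur or blend shader of your own) and write the result out as a PNG. The menu path, the material asset path, and the output naming are all made up for illustration, and the script would need to live in an Editor folder.

    Code (CSharp):
    using System.IO;
    using UnityEditor;
    using UnityEngine;

    public static class TextureBaker
    {
        [MenuItem("Tools/Bake Selected Texture Through Material")]
        static void Bake()
        {
            // In a real tool these would come from a small editor window.
            var source = Selection.activeObject as Texture2D;
            var material = AssetDatabase.LoadAssetAtPath<Material>("Assets/BlurMaterial.mat"); // hypothetical asset
            if (source == null || material == null) return;

            // Run the shader over the texture on the GPU.
            var rt = RenderTexture.GetTemporary(source.width, source.height, 0);
            Graphics.Blit(source, rt, material);

            // Read the result back to the CPU and save it next to the original.
            var previous = RenderTexture.active;
            RenderTexture.active = rt;
            var result = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false);
            result.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
            result.Apply();
            RenderTexture.active = previous;
            RenderTexture.ReleaseTemporary(rt);

            var path = Path.ChangeExtension(AssetDatabase.GetAssetPath(source), null) + "_baked.png";
            File.WriteAllBytes(path, result.EncodeToPNG());
            AssetDatabase.Refresh();
        }
    }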
     
  12. TheSixthHammer

    TheSixthHammer

    Joined:
    Feb 13, 2017
    Posts:
    6
  13. starfckr1

    starfckr1

    Joined:
    Apr 11, 2021
    Posts:
    23
    Been struggling to find a way to do this for a while now, and luckily came upon your article, @TheSixthHammer.

    Would it be possible to provide a few more examples of how you did the ratio calculations for figuring out the best way to scale down sprites before blurring them?

    Testing this out now, but I have quite a variety of sprite sizes and would love a bit more insight instead of experimenting for ages :)

    What I am most interested in is how you dealt with the scale-down vs. blur ratio when upscaling the original sprite. From my initial testing I got good results by (as an example) downscaling a 1024px sprite to 256px with a 2px blur, then comparing the upscaled version of that against the 1024px version with an 8px blur. Looks good.

    Where I fall off is when I move the sprites further back in Z-space and need to scale them up based on their position. Did you use a range of sprite sizes with the same blur radius to cover the corresponding upscaled sizes? As in (with the above example) one 256px version with a 2px blur to get an 8px blur when upscaled 4x (to 1024px), and then a 512px version with a 2px blur upscaled 4x (in this case it should fill 2048px of screen space)?

    Hope you understand my question! :)
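
    (For reference, the arithmetic described above seems to reduce to: effective blur at display size ≈ blur radius applied at the downscaled size × the upscale factor. A tiny helper, using this post's own numbers purely as an example:)

    Code (CSharp):
    public static class BlurRatio
    {
        // A blur applied at a reduced resolution is effectively multiplied by the
        // upscale factor when the sprite is drawn larger on screen.
        public static float EffectiveBlur(float blurAtDownscaledSize, float downscaledSize, float displaySize)
        {
            return blurAtDownscaledSize * (displaySize / downscaledSize);
        }
    }

    With the numbers above, EffectiveBlur(2f, 256f, 1024f) comes out to 8, matching the 256px/2px vs. 1024px/8px comparison in this post.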
     
    drfuzzyness likes this.