
Enhancing Photorealism Enhancement

Discussion in 'General Discussion' started by VIC20, May 17, 2021.

  1. VIC20

    VIC20

    Joined:
    Jan 19, 2008
    Posts:
    2,688


    Perhaps in the future we will be able to simplify game graphics and make them look photorealistic with AI in real time.
    The images are enhanced by a convolutional network that leverages intermediate representations produced by conventional rendering pipelines.

    Intel's AI model takes the game environment from "GTA 5" and re-renders it with visual characteristics learned from the Cityscapes dataset. According to Intel, the approach works better than other methods because it also takes the G-buffer with its geometry information into account.
    According to Intel, reconstructing a frame of the in-game graphics takes about half a second even on the current top graphics card, an RTX 3090. In its current form the method is therefore not suitable for practical use in video games; the latency is far too high for gaming. However, Intel writes that the approach can be optimized further and even made compatible with ray tracing. The method could also be integrated directly into game engines, which could reduce the latency further.

    https://intel-isl.github.io/PhotorealismEnhancement/
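    A rough sketch of the general idea (PyTorch; the channel counts, layer sizes and names below are made up for illustration and are not Intel's actual network): the rendered frame and its G-buffer are concatenated channel-wise, and a convolutional network predicts a correction on top of the render.

    Code (Python):
    import torch
    import torch.nn as nn

    class GBufferEnhancer(nn.Module):
        """Toy enhancement network conditioned on G-buffer channels."""
        def __init__(self, gbuffer_channels=7):  # e.g. normals(3) + albedo(3) + depth(1), purely illustrative
            super().__init__()
            in_channels = 3 + gbuffer_channels    # rendered RGB + auxiliary G-buffer data
            self.net = nn.Sequential(
                nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1),   # residual correction in RGB
            )

        def forward(self, rgb, gbuffer):
            # Concatenating the G-buffer lets the network condition its
            # enhancement on geometry/material info, not just the final pixels.
            x = torch.cat([rgb, gbuffer], dim=1)
            return rgb + self.net(x)

    # Usage: frame and G-buffer captured from the engine as NCHW float tensors.
    frame = torch.rand(1, 3, 540, 960)
    gbuf = torch.rand(1, 7, 540, 960)
    enhanced = GBufferEnhancer()(frame, gbuf)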
     
  2. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,570
    I saw this thing. The problem is that it also makes all the surroundings look the same: a grayish-greenish overcast day.

    Meaning if you slap it onto a racing game, the game will look like the Cityscapes dataset.
     
  3. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,154
    The obvious solution here would be to train with a dataset that generates more LUT-friendly output that can be adjusted on the fly. That's assuming this would ever get into an actual game or anything, of course.
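    A rough sketch of what "adjusted on the fly" could mean (NumPy; the LUT and helper function below are hypothetical, not anything from the paper): if the network is trained to emit a fairly neutral image, a small colour LUT applied afterwards can restyle it per game or per scene without retraining.

    Code (Python):
    import numpy as np

    def apply_1d_lut(image, lut):
        """image: HxWx3 float array in [0, 1]; lut: 256x3 table of output values."""
        idx = np.clip((image * 255.0).astype(np.int32), 0, 255)
        return np.stack([lut[idx[..., c], c] for c in range(3)], axis=-1)

    # Example grade: lift the reds a touch and mute the blues.
    ramp = np.linspace(0.0, 1.0, 256)
    warm_lut = np.stack([np.clip(ramp * 1.1, 0.0, 1.0), ramp, ramp * 0.95], axis=-1)

    neutral_frame = np.random.rand(540, 960, 3)   # stand-in for the network output
    graded_frame = apply_1d_lut(neutral_frame, warm_lut)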
     
    angrypenguin and VIC20 like this.