
Neural net / AI to use painting as basis for photorealistic face?

Discussion in 'General Discussion' started by HonoraryBob, Mar 2, 2021.

  1. HonoraryBob

    HonoraryBob

    Joined:
    May 26, 2011
    Posts:
    1,210
    Given the number of online tools that use neural net technology to animate photos or convert them into realistic portraits, I assumed there would be an option to use a drawing or painting to define the facial features of a photorealistic face. But lengthy searches haven't turned up anything that works very well (there's one site that nominally converts a portrait sketch to a photo, but the result almost never resembles the input). Does anyone know of a tool that takes the contours from a painting or sketch and uses them to change the features of a photorealistic portrait? I would think this would be a fairly common use of neural net technology, since I've seen somewhat similar things being done.
     
  2. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,298
    There's a mention of "DeepFaceDrawing":
    http://geometrylearning.com/DeepFaceDrawing/
    And that's the only one I could find.

    However, I believe pursuing this angle is likely a waste of time, unless you have research-level hardware and a lot of free time to train the model yourself.
     
    NotaNaN, Ryiah and MadeFromPolygons like this.
  3. HonoraryBob

    HonoraryBob

    Joined:
    May 26, 2011
    Posts:
    1,210
    The "DeepFaceDrawing" is the one I already tried (via their online tool) and found that the result doesn't usually look anything like the sketch. It's surprising that there aren't any good alternatives for this type of thing given the large number of similar online tools (all the neural net-driven facial animator tools, facial filters etc) and the fact that it's fairly routine to match facial features from one face to another - it's done in chat filters all the time, like that one which resulted in a lawyer being rendered as a cat during a Zoom meeting with a judge. So why is it so difficult to find one which takes facial features from a sketch or painting (or photo) and uses that to change the features of a photo portrait?
     
  4. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,298
    That's not even remotely the same type of problem.

    A Zoom chat filter only detects facial features very roughly, using some relatively dumb approach like a Haar cascade. The lawyer cat is not rendered via a neural net; it's the equivalent of an animated 2D model with preconfigured blendshapes that morphs based on the perceived state of your features. The filter doesn't transform your face into a cat. It can only display ONE cat and make it look left/right/etc. and shake its head slightly. Basically this is the functionality of FaceRig: it can detect your facial expression somewhat, and then alter a single prebuilt model based on it.

    There are a couple of issues here: those systems take a PHOTOGRAPH as input, they can be wrong, and they're really dumb - but they can probably be implemented on weak hardware.
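
    To put the "dumb" part in perspective, the detection step is roughly the sketch below - a minimal example using OpenCV's bundled Haar cascade (this assumes opencv-python and a webcam; the cat itself would be a prebuilt 2D model anchored to the detected box, which isn't shown here):

    import cv2

    # Frontal-face Haar cascade shipped with OpenCV -- rough bounding boxes only,
    # no photorealistic synthesis anywhere in this pipeline.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # webcam feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # A filter would pin its prebuilt 2D model (the cat) to this box and
            # drive its blendshapes from the estimated head pose/expression.
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("face boxes", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

    That's essentially all the "understanding" a chat filter needs, which is why it runs fine on weak hardware.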

    What you're describing, however, is a completely different class of problem: you want to take a non-photorealistic sketch and, based on that, produce a photorealistic result.
    This is StyleGAN territory, where you suddenly need eight $10,000 specialized GPUs running for several days just to train the model and get a tolerable result.
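
    For contrast, sketch-to-photo tools are built on image-to-image GANs (pix2pix-style conditional GANs, StyleGAN-based pipelines, and so on). The toy sketch below shows roughly what one training step of a pix2pix-style model looks like, assuming PyTorch; the tiny networks and random tensors are stand-ins for a real U-Net/PatchGAN and a real paired (sketch, photo) dataset, and training anything useful takes vastly more capacity, data and GPU time:

    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Toy "generator": maps a 1-channel sketch to a 3-channel image.
    generator = nn.Sequential(
        nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
    ).to(device)

    # Toy "discriminator": judges (sketch, image) pairs patch-wise.
    discriminator = nn.Sequential(
        nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, stride=2, padding=1),
    ).to(device)

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    for step in range(100):
        # Stand-ins for one paired batch: sketches plus their matching photos.
        sketch = torch.rand(4, 1, 64, 64, device=device)
        photo = torch.rand(4, 3, 64, 64, device=device)

        # Discriminator: real (sketch, photo) pairs vs. generated pairs.
        fake = generator(sketch)
        d_real = discriminator(torch.cat([sketch, photo], dim=1))
        d_fake = discriminator(torch.cat([sketch, fake.detach()], dim=1))
        loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad()
        loss_d.backward()
        opt_d.step()

        # Generator: fool the discriminator while staying close to the target photo.
        d_fake = discriminator(torch.cat([sketch, fake], dim=1))
        loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, photo)
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()

    Getting from that toy to output that actually resembles the input sketch is where the multi-GPU, multi-day training budgets come in.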

    https://xkcd.com/1425/
     
    angrypenguin and NotaNaN like this.