
Compare drawings to input image

Discussion in 'Editor & General Support' started by rooskie, Oct 3, 2017.

  1. rooskie


    Joined:
    Jun 3, 2015
    Posts:
    20
    Hello,

    I have a small game idea that I have had for a while and one of the main blockers for me is that I can't figure out how to compare the user drawing to an original image.

    The idea: player draws on a virtual canvas and the game then gives them points based on how close the drawing is to the input image.

    How would I go about comparing the images they have drawn to the input image?

    It's a tricky one, I know ;)
     
  2. neginfinity


    Joined:
    Jan 27, 2013
    Posts:
    13,569
    It depends on how evil you want to be. A decent way to compare the images is to downscale both to something small like 256x256 and then compute distances between individual pixels in color space (src[x,y].rgb - dst[x,y].rgb). It also might make sense to do it in a different color space (HSV, maybe? I've heard Lab color is more suitable, though I've never worked with it).
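    The per-pixel distance idea can be sketched like this (plain Python on nested lists of RGB tuples, just to show the math; in Unity you'd read the pixels from a Texture2D instead, and the downscaling step is assumed to have already happened):

    ```python
    def color_distance(a, b):
        # Euclidean distance between two RGB triples.
        return sum((ca - cb) ** 2 for ca, cb in zip(a, b)) ** 0.5

    def image_distance(src, dst):
        # Average per-pixel color distance over two same-size images
        # (nested lists of (r, g, b) tuples). Lower = more similar.
        total = 0.0
        count = 0
        for row_s, row_d in zip(src, dst):
            for ps, pd in zip(row_s, row_d):
                total += color_distance(ps, pd)
                count += 1
        return total / count
    ```

    For the Lab/HSV variant you'd just convert both pixels before taking the distance; the structure stays the same.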

    There are also things like pHash (http://phash.org/) or PSNR comparisons, but those would probably be overkill and too precise.

    You could also reduce the number of colors in both the src and dst images, split them into a grid of squares, and compute the dominant color in each cell, comparing to the original.
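    A rough sketch of that grid idea (again plain Python, with "colors" as any hashable values after your color reduction; the grid size `n` is a knob you'd tune):

    ```python
    from collections import Counter

    def dominant_color(cell):
        # Most frequent color in a flat list of pixels.
        return Counter(cell).most_common(1)[0][0]

    def grid_score(src, dst, n):
        # Split two same-size images (lists of rows of pixels) into an
        # n x n grid and return the fraction of cells whose dominant
        # color matches between src and dst.
        h, w = len(src), len(src[0])
        matches = 0
        for gy in range(n):
            for gx in range(n):
                y0, y1 = gy * h // n, (gy + 1) * h // n
                x0, x1 = gx * w // n, (gx + 1) * w // n
                cell_s = [p for row in src[y0:y1] for p in row[x0:x1]]
                cell_d = [p for row in dst[y0:y1] for p in row[x0:x1]]
                if dominant_color(cell_s) == dominant_color(cell_d):
                    matches += 1
        return matches / (n * n)
    ```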
     
    Martin_H likes this.
  3. zombiegorilla


    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,052
  4. Billy4184


    Joined:
    Jul 7, 2014
    Posts:
    6,022
    It's not an easy problem at all. I've done some image processing so here's what I think. Generally the most straightforward way to compare image information is by running the image through an edge detection algorithm to get the edge pixels, such as a Sobel filter:



    If the player's drawing will be an outline (not filled) this is probably the best format to compare the data.
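    For reference, the standard 3x3 Sobel filter mentioned above looks something like this (a minimal sketch in plain Python over a 2D list of grayscale values; a real implementation would use a shader or an image library for speed):

    ```python
    def sobel_magnitude(gray):
        # gray: 2D list of grayscale intensities. Returns the gradient
        # magnitude per interior pixel using the 3x3 Sobel kernels;
        # border pixels are left at 0. High values = edge pixels.
        h, w = len(gray), len(gray[0])
        kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient
        ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient
        out = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                gx = sum(kx[j][i] * gray[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                gy = sum(ky[j][i] * gray[y + j - 1][x + i - 1]
                         for j in range(3) for i in range(3))
                out[y][x] = (gx * gx + gy * gy) ** 0.5
        return out
    ```

    Thresholding the result gives you the edge-pixel set for both the input image and the drawing.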

    Assuming that the player won't be tracing directly over the input image, the main problem would be matching the drawing and the image in terms of rotation, scaling and offset.

    One idea would be to take small blocks of the image (e.g. 16x16 or 32x32) and find the best match to sections of the same size on the drawing, since the errors in small local gradients would likely be much less than in the image as a whole. Within these blocks, it would probably be best not only to compare the errors of individual pixels, but also to compare the direction of the pixels, maybe using a best-fit line.

    Once you've found the best match for each block, you can then perhaps average the errors to find how to best scale, rotate and offset the drawing to fit the image. And once you've done that, maybe just score the image based on the distance of each pixel in the drawing to the closest pixel in the filtered image.
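    The final scoring step described above could be sketched like this (Python, brute force over two lists of edge-pixel coordinates, assumed to already be aligned; for real image sizes you'd want a k-d tree or a distance transform instead of the O(n*m) inner loop):

    ```python
    def edge_score(drawing_edges, target_edges):
        # For each edge pixel in the drawing, find the distance to the
        # nearest edge pixel in the target image, then average.
        # Lower score = closer drawing. Pixels are (x, y) tuples.
        if not drawing_edges or not target_edges:
            return float('inf')
        total = 0.0
        for dx, dy in drawing_edges:
            total += min(((dx - tx) ** 2 + (dy - ty) ** 2) ** 0.5
                         for tx, ty in target_edges)
        return total / len(drawing_edges)
    ```

    You'd probably also want the symmetric term (nearest drawing pixel for each target pixel), otherwise a drawing that traces only a small part of the image perfectly would score too well.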

    Performance will definitely be an issue, since generally speaking the more processing you do, the better the result, and there's no end of operations that you can do on an image. But everything very much depends on the quality of the information you provide in the form of the image - if it's a clean, high contrast image it's much easier than something fuzzy and blurry.
     
  5. rooskie


    Joined:
    Jun 3, 2015
    Posts:
    20
    Thank you very much guys!

    Like I thought, this is not a simple task, but all of the replies have valuable information for me to make a decision on the approach.