Can indies use this AI texture upscaling to boost their game content?

Discussion in 'General Discussion' started by Arowx, Mar 13, 2019.

  1. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194


    Upscaled 4K version of Metroid Prime from the Gamecube version.



    The source article also mentions that this technology has been used in DOOM.



    So are any small-team indie developers thinking about, or actually using, AI texture upscaling in their development pipeline?
     
  2. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    If game has strong art, it has strong art. Upres the textures, sure why not?

    If it has weak art, and you upres the textures, it will multiply the weakness of the art.
     
  3. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I'm going to say that soon an AI will be smart enough to be trained on good artists, and it won't just upres, it will also fix your art based on what it was trained on. A primitive version of that is style transfer. There is also semantic inpainting, which takes just a colored silhouette indicating objects and fills in the rest.

    AI will be the next programmer art, but good enough this time.
     
  4. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    right now the main benefit of AI on art I've seen is rapid, high-quantity iteration. So it boosts design. Rather than 5 concept artists making 50 designs, you can spit out 500 designs in less time and without paying so many people. You just need one person with a designer's eye to sift through the computer results and then take it from there.

    So AI cleaning up and sharpening textures, blending colors to a more pleasing degree or whatever, that might be a small boost -- like a nice post-process effect -- but it cannot save weak art or be something to rely upon.
     
    SparrowGS likes this.
  5. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Yeah, but I mean full-blown inpainting, really. You can literally paint a stick figure now and get out a realistic character (a bit fuzzy right now), and you can probably chain multiple specialized (existing) AIs to compose a whole scene with great art. The fact that AI can manipulate visual semantics is the very big game changer!

    The recipe:
    - draw stuff; pose, location, action and semantics get parsed by the AI (see the AI that guesses drawings)
    - a second AI is a composer AI; it corrects location and pose based on composition, and outputs a semantic heatmap
    - a third AI is an inpainting AI that converts it into a full picture
    - a fourth AI does the style transfer and renders the picture in a given style.

    I think it can be done right now as a proof of concept, though not at a good enough level.

    In fact you can probably just input text and get a composition back.
     
  6. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    yeah, but that's totally different from the topic of this thread, isn't it? or is this the evolution it's working towards?
     
    ShilohGames likes this.
  7. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I think they kinda go hand in hand; upscalers are actually inpainters, in that they guess details and make decisions to try to stay consistent. I was simply expanding on the addition you made (on the benefit).
     
    BIGTIMEMASTER likes this.
  8. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    so this kind of technology will reduce labor and multiply design iterations, which will put grunts out of work, but the higher level of design still has to be there.

    So vote for UBI, or be the boss. Or learn to live in the woods. :)
     
    neoshaman likes this.
  9. Arowx

    Arowx

    Joined:
    Nov 12, 2009
    Posts:
    8,194
    Well, if pixel density = hours worked, then reducing the number of pixels you author relative to the number output is a massive potential gain in the art pipeline.

    Now, if we had smart subdivision of model mesh geometry as well as texture resolution upscaling, then even basic low-poly game developers could in theory produce AAA 4K+ games.
     
  10. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    no. that doesn't make sense at all. pixel density does not equal hours worked. You cannot break down a complex job like this the same way you write game logic. It's vastly more complex. A AAA character is not just a low-poly character subdivided.
     
  11. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,141
    Arowx, please talk to some game artists. I'm begging you.

    Hell, please start talking to people who have practical experience in the game production pipeline instead of just looking at the latest tech and scrambling to your keyboard to make a thread about it.
     
  12. kdgalla

    kdgalla

    Joined:
    Mar 15, 2013
    Posts:
    4,635
    I would imagine that an artist capable of having made the textures for Metroid Prime or Doom could just as easily have made them hi-res to begin with, were it not for the technical limitations at the time.
     
    roojerry likes this.
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    @Murgilod Well, BIGTIMEMASTER is an artist, so that's done lol :D

    @Topic
    We have a concrete use case in the form of fan remasters: for example, the FF7 mod had trouble finishing the 500+ screens to uprez due to the amount of work, but they used the upscaler to actually do it. It's a proof of concept in a similar production setup.
    https://kotaku.com/mod-creates-hd-final-fantasy-vii-using-ai-neural-netw-1832173317

    Also, does it make sense for 3D rendering: start with a low-rez render, then NN-uprez it to get faster results?
     
  14. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,261
    The issue is that you have to have existing base artwork to extrapolate from. The technology takes that base artwork, and guesses at the details it needs to add in to make a higher-resolution, up-scaled texture. The problem starts to arise in that it can only guess, and at some points it will guess wrong. As long as the percentage of wrong guesses is low enough, it is still viable, but it can never really be perfect.

    But at the end of the day, none of this is original, nor can it really be re-purposed to create original work. So you still have to have the original work, and that original work has to actually be good/decent art or the resulting higher-res version will simply be a higher-res version of bad art. So you still have to have an artist, and they still have to be good at their job. Also, most modern artists are already working at higher resolutions, thanks to the much more scalable nature of modern tech. So the need for such technology in modern games is actually quite low. It won't magically make your low-res art good, just high-res. Unless you are very specifically focused on making low-res pixel-style art, and kind of want to try a high-res version, this doesn't provide that much utility for modern game developers.

    Obviously, its real advantage is in game preservation, and up-scaling older titles to run on modern systems. For a lot of games that came out in the late 90's to mid 00's, this kind of tech would be fantastic for automatically updating them.
     
  15. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Kind of like Nvidia's DLSS?
     
  16. Kiwasi

    Kiwasi

    Joined:
    Dec 5, 2013
    Posts:
    16,860
    You can bet that if it's useful, the AAAs will probably be using it first. They are the ones with the time and money to dedicate to new experimental tech.

    When you have one artist, saving half an hour of their time results in a minuscule difference to the bottom line. When you employ a hundred artists, saving half an hour for each results in significant cost improvements.
     
    BIGTIMEMASTER and Ryiah like this.
  17. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,149
    Kiwasi likes this.
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    It would really depend; I think it benefits the extremes, not the middle. I'm technically poor: although I have some art education, I can easily see where I benefit from this, because I literally can't pay an artist at all. It won't save me just half an hour.

    I don't agree with that. Experimental tech tends to come from lone wolves doing stuff and then trying to drum up awareness; AAA is all about efficient workflow, and they tend to co-opt tech once it's good enough and a lone wolf has made a proof of concept that works within a pipeline. The lone wolves often being indies.

    Yeah, but not necessarily in real time; if you have fixed or offline renders, that's the low-hanging fruit.

    The main advantage of this tech is that it works through examples: as long as you can provide examples, you can synthesize it. And that's kinda how you work with an artist already; they do stuff and you provide details so they can do a better job.

    Well, if we keep it to upscalers, that's kinda true. However, the same underlying tech can do much more. And most work isn't original either: you can do so many variations of dragons in so many different styles, and they are still dragons.

    /devil's advocate
     
  19. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    I have some assets that would benefit from higher resolution, I wonder how good it would look.
     
    neoshaman likes this.
  20. one_one

    one_one

    Joined:
    May 20, 2013
    Posts:
    621
    Ah, right. Someone probably has done that already/is dabbling with it, but the more popular approach seems to be reducing the quality of raytracing and then using AI denoising. Blender for example has an AI denoiser that AFAIK also utilizes screen depth and normal data, which seems to work quite well.
     
  21. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,141
    Right, but when? When does this tech reach a state where it's useful for production? How many years do we have to wait before this tech can do anything more than do a passable-until-scrutiny upscaling job?
     
  22. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    In 5 years it will be totally impossible to miss; it's already production-ready and used in mobile phones, which is why they have those NN acceleration chips right now, to enhance their cheap camera images. Big companies have more data to train on and more people dedicated to building something that's not an academic proof of concept. The proofs of concept are open source, but the training onus is on you; it's only as good as your data, and big companies have the resources to get just that.

    Also, the field is improving crazy fast compared to any other field. I would expect a total disruption of audiovisual content synthesis. Google has already cracked voice synthesis, so much so that Ubisoft is investigating cinematic voice synthesis.
     
  23. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,141
    No.

    Not when it will work at all.

    When it will be applicable to what we're discussing right now.
     
  24. RichardKain

    RichardKain

    Joined:
    Oct 1, 2012
    Posts:
    1,261
    There's also the fact that this tech is only useful for low-res pixel art. A lot of the graphics and textures I produce are in vectors. I literally have no use for this tech at all. All of my textures can be scaled up as high as I want, anytime I please.

    Again, this isn't actually the kind of tech that can be incorporated into a production pipeline. You can't make art with this tech in mind, you can only apply it to low-res art after-the-fact. It is useful as a tool for post-production only.

    This tech can only evaluate and extrapolate on what is already there, it can't actually generate new details, just guess based on the details that already exist. Great for after-the-fact upscaling, but little else.
     
    BIGTIMEMASTER likes this.
  25. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    this is just a matter of opinion, i suppose, but when they up-res old games that had chunky graphics, to me that makes it look worse. Like, when something is chunky looking, your imagination goes to work. But when you make it all crisp and clean, then it just looks like really lousy wanna-be realism.

    seems like just a gimmick to revive old titles from the grave.

    contrast with the Shadow of the Colossus remake, in which artists manually rebuilt much of the game -- it's a totally different story.
     
  26. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Okay, let's limit it to games (sorry mobile phones) and only uprezzing (sorry style transfer):
    1- Nvidia DLSS: real-time upscaling
    2- remastering pipelines for old games (saves labor cost)
    3- CG render time: use a smaller render target, uprez, and potentially selectively re-render bad tile patches (ie less work, cutting days off the render). I think this can be done up to 8x to 16x the rez; since pixel count is quadratic in resolution, that makes a big ROI.

    All are done right now and used in pipelines.
    When I said "in 5 years", I was talking mostly about the whole NN image-manipulation package, not just uprezzing.
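    To make the quadratic pixel math concrete, here's a trivial sketch (assuming render cost scales roughly linearly with pixel count):

```python
def pixel_ratio(scale):
    """Factor by which the pixel count (and thus rough render cost)
    drops when rendering at 1/scale resolution per axis."""
    return scale * scale

# Halving each axis (e.g. rendering 1080p and upscaling 2x toward 4K)
# touches 4x fewer pixels:
assert pixel_ratio(2) == 4
# The 8x to 16x upscales mentioned above would mean 64x to 256x fewer pixels:
assert pixel_ratio(8) == 64
assert pixel_ratio(16) == 256
```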
     
  27. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    restoration of old games is just one thing though... artists who make backgrounds for movies could probably get some benefit from cleaning up old photography -- matte painters and such.
     
  28. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,141
    Doesn't even work. Awful results, just as bad as playing at a lower resolution.

    Results don't look good enough, and it annihilates art direction. There's a reason we don't use 2xSaI or Eagle for retro games.

    Except, again, the results aren't any good. Algorithms are not a replacement for art direction.
     
    neoshaman likes this.
  29. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Murgilod, are we talking about the same thing? Because it seems like you are being contrarian on purpose, which I'm fine with :D

    But this is no Eagle or 2xSaI, and it's Digital Foundry approved (for 4K DLSS; though not as good as more expensive techniques, it's better than the other cheap ones).

    Anyway, I'm done; the case has been laid down. I don't want to write an essay showing image-by-image comparisons and explaining every minute difference. And no, I never said it's a replacement for art direction, but art direction itself is a very high-level concept, and this is agnostic to that.
     
  30. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Okay, I figure I should at least lay down what's so special about this algorithm and how it works.

    Basically Arowx shared an NN upscaler, notably ESRGAN:
    https://github.com/xinntao/ESRGAN

    By virtue of being machine learning and an NN, it is example-based. That is, it's only as good as the input data. It also depends a bit on fine-tuning the parameters to the target quality.

    The fact that it is example-based is what brings its power, ie it can be art directed using a few samples done by an artist; the network then tries to match that same quality by learning from pairs of low-res and high-res images. Ie it will learn to upscale based on what you feed it.

    ESRGAN is pre-trained on assorted realistic images and, correctly tuned, does a good job upscaling realistic-ish images out of the box. Of course, featureless flat shading will stay featureless flat shading; it doesn't do anything it hasn't seen, which is why I consider it a type of style transfer.

    So to get the best results, you will have to train it with a few pairs of your own hi-res and low-res images, so it picks up what you want. Once this training is done, you can batch the remaining work; even with a few defects, that's still potentially an order of magnitude less work to correct. It's the perfect art intern: not a master, but useful, as you only deal with retouching instead of doing everything from scratch.
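    To illustrate the batch side of that workflow (not ESRGAN itself -- that's the PyTorch model at the link above), here's a minimal sketch where plain nearest-neighbour resampling stands in for the learned upscaler; in a real pipeline the trained network would replace that one function:

```python
def upscale_nearest(img, factor):
    """Stand-in for the learned upscaler: nearest-neighbour resampling
    of a 2D list of pixel values. ESRGAN would instead fill in the
    high-frequency detail it learned from low-res/high-res pairs."""
    h, w = len(img), len(img[0])
    return [[img[y // factor][x // factor]
             for x in range(w * factor)]
            for y in range(h * factor)]

def batch_upscale(textures, factor):
    """Run every texture in the batch through the same upscaler."""
    return {name: upscale_nearest(img, factor) for name, img in textures.items()}

tiles = {"checker": [[0, 255], [255, 0]]}  # a 2x2 toy "texture"
big = batch_upscale(tiles, 4)["checker"]
assert len(big) == 8 and len(big[0]) == 8
assert big[0][0] == 0 and big[0][4] == 255
```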
     
  31. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,149
    I'm not impressed with it. It increases performance for people who want to run 4K, but 1440p with TAA is even faster with quality that is on par with 4K DLSS. We'll have to wait and see how much NVIDIA can improve it over the next few months.

    https://www.techspot.com/article/1801-nvidia-dlss-metro-exodus/ (newest - 4K DLSS vs 1800p)
    https://www.techspot.com/article/1794-nvidia-rtx-dlss-battlefield/
    https://www.techspot.com/article/1712-nvidia-dlss/ (oldest - 4K DLSS vs 1440p TAA)
     
  32. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    Why not just make the textures properly to begin with? You can't 'upscale' something from nothing - and if the tool is creating things, better to use one designed for that purpose to begin with.
     
    zombiegorilla and Murgilod like this.
  33. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    We indies without art talent on the team do not always have good high-res textures to choose from. This 2k texture comes to mind. Will try to upscale it to 4k with that method and see if it gets any better

    Untitled.jpg
     
    Last edited: Mar 15, 2019
  34. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,141
    If you don't have art talent, no amount of upscaling will help you.
     
  35. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    I meant no art talent on the team, we buy all art.
     
  36. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    whats wrong with that texture? looks fine to me. Maybe it's just stretched over too large a unit area if you think it is too blurry.

    i think most of the time what makes good game art is not a matter of resolution but a matter of composition and high-level design. Lighting and composition are where the real magic happens.
     
    neoshaman likes this.
  37. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    It's not too bad, especially inside VR, but it would definitely look better at 4k. Sure, I could tile it at 0.5 instead of 1, but the concrete segments might look too small then
     
  38. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    There's nothing wrong with that texture. If it really looked like crap, no amount of upscaling would help.

    Upscaling might help remove a bit of blurriness going from 2k to 4k. Sure, it's fine for 'fixing' a texture that needs a bit more definition, especially if that texture is mostly characterized by small-scale noise. But that's a solution for a problem that is best avoided to begin with. I don't see the utility in a normal workflow.

    If you're looking for some good textures, have you checked out the Substance store? They have a ton of great materials on there that are mostly procedurally generated, which means you can tweak them and there are no res issues to begin with. GameTextures.com is another good one, though I've found they tend to use a lot of non-procedural textures, which limits control over how they look.
     
  39. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    textures.com is a great one for hi-res freebies. You've got to do the work in photoshop to make them tileable, but that's something you can learn in an afternoon. Takes some time, but it's not rocket science.

    I'm not an enviro guy, but is it typical to use 4k textures on anything other than, like, up-close hero-character cinematics, to show off skin-pore-level detail? I didn't think that high was ever used on environments. I'm asking though, I really have no idea.

    In any case, I'd think the most bang for your buck to improve the visuals in that scene would be more prop clutter and decal stuff: little plants coming up from the cracks, a secondary moss material or something to blend around. Just more stuff. It's very easy to look way too deeply into a single component if it's the only thing there.
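    For the tileable part, the core trick is the wrap-around offset: shift the image by half its size so the old borders (the future tiling seams) meet in the middle, where they're easy to clone-stamp out. A minimal sketch of just the offset step (the retouching is still the manual part):

```python
def wrap_offset(img, dx, dy):
    """Photoshop's Offset filter with 'wrap around': every pixel moves
    by (dx, dy) and whatever falls off one edge re-enters the other."""
    h, w = len(img), len(img[0])
    return [[img[(y - dy) % h][(x - dx) % w] for x in range(w)]
            for y in range(h)]

img = [[1, 2],
       [3, 4]]
# Offsetting by half the size in each axis brings the corners together:
assert wrap_offset(img, 1, 1) == [[4, 3], [2, 1]]
# A zero offset is the identity:
assert wrap_offset(img, 0, 0) == img
```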
     
    neoshaman and Billy4184 like this.
  40. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    I used to try my hand at that, but there's often too much shadow and light involved to get a proper albedo map, and normal maps often come out weird from light angle -> shadow issues. I think there's tools for helping remove that, but I'd rather get a texture that's made from scratch.

    The main reason I like substance designer so much is I have no talent for creating textures in photoshop/gimp. Back around when I started with 3D, they usually ended up looking like paper or some kind of aluminium foil. Now that I understand a bit more about them I could probably go back and make better ones, but Designer really made things easy when I wanted to get stuff done. You can easily open up someone's material and see the components that make up a particular kind of detail (like wood or ice).

    The other thing I like about procedural textures is that they give a small amount of stylization that you can't get from photos.
     
  41. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    No question, substance materials have a wide range of benefits. But... you've got to know how to use Designer. Which is a big time investment.

    I mean, photosourcing textures in photoshop is a bit of an art that's got to be learned and has its limitations, but I figure for the non-artist who just needs a few realistic textures like concrete or noisy stuff like that, it's a pretty viable solution -- besides just buying some nice substance materials from places like you mentioned. That's definitely the best solution short of hiring an enviro artist.
     
  42. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    For me it was very easy, and it's easily the most useful tool I've ever used.
     
  43. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    i bought it a while back but haven't done anything with it. i started getting into characters so just haven't had a need for it. it looks pretty straightforward, but i've seen a lot of people make it sound like learning it is really intensive. I suppose it depends how deep you want to go with it. Probably the real challenge isn't the program itself but rather breaking down the material you want to create into simple patterns so you can build it.
     
    neoshaman and Billy4184 like this.
  44. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    Well the thing I found very useful with it is looking at other people's work (like from substance share) and getting an idea of how they built the material, because it's very hard to reverse engineer a single composed texture. From that I've created a bunch of my own materials that represent the simplest, most distilled set of components necessary to create them, which I can use as a starting point for creating a texture for a specific setting. That to me was worth every penny.

    I've never used much of the maths libraries for creating your own filters and generators, but I can see that it would have a learning curve for sure - maybe that's what people are talking about. I've found that the filters and generators they provide are enough for me to get what I want.
     
    BIGTIMEMASTER likes this.
  45. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    So we're back to off topic? :D Designer reminds me of my time with photoshop filters back in the 2000s. I was trying to convince people this would be the future, because you could use actions to "program" textures and get infinite variation by hijacking the process and shuffling the parameters and layers. Then there was MaPZone. And now everyone discovers Designer as a revelation. TRUTH IS YAGNI ... kinda. Substance just makes a great workflow for stuff you can do elsewhere without it; you can carry the knowledge you have from Substance to any capable photoshop clone, or to a basic understanding of programming visuals. BUT nothing beats a visual editor that updates every step visually; fortunately it created the habit before someone made a powerful script-language version of it. Just yesterday I was showing a young graphist how to do stuff like that in photoshop and went on to showcase Substance Designer, to which she replied, "why pay for that when I can already do it in toshop? that's stupid, I don't need another software".

    You don't need an upscaler for that; you need a detail texture on top as the quickest fix, and that texture probably doesn't even need to be a color texture. Save yourself some memory, turn it into a grayscale and apply a ramp, which will let you get many more variations if you modulate it with a vertex mask ... and you can probably get good results by upscaling the grayscale version the good old way: apply a few sharpening filters and some noise, then save it back. It will take about as much time as the NN technique above, because you still have to fine-tune the NN parameters to get good results. The NN stuff is really only great for automating huge batches of work (like video upscaling); for one or a few textures it's way overkill. That's kinda the kicker: none of that demands any art talent, just an understanding of visual elements. Also, I'm kinda done playing the devil's advocate lol. An upscaler, no matter how good it is, is just an upscaler: it makes things crisper, and any talk about art direction is just pure nonsense and out of scope.
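    The grayscale-plus-ramp idea is essentially a gradient map: a lookup table from 8-bit gray values to colors, so one detail texture can yield many variations just by swapping the ramp. A tiny sketch, with a made-up "rust" ramp purely for illustration:

```python
def apply_ramp(gray, ramp):
    """Map a grayscale image (values 0-255) through a 256-entry color
    ramp, the way a gradient-map adjustment layer works."""
    return [[ramp[v] for v in row] for row in gray]

# Hypothetical 'rust' ramp: dark brown at 0 rising toward pale orange at 255.
rust = [(v // 2 + 60, v // 3 + 30, 20) for v in range(256)]

gray = [[0, 128],
        [255, 64]]
colored = apply_ramp(gray, rust)
assert colored[0][0] == (60, 30, 20)     # black maps to the ramp's dark end
assert colored[1][0] == (187, 115, 20)   # white maps to the ramp's bright end
```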
     
  46. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,141
    This is fundamentally different from photoshop filters.
     
  47. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    yeah agreed. but the premise of the original post was that upscalers were gonna turn S***e graphics into AAA masterpieces. So it was necessary to dispel that idea.
     
    neoshaman likes this.
  48. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Substance is merely a series of node functions (filters) applied in a given sequence to give you an intended result. You can start with anything, but mostly patterns and noise, of which it has a very competent library, plus a very competent noise composer, inherited from MaPZone, that recursively applies distribution rules "fractally".

    This noise composer is the single biggest reason to use it instead of photoshop, and the second is that node-based composition is simply more efficient than constantly shuffling and merging layers to get intermediate results. Also, Substance is natively designed for games, so sugar like normal-map generation and other filters is literally life-saving.

    HOWEVER, the fundamentals are still the same. Give me anything in Substance, and I'll do it in a free photoshop clone, Blender or Blitz3D.

    Substance is a great tool; its workflow saves you time over doing the same in a photoshop-like.
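    As a flavor of what a single such node does, here's a rough sketch of a heightmap-to-normal-map filter using neighbour differences with wrap-around sampling (real tools differ in strength scaling and Y-flip conventions):

```python
import math

def normal_from_height(h, strength=1.0):
    """Derive per-pixel tangent-space normals from a heightmap by taking
    the slope toward each neighbour (wrapping at the edges, since game
    textures tile) and normalizing (-dx, -dy, 1)."""
    rows, cols = len(h), len(h[0])
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            dx = (h[y][(x + 1) % cols] - h[y][(x - 1) % cols]) * strength
            dy = (h[(y + 1) % rows][x] - h[(y - 1) % rows][x]) * strength
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((-dx / length, -dy / length, 1.0 / length))
        out.append(row)
    return out

flat = [[0.0] * 4 for _ in range(4)]
normals = normal_from_height(flat)
# A flat heightmap yields normals pointing straight up everywhere:
assert normals[0][0] == (0.0, 0.0, 1.0)
```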
     
  49. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,141
    This one's on me, I just woke up and thought you were talking about upscaling :v
     
    neoshaman likes this.
  50. AndersMalmgren

    AndersMalmgren

    Joined:
    Aug 31, 2014
    Posts:
    5,358
    It's hard to see in a static pic, so here I show it in a video. Also, throw a 4k prop on it and the difference really pops



    Make sure you select 1440p@60 hz and wait for youtube to buffer it so you get max bitrate.