
Does Unity make game dev too difficult?

Discussion in 'General Discussion' started by GarBenjamin, May 18, 2017.

Thread Status:
Not open for further replies.
  1. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
I'm not buying it; I'll believe it when I see it.

    Do you draw?

    This is a sketch:
[sketch attachment: cat-gesture3.png]
    ^^ Trick question: What is the color of this cat?

    This is a photograph: [photo attachment]

    The difference:

    A photograph provides an excessive amount of detail. There's a wealth of local gradients that can be used to determine the shape of an object.

    A sketch provides just enough hints for a human brain to construct an image of the object in the mind.

    The data is fundamentally different. Photographic data carries real information, and that information could plausibly be processed by a basic neural net.

    Interpreting a sketch requires understanding the symbolic language used by the human mind. That's pretty much full-AI territory.
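
    To make the gradient point concrete, here's a toy numpy sketch (the arrays are invented stand-ins, not any real reconstruction method): a smoothly shaded image carries dense gradient information, while a line sketch carries almost none.

    ```python
    # Toy comparison: shading gradients in a "photo" vs a line "sketch".
    import numpy as np

    photo = np.outer(np.linspace(0, 1, 8), np.ones(8))  # smooth shading ramp
    sketch = np.zeros((8, 8))
    sketch[4, :] = 1.0                                   # a single drawn line

    for name, img in [("photo", photo), ("sketch", sketch)]:
        gy, gx = np.gradient(img)
        nonzero = np.count_nonzero(gy) + np.count_nonzero(gx)
        print(name, "pixels with gradient info:", nonzero)
    # photo: dense signal everywhere; sketch: a handful of edge pixels.
    ```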

    An "AI assisted tool" in ideal case would take an artist's input, recognzie the intent, and then fill in the details. However, this is a completely different and significantly more complex problem compared to a simple depth reconstruction from a photo.
     
    Martin_H likes this.
  2. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    First, some currently available technologies that prove it's entirely possible for an AI to recognize illustrations in non-hyper-realistic styles:
    Of course, these are rudimentary examples, but just consider how video games looked 15 years ago, imagine how much such technology can evolve over the same span in the future, and you'll probably see my point.

    And as to your 'trick question': why can't the artist who drew such a picture paint the cat in color? And even if he or she chooses to draw it in b&w, such software could easily suggest a number of realistic textures based on what cats look like in real life.
     
  3. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    I just found a more relevant example:
    It's quite amusing that it contains the exact example of converting an arbitrary drawing into a cat picture, by the way. (I love cats! :))
     
  4. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,620
    Yeah, but @neginfinity was talking about recognising things compared to photos. Quickdraw specifically recognises sketches by comparing them to other sketches.
     
    Martin_H likes this.
  5. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    You definitely haven't kept tabs on what's happening in the field, and yes, applications are already out. I predict that in 10 years a movie could be created from thin air with a neural network. You won't have to train them yourself; you'll have ready-made "brains" from companies like Google. The new Gears of War experimented with them to make music in the released product.

    They have generative properties because of the implicit concepts in their learning manifold. It's still classification, but the classification can be reversed. Great strides were made using the GAN architecture: basically two neural networks competing with each other (one trying to detect fake images, the other generating fake images) to get good results.
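
    A minimal sketch of that competition, assuming PyTorch and toy 2-D points instead of images (everything here is an invented illustration, not any of the systems in the videos):

    ```python
    # Minimal GAN loop: D learns to flag fakes, G learns to fool D.
    import math
    import torch
    import torch.nn as nn

    latent_dim = 8
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    def real_batch(n=64):
        # "Real" data: points on a circle, standing in for real images.
        theta = torch.rand(n, 1) * 2 * math.pi
        return torch.cat([theta.cos(), theta.sin()], dim=1)

    for step in range(1000):
        real = real_batch()
        fake = G(torch.randn(64, latent_dim))

        # Discriminator step: push real -> 1, fake -> 0.
        opt_d.zero_grad()
        loss_d = (bce(D(real), torch.ones(64, 1))
                  + bce(D(fake.detach()), torch.zeros(64, 1)))
        loss_d.backward()
        opt_d.step()

        # Generator step: make D output 1 on fakes.
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(64, 1))
        loss_g.backward()
        opt_g.step()
    ```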

    And have you heard of Adobe VoCo? That's not even deep learning! And it can be augmented by deep learning too.

    https://www.fxguide.com/featured/what-sorcery-is-this-adobes-remarkable-new-voco-tool/

    @GarBenjamin
    https://github.com/alexjc/neural-doodle
    This was just a proof of concept, and it's old news by now.
     
    GarBenjamin likes this.
  6. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    Yes, I suppose that's why we'll need another 15 years or so to have them as an actual, viable alternative to the toolsets we currently have, as I noted above. (And please see the last link I posted, as it deals with photo-like images rather than drawings.)
     
  7. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
  8. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    It's worth mentioning that all those breakthroughs happened at the beginning of the year, and what used to take 20 minutes to compute is now down to milliseconds. When I said a movie out of thin air in 10 years, I was being extremely pessimistic so as not to perturb people; I think it's closer to 5 years.

    I mean, we can generate text from images and images from text. And we are already beyond the results of the video below:


    The main problem now is long-term dependency; once that is solved, we'd have movie generation. But in fact we can already build a non-toy architecture with simple template-based PCG for the story, then use various neural networks to generate parts that are assembled by ordinary software. Think of generating a skin for a game by having the individual parts generated.
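
    A toy version of that "template plus generated parts" pipeline in Python (the template and slot values are invented; in a real system each slot could be filled by a neural generator instead of a word list):

    ```python
    # Template-based PCG: ordinary code does the assembly,
    # while each slot could come from a separate generative model.
    import random

    TEMPLATE = "The {hero} must cross the {place} to recover the {item}."
    SLOTS = {
        "hero":  ["exiled knight", "salvage pilot", "street mage"],
        "place": ["glass desert", "sunken city", "signal wastes"],
        "item":  ["seed vault", "broken beacon", "first map"],
    }

    def generate_story() -> str:
        # Swap random.choice for a neural generator per slot and the
        # surrounding pipeline stays exactly the same.
        return TEMPLATE.format(**{k: random.choice(v) for k, v in SLOTS.items()})

    print(generate_story())
    ```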

    Btw, here's the video about the cat translation, lol.



    Remember that the quality goes up every day, and once investor budgets get behind it, the quality will shoot up even more.
     
    ToshoDaimos likes this.
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    -_-

    That's disappointing, and it is not what I was talking about at all.

    In your examples, the neural network is used as a classifier. It recognizes a shape and inserts a known symbol in its place. This is shape recognition based on a previously known set of "symbols".

    And here it is only used to draw the ONE object type it was trained for. So, nope. It ain't it.

    Those rudimentary examples are not even dealing with the problem I described.

    No, I won't imagine. When people "imagine", they get distracted and start inventing all kinds of wonderful things straight out of fantasy land, and the result is never close to reality. We already have a certain someone who does that a lot: "But what if Unity had quintuple floating point precision? Think of the possibilities!"

    The idea of a trick question was to demonstrate what kind of work your brain does behind the scenes based on limited data. I think this cat is black. What about you?

    By the way, the reason it isn't colored is that it was drawn in 15 seconds.

    Nope. This thing can only draw landscapes.
     
    Last edited: Jun 7, 2017
  10. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    neoshaman, your examples do not deal with the problem of transforming any sketch into a 3D model. They're narrowly defined problems where a neural network is used as a classifier. They do not deal with the problem I described. Depth recognition uses the overload of data provided by a photo, as I described in my first post with the cat sketch. And while the text copy-pasting example looks like a fun toy, it is, once again, a classification problem of determining word boundaries.

    This is not what I was talking about, and this is why I said "I'll believe it when I see it".
     
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    You don't get it: all these examples use the same underlying technique. The same thing can translate between sketches, text, video, sound and volumes; that nobody has tested your preferred pair yet is just a matter of time. Saying it's classification doesn't change much, because translation is classification: you translate one form into the same class in another domain. It's not incompatible, so that's not an argument.

    last video lol


    I was participating in the broader debate introduced by @GarBenjamin and mysticfall about how automation will change things anyway.

    Just watch all the videos.
     
    mysticfall likes this.
  12. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    @neginfinity Well, if you don't see the possibilities and potential in those examples, but instead choose to use them as 'proof' that such technologies will never cut it, then I don't think there's any point in continuing this discussion.

    Just like our previous discussion about the usefulness of OOP, I think it's just that we have vastly different experiences and opinions regarding such subjects.

    Before I go, I'll just try to remind you that 15 years in this field is such a long time that a game like Wolfenstein 3D (1992) could evolve into something like Call of Duty 4 (2007).
     
  13. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    @mysticfall @neoshaman :
    The problem with all the examples provided is that they do not really deal with the problem I described.

    No, I actually do get it. None of the examples use an approach suitable for turning any sketch into a 3d model. None.

    The cat/shoe/landscape thing does not draw cats, shoes and landscapes. Instead it operates in the fashion "when the input doodle color matches X, or when the input doodle pixel is at this distance from the edge, output pixel value Y". That's all it can do. There's no magical thinking going on; it is a fuzzy associative database at best.

    Same thing with speech recognition: "when the waveform is of shape Y, it usually corresponds to this word/letter".

    And this is the only thing a neural net can do.
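
    To make that "fuzzy associative database" reading concrete, here's a minimal nearest-prototype sketch in numpy (the vectors and labels are invented): the closest stored pattern wins, and the same input always maps to the same symbol.

    ```python
    # Nearest-prototype lookup: classify by distance to stored examples.
    import numpy as np

    prototypes = {                      # feature vector -> symbol (made up)
        "cat":  np.array([0.9, 0.1, 0.3]),
        "shoe": np.array([0.2, 0.8, 0.5]),
    }

    def classify(x: np.ndarray) -> str:
        # Deterministic: identical input always yields the identical label.
        return min(prototypes, key=lambda k: np.linalg.norm(prototypes[k] - x))

    print(classify(np.array([0.85, 0.15, 0.3])))  # -> "cat"
    ```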

    And that's why I don't want to "imagine possibilities". Because the approach does not scale up all the way to proper intelligence.

    The examples do not deal with the problem described. If you're trying to "extrapolate" them by "imagining the possibilities", well:
    https://xkcd.com/605/
     
  14. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    They do much more than that. And even if it were just that, taking a fuzzy image and correctly matching a realistic symbol to it is quite a hard task. But actually those symbols aren't premade; they are hallucinated by the generator, i.e. it doesn't take them from a dictionary and blend them into images.
     
  15. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    That's the requirement for it, and we are not saying it's magical.
     
  16. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    And neither did Wolfenstein 3D deal with the problem of creating realistic-looking soldiers or weapons in 1992. But did that prove 3D technology is no good for such tasks?

    I'm pretty sure there were people back then who claimed that 3D modelling technology would never replace traditional 2D sprite-based games, because all it could produce were ugly-looking blocks that couldn't compare to carefully hand-drawn pixel art.

    Well, how many do you think still held that belief 15 years later?
     
  17. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    That's not relevant.

    -_-

    You ARE aware of how a neural net operates, right? That's exactly what it does.

    No.

    Because when provided the same input, it'll always hallucinate the same output. For AI-assisted sketching you need to be able to adjust the visual result through an iterative process. And that's where a neural net alone is not sufficient.

    It is part of the problem, yes, but alone it is not sufficient. Those examples represent maybe 10% of the solution, not all of it.

    People, when they start "imagining the possibilities", extrapolate the problem to the point where it does something it is actually unable to do. Like that xkcd example. They see a neural net and imagine sentient machines, while a neural net alone is not enough to produce one.

    Another problem is that there are people who are into Deep Learning hype and imagine that the tech will solve everything in the world. I hope you aren't one of those.

    If anything, I found the provided examples deeply disappointing. I thought the difference between "I can draw shoes based on a contour" and "I can make a 3D representation of any object based on a doodle" was quite obvious. Guess not.

    Anyway, I have stuff to do.
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Anyway, I looked at whether there was research PRECISELY on sketch-to-3D without an intermediary model (though I would say turning random, unprocessed real-life images into a 3D view is already extremely impressive). There is not a lot, but there is this experiment, which isn't doing enough anyway, lol:

    http://i.cs.hku.hk/~xghan/papers/deepske2face.pdf
     
    mysticfall likes this.
  19. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    And I wasn't even posting for a debate. lol I'm just saying there have to be better ways, at least for producing the content I need. The sketch-based modeling, rigging and animating looks promising.

    At the very least, I am quite sure there is a better (more efficient, more natural) way, even if it hasn't been done yet.
     
  20. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    Yeah, I have mine, too. I need to work on my project, which contains over a thousand interfaces, which you were so certain have absolutely no use for anything.

    I guess having a bit of open-mindedness can go a long way.
     
    Last edited: Jun 7, 2017
  21. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    It still isn't it. This thing can operate on human faces only, and most likely it uses the same base mesh for every face it produces.

    Switching to "subtle jabs" means running out of arguments.The way I see it, I see an obvious elephant-sized problem, and I tried to explain what it is. If it didn't work, well... I walk away.
     
  22. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I know perfectly well how it works. It depends at which granularity you want to cut the definition: it operates on concepts so small that I don't think it should be called a dictionary, and it also demonstrates an understanding of context, which is beyond a simple dictionary. We can talk technicalities of the latent space all day long; I don't think it's important.

    In the end a sketch can be reduced to a set of symbols, and its translation to 3D is just the same set of symbols in another domain.

    What's the problem with that? I don't get it; all tools function this way. We aren't arguing creativity, just mere possibility.

    I don't see how it is incompatible. We are not saying the deep neural net itself is the entirety of the solution, but it's one heck of a contribution. Fuzzy translation: there is no other tool that does it like that. If you watched all the videos, you'd see they show tools that encourage iteration and do just that; the one titled "AI Makes Stunning Photos From Your Drawings" shows it in its first 10 seconds. I don't get your critique, since the videos already demonstrate that NOW, no extrapolation.

    I mean, it already does 3D:


    But we aren't talking extrapolation; we are talking tangible results with released source code you can toy around with. There isn't much of a leap to make. It's not about faith anymore; it already works. Not in the exact domain you want, but in an actually much more complex domain.

    It doesn't solve all the world's problems, but it does revolutionize image processing in a very visible and tangible way. And not just that: self-driving cars are a big deal.
     
    mysticfall likes this.
  23. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I said it wasn't complete, but that's because of the scope of their research. I don't see what's special about faces that other models don't have; for the network, faces are as arbitrary as anything else.

    It does demonstrate that a network can match a sketch to a given volume, and we have proof it can match arbitrary images to volumes too, and complete the volume. Can't it do sketch-to-volume arbitrarily as well? I mean, that's barely extrapolating.

    I'm out of the discussion. It's not ready yet, because it's not built for production YET, but the proof is in the pudding IMHO.

    Also, the code is available; why not train your own using their data? I will eventually get into this, but I need to finish a game first :D
     
    mysticfall likes this.
  24. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    The problem is that for an art tool (the one I described) you'll need the ability to recognize the exact same input as a completely different thing. So, you need the system to talk back and forth with the artist, recognize intent, store it, and produce results based on it. And that is a full-blown AI solution that at least has a concept of memory and the ability to communicate. "No, I don't like this cat, I want it to be bigger. Of a different color. With feathers. Now we add horns and wings to that." That's how it should work. An iterative process: create, receive feedback, adjust, repeat. Understand the scene: which part corresponds to what, which part holds which function, what the received feedback refers to. Basically, in the end you'll need a tool that understands what a cat is, and doesn't blindly map input values to outputs.
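
    A rough Python sketch of what such a stateful, iterative session could look like; every name here is hypothetical, invented purely to illustrate "memory plus feedback", not any existing tool:

    ```python
    # Hypothetical edit session: state persists between feedback steps,
    # so "bigger, different color, now add wings" modifies the SAME scene
    # instead of re-mapping input to output from scratch.
    from dataclasses import dataclass, field

    @dataclass
    class SceneObject:
        kind: str
        scale: float = 1.0
        color: str = "gray"
        parts: list = field(default_factory=list)

    class Session:
        def __init__(self):
            self.scene = {}                      # memory across feedback turns

        def create(self, name: str, kind: str):
            self.scene[name] = SceneObject(kind)

        def feedback(self, name: str, add_part=None, **changes):
            obj = self.scene[name]               # feedback refers to stored state
            if add_part:
                obj.parts.append(add_part)
            for attr, value in changes.items():
                setattr(obj, attr, value)

    s = Session()
    s.create("pet", "cat")
    s.feedback("pet", scale=2.0, color="black")  # "bigger, of a different color"
    s.feedback("pet", add_part="wings")          # "now add wings to that"
    ```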

    A neural net, instead, provides a one-to-one mapping. Same input, same output. It is an image filter that is as dumb as a brick.

    So you get "cat filter", "shoe filter", "face filter", "bag filter", "skin pores filter", ""it kinda-sorta looks like a painting" filter". It is a highly specialized solution, but in the end it is just a filter. And I think this kind of application is not good enough.

    The whole deep learning thing, in the end, is about the ability to have a bigger filter.

    The videos are quite depressing because it is the same thing every time: "Hey, we found a new thing to plug our filter into!" So? It is the same old principle. Where's the rest of the puzzle?

    The deep learning hype is basically a lot of people getting excited about their new Photoshop filter, letting their imagination run wild and riding off into the sunset, thinking that one day their filter will gain sentience and start talking to them. And that is depressing.

    The examples you provided are obvious applications of a neural net, and I don't really see much reason to get excited over them; there aren't "many possibilities" to think over. Where's the rest of the puzzle? Where's the next step beyond the neural net? That's what I'd like to see. Not this "hey, we've plugged our filter into one more thing". Yeah, you did; so what? I'd prefer to see better results than this.

    I hope this is clear enough.
     
    Last edited: Jun 7, 2017
    ToshoDaimos and zyzyx like this.
  25. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    I get what you mean, but isn't a lot of this just a question of applying machine learning to more abstract concepts, such as style information rather than the specific properties of an object?

    But I think the biggest problem by far with any tool like this would be the ability to communicate with the user in a very efficient, contextual way, as you described. Most things I've seen are either 1-to-1 mappings, or the user struggling to communicate some obvious aspect without having to upload a crapton of contextual information, or having to prevent the tool from destroying information that had already been provided.

    So yeah, I look at all these things very skeptically. If it's a struggle to even make a verbal programming tool, where the environment is highly standardised and restricted, what hope is there of communicating layers of subtle artistic information that nobody has really even come up with a generic way of describing?

    In the end, it's not hard to get something out of a tool driven by machine learning; the problem is when that something is not quite right and you need to tell the computer to fix it.
     
    neginfinity likes this.
  26. sngdan

    sngdan

    Joined:
    Feb 7, 2014
    Posts:
    1,154
    If your hobby is game making, why not focus on publishing a game now, while accepting the work that comes with it?

    Part of this is getting to know the tools required for it, but it should be kept in proportion. I can look for 4 weeks for the ideal map-creation tool (or any other tool in the process), or I can just use the next-best one. In the end, if the scope is relatively small, chances are the research time doesn't pay back, and one is better off just getting the map done.

    On that note, @neginfinity: your progress on the Doom game is great. Your realism is killing the dreams here :) Haha, well, you are possibly the only one currently doing 3D models, instead of hoping they magically appear...
     
    GarBenjamin likes this.
  27. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I'm shaking my head in disbelief; it seems YOU are the one caught up in the hype, lol. I'm excited because it makes task X easier and more efficient. You are depressed because it doesn't do hyped AI tasks x, y, z, where you talk to the computer as an equal artist... And it's me with the imagination and extrapolation? :eek:

    BRO! I think we are not discussing the same thing.
     
  28. zenGarden

    zenGarden

    Joined:
    Mar 30, 2013
    Posts:
    4,538
    You'll need to make some choices about software, otherwise you'll never complete anything.
    This is not your priority, so indeed you can test as much software as you want.
     
    Last edited: Jun 7, 2017
    GarBenjamin likes this.
  29. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Absolutely, and I can't tell you how happy that makes me... no deadlines. I can spend 10 years just testing software if I like, and still the bills are paid and life goes on as usual. lol

    It won't be anywhere near that long, though, before I complete another game. I just want to get rid of the BS aspects of game dev as much as I can before returning to it. That is the main point. And given how long it all takes (particularly content creation), spending even 6 months researching the best tools is a drop in the bucket. And damn well worth it if it reduces the time required by even 10% from that point forward. I think there is potential for a 20 to 30% increase in productivity (hell, with the right tools maybe even 50% or more).
     
    Deleted User likes this.
  30. Jingle-Fett

    Jingle-Fett

    Joined:
    Oct 18, 2009
    Posts:
    614
    To all the people talking about the 3d stuff...

    Do you even Zbrush bro?

    Like seriously, here's what Zbrush was doing back in 2009. Please watch the whole video. And yes, it is indeed as intuitive and fun as it looks.


    This feature is 8 years old. Get with the times. Buy a Wacom tablet. Photogrammetry or machine learning or whatever is the least of your problems.

    I swear this thread is starting to sound like the programmer equivalent of artists asking for the "Make MMO Button".
     
  31. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I think you confuse having decent assets with having excellent assets with vision. The make-me-a-character button already exists with Fuse and co. You won't get a deeply customized character, but for most purposes it does the job, and it takes only minimal work for an artist to bring them from decent to awesome. I would say art is very susceptible to automation right now; only productions with the biggest production values will have dedicated, good artists.
     
    zenGarden likes this.
  32. chingwa

    chingwa

    Joined:
    Dec 4, 2009
    Posts:
    3,790
    The revolution will start the day machines force artists onto the breadlines!
     
  33. Deleted User

    Deleted User

    Guest

    Mixamo is on the way out, let it gooo let it goooo..!!
     
    neoshaman likes this.
  34. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    What will happen is that we will just move to the next tier of handcraft, like with the introduction of the computer: people who don't want to adapt will either be converted by necessity, retire, become the boss of a team of hip young people who use the techniques, or move to other domains. :p

    It's just that the way we do it now will feel dated, new expectations will appear, and as the market saturates we will reach a new equilibrium with new high standards. People say today: holy S***, you can't make The Witcher 3 alone, it took 300 people five years! Tomorrow it will be: HOLY S***, you can't make Grand Theft Spaceship alone! It has 3,000 distinct civilizations and took a team of 1,000 five years! :cool:

    The future will be the golden age of art direction too (as in high-level art decisions), whatever that means... :D
     
  35. Deleted User

    Deleted User

    Guest

    The issue is that's not now. Y'know what's really changed in games development in the last decade and a half? Not a lot. Hardware has been responsible for the most effective changes, and primarily that's affected scale and graphics. Luckily, people cottoned on that pre-made engines could be profitable, so I suppose that's one of the major pros of the last decade or so.

    I learned a long time ago not to wait for new engine features or the latest tech. In terms of making games, rolling up your sleeves and cracking on is still the most efficient way to get it done. Until the neural make-my-game button appears (some day)..
     
  36. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I don't believe in the make-a-game button either, if that's what you think I'm promoting, and I also think you underestimate the impact of tools on top of hardware advancement. We now make in 5 minutes things that used to take months. And big companies currently DO integrate the latest techniques; they just move to the next tier of complexity at the same time (hence the joke about Grand Theft Spaceship). I mean, Wildlands was made using procedural generation by a team of 14 at Ubisoft, motion matching was used to make animation transitions faster in For Honor, photogrammetry has compressed dev time on Battlefield, PBR made lighting rigs easier to build across the board, etc. And Unity allows solo people to dream of making Skyrim! Many people have made visually decent games without being artists, too; the web series RWBY started on Poser! I would say that's already the reality.

    Of course we should not wait for features and should make our games now; that doesn't mean it won't impact game making tomorrow, and I mean literally tomorrow. Many of the tools I linked here have source code available and you can use them, and some people are using them to make games (though it's too fresh for releases to show as proof, and the early games will be experimental indie games with weird artsy styles).

    Tool progress (and know-how) can be seen on lagging hardware: games on the DS are on another level compared to games made on the PSX or the N64, despite the hardware being in the same ballpark in power (in fact you could punch above the DS's weight, since it is capped at 2048 polys in hardware).
     
    GarBenjamin likes this.
  37. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    Unity wasn't even around 15 years ago, and Unreal was only a 3-year-old technology back then. So how can we say that things haven't changed much in the game development field during that period?

    I actually agree that it'd be a mistake to wait for a new technology to solve all our problems, instead of trying to make the best of what we currently have.

    But it'd be equally mistaken for someone to claim that the technology we are accustomed to today won't radically change, or even be abandoned, in the next 15 years, because there cannot be any better way than the one we already know.

    From what I've seen, 15 years is a very long time in any field that has anything to do with IT, and machine learning is one of the most rapidly advancing areas among them.

    So, if we have already started seeing rudimentary prototypes of such technology solving our problems with 3D modelling, I believe it'd be only natural to assume we can expect actual, practical applications of it in the next 15 years or so.
     
    Last edited: Jun 8, 2017
    neoshaman likes this.
  38. GarBenjamin

    GarBenjamin

    Joined:
    Dec 26, 2013
    Posts:
    7,441
    Yeah, for sure; from what I have read, the AAA companies are constantly making use of new tools, pouring money into R&D to build new tools, and so forth. And actually, your writing the above reminded me: when I was checking out alternative modeling methods last night, specifically that sketch-based modeling, I came across a video by a AAA game company. It showed the artists working on the game graphics, and they were using sketch-based modeling & animating. Just drawing lines and so forth.

    I don't know which company or which game, because it wasn't of any interest to me (it wasn't what I was looking for), but I do remember thinking: aha, naturally those folks would be using "better" methods. lol Of course, theirs is probably some very expensive solution that isn't very well known, or else a tool they built in house. Anyway, I remember always seeing those behind-the-scenes articles in mags and videos over the decades, showing a glimpse into what those huge AAA companies were doing. Always making powerful, unique tools. I imagine it is kind of like the military and consumers, in a way. What the AAAs are using today might flow down to us mere mortals in 5 to 10 years and become commonplace.
     
    neoshaman likes this.
  39. Deleted User

    Deleted User

    Guest

    *** ShadowK checks back over the post to see where it said "Unity was made 15 years ago"; turns out he NEVER said that..! :p Unreal Engine 1 came out in 1998, and even back then, when dinosaurs ruled the earth, we still had frameworks and the likes of Blitz3D ;).. We just used clubs to program games as opposed to keyboards.

    Sure, I'll wait 15 years for all this make-me-a-game AI.. Or I could do something now. Not sure what this has got to do with Unity making game dev too difficult, or with the struggles of developers right this second, but sure.. Please continue.
     
  40. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    Am I talking to a wall or something?

    The process I described is how it SHOULD work. It doesn't work this way right now. It will get to this point eventually, in 50 or a hundred years. The reason the videos don't work on me is that they all demonstrate the same thing, so I don't see a reason to get impressed or excited: there's nothing new in them.

    Sculpting works, but it is still slower than 2D sketching. You still need 1-2 hours per model minimum, and then you go through the retopo process. A basic 2D sketch can be done in under a minute. So there's a lot of room for optimization.
     
    Murgilod likes this.
  41. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    Yeah, I know, but I thought it might be a bit unfair to say that "things haven't really changed" in the field, when during that period Unity appeared, matured, and now allows so many indie developers to populate Steam with their games ;)

    And likewise, I don't intend to idly wait another 15 years for such a technology to do all the artwork for me, and neither do I believe it will make game artists redundant when it becomes a reality.

    I just think it'll become another 'keyboard' for artists to create their artwork with in the future. And I'm quite content playing with 'clubs' like Blender or Substance today, to create what I might need for my game right now.

    I guess in these sorts of matters, it's best if we try to be both realistic and open-minded at the same time.
     
  42. Deleted User

    Deleted User

    Guest

    It's not unfair. Whether I make the engine (which I used to) or Unity makes the engine, it's still the same thing in regards to development methodology.. Still based on the same or similar core SDKs.

    I understand, and I'm thankful that Unity allows us to compete with games of scale and saves us a whole boatload of time to remain competitive, but it isn't a necessity.. If I were making a small game (like the Doom clones) I'd probably use Eclipse and LWJGL, or Qt + SDL, etc. to reduce overhead.. I also don't use them as a measuring stick either..

    Open-minded, sure; I (and a lot of devs) use tech that's probably years away from being implemented into the core of Unity.. Gazing at the future, well, it's cool and exciting, but for practical matters not very helpful.
     
  43. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    But haven't we been talking about how a technology like machine learning can make things easier for everyone, rather than defining the theoretical minimum requirements for a seasoned developer to build a game?

    I'm pretty certain that if building a video game required creating one's own engine, as you said you did, or relying on low-level abstraction layers like LWJGL or SDL, many of those who have successfully published their games with Unity today would have given up, or at the very least failed to produce games of such quality.

    And the point of the discussion about machine learning was not whether it would be impossible for a professional artist to create quality models without it, but whether such technology will become a reality in the near future, and whether it can really generate 3D models of viable quality from arbitrary photographs.
     
  44. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    @neoshaman: One more thing: those kinds of videos are often distractions and do not correspond to a finished product. For example, when was Unity Timeline announced? Where is it now? The same thing applies to many of those technologies, except the timescale is longer.

    "oooh, look at the cool thing I made that worked once!" - "Can I use it in my project?" "Ha, of course not!"

    This video, for example, is actually cool and depicts something I'd love to play with:


    However, it is not available as something you can use in your project, and nobody tried to turn it into a finished product.
    The same thing applies to many experimental techs and applications.

    And that's ultimately why I take the "I'll believe it when I see it" approach. If someone thinks that X is possible, X still doesn't exist. If someone made a cool demo featuring X, X still doesn't exist, because you can't grab it and use it. However, when they release source code, a sample product, an application, or even just an overpriced commercial version, then we're talking. Only at this point can people finally grab X, test it in their project, and find out in how many of the "infinitely possible" scenarios X doesn't work (probably most of them).

    Sounds reasonable to me.

    ----------

    On a somewhat related note, if someone knows a way to make a 3D model at the speed of a 2D sketch, I'm all ears. The speed of a 2D sketch means 30-60 seconds to finish the model.
     
    GarBenjamin likes this.
  45. Deleted User

    Deleted User

    Guest

    In short, the message was: crack on, I'm looking forward to your product ;)..
     
    neginfinity likes this.
  46. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    The problem is that by the time it has turned into a commercial product, it will already be 10 years since it was announced as viable technology; it will have been implemented in 200 AAA games, and it basically won't be incredibly interesting anymore, because some much cooler technology will just have been announced that makes it look like 'last-gen' technology.

    I think that if we indies want to play the tech game, we have to make it ourselves, for a specific narrow purpose with specific strengths and limitations, and ship a game with it 10 years before all of the corner cases have been worked out to the extent that it can be sold as a commercial product.
     
    Deleted User likes this.
  47. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    Well, in this case there's no point in getting excited over cool demonstrations and "possibilities", because "indies" aren't going to get access to the tech involved anyway.

    "Playing tech game" would mean that instead of making games, you're going to spend next 5 years experimenting and polishing that one technology. It is a good idea to decide if this is what you want.
     
  48. mysticfall

    mysticfall

    Joined:
    Aug 9, 2016
    Posts:
    649
    Thanks! But just don't hold your breath, as I'm intentionally trying to do it differently. There's a reason why I'm planning to finish it in the next 10 years or so :p

    On a side note, I'm having a lot of fun so far, and I think I've already made decent progress, considering I've mostly been limited to working on weekends.

    Last week I managed to whip up a custom JSON-based localisation API, and the week before, a Swing-like wrapper around the legacy IMGUI API. Everything is more or less at prototype status so far, but considering it's just a 2-month-old project, I'm pretty sure it will get somewhere in the next 10 years.

    I just hope there won't be a flood of open source developers with more time and skill than me making my work redundant by then :)
     
    Last edited: Jun 8, 2017
    GarBenjamin and Deleted User like this.
  49. Billy4184

    Billy4184

    Joined:
    Jul 7, 2014
    Posts:
    6,013
    Yeah, I was pretty much agreeing with you. Although I don't think it takes 5 years to make a game using aspects of a new technology, precisely because it doesn't have to be incredibly polished, user-friendly, or handle anything except the situation it's developed for.

    I actually like the idea of a game that makes a step 'into the future' even if that step is a small and shaky one. I think it's one of the most interesting things about game development actually, both as a developer and a gamer.
     
  50. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    You must be kidding, right? :confused: Most videos have the source code linked in the description, there are tons of that stuff out there, AND I HAVE SAID it's already in use in production :eek: There are many successful products out already (Google Translate is one, Siri and co. are others, self-driving cars are out there fighting regulation, a kid even started his own self-driving-car company, a 12-year-old girl made one to improve cancer detection, etc.). The tech is super available and there are more and more resources for using it, most of them free, though coder-oriented.

    I shared those videos as shorthand, but these things are finding applications less than 3 months after their papers come out. In the spoiler section there is a video showing one of these apps used for leisure.

    Follow this industry insider (he used to make games): https://twitter.com/alexjc. You will sometimes see indies using this too, when he shares it.

    Now you are just in denial. I'm out :rolleyes:

    But that's too much hype! I was talking about easing costly processes; I don't know what you imagine, but I guess two discussion threads crossed and merged anyway, and we were talking about easing stuff. We are definitely talking about different things, hence the wall. I'm talking about tools that simplify tasks; you are talking about strong AI.




    However, as an exercise, I want to try to break down your vision!

    Now there are some technicalities:
    1- What is the interface implied here? Is dragging a handle communication, or only voice commands? Is making a "fuzzy" sketch part of the communication (like we do when communicating a design to a team)? Because I could argue that's what you already do in Photoshop, through its interface. Also, if we want to be accurate, the same input gives the same output RELATIVE to the random initialization; different initialization, different output.

    Now, if we are talking verbal communication, I have to introduce deeper concepts like latent space. Latent space is where ALL the long-term AI hype really is. It encodes deep relations between concepts, and I mean abstract concepts: subject-verb relationships, nouns and adjectives, etc., not just for textual representations but also graphical, motion and others. I.e. it can generalize properties to domains it has never seen, like applying "big" to a semantic it was never trained with. The thing with latent space is that we have found we can do simple operations on the activations and get interesting results, like semantic arithmetic:


    This has led to the discovery of latent vectors, i.e. directions in the multidimensional space that encode one type of semantic relation:


    When you train a network from text to image, or from any domain to any other domain, the network creates vectors that match the corresponding relationships between them, i.e. a big cat in text leads to a big cat in an image. But also concepts like necks, fur and tails, generalized across many objects. We can find and exploit these vectors to transform images, like adding a smile to a neutral face, or making someone older or younger. I.e. applying attributes, which are abstract concepts beyond the symbols themselves; it means the network semantically understands the symbol and applies the appropriate attribute.
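
    A toy sketch of that latent-vector arithmetic in numpy (the classic word-embedding example; the 3-D vectors are invented placeholders, real latent spaces have hundreds of dimensions):

    ```python
    # "king - man + woman" lands nearest to "queen": the gender offset is a
    # reusable direction, the same way a "smile vector" can be added to a
    # face embedding.
    import numpy as np

    vec = {
        "king":  np.array([0.8, 0.6, 0.1]),
        "man":   np.array([0.7, 0.1, 0.1]),
        "woman": np.array([0.6, 0.1, 0.9]),
        "queen": np.array([0.7, 0.6, 0.9]),
    }

    def nearest(v: np.ndarray) -> str:
        return min(vec, key=lambda k: np.linalg.norm(vec[k] - v))

    print(nearest(vec["king"] - vec["man"] + vec["woman"]))  # -> "queen"
    ```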


    And that's basically how a neural network can describe what happens in an image, which is rather high-level classification: it does not just recognize symbols, it also recognizes actions and their combinations with symbols it has never seen, even for:


    It's templated description for now, so it just fills in the blanks correctly, which is minimal reverse parsing (or so I think? It may not actually be templated if it's an RNN, MY BAD, which is even better).

    But one property of deep neural networks that was discovered is that they are generally reversible, so you can pick a description and make the network generate things it has never seen:



    So this is fresh and has not been perfected yet, but basically you have a network able to understand a full sentence and accordingly generate an image that relates to the sentence input. Some examples have entire sets of actions and objects generated; it does fail on edge cases, but it's surprisingly strong.

    But anyway, that leads to: yep, it can do that, literally NOW, no cheating. Of course you will have to build a new network (not fed random images) to make it stronger and production-ready (like the tool that generates examples through sketches), but that's not hard or impossible.

    And I haven't even talked about neural Turing machines!
     