
What generative AI tools are you currently using in your workflow?

Discussion in 'General Discussion' started by zombiegorilla, Mar 29, 2023.

  1. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
We have been dipping our toes in it practically since day one. We have lots of small supporting UI images (like pictures of beaches or various biomes) that we thought would be a nice place to try it. It has been generally useful and time-saving. Mostly from MJ with some cleanup.

Also we have a lot of character images, and our AD found some phone app that does very consistent "toon" versions of photos. With some tweaking he got it to match our style pretty much on the money, with minimal paint-over corrections. So now we take pics of friends and family and run them through this process; it's much quicker and very consistent, and fun to see your friends in the game. There is other stuff we are doing, but mostly experimenting at this point.

Have you started using any image or code related AI content tools in current projects? Which ones? How is it working for you?
     
  2. MadeFromPolygons

    MadeFromPolygons

    Joined:
    Oct 5, 2013
    Posts:
    3,980
    Midjourney for concepting (the one thing my team lacks currently is a proper concept artist)
    ChatGPT to help with some text sometimes
    Doing some AI material generation using substance packages sometimes

Really what we are waiting for is something like an AI UV'ing package - when that sort of stuff lands, it's going to be hard for the team members who are currently not as keen to try AI to dispute its usefulness.

I mean, everyone that has done UVs has thought at one point or another "god I wish there was a high quality one-button operation for this that didn't create garbage UV islands and terrible texel density"

I'm certain someone is already doing this!
     
    Sluggy, GDevTeam, Ryiah and 1 other person like this.
  3. Andy-Touch

    Andy-Touch

    A Moon Shaped Bool Unity Legend

    Joined:
    May 5, 2014
    Posts:
    1,483
Midjourney for super quick concept/mood boards. Also use its img2img for taking concept art and spitting out variations of it for tangent ideas.

ChatGPT for rubber-ducking code problems and converting knowledge from one API to another (i.e. "In Unity I would do X; how do you do that in Godot?")

Started to look into Scenario for scalable asset variance, but haven't integrated it into anything yet.

    I see AI as just another productivity/utility tool to add to the toolbox of workflows. :) ChatGPT alone has cut down a ton of googling time and has solved some questions in seconds as opposed to an hour digging through forum threads from years ago.
     
  4. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
I found it is a good idea to feed a potential forum post to ChatGPT and look for the problematic statements I have a tendency to produce. Because ChatGPT is instructed to be as inoffensive as possible, it detects a lot of stuff that can accidentally make people explode. However, because it is also instructed to be subservient, it is not a good idea to ask it to rephrase those, as the results sound incredibly similar to its "I'm afraid I'm a language model, Dave" spiel.

ChatGPT also works as an "oracle AI" where you can discuss new concepts with it, like a talking Wikipedia. For example, try discussing Hamilton's quaternion equation with it. It does make errors occasionally, so you have to watch out for those and can't trust it blindly.

I also tried to use it for translation and review of writing material, but here it is not very useful. It tries to be as encouraging as possible and sugar-coats everything, and for a serious review or translation the attention window is too tiny to be useful. I did translate several short stories with it, but then I started to doubt its efficiency.

For code, it can discuss topics related to the Blender API and APIs in general. However, it has a small attention window, so you can't review large code fragments with it, and it frequently invents non-existent functions, which is annoying. Also, the information is outdated.

As for Stable Diffusion and image generators, at the moment I'd probably be able to generate backgrounds for a small visual novel without much trouble. I'd be wary of using it to generate any hero pieces, however. There was a Steam game that heavily relies on AI art mentioned in another thread, and the characters there looked VERY out of place. The main issues: the skin is too perfect, faces gravitate towards a specific archetype, etc. It looks really odd, and does not look good. I also saw some book artists trying to use Stable Diffusion for character illustration, and those do not look good either, mostly because of the same "similar faces" problem.

You can use it for landscapes, monster designs, some prototyping, etc. For example, you'd easily be able to produce an entire cast of opponents for a souls-like with it. The default checkpoint for Stable Diffusion 1.5 is quite good for this, especially if you use several "artist" labels. It works better for semi-abstract illustrations than for ones that need to maintain the coherence of an object or character. The default checkpoint can also be used as an idea generator, to an extent. Specialized checkpoints that chase realism are less useful for this purpose.

    That's the rough idea of it.
     
    SassyPantsy and Ryiah like this.
  5. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,086
    and classes, sometimes even imaginary frameworks.

    "hallucinations" ... is a euphemism for bullshit, in these cases.

    ----------

I'm using you.com, as it provides enough context/links to rapidly sort the wheat from the AI chaff.
     
  6. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,068
I'm using Midjourney to prototype a roguelite card game. V5 is decent for backgrounds; as long as the prompt is detailed enough you can get decently consistent results, and there are not a lot of requirements for a card game background besides being visually appealing. The resolution is not quite Full HD, but it's close enough. And their niji (anime-trained) model is good for generating card art as well.

I also use ChatGPT-4 (the paid version) to write said card game. I asked it to replicate Slay the Spire, and it promptly handed me the basic structure for such a game, so in 3 evenings I had the basic gameplay in place. Though memory limits are still a big blocker. I asked it to write the game with DOTween, which reduces the amount of card tweening code by at least a third if not half, and which, surprisingly, it could do. I learned quite a few new things I didn't know about DOTween in the process.

And once I had the right feel, I asked it to refactor the code to focus on readability, and it did that too in a minute or two, with rather incredible results. At this point it's quicker to ask ChatGPT to write code for me than to do it manually. I only dig into the trenches when memory limits are reached and it loses context, although even then I can feed context back in with individual scripts and a detailed description of what I need.

Without ChatGPT it would probably have taken me a couple of weeks to replicate this manually, due to obscure errors or weirdness with Screen Space vs Overlay UnityUI that I didn't know how to solve, but which ChatGPT resolved as soon as I described the problem, explaining the cause as well. Slay the Spire is UGUI based, so I wanted to do the same.

All in all, ChatGPT resolves my analysis paralysis by handing me a good base I can jump-start from for pretty much anything; it knows Unity better than any single person on the planet and writes code quicker as well. ChatGPT 3.5 wasn't quite there yet, but ChatGPT-4 is a true game changer, and seeing the speed of progress, these tools will only get better. I wouldn't be surprised to see programmers lose jobs to this tech in the future.
     
    Last edited: Mar 29, 2023
    GDevTeam, SassyPantsy and Ryiah like this.
  7. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,144
    Bing Chat and nothing else, basically. It's exceptionally useful for things like breaking down in plain language how certain things work or could be implemented and can actually generate reasonably accurate code snippets, especially compared to ChatGPT's current free offerings. The real reason I use it, however, is because it also cites sources so I can see if the reason something doesn't work is on me or if it's just suffered the standard issue of AI being confidently wrong.
     
    Ryiah, Voronoi and Unifikation like this.
  8. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    587
    Bing Chat has replaced Google for all of my code questions. It gives footnotes so I can check if I'm suspicious of the answer and look at the source. Honestly though, I prefer Bing's very clear summary and explanation more than reading the original sources. Rather than read the original, I ask follow-up questions and it references new sources to expand on the initial search, refining the answer and explaining the concept from a slightly different point of view. I could never do that after reading a blog post, and so I think it is saving me hours of time.

In particular, Cinema4D has atrocious Python documentation. It's all there, but written densely by programmers with little explanation. Bing knows everything in the documentation, but is also able to reference and explain the concepts behind the answer it comes up with. I feel like it's reaching an Executive Assistant level of helpfulness, with the added benefit of seemingly knowing all things.

Bing's also great at taking copy/pasted code and fixing it for me. Sometimes it even asks me to paste the code in so it can answer better.

    I'm using Stable Diffusion as a photo-reference generator for painting and drawing ideas. I've actually lost interest in making digital paintings after seeing what AI can do. I don't want to spend my time touching up an AI render when 90% of the piece was generated. That makes me feel like I'm working for the robot and not the other way around.

    Instead, I'm reverting to my pre-digital state and making only original art with traditional media. I like using AI to come up with ideas and use it like an infinite photo-reference maker, but so far not for the final art.

Currently not making a game, but I could see using it for backgrounds, concept art and maybe some props for a 2D game. It seems like a multiplier for something like that: rather than assigning all the grunt work to a team, I could let AI provide it and focus on just the key pieces and the things that interest me.
     
    Last edited: Mar 30, 2023
    GDevTeam, Ryiah and ippdev like this.
  9. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
Ditto. I have done more digital art in the last 6 months than in probably the previous couple of years (non-work related). I very much use AI (mostly MJ) daily for giggles, for reference, and for wallpapers.
     
    MadeFromPolygons likes this.
  10. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
OK. I forgive you for nuking the AI stuff. I am building a Unity version of a BERT-styled NLP/Natural Human Interface that is the core of a suite of tools for looking, seeing, hearing, decision crafting, command and control. I am building all these tools from the ground up in .NET and Unity.. no Python. I will be doing the first round of training over the next week. I used Wiktionary for lexical and syntactic correlations and parts-of-sentence tagging, have all words vectorized, and am setting up the sentence offset routines this morning for capturing n-gram sequence structures and weighing them according to usage. When done, one should be able to use the core dataset and add special domain knowledge.. our first use case is a bomb and corpse dog simulator for police training. So we "decorate" the main Wiktionary corpus with lots of texts about dog training, and the inference engine should be able to converse and answer questions about the knowledge domain you primed the pump with.
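The n-gram capture-and-weight step described above can be illustrated with a toy sketch (Python here purely for brevity, though the actual build is .NET/Unity; every name and the tiny corpus are made up):

```python
from collections import Counter

def capture_ngrams(tokens, n):
    """Slide an offset window over a token sequence and collect n-grams."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def weigh_by_usage(sentences, n=2):
    """Count how often each n-gram occurs across the corpus; count = usage weight."""
    counts = Counter()
    for sentence in sentences:
        counts.update(capture_ngrams(sentence.lower().split(), n))
    return counts

# Tiny "domain decoration" corpus, in the spirit of the dog-training texts
corpus = [
    "the dog finds the scent",
    "the dog follows the scent trail",
]
weights = weigh_by_usage(corpus)
# bigrams shared by both sentences end up with the highest weight
```

A real system would of course store these counts in an indexed, searchable form rather than an in-memory dict, but the weighting idea is the same.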
     
    neoshaman and Ryiah like this.
  11. SunnySunshine

    SunnySunshine

    Joined:
    May 18, 2009
    Posts:
    976
    ChatGPT. It's fantastic for inspiration and coming up with solutions to technical challenges. Instead of investing energy into mentally exhausting details, I can concentrate on the broader design aspects. This makes the process a lot more enjoyable. It's as if I have a team of highly skilled, efficient programmers by my side, offering solutions to various problems, and then I can choose the best one. All that for just $20. It's insane.

It's understandable why individuals express concern regarding AI. Unless you're managing your own venture or hold a higher-ranking role in an organization, there's a significant possibility that the demand for your services may diminish when there are powerful tools like this, which will only continue to grow in sophistication and capability.

    Wild times!
     
  12. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
I've got in the habit of just leaving ChatGPT open. I am just using the free version. What are the benefits of the subscription? (That's just a general question.)

I use it for formatting or text cleanup probably more than anything. I had a bunch of hard-to-read text describing beats to go along with a level blockout, along with cutscene dialog, and when I wrote it I didn't want to slow down to think about formatting or anything, so it's just jumbled text. I didn't even write character names.
I dumped it into ChatGPT and told it "format for screenwriting", and it figured out character names, differentiated between actions, scene transitions... everything. Just perfect.

That's thirty minutes of tedious work turned into 2 seconds of waiting.
     
    neoshaman, DragonCoder and Ryiah like this.
  13. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,157
    ChatGPT. If I made a list of everything that it's doing for me now it would be a very long list. Google has become my way of searching for content that I already know exists when I don't remember the URL (API docs), but I've almost entirely stopped using it beyond that. A basic search engine just isn't good enough now.

Like @Andy-Touch I'm constantly rubber-ducking with it, as well as translating APIs (e.g. the old input system to the new input system), but beyond that I treat it like an intern programmer. If I have a task that is simple but tedious, I'll just pass the task to it while I do the complex tasks.

    I'm constantly discovering new things that it can do too. Yesterday I discovered I can ask it for a diff showing the changes that it made to a script and it color coded the additions (though it failed to do it for the subtracted lines) without having to be told.

ChatGPT's subscription unlocks a much faster (about double) version of GPT-3.5, as well as access to the more accurate GPT-4.
     
    Last edited: Mar 31, 2023
  14. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,068
The big one is access to GPT-4, which is significantly better at pretty much anything, especially generating perfectly usable code for Unity. It has more memory, so it retains context for longer and can process larger input data. Its multilingual capabilities have been significantly boosted; I hear of people generating usable translations for games and other media. And there are anecdotal reports of GPT-4 producing 40% more factual results than GPT-3.5; I definitely see it "hallucinating" a lot less than 3.5 used to. It's basically better in every single metric but generation speed right now.

GPT-4 is currently limited to 25 messages every 3 hours due to high demand. For most of my inquiries I find that enough, but I've run out on occasion. It's also a bit unclear how the 25 messages are counted, since I've been able to stretch it for longer within the same thread. The caps could also go even lower if demand keeps rising; it was 100 messages every 4 hours at the beginning. The paid version also has the faster 3.5 version of ChatGPT, as Ryiah mentioned; it lets you continue the thread with 3.5 if you max out the GPT-4 cap, but the drop in quality of the results is immediately clear.
     
    Ryiah and BIGTIMEMASTER like this.
  15. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
AI has greatly appeased some of my worries over certain tasks. I have been using it daily, but I haven't forgone using search engines. Knowing how and why the tech works (overfitting and over-generalization), I would never trust the output of such tech for factual information; even when it looks good, I will always double-check.

Case in point: I asked it to list all the gameplay innovations in each entry of the Ace Attorney series. It did well for all of them except the latest (the one with Herlock Sholmes and the crossover with Layton), where it gave the right name ("dance of deduction") but completely failed to describe it (observing people's manner, versus the right answer: correcting Herlock when he "hallucinates" facts. Irony!), and for the Layton crossover it straight-up made up a new mechanic, "puzzle rebuttal", over multi-witness testimony. Those are cases of over-generalization: it memorized (overfitting) and understood the logic of innovations in the series and extrapolated new ideas that are plausible (over-generalization).

    For me that's perfect!

My use case is all about creativity; that ability to over-generalize is exactly why I use it! I can have it speculate on ideas and interpolate them ad infinitum. Even when it gets the wrong answer, that's enough to put me halfway through a solution I might have choked on! Same for facts, actually: I don't trust it, but it's a faster starting point than blind googling, especially when I have little knowledge of a domain. I probe it for semantics and keywords but do not rely on it for straight information.

One way I work on projects is that I front-load the most difficult tasks, what people usually call the last 95%. I do small prototypes to test the feasibility of concepts, which I accumulate, but I prioritize the ones that are hard and tough. Which means the early part of a project is traversing a desert. The other issue is analysis paralysis on parts I don't want to give up on, but which are too vague and therefore have too many options.

Now I just beam-search my mind with ChatGPT. I had issues with deadlocks on story; now I know I can finish any story in a finite amount of time, so I de-prioritized them. Same for game design: it won't do my job, and it will propose basic stuff, but it's still faster to rely on its proposals and have it generate hundreds of variants until I can decide. So anything that's pure progression and not tactile design, that's solved. Also, people don't know this, but you can use the DaVinci GPT-3 model just fine in the OpenAI playground, and it's not censored; I tested by having it write explicit erotica ... :oops:

Same for images: it seems like most people just want the AI to do the work directly. It's best to use it as reference or as a source to recompose. The tech has evolved a lot, and it's becoming easier to get specificity by the day. Tools like ComfyUI and ControlNet really make your life super easy and allow you to automate a lot.

If you know Python, you can probably have a completely automated pipeline that runs locally 24/7. Same with the advent of LLaMA: so-called "cognitive architectures" (possible with the ChatGPT or GPT-3 API) allow you to completely abstract away a lot of tasks. I forbid myself from using them for now, until I've finished some of the tasks I prioritized.

One way image generation is appeasing me is hair rendering. Because HOLY FRAKING HECK! It has no inhibition, in the right way! Nothing it generates could be in the source data verbatim; it pulled many influences to create the result, but as someone who has looked long and hard on the internet, I can assure you there is no reference for the type of rendering it does, and it does it flawlessly! It has an issue with generating black women, though: it's hard to get dark skin, and it defaults to portraits too heavily when the prompt is given.

The GOOD: it's bringing innovation in hair rendering. Some of it I couldn't have thought of on my own after studying the issue for 5 years, and it brings sophistication! What's great is that I can beam-search across styles, like anime, retro 3D, etc., and it will compose plausible results that are spot on with the tech and its limitations, down to plausible artifacts, 2 times out of 5. It's like falling into a dimension where black hair was never passed over! And it also explores the wildest hairstyles possible!

I use ChatGPT for math. It's infamous for being bad at it, but that's because people don't know how to prompt it to unlock the proper performance; I noticed the latest updates corrected that, probably by adjusting the hidden "system" prompt. It's more math reasoning and bouncing ideas around than pure math, though. A lot of the hard stuff I have to do is all about the math; it's not solving it for me most of the time, but I can explore dead ends in hours instead of months. I'm probably about to finish my biggest challenge thanks to it.

So in the end, I use it as a way to appease my mind. Knowing that certain tasks can be taken care of without uncertainty allows me to focus on the hard work, instead of shifting to another task every time I meet a roadblock. I have been holding back on using them fully, but I have tested them enough to plan for a "cognitive architecture", aka a multi-step automated pipeline, which would be like having a full studio of juniors that don't sleep.

Some people think having an LLM run at less than interactive frame rate is useless; they haven't made a cost-benefit analysis or understood the value of async work. They can do much more than chat, and you don't have to sit in front of them for them to be useful. For example, have text summarization (I already use a text summarization service to get the gist of long PDFs) and topic sorting run in the background, only for you to check in the morning. Heck, even image generation of arbitrary quality every 5 minutes (running Stable Diffusion on an old CPU) still beats the speed of a human artist (one full character inked line art every hour).
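The overnight-batch idea can be sketched as a minimal worker loop. The `summarize` stub below is a placeholder for whatever model or API you would actually call (it just keeps the first sentence, so the sketch stays runnable):

```python
import queue
import threading

def summarize(text):
    """Stand-in for a real summarization call (an LLM API or a local model);
    here it just keeps the first sentence so the example runs anywhere."""
    return text.split(". ")[0] + "."

def background_summarizer(jobs, results):
    """Drain the job queue in the background; results are ready when you check in."""
    while True:
        doc = jobs.get()
        if doc is None:          # sentinel: stop the worker
            break
        results.append(summarize(doc))

jobs, results = queue.Queue(), []
worker = threading.Thread(target=background_summarizer, args=(jobs, results))
worker.start()
jobs.put("Long report. Many details follow.")
jobs.put(None)
worker.join()
# results now holds the summaries; nobody had to wait at the screen
```

Swap the stub for a slow local model and the same structure works: the point is that throughput, not latency, is what matters for async work.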

I'm not using GPT-4; I can't pay for it. I have tested Bing, but that was inconclusive; it's smarter, probably, but I don't need it. I'm probably going to try and experiment with LLaMA and Alpaca once I have made significant progress on my project.
     
    Last edited: Apr 3, 2023
  16. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
I have been building an inference engine in Unity using a dataset I constructed of 333k Wiktionary-based words and parts of sentence: topical, lexical, vector cross-ref indexed, binary searchable, 50-language word translations with recursive statistical weighting, exclusionary bidirectional weighting, and a few other tricks with some fancy-assed cryptic names the high priests of Python use. I prune the results at each step if they don't reach a minimum threshold. It is capable of extracting the correct question from a few thousand even if the prompt is flawed. The time elapsed was .01 to .04 seconds to extract and compare a few thousand tokens in the string domain for a match. This will be much quicker when I convert the brain functions to ints for extraction. I've got a full Wolfram hookup and can look up any darned fact or dataset I can think of, or calculate most any mathematics, algebra, geometry and get graphs, maps with routes, etc. Not a line of Python so far. I appreciate all the links, though others may lack the imagination or the need for these toolsets.. but hey.. Luddites.. the future is now, and it ain't exactly a reiteration of a reiteration. AI is the game now. Ignore it at your financial and professional peril.
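The prune-below-threshold question matching described above can be loosely illustrated with a naive token-overlap score (the real engine's weighting is far more involved, and every name and number here is invented for the sketch):

```python
def score(prompt_tokens, candidate):
    """Fraction of the candidate's tokens that also appear in the prompt."""
    cand = candidate.lower().split()
    return sum(t in prompt_tokens for t in cand) / len(cand)

def extract_question(prompt, candidates, threshold=0.5):
    """Score every stored question, prune those under the threshold,
    and return the best survivor; tolerates a flawed/scrambled prompt."""
    tokens = set(prompt.lower().split())
    survivors = [(score(tokens, c), c) for c in candidates]
    survivors = [(s, c) for s, c in survivors if s >= threshold]
    return max(survivors)[1] if survivors else None

candidates = [
    "how do i train the dog",
    "what does the dog eat",
    "where is the nearest park",
]
# flawed prompt: typo and scrambled word order
best = extract_question("trane the dog how do i", candidates)
```

A production version would binary-search an indexed vocabulary instead of scanning linearly, but the prune-then-pick-best shape is the same.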
     
  17. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,770
I am using Stable Diffusion to keep my team motivated (we're working on a Sci-Fi RTS) when wishing them a Happy Easter :D
And wishing all the best to you too.

    upload_2023-4-8_14-47-15.png

Also finding ChatGPT useful for various simpler tasks.
It happens to find answers quicker and more concisely than a search engine.

I'm also finding ChatGPT useful for prototyping and building up simple scripts for which I've forgotten the exact syntax. It saves me tons of time searching through docs and the net.

However, it has not yet replaced a good old manual Git search for suitable solutions, for me.
     
  18. kdgalla

    kdgalla

    Joined:
    Mar 15, 2013
    Posts:
    4,635
Started playing around with Ai.Fy from the Asset Store. It could just be that I haven't learned how to prompt effectively, but I'm not finding it very useful so far. I just have so little control over the results that I can't really get exactly what I want.

The text-to-image is kind of random.

The iterative feedback generation tends to misinterpret what the initial image is supposed to be, so it usually nonsensifies the image rather than enhancing it.

I've had better luck with textures, but the generated textures don't really look any better than some generic proc-gen Perlin noise that you could make in Substance Designer in 2 seconds. The displacement maps tend to be just black, or not really useful.

Once again, I'm open to the idea that I may simply not have learned how to use the software properly, so I'm not ready to say it's a bad product necessarily, but I'm not impressed with the results I'm getting. I think I might be able to use it for upscaling or normal map generation, though.

    YMMV

    https://assetstore.unity.com/packages/tools/ai/ai-fy-text-to-image-238967

Edit: Oh, I should mention that the product is heavily reliant on Unity's Barracuda package for generation, so that's where most of the limitation actually comes from. I would say the asset itself mostly exists to create an easy-to-use workflow for Barracuda, and in that regard it is successful.
     
  19. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    We're going a bit offtopic here, but...

The reason why I'm giving this info is that it's easy/simple stuff which can improve the results you get by a lot, and the information took a bit of time to assemble. I think I spent a month on Stable Diffusion when it was released.

The problem with Stable Diffusion is that, by default, the results for naked prompts are not going to look good.

So the idea here is to:
* Either experiment, or go to a prompt collection site like Krea, find whatever prompt you like, and slap it everywhere.
* Or go to civitai and find a checkpoint which produces decent results by default with a simple query like "a cat".

Personally I don't like prompt engineering and think that it is largely a waste of time; I'm not going to spend an eternity constructing negative prompts with 50 keywords in them, all with custom weights, just to ensure that everything is "PERFECT", like some people do. I also have a favorite recipe of my own which I use a lot.

So, basically, I'm suggesting a simple way to greatly improve results with something like 3-4 extra words. It goes like this:
* Specifying a medium or type of drawing often has a massive impact: "pencil sketch", "oil painting", "illustration", "watercolors". You can even do "origami", though this one can fail on some subjects. You can find examples on "prompt builder" sites.
* Specifying an artist also has a massive impact. If you're using automatic1111, you can find a list of examples in the "artists to study" extension. Even if you do not want to invoke the name of Rutkowski in vain, you can do stuff like "by Unreal Engine", which will usually produce a somewhat cheap-looking 3D render.
* And failing that, "intricate, highly detailed", maybe coupled with "cinematic", will boost your image a lot.

Past that, you can just set the sampler to DPM++ 2M Karras, the number of steps to 50, and the CFG scale to 7, and there you go: decent results for nearly anything. And plenty of ways to tweak it.
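As a throwaway illustration of stacking those 3-4 extra words onto a naked prompt (the helper and its keyword choices are one convention of my own, not part of any SD tool):

```python
def boost_prompt(subject, medium=None, artist=None, detail=False):
    """Build up a naked prompt with a medium, an artist tag, and detail boosters.
    The keywords and ordering are just a convention; SD doesn't mandate any."""
    parts = [subject]
    if medium:
        parts.append(medium)                      # e.g. "oil painting"
    if artist:
        parts.append(f"by {artist}")              # e.g. "by Unreal Engine"
    if detail:
        parts.append("intricate, highly detailed, cinematic")
    return ", ".join(parts)

prompt = boost_prompt("a cat", medium="watercolors", detail=True)
# -> "a cat, watercolors, intricate, highly detailed, cinematic"
```

Feed the result to whatever frontend you use (automatic1111, ComfyUI, etc.) alongside the sampler/steps/CFG settings above.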
     
    Daydreamer66 and Ryiah like this.
  20. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,770
Sure, I agree.

However, for my use case it has highly diminished returns for any additional iteration, unless I'm doing it for my own exploration and experimentation.

What matters is the kind of message the generated content should carry, rather than the artistic look, which adds little to the message itself.

However, I am considering whether to use Stable Diffusion to train SD models on the product content our team is producing, which may potentially allow us to generate more variations of generative artistic content focused on our game.

This may be a potentially interesting use case for various games and maybe marketing, or at least for concept artists. I wonder if anyone does that already, training models on specific game titles.
     
  21. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
Well, civitai has an "Elden Ring" checkpoint. It looks like this.
    upload_2023-4-9_4-44-46.png

    So, yes, this should be perfectly doable.

Also, it should no longer require insane hardware. LoRA is said to be fast enough. There's also textual inversion, which can store data in a tiny file, but the results are worse and harder to control.
     
    Antypodish likes this.
  22. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    The difference here is that the discussion is about what people are doing.

    Other threads are speculation, which naturally devolve into useless arguments over nothing. I'm keen to pick up on new techniques people are using with new tools to make games.
     
  23. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
By the way, speaking of generative tools: I managed to run a local chat model named Vicuna. The GPU version, to be more specific "vicuna-13b-GPTQ-4bit-128g".

    The model is 7 gigabytes and the speed is comparable to Stable Diffusion render. This is one of the results:
    upload_2023-4-11_11-58-4.png

One of the articles claimed that it gets to 90% of what GPT-4 can do; this could be true.

    Things I tried:

* Explain code to me (depicted in the above comment): Seems to be working, though I'm unsure if it is extracting the data from the code, the function names, or just making correct guesses.
* Summarize a huge chatlog: Gets the gist of the events; occasionally goes bonkers and tries to write a continuation of the story, or makes up a different story inspired by the original.
* Translate non-English text to English: Clearly gets the gist of the events, but many things are not correctly translated; context recognition is inferior.
* Write a song or poetry: Has difficulty making text rhyme.
* Explain the Blender bmesh Python API and mesh data structures: Falls flat on its face; I think the Blender mesh API is simply not in the training data.
* Pretend to be a character (EldritchGPT): Works well; the quality of ominous riddles hinting at forbidden prophecies is roughly the same.
* Speak a non-English language: This one is actually almost better than FreeGPT. It produced a lot more text and did it faster than FreeGPT; there was one English word mixed in, and minor errors, but I was quite surprised.

    I think this is heading somewhere interesting. I tried something similar with one of the conversational models in KoboldAI in the past; this result is much better. It is not at ChatGPT level yet, but for stuff I can run locally at decent speed, this is very good. (With my connection, Vicuna running on GPU operates at the same speed as FreeGPT or faster.)
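    For anyone wanting to drive a local Vicuna checkpoint from their own tooling, one practical detail is that these models are sensitive to the conversation template they were fine-tuned on. A minimal sketch of a prompt builder for the Vicuna-style "USER:/ASSISTANT:" format follows; the exact system preamble varies between Vicuna versions, so treat the one below as an assumption and check your model card.

```python
# Sketch of a prompt builder for Vicuna-style chat models.
# The v1.1 "USER:/ASSISTANT:" format is assumed here; verify the exact
# template against the model card for your specific checkpoint.

SYSTEM_PREAMBLE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers."
)

def build_vicuna_prompt(history, user_message, system=SYSTEM_PREAMBLE):
    """history is a list of (user_turn, assistant_turn) pairs."""
    parts = [system]
    for user_turn, assistant_turn in history:
        parts.append(f"USER: {user_turn}")
        parts.append(f"ASSISTANT: {assistant_turn}")
    parts.append(f"USER: {user_message}")
    parts.append("ASSISTANT:")  # the model completes from here
    return "\n".join(parts)
```

    Whatever frontend you use (text-generation-webui, llama.cpp, etc.) usually applies this template for you; it only matters when you call the model directly.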
     
    MediaGiant likes this.
  24. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,770
    @neginfinity you are entering off-topic territory again. This thread is not about what similar tools can or cannot do, but about how you use them in projects. Please do not keep derailing threads.
     
    MadeFromPolygons and DragonCoder like this.
  25. AdrielCodeops

    AdrielCodeops

    Joined:
    Jul 25, 2017
    Posts:
    54
    I don't want to go off-topic or divert the thread, but when we talk about using generative AIs, we are largely referring to AIs in the cloud, as they are the only ones powerful enough to reach interesting results. However, these require specific input that is often our own code or protected corporate code. Is no one questioning to whom we are giving access to our personal/corporate work? The same goes for images when using img2img or upscalers.

    I think especially in this case, it makes a lot of sense to show options that run locally as neginfinity has done. (But I still understand it could lead to derailing the thread).
     
    ippdev and neginfinity like this.
  26. Tom_Veg

    Tom_Veg

    Joined:
    Sep 1, 2016
    Posts:
    619
    I'm still waiting for a "one click" solution to create perfect retopology of my 3D characters and UV unwrap them, but I still have to do that manually like a caveman.
    So what's all this "AI" talk about?
     
    dogzerx2 likes this.
  27. NeonDagger

    NeonDagger

    Joined:
    Jul 20, 2016
    Posts:
    6
    Pretty much in the same boat as many here. It seems most are using Midjourney & ChatGPT ad hoc.

    What I'm waiting for is an AI assistant that can sort of "live" in my IDE and have a pretty much full grasp of my entire project, so that when I make a reference to a class it can make meaningful inferences about what that class is trying to do.

    As it stands, I have to go to ChatGPT and kind of abstract the problem for it, and hope it spits out something relevant that is easy enough to integrate with my actual code.

    So for me, even though I'm super impressed with the AI tools (it's still very early days), I recognize that what I'm getting out of them right now is quite limited and not accelerating my output all that much, at least on the coding side of things.
     
    DragonCoder likes this.
  28. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    What I'm thinking is that at some point someone will take one of the current "tiny models", add a plugin API similar to what GPT-4 plugins are said to be able to do, and then the ball toward understanding an entire project can start rolling. I'm also looking forward to someone making an addon with the Unity docs for those.

    However, one major problem is the tiny number of tokens those models support. For example, I tried to get both offline and online models to summarize the Unity EULA, and it is 25 thousand tokens, and that's without the appendix. They can't summarize it. A huge project will be of comparable size or even bigger.
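    One common workaround for documents that exceed the context window is map-reduce summarization: split the text into chunks that fit, summarize each chunk, then summarize the summaries. A rough sketch, with the actual model call stubbed out (`summarize_with_llm` below is a placeholder for whatever local or hosted model you use, and chunking by characters rather than tokens is a simplification):

```python
# Map-reduce summarization sketch for documents larger than a model's
# context window. `summarize_with_llm` is a stand-in for a real model
# call; a production version would chunk by token count, not characters.

def chunk_text(text, max_chars=8000):
    """Naive chunker; a real one would respect paragraph boundaries."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_long_text(text, summarize_with_llm, max_chars=8000):
    chunks = chunk_text(text, max_chars)
    if len(chunks) == 1:
        return summarize_with_llm(text)
    # Map: summarize each chunk independently.
    partials = [summarize_with_llm(c) for c in chunks]
    combined = "\n".join(partials)
    # Reduce: recurse until the combined summaries fit in one window.
    return summarize_long_text(combined, summarize_with_llm, max_chars)
```

    The tradeoff is that each reduce step loses detail, so a 25-thousand-token EULA comes back as a summary of summaries rather than a faithful condensation.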

    I'm going to keep the "tiny models" as part of my toolset, though. At the very least, a chat with EldritchGPT can lift my mood a bit.
     
  29. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
    I don't want to go off-topic either, but restricting the discussion to "only cloud-based Python solutions that can only be useful as imported content [if at all]" is highly limiting. I bought the Asset Store SD generator. Lots of fun, but absolutely nothing useful. I tried altering pelt maps for canines, and the shticks it would come up with were hilarious... but not useful in production at all. I am more interested in how these things work and in building my own implementation in Unity for the editor and runtime, and these toolsets should be portable with a small footprint. So what may seem off-topic to those who focus strictly on games has been useful to me: I have personally leveraged info from links posted in apparently off-topic discussions into generative tools in Unity. This thread is a throwaway by the mod who shut all the other AI threads down, telling us to go on Discord. I don't use Discord, and the general AI stuff out there is python this, python that, ChatGPT wants to kill the human race. Nothing at all to do with what interests me in regards to Unity-based tools and what devs think about this and that and how they use it. And apparently many others feel the same, judging by how rapidly the thread counts rise.

    To wit: interior decorating with cloud-based solutions is not game development. Making generative tools specifically for realtime 3D interactive applications needing no network connection is the kind of cutting-edge stuff game devs should be interested in, IMHO. If you cannot discuss the mechanics, algorithms, and interactive methodologies without getting a thread lock, on a forum where this is what people do for a living and as a hobby, that seems facetious to me.

    Perhaps the mod can post a link to a Discord that specifically covers prompt engineering, algorithms in Unity C#, and their use in portable interactive 3D applications and simulations within a Unity context. There are apparently a gazillion of them. I may even find somebody who knows how to make a set of hypervectors mutually orthogonal, as hypervector graphs can apparently super-compress the encoding and decoding processes of generative AIs and can be narrowly focused in their training on specific knowledge domains. The advantage of hypervector hypergraph AIs is that they are deterministic, which essentially means you can control them so they do not hallucinate via opaque processes in hidden layers. This, IMHO, is the approach to getting these tools into a format that is useful to game and realtime interactive 3D devs.
     
    neginfinity likes this.
  30. TheNullReference

    TheNullReference

    Joined:
    Nov 30, 2018
    Posts:
    268
    I use Midjourney to mock up art style and feel. [Images omitted; captions below.]

    - Rough indication of how I'd like the jungle level to look.
    - Creating armor sets.
    - World map view.
    - Loading screens.
    - UI.
    - Icons.
    - Textures.

    Most are just for reference, but I will definitely be using images for icons and title screens directly from Midjourney.
    Midjourney doesn't have an API yet, but I imagine it would be fairly simple to generate sprites in the editor with it.
     
    vertexx and Ryiah like this.
  31. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    587
    Rather than start a new thread, I thought I would revive this one with a specific question about using new AI tools like Layer in a game studio context.

    Looking at the demo and suspecting that this tool started in an enterprise environment, can anyone share how their game studio is or is not using tools like these in their workflow? And, specifically, how has it impacted the hiring of freelance artists and/or the size of your in-studio art department?

    I teach illustration and would like to be thinking of how to approach teaching digital painting in a world where this tech seems to be eliminating a lot of the technical hurdles of the past. For example, the Making Medieval Helmets demonstration:


    shows an artist making variations of a prop helmet. They look great, match the style, etc. But more than that, the things that are being skipped (masking, selecting, layering, brushes, rendering and drawing abilities) are huge. Just doing one or two variations of the horns would be an hour or so of skilled manual work; now it's done in seconds.

    I can't imagine game studios aren't leveraging these kinds of tools in a huge way, and I guess my question is: what is the impact on those lower-level, entry type jobs that would normally have been assigned to a junior member of the art team? Obviously an artist wielding an AI tool is probably more efficient than an art director; is AI proficiency a marketable skill for a new artist?
     
    zombiegorilla likes this.
  32. useraccount1

    useraccount1

    Joined:
    Mar 31, 2018
    Posts:
    275
    Everything in this video can be done using Stable Diffusion with plugins. You can download the automatic1111 webui and have the same tools. Layer provides better UX and more advanced models, making everything artist-friendly.

    By default, automatic1111 requires a lot of technical knowledge, and therefore additional technicians to manage SD, train and merge models, create LoRAs, etc.

    Some companies create the same tooling, and in the case of requirements, there are two main things:
    - Capability to draw things from scratch (for ControlNet).
    - Experience using SD.

    But to run this sort of thing, you still need artists that will create high-quality art that the AI can learn from.
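    For studios building that in-house tooling, it may be worth noting that the automatic1111 webui exposes an HTTP API (when launched with the `--api` flag) that pipeline tools can call. A sketch of a request to its `/sdapi/v1/txt2img` endpoint follows; the field names match the API as of mid-2023, but verify them against your webui version, and the default port 7860 is an assumption about your local setup.

```python
# Sketch: driving the automatic1111 webui from an external pipeline tool.
# Assumes the webui is running locally with the --api flag enabled.
import json
import urllib.request

def build_txt2img_payload(prompt, negative_prompt="", steps=20,
                          width=512, height=512, cfg_scale=7.0):
    """Assemble the JSON body for /sdapi/v1/txt2img."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "width": width,
        "height": height,
        "cfg_scale": cfg_scale,
    }

def txt2img(payload, base_url="http://127.0.0.1:7860"):
    """POST the payload; the response JSON carries base64-encoded images."""
    req = urllib.request.Request(
        base_url + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_txt2img_payload("medieval helmet, concept art, white background")
# txt2img(payload)  # only works with the webui running locally
```

    Wiring something like this into an editor tool is roughly what an in-house "Layer-like" workflow amounts to, minus the UX polish.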
     
  33. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    587
    Thanks for the link, I had not heard of A1111 before!

    I realize that studios would still need artists for the high-quality art, but it seems to me that would typically be older, more experienced artists, with a decreasing need for artists to do the 'grunt work' that can now be automated. I would think this would lead a studio to decrease its art staff and keep only the most highly skilled people.

    OTOH, perhaps the speed just means more games, more special editions of games, etc., so it's possible it leads to more opportunities rather than fewer.
     
  34. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
    Depends on the studio. At many studios the "grunt work" is often outsourced. For a small studio, it is more cost-effective than keeping someone on staff, because the need for that work comes and goes. For large studios it is much the same: being able to ramp the pipeline up very high when needed. There are tons of great art outsource studios that are really, really good at this. I can see (and have seen) this impact that flow. It might not have a huge impact on hiring within a studio directly, but it could heavily impact outsourcing and outsourcing houses.

    It may even add staff to a studio. Basically, AI is outsourcing, just super cheap (free, practically speaking), but not very good. External art sourcing almost always requires adjustments/tweaks and turning the results into final assets. A good outsourcing house can often provide final assets. With AI, the art will always have to be fixed, tweaked/adjusted, and put into the pipeline. A studio might consider adding an "art wrangler" position to do that sort of thing.

    Right now, we are using some AI to handle some of our "grunt work" type of image creation. Last year these things would have gone to our outsource art house, but we are now doing them in-house with the help of AI. It is actually putting more work on us (though with obviously significant cost and time savings).
     
    Voronoi and Ryiah like this.
  35. useraccount1

    useraccount1

    Joined:
    Mar 31, 2018
    Posts:
    275
    That might be true, but sometimes you need hundreds of pictures to train something highly specific.

    The rest of the artists will need to know how to compose and draw a good picture, how to use these tools, and how to compose a good scene in something like Blender. For example:
    lineart_3.png

    As of right now, AI is not an independent tool; it needs constant human oversight, and the more of it, the better the result. Overall, generative AI is another variation of automation.
     
    Ryiah, zombiegorilla and Voronoi like this.
  36. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
    That video is interesting... but I am not sure I see the practical value in that case (beyond marketing/demo). Not saying it couldn't fit some case, but I don't see it. For example, if that is just an icon representing an in-game asset, you would just generate those from the 3D source. If it is a 2D game, I would still go the same route. With minimal effort you could generate hundreds of unique helmets in the exact same style. But there could be other, more practical examples.

    There are two things that often come up on the excited/pro-AI-art side. One is speed, which is often touted, but professional artists are incredibly fast, and getting AI art to an actual final piece (or many pieces) may take a ton of rerolls/tweaks, upscaling, overpainting, etc. (and consistently so). The second is accuracy and flexibility/specificity. Even the best-trained models are often trained to favor aesthetics or other targeted qualities. I can think of tons of real-world cases where it just isn't practical. Key art in particular (for example) that contains multiple specific characters doing specific things in a specific style. Real artists are just going to be faster, and if there is 3D source, significantly faster. AI simply may not be able to hit that target even with significant paint-overs. And even as the tools improve, specificity remains a challenge. The amount of description needed to achieve what can easily be done by a human artist intuitively is going to take longer than just doing the art for real. AI is a black box; it doesn't really understand what is happening in its output. Telling an artist to make a scene more whimsical vs. telling an AI to do the same will get vastly different results.

    The common refrain is that AI art is just another tool in the toolbox. That is very accurate. It is incredibly useful for doing or improving some tasks, and there is a lot that it is ultimately much slower at and not suited for at all.
     
    useraccount1, Ryiah and Lurking-Ninja like this.
  37. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    587
    That's what I find so strange; last year I would have been confident teaching the digital processes I have been using since the mid-90's. The only real change in all that time was that Photoshop added layers and its selection tools got a bit better!
    Those examples are helpful, thanks for posting. Looking at them, it's clear that the bottom right is really the only professional image. A non-artist might be happy with any of them. Teaching students to look harder and recognize how to wrangle a truly professional image out of the tool would seem like a good skill. And ultimately, they'll still likely have to get into Photoshop to finish it off completely.
    One thing AI does not seem to do well is composition; it tends to average all compositions and sort of center everything. Definitely a place where a human needs to intervene.
     
    zombiegorilla likes this.
  38. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
    Indeed! When this stuff first really surfaced and I started playing with SD, I thought it was a neat toy, just something fun to play with. I didn't expect that I would be using it in production so quickly. Our whole art team was playing with it from the start, sharing all the silly stuff we were prompting. At some point, our AD tried using it to generate a base image for some simple content art, and we were hooked. We recently added custom UI background images for flavor for specific elements, something we rejected doing last year because we just didn't have the time to generate 30-ish unique backgrounds. Now we are doing even more. All changed in around 6 months or so? Nuts.
     
    neginfinity likes this.
  39. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
    In the PS beta they recently added generative AI fills. The idea is neat; the results are less than stellar (or usable).
     
  40. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    587
    I'm not sure if I found the right tool, but Adobe prompted me to try Adobe Express online for text-to-image. I thought the results were terrible; the generated art looked like something out of an Adobe tutorial from the 90's. I know they are drawing from their own licensed image database, but the results really did not seem usable. It seems more useful for background replacement.
     
  41. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,051
    It's in the PS beta, though I think it is exactly the same backend; all results are pulled remotely. Yeah, it is not great at all. If you have Creative Cloud, there is a beta section where you can choose to install it. In PS you can select an area, add a prompt, and hit generate.
    upload_2023-6-29_14-41-5.png
     
  42. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,068
  43. frosted

    frosted

    Joined:
    Jan 17, 2014
    Posts:
    4,044
    I more or less start every shader in GPT. I'm still weak at writing shaders, so every time I need one I crack open GPT. Strange, asking a machine that can't actually see anything for visual FX code, but it is what it is. I prefer GPT to using something like Shader Graph for most things.
     
  44. Ng0ns

    Ng0ns

    Joined:
    Jun 21, 2016
    Posts:
    197
    Talking ChatGPT, has anyone had success with the free version?
    I haven't used it for games, but mainly for the lols in my daily work. So far I've never gotten anything useful. Even when outlining down to a technical level what I want, and what I don't, it will just spew back dreamy references to non-existent functionality. Other times it will respond with a solution I clearly stated I didn't want. I find its reluctance to simply state that it doesn't know really annoying and wasteful.
    It can be fun to ask it for solutions to issues which I already know cannot be solved, just to see what fiction it will come up with. Perhaps I should drop into a subscription and check out 4.

    As for concept art, I can't help but think that most of it is almost indistinguishable from the billions of images already present on the web. Unless you're embedding it within your game, why not just reference the sources the AI is clearly drawing from? I mean, the images "TheNullReference" shows are basically concept art for a LoL- or WoW-like game, without really adding anything.
    Isn't the point of concept art to come up with something new, not just to create pretty pictures? Look at the costume drawings from the first Star Wars films: they look like they're drawn by a 5-year-old, but the ideas are solid and interesting.
     
  45. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,157
    Yes. Just as an example, I was making a mobile prototype and wanted it to auto-adjust itself based on the memory available on the device. I ended up creating something along these lines using GPT-3.5 (the free one).

    I did have to massage the results a bit, as well as request that it fully expand the list, but that's normal for ChatGPT.

    Prompt:
    Result:
    Code (csharp):
    using System;
    using System.Collections.Generic;

    public class iOSDeviceDictionary
    {
        public Dictionary<string, (int memory, int weight)> Devices { get; }

        public iOSDeviceDictionary()
        {
            Devices = new Dictionary<string, (int memory, int weight)>
            {
                { "iPhone1,1", (128, 0) },
                { "iPhone1,2", (128, 0) },
                { "iPhone2,1", (256, 0) },
                { "iPhone3,1", (512, 0) },
                { "iPhone3,2", (512, 0) },
                { "iPhone3,3", (512, 0) },
                { "iPhone4,1", (512, 0) },
                { "iPhone5,1", (1024, 0) },
                { "iPhone5,2", (1024, 0) },
                { "iPhone5,3", (1024, 0) },
                // Add more devices as needed
            };
        }
    }
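    A lookup table like this then feeds the actual auto-adjust decision. The selection logic is simple enough to sketch (in Python here, purely for illustration; the tier names and thresholds below are made up, not tuned recommendations):

```python
# Sketch of the quality-tier decision that a device-memory lookup table
# like the one above can feed. Thresholds and tier names are illustrative.

DEVICE_MEMORY_MB = {
    "iPhone1,1": 128,
    "iPhone3,1": 512,
    "iPhone5,1": 1024,
    # ... the full table would mirror the generated dictionary
}

def quality_tier(device_id, table=DEVICE_MEMORY_MB, default_mb=512):
    """Map a device identifier to a coarse quality tier by RAM."""
    memory = table.get(device_id, default_mb)  # unknown devices get a safe default
    if memory <= 256:
        return "minimal"
    if memory <= 512:
        return "low"
    if memory <= 1024:
        return "medium"
    return "high"
```

    Falling back to a conservative default for unrecognized identifiers matters in practice, since new devices ship faster than a hardcoded table gets updated.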
     
    Ng0ns likes this.
  46. frosted

    frosted

    Joined:
    Jan 17, 2014
    Posts:
    4,044
    There is a huge quality difference between ChatGPT 3.5 and 4 in terms of code gen.

    Also, generally, the weakest aspect of LLMs is 'fact-based' stuff. LLMs are an inversion of our assumptions about what a computer should be good at: terrible at giving you precise facts but exceptionally good at concepts.

    So listing out precise memory specs will be extremely unreliable, but asking it for conceptual approaches will generally give excellent results.
     
    Ng0ns and PanthenEye like this.
  47. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,068
    Not the free version, but I used ChatGPT-4 to generate an editor script that creates an AnimationClip asset from selected PNG frames via a right-click context menu. I'm exploring forgoing Animator in favor of the Playables API and other alternatives. ChatGPT is pretty good with these kinds of niche things and a major speed boost when exploring ideas conceptually.
     
    Ng0ns likes this.
  48. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,566
    I'm using it for brainstorming, text review and so on. "Can a red dwarf star system have planets and six asteroid belts?", "In a fictional sci-fi space setting, can a supercharged particle take out the jump drive?", "What military rank would a genetically engineered bear have if used as a pilot on a space ship?" and "please review the text fragment below".

    It works.

    For code I'll prefer my own head, but it functions as a talking reference book, and you can ask it to generate samples. The main issue is the tiny context window; that one sucks.

    Regarding generative images, there's the "Dream Textures" addon for Blender, which had some of the right ideas, but it kinda needs to be updated to support ControlNet and the auto1111 API.
     
    Last edited: Jul 2, 2023
    Ng0ns likes this.
  49. vintage-retro-indie_78

    vintage-retro-indie_78

    Joined:
    Mar 4, 2023
    Posts:
    285
    Code Monkey showed some amazing tools made for the Unity engine. After all the 'controversy', I think the better software vendors, and now also engines, know how to make this stuff while also supporting the artistic community. I haven't used any of it yet, but I've been looking forward to the Unity stuff...

    I'm going to wait until the early wave of controversy is gone, and then try those Unity apps and see if they make sense, or at least start there...

    So the answer is: those would be my go-to tools, once the initial 'rage' over the AI tools has passed and the better companies have made the 'right' AI tools...

    At the moment I'm not sure there are better or more responsible tools than those made by either Unity or perhaps the Adobe suite, so that's where I might start. That's the goal anyway, along with reading reviews, seeing what people think, and listening to the various debates...
     
  50. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,696
    @vintage-retro-indie_78 Waiting will likely only put you in a worse position. The risk of AI causing trouble isn't higher than all the other reasons a game may fail.
     
    Antypodish likes this.