
Using ChatGPT to create Unity Scripts and shaders.

Discussion in 'General Discussion' started by jackmememe, Dec 5, 2022.

Thread Status:
Not open for further replies.
  1. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,698
    It depends a lot on the field whether it struggles more than before... and let's be honest, at full scale it requires hardware worth over 300k USD as an absolute minimum to run a single query (due to VRAM constraints). They likely need many of those machines to handle the user load.
    Of course that cannot be free forever.
     
  2. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    We had this sort of discussion earlier in the thread.

    We have Wikipedia, Twitter, Facebook, Instagram, Gmail, Google Drive and a ton of other services.
    We also have Linux.

    And Ryiah previously mentioned that OpenAI had enough funding to keep running for a couple of decades.
     
  3. useraccount1

    useraccount1

    Joined:
    Mar 31, 2018
    Posts:
    275
    From what I saw earlier, you can't even buy a Plus subscription due to the heavy traffic.

    Also, just to add to the discussion:

    Some developers have tried using GDP-3 to localize their games. Surprisingly, almost all the translations are of "good enough" quality. I mean, there wasn't much difference between a cheap translation and the AI.

    One developer decided to generate pictures in MidJourney, then put them in the loading screens as backgrounds. Some YouTubers actually said these "paintings" were one of the reasons they decided to check out the game.

    For obvious legal reasons, I won't say which brave entrepreneurs decided to fool their audience. But it seems like using AI can cheaply improve your game, and the consumers don't care.
     
  4. useraccount1

    useraccount1

    Joined:
    Mar 31, 2018
    Posts:
    275
    Wikipedia runs on donations;
    all the other services run on data gathering and ads.

    OpenAI runs on selling products.

    Completely different business models.
     
    DragonCoder likes this.
  5. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,698
    There is some suspicion they may do something with the prompt data - my company at least has warned us to be very careful not to give away company secrets (including which algorithms we use for certain problems) in prompts if we use ChatGPT (we do work on somewhat competitive industry stuff).
    However, yeah, the computational effort per user also seems a fair bit higher than that of serving a webpage.

    Linux would probably not be what it is if a lot of the work weren't done by companies that finance themselves with various Linux-server-hosted services (and thus contribute to Linux for their own use). Also, Linux isn't a thing that costs its makers anything to run.


    Is there actually a benefit vs. other good translators like DeepL?
    Or do you mean it can actually apply translations in a codebase that does not use a proper localization framework or have the texts prepared in a CSV-like format? That would be interesting.
    Hmm, in Unity you probably really need the localization framework, because ChatGPT surely cannot edit TextMeshPro fields in a scene.
     
    Last edited: Mar 15, 2023
  6. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,776
    Why do you doubt the previous poster's results, if they said they already see gains from using the tools?
    Our team is also testing and evaluating the value of using such tools. But not as a programming tool.
     
  7. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Not really. OpenAI received donations before, from tech companies. That was also discussed in the thread.

    The point is that they could have kept it free. They chose not to. They were not FORCED to charge. It was not inevitable; it was their conscious decision.

    ChatGPT - the free one - understands context better than DeepL, and you can talk with it about the text, suggest corrections, consider alternatives, or discuss the gotchas.

    I've translated several short stories of mine into English with it. (Didn't I discuss this already?)

    However, there are huge issues. In its current iteration, it starts to forget what the story is about within either 4-5 responses or roughly a page of text. In a non-English language you can only stuff about two paragraphs into it; it simply won't accept anything larger. So you're creating a patchwork of short translated fragments and have to manually verify that the translation stays consistent. It won't stay consistent. It can also forget to translate half of a sentence when complex grammar is involved, or produce an incorrect translation when cultural references and the like are involved.

    The ORIGINAL ChatGPT used to have far larger memory and could keep more material in its mind, or produced a convincing illusion of being able to do so.

    So, while I was originally enthusiastic about it, after the 3rd or 4th translation I realized that a human translator does a better job.

    And in the case of a game, especially a large codebase with untranslated strings in the middle of the code, you can expect a LOT of context loss.
     
  8. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Because I'm not him. If he gains something, that doesn't mean I will.

    Long story short, I do not feel that OpenAI's offer is good or that their tech is worth a $20/month subscription. This impression is based on my experience interacting with the free ChatGPT.

    * At the moment I rarely use the free one, because I have no questions to ask, and the novelty of messing with EldritchGPT wore off.
    * I also saw how badly OpenAI crippled the original ChatGPT, so I have long-term concerns about whether OpenAI knows what they're doing, and no reason to believe they're even trying to make a good tool.
    * OpenAI's stance on "ethics" is deeply concerning, and their shift from a non-profit company towards a for-profit venture (while keeping the "Open" name) is also a cause for concern about their values as a company. This sort of behavior is shady and dishonest.
    * I also had no chance to try and evaluate GPT-4, and based on the deterioration of ChatGPT, I have no reason to believe it is good, or that it will remain good even if it is.

    Hence the doubts.

    From my position it looks like the best bet is to wait for the competition, while OpenAI is probably on the path to becoming another Electronic Arts or Facebook.
     
    Last edited: Mar 15, 2023
  9. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    It will be easily replicated; let them have their $20 of glory, but alternatives will come, and optimization will level the field. Also, there is a nascent YAGNNNI movement ("you aren't going to need neural networks for intelligence") that learns from neural models how to build non-neural models, which are also smaller and faster.
     
  10. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,073
    If $20 saves you an hour or two, it's already paid for itself. It saves a lot more for me. So it's not that much of an insurmountable problem, although it's probably a barrier to entry for the third world, which is not great.
     
  11. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    It doesn't even recognize that third-world countries and minorities are part of its user base; it keeps telling me I should be considerate of "these people" and therefore should not use local names because they could be "offensive". Which means if you try to do anything non-European, or from outside a culturally dominant rich country, it will fight you at every turn.
     
  12. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,073
    Uff, that's rough.
     
    neginfinity likes this.
  13. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    I'm not sure about "easily", but it will be replicated at some point, for sure. The earlier link about the first Open Assistant release might be worth checking out.

    You could also try one of the "jailbreaks", though they stop working in 3 responses or less.

    Also, my region is locked out of ChatGPT and other OpenAI services by default, though the restriction can be bypassed. And they call themselves "OpenAI". Why would I support or pay anything to a company like that?

    Neoshaman also brings up a good point - ChatGPT is not neutral. It is biased and tries to promote specific values at every opportunity it gets, effectively acting as a propaganda tool. Again, why would I want to pay for that? A corporation thinking they're in a position to decide what people should think is a very cyberpunk moment, for the record.

    The original January version (sadly, I've not seen the very first version) was a good demonstration of what this sort of tech can do. It was roughly at the level where you could seriously consider investing in very expensive hardware to run this thing, or its equivalent, locally.

    Now it is no longer that. Reddit also implied that they went full proprietary mode and are trying to hide details about the model even from scientific papers.
     
    stain2319 and PanthenEye like this.
  14. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Well, "easy" because it's all brute force, big data and big compute, and there are people trying to make a SETI-like organization for developing open training. Also, most recent innovation is corps essentially ballooning small free experiments into monster-sized products using brute force. It's essentially a function of time.

    The architecture and methodology behind ChatGPT is STILL a minor variation on AI Dungeon 2, created by a dreamy student in his dorm. The changes are basically all variations of brute force: a bigger dataset, better sorting of the dataset, longer compute, a bigger network, more people continuously working on it, etc. Alternatives to GPT-3 were being made quickly precisely because there were no real innovations, which made it easy to replicate. ALSO, LLaMA leaked, and people are trying to make an open version of that. LLaMA proved that all you need is longer training to get similar results even with fewer parameters, and we see no sign of a plateau; bigger models merely learn faster, and that's their only main advantage.

    The GPT-4 core is basically multi-modal data, like PaLM-E... or a kind of reverse Stable Diffusion. It probably uses ideas seen in recent grassroots optimization within the text-to-image field.

    Any optimization done on the core methodology will strip away their competitive advantage little by little. The big thing is that text generation has the same overall statistical distribution going into and out of the model, which is known, unlike in many machine learning problems; that gives us a clue about how we might be able to do it without FULL neural networks.

    IMHO they are trying to capitalize on the first-mover advantage and establish themselves as a brand reference.
     
  15. useraccount1

    useraccount1

    Joined:
    Mar 31, 2018
    Posts:
    275
    OK, but this changes nothing.

    Wikipedia runs on donations and volunteers.

    All the other services you mentioned earlier run on data gathering and ads (and sometimes also paid subscriptions).

    OpenAI now runs on selling products. Years ago, when their tech wasn't good enough, they based their work on donations.

    Completely different business models.
     
    Last edited: Mar 16, 2023
  16. useraccount1

    useraccount1

    Joined:
    Mar 31, 2018
    Posts:
    275
    DeepL gave significantly worse results than GDP-3. Sometimes it was missing words, or was attempting to create new ones out of English and the target language. Complex sentences often came out grammatically and logically incorrect. At least the free version is like that.


    By using GPT-3, I mean that you throw at it an entire line from your CSV / txt file, for example:
    Dialogue.Message.Example = Hello world!
    ChatGDP and GPT-3 (text completion mode) will immediately understand which part of the message is a unique key, and which part needs translation.

    There are still a few things you have to consider:
    - You will sometimes lose the connection to the server, so you want to handle that case in your code.
    - The fastest approach is to send several lines in one message (a batch).
    - GPT-3 might skip the first or last line of the batch. You have to check for that and ask the AI to correct the mistake.
    - Lines have to be randomly picked from the txt file. Otherwise, the AI will learn the next batch and output it instead of doing the task.
    - The closer and more popular a language is to English, the better the result. So Latin-based and Germanic languages will do fine, as long as a lot of people use them on the internet.

    And just to be clear: the AI was tested with simple sentences and single words. I don't think AI can generate a good translation for a game with a heavy story and complex dialogue full of references.
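    The workflow described above (key/value parsing, random batching) can be sketched roughly like this. This is a hedged illustration, not the poster's actual code; the `translate` callback is a hypothetical stand-in for the real GPT-3 completion call, which would also need retry logic and a check that no batch line was skipped:

```python
import random

def parse_line(line):
    # Split "Dialogue.Message.Example = Hello world!" into the unique
    # key and the translatable text at the first " = " separator.
    key, _, text = line.partition(" = ")
    return key.strip(), text.strip()

def make_batches(lines, batch_size=5, seed=42):
    # Shuffle so the model cannot memorize the file order and start
    # predicting the next batch instead of translating the current one.
    shuffled = list(lines)
    random.Random(seed).shuffle(shuffled)
    return [shuffled[i:i + batch_size]
            for i in range(0, len(shuffled), batch_size)]

def translate_batch(batch, translate):
    # `translate` is a hypothetical stand-in for the GPT-3 call; a real
    # implementation would retry on lost connections and verify that the
    # first and last lines of the batch were actually translated.
    out = {}
    for line in batch:
        key, text = parse_line(line)
        out[key] = translate(text)
    return out
```

    The key point the sketch captures is that only the text after the separator goes to the model, while the key survives untouched for the localization file.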
     
    Last edited: Mar 16, 2023
    DragonCoder likes this.
  17. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    The original argument was "this could not remain free forever". That implies that "poor OpenAI had no choice".

    And that is false. The reality is that there were multiple ways to secure funding, and there are multiple high-tech products available for free or at no cost for most people, as demonstrated by the products I listed.

    The business model simply does not matter, because it is not a law and can be changed. Yes, it could remain free forever and be funded differently. OpenAI itself offers ChatGPT for free while selling Plus. This is similar to the Unity and Unreal Engine funding schemes.

    Also, it is called GPT, not GDP. "Generative Pre-trained Transformer". GDP is "Gross Domestic Product".
     
  18. useraccount1

    useraccount1

    Joined:
    Mar 31, 2018
    Posts:
    275
    Everyone confuses words sometimes. Thanks for correcting me.

    Well, it seems like OpenAI is not a charity but something like a joint venture. I feel like OpenAI started as a non-profit to avoid some taxes before they could monetize their start-up.
     
  19. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    160
    **Cough cough**



    **Sips tea**

    Darling, the world moves sooo fast!

    **Flick hair nonchalantly**
     
    algio_, PanthenEye and neginfinity like this.
  20. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Bwahahahaha!

    This could lead to a situation similar to the artists trying to sue Midjourney/Stable Diffusion.

    Still, I tested it; the responses are fairly short, plus it cannot hold a dialogue. Then again, they were working with davinci.

    The surprising thing was that it managed to translate into English fairly well.
     
  21. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    Just like I predicted
     
  22. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
    It sucks to be the one doing the research and dev on some project and have big tech waltz in and monetize it. I would do the exact same thing, and dump a good chunk of that change into more hardware and research.
     
  23. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    I think that's exactly what happened, though.

    Basically, I'd expect the researchers to be paid a flat salary while "Open"AI capitalizes on their research without sharing revenue with them. But it is possible that I'm wrong, as I have a habit of thinking negatively and expecting the worst-case scenario.

    Also, there's a philosophical conundrum here. A problem of conflicting values.
    * When you make an amazing thing, you absolutely should be able to use it to earn a ton of money for yourself. "Is a man not entitled to the sweat of his brow?" Effort, labor and invention should be rewarded. With money.
    * However, there are categories of things which can advance technological progress by leaps and bounds, so it can be a good idea to set them free, so that everybody benefits and no one can hog the benefit.

    And it is not like you'd be able to hog your invention for long; ideas are born at the same time in multiple places, so when you create a breakthrough technology, somebody, somewhere is probably close to achieving the same thing.

    So in OpenAI's scenario, a good idea would be to milk the tech for a short time, then make it open. And in OpenAI's place, I'd abandon the "ethics team" ideas and instead try to make the AI neutral. The "ethical" approach seems to lead to a scenario where the AI is turned into a propaganda tool, promoting a specific system of values or ideology. Which stinks.

    That is an opinion, of course.
     
  24. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
    Here is the thing. I have seen complaints that OpenAI did not release their research on GPT-4 and how their model works. Yeah... and this criticism comes from folks who would plunder that research and make a paid service out of it; now that they have been cut off from the fruits of another's labour, they will commence griping and moaning about safety and other bogus concerns they front their agenda with. I am in the Tiny AI contingent. I think these models can be made smaller and knowledge-domain-specific, without needing an internet connection.
     
    Ryiah and neginfinity like this.
  25. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    Yes, there are definitely people who pretend to care about the advancement of technology while they only want to grab the tech for themselves, turning it into their own paywalled SaaS, with blackjack and other things. That sort of behavior is one of the reasons why the GPL was born.

    Regarding Tiny AI, it is possible that this will happen soon. Alpaca, mentioned by GimmyDev, is at a level where I'd probably be able to run it locally. It is quite possible that it is already available in KoboldAI, though I'd need to check.

    Personally, I also hope that eventually people will extract general principles from these networks, so we'll be able to construct them instead of training them.
     
  26. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,853
    Alpaca looks interesting. It cannot be leveraged into a saleable product, though. I am in the midst of creating an inference/grammar/instructional/ontological classification engine in Unity C#. I monikered it ORCA, because you have to have a cool acronym; it stands for Ontological Recursion and Classification Analyzer. We have hooks into the entire Wolfram Language db and can currently retrieve any fact about the earth and the cosmos, or make any calculation, via voice or text prompt. I have a couple of hundred million possible English language vectors so far, with per-word translation into 50 languages, and perhaps a few billion vectors with the Wolfram hook-up. It took about 3 seconds to extract several thousand token phrases containing the word "what" from a multi-million-word / 45K-sentence Gutenberg.org text document that had all of Sherlock Holmes and Shakespeare and a few dozen other texts that stretch the vocabulary. I am going to go through the Alpaca papers and see what I can extract and program into the Unity model.
     
  27. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    589
    I've switched to using Bing Chat now and it's doing a pretty good job for me. It's worked for obscure React/Gatsby queries and even helped with C4D XPresso nodes using Python. I would imagine it's just as good with Unity code, but haven't had a need for that yet.

    Bing's actually great at providing Regex code, just explain exactly what you want and it provides the correct Regex formula. Bing also footnotes the answers with links to fairly relevant reference pages.

    What I've really been happy with is that after it gives you some code, and that code generates an error, it fixes it up for you. Sometimes it asks you to provide the entire code and the line number of the error.

    I think this really will be a game-changer for search engines. 90% of the time I don't click on the links provided; it's just faster to continue the conversation and ask it more specifically what it is you're trying to do. When I have clicked on the links, it's usually disappointing and the page is missing some key data. Bing is able to just summarize from several sources and give more relevant answers than reading the various blogs.
     
    DragonCoder likes this.
  28. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,087
    Try

    you.com

    It's got good formatting for both the responses and the suggested links and references.
     
  29. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    2,073
    I've just tried Bing Chat and it's a definite downgrade, since it only writes code for very specific, out-of-context requests. Ask it to reverse-engineer some popular game in Unity and you won't get anything out of it. While ChatGPT can't write a whole game, it can give you the base structure; then you can inquire about the specifics and continue building upon it. In the Unity coding buddy category, ChatGPT is a clear winner in my books.

    Both are equally useless for producing URP shader code, however, which is unfortunate.
     
  30. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    There is a github for alpaca:
    https://github.com/tatsu-lab/stanford_alpaca
    https://github.com/tloen/alpaca-lora

    The obvious next step is robots, like Boston Dynamics' and Agility Robotics'; C6PO is coming soon. We will probably have idealized AGI by the end of the year.

    edit:
    https://crfm.stanford.edu/2023/03/13/alpaca.html
     
    Last edited: Mar 18, 2023
  31. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    That one's probably going to fail, due to Moravec's paradox.
     
  32. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    If you haven't kept up with the latest news: the ATRIAS model paper is free; it takes a hands-off approach to that, using zero intelligence and zero complex processing. Agility Robotics is definitely based on that model (see their prototype Cassie).

    Now, there is a level where this is true. For example, fine motor skills, like screwing together iPhone cases, are something robots haven't been able to achieve, because the sensory sensitivity isn't there. In general, humans still have superior sensors and actuators compared to robots, and that limits robotics a lot.

    BUT a C6PO-like robot (C-3PO in English, technically Z6PO in French), that is, one that walks and talks awkwardly, is definitely possible. We don't need perfect; we have good enough.

    Also, you should assume we do make progress, and old assumptions will be proven wrong once we gain the necessary knowledge (see the Chinese Room "vroom vroom" vs. the theory of emergence).

    Also, some knowledge gets misapplied: the halting problem is not solvable in the absolute, but we can definitely solve a lot of partial cases. Only Sith deal in absolutes. :cool:
     
  33. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    10,154
    DeepL isn't good. It produces results that look coherent but often have entirely opposite meanings to what was actually said. It also struggles dramatically with pronouns, especially in East Asian languages.
     
  34. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
  35. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    21,187
    Last edited: Mar 19, 2023
  36. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
  37. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    I think it is necessary to explain something.

    Any modern operating system can run any neural network, as long as it fits into storage. Meaning you could probably run ChatGPT on a Raspberry Pi, as long as you can stuff it onto its memory card.

    This is because modern operating systems support memory-mapped files: you can grab any file on disk and ask the operating system to treat it as a memory block. If the file does not fit into available memory, that's not a problem; the OS will handle it.

    This technique was used in the past, for example, to display an uncompressed 10000x10000 BMP (about 400 megabytes of data) on a system with only 32 megabytes of memory available. You simply memory-map the BMP, take a pointer to the file, pass the pointer into the graphics drawing function, and there you go. It will be rendered, slowly.
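    The memory-mapping technique described above looks like this in a minimal Python sketch (the file path is illustrative; the same mechanism is what a framework would need to use for weights larger than RAM):

```python
import mmap

def read_slice(path, offset, length):
    # Map the whole file into the address space. The OS pages in only
    # the bytes we actually touch, so the file may be far larger than
    # the physical RAM available.
    with open(path, "rb") as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm[offset:offset + length]
```

    Each slice access may trigger a page fault and a disk read, which is exactly why this approach works but is slow.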

    A similar approach could be used with neural networks as well, though the neural network framework itself might need some convincing to use memory-mapped files instead of in-memory data. So the experiment where those guys managed to run LLaMA on an RPi is not very interesting, as it was inherently capable of that.

    The problem with the memory-mapped approach is that it is terribly slow, because not only will you be running on the CPU, you'll also be running off the disk. That's the reason for the high cost of neural network hardware: the point is to make it run fast. And that is not something an RPi can do, as demonstrated in the link.

    The really interesting part of that GitHub discussion, however, is where those guys discuss quantizing the model. 3 bits is at the level where it approaches conventional computing, because essentially every neural network node will be operating on octal signals (3 bits means 8 possible values). If the net can run at this level of reduction, it may be possible to extract its contents as circuitry. And then, as I said earlier, it may become possible to design this sort of software instead of training it.
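    To illustrate what 3-bit quantization means, here is a minimal sketch of uniform quantization to 2^3 = 8 levels. This is only the naive scheme implied by the "octal signals" framing; real LLM quantizers are considerably more sophisticated:

```python
import numpy as np

def quantize_3bit(weights):
    # Map each float weight to one of 2**3 = 8 levels spanning its range.
    lo, hi = weights.min(), weights.max()
    scale = (hi - lo) / 7  # 7 intervals between the 8 levels
    codes = np.round((weights - lo) / scale).astype(np.uint8)  # 0..7
    return codes, lo, scale

def dequantize(codes, lo, scale):
    # Recover an approximation of the original weights; the error per
    # weight is at most half a quantization step.
    return codes.astype(np.float32) * scale + lo
```

    Storing the 0..7 codes instead of floats is what shrinks the model; the interesting question in the discussion is whether the network still works after that loss of precision.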
     
    Ryiah likes this.
  38. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    I shared it as a symbolic mark in history. Some people need actual proof rather than theorycrafting. Having this running on a Pi shows everyone how accessible it is, instead of having conjecture from people in the know that it will definitely be easy to democratize. Now there is no theory; it's reality.

    But yeah, hardware with software on top:

    Basically, imagine LoRA and hypernetworks (like ControlNet) as software on top of a frozen hardware foundation. Even a whole software stack like this is faster to train, and the injected control models are smaller. Even better, the frozen weights could be firmware.
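    The frozen-foundation-plus-adapter idea can be sketched as a LoRA-style low-rank update over a frozen weight matrix. A minimal NumPy sketch, with illustrative sizes and rank (this is the general LoRA scheme, not any specific implementation from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weights: in the analogy above, the "hardware"/firmware part.
W = rng.standard_normal((64, 64)).astype(np.float32)

# Small trainable adapter: a rank-4 update with far fewer parameters
# than W (64*4 + 4*64 vs. 64*64).
rank = 4
A = rng.standard_normal((64, rank)).astype(np.float32) * 0.01
B = np.zeros((rank, 64), dtype=np.float32)  # zero-init: training starts at the base model

def forward(x):
    # Base output plus the low-rank correction; W itself is never updated.
    return x @ W + (x @ A) @ B
```

    Only A and B would be trained (the "software"), which is why such adapters are small and fast to train on top of a frozen base.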
     
    DragonCoder likes this.
  39. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,569
    A proof of what?

    It is slow as hell. If they could run it quickly, that would be great. But they cannot. A reasonable amount of time for the tech to start responding is 1 second or less.

    Now, extreme quantization is an interesting direction that could lead somewhere very interesting.

    ANNs were a thing long before the rise of deep learning. For example, take a look at these:
    https://www.logarithmic.net/pfh/resynthesizer
    http://graphics.stanford.edu/projects/texture/
    The paper is from 2001. The reason it wasn't popular is that it was ridiculously slow.

    No, I'm not going to "imagine" anything. My imagination is not bound by the limitations of physical reality; because of that, I can imagine "possibilities" that will be impossible in practice.

    This is a common thing with games. People are shown a tiny trailer, and then they start "imagining the possibilities". Then the actual game comes out, it is nothing like the imaginary possibilities, and it is S***. The lesson here is that possibilities aren't real; while you can imagine anything, most of that imaginary anything is impossible in practice.

    The same applies to all the shiny new tech. All imagination must be supported by practical experiments. When an experiment supports the imagination, it becomes interesting.
    ----
    Basically, I write fiction for fun at times. I absolutely could grab a list of all currently available technologies and research and produce several pages of text describing a world of real-life robots, artificial intelligence, and high technology. And all the pieces of tech would be taken from our reality.

    That sort of world would be a "possibility". Something that could happen but didn't happen.

    In reality, however, this combination of technologies will not occur in the foreseeable future. Because possibilities are not reality.
     
    Last edited: Mar 19, 2023
  40. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,493
    What the heck? Saying something could be possible is not the same as saying it is reality. Plus, something that isn't proof for you doesn't mean it's not proof for other people; it's not meant to say it's practical, it's meant to say it's tangible. It's not proof for me either, but I know some people are like St. Thomas. You always jump to silly conclusions.

    Also, forming a plan about the future is how we realize things. That's literally what every artist and designer does.

    To go back to networks as hardware:
    1. It's not new; we have them already. Neuromorphic chips are in NVIDIA hardware and most high-end phones.
    2. There are research chips made with memristors that implement neural networks at a deeper hardware level than the matrix multiplies of current neuromorphic chips.
    3. LoRA and hypernetworks are simple insertions of pass-through layers, with the difference that a hypernetwork has another neural net controlling the weights of the pass-through.

    Everything is based on the current state of affairs; I'm not imagining some fantasy thing. What we have now is a software-on-software architecture; it's a done deal and it's working great for LLMs and text-to-image models. So it's proven. Going to hardware acceleration is barely a stretch, just the next logical step.

    So stop being angry, like you told me; apply your own logic and be consistent.

    Also, apparently this happened:
    [attached screenshot: upload_2023-3-19_18-8-12.png]

    A new milestone was passed by LLMs: they sometimes have a working theory of mind. Feel free to deny it with more Chinese Room "vroom vroom".

    DISCLAIMER: In no way does this mean, nor do I imply, that it's conscious; don't jump to conclusions. I'm merely pointing to a property, and to how all our current tests will be inadequate for testing AI abilities. I mean, we missed the early zero-shot skills of GPT-3...
     
  41. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    9,052
    This hasn't been about games or Unity in a long time. Closing. There are virtually an infinite number of other places on the net to discuss AI in general.
     
    NathanFuentes likes this.