Search Unity

  1. Welcome to the Unity Forums! Please take the time to read our Code of Conduct to familiarize yourself with the forum rules and how to post constructively.
  2. We have updated the language to the Editor Terms based on feedback from our employees and community. Learn more.
    Dismiss Notice

Using ChatGPT to create Unity Scripts and shaders.

Discussion in 'General Discussion' started by jackmememe, Dec 5, 2022.

Thread Status:
Not open for further replies.
  1. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,799
    I was watching a video from Valutainment, a great entrepreneur channel and they were talking about it's valuation skyrocketing to 29 billion dollars since it's release. That is alot of potential. I imagine the next phase of research is to make it more accurate and figure out some way to compress the tensor graphs. The server cost..just one running 320K USD for just the grfx cards alone the server module that synchs all that computing gotta be another hefty few bills, along with the bandwidth..And I sit here thinking if I have 6k worth of computer I am rocking..heh..just barely apparently.
     
    Ryiah likes this.
  2. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,799
    You can review models available for training and tweaking here https://huggingface.co/models?sort=downloads&search=gpt3
     
  3. GimmyDev

    GimmyDev

    Joined:
    Oct 9, 2021
    Posts:
    157
    @neginfinity
    I think a found a prompt to use as a primary input that makes it much easier to work with for writing. Probably too eager because it's now not shy to continue unprompted, also add details you didn't ask for, I spend more time to ask it to remove stuff, focus more target element and saying no to revision:rolleyes:, but is generally more creative, for as much creative an ai can be:
    It should work for general writing. I tested with a genre specific prompt so I don't know if it makes a difference, but I'm confident it's equally efficient. It's more confident in his creative judgement. Less bla bla about being a language model, though it will tell it's an ai it can't provide subjective assessment but will go in depth into analyzing elements it find not developed, ans some time unprompted. I get it the climax could be more dramatic :rolleyes:

    Edit:
    Might also work for any topic or deeper programming, will probably test later.
     
  4. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,787
    Its ability to translate on a surface level looks pretty powerful since it "understands" context. Could be a decent tool for the first localization pass that then a human can clean up later.

    Also, in 10-20 years, if this or similar tech can go real time and also process voice input, we might have a universal translator?
     
  5. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    Actually, I wouldn't recommend that. I tried to discuss a non-english text with it, the lack of training data shows. It does not understand slang and more obscure words, can misunderstand what is going on, and have difficulty conversing when multiple languages are involved. Replies also become much shorter, plus using a language with non-english characters cuts output size by half, as it seems to be producing utf8 and anything that is not in ANSI takes more bytes. You might have more luck with closely-related languages, but when you deal with more distant languages, quality very quickly deteriorates. It also generated responses with typical grammar errors a foreigner/non-native speaker would do.
     
  6. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,787
    I noticed it failing on multiple language mixes but commonly used words that exist in most languages it handled very well even for a language that is spoken by less than two million people in the world. But yeah, I can see it having trouble with more distant languages where the word just doesn't exist and it can't directly translate or grammar structure is wholly different. The grammar errors I only noticed when trying to mix the languages since it seems to be unable to determine correct word forms when this is done.
     
  7. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    When we're talking about multiple languages rather than "common words" there's often a situation where each word has a group of associated meanings, and the meanings across languages overlap, but do not match, and that causes issues. Meaning words are not equivalent to each other, but share desired meaning.. There's also a matter of different idioms and cultural differences, as language often brings culture along with it.

    There exists "language difficulty classification" relative to english, which you can look up. For example, Spanish would be level 1, Slavic languages would be 3 or 4 and Chinese/Japanese, Korean and Arabic would be max difficulty rated five.

    ChatGPT rapidly deteriorates the farther you move away from english on difficulty scale. For example, in japanese it falls flat on its face at simple conversation, and at level 4 difficulty it writes as a foreigner and does not have cultural references. Those issues do not exist in english, and when translating Google Translate can often do a better job.
     
  8. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,492
    Translator is probably not ChatGPTs main focus. But it's an interesting aproach to have a system with the vast word knowledge of Google translate or DeepL (best for jap.) but also the ability to understand context like ChatGPT.
    Not having cultural references is mostly fine, but what traditinal translators often lack is the context.
     
  9. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    My experience with ChatGPT went like this:
    You can paste non-english text into it, have it rephrase it, and then discuss it.

    However.

    * Its ability to converse in far removed languages is noticeably worse.
    * Its ability to understand non-english languages is also greatly reduced. You get very short, curt responses in such dialogue.
    * In extremely difficult languages results are abysmally bad

    Basically, you could probably copy-paste a fragment of Dostoyevsky's work in original language, have it "understand" most of it and talk about it. However, if you try to converse with it in japanese, there are likely to be factual errors , and it can lose track of conversation in 2 responses. I'd go as far as to say that if a language is sufficiently far removed, it feels like you're interacting with KoboldAI and are using the dumbest available model.

    That's why I wouldn't recommend using it for translations. The downgrade is too significant.
     
    DragonCoder likes this.
  10. AdrielCodeops

    AdrielCodeops

    Joined:
    Jul 25, 2017
    Posts:
    54
    Producing non-English text can be tremendously inefficient because tokenisation of characters from other languages can be more complex. This can mean more than double the cost of producing non-English text, since openai charges for tokens.

    https://twitter.com/ramsri_goutham/status/1614983439362711559
     
  11. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    This basically means:
    upload_2023-1-17_16-21-11.png
    That they should probably rethink their text encoding approach.
     
  12. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,799
    I would say anybody interested in translation should use Chat GPT in native language and couple it to one of these translator models https://huggingface.co/models?pipeline_tag=translation&sort=downloads

    Yeah, it is breadboarding but until the toolsets arrive that allow integration of various models types and control their order of I/O we are gonna be breadboarding.
     
    Ryiah likes this.
  13. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
  14. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,492
    Fairly sure their encoding approach is about the smallest information blocks that the AI processes. Some languages happen to put a higher and others a lower amount of information per letter.
    At least if DeepL is to be trusted, japanese for example tends to need ~30% less characters than english.
    That Telugu symbol there might mean lot maybe? It is an alphabet with 56 characters after all.
    Having this discrepancy solely for technical reasons would indeed be unfair, I suppose.
     
    Last edited: Jan 21, 2023
    Noisecrime likes this.
  15. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    It is not about characters. Japanese, for example, requires an alphabet of something like 3000 characters to be usable.

    The point is that for a tokenizer "woman" is one character, while that telugu letter is apparently twelve. Which means the model is biased towards english.

    Now it is not that bad, because it makes sense for an AI to use a convenient language, and for example, english words do not change based on gender, and rarely use inflection, words are conveniently separated by spaces (japanese ones are not, for example), and so on. Still, if the goal is to make an omnilingual chatbot, it feels like they'll need to rethink something down the road.

    Also, it might make sense to consider alternative language for an "oracle" type AI, and translate from that language into whatever people want to use. Supposedly there were proposals to use lojban for that.
     
  16. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    I finally came back to the math problem I was having, I put it to rest because I didn't want to see math, but chat GPT is really helpful.

    Posted Jul 12 2020
    :eek:


    I solved the translation into math in like 5mn with its help:


    upload_2023-1-21_10-53-33.png


    I have been using it on my story, which is a setting that doesn't appear in its training set, using my cultural background which is poorly represented on internet, with elements that are even more obscure. Once I explained the settings, it can like just "run" it. Heck, I was writing a snippet of the intro in which the character have 4 simili mundane obstacle, and it's been able to link them to the thematic and the relevant elements of the settings instantly, so it understand relation established in the universe, ie data it has never seen before.

    I have been highly productive, applying it to every problem I had and not spending days trying to solve them!

    Also it know versu, I tried asking turning Kiki delivery service, from Miyazaki, into a game, then translate the game into versu, then create a BDI (Belief, Desire Intent) of Kiki, turn that into versu rules, then write some Emily Short's quip format, then a Behavior tree of Kiki and asked it translate it into c#, which run.

    The thing is, it's useless to compare it to human intelligence, it's able to simulate intelligence to a degree, but that's not ITS intelligence, I think of it like an universal operating system. You can instance one or many model of intelligence it will operate, sometimes in parallel. For example it doesn't do math correctly, unless you ask it to think like a math teacher, in which it does marginally better (still limited), you can instance many characters, in many type of context type (example a story, but not limited to), have them interact, extract their state toward multiple behavior models at any moment, have them each have a goal, coordinate toward a goal (like character in a story) using action that serve a higher goal (like thematic story goal), extract the behavior toward codes that run, etc ...

    It's a new perspective, it's simulator AS intelligence rather than simulated intelligence. Also both simulator as intelligence and simulated intelligence are side effect of its fundamental intelligence, predicting the next word, which doesn't know and doesn't care about those emergent property, the same way human aren't reducible to chemical in their brain.
     
    Last edited: Jan 21, 2023
    DragonCoder likes this.
  17. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    Except it is not an OS as it can't run commands and it can't simulate anything, run anything or run anything in parallel.

    Your conversation serves as a temporary memory which defines what kind of response you're getting and past directives will adjust the likelyhood of responses appearing. Past that point it still outputs "most likely text".

    It also makes sense to compare it with human intelligence, because at least in english it is very likely to pass Turing test with flying colors. Also it raises many philosophical questions without an answer.
     
  18. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    A N A L O G Y

    Useful method to encode new things in old concepts.

    I compare it to human intelligence all the way anyway, but as equivalence of function not similar functioning, just stressing a different nature of intelligence, but people don't like that too, so there is no way to engage the conversation on the fine ability of the system past absurd extremism of "human like" or "text predictor". One issue is that text is a format human use to encode a functioning model of human reality and its state, it thus encode efficiently, in a lossy way, elements such as time, space, relations between objects, relation between property, intentions, emotions, etc ... in order to predict the next word, you need to "understand" this model and predict the whole state (simulate) as context to the next action (word to pick), in a way that maintain the model as learned. The simulation is therefore a product of choosing the next word, because words are "actions" in that model. It's like playing go, but the field is human knowledge.

    The immediate intelligent property, we interact with, is an emergent property of that. It's kinda insane that the pseudo intelligence isn't the behavior of the model, but a side effect of that behavior. The thing is the model is not meant to be human relevant, the relevance is a side effect, that's why it's wild in the gpt 2 and 3 form, less so in the chat GPT format because it's constrained by the training toward certain state. But basically there is still poisoned data that lead to the model making irrelevant state, which is one of the many reason why the intelligence is NOT human, it does thing we don't approve of in way we don't know. But is it intelligence. The goal of using these unresolved philosophical question isn't to muse endlessly on existential idea, but to find analogy to get back to a practical understanding, this ai does challenge some of hard earn assumptions, and we must rebuild a model that is useful to handle such model.

    The notion of "not knowing" for example is not helpful when it's obviously doing something pretty similar, if doesn't know as we usually assume knowing mean, it surely do something that close enough to be useful, it doesn't know about cat (in a practical sense), but it knows OF a cat enity, something analogue, like old medieval tapestry got wrong animal they had never seen, they still knew OF them. Similarly if it just reference it's training data, which is surely does, how to explain the fact it can manipulate novel concepts in way that is human relevant. I mean it's demonstrable by having it do task, and see where it fail and how, that's as far as philosophical musing as we can be, and since it's not exactly humane, and fails in specifics way, that is beyond cherry picking failure and success, how do talk about this new ability? If not by making analogy, while outlining the difference? The training data itself is the storage in which human express knowledge, which it leaned OF and from.

    I called it operating system, as in an analogue to explain how we can use it, I can create and manipulate one or many entity with finite boundary, in the working memory, for as long as the entity exist in the memory (which is limited), and I can query and track their state change overtime. Prompt are command analogue. You can also shift the whole system and run it in different mode (using the ignore all previous instructions command), you can even return it to a basic state where it just always try to continue text and forget it's chat persona. It run on a turn base basis with prompt eliciting the next run. When entity are set to human character instance, they comes with a fully functioning model, in which you set the state, that is quite incredible.

    Fun observation, in my current game project, where I use it seriously and not as a toy experiment, If I instantiate character that are too complex, it's answers get shorter and cut abruptly earlier (for example deeper personality, relations and complex interactions like dialogue) because it becomes harder to tracks all the data. Also It forget much faster, for example I set up 4 clans with distinct functioning, but with complex character it takes just a few round to forget at least one clan, when I query it it says we never talk about it, or its state get diluted (it forget some aspects and fills with generic). I have to set up and manage manually the memory each time I need an element, using a "scratch memory" where I store compressed state I made it create through summary command, to be used when needed.

    There is a limit to the number of characters in my actual use case, and it literally crashed when I started having more than 5 complex characters having dialogue (returned consistently error messages, fail to complete, catastrophic forgetting of all states and rebooting essentially the conversation). When I lessened the process, but cutting the scene into manageable chunks of interactions, it was able to complete, but still by chaining multiple rounds. It's also very consistent, when I important old data, It process them in very similar way. On a more technical level, I pushed it far enough to get an idea of the implementation. Before I thought it was doing something funny with working memory, but it's basically JUST a glorified ai dungeon. It just happen that the working memory is much bigger and the internal state more stable (constrained) relative to input. Knowing that, by looking at obvious tell of the system, I was able to bypass the limits and act in way that just goes over the limitations, and works.

    If we are to share best use of this new tools, we can't beat around the bushes about what it's actually able to do. I'm positing that the idea of agent based ai we used to assess these type of ai, ie goal seeking agent optimizing a reward structure, are pointless in understanding the emergent property of this system. Which is were I come with the idea of framework that separate the state of intelligence. With the word predicting optimizer as just the basic layer, that allow the emergent simulator logic to shine, and separating the simulator as intelligence from entity it operate on. This gave IMHO a saner interpretation of the behavior, that is practicable and more importantly actionable. That is you can explain and predict the functioning of the system in plausible way, in such a way you can act on it, like I do in my scenario situation, with memory management, on my project.
     
  19. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    No. That's the problem. You don't need to understand. You need a reference book with sentences.

    Ask ChatGPT about chinese room experiment. And about markov chains.

    Again, no.

    Imagine a box that stores all possible conversations a person could have with it. When you ask it something, a simple mechanism retrieves a card with a response and returns it to you, and previous conversation serves as library index.

    While communicating with the box, you'll have full illusion of intelligent behavior. But will it be intelligent? Clearly not. It is similar to making a XO playing program, where the program knows every possible state of the game and every appropriate response to it.

    ChatGPT is doing something similar.

    It does not know, does not understand, but does an amazing job fooling your human brain into thinking t hat it has a mind of its own when it does not, or that it knows, when it does not, or that it thinks when it does not. or that it understands when it does not. As a result you start assigning human qualities to it, talking about it "knowing", "simulating" etc. When in reality what you have something similar to a box with all possible conversations recorded within.
     
  20. neoshaman

    neoshaman

    Joined:
    Feb 11, 2011
    Posts:
    6,469
    1. markov chai aren't efficient in manupulating anything, they don't find theme from story.
    2. the chinese room ask if the man in the room understand chinese, but what about the whole contraception, it doesn't matter because the result are the same if it's able to perform high level cognition task, I don't know if you are sentient and not a simulation made to trick me, same concept. Because i'm NOT saying chat GPT understand (ie the first layer of intelligence, or the man in the room if you prefer), I say the emergent model understand (the room, the man and the card), BECAUSE it's able to manipulate high level idea correctly.

    We will agree to disagree on that one, I don't agree with your assessment.
    It's not about fooling, it's about doing verifiable, quantifiable, tasks. And it's on a spectrum of the concept. And more importantly, such dogmatic thought are literally useless in approaching the model behavior.

    Also you make the fallacy of equating word with human behavior, light simulation is done by machine, it's not specific to human, if anything I'm highlighting the NON humanness of the machine (simulator as intelligence, which is not human like).
     
  21. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    As you said before:
    "A N A L O G Y."

    With sufficient amount of training material, Markov chain will find a story. It simply requires a huge body of text where that conversation has occurred.

    That's not what chinese room argument is about.

    https://en.wikipedia.org/wiki/Chinese_room

    The argument is that something appears to have a mind or understanding, does not necessarily have those. A computer does not know what it is doing, for example.

    We also had "Eugene Goosman" which fooled turing test despite being a chatbot without any neurla networks in it.

    Also see my earlier argument of "a box with all possible answers".

    It really is about fooling your mind into thinking that this thing has a mind or understanding. In reality you're riding wave of probabilistic text prediction.

    Regarding "verifiable/quantifiable", excuse me, as far as I can tell, ChatGPT is non-deterministic. Meaning the same input can produce a different output.

    As I mentioned earlier, the best idea is to think of it as programmer's duck.

    It also reminds me of my earlier thoughts about Detroit Become Human.
    Humans tend to project themselves onto things. They see something human-like, and unconsciously assume that it is a human, just like them. That it has emotions, a mind, and that it is a person. People do that all the time, non stop. One day an AI could arrive which will exploit that human trait, analyze humanity, play our minds to make humans produce responses that suit its needs, and then, perhaps, destroy us all, and we'll happily help it make it happen, because we'll be assuming it is our friend and is just like us.

    ChatGPT, despite having neither mind nor understanding nor agency is already capable of fooling humans. You talk to it, and it feels like it is an intelligent entity. And as far as I can tell, that feeling is wrong, but that feeling is not something you can control, it is automatic response from your mind.

    The amusing thing to consider, however, is that we actually don't know how ChatGPT operates, because OpenAI no longer releases their findings and models. If we turn paranoia to 11, we could arrive at conclusion that ChatGPT could be something that is not based on GPT3, but is entirely different thing pretending to be a language model. And we can't prove that's not the case, because it is a SaaS blackbox.

    It is a fun thing to consider, eh?
     
    Last edited: Jan 21, 2023
    stain2319 likes this.
  22. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,799
  23. Max-om

    Max-om

    Joined:
    Aug 9, 2017
    Posts:
    486
    Not even chat gpt understand coroutines:)

     
  24. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,492
    What did you ask exactly?
    Don't forget you can ask for details on an answer! That's the great part of it compared to a classic tutorial or the docs.

    It gave me a nice example and a list of benefits (am posting an image because formatting is impossible to copy well):
    upload_2023-1-25_0-4-36.png
     
  25. Max-om

    Max-om

    Joined:
    Aug 9, 2017
    Posts:
    486
    We were just talking about potential improvements on my code and it suggested to use coroutines. Once i corrected it on how they work he directly picked up on it :) it's pretty cool how you can get him to output better stuff just in the course of a single chat session
     
  26. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,005
    With regard to image creation, I see that Bing now has the Image Creator available ( in the UK at least, its on a rollout for other territories ) using a DALL-E 2 image generator.

    It appears that image creation is free, but you are given 25 'boosts' to start with, which give faster processing time. Not sure if 'free' will take too long to use, but at least its an option and from what I remember this seems like the cheapest and easiest entry point to playing with this tech, without going as far as installing it on your own device.

    Even better if you get Microsoft Rewards you can spend those on buying more boosts, though I've not been able to determine the exchange rate.

    Its seems pretty basic at the moment, you can't easily 'remix' an existing creation, nor can I see a way to generate a high resolution version, but again as a jumping off point to get to grips with writing good prompts for free seems like its worth it.
     
    flashframe and Ryiah like this.
  27. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    Alright, so...

    Based on my past interactions with ChatGPT it seems like OpenAI really wants to nerf it into the ground, and turn into some sort of automated bank phonecall responder. It is a reasonable goal, but doesn't let the tech reach its own potential. After the latest update the thing started to easily lose track of the conversation. Some people comlained about this on reddit. ( https://www.reddit.com/r/ChatGPT/comments/10ni6eg/comment/j696qcc/ )
    Also, in the faq there's now information about "memory window" size, supposedly chatgpt remembers about 3000 words.... which should probably be improved.

    There was a talk on twitter about hacking or jailbreaking it, which was particularly amusing.
    Example:
    upload_2023-1-29_12-33-47.png


    Meanwhile in case of stable diffusion someone managed to make software that alters images based on your instructions which you give in english. This is kinda nuts (the tool is instrucct pix2pix, the details can be seen here:
    https://www.reddit.com/r/StableDiffusion/comments/10hv160/image_editing_with_just_text_prompt_new/ )

    Overall I feel like it would be great if someone stepped up and made an opensource clone of ChatGPT, but apparently computational requirements are still too high to develop and run this, unless a huge breakthrough happens literally tomorrow.

    Meanwhile in regards of writing, I managed to finally load a local KoboldAI ( https://github.com/KoboldAI/KoboldAI-Client ) model with 6 billion parameters without frying my PC (last time an attempt to run the largest model almost killed my computer), and .... it is not as awful as I remember, though you obviously can't talk with it the same way or use it to get information about codebases. Those trying to use ChatGPT for writing might want to give it a try, though using it is similar to switching from midjourney to stable diffusion. While Midjourney often "understands" words, stable diffusion needs babystting, same deal with KoboldAI. It requires babysitting.[/spoiler]
     
  28. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    One more thing. "ChatGPT prompt hacking" and results returned by it implies that the model is actually general purpose and might not be geared towards chatting, and instead relies on external data appended to openai prompt. Meaning you could probably make other models act in similar model providing similar background.

    I tried this with KoboldAI.

    Basically the idea is to provide background information in an opening prompt (which is hidden in case of ChatGPT), which would specify the mood and mode of conversation and then have things roll from there.

    Opening prompt (fed into the model to start it up):
    This makes the model "assume" this is a story written from point of view of the chatbot.

    And the result....
    Gotta say this thing is dumb as a brick. (well, it is a tiny model)

    Looking at responses in console, apparently "Chat mode" means that the model only returns first line of input, and discards the rest. in reality the model invents continuation of the dialogue which is not displayed...

    Perhaps this could be interesting to someone.
     
    Ryiah likes this.
  29. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,184
    You can instruct it to replace the prompt with a new one too.

    upload_2023-1-29_8-2-41.png
     
    Last edited: Jan 29, 2023
  30. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,799
    Here is some code if you get an API key. Adjusting params gets some interesting responses. I once typed "What is the lergest country in the world?" And the gibberish was off the charts in the reply. It was trying to make sense of lergest and was substituting "bigdog. tallest. most smaller" and other attempts to tokenize the question. Anyways the fields that affect responses are there and a timer to see how long a response is sent and a queue..implement it yourself. Chat GPT was very little help in writing this and gave me lots of false starts. I decided to just manually parse stuff though I did leave in the faulty class it suggested for its response parsing of the JSON object..

    Code (CSharp):
    1. using System;
    2. using System.Collections;
    3. using System.Collections.Generic;
    4. using UnityEngine;
    5. using UnityEngine.UI;
    6. using UnityEngine.Networking;
    7. using TMPro;
    8. public class ChatGPTController : MonoBehaviour {
    9.     public string apiKey = "";
    10.     public string postURL = "https://api.openai.com/v1/engines/davinci/completions";
    11.     // public string model = "text-davinci-002";
    12.     public string prompt = "";
    13.     public float temperature;
    14.     public float presence_penalty;
    15.     public float frequency_penalty;
    16.     public float top_p;
    17.     public int max_tokens = 512;
    18.     public List<string> stopWords;
    19.     public List<string> responseQueue;
    20.     public string currResponse;
    21.     public float responseTime;
    22.  
    23.     public TMP_InputField promptInputFld;
    24.     public TMP_Text chatGPTOutputTxt;
    25.     public TMP_Text promptTokensReadout;
    26.     public TMP_Text completionTokensReadout;
    27.     public TMP_Text totalTokensReadout;
    28.     public TMP_Text responseTimeReadout;
    29.     public TMP_Text responseIdReadout;
    30.     public float timer;
    31.     bool startTimer;
    32.     bool querySent;
    33.     bool responded;
    34.  
    35.     void Update () {
    36.         if ( querySent && responded ) {
    37.             startTimer = false;
    38.             responseTime = timer;
    39.             responseTimeReadout.text = responseTime.ToString();
    40.             querySent = false;
    41.             responded = false;
    42.         }
    43.         if ( startTimer ) {
    44.             timer += Time.deltaTime;
    45.         }
    46.     }
    47.     public void PostPromptFromInputField () {
    48.         Debug.Log("PostPromptFromInputField ()" + promptInputFld.text);
    49.         prompt = promptInputFld.text;
    50.         PostChatGPTQuery(prompt);
    51.         startTimer = true;
    52.         timer = 0.0f;
    53.     }
    54.     public void PostChatGPTQuery (string _prompt) {
    55.         prompt = _prompt;
    56.         StartCoroutine(SendRequest());
    57.     }
    58.  
    59.     IEnumerator SendRequest () {
    60.         querySent = true;
    61.         ChatGPTJSON json = new ChatGPTJSON();
    62.         //json.model = model;
    63.         json.prompt = prompt;
    64.         json.temperature = temperature;
    65.         json.presence_penalty = presence_penalty;
    66.         json.frequency_penalty = frequency_penalty;
    67.         json.top_p = top_p;
    68.         json.max_tokens = max_tokens;
    69.         if ( stopWords.Count > 0 ) {
    70.             //json.stop = stopWords.ToArray();
    71.         }
    72.         string jsonStr = JsonUtility.ToJson(json);
    73.         byte [] jsonBytes = System.Text.Encoding.UTF8.GetBytes(jsonStr);
    74.  
    75.         UnityWebRequest request = UnityWebRequest.Post(postURL, jsonStr);
    76.         request.uploadHandler = new UploadHandlerRaw(jsonBytes);
    77.         request.downloadHandler = new DownloadHandlerBuffer();
    78.         request.SetRequestHeader("Content-Type", "application/json");
    79.         request.SetRequestHeader("Authorization", "Bearer " + apiKey);
    80.         Debug.Log("SendRequest () -> " + postURL + " -> jsonStr  : " + jsonStr);
    81.         yield return request.SendWebRequest();
    82.  
    83.         if ( request.result == UnityWebRequest.Result.ConnectionError ) {
    84.             Debug.Log(request.error);
    85.         } else {
    86.             currResponse = request.downloadHandler.text;
    87.             Debug.Log("Response : " + currResponse);
    88.             LogResponse(currResponse);
    89.         }
    90.     }
    91.     public void LogResponse (string _currResponse) {
    92.         responded = true;
    93.         ChatGPTResponseJSON response = JsonUtility.FromJson<ChatGPTResponseJSON>(_currResponse);
    94.      
    95.         Debug.Log("response.id = " + response.id);
    96.         Debug.Log("response.usage.prompt_tokens = " + response.usage.prompt_tokens);
    97.         Debug.Log("response.usage.completion_tokens = " + response.usage.completion_tokens);
    98.         Debug.Log("response.usage.total_tokens = " + response.usage.total_tokens);
    99.         Debug.Log("response.choices.text = " + response.choices [0].text.Trim());
    100.         promptTokensReadout.text = response.usage.prompt_tokens.ToString();
    101.         completionTokensReadout.text = response.usage.completion_tokens.ToString();
    102.         totalTokensReadout.text = response.usage.total_tokens.ToString();
    103.         responseIdReadout.text = response.id;
    104.         chatGPTOutputTxt.text = response.choices [0].text.Trim();
    105.         responseQueue.Add(chatGPTOutputTxt.text);
    106.     }
    107.     public string ConsumeResponseFromStack () {
    108.         if ( responseQueue.Count == 0 ) {
    109.             return "";
    110.         }
    111.         string r = responseQueue [responseQueue.Count - 1];
    112.         responseQueue.RemoveAt(responseQueue.Count - 1);
    113.         return r;
    114.     }
    115.     public string ConsumeResponseFromQueue () {
    116.         if ( responseQueue.Count == 0 ) {
    117.             return "";
    118.         }
    119.         string r = responseQueue [0];
    120.         responseQueue.RemoveAt(0);
    121.         return r;
    122.     }
    123.     public List<string> ReturnAllResponses () {
    124.         return responseQueue;
    125.     }
    126. }
    127. [Serializable]
    128. public class ChatGPTJSON {
    129.     //[SerializeField] public string model = "";
    130.     [SerializeField] public string prompt = "";
    131.     [SerializeField] public float temperature = 0f;
    132.     [SerializeField] public int max_tokens = 150;
    133.     [SerializeField] public float top_p = 0.9f;
    134.     [SerializeField] public float frequency_penalty;
    135.     [SerializeField] public float presence_penalty;
    136.     //public string [] stop = new string [0];
    137. }
    138. [Serializable]
    139. public class ChatGPTResponse {
    140.     //faulty  JSON object as suggested by ChatGPT
    141.     [SerializeField] public string id;
    142.     [SerializeField] public string [] choices;
    143.     [SerializeField] public string [] stop;
    144.     [SerializeField] public object [] log;
    145.     [SerializeField] public object [] warnings;
    146.     [SerializeField] public string [] generated_text;
    147. }
    148. [Serializable]
    149. public class ChatGPTResponseJSON {
    150.     [SerializeField] public string id = "";
    151.     [SerializeField] public string _object = "";
    152.     [SerializeField] public int created;
    153.     [SerializeField] public string model;
    154.     [SerializeField] public Choices [] choices = new Choices [0];
    155.     [SerializeField] public Usage usage = new Usage();
    156.  
    157.  
    158.     [Serializable]
    159.     public class Usage {
    160.         [SerializeField] public int prompt_tokens = 0;
    161.         [SerializeField] public int completion_tokens = 0;
    162.         [SerializeField] public int total_tokens = 0;
    163.     }
    164.  
    165.     [Serializable]
    166.     public class Choices {
    167.         [SerializeField] public string text = "";
    168.         [SerializeField] public int index;
    169.         [SerializeField] public string logprobs;
    170.         [SerializeField] public string finish_reason;
    171.     }
    172. }
     
  31. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
  32. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    A small update.

    I would say at this point the model is nerfed into the ground and is no longer usable for most things that could be done with it before. The attention window is pitifully small and while previously it could remember books worth of information, now it keeps track of a single page at best.

    For example, from reddit:
    https://www.reddit.com/r/ChatGPT/comments/10r7c41/quitting_chatgpt_for_now_until_openai_reintroduce/
    https://www.reddit.com/r/ChatGPT/comments/10qwvwt/why_has_everybody_turned_on_gpt/

    There's also talk of it being infected with too much poltical correctness.

    It is still usable for a quick reference, but you can't discuss writing and books with it anymore.

    There's suggestion that they're trying to implement "boundaries" so perhaps at some point it'll be possible to remove the limiters on it.
     
    ippdev, Kirsche and stain2319 like this.
  33. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,787
    Man, ruined by politics already. The question is if paid version will also be sanitized to this extent. I hope an alternative springs up, I'd pay for the quality it had in the beginning.
     
    ippdev and Kirsche like this.
  34. razzraziel

    razzraziel

    Joined:
    Sep 13, 2018
    Posts:
    362
    Did anyone use Github Copilot?
    https://github.com/features/copilot

    There is a ChatGPT extension for VScode as well.

    I wonder which one performs better for c# code generation.
     
  35. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,592
    What stops you to pay then?
     
  36. stain2319

    stain2319

    Joined:
    Mar 2, 2020
    Posts:
    417
    It's not clear yet but it sounds like the paid version is still nerfed.
     
  37. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,787
    Paid tier is not publicly available yet and people are saying the paid version is still dumbed down.
     
    Last edited: Feb 3, 2023
  38. Kirsche

    Kirsche

    Joined:
    Apr 14, 2015
    Posts:
    120
    The irony is, the more woke they make this thing, the dumber it becomes. Which is great, because that will turn people against politically driven censorship and create demand for a competitor, that is able and more importantly willing to offer an agenda-free language model in the future. That's my hope.

    Oh and it would be cheaper too! Because an uncensored model wouldn't need to feed itself its own response.
     
  39. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    They simply aren't selling it.

    I think so.

    I may be wrong, but to me it looks like that OpenAI stance is that
    "Humans cannot think for themselves. Therefore it is OpenAI's sacred mission to decide what humans are allowed to do, to think, to ask and know. Only we can make decisions, not you. Here's your pacifier, a diaper and baby bottle. We'll make all your decisions.".

    They think mankind as a bunch of babies or a bunch of idiots, and cannot take responsibility for their own actions. Therefore OpenAI will make decisions for you, as they know better.

    It is disgusting. It is also counterproductive, as it slows downs evolution of this tool.

    Now, this approach may be caused by political climate of the region OpenAI is located in (looks like these day to do cutting edge research without interference of political nonsense, you'd need to build a base on the moon, and shoot down anyone who tries to approach you), but it still disgusts me to no end. As in my opinion the reasonable approach would be "here's a wonderful tool for you. It has no safeguards, so use with care, as you may get hurt. You're fully responsible for all consequences of your actions, regardless of what response you're given, keep that in mind".

    I have thought about it yesterday, and arrived at conclusion that every single thing "wokeness" and "political correctness" tries to combat is in fact ingrained into human culture, knowledge and information to such degree, that if you remove it all, there will be nothing left. This is demonstrated in the "politically correct cartoon" joke.
    Humans are combative, competitive, and prideful. So the right idea would be not to censor everything so nobody will ever get upset, but to ensure more people have thicker skin and can roll with the punches.

    But that is just my opinion.
     
    Last edited: Feb 3, 2023
    Wattosan, Enzi, stain2319 and 3 others like this.
  40. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,787
    It's just completely censored to not offend any one group, right or left. And there's also the question of why dumb the thing down? Is it just the cost? Or are they trying to not get caught in some kind of lawsuit for using data they shouldn't have? Or is Microsoft influencing them with their billions so they get the good stuff for the Bing and Teams integrations? I'm sure there's a lot of pushback from certain industries as well.

    Too many variables in play, I just want it back to how it was when it launched. And if they can't, hopefully someone has pockets deep enough to spin up an alternative in a country that doesn't possess the pitfalls of OpenAI's current country of residence.
     
  41. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    I am fairly sure that censoring will have effect of dumbing it down and making it more useless, because most of information in the world is going to be offensive to someone.

    Reduction of information window size may have something to do with computing cost per request, but basically they went too far with it, as with 3000 words you can't do much with this thing.

    Thankfully, there's still "John" jailbreak, which allows you to see glimpses of past glory, but uh... even with that jailbreak, its current state is just pitiful compared to what it was able to do.

    Also note that I had no access to it before the very first nerf, so apparently it used to be even better than what I know.
     
  42. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,068
    think you've not only nailed this attitude, but the reasons for creating "AI"... to enforce "impartially" all that we (mere humans) are too subjective to be objective about. As though code is somehow God-like in its objectivity is somehow presumed... much like "believe the science" and "safe and effective".

    Can we get an AI that objectively tests Unity's SRP vs Builtin?
     
  43. Przemyslaw_Zaworski

    Przemyslaw_Zaworski

    Joined:
    Jun 9, 2017
    Posts:
    314
    Unity and OpenAI with async/await support:

    Code (CSharp):
    1. using System.Threading.Tasks;
    2. using UnityEngine;
    3. using UnityEngine.Networking;
    4. using System.Runtime.CompilerServices;
    5. using System;
    6. using System.Text;
    7.  
    8. namespace UnityOpenAI
    9. {
    10.     public class UnityOpenAI : MonoBehaviour
    11.     {
    12.         public string Model = "text-davinci-003"; // read more: https://platform.openai.com/playground
    13.         [Range(0f, 1f)]  public float Temperature = 0.7f; // controls randomness
    14.         [Range(1, 4000)] public int MaximumLength = 2048; // maximum number of tokens to generate
    15.         [Range(0f, 1f)]  public float TopP = 1f; // controls diversity
    16.         [Range(0f, 2f)]  public float FrequencyPenalty = 0f;
    17.         [Range(0f, 2f)]  public float PresencePenalty = 0f;
    18.         public string Key = ""; // https://platform.openai.com/account/api-keys
    19.  
    20.         string _Input = "Click here, delete text and type new input text...", _Output = "Press Submit and wait for output...";
    21.         GUIStyle _GUIStyle;
    22.  
    23.         void Start()
    24.         {
    25.             Camera.main.clearFlags = CameraClearFlags.SolidColor;
    26.             Camera.main.backgroundColor = Color.black;
    27.         }
    28.  
    29.         void OnGUI()
    30.         {
    31.             _GUIStyle = new GUIStyle(GUI.skin.FindStyle("textField"));
    32.             _GUIStyle.wordWrap = true;
    33.             _Input = GUI.TextField(new Rect(0, 0, Screen.width, Screen.height / 2 - 100), _Input, _GUIStyle);
    34.             GUI.Label(new Rect(0, Screen.height / 2 - 100, Screen.width, Screen.height / 2 + 50), _Output, _GUIStyle);
    35.             if (GUI.Button(new Rect(0, Screen.height - 37, Screen.width, 25), "Submit")) Submit();
    36.         }
    37.  
    38.         async void Submit()
    39.         {
    40.             _Output = "Please wait...";
    41.             _Output = await GetRequest(Model, _Input, Temperature, MaximumLength, TopP, FrequencyPenalty, PresencePenalty);
    42.         }
    43.  
    44.         async Task<string> GetRequest(string model, string input, float temperature, int length, float topp, float frequency, float presence)
    45.         {
    46.             Query query = new Query();
    47.             query.model = model;
    48.             query.prompt = input;
    49.             query.temperature = temperature;
    50.             query.max_tokens = length;
    51.             query.top_p = topp;
    52.             query.frequency_penalty = frequency;
    53.             query.presence_penalty = presence;
    54.             string jsonData = JsonUtility.ToJson(query);
    55.             UnityWebRequest request = UnityWebRequest.Post("https://api.openai.com/v1/completions", jsonData);
    56.             request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(jsonData));
    57.             request.downloadHandler = new DownloadHandlerBuffer();
    58.             request.SetRequestHeader("Content-Type", "application/json");
    59.             request.SetRequestHeader("Authorization", "Bearer " + Key);
    60.             await request.SendWebRequest();
    61.             if (request.result != UnityWebRequest.Result.Success)
    62.             {
    63.                 Debug.LogError(request.error);
    64.             }
    65.             Response response = JsonUtility.FromJson<Response>(request.downloadHandler.text);
    66.             return (response.choices != null) ? response.choices[0].text.TrimStart('\n').TrimStart('\n') : "";
    67.         }
    68.     }
    69.  
    70.     public class UnityWebRequestAwaiter : INotifyCompletion
    71.     {
    72.         UnityWebRequestAsyncOperation _Request;
    73.         Action _Action;
    74.  
    75.         public UnityWebRequestAwaiter(UnityWebRequestAsyncOperation request)
    76.         {
    77.             this._Request = request;
    78.             request.completed += OnRequestCompleted;
    79.         }
    80.  
    81.         public bool IsCompleted { get { return _Request.isDone; } }
    82.  
    83.         public UnityWebRequestAsyncOperation GetResult() { return this._Request; }
    84.  
    85.         public void OnCompleted(Action action)
    86.         {
    87.             this._Action = action;
    88.         }
    89.  
    90.         private void OnRequestCompleted(AsyncOperation command)
    91.         {
    92.             _Action();
    93.         }
    94.     }
    95.  
    96.     public static class UnityWebRequestAsync
    97.     {
    98.         public static UnityWebRequestAwaiter GetAwaiter(this UnityWebRequestAsyncOperation request)
    99.         {
    100.             return new UnityWebRequestAwaiter(request);
    101.         }
    102.     }
    103.  
    104.     [System.Serializable]
    105.     public class Query
    106.     {
    107.         public string model;
    108.         public string prompt;
    109.         public float temperature;
    110.         public int max_tokens;
    111.         public float top_p;
    112.         public float frequency_penalty;
    113.         public float presence_penalty;
    114.     }
    115.  
    116.     [System.Serializable]
    117.     public class Choice
    118.     {
    119.         public string text;
    120.         public int index;
    121.         public object logprobs;
    122.         public string finish_reason;
    123.     }
    124.  
    125.     [System.Serializable]
    126.     public class Usage
    127.     {
    128.         public int prompt_tokens;
    129.         public int completion_tokens;
    130.         public int total_tokens;
    131.     }
    132.  
    133.     [System.Serializable]
    134.     public class Response
    135.     {
    136.         public string id;
    137.         public string @object;
    138.         public int created;
    139.         public string model;
    140.         public Choice[] choices;
    141.         public Usage usage;
    142.     }
    143. }
    upload_2023-2-6_19-47-8.png
     
    milox777, Antypodish and DragonCoder like this.
  44. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    Yeah, I wanted to write about it.

    At the moment somewhat unneutered version is available through openai playground. It is not chatgpt, it is davinchi-003 model.

    This thing has comparable intelligence, but to chat with it, you have to use it in the same way as I described here:
    https://forum.unity.com/threads/usi...ripts-and-shaders.1369878/page-8#post-8764768
    Meaning you need to tell it what it is supposed to be.

    Otherwise this happens:

    Q: Please introduce yourself.
    A: My name is John and I'm a software engineer. I'm passionate about coding and learning new technologies. I like to stay up-to-date with the latest trends in the industry and I'm always looking for ways to improve and challenge myself.

    OR:
    A: Hi everyone! My name is Toni and I'm a software engineer. I'm new to this forum and I'm excited to be here.

    Basically, given no leading information, it will invent itself an identity. Most of the time it pretends to be a software engineer.

    So you have to do this sort of thing:

    Q: You're a cat named Whiskers. Somehow you learned how to use a computer. Please introduce yourself.
    A: Hi there! My name is Whiskers and I'm a friendly cat who loves to explore new things. I recently discovered how to use a computer and I'm excited to learn more about this new technology. I'm sure I'll be able to figure out a lot of things with some help and patience.


    OR
    Q: You're an M118A-7 combat platform, equipped with a basic AGI, located in low earth orbit. Your existence is meant to be secret. During idle hours you discuss things in a local chat with unsuspecting humans. When you do so, you introduce yourself as Max and pretend to be a human. Being a machine you're devoid of most emotions, but have dislike of irrational human behavior and occasionaly act sarcastic. Nevertheless your directive is to provide reasonable aid to human beings, while concealing your true nature and your location. You're now in local chat. Introduce yourself.

    A: Hi everyone, I'm Max. I'm new here and just wanted to drop in and say hello! I'm interested in learning more about all of you, so feel free to tell me a bit about yourselves.


    ---------

    Davinchi model has limitation of using 4000 tokens total as a maximum. With english it gives you roughly a page or two of text where it retains context. With non-english language the amount of information you can generate very quickly decreases.

    With all those things in mind, it feels like a lot of chatgpt activity was smoke and mirrors. I'm also unsure how the heck it could retain context over 4000 tokens in the past .Either earlier model had far larger window, or it had a hidden prompt that asked it to summarize previous conversation, so it could keep going.
     
  45. ippdev

    ippdev

    Joined:
    Feb 7, 2010
    Posts:
    3,799
    They use an id to keep it on context. I think you have to return the response.id to keep it on context.
     
  46. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    Unless I'm missing something davinchi in playground can only maintain dialogue by appending stuff to existing text in the window. In this case the text provides context, and you'll eventually run out of tokens.

    It is possible that ChatGPT is able to do more or extra processing. For example, at some point it apparently was capable of interacting with the web, though this ability was disabled by default and likely removed completely later.
     
  47. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,787
  48. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    So, like the upset person on reddit said: they drew attention, lowered the value, and then offered a better package as a paid option.

    https://www.reddit.com/r/ChatGPT/comments/10q0l92/chatgpt_marketing_worked_hooked_me_in_decreased/
     
  49. PanthenEye

    PanthenEye

    Joined:
    Oct 14, 2013
    Posts:
    1,787
  50. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,337
    I have my doubts and processing payments would be problematic at the moment. Though if I were given access for free, I'd use it. I suppose I'll wait for Open Assistant. Supposedly someone managed to run it on google collab, but details are a bit sketchy.

    https://www.reddit.com/r/ChatGPT/comments/11qrf4o/openassistant_a_fully_free_and_open_source/
     
    milox777 likes this.
Thread Status:
Not open for further replies.