
Enough with ChatGPT

Discussion in 'General Discussion' started by Murgilod, Jul 12, 2023.

  1. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,724
    Why would they?
     
  2. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    Why can't AI brush my teeth already?
     
  3. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    8,955
    Because:
    Other than amusement and exploration, there is no money or value in it. Games are extremely difficult to make money from in the best of cases, and it is actually difficult to create a complete (and quality) game, let alone one that people will pay money for or play enough to support via ads. And while AI might be able to generate code that sometimes works, generating a complete game and the rest of the needed elements is a long time off at best. Even then, it wouldn't be a revenue generator; it would just lower the bar. More importantly, why use Unity? The whole point of Unity is for humans to make games. If you wanted AI to make games, you'd just use C++ and some specialized libraries for that.
     
  4. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    8,955
    I would say (a) AND (b), (not and/or). (at least for now)
    I mean, if you know an answer and use ChatGPT to help you write it out, that fundamentally seems fine. But if someone doesn't know an answer and just regurgitates what ChatGPT says, that helps no one and is ultimately more damaging.

    I don't think we want to make any hard and fast "rules" around it just yet. Just bear in mind that the overall point of this forum is to help folks develop games (and use Unity broadly).

    Side note: I have banned maybe a couple of dozen accounts that only responded with ChatGPT replies. Not really sure what the point was for many of them. About 20%(ish) were simply spam; they provided responses containing hidden links. The others, I don't know why they were doing it. Probably just delayed spam.
     
    Antypodish, angrypenguin and Ryiah like this.
  5. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    So it can check the code it generates in response to your questions.

    I thought this was obvious, given how much it makes crap up: this would be the way to get the AI to "learn" faster than the materials it relies on are created, and it would give it the ability to test and recommend performance and flexibility options for any given task.
     
  6. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,724
    Okay but why would they do this when they have no incentive to do so whatsoever? This also isn't how the LLM learns in the first place.
     
    zombiegorilla likes this.
  7. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,317
    Because there's no value. They want to earn money, not promote Unity. Promoting Unity does not bring them money, because the percentage of people using Unity is very small. Also, whoever needs it can write a plugin.
     
  8. DragonCoder

    DragonCoder

    Joined:
    Jul 3, 2015
    Posts:
    1,453
    I don't blame you for not having heard of this yet; this field moves incredibly fast.

    Not sure we'll see Unity any time soon, but official C# from MS, which compiles fast, should be within the realm of possibility soon.
     
  9. Taro_FFG

    Taro_FFG

    Joined:
    Jun 24, 2022
    Posts:
    57
  10. zombiegorilla

    zombiegorilla

    Moderator

    Joined:
    May 8, 2012
    Posts:
    8,955
    Let's stay on topic. There are other threads discussing usage.
     
  11. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    I think you've missed the point.

    Self-analysis, so that it stops making up crap, is the point I'm on about.

    That is the core point of this thread: that it spouts nonsense, confidently. It's what I call "lying with confidence", because it causes problems for those who can't tell that the "AI" is lying to them.

    Which leads to all the problems with its usage. If it can "teach" or "test" itself to find out whether what it's saying is true, then all of that goes away.
     
  12. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    That's not what "lie" means.

    There is no intent to deceive. It literally cannot correct itself. How could it? All it can do is parrot. It cannot go into the world, conduct research, or analyze data. All it can know comes from observing patterns in language.
     
    angrypenguin and stain2319 like this.
  13. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    The intent of a machine isn't important. It's a machine.

    The result, the subjective reality, and the objective truth: those things matter.

    In that context, which is the one we're in, and we're the people, any confidently proclaimed falsehood is a lie.

    Its creators, also human (somewhat), knew and still know that it does this.

    They knew it would do this, and made it do so confidently.

    They created a lying machine, knowingly.

    They intend for their machine's output to deceive, confidently.

    That's lying, by proxy, with confidence.

    And they had a euphemism ready to go: "hallucinations"

    After all that's happened in the last 3 and a half years, are you still falling for contrived euphemisms placed into the media repeatedly?
     
  14. tsibiski

    tsibiski

    Joined:
    Jul 11, 2016
    Posts:
    569
    That might be a matter of semantics. Plus, we anthropomorphize everything that we can, applying humanity to things that aren't human, or aren't even alive. It's human nature. I think you can just assume that if someone is saying it is lying, they mean that it's giving false information. Easier to just say "lie".
     
  15. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,317
    Except you deal with all of that when getting help from humans too. Humans spread false information with confidence all the time.

    So this is no different, and is business as usual.
     
    Antypodish and Ryiah like this.
  16. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    Not at all. There are reputation consequences when humans lie.

    And not all people lie all the time when they don't know something, as they're going to get caught out, and they have feelings about getting caught in a lie.

    Nor do all humans lie as well as this software does.

    Nor do humans tend to invent perfectly believable API names, to take one example of its hallucinations, and use them with absolute confidence.

    Nor will most humans EVER bother to do this. It takes too much effort for too little gain, at too great a cost (reputation, future cooperation, consideration, etc.).

    And a lot of humans choose not to lie at all, and do all sorts of other things instead when they don't know, are unsure, or are otherwise unable to determine the best path forward: some stay silent and allow others to interject, some loudly make it known that they don't know in the hope that others will step forward with answers, others find people they can ask, etc.

    Your rationalisation is bizarre.
     
  17. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    It's programmed to speculate, with confidence, about which words might come next. The intent is there, on the part of the programmers, to meet even the strict definition of a lie. It just takes joining the intent of the programmers to the observable reality of the machine's output to complete the semantic loop. Given that this is a "language machine" designed to discern questions and inject "answers", I think we can and should include the intent and desires of its programmers, and the incentives that drive them.
     
  18. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,082
    A reputation has no meaning when someone can just hide behind a false name and change it when people get too close to recognizing them. Who really is @Ryiah? If I just deleted my account and made a new one, would you be able to recognize me?

    I still remember the last time I changed my avatar and how people commented that they didn't immediately realize it was me, and I've had a similar experience with @Lurking-Ninja, not realizing that one of their posts was theirs after they removed their avatar.

    ChatGPT is only recognizable if you know what to look for, and if it hasn't been told to use a different pattern in its responses.
     
    Last edited: Jul 19, 2023
  19. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026

    Exactly. We're in a space with others. And you have an attachment to your avatar within this space, and have built it a reputation and an identifiable personality that you're even more attached to than the mere avatar and the presence it immediately engenders.

    All of which takes the energy of a real human, requires both effort and purpose, and is done in a manner that's recognisable somewhere, by some, such that many did recognise you under a new moniker, and you knew they would, because you brought (and used) some of your identifiable personality's traits, communication styles, content considerations, viewpoints, etc.

    None of which nullifies the greater point: very few people are inclined to do this kind of full-time lying with the degree of confidence inherent to ChatGPT (and others; Claude does this just as well, if not better), none of them have infinite energy to do it for all people asking all questions at all times, and none of them are positioned as a voice in direct communication with another person, isolated from the immediate criticisms, contradictions and corrections of others, as we are in here.

    But you know all this. Let the AI loose in here, and see how long before it's banned for being AI.
     
    Last edited: Jul 19, 2023
  20. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,082
    [screenshot of a ChatGPT-generated reply]
     
    neoshaman and DragonCoder like this.
  21. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    You trying to prove my point?

    This isn't even close. Even someone for whom English is a third language could identify the mimic.
     
  22. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,082
    It's easy to say that when you're knowledgeable on a subject. Most people wouldn't be able to tell without having regularly worked with the AI. It's common to see people make a similar mistake when trying to teach programming (or any subject, really) and think that a sub-topic is easy when it only seems that way because they already know how to do it.
     
    VertexRage likes this.
  23. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    This is a wonderful paraphrasing of the OP's original point.
     
  24. BIGTIMEMASTER

    BIGTIMEMASTER

    Joined:
    Jun 1, 2017
    Posts:
    5,181
    Hmm. I'm not convinced Unifikation isn't in fact an AI psyop working double reverse psychology on us.
     
    Unifikation likes this.
  25. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,317
    The same applies to ChatGPT. After a while you learn that 20% of the answers it gives are wildly incorrect, and you adapt to that. Additionally, all forum users are wearing masks. Who am I really, for example?

    I do not understand why so many people are trying to portray a minor everyday inconvenience as an insurmountable critical problem. It is the exact same business as before. All information online is untrustworthy by default. Reputation is not reliable, a user knowledgeable in one field can give incorrect or outright insane information in another, and someone new would not know who is trustworthy. Expert status can be faked, which has happened more than once with Wikipedia. For example: "A woman wrote fake Russian history on Chinese Wikipedia for 10 years"

    You've already been dealing with all of that for most of your internet life. This is not different in any way.

    You can't identify a mimic. ChatGPT is perceived as a person for the first month or two of using it, because the human brain fills knowledge gaps and imagines the missing bits. If someone replaced you with a fine-tuned LLM, it would take a month or two for people to catch on that something is wrong, and there's a VERY good chance people would never notice that your account is now controlled by a digital doppelganger.
     
    Last edited: Jul 19, 2023
    Tanner555 and Ryiah like this.
  26. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,503
    Do you?

    If I realise that someone "lies" to me 20% of the time, then I will actively avoid dealing with them in general, and specifically not ask them anything where the answer is important. That person is untrustworthy; that is a bad reputation, end of story.

    In contrast, perhaps because this is a machine instead of a person, you're trying to convince others to get involved with it or, at least, to see it positively. Treating it as if it has a great reputation even though, by the way, remember the 20%, mate!

    From my perspective, the fact that it's a really cool bit of tech has at the very least introduced a massive bias to its "reputation", if not thrown the concept out of the window completely. "Same applies"? Not in any practical sense I'm seeing.

    Edit: I should clarify that I agree there's a big difference between a mistake and a lie, hence the quotes.
     
  27. Lurking-Ninja

    Lurking-Ninja

    Joined:
    Jan 20, 2015
    Posts:
    9,900
    Sorry, I have to pile on here. Dude, if you had that 20% miss rate (with your usual confidence) when you talk to us on these forums, you would have been on my ignore list a long time ago. This is why ChatGPT actually is on my ignore list for the time being.
     
  28. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    10,977
    Actually, the wildly incorrect answers are not the problem. The problem is that there are other percentages on top of that where it is "just" incorrect, or subtly incorrect, where the answers look plausible but are in fact bullshit. This makes ChatGPT unusable and unreliable. But since a lot of people seem to trust ChatGPT a lot, here:

    [screenshot of a ChatGPT conversation]
     
    Tanner555, bugfinders and spiney199 like this.
  29. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,082
    Yes, and I would actually compare it to learning how to use a search engine like Google. You may not have noticed, but it no longer shows page numbers on its searches. Google tells you how many results it found and how quickly it found them, but after the first ten or so the results rapidly start losing value.

    With ChatGPT you have to keep in mind that once you reach a certain number of tokens, the value of the responses rapidly decreases and a new thread is required, just as a new search is often required to get better results from Google.
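
    A rough rule of thumb I've seen suggested for OpenAI's tokenizers (an approximation, not an exact count) is about four characters of English text per token, which is enough to guess when a thread is getting close to the limit:

        // Crude token estimate: roughly 4 characters per token for English
        // text. Only a heuristic; the real tokenizer is the authority.
        static class TokenGuess
        {
            public static int Estimate(string conversation) => conversation.Length / 4;
        }

        // e.g. against the original ChatGPT's 4,096-token context window:
        // if (TokenGuess.Estimate(thread) > 3500) { /* start a new thread */ }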

    There are other things too, like learning how to write good prompts. In my experience, the people who complain the most about getting bad results have been giving it less-than-ideal, if not outright bad, prompts. With Google, if you want better results, there are ways to narrow them down (e.g. "site:forum.unity.com").
     
    Last edited: Jul 19, 2023
    Antypodish likes this.
  30. Antypodish

    Antypodish

    Joined:
    Apr 29, 2014
    Posts:
    10,556
    Term "lie" is misused here and only been used, to strongly discredit the tool, which been used successfully by many in various fields. Surely it may not fit to everyone. But should not be marked as USELESS by default.
    Also using such "lie" term in the context of search tool, it shows lack of understanding the tool that is in subject.
    Unless tool is engineered to mislead (see social media).
    Discussion float around ChatGPT, like it is only tool in a world, which gives inaccuracies in returned results.

    And yet you are using search engines daily, which are AI powered for long time. And returns "possible" answers, not "definite" one.
    The accuracy of results are as good, as your engineered prompt, with error probability (that may vary hugely).
    And still can yield many misleading, useless links among some good information, plus lot of sponsored content. Which often are lies in a face.
    Wondering how much time daily spending navigating and scrolling through links, just to get into desired information?
    By the quote, you should not be using search engines at all.

    How that differs, than using other searching methods?
    Or asking someone and getting answer with "maybe", "possible", "probably" like words inside. Is that lie? Guess? Inaccuracy? It gives you subject to think? Makes you less confident? Or it is waste of time? And yet would you discredit person, just because doesn't know?
     
    DragonCoder likes this.
  31. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,082
    That's not to say it can't lie.

    https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt-1850227471
     
    Antypodish likes this.
  32. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,317
    For me these forums have a very high "no response" or "no information" rate. Way over 50% as I perceive it, probably in the ballpark of 75 or 80%.

    That means there's no reason for me to ask questions, because if I couldn't find an answer myself, I will not receive one no matter where I go.

    However, if I ask ChatGPT, I'm guaranteed to get a response, and I'll receive it immediately. Even if the response does not solve my issue, it is likely to contain a clue that leads me further towards a solution.

    And that makes systems like ChatGPT attractive. Immediate information with a 20% chance of falsehood, in response to ANY query (no matter how complex or insane), is very valuable, because for most questions I would want to ask I usually get nothing in return through traditional means.

    The choice is not between "a correct but slow response" and "a possibly incorrect response". The choice is between "a possibly incorrect response" and "no response at all". Add to that that ChatGPT is almost always available, 24 hours a day, barring the occasional outage. Which is not the case with humans.
     
  33. Unifikation

    Unifikation

    Joined:
    Jan 4, 2023
    Posts:
    1,026
    Where are you guys getting a 20% false rate?

    The best I've seen ChatGPT do, early on, was 50% right.

    Nowadays it's 20% right, on things that have an answer.

    It will never get to "that can't be done" for things that can't be done.
     
  34. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,503
    That's all fair enough. I agree that as users of a tool we can functionally adapt, but that wasn't the question. The question was about whether reputational consequences apply.
    But they're not giving me an answer, nor adopting a conversational style. They're giving me a list of resources relevant to my query, and while they often have a go at drilling down for me (often successfully, too), my expectation when using one is that I'll need to evaluate and decide for myself. And to answer your later question, that typically doesn't take me long, in part because search engines also surface some data to help with that.

    It's a collation service primarily, with some extras on top.

    The chat bot is attempting to provide a single, conversational-style answer. The usage pattern there is that I'm outsourcing the evaluation stage to a non-domain-expert and hoping that the one answer it returns is good. I then have to evaluate it anyway, but I don't know its sources (though I haven't tried asking it; I should), so to do that I'm doing the same search I would have done anyway.

    So, back to the topic: if that's how someone decides to contribute here, then cool. If someone can help me or others out, then it's their business how they do it. But just regurgitating a chat bot response without adding the benefit of human expertise is low value at best (if someone wanted to ask ChatGPT, they probably could have) and potentially harmful at worst (20% chance of propagating junk, etc.). Though, I repeat, I doubt we're going to stop it, any more than we can stop any other type of spam.
     
    Last edited: Jul 20, 2023
  35. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,503
    I just used neg's number, as I've been using the (incorrect) term "lie", because I see no point debating details when it's the broader points which matter.
     
  36. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,317
    In their latest iterations, search engines seem to be switching towards a conversational style, and also deciding which resources are useful for you.

    For example:
    [screenshot of a search engine answering conversationally]
    Here's a more extreme example:

    [screenshot of a search engine answering conversationally]
    All of those represent resources I did not ask for.

    Meanwhile, nobody remembers Google power tools like +/-, intitle:, etc. Google keeps removing those, by the way. On top of that, a search engine is not neutral and can push a specific agenda.

    I think in the near future Google-style search through indexing may end up being phased out completely and replaced with conversational-style queries. Probably through Bard or something.

    The issue with someone copy-pasting ChatGPT answers is that those can be used to farm rep on resources where it matters, because the robot never gets tired and is always available, while the copy-paster himself provides no value. Additionally, there may be more than one copy-paster, and they'll effectively become a sock-puppet troupe for ChatGPT, which will flood the resource with answers originating from a single source.
     
  37. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,724
    20% lines up nicely with my success rate, but the problem isn't just that those responses are wrong, it's that they're often wrong in very confident and plausible-sounding ways, to the point where testing the generated code becomes an annoyance.
     
  38. Lurking-Ninja

    Lurking-Ninja

    Joined:
    Jan 20, 2015
    Posts:
    9,900
    You rarely see me asking questions on the forum as well. That's not the issue here.

    To reiterate the problem:
    - For an advanced user, false answers aren't a big problem, because they usually know the answer. (So playing with ChatGPT is basically a colossal waste of time: if you invested the time you spend correcting its problems and engineering the query into writing the code instead, you would be finished by the time you'd fixed all the screw-ups the AI dumped on you.) Not to mention, any advanced engineer worth their salt will have a well-tested library of code for the common problems, to reduce rewriting plumbing code all the time.
    - For a beginner, ChatGPT is a serious minefield, because it gives false answers with high confidence. They will spend even more time trying things out, and when they finally think they just can't wrap their head around it, they will come to the forums (or the Discord or whatever), present the AI's answers as facts, and ask why they feel so stupid that they can't get the answer working. It's destructive.

    Because of this, I do not recommend using ChatGPT in the realm of software engineering, except as a toy to play around with on weekends while laughing at the "results". Unless it's a guided tour (someone tells the poor newbie that the AI is likely lying to them, and they ask an expert immediately when they can't get an answer working).
     
    Last edited: Jul 20, 2023
  39. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,503
    All this talk of "confidence" tells me that people are being swayed far too much by the illusion. It's essentially a search tool with fancy presentation which gives you only one result per search.

    Can anyone say whether or not it's able to correctly reference its sources?
     
  40. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    10,977
    It keeps making up names when it doesn’t know the answer. What sources? It just makes S*** up.
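
    To show the pattern (note: the commented-out call below is one I made up to illustrate; it is not a real Unity API, which is exactly the point):

        using UnityEngine;

        public class ScreenshotExample : MonoBehaviour
        {
            void Capture()
            {
                // Real API: this method actually exists in UnityEngine.
                Texture2D shot = ScreenCapture.CaptureScreenshotAsTexture();

                // The kind of thing it invents: a plausible-sounding overload
                // that does not exist (hypothetical, for illustration only).
                //Texture2D shot2 = ScreenCapture.CaptureScreenshotAsTexture(1920, 1080);
            }
        }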

    [screenshot of ChatGPT citing made-up sources]
     
    bugfinders likes this.
  41. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,082
    Mine is close to the estimate provided by @neginfinity, but a significant part of that is because I've largely come to understand what its limitations are and am mostly using it for trivial tasks. For example, if I have an error in a script and it doesn't stand out within the first 30 seconds, I'll just toss it at GPT-4 and ask it where it thinks the error is.

    Other little tasks that would take me longer to accomplish on my own include generating LINQ and regex queries.
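
    To give a concrete idea of the kind of snippet I mean, here's one I wrote by hand to illustrate (not actual GPT-4 output): asking for "a regex for Unity version strings plus a LINQ query for the newest one" comes back in roughly this shape:

        using System;
        using System.Linq;
        using System.Text.RegularExpressions;

        class VersionFilter
        {
            static void Main()
            {
                string[] lines = { "2021.3.4f1", "not a version", "2022.1.0b5", "2019.4.40f1" };

                // Match Unity-style version strings such as "2021.3.4f1".
                var unityVersion = new Regex(@"^\d{4}\.\d+\.\d+[abf]\d+$");

                // LINQ: keep the valid versions and take the newest. Ordinal
                // ordering is a simplification that works here because the
                // year comes first.
                string newest = lines
                    .Where(l => unityVersion.IsMatch(l))
                    .OrderBy(l => l, StringComparer.Ordinal)
                    .Last();

                Console.WriteLine(newest); // prints "2022.1.0b5"
            }
        }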

    [screenshot of a GPT-4 response]
     
    Last edited: Jul 20, 2023
    AcidArrow and Rewaken like this.
  42. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,317
    I feel like you guys are greatly overthinking it. I use ChatGPT very frequently and know what it is capable of, and all the talk about "it is not an expert", "but what if I knew someone who is wrong 20% of the time", is all off the mark.

    It is a tool. It is a very moody, very strange tool which has to be used in a specific way. It is not an expert, it is not a forum user, and it is not a person.

    When you start using it, it is perceived as a person and triggers an empathetic response. The illusion is strong and complete. It takes time to start recognizing it as a robot. It takes even more time to start seeing its typical speech patterns; then you develop banner blindness for the typical phrases it uses: "I'm here to help!", "Certainly, I'm happy to talk about <subject>!". Then you can use it efficiently.

    Basically, the complaints I hear feel to me like you should use it more until it clicks for you how to use it. Expecting it to solve problems, treating it as a human, and comparing it to an expert is not using it right.

    It can't reliably reference sources out of the box. It knows things, but it doesn't know why it knows them or where it learned them from. Some pieces of knowledge come with reliably embedded sources, but in many cases that's not a thing.

    Extensions are being developed for LLMs that do act as a database. Those are usually a "talking documents" kind of thing: you dump files into it, and it can discuss their content. They can also point out where the pieces of information came from. h2o is developing those, plus there are a couple of Python modules for that.
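
    The core of those is simple enough to sketch. Roughly like this (a toy sketch; EmbedText is a hypothetical stand-in for whatever embedding model you call, not a real library function):

        using System;
        using System.Linq;

        class TalkingDocuments
        {
            // Hypothetical stand-in: in practice this calls an embedding
            // model and returns a vector representing the text's meaning.
            static float[] EmbedText(string text) =>
                throw new NotImplementedException("plug in a real embedding model");

            // Cosine similarity between two embedding vectors.
            static float Cosine(float[] a, float[] b)
            {
                float dot = 0, na = 0, nb = 0;
                for (int i = 0; i < a.Length; i++)
                {
                    dot += a[i] * b[i];
                    na += a[i] * a[i];
                    nb += b[i] * b[i];
                }
                return dot / (MathF.Sqrt(na) * MathF.Sqrt(nb));
            }

            // Find the document chunk closest to the question, so the answer
            // can point back at where the information came from.
            static (string chunk, string source) FindSource(
                (string chunk, string source, float[] embedding)[] index,
                string question)
            {
                float[] q = EmbedText(question);
                var best = index.OrderByDescending(e => Cosine(q, e.embedding)).First();
                return (best.chunk, best.source);
            }
        }

    The retrieved chunk and its source name then get pasted into the prompt, which is why these systems can cite documents while the bare model can't.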
     
    Antypodish and Ryiah like this.
  43. Murgilod

    Murgilod

    Joined:
    Nov 12, 2013
    Posts:
    9,724
    It's just a term being used as shorthand, similar to how there's no intelligence in artificial intelligence. You can ask it a question and it will generate text that says "yes, you can do this, and here's how!" and then proceed to say things that are overtly wrong but written as if they're correct because it is trained to respond to requests.
     
    Unifikation likes this.
  44. angrypenguin

    angrypenguin

    Joined:
    Dec 29, 2011
    Posts:
    15,503
    By some people, sure. In other cases I'm far from convinced. Some people accuse it of lying, which suggests intent. Neg said how convincing the illusion is and then went right ahead and personified it by calling it "moody" while trying to convince me to use it more.

    Conversational user interfaces and the other directions this could take are exciting. It's fun to play with now, I can see how it's already useful as a writing aid, and people seem to be getting analysis mileage out of it. There are no arguments from me about how cool it is.

    But, when evaluating it as a search tool, as it is now, and for the stuff I might discuss here, the fact that it only returns one result with no sourcing makes it redundant. To validate its answers I need to already know them, or to do my own search elsewhere anyway.
     
  45. bugfinders

    bugfinders

    Joined:
    Jul 5, 2018
    Posts:
    711
    I think it should use words like "not sure". So you ask it a question, and if it has conflicting info it could say, "I'm not sure; I've read that <insert something>, but I've also read that <something else contrary to the first>." It's the fact that it acts like it's so confident it's 100% right, and then, if you say no, it throws you some other random answer that's probably even dumber than the last. I'd feel more inclined to hear its opinion if it showed doubt about whether this is the whole answer or whether it's right, but maybe that will help.
     
  46. neginfinity

    neginfinity

    Joined:
    Jan 27, 2013
    Posts:
    13,317
    It can't do that, because it does not think. It also does not have opinions and can't doubt.

    ---

    Basically, after the post by angrypenguin I thought about what would be a good analogy for ChatGPT, and arrived at this one (among other possibilities).

    Imagine a library that contains all possible meaningful text. All things that were ever written, are being written, could've been written, or will be written.

    You write your query on a page, and a robot takes it and uses it as a search index. It then returns a copy of a random page where your query has occurred, along with what happened afterwards.

    ChatGPT is a lite version of that.

    You are not allowed to access the library directly, only to submit queries. The library has no brain and does not think; it is a mechanism. It can, however, create the illusion of personality, because among the infinity of possible texts there will be conversations where a personality emerged.

    In this scenario the "library" has no obligation to give you expert advice or fix your code.

    It is, however, a useful tool, because among the infinity of possible conversations there is information you do not know, but need.
     
  47. GTA_6

    GTA_6

    Joined:
    Jun 19, 2023
    Posts:
    9
    For all those concerned about "lies" (errors), and limiting this opinion to coding:
    1. Learn to code and learn to use the software (Unity).
    2. Learn how to use ChatGPT; know its limitations and the places it can go wrong.
    3. Test the generated code yourself, and fix errors manually or point them out to ChatGPT, because doing so increases the chance of getting more accurate output.
    4. Don't make the conversation too long, because of its limited memory. If it gets too long, just go back to your 2nd or 3rd question and edit it, or copy-paste the responses received from ChatGPT, reframe them as if they were yours, and ask further questions.
    5. Don't ask for the complete code (in the case of something very large and complex)! Instead, ask for small snippets of code individually (see the example at the end of this post for the kind of request I mean). This can also save a lot of time.

    ChatGPT isn't as perfect as many of you expect it to be! But it saves a lot of time if you don't have your own library, or if you are creating one, and it really reduces the time needed to do things completely from scratch by giving you "bare-bone" code to start from!
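
    For example, this is roughly the size of snippet to ask for. This one is my own hand-written illustration of a "bare-bone" starting point, not actual ChatGPT output:

        using UnityEngine;

        // Minimal starting point: move a Rigidbody2D with the default input
        // axes. Speed tuning, animation and collisions are yours to flesh out.
        public class SimpleMover : MonoBehaviour
        {
            public float speed = 5f;
            private Rigidbody2D rb;

            void Awake() => rb = GetComponent<Rigidbody2D>();

            void FixedUpdate()
            {
                float x = Input.GetAxisRaw("Horizontal");
                float y = Input.GetAxisRaw("Vertical");
                rb.velocity = new Vector2(x, y).normalized * speed;
            }
        }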
     
    Last edited: Jul 20, 2023
    Rewaken, Ryiah, Tanner555 and 3 others like this.
  48. bugfinders

    bugfinders

    Joined:
    Jul 5, 2018
    Posts:
    711
    I get that it has no personality, but they are trying to make it act like it has one. At least if it found contradicting answers it could inform you that there seem to be split opinions, and present the two opinions. Much like flat earthers: they are convinced they are right, and that's fine, but they also know there are normies who think it's round. They don't have to agree, but they do admit those people exist.
     
  49. Ryiah

    Ryiah

    Joined:
    Oct 11, 2012
    Posts:
    20,082
    Which it can't do, as an LLM is just a probability engine. It's only capable of choosing the next token based on the probability given the current sequence of tokens. It's not just that it has no personality; it has no capacity for reasoning either. That's why it's often bad at math or produces errors in code. It's not actually calculating or executing anything.
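
    To make "probability engine" concrete, the entire generation loop amounts to this (a toy sketch with made-up numbers; a real model scores a vocabulary of tens of thousands of tokens with a neural network, but the loop is the same idea):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class ToyLanguageModel
        {
            static readonly Random rng = new Random();

            // Stand-in for the neural network: assigns each candidate next
            // token a probability given the context. (Toy numbers, obviously.)
            static Dictionary<string, double> NextTokenProbabilities(string context) =>
                new Dictionary<string, double>
                {
                    { " the", 0.4 }, { " a", 0.3 }, { " code", 0.2 }, { ".", 0.1 },
                };

            // Sample one token from that distribution. Generation is nothing
            // more than: score, sample, append, repeat. Nothing is calculated
            // or verified anywhere in the loop.
            static string SampleToken(Dictionary<string, double> probs)
            {
                double roll = rng.NextDouble(), cumulative = 0;
                foreach (var kv in probs)
                {
                    cumulative += kv.Value;
                    if (roll < cumulative) return kv.Key;
                }
                return probs.Keys.Last();
            }

            static void Main()
            {
                string text = "I wrote";
                for (int i = 0; i < 5; i++)
                    text += SampleToken(NextTokenProbabilities(text));
                Console.WriteLine(text);
            }
        }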
     
    Last edited: Jul 20, 2023
    angrypenguin likes this.
  50. Lurking-Ninja

    Lurking-Ninja

    Joined:
    Jan 20, 2015
    Posts:
    9,900
    Maybe it's an unpopular opinion, but we're humans. In conversation we ARE swayed by the other speaker's perceived emotions. If you think you aren't affected by it, then you're either wrong or a psychopath. I mean that in the medical sense, not as an insult.
    We should evaluate our tools by the way they actually work on and with us, not on an artificial level that no one experiences. Just a thought.

    PS: Oh, and this also means that if you want to take emotions out of the equation, using a conversational AI whose whole purpose is to ease you into the illusion of a real conversation with a supposedly intelligent agent on the other end is a bad idea.
     
    angrypenguin and Unifikation like this.